
FP7-215554 LIREC Deliverable D8.1


9 Appendix A – Publications 

Five Weeks in the Robot House – Exploratory Human-Robot Interaction Trials in a Domestic Setting

Kheng Lee Koay, Dag Sverre Syrdal, Michael L. Walters and Kerstin Dautenhahn Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire

College Lane, Hatfield, Hertfordshire, UK. AL10 9AB {K.L.Koay, D.S.Syrdal, M.L.Walters, K.Dautenhahn}@herts.ac.uk

Abstract

This paper presents five exploratory trials investigating scenarios likely to occur when a personal robot shares a home with a person. The scenarios are: a human and robot working on a collaborative task, a human and robot sharing a physical space in a domestic setting, a robot recording and revealing personal information, a robot interrupting a human in order to serve them, and finally, a robot seeking assistance from a human through various combinations of physical and verbal cues. Findings indicate that participants attribute more blame and less credit to a robot than to themselves when working together on a collaborative task. Safety was the main concern determining participants' comfort when sharing living space with their robot. Findings suggest that the robot should keep its interruptions of the user's activities to a minimum. Participants were happy for the robot to store information that is essential for the robot to improve its functionality. However, their main concerns related to the storing of sensitive information and the security measures needed to safeguard such information.

1. Introduction

Robots are becoming more commonplace in our living environments, performing tasks such as meal delivery in hospitals (e.g. the Helpmate robot developed by Helpmate Robotics Inc.) and entertainment (e.g. the Sony Aibo). As robot technology improves, humans will be living with robots in their own homes. However, it is still an open research question what roles these robots will play [1]. Will they assume the role of a butler, assistant, companion or friend? Clearly, these robots will be placed in social environments where they will be expected to interact with a wide variety of people of different age groups, genders and cultures, in different situations and contexts. According to Fong et al. [4], most researchers assume that social interactions between humans and robots will be very similar to those between humans. In other words, humans will treat robots as social beings. However, exactly how humans and robots interact is the subject of much current research in the field of Human-Robot Interaction (HRI). Robots' tasks range from children's teaching aids [12, 13, 19] to therapeutic uses [21, 22, 23] and companions [2, 3, 5, 25, 27, 28]. A good survey of the field of socially interactive robots can be found in [4], and a more recent survey of the field of HRI in [7].

In the fields of robotics and HRI, researchers increasingly study issues of social robotics, especially in the area of robot etiquette. Various long-term [18] and short-term [10, 11, 29] studies have investigated participants' preferences when interacting with robots at home: for example, how should a robot approach a person, and what social distances should a robot maintain when interacting with humans in various interaction contexts (i.e. where the participants were static, seated or standing)? In more dynamic HRI situations, studies have investigated robots standing in line [20], following a person [6, 16] or passing a person in a corridor [26].

However, other equally interesting interactions that are likely to occur when robots share a home with humans remain to be explored. We present and discuss five different exploratory trials which were conducted as part of a long-term study [18] that focused on some of these issues. Results from these exploratory trials were meant to inform our future work on robot home companions.

2. Methods

2.1. Background

Longitudinal trials [18] were conducted in the University of Hertfordshire Robot House in summer 2006. They examined the impact of habituation effects on participants' preferences for robot approach direction and distance. Participants had eight one-hour interaction sessions with the robot over a period of five weeks. Four different robot appearances were used in the trials (see Fig. 1) to investigate the effects of robot appearance. Compared to laboratory conditions, the Robot House provided a naturalistic and ecologically valid experimental environment for the study of robot home companions.

Within this longitudinal study, two sets of experiments were conducted:

Pre/Post-Trials Set – these main trials aimed to measure participants’ preferences concerning robot-to-human approach distances and directions.

Exploratory Trials Set – five smaller, independent trials were designed to support the main trials by involving participants in a variety of HRI scenarios during their habituation period. The trials were primarily exploratory and creative in nature, with the main aim of keeping participants' interest and motivation high during the study. Participants were asked to complete a series of introductory questionnaires covering demographics, personality, technical and computing background, and previous experience with robots.

Twelve participants, 8 males and 4 females, participated in the trials. Their ages ranged from 21 to 40 and they were staff or students from various University of Hertfordshire departments, including Computer Science, Engineering, Psychology, Astronomy and Business Studies. Importantly, none of them was part of the HRI research team.

They were divided into four groups (of 2 males and 1 female each). Each group was assigned to interact with one of the four different robot appearances throughout the longitudinal trial, and was not exposed to any of the other robot appearances.

The Pre-/Post-Trials Set consisted of three identical trials (Pre-Trial 1, Pre-Trial 2 and Post-Trial), conducted at different times during the five-week trial period in order to track participants' preferred robot-to-human approach distances and directions for Physical-Interaction, Verbal-Interaction and No-Interaction conditions. Findings from this study have been presented in Koay et al. [18]. The main focus of the current paper is on the five different scenarios studied in the Exploratory Trials Set. Findings from these five HRI scenarios are presented in chronological order: 'Hot and Cold Game', 'Robot in the Family', 'Confidential Information', 'Watching TV with the Robot' and 'Helping the Robot'.

2.2. Set of exploratory trials

The five exploratory trials aimed to habituate the participants to the robot and presented situations that were likely to occur when a cognitive robot companion shares a home with a person. The scenarios were decided upon after discussions within our interdisciplinary research team. After each trial, participants completed questionnaires and were debriefed in semi-structured interviews. The robots' behaviours and movements during the trials were implemented using a mixture of autonomous control and human operator remote control (Wizard-of-Oz). Robot movements and speech were largely autonomous, but were cued by a remote human operator in order to ensure reliable and repeatable robot operation (mostly because of the technical problems of using on-board speech recognition on noisy, mobile robots in human-inhabited environments).
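To make this control regime concrete, the following is a minimal sketch of how such a Wizard-of-Oz cueing layer might be structured; the class, cue names and robot methods (approach_user, say, navigate_to_next_room) are hypothetical illustrations, not the software actually used in the trials.

```python
class WizardOfOzController:
    """Hidden operator triggers pre-programmed autonomous behaviours:
    the operator cues *what* happens, the robot executes it autonomously."""

    def __init__(self, robot):
        self.robot = robot
        # Each cue name maps to a self-contained autonomous behaviour.
        self.behaviours = {
            "approach": robot.approach_user,
            "excuse_me": lambda: robot.say("Excuse me"),
            "thank_you": lambda: robot.say("Thank you"),
            "pass_room": robot.navigate_to_next_room,
        }

    def cue(self, name):
        """Called by the remote operator in place of on-board speech recognition."""
        action = self.behaviours.get(name)
        if action is None:
            raise ValueError(f"unknown cue: {name}")
        action()  # the behaviour then runs autonomously to completion
```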

2.2.1. ‘Hot and Cold’ game trial. The trial consisted of three rounds where the participant identified an object to the experimenter in the room and then subsequently directed the robot toward the object. The robot was directed by using the word “hotter” if the robot was moving closer toward the object, or “colder” if the robot was moving further away from the object. The participant was instructed to use the word “finish” when they considered the robot to be close enough to the identified object (see Fig. 2a). They were told that the aim of this trial was to evaluate three different algorithms used by the robot to locate their chosen objects.

However, in the three trial runs, only two strategies were used for controlling the robot. The first strategy followed two basic rules: the robot continued moving forward when the participant said “hotter”, and randomly changed direction when the participant said “colder”. The second strategy was based on a random wandering behaviour with obstacle avoidance abilities. In both strategies, the robot was set to verbally respond (with “ok”) to the participant’s command. The robot that used the first strategy appeared more responsive to the participants’ commands than the robot following the second strategy as the robot changed its direction when the participant gave the command “colder”. The robot that used the second strategy only changed its direction when an obstacle forced it to do so.
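The two strategies can be sketched as follows (a minimal illustration; the motion primitives turn, move_forward, obstacle_ahead and say are hypothetical placeholders, not the robot's actual API):

```python
import random

def strategy_one(robot, command):
    """Reactive strategy: keep going on 'hotter', turn randomly on 'colder'."""
    robot.say("ok")                          # verbal acknowledgement used in both strategies
    if command == "colder":
        robot.turn(random.uniform(90, 270))  # pick a new heading at random
    robot.move_forward()

def strategy_two(robot, command):
    """Random wander: the command is ignored; only obstacles change direction."""
    robot.say("ok")
    if robot.obstacle_ahead():
        robot.turn(random.uniform(90, 270))
    robot.move_forward()
```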

Figure 1. Four different robot appearances used in the trials, from left to right: the short mechanoid, short humanoid, tall mechanoid, and tall humanoid. Note, the term 'mechanoid' refers to a mechanical-looking robot.

During the trial, the robot used the first strategy in both the first and the third rounds, while the second strategy was used only in the second round. Participants were asked to judge the robot's behaviour by answering the following set of questions at the end of the trial:

1. How intelligently did the robot respond to your directions in each trial?
2. How accurate did you feel your directions to the robot were in each trial (i.e. for each of the algorithms)?
3. How satisfied were you with the cooperation between you and the robot as a whole for each trial (i.e. for each of the algorithms)?
4. In which trial did the robot behave the most intelligently?

2.2.2. 'Robot in the Family' trial. This trial explored issues of space negotiation in a (simulated) family setting and involved a generic task which differed from that used in our previous space negotiation study [15]. It aimed to investigate how comfortable participants felt about a robot moving within their living space, and explored the differences between sharing a living space with a robot as opposed to a human. The trial scenario involved the participant and the experimenter moving around the living room discussing its design and how to redecorate it, while the robot repeatedly passed through the living room between the kitchen and the hallway (see Fig. 2b). The human operator controlled the robot by navigating around the humans using available free space. If the participant or experimenter obstructed the robot (i.e. no free space could be found for the robot to navigate through), the robot would stop, say "excuse me", and then say "thank you" after the participant or experimenter had moved aside (a simple sketch of this behaviour follows the questions below). A post-trial questionnaire asked the following questions:

1. Would you say the robot made you uncomfortable when moving around the room?
2. Did you find the robot annoying when moving around the room?
3. Do you think you would have felt more or less comfortable if a human had been moving around the room in a similar manner?
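As a minimal sketch, the blocked-path behaviour described above can be viewed as a simple plan-or-wait loop; the navigation and speech calls are hypothetical placeholders rather than the trial software's actual interface:

```python
import time

def pass_through_room(robot):
    """Sketch of the 'excuse me / thank you' space-negotiation behaviour."""
    while not robot.reached_goal():
        path = robot.find_free_path()      # plan around people if possible
        if path is not None:
            robot.follow(path)
        else:
            robot.stop()
            robot.say("Excuse me")
            while robot.find_free_path() is None:   # wait for the person to step aside
                time.sleep(0.5)
            robot.say("Thank you")
```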

2.2.3. 'Confidential Information Disclosure' trial. This trial aimed to explore issues regarding a personal robot recording and revealing personal, and possibly confidential, information. The participant and the experimenter had a casual (semi-scripted) conversation about personal habits, as might occur between new acquaintances. The experimenter guided the conversation and asked the participants' opinions on various topics, ranging from sleeping and housekeeping habits to attitudes towards healthy eating and smoking. As part of this discussion, the experimenter disclosed personal information to the participants. At predetermined points in the conversation, the robot would interrupt the experimenter in order to volunteer factually correct, yet socially inappropriate, information about the experimenter to the participant (e.g. revealing inconsistencies in the experimenter's claims) (see Fig. 2c). Note, the robot was not trying to be malicious but provided truthful information 'to the best of its knowledge'. For example:

Experimenter: "I always try to go to sleep early so I can get a good night's sleep."
Robot: "Dag (the experimenter) did not go to bed until 3 o'clock last night."

To obtain a wide range of responses from this exploratory study, participants were asked several open-ended questions as well as questions intended to stimulate thoughts on robots and personal information. They were asked:

1. How do you feel about the robot storing information about you? (Likert scale 1-5)
2. How should a robot treat information it stores about you and your household?
3. Which categories of information are you most comfortable and least comfortable about a robot storing, and why? (Sleeping Habits and Daily Routine, Eating and Drinking Habits, Housekeeping Style, Social Spaces and Dialogue Style Preferences, Friends and Acquaintances, Health Issues, Personality and Other Psychological Characteristics, Demographic Information.)

Detailed results of this particular trial have been published in Syrdal et al. [24]; thus we provide only a brief overview and discussion in this paper.

2.2.4. 'Watching TV with the Robot' trial. This trial explored participants' opinions of, and tolerance to, interruptions by the robot. The scenario involved the robot repeatedly interrupting participants (at intervals of about 4 minutes) in order to serve them drinks and food while they were enjoying watching a TV programme (see Fig. 2d). At each approach, the robot offered one choice (e.g. a regular coke). If the participant rejected the first choice, the robot would return four minutes later to offer an alternative (e.g. a diet coke). After offering all the available drinks, the robot returned repeatedly to offer various choices of snacks. The robot's behaviour was deliberately chosen to be 'annoying' to participants. A minimal sketch of this serving schedule is given below.
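Under the assumption of similar hypothetical robot primitives, the serving schedule amounts to stepping through the drinks and then the snacks at a fixed interval, one item per approach:

```python
import time

OFFER_INTERVAL = 4 * 60  # approximately four minutes between approaches

def serve_during_tv(robot, drinks, snacks):
    """Offer each drink in turn, then each snack, one item per approach."""
    for item in list(drinks) + list(snacks):
        robot.approach_user()
        if robot.offer(item):       # hypothetical: returns True if the offer is accepted
            robot.serve(item)
        robot.move_away()
        time.sleep(OFFER_INTERVAL)  # deliberately frequent, 'annoying' interruptions
```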

2.2.5. 'Helping the Robot' trial. This trial aimed to explore how a robot might indicate that it required assistance from the participant. One issue was the effectiveness of the robot using various cues (i.e. physical, verbal and a combination of both) to attract attention and express its intentionality to the participant. A second issue was to understand how participants felt about helping the robot.

The robot tried to get the participant (who was watching TV in the living room) to open the door, using the several hard-coded robot behaviours listed below (see Fig. 2e). These behaviours were presented in a preset order; a subsequent behaviour was only used if the previous one had failed. This allowed us to examine which robot behaviour would cause the participant to interrupt their television watching and help the robot, and why certain robot behaviours succeeded or failed. A sketch of this escalation scheme follows the list below.

The six hard-coded robot behaviours used for attracting participants' attention and expressing the robot's intentionality were:

1. Moving to the door, stopping and facing the door for 20 seconds.
2. Moving backward and then forward toward the door, and facing the door for 20 seconds.
3. Beeping twice while facing the door.
4. Approaching the participant, and staying next to the participant for 5 seconds before moving back toward the door.
5. Approaching the participant, stopping and making a "beep beep" sound while staying next to the participant for 5 seconds before moving back toward the door.
6. Approaching the participant, stopping and using speech to ask for help ("please can you help me open the door") while staying next to the participant before moving back toward the door.
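A minimal sketch of the escalation logic, assuming hypothetical behaviour functions and a door sensor; this illustrates the preset-order scheme, not the code used in the trial:

```python
def seek_help(robot, behaviours, timeout=30):
    """Run increasingly explicit help-seeking cues until the door is opened."""
    for behaviour in behaviours:               # ordered from subtle to explicit
        behaviour(robot)                       # e.g. face_door, beep_twice, ask_verbally
        if robot.wait_for_door_open(timeout):  # hypothetical sensor check
            return behaviour                   # record which cue finally worked
    return None                                # no cue succeeded
```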

3. Findings

3.1. 'Hot and Cold' game trial

The median ratings for all three scales from post-trial questionnaire questions 1, 2 and 3 are presented graphically in Figure 3. Participants differentiated between trial 2 (random) and the other trials, rating performance across all the scales as less satisfactory in the second trial. They rated their own performance, the robot's intelligence and their satisfaction with the overall cooperative effort as significantly worse in the second trial than in the other two trials (Friedman nonparametric tests: χ²=7.2–12.1, p<.05).

There was a significant correlation between cooperation satisfaction and participant accuracy in the first trial (rho=.636, p=.03). In the second trial, robot intelligence was significantly correlated with cooperation satisfaction (rho=.858, p<.01). These results indicate two interesting patterns: 1) in the successful first trial, satisfaction tracked participants' own accuracy, i.e. they assigned less credit to the robot and more to themselves; 2) in the unsatisfactory second trial, satisfaction tracked the robot's perceived intelligence, i.e. they assigned more blame to the robot and less to themselves.

In the third trial, all the different scores were significantly correlated with each other: cooperation satisfaction and robot intelligence (rho=.615, p=.03), cooperation satisfaction and participant accuracy (rho=.728, p=.01), and robot intelligence and participant accuracy (rho=.659, p=.02). The graph (Fig. 3) also indicates that the relationship between the scores differed between the trials.
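Analyses of this kind can be reproduced with standard statistical tools. The following sketch uses SciPy on invented, illustrative ratings (not the trial data) to show how the Friedman tests and Spearman correlations reported above can be computed:

```python
from scipy.stats import friedmanchisquare, spearmanr

# Illustrative 1-5 ratings for 12 participants across the three rounds
# (invented numbers, not the trial data).
satisfaction = [[2, 4, 1], [1, 5, 2], [2, 4, 2], [1, 3, 1],
                [2, 5, 1], [3, 4, 2], [1, 4, 1], [2, 3, 2],
                [1, 5, 2], [2, 4, 1], [1, 4, 2], [2, 5, 1]]
accuracy =     [[1, 3, 1], [2, 4, 2], [1, 4, 1], [2, 3, 2],
                [1, 5, 1], [2, 4, 2], [1, 3, 1], [2, 4, 2],
                [1, 4, 1], [2, 5, 2], [1, 3, 1], [2, 4, 2]]

# Friedman test: do ratings differ across the three rounds?
round1, round2, round3 = zip(*satisfaction)
chi2, p = friedmanchisquare(round1, round2, round3)
print(f"Friedman chi2={chi2:.1f}, p={p:.3f}")

# Spearman correlation between satisfaction and accuracy in round 1.
rho, p = spearmanr([s[0] for s in satisfaction], [a[0] for a in accuracy])
print(f"Spearman rho={rho:.3f}, p={p:.3f}")
```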

A majority of the participants thought that the robot behaved the most intelligently in the third trial (8 out of 12), followed by the first and then the second trial.

Figure 2. (a) A participant directing the robot toward an object in the room during the 'Hot and Cold' game trial. (b) A participant and the experimenter discussing the design of the living room while the robot negotiated space between them during the 'Robot in the Family' trial. (c) The robot interrupting the experimenter and participant to correct inconsistencies in the experimenter's claim during the 'Confidential Information Disclosure' trial. (d) The robot serving a participant a bottle of coke in the 'Watching TV with the Robot' trial. (e) The robot seeking a participant's assistance to open the door by stopping and facing the door in the 'Helping the Robot' trial.


3.2. ‘Robot in the Family’ trial

Figure 4(a) indicates that overall, participants reported being quite comfortable with the robot moving around the room (M=4.17, SD=.937) and did not find it annoying (M=4.25, SD=1.215). The results also suggest that they felt much the same toward the robot as they would towards a human moving around the room (M=3.17, SD=1.030). This might indicate a habituation effect, as found in [18]. Only one participant (participant 1) was uncomfortable; when asked why, they said they had to move all the time for the robot to pass, and so felt uncomfortable and unable to pay attention to the speaker.

Figure 4(b) suggests that most participants did not find the robot moving around the room particularly annoying. One participant was neutral, and one found the robot very annoying, the reason being that they had to focus on the robot. Perhaps not unexpectedly, this was the same participant (participant 1) who felt uncomfortable with the robot moving around the room.

Figure 4(c) suggests that six participants felt they would be no more or less comfortable with a human moving around in a similar manner. Four participants felt more comfortable with the robot than if a person had been walking around the room. Two participants felt they would have been more comfortable with a person rather than the robot, because they would not have had to keep track of, or pay attention to, a person as they did with the robot.

When the participants were asked whether there were any rules they felt the robot should follow when navigating in the same room as them, most thought that the robot should avoid them completely and not interrupt them when they were speaking with someone (i.e. take an alternative route instead of coming near them), unless the robot was performing a task. In that case the robot could move in front of or behind them, but should avoid moving between them and the speaker. One participant suggested that the robot should always move in front of them so they would know where it was heading. The robot should not come too close to them or to any objects in the room (i.e. it should maintain a minimum distance from both objects and people). One participant highlighted that they were pleased with the distance the robot maintained from them during the trial; one pointed out that they should have priority over the robot for space within the house (i.e. the robot should wait for them to move away instead of saying "excuse me"); another indicated that they would prefer the robot to perform its duties at night.

Figure 3. The median scores of participants' perceived robot intelligence (1 = very intelligent, 5 = not at all intelligent), their subjective direction accuracy (1 = very accurate, 5 = very inaccurate) and their cooperation satisfaction (1 = very satisfied, 5 = very dissatisfied) for all the trials.

Figure 4. Frequency plots of participants' questionnaire results regarding (a) how comfortable, and (b) how annoyed they felt when the robot was moving around the room during the trial, and (c) their opinion of how they would have felt if a human had been moving around the room in a similar manner.

3.3. 'Confidential Information' trial

With regard to a robot storing information about them, five participants rated themselves as feeling Uncomfortable or Very Uncomfortable, five as neutral and two as comfortable. Most responses related to the security of the robot and the information it stored (e.g. that it should be accessible only to the user), and to the utility of the data for carrying out the robot's tasks. Based on participants' feedback, the most acceptable categories of information for the robot to store were those related to sleep, daily routine and housekeeping style. The least accepted categories were personality and other psychological characteristics. However, there was disagreement between participants regarding health and medical issues: 4 participants listed it among the most acceptable and 5 among the least acceptable. Based on the reasoning behind participants' feedback, this disagreement was founded on participants' acceptance of the use of the robot for health benefits rather than on the sensitivity of the information.

3.4. 'Watching TV with the Robot' trial

Overall, participants found that the robot's behaviour neither improved nor detracted from their viewing experience (M=2.83, SD=1.11). Four participants were happy with the robot's behaviour and indicated that it actually improved their viewing experience (two indicated a Much Better viewing experience, and two others a Better experience). Four participants indicated that the robot's behaviour neither improved nor detracted from their viewing experience, and four indicated that it made their viewing experience worse.

Overall, most participants (58.3%) thought that the robot was making too much effort to make them feel comfortable (M=4.33, SD=1.15). Only one participant stated that the robot was not making enough effort.

Overall, participants rated the robot as responsive to their needs during the interaction (three participants indicated that the robot was very responsive and six that it was responsive). Two participants rated the robot's responsiveness as neutral, and one indicated that the robot was not responsive during the interaction (M=2.08, SD=0.9).

3.5. 'Helping the Robot' trial

Most participants responded to the robot's request for help to open the living room door only when the robot exhibited behaviour 6 (when the robot approached them and said "Please can you help me open the door?"). Participants were asked why they did not help the robot when it exhibited behaviours 1 to 5. All participants except one responded that they did not know that the robot needed help to open the door. However, during the semi-structured interviews, some participants stated that they had suspected that the robot was in need of assistance, but were unsure either of how they could help the robot or whether it was appropriate for them to do so. The one participant who indicated that the earlier robot behaviours had communicated the robot's need for help said that "my viewing television was more important than the robot's needs".

4. Discussion and conclusions

In this study we have presented findings from our exploration of participants’ responses to robot behaviours in various likely domestic scenarios.

With regard to the human-robot joint task scenario presented in Section 3.1, participants attributed more blame and less credit to the robot compared to themselves when working together on a collaborative task. These results echo Kim and Hinds' [14] finding that people attribute more blame and less credit to a robot that has more autonomy. This allows the user to feel less responsible for the joint task, given that they can blame the robot if something goes wrong. The findings suggest a negative relationship between robot autonomy and human responsibility when both are working together on a collaborative task. Further research is needed on this issue, exploring solutions to reverse or minimize this effect, since it will negatively impact human-robot collaborative scenarios involving future robots with high autonomy.

Other issues we addressed in this study relate to humans sharing their living space with their robot. Previously, we studied participants' experience of comfort during their initial encounter with a robot performing an independent task in the same living space (a simulated living room), as discussed in [15, 17]. The main findings from that study indicated that participants disliked the robot blocking their path, moving behind them, or being on a collision path towards them. Broadly, these can all be classified as safety concerns.

In the current study, based on participants’ feedback, we also found that users’ safety is a main factor that influences participants’ comfort when sharing the same living space with their robot. Therefore, we propose the following recommendations: a robot companion which ‘lives’ with people in their home needs to make its users feel comfortable by exhibiting behaviours that minimize participants’ safety concerns. The robot should maintain a suitable passing distance, direction and velocity when moving in the vicinity of the users. Clearly, the robot should never bump into the user but it should also not be obstructive. The robot should also say “excuse me” (or related expressions) when getting close to users to inform them about its presence, rather than appearing without warning from behind. When interacting with users, the robot should select an appropriate approach direction and maintain a suitable interaction distance based on the user’s preferences, the robot’s own appearance, and the context of the interaction.

Interruption is also a main cause of participants feeling uncomfortable. For example, in the 'Watching TV with the Robot' trial, participants thought that the robot was interrupting them too often to offer refreshments, even though they welcomed the kind gesture of the offer itself. One participant was very happy with the way the robot approached (i.e. from the side) without getting in the way when offering a refreshment. Another was pleased that the robot kept each of its interruptions short. To reduce the number of interruptions, the majority of participants suggested that the robot should ask a more general question, such as "can I offer you any refreshment?" or "do you need anything to eat or drink?", instead of moving back and forth offering a single drink or snack at each approach.

To avoid the robot annoying its users, its behaviour should be transparent in a manner that allows the user to easily recognize the robot’s intentions. This would help overcome situations in which the robot appears to be moving around aimlessly and annoying the user when it is actually performing a task, and situations where the robot moves close to the user, who then has to focus on the robot because they do not know what the robot’s intentions are, or its strategies for avoiding the user.

Participants also identified the way the robot moved between two people engaged in conversation as the main difference between our implemented robot behaviours and common human behaviour. Humans do not usually pass between people who are engaged in conversation; in the cultures from which our participants were drawn, the appropriate behaviour is to walk around a dyad engaged in conversation. This difference, along with the lack of transparency, led a majority of the participants to conclude that they felt they had to pay more attention to the movements of the robot than they would to those of a human moving around them. However, participants also highlighted that there was less need to consider the robot socially: while one cannot easily ignore humans, one can much more easily ignore robots. This might mitigate some of the discomfort felt by potential users of autonomous robots. The issue of communicating intention also arose in the 'Helping the Robot' trial, which highlighted the importance of finding easily recognisable cues and gestures that a robot companion may adopt for effective communication with its users.

As for a personal robot storing personal information, participants saw the need for a robot to possess such functionality in order to learn, to behave in a socially acceptable manner, and to improve its functionality to serve users. However, their main concerns related to the storing of sensitive information and the security measures installed to protect it. Thus, it appears that the robot needs to earn the participants' trust and allow them to feel safe in the knowledge that there are security measures in place to protect the information the robot has collected about them from others.

In light of the findings from these studies, we have discussed and raised various key issues that are important for the acceptability of socially aware personal robots. We have also highlighted different methodologies and approaches applicable to the study of social robotics, and discussed means of investigating these issues in more detail by refining and expanding these exploratory trials. We are aware that there were limitations to these exploratory trials, e.g. the low number of participants and the limited duration of each interaction episode. Nevertheless, this exploratory study provides interesting insights into what future HRI experimental investigations into social robotics and robot home companions should focus on.

5. Acknowledgements

The work described in this paper was conducted within the EU Integrated Projects COGNIRON ("The Cognitive Robot Companion") and LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under FP6-IST and FP7-ICT under contracts FP6-002020 and FP7-215554.

6. References

[1] K. Dautenhahn, "Roles of robots in human society - Implications from research in autism therapy", Robotica, Vol. 21, pp. 443-452, 2003.

[2] K. Dautenhahn, "Robots We Like to Live With?! – A Developmental Perspective on a Personalized, Life-Long Robot Companion", in Proceedings of IEEE RO-MAN (Kurashiki, Okayama, Japan, 2004), pp. 17–22.

[3] K. Dautenhahn, S. Woods, C. Kaouri, M. Walters, K. L. Koay, and I. Werry, "What is a Robot Companion - Friend, Assistant or Butler?", in Proceedings of IEEE/RSJ IROS (Edmonton, Alberta, Canada, 2005), pp. 1488–1493.

[4] T. Fong, I. Nourbakhsh and K. Dautenhahn, "A survey of socially interactive robots", Robotics and Autonomous Systems, 42(3–4), pp. 143–166, 2003.

[5] B. Friedman, P. H. Kahn (Jr.), and J. Hagman, "Hardware Companions?: What Online AIBO Discussion Forums Reveal about the Human-Robotic Relationship", in Proceedings of the International Conference for Human-Computer Interaction (CHI) (Fort Lauderdale, Florida, 2003), Vol. 5, pp. 273–280.

[6] R. Gockley, "Natural Person-Following Behavior for Social Robots", in Proceedings of ACM HRI, pp. 17-24, 2007.

[7] M. A. Goodrich and A. C. Schultz, "Human-Robot Interaction: A Survey", Foundations and Trends in Human-Computer Interaction, Vol. 1, pp. 203-275, 2007.

[8] L. R. Goldberg, "A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models", Personality Psychology in Europe, Vol. 7, pp. 7-28, 1999.

[9] P. J. Hinds, T. L. Roberts, and H. Jones, “Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task”, Human-Computer Interaction, Vol. 19, pp. 151-181, 2004.

[10] H. Hüttenrauch and K. Severinson Eklundh “To help or not to help a service robot: Bystander intervention as a resource in human-robot collaboration”, Interaction Studies, 7 (3), pp. 455-477, 2006.

[11] H. Hüttenrauch, K. Severinson Eklundh, A. Green, and E. A. Topp, "Investigating Spatial Relationships in Human-Robot Interaction", in Proceedings of IEEE/RSJ IROS, pp. 5052-5059, Oct. 2006.

[12] T. Kanda, T. Hirano, D. Eaton and H. Ishiguro, “Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial”, Human-Computer Interaction 19(1–2), pp.61–84, 2004.

[13] T. Kanda and H. Ishiguro, “Communication Robots for Elementary Schools”, in Artificial Intelligence and the Simulation of Behaviour Symposium Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction (AISB), (Hatfield, Hertfordshire, 2005), pp. 54–63.

[14] T. Kim and P. Hinds, "Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction", in Proceedings of IEEE RO-MAN (Hatfield, UK, 2006), pp. 80-85.

[15] K. L. Koay, M. L. Walters and K. Dautenhahn, "Methodological Issues using a Comfort Level Device in Human-Robot Interactions", in Proceedings of IEEE RO-MAN (Nashville, TN, August 2005), pp. 359-364.

[16] K. L. Koay, Z. Zivkovic, B. Krose, K. Dautenhahn, M. L. Walters, N. R. Otero, and A. Alissandrakis, "Methodological Issues of Annotating Vision Sensor Data using Subjects' Own Judgement of Comfort in a Robot Human Following Experiment", in Proceedings of IEEE RO-MAN, pp. 66-73, 2006.

[17] K. L. Koay, K. Dautenhahn, S. N. Woods and M. L. Walters, "Empirical Results from Using a Comfort Level Device in Human-Robot Interaction Studies", in Proceedings of ACM Human-Robot Interaction (HRI) (Salt Lake City, Utah, March 2006), pp. 194-201.

[18] K. L. Koay, D. S. Syrdal, M. L. Walters, and K. Dautenhahn, "Living with Robots: Investigating the Habituation Effect in Participants' Preferences During a Longitudinal Human-Robot Interaction Study", in Proceedings of IEEE RO-MAN, pp. 564-569, 2007.

[19] F. Michaud, J-F. Laplante, H. Larouche, A. Duquette, S. Caron, D. Letourneau and P. Masson, “Autonomous Spherical Mobile Robot for Child-Development Studies”, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 35(4) (2005) 471–480.

[20] Y. Nakauchi and R. Simmons, "A social robot that stands in line," in Proceedings of IEEE/RSJ IROS, Takamatsu, Japan, pp. 357-364, 2000.

[21] B. Robins, K. Dautenhahn, R. te Boekhorst, and A. Billard, “Effects of repeated exposure to a humanoid robot on children with autism”, in Universal Access and Assistive Technology (Cambridge, UK, 2004), pp. 225–236.

[22] T. Saito, T. Shibata, K. Wada, and K. Tanie, “Relationship between interaction with the mental commit robot and change of stress reaction of the elderly”, in Proceedings of IEEE CIRA (Kobe, Japan), pp. 119–124, 2003.

[23] T. Salter, R. te Boekhorst, and K. Dautenhahn, “Detecting and Analysing Children’s Play Styles with Autonomous Mobile Robots: A Case Study Comparing Observational Data with Sensor Readings", in The 8th Conference on Intelligent Autonomous Systems, (Amsterdam, The Netherlands, 2004), pp. 61–70.

[24] D. S. Syrdal, M. L. Walters, N. R. Otero, K. L. Koay, and K. Dautenhahn, "He knows when you are sleeping - Privacy and the Personal Robot", in Technical Report from the AAAI-07 Workshop W06 on Human Implications of Human-Robot Interaction, AAAI Press, Vancouver, British Columbia, Canada, 2007, pp. 28-33.

[25] D. S. Syrdal, K. L. Koay, M. L. Walters and K. Dautenhahn, "A Personalized Robot Companion? - The Role of Individual Differences on Spatial Preferences in HRI Scenarios", in Proceedings of IEEE RO-MAN (Jeju Island, Korea, 2007), pp. 1143-1148.

[26] E. A. Topp and H. I. Christensen, "Tracking for following and passing persons", in Proceedings of IEEE/RSJ IROS, pp. 2321-2327, 2005.

[27] M. L. Walters, K. Dautenhahn, R. te Boekhorst, K. L. Koay, C. Kaouri, S. N. Woods, C. L. Nehaniv, D. Lee, and I. Werry, "The Influence of Subjects' Personality Traits on Predicting Comfortable Human-Robot Approach Distances", in Proceedings of the XXVII Annual Conference of the Cognitive Science Society (COGSCI) (Stresa, Italy, 2005), pp. 29–37.

[28] M. L. Walters, K. Dautenhahn, K. L. Koay, C. Kaouri, R. te Boekhorst, C. L. Nehaniv, I. Werry, and D. Lee, "Close encounters: Spatial distances between people and a robot of mechanistic appearance", in Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Tsukuba, Japan, 2005), pp. 450–455.

[29] M. L. Walters, K. Dautenhahn, S. N. Woods and K. L. Koay, "Robotic Etiquette: Results from User Studies Involving a Fetch and Carry Task", in Proceedings of ACM Human-Robot Interaction (HRI) (Arlington, Virginia, March 8-11, 2007), pp. 317-324.

The boy-robot should bark! – Children’s Impressions of Agent Migration into Diverse Embodiments

Dag Sverre Syrdal, Kheng Lee Koay, Michael L. Walters and Kerstin Dautenhahn1

ABSTRACT

This paper presents results from a series of focused group discussions with a sample of approximately 180 children, during which views and opinions regarding agents migrating between different embodiments were elicited. The discussions attempted to ground the concept of a migrating agent in the children's own experience of interacting with virtual characters in electronic toys and video games. The results suggest a complex interplay between expectations and appearance, and that disentangling the form an agent may take from the underlying structures defining the agent's personality may be problematic for potential users.

1 INTRODUCTION

The aims of the LIREC (LIving with Robots and intEractive Companions) project [1] are to investigate the theoretical aspects of artificial companions and to embody these aspects in robust and innovative technologies, in the form of both virtual agents and physical robots. This will allow the applicability of these theories to be examined in actual social environments, facilitating the creation of artificial companions suitable for long-term interactions.

This endeavour includes studying how a single agent can migrate into different embodiments depending on the tasks it performs or the preferences of its users, as well as aspects of personalisation and adaptation to the particular idiosyncratic needs and preferences of diverse users.

While a major part of the project is to conceptualise, define and implement technological solutions to facilitate this process, it is also important to consider how prospective users may perceive and understand migration. Key questions for the LIREC project are how the unique underlying agent may be recognisable to the user in these different embodiments, as well as express personalised social behaviour when interacting with its users.

The importance of affective and relational cues in creating and maintaining relationships between an agent and its users has been addressed by Bickmore et al. [2-4], who propose the use of such strategies and demonstrate their impact with anthropomorphic virtual conversational agents.

Kasap et al. [5] propose an emotion engine which allows for emotive communication across different embodiments, using a virtual anthropomorphic character and an anthropomorphic robot head. It also allows episodic memories of previous interactions to be stored, facilitating long-term interactions and the formation of a long-term relationship between the agent and its interactants. While not addressing the topic of migration as such, they suggest that affective communication ability, in combination with the ability to retain memories of previous interactions, is key to the development of relationships with artificial agents.

Martin et al. [6] propose an agent capable of migrating into diverse embodiments. They highlight the issue of how the agent is perceived by the user, and suggest that the changing nature of an agent's embodiment may degrade the visual cues necessary for recognition and so impede the sustaining of a continuous relationship. The use of persistent visual or behavioural cues is suggested as a means to counter this impediment. Their findings suggest that visual cues are a powerful tool in aiding the recognisability of an individual agent across diverse embodiments. However, the use of other cues, such as unique behavioural patterns or auditory cues, is highlighted as an issue that remains open.

2 THE PRESENT STUDY

This study explored children's perceptions of agent migration. Previous work in HRI has addressed how robots are perceived by children in terms of capabilities and moral agency [7, 8], and has indicated that children are capable of quite sophisticated reasoning regarding the nature of artificial entities. As such, insights from such a sample would be beneficial for the design and implementation of relational and affective behaviours, as well as for identifying strategies for a migrating artificial agent. While the LIREC project does not focus on children in particular, products such as video games and electronic toys that target this demographic group often incorporate artificial, interactive agents. Therefore, this age group is likely to have more everyday experience of interacting with such agents than an older population sample.

The topics of interest that were addressed were as follows:

1. Would a sample consisting of 8-10 year old children understand the concept of migration?

2. How would the relationship between the agent and its embodiment be considered?

3. How would this relationship be considered in light of the possibility of migration?

3 PROCEDURE

This study was conducted as part of a larger event in which children from local primary schools visited the Adaptive Systems Research Group at the University of Hertfordshire in May 2008. It was conducted as a series of group discussions, in which one of the researchers would lead a discussion through which the children's impressions and ideas related to agent migration were elicited. A total of around 180 children participated in these discussions, in groups of 3 to 6 children. While no attempt was made to balance the sample according to gender, all the students were from mixed-gender schools and the composition of the groups reflected this.

1 Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, AL10 9AB. Email: {d.s.syrdal, k.dautenhahn, k.l.koay, m.l.walters}@herts.ac.uk

The discussion was conducted in a similar manner to a school class, where information was presented in order to facilitate further discussion. While the discussion was divided into stages, the researchers endeavoured to make the topic as responsive to the children's input as possible. The stages of this discussion are reported below, with the particular questions that the children were encouraged to discuss in italics.

1. Introduction to the notion of artificial agents:
   a. Highlighting the difference between characters in interactive media vs. films and books.
   b. Use of characters from video games (see Figure 1) as well as Tamagotchis and other interactive toys.
      i. Are these characters unique? What makes them different from one another?

2. Introducing the concept of migration:
   a. Transfer of saved computer games to other consoles.
   b. Using electronic pets on websites.
      i. Is it the same agent, even if it has moved to a different computer?

3. Introducing the notion of migration into physical embodiment:
   a. Pictures of different robot embodiments: an iCub, a Sony AIBO and a Pioneer robot (see Figures 2-3).
      i. What robot body would you prefer?
      ii. What robot body is the most useful?

4. Introducing the notion of migration from one physical embodiment to another:
   a. Pictures of the iCub and Sony AIBO shown.
      i. How would you know it was the same agent?

The above points were deliberately addressed in a loose manner, attempting to let input from the children drive the discussions during each stage as well as the introduction to the next stage.

The picture representing the notion of a migrating agent personality was a character from EA Games' MySims for the Nintendo Wii [9] (see Fig. 1). This image was chosen for several reasons. First, the Wii console is very popular amongst the demographic to which the presentations were given; as such, the probability of one or more children in each group having experience of this game was quite high. Second, the notion of a MySims character being recognisable as an individual entity, in terms of its appearance, its choice of activities and its unique interaction history with both the user and other in-game characters, is easily conveyed. The researchers then used the sharing of experiences by some of the children in each group as a launch point for a discussion around the notion of agent personality. The initial grounding of the concept of an artificial agent within the sphere of the children's everyday experience was also intended to give the children a greater repertoire for reasoning about the ideas explored in the discussion sessions.

Figure 1 Screenshot from EA game MySims for the Nintendo Wii used to exemplify a virtual character.

The images representing the robots were chosen in order to explore the large design space available to personal robots [10]. They offer three qualitatively different possible embodiments with three correspondingly different sets of affordances. Having an anthropomorphic and a zoomorphic robot as well as a clearly mechanical robot facilitated the exploration of a wide range of scenarios, both in terms of the activities the participants could envisage the robots performing and the nature of the interactions that would take place using these embodiments. Also, the researchers did not give out any information about the capabilities of these robots, so that any discussion regarding their use emerged from the capabilities the children projected onto them.

Following the discussion, the researchers demonstrated a form of migration in which an agent 'personality' migrated between a Pioneer and a Peoplebot (see Figure 5). For the purposes of this demonstration, the personality was described to the children as being the way the agent avoided obstacles. The robot embodiments also used voice utterances as the agent migrated from one embodiment to the other. A minimal sketch of such a handover is given below.
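As an illustration only (the paper does not describe the demonstration software), such a handover can be sketched as transferring a small 'personality' state between two robot controllers, with voice cues marking the move; all names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPersonality:
    """The state that defines the agent across bodies (illustrative only)."""
    avoidance_style: str                      # e.g. "swerve_left" vs "stop_and_wait"
    interaction_history: list = field(default_factory=list)  # memory carried between bodies

def migrate(personality, source_robot, target_robot):
    """Move the agent from one embodiment to the other, with voice cues."""
    source_robot.say("I am leaving this body now")
    source_robot.deactivate()          # source body goes idle
    target_robot.load(personality)     # target body adopts the avoidance style
    target_robot.say("I am here now")  # cue so onlookers can track the agent
```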

During the demonstrations, one of the researchers took notes of the discussions and of interesting reactions to the demonstrations. These notes formed the core of the data analysed, but also served to highlight themes and issues that could be addressed in subsequent discussions with later groups. The discussions were also videotaped, with the consent of the participating schools and the guardians of the children. However, considerable background noise made transcription and analysis of the raw video difficult.

4 RESULTS

The results from the discussions are described below. The focus of this analysis was primarily to explore how the children understood the role of an agent in different embodiments, as well as migration. As such, the analysis presented is primarily descriptive in nature.

Figure 2 Picture of iCub [11] and SONY AIBO [12] shown during presentation

Artificial Agents

The main themes that emerged from discussing the notion of artificial agents related to the children's own experience of video games and other electronic toys. An interesting point here was that most of the groups explicitly made clear divisions between agents in computer games that are directly controlled by the player, and as such are extensions of the player, and agents that displayed different degrees of autonomy.

This was particularly relevant to how the children discussed the uniqueness of a given instance of a video game character. Most groups initially approached this in terms of the appearance of a character. However, probes from the researchers regarding behaviour were often associated with references to personality.

‘Sims like different things, some Sims like to clean while others like playing more’

References to Tamagotchis tended to be linked with the possibility of the death of the agent. This particular feature of these electronic toys was in most groups associated with discussions of the uniqueness of the character emerging from a shared interaction history.

‘You can start a new game with a new one…it is not the same. You haven’t done anything with the new one…it doesn’t know you.’

Migration from one Computer to Another

The notion of a character in a video game being transferred from one computer or games console to another was not problematic for the sample. All the groups could easily volunteer means of doing so, including email transfer of game data and physically moving storage media from one machine to another before connecting them to the new machine. There was a general consensus in all the groups that the character would remain unchanged throughout this process.

Figure 3 Picture of Pioneer shown during presentation [14].

Migration into Physical Embodiment

Discussion centred on the groups' preferences as to which robot body the agent should inhabit. The majority of groups (likely owing to the bias towards game-like characters introduced earlier) focused on the play possibilities of the different embodiments. This led to a preference for the iCub and the Sony AIBO embodiments.

‘The dog-robot looks like it can play.’

‘The human looking one, because he can play games with me.’

Preferences for the AIBO were often justified in terms of it being dog-like, and reflecting an underlying liking of dogs in general, as well as a clear understanding of the play-possibilities with dogs that could be transferred to interactions with the AIBO embodiment.

‘I like the robot dog…no reason, but I really like dogs.’

‘I like the robot dog, because I have a dog and I play with it all the time, and we have fun together.’

‘I like the robot-dog, it could run after balls and it would be fun.’

Likewise, the iCub was credited with human-like capabilities in terms of speech, as well as intelligence.

‘The boy-robot could keep me company…we could talk’

‘I would like it [the iCub] to help me with my homework.’

On the other hand, groups in which the discussions were led towards other tasks started to have more detailed discussions regarding the possibilities and limitations of each embodiment when executing specific tasks. A common task that was discussed by a large portion of these groups was that of fetching and carrying drinks or snacks. These discussions highlighted apparent affordances based on the images of the robot presented, both in terms of possibilities and limitations:

Figure 4 Group Discussion

‘The human one has arms so he can lift things, and walk on his legs to bring you a drink’

‘The pioneer-robot could bring you things and drive around.’

‘The human one would catch fire if it got water on the wires; maybe it shouldn’t use the tap.’

'The one with wheels doesn't have any arms, so it can't pick anything up.'

Interestingly, the AIBO embodiment was only considered suitable for particular tasks that the participants considered appropriate for dogs to do:

'It [the AIBO] could get the newspaper.'

‘The robot-dog could guard my things.’

Also, some of the groups started considering the possibilities of collaboration between the robot embodiments to better perform tasks. The following quote regarding a fetch and carry task serves to illustrate this:

'The human one [the iCub] can't walk very fast…maybe it could put the glass on the one with wheels [the Pioneer] so it could bring it to you?...I have never seen a fast walking robot.'

Interestingly, some children considered the difficulty the agent might have in orienting itself to a new body. This line of reasoning concluded that the humanoid iCub would be the most suitable body for the agent, which was represented by the MySims screenshot, owing to the similarity of form:

'I think the human one, this one has two arms and two legs [points to MySim screenshot], and so does the human robot. It doesn't have to learn anything new, so it is easier for it.'

Migration from one Physical Embodiment to Another

This particular issue raised questions from the children related to how the agent might represent itself across different embodiments. Drawing upon the discussions of the previous sections, the majority of the groups had already considered the notion of a persistent, unique agent with a particular interaction history with its users. Two themes emerged as to how the agent could or should signal its identity to an onlooker.

Figure 5 Migration Demonstration.

The first arose through reasoning which posited an original, ideal embodiment for the agent. This theme tended to incorporate an implicit assumption that the agent had a form in which it spent the majority of its time, and that other embodiments were adopted only on a task-by-task basis. This led to suggestions of the robot adopting habits and behaviours that were clues to this original form in order to inform the user of its identity:

‘If the character was in the dog and then moved to the boy-robot, then maybe the boy-robot should bark?…it would say woof woof!’

‘Maybe the dog robot could walk on two legs?’

‘When it is in the boy robot it would be very good at rolling over.’

‘You would know that it has moved from the human one to the dog, because the dog robot could talk.’

It is interesting to note that participants did not consider such a transfer from the Pioneer embodiment. There were, however, some comments that suggested such a transfer from the iCub and the AIBO to the Pioneer.

The second theme followed a line of reasoning in which the group would see the interaction history and personality of the agent as something independent of the embodiments themselves. Following this argument, the groups would argue for analogous behaviours communicating the same affective state across embodiments:

‘If the character is happy and in the dog it would bark and roll around…if it is in the boy, it could smile and laugh’

‘If the character moves into the one with wheels it could spin around really fast if it is happy to see you.’

5 DISCUSSION

The findings from these focused group discussions suggest that children in this age range are certainly capable of understanding the concept of agent migration into diverse physical embodiments. The use of examples and imagery from the children's everyday experience, through games and electronic toys, was particularly effective in eliciting meaningful responses from the participants.

Many of the responses from the children focused heavily on the play aspect of such companions. This was to be expected, given the initial focus on entertainment applications of artificial agents in the slides used in the presentation; also, for this age range, most electronic products are intended as vehicles for entertainment. It is important to note that the participants did not have difficulty, when prompted, in considering applications other than play for the agents in different embodiments. Moreover, considerations such as engagement across different embodiments remain valid in interactions that are not intended solely for entertainment purposes [2].

This was an exploratory study, and its main aim was to gain a wide range of comments and insights into the relationship between how an agent is perceived and its embodiment. Our aim was also to examine how migration was perceived by the children, rather than to examine specific pre-determined relationships between concepts. Nonetheless, there were some interesting insights from the sample.

One of the most salient themes emerging in the discussions related to how the affordances of an embodiment determined the role of the agent. In some instances this was based on the physical capabilities of the embodiment, as in the discussions of whether to use the iCub or the Pioneer for the fetch-and-carry task. However, the iCub and the AIBO embodiments also carried with them a set of expectations. These were not just related to apparent capabilities, but drew on expectations based on the form of the robot, wherein the robot would take on a social role based on what it appeared to be. Thus, fetching the newspaper, running after balls, and guard duty were considered appropriate tasks for the agent in the AIBO embodiment. Likewise, for the iCub, the ability to talk and to help with tasks of a more intellectual nature was considered appropriate.

This was also reflected in the views of migration. In those discussions that posited an original form, the agent would retain the social and intellectual aspects of the role afforded to it by the original form. As such, identification of the unique agent would here be accomplished using cues that hint at these roles, e.g. barking and rolling over if the migration was from the AIBO embodiment, or speaking if the migration was from the iCub.

A similar issue emerged in the statements of those groups who, when considering the best embodiment for the robot to take, decided upon the humanoid form of the iCub. The agent could then apply its knowledge about its virtual embodiment directly to that of the iCub.

These results can be considered in the light of previous work such as Walters et al. [15], which suggests that the behaviour of a robot should be consistent with the expectations created by its particular appearance. However, these results also suggest that adding migration to the mix might create a more complex and dynamic interplay between embodiment and expectations. The discussions suggested that behaviours could signal an original set of affordances for the agent, distinct from those of its current embodiment.

It should be noted, however, that some of the groups focused on the role of the agent as an entity divorced from its embodiment. These groups considered the various embodiments as avenues for interaction which the agent could use to express itself and act upon the world. However, these groups were in the minority, and as such the data on this line of reasoning are sparser.

6 CONCLUSIONS

This was an exploratory study, and these results were not intended to be directly applicable to the implementation of migration processes of agents within the LIREC project. They do, however, suggest avenues for future investigation.

The most prominent of these is the issue of how the agent should initially present itself. The power of a perceived 'ideal' embodiment for the agent should not be underestimated, both in terms of framing expectations about its (perceived) intellectual capabilities and in terms of its social role. As such, the form in which the agent is initially introduced may impact subsequent perceptions of the agent across different embodiments. This may be a powerful tool for situating the role of the agent within the everyday experience of the user, especially if the social role afforded by its embodiment is congruent with its capabilities. For instance, a robot intended for fetch-and-carry tasks, as suggested by [16], may benefit from being initially presented as having an original dog-like embodiment: dogs are trained to perform such tasks for users, and these affordances would then support the interactions resulting from these tasks. On the other hand, this may prove an obstacle to interactions if the agent is embodied in a form that can communicate through modalities different from those the user perceives in its original form, in which case the user may find these modalities inconsistent with their expectations of the agent.

Therefore, examining the processes by which the user forms a perception of the agent's original or ideal embodiment, as well as possible ways of shaping such a perception, may be useful in future work. Ways of utilising these perceptions are also an interesting avenue of investigation.

ACKNOWLEDGEMENTS

We would like to thank our colleague Cathy Beer for her help in compiling notes for analysis. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under FP7-ICT contract FP7-215554.

REFERENCES

[1] LIREC, http://www.lirec.org, 2008.
[2] T. W. Bickmore, L. Caruso, K. Clough-Gorr, and T. Heeren, "'It's just like you talk to a friend': relational agents for older adults," Interacting With Computers, vol. 17, pp. 711-735, 2005.
[3] T. W. Bickmore and J. Cassell, "Relational Agents: A Model and Implementation of Building User Trust," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 396-403, 2001.
[4] T. W. Bickmore and J. Cassell, "Social Dialogue with Embodied Conversational Agents," in Natural, Intelligent and Effective Interaction with Multimodal Dialogue Systems, J. v. Kuppevelt, L. Dybkjaer, and N. Bernsen, Eds. New York: Kluwer Academic, 2005, pp. 23-54.
[5] Z. Kasap, B. Moussa, P. Chaudhuri, D. Hanson, and N. Magnenat-Thalmann, "From virtual characters to robots - a novel paradigm for long-term emotional human-robot interaction," submitted to the Human-Robot Interaction Conference, 2009.
[6] A. Martin, G. M. P. O'Hare, B. R. Duffy, B. Schön, and J. F. Bradley, "Maintaining the Identity of Dynamically Embodied Agents," The Fifth International Working Conference on Intelligent Virtual Agents (IVA 2005), Kos, Greece, September 2005.
[7] P. H. Kahn, N. G. Freier, T. Kanda, H. Ishiguro, J. H. Ruckert, R. L. Severson, and S. K. Kane, "Design patterns for sociality in human-robot interaction," in Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, The Netherlands: ACM, 2008.
[8] S. Turkle, W. Taggart, C. D. Kidd, and O. Dasté, "Relational artifacts with children and elders: the complexities of cybercompanionship," Connection Science, vol. 18, pp. 347-361, 2006.
[9] S. Zhu, course materials for CMS.600 / CMS.998 Videogame Theory and Analysis, Fall 2007, MIT OpenCourseWare (http://ocw.mit.edu/), Massachusetts Institute of Technology. Downloaded 09-02-2009.
[10] K. Dautenhahn, "Design spaces and niche spaces of believable social robots," Proceedings of the 11th Annual International Workshop on Robot and Human Interactive Communication (RO-MAN 02), Berlin, Germany, pp. 192-197, 2002.
[12] Sony, "Sony AIBO," http://support.sony-europe.com/aibo/, 2009.
[13] RobotCub, http://www.robotcub.org, 2009.
[14] Pioneer, "MobileRobots Inc.," http://www.activrobots.com/ROBOTS/p2dx.html, 2009.
[15] M. L. Walters, D. S. Syrdal, K. Dautenhahn, R. te Boekhorst, and K. L. Koay, "Avoiding the Uncanny Valley – Robot Appearance, Personality and Consistency of Behavior in an Attention-Seeking Home Scenario for a Robot Companion," Journal of Autonomous Robots, 24(2), pp. 159-178, 2008.
[16] H. Huettenrauch and K. Severinson Eklundh, "Fetch-and-carry with CERO: Observations from a long-term user study with a service robot," Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2002), Berlin, Germany, pp. 158-163, 2002.

A User Study on Visualization of Agent Migration between Two Companion Robots¹

Kheng Lee Koay, Dag Sverre Syrdal, Michael L. Walters and Kerstin Dautenhahn

Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire

College Lane, Hatfield, Hertfordshire, AL10 9AB, UK. {K.L.Koay, D.S.Syrdal, M.L.Walters, K.Dautenhahn}@herts.ac.uk

Abstract. In order to provide continuous user assistance in different physical situations and circumstances, it is desirable that an agent can maintain its identity as it migrates between different physical embodiments. A user study was conducted with 21 primary school students, investigating the use of three different visual cues to support the user's belief that they are still interacting with the same agent as it migrates between different robotic embodiments.

Keywords: Agent Migration, Robot Companion, Human-Robot Interaction.

1 Introduction

The limitations of a specific robotic embodiment often constrain its functionality within a particular environment [1], [2]. Changes in the robot's embodiment to achieve a new desired functionality are often impossible (e.g. changing the size of the robot, or changing from a humanoid to a mechanical appearance). As robots become more commonplace they may assume the role of butler, assistant, or companion. They will need to learn about their users and their preferences, habits and living conditions in order to assist them. Rather than 'training' and familiarizing the user with a number of different robots, it may be desirable to use a single 'character' (or 'personality') for the robot and migrate it from one embodiment to another as required. Here we define the 'personality' of a robot as those features that persist and make it unique and recognizable from the owner's perspective (and that can be encapsulated in a software agent).
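To make this notion of an encapsulated, migratable 'personality' concrete, a minimal sketch follows (in Python; the class and field names are hypothetical, as the paper does not prescribe any particular representation). The point is simply that the persistent, user-recognizable state travels unchanged while the body it acts through changes:

    # Minimal sketch of a migratable 'personality'; names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Embodiment:
        platform: str                 # e.g. "PeopleBot", "Pioneer"
        resident: object = None       # the personality currently in this body

        def adopt(self, personality):
            self.resident = personality

    @dataclass
    class AgentPersonality:
        agent_id: str                 # stable identity across bodies
        user_preferences: dict = field(default_factory=dict)
        interaction_history: list = field(default_factory=list)

        def migrate(self, source, destination):
            # The state moves wholesale; exactly one body hosts the agent.
            if source.resident is self:
                source.resident = None
            destination.adopt(self)

    agent = AgentPersonality("companion-01", {"drink": "tea"})
    peoplebot, pioneer = Embodiment("PeopleBot"), Embodiment("Pioneer")
    peoplebot.adopt(agent)
    agent.migrate(peoplebot, pioneer)  # same agent object, new body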

For example, it is not always feasible to transport larger scale robots, so being able to migrate a personalized companion robot [4] personality (agent) to a smaller embodiment (e.g. a handheld device) may allow the agent to travel with the user. With different robot embodiments the agent is less constrained to a particular information space [1] and may provide continuous assistance by accompanying the user [8]. By interacting with the user in different embodiments, the agent may also be able to achieve a stronger sense of contextual and situational awareness [6] of the physical and social environment, and improve the agent's understanding of, and relationship with, its user, contributing to a sense of companionship for the user independent of the agent's specific embodiment.

¹ The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under FP7-ICT contract FP7-215554.

An important aspect of agent migration is the agent's ability to maintain its identity [7], and the user's belief that they are still interacting with the 'same agent' in different embodiments (e.g. as it migrates from a humanoid robot to a zoomorphic robot platform). We believe a first step in achieving user believability is through the visual realization of migration, to reinforce the agent's identity and character across different embodiments. In order to achieve a visual realization of migration, we first need to understand users' mental models of agent migration. If users have no concept of their companions (software 'personalities') being able to move between different physical bodies, then they may, for example, mistake the visual process of migration for a form of communication between two robots (e.g. one robot requesting another robot to perform a certain task).

A user study was conducted with the aim of exploring participants' thoughts and feelings regarding robot-to-robot agent migration. The main objectives were to understand participants' mental models of the migration process and to determine the key components that help companion agents convincingly express the migration process through non-verbal (visual) cues.

2 Methodology

The user study took place in July 2008 with 21 primary school student participants, all boys aged 11 to 14. A Pioneer and a PeopleBot robot (both by MobileRobots Inc., with added custom hardware extensions; see Fig. 1) were equipped with LED panels to display one of three different visual cues indicating the agent migration process:

Moving Bar – the panel light array at the agent's departing platform emptied, while the panel at the agent's arrival platform simultaneously filled up, over a period of 30 seconds.

Moving Face – a smiley moved from the bottom to the top of the departing platform's panel, then jumped to the top of the arrival platform's panel and moved down to the bottom of that panel to signify completion of the migration process, over a period of 20 seconds.

Flashing Lights – the display panel at the agent's departing platform initially flashed slowly, increasing in frequency every 2 seconds. After 9 seconds, the departing platform's panel stopped flashing, while the arrival platform's panel began flashing at the highest rate and decreased in frequency every 2 seconds until it finally stopped, signifying completion of the migration process. This process took 18 seconds in total. Note that these three conditions represented different modalities for visualizing migration of the agent's 'personality': geometrical (Moving Bar), iconic (Moving Face) and temporal (Flashing Lights); see Fig. 1.
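The timings above amount to a simple frequency schedule. As a rough illustration, the following Python sketch generates one plausible flash schedule for the Flashing Lights cue; the paper reports the durations and the 2-second step but not the actual flash rates, so the Hz values below are assumptions chosen only to show the ramp-up/ramp-down shape:

    # Sketch of the Flashing Lights cue described above. Durations and the
    # 2 s step come from the text; the flash rates (Hz) are assumed values.
    def flashing_lights_schedule(base_hz=1.0, step_hz=1.0, step_s=2.0, ramp_s=9.0):
        events = []
        t, hz = 0.0, base_hz
        while t < ramp_s:                      # departing panel speeds up
            events.append((t, "departing", hz))
            t += step_s
            hz += step_hz
        t = ramp_s
        while hz > base_hz:                    # arriving panel slows down
            hz -= step_hz
            events.append((t, "arriving", hz))
            t += step_s
        return events                          # (time_s, panel, flash_hz)

    for event in flashing_lights_schedule():
        print(event)                           # last event falls at ~17-18 s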

A Video-based Human-Robot Interaction (VHRI) methodology was employed (cf. [3, 5]) using three videos, identical except for the agent migration episodes, which showed one of the three different visual cues. The videos were produced in the University of Hertfordshire Robot House, in the living room and kitchen areas. The scenario started with the user asking his companion (software agent), which was residing in a PeopleBot (the tall robot in Fig. 1), to fetch him a cup. The companion realized that the cup was located in a low-profile cupboard, and so decided to migrate to the Pioneer robot (the short robot in Fig. 1) in order to use its more versatile arm. The PeopleBot then entered the "Migration Portal" (a specific physical location) where the Pioneer robot was located. The companion agent (the robot's 'personality') then migrated and took control of the Pioneer, which fetched the cup and placed it on the PeopleBot's tray. The companion then migrated back to the PeopleBot, which handed the cup to the user.
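The scenario thus implies a fixed sequence of steps alternating between the two bodies. Purely as a summary (the step descriptions below paraphrase the scenario; the paper does not describe the underlying migration protocol), the sequence can be written out as follows:

    # Step-by-step summary of the fetch-a-cup scenario; descriptive only.
    SCENARIO = [
        ("PeopleBot",            "receive user request: fetch a cup"),
        ("PeopleBot",            "drive to the Migration Portal"),
        ("PeopleBot -> Pioneer", "play visual cue; agent migrates"),
        ("Pioneer",              "fetch cup from low-profile cupboard with arm"),
        ("Pioneer",              "place cup on the PeopleBot's tray"),
        ("Pioneer -> PeopleBot", "play visual cue; agent migrates back"),
        ("PeopleBot",            "hand the cup to the user"),
    ]
    for body, step in SCENARIO:
        print(f"{body}: {step}")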

Fig. 1. The three different visual cues used in the trial, from left to right: Moving Bar, Moving Face and Flashing Lights.

Procedure. The participants were divided into three groups, each assigned to one of the three visual cue conditions (i.e. Moving Bar, Moving Face or Flashing Lights). The participants' background information, including favorite subjects in school, hobbies, and experience with computer games (e.g. Transformers, The Sims, Legend of Zelda), was collected by means of a questionnaire. To explore participants' mental models of agent migration, we used a short open-ended questionnaire to obtain a wide range of responses after each phase:

Phase 1: Each group was shown the entire scenario with one of the three migration visual cues, without explanation of what was occurring. They were then shown the migration scenes again in isolation (i.e. the agent migrating from the PeopleBot to the Pioneer and back) and asked what they thought had happened during these scenes.

Phase 2: The experimenter then explained to the participants that the companion was actually migrating from one robot body to another. The participants then watched two videos showing the two remaining visual cues used to signal migration. They were then asked to indicate their preferred visual cue and to explain their decision.

3 Results

For Phase 1, a qualitative analysis classified responses into three categories – Communication, Migration and Energy Transfer (see Fig. 2). Explanations suggested that participants mostly interpreted the interaction between the robots (the migration visual cues) as communication. The Moving Face condition was the most effective at conveying the process of migration to an uninitiated audience.

Phase 2 – The sample preferred the Moving Face and Moving Bar signals (see Fig. 3a). However, a significant difference was found relating to the initial cue the groups had been exposed to: the groups initially exposed to the Moving Bar and Flashing Lights signals preferred the Moving Face more than participants who had initially been exposed to the Moving Face signal (see Fig. 3b).

Fig. 4 indicates that participants who preferred the Moving Bar referred to the analogous way in which the bars communicated the process of migration. Participants who preferred the Moving Face referred to its speed, as well as the ability of the face to communicate additional information.

Fig. 2. Classification of participants' responses with regard to the three visual cues into four different categories.


Fig. 3. (a) Participants' feedback on the best visual cue for expressing the migration process; (b) categorization of the participants from Fig. 3a based on their initial exposure.

Fig. 4. Reasoning behind participants' preferred visual cues.

4 Discussion

Our participants (primary school students) were able to form a mental model of artificial personalities migrating between different physical embodiments. Moreover, the visual cues for agent migration highlight the idea of a personality migrating from one embodiment to another, and seem to reinforce the agent's identity and character in the new embodiment after migration. This is an important finding for our research: our focus is to use the agent's personality as the main vehicle of identity, rather than using identity cues from the embodiments as proposed by Martin et al. [7]. We believe that agents should be able to share embodiments, and it may not be possible for a physical robot to change its appearance on the fly when inhabited by different personalities. One might argue that the various inhabitants of the robot could use different colour schemes on the display panel as visual identity cues; however, this does not guarantee that different personalities will not share the same visual cues.

The Moving Bar and Moving Face were rated by the participants as the best visual cues for representing agent migration. The Moving Bar cue may act as a spatial analogy for the process of migration, in which one display empties as the other fills up (illustrating a connection between the two robots). The Moving Face cue shows the agent's identity, symbolized by the face, moving from one robot to another. Additional information could be expressed through facial expression (although this was not implemented in this trial). Feedback on the Moving Face cue also indicated that the duration of the migration process was important for user acceptance of agent migration. More studies need to be conducted to verify and expand these findings. We are currently conducting the same study with adult participants, and the new findings will be published in the near future.

References

1. Duffy, B.R., O'Hare, G.M.P., Martin, A.N., Bradley, J.F., Schön, B.: Agent Chameleons: Agent Minds and Bodies. The 16th International Conference on Computer Animation and Social Agents, Rutgers, 2003, 7-9
2. Imai, M., Ono, T., Etani, T.: Agent migration: communications between a human and robot. IEEE International Conference on Systems, Man, and Cybernetics, 1999, vol. 4, 1044-1048
3. Woods, S.N., Walters, M.L., Koay, K.L., Dautenhahn, K.: Methodological Issues in HRI: A Comparison of Live and Video-Based Methods in Robot to Human Approach Direction Trials. Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication, University of Hertfordshire, UK, 2006, 51-58
4. Dautenhahn, K.: Robots We Like to Live With?! - A Developmental Perspective on a Personalized, Life-Long Robot Companion. Proc. IEEE RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan, September 20-22, 2004, 17-22
5. Walters, M.L., Syrdal, D.S., Dautenhahn, K., te Boekhorst, R., Koay, K.L.: Avoiding the Uncanny Valley – Robot Appearance, Personality and Consistency of Behavior in an Attention-Seeking Home Scenario for a Robot Companion. Journal of Autonomous Robots, 24(2), 2008, 159-178
6. O'Hare, G.M.P., Duffy, B.R., Bradley, J.F., Martin, A.N.: Agent Chameleons: Moving Minds from Robots to Digital Information Spaces. Proceedings of Autonomous Minirobots for Research and Edutainment, 2003, 18-21
7. Martin, A., O'Hare, G.M.P., Duffy, B.R., Schön, B., Bradley, J.F.: Maintaining the Identity of Dynamically Embodied Agents. Proc. of the Fifth International Working Conference on Intelligent Virtual Agents, Kos, Greece, Vol. 3361 of Lecture Notes in Artificial Intelligence, 2005, 454-465
8. Ogawa, K., Ono, T.: ITACO: Constructing an emotional relationship between human and robot. The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008, 35-40


10 Appendix B – Questionnaires  

[WRUT-UH Joint Trial, May 2009] [LIREC WP8 Questionnaire]

What do you think?

We are interested in what different people think about virtual agents, and we would like you to answer some questions about the interactions you had with the agent today. We are interested in your opinion, and there is no right or wrong answer to our questions. If you are not sure about something, just ask us, and we will help you.

To answer our questions, please put a tick in one box, as shown below:

Do you like ice cream or chocolate more?
[ ] Definitely ice cream   [ ] Probably ice cream   [ ] Either-or   [ ] Probably chocolate   [ ] Definitely chocolate

For example, if you like ice cream the most, but sometimes like chocolate, you could answer like this:

Do you like ice cream or chocolate more?
[ ] Definitely ice cream   [x] Probably ice cream   [ ] Either-or   [ ] Probably chocolate   [ ] Definitely chocolate

First – We would like to know some things about you.

1. Are you male or female: _____________________________________
2. How old are you: _____________________________________
3. What is your field of study: _____________________________________

4. Please tell us how interested you are in these things:

                Very interested   Interested   A little interested   Not interested   Definitely not interested
Computers            [ ]             [ ]              [ ]                 [ ]                   [ ]
Robots               [ ]             [ ]              [ ]                 [ ]                   [ ]
Programming          [ ]             [ ]              [ ]                 [ ]                   [ ]

These questions are about the two agents:

5. How did you feel about the way that permission was asked of the second user for the first agent to migrate into the second user’s computer?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

6. Can you think of a better way for it to ask for permission?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

7. Could you tell the difference between the agents in terms of behaviour, sound, appearance etc.?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

8. Can you think of a better way of distinguishing between the agents?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

9. Which Agent did you like better? ALICE □ EVA □ Neither □

Why?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________


These questions are about the agents migrating into the users' computers:

10. How do you feel about the idea that an agent belonging to you may migrate to someone else's computer?
____________________________________________________________________________________________________________________________________

11. How do you feel about the idea that an agent belonging to someone else may migrate into your computer?
____________________________________________________________________________________________________________________________________

12. What are the possible benefits of using agents migrating into different computers in the way you saw in the video?
____________________________________________________________________________________________________________________________________

13. What are the possible drawbacks of this?
____________________________________________________________________________________________________________________________________

These questions are about the agent migrating into the robot:

14. What did you think of the way the agent migrated into the robot?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

15. Was there any way you could tell from the robot's appearance, behaviour or sound that it was the first user's agent that had migrated into the robot (as opposed to any other agent)?
____________________________________________________________________________________________________________________________________

16. Describe what you think the robot and the agent are doing in the last segment of the video, after the message has been delivered.

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

17. What are the possible benefits of using agents migrating into a robot body in the way you saw in the video?
____________________________________________________________________________________________________________________________________

18. What are the possible drawbacks of this?
____________________________________________________________________________________________________________________________________

These questions are about the agent and the robot:

19. What do you think these screens mean?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

20. Do you think the messages from these screens are clear?

_____________________________________________________________________________________________________________________________________________________________________________________________

21. Is there a better way to communicate?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

These questions are about your views in general:

22. Could you see yourself using this type of technology in the future and why?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

23. What sort of tasks would you wish to use agent computer migration for (different from those shown in the video)?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

24. What sort of tasks would you wish to use agent robot migration for (different from those shown in the video)?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

25. How would you customize your agent (make it different from other agents) to make it recognizable across different embodiments?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

26. What would you want your agent to learn about you?

____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________