
Computers and Emotions

Merciadri [email protected]

First written: April 23, 2009. Last update: July 21, 2009.

Abstract. A current trend is to speak about computers and the emotions they could possibly show, feel and recognize. This article gives a basic overview of computers and emotions, and of their future roles.

Keywords: affective computing, human-computer interaction, emotions.

1 Introduction

More and more scientific papers are dedicated to the emotions that computers could possibly show, feel, and recognize. This subject involves non-trivial theories in various sciences, and we want to give here some basic concepts about it, as it is not always well understood, and people confound science fiction with the current situation.

2 Human Emotions

2.1 Our Everyday Concern

Before giving emotions to robots, human emotions have to be studied. Every day, each human feels emotions, which vary in intensity.

Emotions are the externalization of feelings, and dictate the way we live. They are ubiquitous in a human's life and clearly a characteristic of humans. They sometimes propel us, and can also demotivate us, because they are intrinsically linked to feelings. All complex emotions are connected to the two basic emotions of joy and sorrow, which are in turn connected to the physical sensations of pleasure and pain ([8]). All the emotions also have a major impact on rational human thinking and decision-making ([13]).

An example of a propelling emotion is the externalization of the feeling of fear, which often triggers research on various scientific subjects (e.g. in biology: diseases). Henry (1986) described five basic emotions (anger, depression, elation, fear, and serenity) linked to hormonal activities ([8]). These "simplest forms of" emotions have impacts on the body, mainly because of their chemical effects. For example, depression causes the hypothalamus to release corticotrophin-releasing hormones, which influence the pituitary gland to release adrenocorticotropin. In turn, adrenocorticotropin activates the outer adrenal gland, which releases glucocorticoid hormones including cortisone, corticosterone, and hydrocortisone ([8]). It is also known that depression acts as the reverse of elation ([8]).

2.2 Complexity

Humans' emotions are complex processes, even for other humans. For example, it is very difficult for autistic children to understand typical emotions ([15]). It is therefore better if an autistic child is "trained" about these emotions, for example by showing him different faces and gestures linked to various emotions. The main problem is that this requires a lot of patience.

Even non-autistic persons sometimes misunderstand others' emotions, thus misunderstanding their feelings, and this is known to be the cause of some problems in our society, because showing emotions is also a way to communicate. Amongst the causes of this phenomenon is the subjective interpretation of emotions. For example, the same laugh (e.g. coming from a feeling of elation) can be interpreted in different ways: one person can take it as sarcasm, when another will think it is frank ([14]). Another problem is that, sometimes, people cannot show their emotions to others, for various reasons.

Studies have shown that the main emotions are reflected in the voice tone1 and in posture.

1 There even exists a tone scale (also called "emotional tone scale"), whose aim is, briefly, to characterize humans' emotions according to their voice tone.

2.3 Impact and Classifications

Emotions have a clear influence on our behaviour. Behavioral responses to environmental stimuli can be categorized as cognitive, emotional or reflexive ([8]). Several classifications of emotions have been written. For example, Plutchik (1980) identified eight basic emotions, while Henry (1986, already cited) described five basic emotions (anger, depression, elation, fear, and serenity) linked to hormonal activities ([8]); Ekman (1992) identified five basic emotions based on universal signals ([8]). Johnson-Laird and Oatley (1992) developed their basic emotions from the analysis of the ontology of simple social mammals.

3 Feel – Show: The Distinction

It is important to make a distinction between the verbs feel and show. We here consider the case of the computer. Showing emotions is not such a complicated task, but feeling them is totally different. In fact, machines that show emotions exist and are not difficult to build, but they never feel emotions.

A machine able to show some emotions is thus not necessarily able to feel them.

4 Why Emotion Computers

4.1 Not Useful Everywhere

It is clear that not all future computers have to feel emotions, nor to show them. For example, there is no real interest in using emotional computers in a car ([13]). Using two kinds of machines, the "emotional" ones and the "primary" (i.e. unable to feel) ones, would be useful, for two evident reasons:

1. Producing2 "emotional" machines is expensive, because of the precise training they would need. Using "emotional" machines only when necessary would thus save money;
2. Using, and thus building, "emotional" machines in every domain would create security issues, because machines would then be
   (a) numerous,
   (b) less controlled in "non-emotional" domains (such as a car's emotional machine).

The field of use of emotion computers is an important research subject. Another important area of research is based on the question ([19]):

"Will such computers require either consciousness or emotions of their own?"

4.2 A Need in Some Domains

Emotion computers would be much appreciated in some specific domains, and their main advantages are the following.

1. They could provide better performance in helping humans. For example:
   (a) Assisting users
      i. in everyday tasks, such as helping the computer's user(s);
      ii. in helping autistic children. As said in Section 2.2, constantly helping autistic children (e.g. by providing them with some examples of emotions) is difficult for humans, because emotions are, for non-autistic persons, natural things, and are thus difficult to explain. Furthermore, it can be irritating to "teach" such natural things to a-priori non-receptive persons. Thus, emotion computers could per se automate this task, but they could not be too "emotion-capable," like most humans; if that were the case, their help would be less interesting, because they would get bored after a given time, and we clearly do not want that. There is thus a compromise to make;
2. Emotions might enhance computers' abilities to make decisions ([13]).

Frustration in Front of the Computer Every computer user sometimes encounters frustration with various programs, and the computer always has a passive reaction towards any human emotion, because, firstly, it does not know what an emotion is, and, secondly, its first role was not to detect how humans' emotions are expressed.

Another example where the lack of emotions is a problem is the e-mail case. When one writes an e-mail, the tone is often omitted ([14]). This is due to different facts:

1. Sometimes, the sender of the message does not think that his message could be interpreted differently;
2. It is difficult to communicate emotion (or emotion-neutral behaviour) in a message, even with chosen words, especially when writing business mails.

2 We here assume the machine is "emotional" when it has learnt sufficiently (from humans) to pass the Turing test.

Thus, it often results in an ambiguous message. This is essentially a real problem in serious domains, i.e. when the mail has significant content. Indeed, persons could answer in an inappropriate way, or understand points differently, compared with the point of view of the message's sender. If computers were able to detect the mood of the writer, they could deliver it with the message, giving some supplementary (and non-negligible) pieces of information about the received message. This would result in a "supplementary dimension," which is lacking when reading messages.

With only these two examples, it is clear that computer-assisted emotion recognition is an interesting subject. Here, computers are able to detect our emotions through a given interface, resulting in various human-computer interactions (HCI).

4.3 Uses of Such Machines: Practical Examples

So far, we have used the expressions "emotional machines" or "emotion-capable computers" to designate machines that are able to feel and, possibly, show emotions. This is still vague, and we here want to give a brief overview (with practical examples) of the daily use of such machines, and put names to the concepts.

Robots Robots are used every day in various applications: industry, medicine (e.g. surgery), domestic and spare-time activities, . . . Their aim is principally to automate actions which could either be performed by humans, but would lead to muscular fatigue if performed too much (over too long a period, or too intensively), or which could not be performed by humans, often for safety reasons.

Using robots able to feel and show emotions would greatly enhance human-robot communication, especially in domestic and leisure applications: the robots would be able to participate more effectively in the users' tasks, understand them better, and seem more "true." In medicine, and especially in surgery, a robot able to feel a patient's emotions (e.g. during an operation) could help finding new concepts in medicine, finding diseases, and, above all, improving the communication between the patient and the medical staff, helping nurses and doctors in their daily tasks.

Intelligent Sensors A plethora of sensors is used in many domains: temperature probing (in a computer, in a house, in a car, in an industry), the heartbeat of a patient, . . . We are using more and more sensors every day, and they help other machines or humans to make decisions according to some given input.

A trend is to speak about intelligent sensors, i.e. sensors which would react intelligently (and not only using heuristic methods) to their input. A simple example is given by the action of using your bicycle. Let's say that there are dark clouds out, and that you want to go for a ride on your favourite bike. If you are cautiously inclined, you go for it wearing special clothes adapted for rain. Such clothes are often designed to have special properties: be as light as possible, but protect you from rain, for example. However, despite their quality, it is possible that, during your ride, the clouds leave room for a blue, clear sky, or that the temperature becomes warmer. You are thus likely to sweat, to lower your body temperature. This sweat will settle between your body and the different layers of the clothes you are wearing. If you are riding at this moment, you do not especially want to take off these clothes. You thus have to tolerate these effects in order to continue your ride without being disturbed. It would not be pleasant to ride a hundred kilometers this way.

Here come the intelligent sensors, which would, for example, launch processes to warm you if you feel too cold, and launch processes to cool you if you feel too warm. Evidently, such a development would not only rely on computer science, but also on biology and chemistry.

Other examples are clear when speaking about probes in industry: today's sensors are able to inform persons if an action is requested, but they cannot react intelligently. Their only method is the use of conditions, i.e. heuristic methods.
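To make the contrast concrete, here is a minimal sketch in Python of the purely condition-based (heuristic) reaction described above; the thresholds and action names are illustrative assumptions, not taken from the text. An "intelligent" sensor would replace these fixed rules with learned, context-dependent behaviour.

# Minimal sketch of a condition-based (heuristic) sensor reaction.
# Thresholds and action names are illustrative assumptions.

COLD_THRESHOLD_C = 18.0   # below this, the wearer is assumed to be too cold
WARM_THRESHOLD_C = 27.0   # above this, the wearer is assumed to be too warm

def react_to_body_temperature(temperature_c: float) -> str:
    """Return the action a purely rule-based clothing sensor would trigger."""
    if temperature_c < COLD_THRESHOLD_C:
        return "start_warming"   # e.g. close vents, activate a heating layer
    if temperature_c > WARM_THRESHOLD_C:
        return "start_cooling"   # e.g. open vents, increase airflow
    return "do_nothing"

if __name__ == "__main__":
    for t in (15.0, 22.0, 30.0):
        print(t, "->", react_to_body_temperature(t))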

We would also have intelligent sensors that would detect what we are doing, share knowledge amongst themselves, and interpret, generally, what is going on ([12]): imagine your clothes collaborating so that you feel better!

5 Future

5.1 Development

To design emotion-capable robots, stepping out of science fiction and moving into the laboratory ([19]), and using this model, the engineer needs to be inspired by humans: we need to look at the verbal and non-verbal behaviour (especially the emotions, as stated in Section 2) of humans when they interact with each other, their children, and their pets. We should understand what roles are played by gaze, head orientation, gestures, posture, and verbal interaction [12].

Engineers would also define pain and pleasure (as discussed in Section 2) for the robot ([8]). For example, "attacks" on the "robot health" (engine overheat, electrical shock, . . . ) could trigger a pain feeling in robots. Symmetrically, pleasure feelings could be generated when the user performs a battery recharge, or an engine oil change ([8]).
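As a rough illustration of this idea, here is a minimal Python sketch mapping events affecting the "robot health" to pain or pleasure signals; the event names and numeric valences are illustrative assumptions, not taken from [8].

# Sketch: mapping robot events to pain/pleasure signals, as suggested above.
# Event names and numeric valences are illustrative assumptions.

EVENT_VALENCE = {
    "engine_overheat":   -0.8,  # "attack" on the robot's health -> pain
    "electrical_shock":  -1.0,  # pain
    "battery_recharge":  +0.6,  # maintenance performed by the user -> pleasure
    "engine_oil_change": +0.4,  # pleasure
}

def feel(event: str) -> float:
    """Return a signed valence: negative = pain, positive = pleasure, 0 = neutral."""
    return EVENT_VALENCE.get(event, 0.0)

if __name__ == "__main__":
    for e in ("electrical_shock", "battery_recharge", "unknown_event"):
        print(e, "->", feel(e))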

It would be even better if the robots could learn their own emotion system by themselves. In this way, they would be able to adapt to their environment, probably as we do, and, more significantly, they would feel, like us: the moment when one feels that he can feel is when he has quantitatively and qualitatively important emotions.

5.2 Fear

Many films and novels have been written about "intelligent robots," assuming that a being is intelligent if and only if it has feelings. Producers are fond of scenarios where robots have a significant role.

These films have a non-negligible impact on the public, and, depending on the film's point of view (in caricatured terms, "robots are dangerous for us," or "robots are good for us"), engender positive or negative criticism.

The main novelist on this subject is I. Asimov. He transformed the genre of science fiction when, with the help of John W. Campbell, he formulated laws for robots in the 1940's (Asimov in 1950). Asimov's proposal was "to build the positronic brain of all robots around laws directing each machine to "not injure humans or allow humans to come to harm, to obey humans unless this conflicts with the first law, and to protect itself unless this conflicts with either of the first two laws."" In 1985, Asimov added a "Zeroth law," which superseded the other three, instructing robots to "not injure humanity, or, through inaction, allow humanity to come to harm." ([19])

Roger Clarke suggested that an engineer might well conclude that rules or laws are not an effective design strategy for building robots whose behaviour must be moral ([6,19]). More directly, these laws are ([2,3]):

0. A robot must not injure humanity, or, through inaction, allow humanity to come to harm;
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm;
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law;
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

However, I think that Asimov's idea was a good beginning, even if it needs to be tuned to envision every possible scenario.

Where Do We Want Them to Go To Do we really want computers making moral decisions ([19,20])? They could be emotions-capable, and have many human properties, but would it be useful to allow them to make moral decisions? Put simply, will the full array of human cognitive faculties have to be emulated by computers?

Neglecting this question, even partially, would have a great negative impact on various subjects.

Furthermore, if we allow them to do so, what could differentiate them from us, ignoring the physical aspect (which is actually not a problem)? Being unable to differentiate them from us (morally) would create various problems: how to judge them if they commit actions unfavourable to us? This leads to the two following questions:

1. Would their creator be responsible for their behaviour?
2. Would their user be responsible for their behaviour?


As systems become more and more complex, it is extremely difficult to establish blame when something does go wrong ([19]). Whose or what morality would have to be implemented in such systems ([19])? The four prima facie duties, known respectively as respect, autonomy, beneficence, and non-maleficence ([4,19]), which are human-centered, must be taken into account. Other questions such as

1. What is the ultimate goal of machine ethics?
2. What does it mean to add an ethical dimension to machines?
3. Is ethics computable?
4. Is there a single correct ethical theory that we should try to implement?
5. Should we expect the ethical theory we implement to be complete, that is, should we expect it to tell the machine how to act in any ethical dilemma in which it might find itself?
6. Is it necessary to determine the moral status of the machine itself, if it is to follow ethical principles?

are the purview of machine metaethics ([1]).

The Organic view maintains that (artificial) humanoid agents (i.e. robots looking as much as possible like humans), based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status ([18]). More formally, the organic view proposes, as [18] articulates it, five concepts, and here are four:

1. There is a crucial dichotomy between beings that possess organic or biological characteristics, on the one hand, and "mere" machines on the other;
2. Moral thinking, feeling and action arise organically out of the biological history of the human species, and perhaps of many more primitive species which may have certain forms of moral status, at least in prototypical or embryonic form;
3. Only beings which are capable of sentient feeling or phenomenal awareness could be genuine subjects of either moral concern or moral appraisal;
4. Only biological organisms have the ability to be genuinely sentient or conscious.

In contrast, the conclusion of [5] is:

"A non-biological machine, at least in theory, can be viewed as a legal person."

I think that if emotions-capable computers are created, they must be (in a certain way) carefully, drastically and systematically checked, so as not to have compromising3 emotions or behaviours, for clear reasons. They must continue serving us as they do every day, and still have to obey us.

If these rules are applied, there would be no real problem in using these famous emotions-capable computers.

Anyway, it is always difficult to speak about A.I.'s future without crossing the thin line between appearing provocative and appearing foolish ([17]). Man is a planning animal, and it is our nature to attempt to control our destiny: to have food on the table tomorrow, to be prepared for any dangers, to continue the existence of our species ([17]). Furthermore, humans are often proud of their ability to feel and think, because they consider these as inborn characteristics, which they do not want to share, because they are part of their intelligence. Knowing that our intelligence could be simulated is a threatening fact.

5.3 When

Based on a computational theory of mind and the projection of Moore's law over the next few decades, some scientists ([9,10]) predict the advent of computer systems with intelligence comparable to humans' around 2020-2045. For now, there is no necessity to imagine such scenarios outside of fiction.

5.4 How: Their Collaboration

As we are often more productive when working with another person, it could be useful to give emotions-capable machines a way of communicating, for many reasons:

1. Knowledge transfer often fails ([7]), and allowing computers to share knowledge with others would create robots which
   (a) learn more easily and faster,
   (b) are cheaper to build;
2. Following the human model, actions are done more easily when not performed alone.

3 From our point of view.


6 A Model

Now that we have briefly presented the subject, we give a model which would prevent emotions-capable machines from harming us, but which would allow them to learn effectively. It is described in the following subsections.

6.1 The Four Pillars

Four pillars would be necessary to ensure good behaviour from the emotions-capable machines (a minimal code sketch of this structure is given after the list):

1. Actions: the actions which would really be performed by the emotions-capable machines. These actions would only be determined by the Knowledge Basis; after being performed, a feedback is sent to Learning;
2. Initial Competence: the static (unmodifiable) competence which would always be available to the machine, and which would serve as a basis for the next inferences. It could either be put directly into the Knowledge Basis, or kept separate;
3. Knowledge Basis: Ethics (and its axioms), Factual: the place where the Initial Competence is modified and completed (both thanks to Learning). It also decides whether actions can or cannot be performed;
4. Learning: different processes (e.g. reinforcement learning) would allow the machine to learn from actions, assuming it learns as well as it can, depending on previous ways of learning, which were learnt and then stored in the Knowledge Basis, with which it can communicate. Once something is learnt, it is stored in the Knowledge Basis, together with the way it was learnt.
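The following is a minimal Python sketch of how these four pillars could be wired together. The class and method names are illustrative assumptions, and the learning step is reduced to storing feedback in the Knowledge Basis; it is only meant to make the data flow of the model explicit.

# Sketch of the four-pillar structure described above (all names are illustrative).

class InitialCompetence:
    """Static, unmodifiable competence given to the machine once and for all."""
    def __init__(self, facts):
        self.facts = tuple(facts)  # kept immutable

class KnowledgeBasis:
    """Ethics (axioms) plus factual knowledge; decides whether actions may be performed."""
    def __init__(self, ic, axioms):
        self.facts = list(ic.facts)  # the IC can be copied in directly
        self.axioms = list(axioms)
    def allows(self, action):
        # First test: the action must respect every axiom (here, simple predicates).
        return all(axiom(action) for axiom in self.axioms)
    def store(self, item):
        self.facts.append(item)

class Learning:
    """Receives feedback from performed actions and completes the Knowledge Basis."""
    def __init__(self, kb):
        self.kb = kb
    def feedback(self, action, outcome):
        self.kb.store((action, outcome))  # the learnt result is stored with its origin

class Actions:
    """Performs only the actions the Knowledge Basis allows, then reports to Learning."""
    def __init__(self, kb, learning):
        self.kb, self.learning = kb, learning
    def perform(self, action):
        if not self.kb.allows(action):
            return False
        self.learning.feedback(action, "performed")
        return True

if __name__ == "__main__":
    ic = InitialCompetence(["battery level matters"])
    kb = KnowledgeBasis(ic, axioms=[lambda a: "harm" not in a])  # toy Asimov-like axiom
    actions = Actions(kb, Learning(kb))
    print(actions.perform("recharge battery"))  # True
    print(actions.perform("harm a human"))      # False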

6.2 How It Works

The principle is simple: every action (susceptible to be performed; they all verify this hypothesis) is verified. When an action has to be done, the Knowledge Basis (which we shall now abridge as "KB") is questioned. The action has to pass a first test: it has to respect some given principles (e.g. Asimov's), and axioms. If this test is passed, then, depending on the actions (linked to the current action) which were performed before, a score s ∈ R is calculated, and, depending on the value of s, the action is undertaken or not. This decision is governed by a threshold function which can be modified after a given number of actions.
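A minimal Python sketch of this decision loop is given below. The principle test and the scoring function are stubs (the actual s is discussed in the rest of this section), and all names are illustrative assumptions rather than the paper's.

# Sketch of the decision principle: principle/axiom test first, then score vs. threshold.
# respects_principles() and score() are stubs; the actual s is discussed below.

def respects_principles(action):
    """Toy stand-in for the first test against given principles (e.g. Asimov's laws)."""
    return "harm" not in action

def score(action, history):
    """Stub: count previously performed actions sharing a word with the current one."""
    words = set(action.split())
    return float(sum(1 for past, performed in history
                     if performed and words & set(past.split())))

def decide(action, history, threshold):
    """Perform the action iff it passes the principle test and its score reaches the threshold."""
    if not respects_principles(action):
        return False
    performed = score(action, history) >= threshold
    history.append((action, performed))  # feedback kept for scoring later actions
    return performed

if __name__ == "__main__":
    history = []
    print(decide("fetch a glass of water", history, threshold=0.0))  # True: harmless, low bar
    print(decide("harm a human", history, threshold=0.0))            # False: fails the principle test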

The Two Major Problems There are two major problems with this approach:

1. At the beginning, especially if the Initial Competence ("IC") is very reduced, and if the actions are complex (i.e. seem not to be linked with previous knowledge), we shall have |s| very small, as the machine does not have a precise way to decide whether an action has to be performed or not, because of the lack of data.
This is actually not a problem, as the machine is going to learn faster and more precisely (which means that s will become more and more precise, i.e. that the values taken by s will be justified). As a result, it will potentially take better decisions, assuming the requested actions are gradually more and more difficult.
More formally, if we denote by d the decidability (i.e. the ease of answering the question "Can this task be performed with no harm?") of a given action, and if there are n actions, with i ∈ {2, · · · , n} ⊂ Z, then di is the decidability of the ith action. If di ≈ 0, the decidability of the ith task is said to be easy. If di ≈ +∞, the decidability of the ith task is said to be hard. To make the machine learn efficiently, the sequence

d1 ≪ d2 < · · · < di < · · · < dn

must be imposed. We now give one proof of the fact that d1 must be very small.

(a) Let "LA" denote the Linked Actions performed intelligently4 in the past. We assume they are easily retrievable. There are p = (i − 1) ∈ Z+ such actions (even at the very beginning) iff all the past actions were linked to the same subject. (This means that i ≥ 2: we assume a first action has already been performed, e.g. in IC.) Thus,

di = Σ_{j=1}^{p} dj

represents the total decidability of the last performed actions, which are linked to the current one (and whose number rises to i − 1). It is clear that, if the actions we give are more and more linked with the past ones, we then have

−∞ < lim_{p→+∞} Σ_{j=1}^{p} dj = d_{+∞} < 0,

simply iff d1 < 0 and if

|d1| > Σ_{j=2}^{p} dj.

The actions will thus be more and more decidable if and only if the first one is very easy. It is the only case which can lead to this answer.

4 We here consider that the choice, realized by the machine, to perform (or not) the first action is not intelligent, as it only relies on its primary database: the IC.

[Figure: the four pillars and their interactions. Initial Competence is fed "once for all" into the Knowledge Basis (Ethics, with axioms; Factual); the Knowledge Basis tells Actions what to do; the Actions' feedback is sent to Learning, which passes new information back to the Knowledge Basis and makes it learn better.]

(b) Another measure of the decidability of a current task, according to other tasks which could be linked to the current one, is the following. Let's still denote by di the decidability of the ith task. Let's now denote the number of LA to the current ith task by ci (c standing for "connections"). We then have, if there were p actions performed in the past, each having a score attributed to it,

di = (Σ_{j=1}^{p} cj · n)^{−1} = n^{−1} (Σ_{j=1}^{p} cj)^{−1} = n^{−1} ci^{−1},

n ∈ R being a constant such that

n^{−1} ci^{−1} = Σ_{j=1}^{p} dj.

As we also want p = i − 1 here, we have p = i − 1 ∈ Z+, and p = 0 iff i = 1 (i.e. we are performing an action linked to no others, as it is the first one). As a result, for i = 1,

d1 = (Σ_{j=1}^{0} cj · n)^{−1} = (0)^{−1} = ±∞,

showing that the difficulty of the first task is, taking the modulus, equal to +∞. It is now clear that this (second) method is completely uninteresting, and gives false results.


Remark 1. Even if p were fixed at i (thus, p := i), we would have had

d1 = (Σ_{j=1}^{1} cj · n)^{−1} = (c1 · n)^{−1} = (0)^{−1},

since c1 = 0, as it is the first task, leading to the same conclusions.

2. As the KB becomes larger and larger, computations will be more and more complex, especially for general actions (i.e. actions which are linked to many others). This can be solved easily, using a sufficiently modern hardware architecture, or effective algorithms, which can be linked to Graph Theory.
Practically, the second method (proof) is impossible to implement, as there is no other way to compute n than using the first method (proof); and, if the first method is used, the second has no interest. We shall only focus on the first method. Practically, the first di (i.e. d1) has to be, as shown above, smaller than zero, and tiny. We must impose simultaneously

d1 < 0 and |d1| very small,

thus asking an easy action of the machine: it should be able to infer what to do (doing it or not), easily, from the IC. If this first requested action follows this recommendation, and the di are progressively bigger and bigger, the machine will learn efficiently (a small check of these conditions is sketched below).
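As a small worked check of this recommendation, the Python sketch below tests whether a proposed sequence of decidabilities satisfies the conditions above (d1 negative and tiny in absolute value, then strictly increasing difficulty). The numerical bound for "very small" is an illustrative assumption, since the text leaves it unspecified.

# Sketch: checking the curriculum condition d1 < 0, |d1| very small, d1 << d2 < ... < dn.
# EPS is an illustrative bound for "very small".

EPS = 1e-3

def curriculum_ok(d):
    """Return True iff the sequence of decidabilities follows the recommendation above."""
    if not d or not (d[0] < 0 and abs(d[0]) < EPS):
        return False  # the first action must be very easy (d1 < 0 and tiny)
    return all(a < b for a, b in zip(d, d[1:]))  # then progressively harder actions

if __name__ == "__main__":
    print(curriculum_ok([-1e-4, 0.2, 0.5, 1.3]))  # True
    print(curriculum_ok([0.4, 0.2, 0.9]))         # False: the first task is not easy enough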

The s Value The interpretation of the s value is very simple. As stated above, a threshold function has to be implemented. Given a threshold st, a requested ith action is performed iff

si ≥ st.

For each action (whatever the value of s), s is memorised. In this way, if the same ith action is asked twice or more, and there are still no more ci than before, si can be directly evaluated. The st threshold is at first in Z∗+, but can rapidly move into R, if an exterior agent tells the machine that too many actions are rejected, i.e. that the machine decided not to perform some actions that were harmless to humans, or the opposite.

The calculation of s is very simple. As the higher the si, the safer the ith task, a weight wi is associated with each task linked to the current one. Like before, we suppose that all the past actions are at least substantially linked to the current one. Thus, there are p = i − 1 such interesting actions. We here propose the following model:

si = Σ_{j=2}^{p} wj · ej,

the wj's being the weights associated to j, j ∈ {2, · · · , p} (the higher the sj5, the higher the ej), and the ej's following

ej = 1    iff the jth action is very similar to the ith,
     0.75 iff the jth action is like the ith,
     0.50 iff the jth action has some similarities with the ith,
     0.25 iff the jth action has small similarities with the ith.
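A minimal Python sketch of this scoring scheme follows. The grade values are the ej's from the text; the weights, the way similarity is measured (shared words), and the grade cut-offs are illustrative assumptions.

# Sketch of the proposed score s_i = sum over past linked actions of w_j * e_j,
# with e_j graded by similarity to the current action (grades taken from the text).
# All past actions are assumed to be at least slightly linked to the current one.

def similarity_grade(past_action, current_action):
    """Map a rough word-overlap similarity onto the e_j grades."""
    past, cur = set(past_action.split()), set(current_action.split())
    overlap = len(past & cur) / max(len(cur), 1)
    if overlap >= 0.9:
        return 1.0    # very similar to the current action
    if overlap >= 0.6:
        return 0.75   # "like" the current action
    if overlap >= 0.3:
        return 0.50   # some similarities
    return 0.25       # small similarities

def score(current_action, past_actions, weights):
    """s_i = sum of w_j * e_j over the past actions linked to the current one."""
    return sum(w * similarity_grade(p, current_action)
               for p, w in zip(past_actions, weights))

def decide(current_action, past_actions, weights, threshold):
    """Perform the requested action iff s_i >= s_t."""
    return score(current_action, past_actions, weights) >= threshold

if __name__ == "__main__":
    past = ["fetch a glass of water", "fetch a cup of tea"]
    w = [1.0, 1.0]
    print(decide("fetch a glass of juice", past, w, threshold=1.0))  # True (score = 1.5)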

6.3 Showing Emotions

The machine could be confronted with many situations using this scheme, and a good way to show its reaction to the user would be for it to show emotions. The fact of feeling emotions has currently no interest here, as the machine does not have to execute tasks which could be achieved in a better way if it were emotions-capable. A simple way to show correctly-chosen emotions (i.e. which reflect the feeling the machine would have to feel if it were really emotion-capable) would once again be to use a system of points. Each situation the machine could encounter would be decomposed into its original concepts, each one giving some points. Here are some situations the machine could have to deal with (a sketch of this mapping follows the list):

1. The s value, as determined above, has been computed, and, whatever its value, there was only a tiny amount of data to make inferences with when computing s (i.e. IC and one or two related tasks). It would, in this case, be useful to show e.g. a feeling of discomfort;

5 It is possible that we always have sj = wj, but using this scheme, the system can be modulated as we want it to be.


2. The s value has been computed, and the amount of data is larger. The machine should appear e.g. self-confident;
3. The s value has not been computed, as there is a technical problem. The machine should e.g. cry;
4. The s values previously computed reflected the user's beliefs, and the machine has thus "well" inferred (even though it is only a matter of algebra, and it should be neither conscious nor proud of it!). The machine should appear important (i.e. threatening).
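A minimal Python sketch of such a choice of displayed emotion is given below. The selection rules paraphrase the four cases above; the exact data-amount cut-off and the situation flags are illustrative assumptions.

# Sketch: choosing an emotion to *show*, following the four situations listed above.
# The cut-off for "a tiny amount of data" (IC plus one or two related tasks) follows the text;
# the function signature and flags are illustrative.

def displayed_emotion(s_computed, n_related_tasks, matched_user_beliefs):
    if not s_computed:
        return "cry"             # case 3: a technical problem prevented computing s
    if n_related_tasks <= 2:
        return "discomfort"      # case 1: s was computed from very little data
    if matched_user_beliefs:
        return "important"       # case 4: the inferences matched the user's beliefs
    return "self-confident"      # case 2: s was computed from a larger amount of data

if __name__ == "__main__":
    print(displayed_emotion(True, 1, False))   # discomfort
    print(displayed_emotion(True, 10, True))   # important
    print(displayed_emotion(False, 0, False))  # cry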

Once again, it is clear that, to show these feelings, the expression of feelings in humans has to be studied, so that they seem as real as possible when displayed on the machine's face.


References

1. Anderson, Susan Leigh, Asimov's "three laws of robotics" and machine metaethics, AI and Society, (2008).
2. Asimov, I., The bicentennial man, Doubleday, 1984.
3. Asimov, I. and Silverberg, R., The positronic man, Doubleday, 1992.
4. Beauchamp and Childress, Principles of biomedical ethics, Oxford University Press, 2001.
5. Calverley, David J., Imagining a non-biological machine as a legal person, AI and Society, (2007).
6. Clarke, Roger, Asimov's laws of robotics: implications for information technology, IEEE Computer 26, (1993).
7. Fruchter, Renate and Swaminathan, Subashri and Boraiah, Manjunath and Upadhyay, Chhavi, Reflection in interaction, AI and Society, (2007).
8. Ganjoo, Afshin, Designing Emotion-Capable Robots, One Emotion at a Time.
9. Kurzweil, R., The age of spiritual machines: when computers exceed human intelligence, (1999).
10. Kurzweil, R., The singularity is near, (2005).
11. Moravec, H., Robot: mere machine to transcendent mind, Oxford University Press, 1998.
12. Nijholt, A. and Stock, Oliviero and Nishida, Toyoaki, Social intelligence design in ambient intelligence, AI and Society, (2009).
13. Picard, R.W., Affective Computing, MIT Media Laboratory Perceptual Computing Section Technical Report No. 321.
14. Picard, R.W., Affective Computing, MIT Press, 2000.
15. Rieffe, Carolien and Terwogt, Mark Meerum and Stockmann, Lex, Understanding atypical emotions among children with autism, Journal of Autism and Developmental Disorders, (2000).
16. Scherer, Klaus R., What are emotions? And how can they be measured?
17. Tanimoto, Steven L., The Elements of Artificial Intelligence (An Introduction Using LISP), Computer Science Press, 2000.
18. Torrance, Steve, Ethics and consciousness in artificial agents, AI and Society, (2007).
19. Wallach, Wendell, Implementing moral decision making faculties in computers and robots, AI and Society, (2007).
20. Whitby, Blay, Computing machinery and morality, AI and Society, (2007).