Robots Emulating Children

Scientists are developing robots using biology as their inspiration. Will they succeed in building cognitive agents?

EMBO reports VOL 7 | NO 5 | 2006 | science & society | analysis
© 2006 EUROPEAN MOLECULAR BIOLOGY ORGANIZATION
In many respects, computers are superior to human beings: they can hold more information and easily retrieve every bit of it, they can calculate the square root of any number within a fraction of a second, and they even beat humans at chess. However, there are many tasks in which computers do not stand a chance against a human counterpart, even a toddler. Take, for example, ASIMO, one of the most sophisticated humanoid robots, developed by Honda (Tokyo, Japan). ASIMO can walk around a party, shake hands with the guests and serve food. However, it still walks relatively clumsily and slowly, and can carry out only a limited number of pre-programmed actions; trying to engage ASIMO in a conversation about politics would be futile. AIBO, the robot dog produced by Sony (Tokyo, Japan), seemed promising in the beginning; however, it barked up the wrong tree too often and did not behave like a real dog, so Sony eventually cancelled its production. Such is the disappointing state of affairs in the creation of robots with intelligent behaviour. Is there a chance that future robots will fare any better?

In fact, there is. Robot scientists are
turning to new strategies that are inspired by neuroscience. This is a major step away from earlier approaches in which any potential task had to be programmed explicitly: for every new task that a computer or robot had to perform, the programme behind it had to be adapted. This can actually work successfully, as long as the computers perform according to strict rules in a static environment, such as when playing chess. However, succeeding in a more complex environment with unexpected challenges requires far more flexibility than any hardwired programme can achieve. The behaviour of humans and animals in everyday life is far too complex to ever be formulated in any programming language; even a seemingly simple task, such as the coordination of movements, has constantly challenged robot scientists. Moreover, no engineer can predict every situation that a robot might encounter, and programme its reactions accordingly. Therefore, for a robot to perform in a natural environment, it must be able to adapt its behaviour autonomously. In short, it needs to learn, just as animals and humans do, and this is exactly where robotics is heading.
These new-generation computers and robots imitate cognitive behaviour by emulating various aspects of human or animal learning. "The human brain is a good model for robot learning. There is nothing in nature that learns more efficiently," said Florentin Wörgötter, Professor of Informatics at the Bernstein Centre for Computational Neuroscience and the University of Göttingen (Germany). This raises the question of the extent to which neural processes can be transferred to silicon. Will such robots develop reasoning and autonomously find strategies to solve problems? Will they outperform our expectations and act more autonomously than we want them to?
Machine learning can be achieved at different levels of complexity, much as different scientific fields, from cellular biology to ethology, investigate learning processes at different levels of biological complexity. At the most basic cellular level, Hebbian learning describes the mechanism that increases synaptic efficacy when the pre-synaptic and post-synaptic neurons fire in short sequence, one after the other: "what fires together, wires together". Transferring such rules into mathematical equations to emulate learning processes is actually relatively simple (Arbib, 1995; Torras, 2002).

However, Hebbian learning is not practical for programming more complex behaviour; it would amount to reconstructing the brain by simulating every synapse required for any expected behaviour. To simulate more complex learning, scientists have therefore taken a step up the ladder of complexity. Reinforcement learning, for example, is a mathematical model frequently used in artificial intelligence and cognitive science (Torras, 2002), which parallels the brain's dopamine-based reward system.
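To illustrate how simply the Hebbian rule translates into an equation, here is a minimal weight update in Python. The learning rate and activity values are arbitrary choices for this sketch, not taken from the cited models:

```python
# Minimal Hebbian learning sketch: the weight between two model neurons
# grows when pre- and post-synaptic activity coincide ("what fires
# together, wires together"). All numbers are illustrative.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Return the synaptic weight after one Hebbian step."""
    return weight + learning_rate * pre * post

# Correlated firing strengthens the synapse...
w = 0.5
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 1.5

# ...whereas the weight is unchanged when either neuron is silent.
w_silent = hebbian_update(0.5, pre=1.0, post=0.0)
print(w_silent)  # 0.5
```

Note that the weight only ever grows here; realistic models add decay or normalisation terms to keep it bounded.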
Dopamine-releasing neurons perform an important function in learning behavioural reactions controlled by reward: they encode rewarding aspects of environmental stimuli, such as food, sex or pleasure. Neurons with different response types are important for distinct aspects of reward learning. For example, prediction-error neurons increase their firing rate in response to a reward-predicting stimulus. The mathematical model of reinforcement learning reflects the dopamine-based reward system down to the level of single cells: specific characteristics of prediction-error and other neurons of the dopamine system have their mathematical counterparts in this model (Montague et al, 2004; Wörgötter & Porr, 2005). Reinforcement-learning algorithms therefore prompt the robot to carry out behaviours that maximise whatever it has been programmed to regard as reward, in much the same way as the dopaminergic system does in animals. Like reinforcement learning, many other mathematical models now induce various forms of learning with strong parallels to higher organisms.
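The correspondence between prediction-error neurons and reinforcement learning can be sketched with temporal-difference (TD) learning, one standard formulation in which a value estimate is nudged by a prediction error. The corridor task, reward and parameters below are invented purely for illustration:

```python
# Toy temporal-difference (TD) learning sketch: the agent learns the
# value of each state in a short corridor that ends in a reward.
# "delta" is the prediction error, the quantity that dopamine
# prediction-error neurons are thought to encode.

def td_learn(n_states=4, reward=1.0, alpha=0.5, gamma=0.9, episodes=100):
    values = [0.0] * (n_states + 1)   # last entry: terminal state
    for _ in range(episodes):
        for s in range(n_states):
            r = reward if s == n_states - 1 else 0.0
            delta = r + gamma * values[s + 1] - values[s]  # prediction error
            values[s] += alpha * delta
    return values[:n_states]

vals = td_learn()
# Values increase toward the rewarded end of the corridor:
print(all(vals[i] < vals[i + 1] for i in range(len(vals) - 1)))  # True
```

After training, states close to the reward carry high value and earlier states a discounted share of it, which is how the algorithm steers behaviour toward reward without any of the steps being programmed explicitly.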
Teaching robots to learn has already proved to be successful in various applications. For instance, it allows robots to navigate through unknown terrain without bumping into walls, or to improve coordination, making them move more smoothly and efficiently.
There is more to come. Fed with the right algorithms, future robots might be able to acquire much more sophisticated skills than just being able to navigate through a maze. One such ability would be to understand the function of an object, which is something that babies and toddlers must also learn. Although today's robots can see objects and handle them, they still lack any understanding of their essence or function. A cup, for example, can be filled with liquid and used to drink from, be it a mug, a tea cup or an espresso cup. However, a robot is still not able to classify these objects as being different in shape, size and colour, yet belonging to the same category based on their function.
Programming a robot with curiosity might help it to acquire such abilities. "In principle, this can be done and has already been attempted in some labs," said Wörgötter. Curiosity could be implemented by programming the robot to regard novelty as reward. Such a robot would learn in the same way as a baby or a toddler: by showing a keen interest in, and exploring, the environment to investigate anything that is unknown. A curious robot might eventually be able to understand the function of objects and acquire cognition. Although science is still far from creating such robots, their construction is feasible. The basic concepts required to construct cognitive robots are probably already there. "I don't think it will require an Einstein of cognitive science, who will suddenly understand brain function and put it into a single formula," said Wörgötter. "It is more a matter of bringing together multiple components into an autonomous dynamic system."
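No implementation of "novelty as reward" is specified here; one minimal, hypothetical realisation is a count-based novelty bonus, in which rarely seen stimuli yield high reward and familiar ones fade:

```python
# Illustrative "curiosity" signal: novelty as reward. Stimuli that the
# agent has encountered rarely yield a high reward, so a reward-
# maximising learner is driven to explore unfamiliar things.
from collections import Counter

class NoveltyReward:
    def __init__(self):
        self.seen = Counter()

    def reward(self, stimulus):
        self.seen[stimulus] += 1
        return 1.0 / self.seen[stimulus]  # fades with familiarity

curiosity = NoveltyReward()
print(curiosity.reward("red cup"))       # 1.0 (brand new)
print(curiosity.reward("red cup"))       # 0.5 (seen before)
print(curiosity.reward("espresso cup"))  # 1.0 (novel again)
```

Plugged into a reward-maximising learner such as the TD scheme above, this signal makes exploring the unknown the rewarded behaviour itself.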
Several joint research projects are now underway across the European Union to develop robots with cognitive abilities that can interact with humans in a more sophisticated manner. Researchers in the field are quite optimistic about the feasibility of this work, based on algorithms that emulate various aspects of human learning. According to the web site for PACO-PLUS, a project that aims to design a robot combining perception, action and cognition, robots need to "understand both what they perceive and what they do" (www.paco-plus.aau.dk). The cognitive robot companion COGNIRON is "not only considered as a ready-made device but as an artificial creature, which improves its capabilities in a continuous process of acquiring new knowledge and skills" (www.cogniron.org). According to the project description of MirrorBot, "a biological and neuroscience-oriented approach for multimodal processing will lead to new life-like perception action systems" (www.his.sunderland.ac.uk/mirrorbot).
The cognitive abilities of humans are very complex and include a multimodal integration of different cognitive processes. "In the MirrorBot project we consider different cognitive processes together," explained Stefan Wermter, Professor of Computer Science at the University of Sunderland (UK), and Coordinator of MirrorBot. The aim is to construct robots that understand and can relate their actions to spoken language, and will be able to accomplish simple tasks that they are instructed to perform, such as selecting certain objects from a table and grasping them. The project is inspired by the concept of mirror neurons (Wermter et al, 2005): nerve cells that fire both when an animal perceives an action, through sound or vision, and when it carries out the same action. This indicates that these neurons have a role in linking perception and action. Mirror neurons are also involved in understanding language. "The robot can then learn through instructions," said Wermter, which would be another leap forward in the implementation of learning in robots.
However, to what extent the combination of cognitive sciences and robotics will succeed in building more human-like robots is not clear. Moreover, the question remains as to whether we want robots to acquire too much autonomy. "We don't want a service robot to get so autonomous that it decides to
take a rest rather than vacuuming the floor," said Wörgötter. Although he said this jokingly, the research does raise interesting questions of both biology and philosophy: can a machine, an artificial construct, make decisions? Will it eventually acquire free will?
There are different interpretations of what human free will is and where it comes from (Greene & Cohen, 2004; Rose, 2005). As our understanding of the brain grows, the old view of an immaterial mind or soul that is responsible for complex neuronal functions, such as decision making, becomes increasingly outdated (Farah, 2005). Instead, the view prevails that free will emerges from physical processes in the brain, an idea that is often referred to as compatibilism. If free will requires no soul but can emerge from physical matter, such as the brain, why should it not also emerge from silicon? Indeed, any simple computer can make decisions; it only needs information and algorithms to make a choice. Simple systems that make such decisions already exist in great numbers. A thermostat decides to turn on the heating system if the room temperature drops below a certain level. It certainly doesn't think in the way humans do, "but the question is whether that is just a different degree of complexity, or whether there is some really qualitatively different process that makes human choice not just a physical process," said Joshua Greene, a psychologist at Princeton University (NJ, USA). According to Greene, it is a matter of complexity: "Our conception of ourselves as being above the laws of nature is an illusion."
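The thermostat's "decision", for instance, amounts to nothing more than a comparison. A sketch, with the set-point and hysteresis values chosen arbitrarily:

```python
# A thermostat's entire "decision making": compare a reading with a
# set-point. The hysteresis margin prevents rapid on/off switching
# around the threshold. All values are arbitrary illustrations.

def heating_on(temperature_c, set_point_c=20.0, hysteresis_c=0.5):
    """Decide whether to turn the heating on."""
    return temperature_c < set_point_c - hysteresis_c

print(heating_on(18.0))  # True: the room is too cold, so heat
print(heating_on(21.0))  # False: warm enough, stay off
```

Information plus an algorithm yields a choice, which is exactly the point at issue: whether human choice differs from this in kind, or only in complexity.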
"We don't know whether we can make something out of silicon that is intelligent, but I think, and so do other people, that we probably can in the end," said Ned Block, Professor of Philosophy and Psychology at New York University (NY, USA). "My view is that free will will just come with intelligence." The possibility of constructing such robots strengthens the notion that free will in humans is a matter of physical processes, an idea that many people are still reluctant to accept. "The concept of the soul might become obsolete," said Wörgötter.
However, it will not be straightforward to determine whether a robot actually has intelligence that is sufficiently genuine to produce free will (Block, 1995; Buttazzo, 2001; Franklin, 1995). A seemingly intelligent being can be created using simple algorithms that simulate some aspects of human behaviour. For example, in 1966, Joseph Weizenbaum, then at the Massachusetts Institute of Technology (Cambridge, MA, USA), developed a computer therapist called ELIZA, which analyses written sentences by detecting keywords and formulates an appropriate response according to simple rules, despite having no real understanding of what it is saying. Nonetheless, ELIZA fooled some people into thinking that they were communicating with a fellow human (Block, 1995). "Whether a robot is intelligent or not is a matter not just of its behaviour but of how the behaviour is produced," said Block. "Systems that only fool us will be debunked sooner or later. I think when people come up with a computational structure for an intelligent robot, we will very well agree that it is intelligent."
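The keyword-and-rule principle behind ELIZA can be caricatured in a few lines; the keyword table below is invented for illustration and is far cruder than Weizenbaum's original pattern-transformation script:

```python
# A caricature of ELIZA's method: detect a keyword, emit a canned
# response, with no understanding of the sentence. The rules here are
# invented for illustration, not Weizenbaum's original script.

RULES = [
    ("mother", "Tell me more about your family."),
    ("always", "Can you think of a specific example?"),
    ("sad",    "I am sorry to hear that you are sad."),
]
DEFAULT = "Please go on."

def respond(sentence):
    lowered = sentence.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT

print(respond("My mother is wonderful"))  # Tell me more about your family.
print(respond("The weather is nice"))     # Please go on.
```

Behaviourally this can pass for conversation for a while, which is why Block insists on asking how the behaviour is produced, not merely whether it fools us.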
However, what is true for intelligence might not be true for consciousness (Buttazzo, 2001). "There is more of a problem in figuring out whether a robot is conscious. We would need to understand not just the biology of human consciousness, but also consciousness in general," commented Block. "We would need some kind of conceptual and theoretical breakthrough to be able to tell whether a robot is conscious."
If scientists ever do construct human-like robots, we will have great difficulties in understanding what, or who, we are dealing with. First of all, we might find it hard to evaluate whether a robot has a free will that is based on understanding. In this context, the consequences are nebulous. If it has a free will, would we need robot laws, as suggested by the science fiction writer Isaac Asimov? Moreover, we will probably never know whether a robot has self-awareness or consciousness. Maybe consciousness requires something that is unique to the human brain and cannot be reproduced in silicon, but there will be no way to judge. How will we relate to such an object?
REFERENCES
Arbib MA (1995) Handbook of Brain Theory and Neural Networks. Cambridge, MA, USA: MIT Press
Block N (1995) The mind as the software of the brain. In Osherson D, Gleitman L, Kosslyn S, Smith E, Sternberg S (eds) An Invitation to Cognitive Science, Vol. 3. Cambridge, MA, USA: MIT Press
Buttazzo G (2001) Artificial consciousness: utopia or real possibility? Computer Mag 34: 24–30
Farah MJ (2005) Neuroethics: the practical and the philosophical. Trends Cogn Sci 9: 34–40
Franklin S (1995) Artificial Minds. Cambridge, MA, USA: MIT Press
Greene J, Cohen J (2004) For the law, neuroscience changes nothing and everything. Philos Trans R Soc Lond B Biol Sci 359: 1775–1785
Montague PR, Hyman SE, Cohen JD (2004) Computational roles for dopamine in behavioural control. Nature 431: 760–767
Rose SP (2005) Human agency in the neurocentric age. EMBO Rep 6: 1001–1005
Torras C (2002) Neural computing increases robot adaptivity. Natural Comput 1: 391–425
Wermter S, Palm G, Elshaw M (eds; 2005) Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience. Heidelberg, Germany: Springer
Wörgötter F, Porr B (2005) Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms. Neural Comput 17: 245–319
Katrin Weigmann

doi:10.1038/sj.embor.7400694