
Page 1: Modeling the Role of Energy Management in Embodied Cognition

“Thesis16” — 2012/4/28 — 1:40 — page i — #1

Linköping Studies in Science and Technology

Dissertations. No. 1455

Modeling the Role of Energy Management in Embodied Cognition

by

Alberto Montebelli

Department of Computer and Information Science
Linköpings universitet

SE-581 83 Linköping, Sweden

Linköping 2012


Copyright © Alberto Montebelli 2012 (unless otherwise noted)

ISBN 978-91-7519-882-8
ISSN 0345-7524

Printed by LiU Tryck 2012

URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-76866


“... e non per un dio ma nemmeno per gioco...”
(“... and not for a god, nor even for play...”)

(Fabrizio De André, Un medico, 1971)


Abstract

The quest for adaptive and autonomous robots, flexible enough to smoothly comply with unstructured environments and operate in close interaction with humans, seems to require a deep rethinking of classical engineering methods. The adaptivity of natural organisms, whose cognitive capacities are rooted in their biological organization, is an obvious source of inspiration. While approaches that highlight the role of embodiment in both cognitive science and cognitive robotics are gathering momentum, the crucial role of internal bodily processes as foundational components of the biological mind is still largely neglected.

This thesis advocates a perspective on embodiment that emphasizes the role of non-neural bodily dynamics in the constitution of cognitive processes, in both natural and artificial systems. In the first part, it critically examines the theoretical positions that have influenced current theories and the author's own position. The second part presents the author's experimental work, based on the computer simulation of simple robotic agents engaged in energy-related tasks. Proto-metabolic dynamics, modeled on the basis of actual microbial fuel cells for energy generation, constitute the foundations of a powerful motivational engine. Following a history of adaptation, proto-metabolic states bias the robot towards specific subsets of behaviors, viably attuned to the current context, and facilitate a swift re-adaptation to novel tasks. Proto-metabolic dynamics put the situated nature of the agent-environment sensorimotor interaction within a perspective that is functional to the maintenance of the robot's overall ‘survival’. Adaptive processes tend to convert metabolic constraints into opportunities, branching into a rich and energetically viable behavioral diversity.


Popular Science Summary (Populärvetenskaplig sammanfattning)

Today's mobile robots are becoming increasingly autonomous, and robotic vacuum cleaners and lawn mowers already exist. Such robots still have great difficulty handling more complex tasks, especially when they must act in natural environments or interact with humans. In contrast, the vast majority of living organisms are characterized by a high degree of adaptivity that allows them to interact with their surroundings in a very flexible way. In recent years, much research in cognitive science, artificial intelligence and robotics has emphasized the role of the body in these flexible interactions, as well as in related cognitive processes such as perception and learning. However, in technical research the body is usually reduced to the physical body's interaction with the environment by means of sensors and motors, while internal bodily states are mostly ignored.

This thesis presents a number of simulation experiments with simple adaptive robots that must regularly seek out different energy sources in order to ‘survive’. The author's detailed analyses of these experiments illustrate how internal states, which can be likened to organisms' needs and motivations, play a central role in the robots' decision making and in the dynamic, context-dependent control of their adaptive behavior.


Acknowledgments

My doctoral research has given me moments of authentic intoxication. I highly value the sense of deep accomplishment that I experienced each time I was able to recognize order in the mess of data emerging from my simulations. Despite the fact that anything in life can be improved upon, I will treasure such memories without a single regret. I have many people to thank for that.

First of all, my advisor, Tom Ziemke, for making all of this possible. I feel honored to have worked with Tom. Together with his intellectual and material support, he gave me his trust, and the freedom to work on what I wanted to explore; that can never be forgotten.

One day my co-supervisor, Robert Lowe, landed in Skövde, with his personal luggage of enthusiastic intelligence, passion and deep integrity. Asked for a first impression, I simply replied: “Rob is the kind of person you want to work with.” At that time, I didn't know that my intuition was as close as possible to clairvoyance. Rob, thank you for the generous amount of time that you have spent on my work, at times helping me to identify my untamed intuition and reorienting it towards more coherent and rational paths.

By generously sharing their experimental data, Chris Melhuish, Ioannis Ieropoulos and John Greenman (Bristol Robotics Laboratory) largely contributed to the scientific value of my research. Similarly, I want to thank all those who took the energy to give constructive criticism, a surprisingly rare and therefore extremely precious effort.

Thanks to Silvia Coradeschi for her kind and pragmatic support during her initial co-supervision, to Ron Chrisley for his ‘heroism’ in offering me feedback on the first chapters of my thesis, and to Sten Andler for introducing me to the basics of scientific marketing: never call ‘crazy’ what can be called ‘original’.

I would also like to thank my committee for their attention to my work, and in particular Richard T. Vaughan for his willingness to venture on a transoceanic journey.

I have to acknowledge the administrative support I received from Anne Moe (Linköping University) and Lena Liljekull (University of Skövde), and their patience in the face of my apparent although unintentional inadequacy regarding bureaucratic accomplishments.

Thanks to the ICEA project, to the University of Skövde, and to the EUCog network for offering me a salary, to Linköping University and Örebro University for honoring me with their doctoral studentship, and to SweCog (Swedish National Graduate School in Cognitive Science) for its frequent and generous hospitality.

Life in little Skövde would have been much less enjoyable without my colleagues and friends at the Cognition & Interaction Lab and at the University. In particular I am thinking of Diego Federici (your initial suggestions really helped me to reach the goal), Anthony Morse, Carlos Herrera, Malin Aktius, Paul Hemeren, Henrik Svensson (thank you for editing with Tom the ‘populärvetenskaplig sammanfattning’ of this thesis), Serge Thill, Maria Nilsson, Maria Riveiro, Boris Duran, Jana Rambush, Thomas Fischer, India Morrison and Hector, Pierre Philippe, Filippo Saglimbeni, Gauss Lee, Kiril Kiryazov, Peter Veto, Sandor Ujvari and Sofia Berg, and to the ‘guests’ Robert Clowes, Tom Froese, Michail Maniadakis, Borys Wrobel and Michal Joachimczak. Several other researchers have honored me with their friendship: human resonance is yet another terrific mystery with no scientific explanation.

Despite the help of all these people, my life would have been much harder without the winter thrill of cross-country skiing on ‘Mount’ Billingen. Over its tracks, Claes Rygert taught me most of the little I know about Nordic skiing, and amazed me with the authenticity of his friendship for the sake of friendship. Claes, if I were sure that this work of mine could stand at the level of your humanity, it would be dedicated to your unfading memory.

I will avoid being Italian to the extreme and listing generations upon generations of relatives, living or not. They should know that they have my unconditional gratitude, and a large space in my thoughts. The same applies to some admittedly rare but extraordinary friends, with the only exceptions of Nicoletta Bizzi and Natasha Jankovic, whom I cannot refrain from explicitly naming.

I am compelled to thank Katri Shaller and our daughter Zoe. I started this section by mentioning moments of intoxication, and they both had the daunting misfortune of sharing my ‘scientific hangovers’. Katri helped me with rare understanding and compassion. She transforms everything around me with her mesmerizing artistic talent. Zoe is now three, and there is nothing that I crave more than her laughter. Katri and Zoe are family, despite all of my limits and insanity. Without them nothing would make any sense. It's that simple.

Alberto Montebelli
Linköping, April 2012

Contents

1 Introduction
   1.1 Motivations
   1.2 Thesis outline
   1.3 Complete list of publications
       1.3.1 Journal papers
       1.3.2 International book chapters, conference and workshop papers
       1.3.3 National conference and workshop papers
       1.3.4 Conference and Workshop oral presentations
       1.3.5 Poster presentations

2 Classical Cognitive Science
   2.1 The mind and its philosophical investigation
   2.2 Science meets the mind
   2.3 The birth of cognitive science
   2.4 A computational theory of mind
   2.5 The conceptual siege
       2.5.1 The frame problem
       2.5.2 The Chinese room argument
       2.5.3 The symbol grounding and the common-sense knowledge problems
   2.6 In search of meaning

3 Embodied Cognitive Science
   3.1 What counts as embodied?
       3.1.1 Varieties of embodiment
       3.1.2 The concept of embodiment
       3.1.3 Examples of bodily processing
   3.2 Embodied cognition is situated
       3.2.1 Use of environmental physical factors for enhanced sensing
       3.2.2 Action under time pressure and intrinsic dynamics
       3.2.3 Use of environmental physical factors for cheap computation
       3.2.4 Use of environmental physical resources for cognitive offloading
       3.2.5 Use of environmental physical factors for information structuring
       3.2.6 Social resources
       3.2.7 Experimental implications
   3.3 What does embodiment bring to cognitive science?
       3.3.1 The constitution hypothesis
       3.3.2 The replacement hypothesis
       3.3.3 The conceptualization hypothesis
   3.4 A new perspective

4 Inside the body
   4.1 Internal Robotics
   4.2 Emotion
       4.2.1 What is an emotion?
       4.2.2 Theories of emotion
       4.2.3 Exploring the cognitive/emotion divide
       4.2.4 Internal robotics revisited
   4.3 Metabolism, energy, regulation

5 Cybernetics, dynamics, autonomy
   5.1 The 1950s British cybernetics
       5.1.1 Walter: the brain as dynamic interaction
       5.1.2 Ashby: the brain as ongoing adaptation
   5.2 Dynamic Systems and cognition
   5.3 Autonomy
       5.3.1 McFarland's levels of autonomy
       5.3.2 Action selection
       5.3.3 Biological causation in cognitive architectures
       5.3.4 Autonomy in autopoietic and enactive systems

6 Experimental work
   6.1 Artificial neural networks
   6.2 Evolutionary algorithms
   6.3 Evolutionary robotics
   6.4 Overview of included publications
       6.4.1 Paper I
       6.4.2 Paper II
       6.4.3 Paper III
       6.4.4 Paper IV
       6.4.5 Papers V and VI

7 Conclusions
   7.1 Metabolism as a motivational engine
   7.2 Further scientific contributions
   7.3 Final notes

Appendices
   A Paper I
   B Paper II
   C Paper III
   D Paper IV
   E Paper V
   F Paper VI


Chapter 1

Introduction

1.1 Motivations

During recent years, robotics has redirected much of its traditional emphasis on precision, speed and controllability towards three new objectives: adaptivity, learning and autonomy (Pfeifer and Gomez, 2005). After mastering the artificially protected environment of the high-tech factory, thus establishing economic and social progress, robots face a novel and compelling challenge. Future robots are asked to cope with the world in its least structured form, e.g. the exploration of inhospitable and unexplored territories, participation in search and rescue operations, and the social context of robot-robot and human-robot interaction. The uncertain, sometimes the unknown, potentially described by limited, inconsistent and unreliable information, characterizes most of these activities. The robot needs to adaptively comply with a local spatial and temporal context that underdetermines the appropriate behavior for open-ended tasks in unconstrained environments. The environment's intrinsic dynamics express an inertia that the robot often has no power to influence directly (e.g. a marine tidal stream for a small robotic explorer, or a hostile and non-collaborative human interlocutor for a service robot). The robot has to adapt by synchronizing with exogenous dynamics, thus operating under time pressure. Furthermore, an autonomous robot is expected to manage and provide for its own energetic needs by finding in its surroundings the means for its energetic autonomy, with limited or no human intervention.

This redefinition of the original problem in robotics does not naturally match the traditional techniques of control engineering. Awkwardly walking contemporary humanoid robots often operate under state-of-the-art control technology, strictly coordinating the relative position of their limbs and center of mass with respect to their surroundings. Indeed, no sensible company would produce a tap-dancing robot that, although performing with the legendary ability of Fred Astaire, would also show a tendency to step on its purchaser's toes. Nevertheless, a certain level of freedom and autonomy might be crucial to boost performance. A novel set of methods is required, and growing attention is moving towards the one system that, to our current knowledge, masters the three new objectives: the (biological) mind, an invaluable source of inspiration. In parallel, a critical rethinking of the general organization of cognitive systems has rediscovered a more systemic view of the mind. The brain chauvinism of cognitive processes is undergoing deep reevaluation (Elman et al., 1997; Clark, 1997).

Of course the scientific community has good reasons for a basic intellectual interest. The problem of the mind is one of the most complex and fascinating mysteries that contemporary science is trying to unravel. Beyond rhetoric, the properties of the mind offer per se a collection of highly engaging puzzles from several different perspectives. Nevertheless, there is more on the plate even for the pragmatically oriented. Marketing departments are eagerly waiting for further information about human choice behavior (decision making) at the individual and social level. The civil and military electronics market is also ardently craving the full exploitation of the promised (and long-awaited) ‘artificial intelligence’. Robots are currently perceived as ideal candidates for the next electronic revolution (e.g. Gates, 2007). Biologist and roboticist David McFarland (2008) overtly asks: what would make robots more palatable to the electronics consumer? Beyond that, mastering the implementation of even relatively simple levels of ‘intelligence’ would not simply boost the performance of current artifacts. It would rather launch a broad technological revolution. Problems that are at the moment infeasible would become tractable: for example, the activation and control of still-functional muscle groups or exoskeletal frames for paraplegic individuals (e.g. Harkema et al., 2011); the quality of speech recognition and automatic translation; or the creation of genuinely autonomous robotic systems displaying flexible behavior and some level of empathy with their human users. These are lines of development that economic analysts could easily convert into huge flows of money, with the result of putting the scientific field of cognitive architectures under remarkable external pressure.

However, science lacks a satisfactory theory of the mind. To date this can reasonably be regarded as a fact with serious consequences. The body of work in cognitive science and cognitive architectures relates quite peculiarly to other scientific fields resting on more solid theoretical grounds. A multifaceted variety of often incongruent conceptual and methodological positions fuels the ever-lively debate within a culturally heterogeneous community. At the same time, none of these contrasting paradigms, despite their ambitions, has the intellectual strength to rise to a solid dominant position, and cognitive science endures its age of scientific immaturity (Kuhn, 1962; Chemero, 2009).

This thesis reports, and puts in a broader perspective, the author's research in cognitive architectures developed in the period February 2006 – November 2011 at the University of Skövde, Sweden. Broadly speaking, this work deals with artificial intelligence (AI; see e.g. Russell and Norvig, 2003) in a relatively recent variation. The approach of embodied and situated cognitive science regards the mind as a phenomenon that emerges from the mutual coupling of nervous system, body and environment (e.g. Clark, 1997; Clancey, 1997; Ziemke, Zlatev, and Frank, 2007). Thus, the body of the robotic agent is not simply a passive tool that relocates in space and time the agent's interface with the world. It becomes an essential constituent of cognition, for without a body there is no mind. The classical human-, adult- and language-centric view of cognition dissolves into a radically different set of concepts (Elman et al., 1997; Chemero, 2009).

The core of this thesis explores a very specific research track that has recently emerged within cognitive science and connects the original experimental work reported in these pages to the synthetic approach of artificial life (AL; Varela, 1997). In the 1990s, the work of neuroscientists like LeDoux and Damasio revitalized, in the light of contemporary findings in neuroscience, the theories of William James (1842-1910): emotional dynamics can be interpreted as sophisticated strategies for the survival of the organism, ‘viscerally’ rooted in inner bodily mechanisms (James, 1890; LeDoux, 1996; Damasio, 2000, 2003). Under different emotionally relevant stimuli, the body prepares for action and participates in the cognitive act. These ideas are obviously relevant for cognitive science in general and for cognitive robotics in particular (Ziemke and Lowe, 2009). Whilst the fundamental role of the body in most forms of cognition is now largely acknowledged, its influence is typically explored in its surface dimension (Clark, 1997; Pfeifer and Bongard, 2006; Chemero, 2009; Shapiro, 2011). For example, the length of toddlers' upper limbs induces rather different dynamics of visual interaction with objects than for their adult counterparts: the object, and its manipulation by their own hands, occupies their visual field with large salience and promotes a very particular kind of visual experience (Yoshida and Smith, 2008).

The question about possible non-trivial effects of internal bodily dynamics on cognitive processes remains largely unexplored. The above considerations might justify the introduction of the term deep embodiment, opening up a novel dimension for embodied cognition, orthogonal to the traditional perspective of embodiment related to the external interface of the body. The term refers to the internal bodily phenomena that play a causal role according to somatic theories of emotions. Thus, the general question underlying the work described in these papers might be: (how) do internal, non-neural bodily mechanisms affect cognition and behavior? Given this thesis's research focus in cognitive robotics, the work reported here targets the how as much as the if component of this question.

What exactly is the major subject of this thesis supposed to be? Maybe robotics? Cognitive science or cognitive architectures? Or rather artificial life? As will become clearer to those whose patience allows further reading, much of what this field is today was pioneered by British cybernetics already in the 1940s. The interdisciplinary nature of early cybernetics (involving psychiatrists, psychologists, mathematicians, engineers and so on) was so intense that it is hard to pinpoint it clearly in a classical academic taxonomy. With a pinch of irony, some authors emphasize this transversal character with the adjective ‘antidisciplinary’ (Pickering, 2010). Therefore, I fear that the above question cannot be easily answered. The answer might simply be a matter of finding connections with those specific research paths that can intellectually resonate and cross-fertilize with the work reported here.

In this thesis, I will use cognition in a sense naturally derived from its etymological root. The term comes from the Latin verb cognoscere (to know), and will therefore be used in relation to the process of knowledge accumulation in a wide sense. Unfortunately, we have to acknowledge that the exact meaning of ‘cognition’, ‘mind’ and several other terms currently used in cognitive science and related fields will remain somewhat vague, in the absence of a sound theory of mind. Yet, I will not follow those authors who make a restrictive use of the term cognition (e.g. see McFarland, 2008). Tentatively, the following description of the term fits the spirit and intention of this work (Chemero, 2009, p. 212): “Cognition is the ongoing active maintenance of a robust animal-environment system, achieved by closely coordinated perception and action.”

1.2 Thesis outline

The overall goal of this work is to collect and report the experimental resultsin cognitive robotics achieved by the author, and frame them within thelarger theoretical picture of current research in cognitive science.

This thesis is structured in two main parts. The first part will outline and cross-relate, in necessarily schematic (and therefore incomplete) terms, the overall picture of the broad, articulated and hectically evolving field of cognitive science that constitutes the theoretical background. The purpose of this part is to position the author's theoretical stance within this picture. Not even for a minute will this be meant with the dogmatic tension of the partisan. Just as a carpenter needs a hammer and a saw, scientific work requires tools. The necessary tools of science are a set of methods and a theoretical stance. Such tools need to be declared as explicitly and accurately as possible, for any experiment is necessarily designed, implemented, analyzed and interpreted with a theoretical and methodological frame in mind. A lack of awareness or objectivity about this fact has pernicious side effects: the peculiar biases and idiosyncrasies that necessarily characterize the whole process of experimental design and analysis, far from silent, would become implicit and, therefore, blurred and uncontrolled.

In particular:


• Chapter 2 will sketch a necessarily brief history of the concept and study of the mind. Particular emphasis will be on the historical and conceptual relation between the research programs of cognitive science and AI, which are crucially relevant to the work presented here. Some of the main philosophical objections that have questioned the classical approach, and contributed to its critical rethinking, will also be briefly introduced.

• Chapter 3 will characterize embodied cognitive science, as a science that puts the body inside the dynamics where cognition belongs, embedded in an environment together with its nervous system. In particular, the chapter will explore what kind of system can be considered embodied, the implications of embodiment, and what kind of novelty this perspective introduces into more classical views of AI.

• Chapter 4 will extend embodiment to the set of non-neural internal metabolic and bioregulatory processes that take place inside the body. This will be done in the particular context of somatic theories of emotions, i.e. a particular class of theories of emotions centered on the driving force of bioregulatory processes in the generation of affective behavior. The remainder of the chapter will clarify in physico-biological terms what is meant by metabolism, energy and regulation.

• Chapter 5 will sketch the still largely overlooked work of some of the pioneers of cybernetics and relate it to their closest descendant, the dynamic systems approach to the study of cognition. The general presentation of (i) a particular form of reductionism (one that simplifies the different components of the system to the extreme, yet carefully safeguards the links among them) and (ii) different levels of autonomy will complete the general theoretical background.

In the second part:

• Chapter 6 will first introduce and review the crucial research methods applied to the experiments in this book: artificial neural networks, evolutionary algorithms and evolutionary robotics. In particular, the use of artificial neural networks combined with evolutionary algorithms will raise questions about the traditional role of control system design in robotics. Finally, a selection of articles published in international journals and conference proceedings, or as book chapters, will be briefly presented and to some extent commented on and extended. This collection summarizes the author's experimental research activity, main results and directions for future research.

• Finally, Chapter 7 will briefly review and organize the main elementsreported in this thesis.
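The combination of methods named in the Chapter 6 outline, artificial neural network controllers whose weights are set by an evolutionary algorithm rather than by a human designer, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the thesis's actual experimental setup: the network size, the toy fitness cases and the genetic-algorithm parameters are all invented for the example.

```python
import math
import random

# Hypothetical controller dimensions: 3 sensors, 4 hidden units, 2 motors.
N_IN, N_HID, N_OUT = 3, 4, 2
GENOME_LEN = N_HID * (1 + N_IN) + N_OUT * (1 + N_HID)  # biases + weights

def ann_forward(genome, inputs):
    """Feedforward pass of a small sensor -> hidden -> motor network,
    with all biases and weights read off a flat genome."""
    idx = 0
    hidden = []
    for _ in range(N_HID):
        s = genome[idx]; idx += 1           # bias
        for x in inputs:
            s += genome[idx] * x; idx += 1  # input weight
        hidden.append(math.tanh(s))
    motors = []
    for _ in range(N_OUT):
        s = genome[idx]; idx += 1           # bias
        for h in hidden:
            s += genome[idx] * h; idx += 1  # hidden weight
        motors.append(math.tanh(s))
    return motors

def fitness(genome):
    """Toy stand-in for a behavioral evaluation: reward motor outputs
    that approach a fixed target response for two sensor patterns."""
    cases = [([1.0, 0.0, 0.5], [1.0, -1.0]),
             ([0.0, 1.0, 0.5], [-1.0, 1.0])]
    err = 0.0
    for inputs, target in cases:
        out = ann_forward(genome, inputs)
        err += sum((o - t) ** 2 for o, t in zip(out, target))
    return -err  # higher is better

def evolve(pop_size=30, generations=50, elite=5, sigma=0.3, seed=0):
    """Elitist genetic algorithm: keep the best genomes, refill the
    population with Gaussian-mutated copies of random parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        children = []
        while len(children) < pop_size - elite:
            p = rng.choice(parents)
            children.append([w + rng.gauss(0.0, sigma) for w in p])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In actual evolutionary robotics experiments, fitness would of course be computed from the simulated robot's behavior over time (e.g. its remaining energy) rather than from fixed input-output cases; the sketch only shows the overall selection-mutation loop that replaces hand design of the control system.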


Figure 1.1: Mapping between the main scientific contributions of this thesis and the published papers collected in the appendices (Papers I-VI). The contributions covered are: analysis of behavioral attractors; self-organized dynamic action selection mechanism; structuring of information via multiple time scales; artificial metabolism as a motivational engine; modeling MFCs as an artificial metabolism; readaptation to novel tasks via bodily modulation; bodily data compression; and theoretical organization of the experimental material.

Figure 1.1 illustrates the mapping between the selected published papers that are collected in the final appendices and their main scientific contributions.


1.3 Complete list of publications

1.3.1 Journal papers

• A. Montebelli, R. Lowe, T. Ziemke (2012). Towards metabolic robotics: insights from modeling embodied cognition in a bio-mechatronic symbiont. Artificial Life, in press. [1]

• A. Montebelli, I. Ieropoulos, R. Lowe, C. Melhuish, J. Greenman, T. Ziemke (2011). An oxygen-diffusion cathode MFC model for simulation of energy-autonomous robots. Submitted.

• A. F. Morse, C. Herrera, R. Clowes, A. Montebelli, T. Ziemke (2011). The role of robotic modelling in cognitive science. New Ideas in Psychology, 29(3):312-324.

• A. Montebelli, C. Herrera, and T. Ziemke (2008). On cognition as dynamical coupling: An analysis of behavioral attractor dynamics. Adaptive Behavior, 16(2-3):182-195.

1.3.2 International book chapters, conference and workshop papers

• B. Wrobel, M. Joachimczak, A. Montebelli, R. Lowe (2012). The Search for Beauty: Evolution of Minimal Cognition in an Animat Controlled by a Gene Regulatory Network and Powered by a Metabolic System. Submitted.

• A. Montebelli, R. Lowe and T. Ziemke (2011). Energy constraints and behavioral complexity: the case study of a robot with a living core. AAAI Fall Symposium - Complex Adaptive Systems: Energy, Information and Intelligence, electronic proceedings.

• A. Montebelli (2011). Ecological autonomy: the case of a robotic model of biological cognition. ECAL 2011 Workshop on Artificial Autonomy, electronic proceedings.

• K. Kiryazov, R. Lowe, C. Becker-Asano, A. Montebelli, T. Ziemke (2011). From the virtual to the robotic: bringing emoting and appraising agents into reality. Proceedings of The European Future Technologies Conference and Exhibition.

• A. Montebelli, R. Lowe, I. Ieropoulos, C. Melhuish, J. Greenman, T. Ziemke (2010). Microbial Fuel Cell driven behavioral dynamics in robot simulations. Proceedings of the 12th International Conference on the Synthesis and Simulation of Living Systems, pages 749-756, The MIT Press.

1. Papers marked in bold are included in this collection (Appendix A-F).


• R. Lowe, A. Montebelli, I. Ieropoulos, C. Melhuish, J. Greenman, T. Ziemke (2010). Towards an Energy-Motivation Autonomous Robot: A Study of Artificial Metabolic Constrained Dynamics. Proceedings of the 12th International Conference on the Synthesis and Simulation of Living Systems, pages 725-732, The MIT Press.

• A. Montebelli, R. Lowe and T. Ziemke (2010). More from the Body: Embodied anticipation for swift re-adaptation in neurocomputational cognitive architectures for robotic agents. In J. Gray, and S. Nefti-Meziani, editors, Advances in Cognitive Systems, pages 249-270, IET.

• A. Montebelli, R. Lowe and T. Ziemke (2009). Embodied anticipation for swift re-adaptation in neurocomputational cognitive architectures for robotic agents. Proceedings of the 31st Annual Conference of the Cognitive Science Society, electronic proceedings, The Cognitive Science Society.

• A. Montebelli, R. Lowe and T. Ziemke (2009). The cognitive body: from dynamic modulation to anticipation. In G. Pezzulo, M. Butz, O. Sigaud, and G. Baldassarre, editors, Anticipatory Behavior in Adaptive Learning Systems, pages 132-151, Springer.

• R. Lowe, P. Philippe, A. Montebelli, A. Morse, T. Ziemke (2008). Affective Modulation of Embodied Dynamics. In R. Lowe, A. Morse, T. Ziemke, editors, Proceedings of the SAB 2008 workshop: The role of emotion in adaptive behaviour and cognitive robotics, electronic proceedings.

• A. Montebelli and T. Ziemke (2008). The cognitive body: from dynamic modulation to anticipation. In G. Pezzulo, M. Butz, O. Sigaud, and G. Baldassarre, editors, Proceedings of the Fourth Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2008), electronic proceedings.

• A. Montebelli, C. Herrera, and T. Ziemke (2007). An analysis of behavioral attractor dynamics. In F. Almeida e Costa, editor, Advances in Artificial Life: Proceedings of the 9th European Conference on Artificial Life, pages 213-222, Springer.

• C. Herrera, A. Montebelli, and T. Ziemke (2007). The role of internal states in the emergence of motivation and preference: a robotics approach. In A. Paiva, R. Prada, and R. W. Picard, editors, Affective Computing and Intelligent Interaction, pages 739-740, Springer.

• C. Herrera, A. Montebelli, and T. Ziemke (2007). Behavioral flexibility: An emotion based approach. In F. Sandoval, A. Prieto, J.


Cabestany, and M. Grana, editors, Computational and Ambient Intelligence: 9th International Work-Conference on Artificial Neural Networks (IWANN 2007), pages 798-805, Springer.

• A. Montebelli, E. Ruaro, and V. Torre (2001). Towards the neurocomputer. In Proceedings of the first World Congress on Neuroinformatics, electronic proceedings.2

1.3.3 National conference and workshop papers

• A. Montebelli, R. Lowe and T. Ziemke (2009). Embodied anticipation in neurocomputational cognitive architectures for robotic agents. In Proceedings of the SAIS Workshop 2009, electronic proceedings, University of Linkoping.

1.3.4 Conference and workshop oral presentations

• AAAI Fall Symposium - Complex Adaptive Systems: Energy, Information and Intelligence, Arlington, VA - USA (November 2011).

• Workshop on Artificial Autonomy, ECAL 2011, Paris - France (August 2011).

• ALife XII, Odense - Denmark (August 2010).

• SAIS Workshop 2009, Linkoping - Sweden (May 2009).

• ABiALS 2008, Munich - Germany (June 2008).

• 4th EUCognition Conference, Venice - Italy (January 2008).

• ECAL 2007, Lisbon - Portugal (September 2007).

1.3.5 Poster presentations

• AAAI Fall Symposium - Complex Adaptive Systems: Energy, Information and Intelligence, Arlington, VA - USA (November 2011).

• 4th EUCogII Conference, Thessaloniki - Greece (April 2011).

• CogSci 2009, Amsterdam - the Netherlands (July 2009).

• SAB 2008, Osaka - Japan (July 2008).

• Workshop on multiple time scale in the dynamics of the nervous system, ICTP Trieste - Italy (June 2008).

2. This early work on the computational properties of a hybrid system, partly made of cultured biological neurons and partly of traditional electronic devices, is not relevant to the content of this thesis.


• First World Congress on Neuroinformatics, Vienna - Austria (September 2001).


Chapter 2

Classical Cognitive Science

In this chapter, the investigation of the effect of metabolic proprioception on cognitive processes that motivates this thesis takes a very generalist detour. The goal is to explore how the current problem of mind and cognition fits into a broader historical and scientific perspective.

2.1 The mind and its philosophical investigation

What constitutes the mind? What is the biological function of the mind? How does the mind affect the body (and vice versa)? How is a first-person conscious experience even possible? These are some of the fundamental questions about the mind. They are so ancient and so pervasively basic in the history of human thought that countless generations of philosophers have engaged with the puzzle, trying to solve it. Nevertheless, attempts to shed light on the problem have produced only partial and still unsatisfactory results over the centuries. Over time, several antithetical perspectives have been explored and have alternated as the predominant view in the Western philosophy of mind. Social contingency, more than theoretical soundness, has seemingly determined the relative strength of one front against the others.

Rene Descartes (1596-1650) synthesized the first theoretical formulation that we can still consider relevant to the contemporary debate. He advocated a substance dualism, describing the world as composed of two distinct realms: the physical, constituted of matter carrying physical attributes such as extensional properties, and the mental, constituted of conscious thought and characterized by the absence of any physical attribute. In his view, the physical matter (res extensa) would be fully determined by the laws of nature, infinitely divisible and destructible. On the other hand, no physical law could affect the mental, thinking substance (res cogitans), as it lacks any physical property. It would be indivisible and indestructible (an obvious cultural reference to the unity and indestructibility of the 'soul'). The different properties of the two constitutive substances determine a peculiar epistemological asymmetry. Whilst thought could directly perceive itself through an act of intuitive introspection, the matter of the physical world could only be indirectly perceived as part of the content of the mind.

At the time and in the historical context, when Western science was experiencing one of its most fecund periods and Christianity was torn by 'heretical drifts', the theory conveniently negotiated largely independent domains for science and religion. Its popularity was widespread, and its influence is still quite apparent today, for the typical Western layperson still follows Cartesian categories. With few exceptions, though, those who professionally study the mind in our times tend to adhere to a form of monism, materialism, in one of its several instantiations. Materialism resolves the antithesis between mental and physical substances in favor of the latter1 (Searle, 2004; Kim, 2006; Wilson, 2001).

Materialism emerged following the broad success of physics in the 20th century. Physical laws, together with a methodical reduction of the material universe to its basic components, seemed to grasp some crucial insight into the world as we know it. The most urgent task of materialism was to cope with a number of serious philosophical problems that had affected the philosophical debate since Descartes' times. Among them (Searle, 2004; Kim, 2006; Wilson, 2001):

• the body-mind problem, i.e. the identification of how mind and body (i.e. knowledge and action) are connected and mutually carry causal power;

• the problem of other minds, which skeptically highlights the difficulty of reconciling subjective conscious experience with the dichotomous and only indirectly accessible phenomenology of other people's mental contents;

• the (above mentioned) indirect perception of the external world, which promotes an inevitable skepticism about its actual nature.

Materialism tries to harmonize these problems within the frame of a theory of the mind grounded in the natural sciences. It aims at the reduction of the mind to purely physical components, without any ad hoc introduction of mental categories that, like a classical deus ex machina, would have no explanatory power. How could the material world as described by the physico-chemical laws, to which no level of mental properties could reasonably be ascribed, generate the conscious mental phenomena that are so introspectively urgent? Several theses have provided contrasting views, each casting its influence in turn. Over time, each thesis proved unsatisfactory, offering opportunities for conceptual attacks.

1. This is of course not meant to deny the historical and cultural importance of the opposite monist choice, idealism, which largely dominated the philosophical debate during the 19th century.


Historically, behaviorism was the first largely influential instance of a materialistic theory. Whereas logical behaviorism rejected explanations in terms of mental categories altogether, methodological behaviorism took a less extreme stance. Its pursuers, e.g. J. B. Watson (1878-1958) and B. F. Skinner (1904-1990), argued that any scientific approach to the study of the mind should refer only to systematic observations of physically measurable quantities. This was compatible with operationalism, the view that any scientific concept should be defined in terms of a process of measurement (Wilson, 2001, p.xx). In the behaviorist view, each mental content, e.g. a belief, would be mapped onto a sequence of overt behavior. Despite its deep impact on experimental psychology, in particular for the results achieved in conditioning and learning, the theory eventually succumbed to philosophical attack (Searle, 2004; Wilson, 2001).

Taking a slight historical leap here, functionalism played a crucial role in the development of 'classical' cognitive science. Functionalism defined mental states as functionally related to external and internal stimuli, thus producing new mental states and behavior (Searle, 2004). The causal relation between input, current internal state and output characterizes each mental function. Therefore, since functionalism focuses on the computational process that produces the mental function, it understates the importance of the particular physical implementation (LeDoux, 1996).

Technically speaking, functionalism does not even necessarily belong to materialism (Searle, 2004). Nevertheless, its historically widespread commitment to representationalism and computationalism determined a de facto identification of its models with digital computer technology. Roughly speaking, representationalism built on the idea of meaningful relations between items belonging to the (represented) world and their mental stand-ins (representations). Independently of their physical implementation, the specific manipulation of representations would fully characterize mental functions. Computationalism developed this scenario in strict analogy with digital computers: dynamically reconfigurable, general purpose computational devices that store, fetch and manipulate symbols. Such manipulation operates through purely syntactic rules, i.e. only the symbols' form is taken into account, whilst their semantic content is ignored.

Functionalism had a tremendous influence on the study of the mind. Its natural spin-off, cognitive science in its classical form, catalyzed a scientific revolution. This will be specifically addressed in section 2.3.

2.2 Science meets the mind

Only in relatively recent times has the mind become a feasible subject for the scientific domain. Consistent with the tradition of reductionism in natural science, the initial focus of the investigation centered on the brain, the nervous system and its basic neural components. From the early intuitions on animal electricity by Luigi Galvani (1737-1798) and, in the 19th century, the outstanding histological progress introduced by the 1906 Nobel laureates Camillo Golgi (1843-1926) and Santiago Ramon y Cajal (1852-1934), the unprecedented development of scientific instruments and methods during the 20th century seemed to herald an age of discovery. Electromagnetic phenomena appeared to be the right key to understanding the activity of the nervous system (Walter, 1963).

Despite their severe limitations in terms of temporal and spatial resolution, several methods today offer direct insight into the integrated activity of large neural populations in the brain (Akay, 2006). For example, the systematic measurement of spontaneous and evoked electric potentials on the scalp gives us a dynamic image of the electric field inside the brain. On top of anatomical information already available with older methods, functional neuroimaging traces higher densities of metabolically active chemical markers to establish correlations with specific mental or physical activity. In vitro and in vivo single and multielectrode electrophysiological techniques, configured for acute or chronic observations, zoom this observation in to the level of single or small sets of neural cells (Nicolelis, 2008). Starting from the single neuron, the neurophysiological atomic unit was further reduced to the single ionic channel by combining electrophysiological and biomolecular techniques (Nicholls et al., 2001; Kandel, Schwartz, and Jessell, 2000).

Unfortunately, the overwhelming harvest of details gained at the anatomical, physiological and biomolecular levels is not matched by similar achievements when it comes to even the lowest levels of systemic neural integration. To date, the mind remains as elusive an object as ever. The problematic investigation of even small assemblies of neural networks becomes clear once we observe the simulations of theoretical models in computational neuroscience. Minimal networks of very few reciprocally interconnected neurons can already generate dynamics of surprising complexity (e.g. the work of John Rinzel).
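As a rough illustration of this point (not a model from this thesis), even a two-neuron continuous-time recurrent neural network, integrated with the simple Euler method, produces nontrivial state-dependent dynamics; all weights, biases and time constants below are arbitrary assumptions chosen for the sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, weights, bias, tau, dt=0.01):
    """One Euler step of a continuous-time recurrent neural network (CTRNN)."""
    out = [sigmoid(yi + bi) for yi, bi in zip(y, bias)]          # firing rates
    dy = [(-y[i] + sum(w * o for w, o in zip(weights[i], out))) / tau[i]
          for i in range(len(y))]
    return [yi + dt * dyi for yi, dyi in zip(y, dy)]

# Two reciprocally interconnected neurons (illustrative parameters).
weights = [[5.5, -1.0],
           [1.0,  5.5]]
bias = [-3.0, -3.0]
tau = [1.0, 1.0]

y = [0.5, -0.5]
trajectory = []
for _ in range(2000):
    y = ctrnn_step(y, weights, bias, tau)
    trajectory.append(tuple(y))

print(y)  # final state of the two-neuron circuit
```

Already in this tiny circuit, the qualitative behavior (settling on a fixed point, bistability, or oscillation) depends sensitively on the choice of weights and biases.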

During the last century, at a different and more abstract level of approach, psychology and psychiatry struggled to find their peculiar space within the scientific disciplines. There is enough evidence that pathological minds receive significant help from drugs as well as from expert-driven self-analysis. Nevertheless, these disciplines have not managed to penetrate the kernel of the mystery either. Despite the wide social consensus they have received, they have not been able to produce a conclusive theory of the mind.

2.3 The birth of cognitive science

In their ambitious classical presentation of the field, Stillings et al. elucidate their theoretical perspective and the role of the researcher in cognitive science (Stillings et al., 1995, p.1):

“Cognitive scientists view the human mind as a complex system that receives, stores, retrieves, transforms and transmits information. These operations on information are called computations or information processes, and the view of the mind is called the computational or information-processing view.”

Figure 2.1: Representation of the basic operations within a cognitive system according to the information-processing perspective in cognitive science, as defined by Stillings et al. (Stillings et al., 1995).

The first sentence in this quote might read as being sufficiently general to encompass several different classes of systems and views on cognition. Yet the idea immediately emerges of a mind as local and proprietary to a specific entity, its owner. Fig. 2.1 shows a schematic interpretation of the open concatenation of basic operations that compose this 'complex system'. The environment, i.e. the whole world that can be considered external to this entity, is in no way a constitutive part of the mind, although the mind is obviously immersed in and interacting with it. Nevertheless, the full meaning of the quote, and thus the mission of this research program, becomes intelligible only once we deepen our exploration of the historical roots of cognitive science.

During the last 60 years, the explosive development of computer science has built up the brief illusion that science finally possessed a well defined, clear theory of the mind (Searle, 2004). It has been the time of the theoretical development of revolutionary digital computer technology, i.e. the implementation of a machine that can be dynamically reconfigured for the solution of a large class of so-called 'computable' problems. Several concepts introduced for the development of information theory have appeared suitable for the study of the mind (Newell and Simon, 1976; LeDoux, 1996). More specifically, among the crucial theoretical ideas, Searle lists (Searle, 2004):

• Algorithms: the systematic description of the finite sequence of steps that determines the solution to a problem. It is, indeed, an ancient concept, one that received full formalization only during the first half of the 20th century.

• Turing machine: a mathematical idealization, conceived by Alan Turing (1912-1954), of a minimalist state machine capable of basic operations. The machine is endowed with a tape of infinite length, divided into adjacent cells in a 'blank' initial state, and with a state register, initialized to a 'start' state. The machine's head can read from and write on the tape a collection of logically dichotomous symbols (e.g. '0', '1' and 'blank'), or delete them (thus producing 'blank'). The machine can also move its tape stepwise, forward or backward, in the one possible dimension. Its operations are controlled by a finite sequence of transition functions, rules in the form if <condition> then do <action>, sequentially specified by a program. The <condition> field refers to the particular input read on the tape and to the current state of the machine, whilst <action> implies a writing operation on the tape, a movement of the tape, or an update of the internal state of the machine itself. Turing gave a formal proof of the existence of a universal Turing machine, capable of carrying out the simulation of any given Turing machine.

• Church-Turing thesis: this unproven hypothesis, independently elaborated by Turing and by his doctoral advisor Alonzo Church (1903-1995), claims that any problem whose solution can be algorithmically formulated can also be solved on a Turing machine. This also mathematically formalizes the concept of a computable function, as a function that can be calculated by a Turing machine.

• Turing test: in order to test the level of performance reached by intelligent artifacts, Turing devised a measuring procedure for human-level intelligence (Turing, 1950). Different versions of the test are possible in principle, but in general it amounts to a 'filtered' interaction between a human expert and either another human or (in the interesting case) an intelligent artifact. The expert guesses the real nature of the concealed interlocutor. The obvious limitations of this 'game of deception' were probably clear to Turing, who suggested his test on much less ambitious grounds than the consensus that followed actually established (Turing, 1950). However, the fundamental idea behind the test is the definition of a general metric to compare the performance of different intelligent machines.

• Multiple realizability: the computational properties of a Turing machine would be unaffected by its physical implementation in any available technology. Given the current technology, the commercially available physical approximation of the Turing machine, i.e. the digital computer, relies on semiconductor technology for a mere matter of performance and convenience. Nevertheless, computing devices, in principle equivalent to digital computers, could be implemented in several other technologies. For example, see the billiard ball model, a computational device whose functional components are based on elastic collisions between the molecules of a perfect gas confined in a specifically shaped container under appropriate initial conditions, and which is computationally equivalent to a digital computer (Fredkin and Toffoli, 1982), or a biomolecular approximation of a Turing machine (Benenson et al., 2001).

• Levels of description: in the most influential formalization of this idea (Marr, 1982), a system can be analyzed at different levels of abstraction. At the computational level, the description of the system might be given in terms of functions, thus drawing abstract generalizations. For example, we might say that a calculator performs the sum rather than, say, the multiplication of two natural numbers. At the algorithmic level, the focus would be on the specific sequence of basic steps that produces the result. At the implementational level, the attention would be on the detailed physical mechanisms that implement the function (e.g. the connectivity of hydraulic, pneumatic, digital or analog electronic devices). Each of these levels uniquely captures important elements for the understanding of the system. The implementational level of detail might mask the actual function that the system is performing. At the same time, at the computational level we would ignore everything about the countless ways the same function might actually be implemented on the physical medium.

• Recursive decomposition: a large and complex problem can be broken down into simpler problems. This reduction can be recursively applied, until an atomic level of readily implementable simplicity is achieved (the divide et impera method).
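To make the transition-rule scheme described above concrete, here is a minimal sketch of a Turing machine simulator; the rule encoding and the bit-flipping example machine are illustrative assumptions, not drawn from the thesis:

```python
# Minimal Turing machine: rules map (state, symbol) -> (write, move, next_state),
# i.e. the "if <condition> then do <action>" transition functions described above.

BLANK = "_"

def run_turing_machine(rules, tape, state="start", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape, extendable in both directions
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, BLANK)
        if (state, symbol) not in rules:      # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write                    # <action>: write on the tape
        head += {"R": 1, "L": -1, "N": 0}[move]   # <action>: move the tape
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(BLANK)

# Illustrative machine: flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine(flip, "10110"))  # -> 01001
```

The dictionary of rules plays the role of the program; the halting condition (no applicable rule) is one common convention among several possible ones.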

On these grounds a brand new cross-discipline, cognitive science, bloomed as an interdisciplinary aggregation of researchers in psychology, linguistics, computer science, applied mathematics, philosophy and neuroscience (to name a few).

2.4 A computational theory of mind

Probably the summit, and intellectually most daring presentation, of this particular perspective on cognition was advocated by Allen Newell and Herbert Simon during their famed 1975 lecture as recipients of the Turing Award (Newell and Simon, 1976). Their physical symbol system hypothesis is meant to apply to intelligent information processing in human, biological and artificial minds.


At any given moment, a physical symbol system contains a set of processes that create, modify, reproduce and destroy physical patterns (symbols) that can be combined in expressions (symbol structures) to create new expressions. A physical symbol system is thus a machine that dynamically produces and maintains a set of symbol structures in time. The system possesses accessory capacities: designation and interpretation. “An expression designates an object if, given the expression, the system can either affect the object itself or behave in ways dependent on the object. [...] The system can interpret an expression if the expression designates a process and if, given the expression, the system can carry out the process” (Newell and Simon, 1976, p.116).

The core of the hypothesis is expressed as follows: “A physical symbol system has the necessary and sufficient means for general intelligent action” (Newell and Simon, 1976, p.116). Necessity implies that any system capable of general intelligent action must be some sort of instantiation of a physical symbol system. Sufficiency states that an appropriate implementation of a physical symbol system can be organized so as to produce general intelligent action. By “general intelligent action” the authors mean the typical capacity of human cognisers to deploy “in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment”, given some obvious limitations due to time constraints and complexity. Indeed, the above is scientifically alluring, for it frames a comprehensive and coherent research program. Cognitive science took up this research program (“understanding the functional organization and processes that underlie and give rise to mental events”, LeDoux, 1996, p.29) and coordinated the relevant research activity for a computational theory of mind.

One of the most intriguing aspects of research in cognitive science, and the most relevant for the present thesis, should be obvious by now. There is an overt quest for a general organizational principle of matter that is held responsible for the existence of the mind and that would work equally well for organisms and machines. As Stillings et al. put it (Stillings et al., 1995, p.8):

“Just as biologists concluded in the nineteenth century that life arises from particular organizations of matter and energy and not from a special life force, so cognitive scientists proceed from the assumption that cognition arises from material structure and processes and not from any mysterious extraphysical power.”

An analogous idea somehow permeates the work described in these pages. Biology made significant progress when it acknowledged that biological organisms are a specific form of organization of physical matter. This implied taking seriously the chemical-physical properties of organisms. Similarly, cognitive science might benefit from acknowledging that cognition rests on a particular form of organization of living matter.


Apparently, the emerging computer technology is assumed to be a sound metaphor for the mind. The issue of multiple realizability unifies biological and artificial cognitive systems within a functionalist perspective (Stillings et al., 1995, p.6):

“... since algorithms can be carried out without any higher knowledge about their meanings, they can be carried out by physical systems, which can be biological or engineered.”

The same authors are well aware of the danger of naively taking the metaphor too far. Of course, they assert, computer programs and human cognition are two quite different things, and the former are not necessarily a good model of the latter. Nevertheless, they claim (Stillings et al., 1995, p.11):

“Our stress on the independence of an information process from its physical implementation is akin to the common distinction between software and hardware in the computer world.”

Roughly, this had been the answer of classical cognitive science regarding the organizational principle of the mind. In his critique, philosopher of mind J. Searle concisely describes this attitude with the analogy (Searle, 1980, 2004; LeDoux, 1996): “the mind is to the brain as the program is to the computer hardware.”

A second assumption, the deep commitment to a representational description of the world, is deeply rooted in the traditional approach of artificial intelligence (AI), the branch of computer science dedicated to the attempt (Russell and Norvig, 2003) “not just to understand, but also to build intelligent entities.” According to the classical AI approach, the relevant information is represented within the system by suitable physical entities. What counts as relevant constitutes a first fundamental choice of the level of abstraction, and how to represent it is a further choice made by the designer of the system. These are the problems of knowledge representation (Rich and Knight, 1990; Russell and Norvig, 2003; Brachman and Levesque, 2004).

Once this is set, the hypothesis states that a manipulation of representations, following a set of mechanical rules (an algorithm) and oblivious to any externally ascribed semantics, would be capable of governing intelligent action. This manipulation can count as (e.g.) perceiving, reasoning and planning. It generates new knowledge, in the sense of drawing new conclusions that were not already explicitly represented. Traditionally, AI programs used to generate new knowledge by exploring new possibilities according to the rules of logic (deduction and inference) applied to propositional structures. Other typical methods explored a set of possibilities within a given problem domain (search). More recently, statistical methods (e.g. Bayesian and Markov methods) have proved more efficient at dealing with uncertainty, i.e. those situations where the problem is not completely and unambiguously specified (Russell and Norvig, 2003). In parallel, different forms of learning have made artificial minds more flexible, endowing them with the capacity to modify their behavior in the face of the unexpected.
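As a small illustration of the statistical methods mentioned above, a single Bayesian update weighs prior belief by the likelihood of the observed evidence; the hypotheses and numbers below are invented for the sketch:

```python
# Minimal Bayesian update: posterior(h) ∝ prior(h) * P(evidence | h).
# Hypotheses and probabilities are illustrative assumptions.

def bayes_update(prior, likelihoods):
    """prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(evidence | h)}."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())        # normalizing constant P(evidence)
    return {h: p / z for h, p in unnormalized.items()}

# Toy question: is a concealed interlocutor a human or a machine,
# given that it produced a fluent answer?
prior = {"human": 0.5, "machine": 0.5}
likelihoods = {"human": 0.9, "machine": 0.3}   # P(fluent answer | hypothesis)
posterior = bayes_update(prior, likelihoods)
print(round(posterior["human"], 2))  # -> 0.75
```

The same mechanics underlie far larger probabilistic models; only the number of hypotheses and the structure of the likelihoods change.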

The implications of the information-processing view presented above should now be clearer. Cognition, in the classical perspective of cognitive science, boils down to a skillful set of algorithms operating on an appropriate choice of representations. Within this perspective, Stillings et al. list some fundamental characteristics that are common to all cognitive systems (Stillings et al., 1995, p.2-7):

• information processes are contentful and purposeful: the information processes support an organism (or system) in establishing and maintaining its adaptive or goal-oriented relation to the environment;

• information processes are representational: representations within the organism (or system) form a new domain that ‘stands for’ the information in the original domain;

• information processes can be described formally: the manipulation of information within the organism (or system) can be completely specified in terms of processes (algorithms) that operate on representations, following purely syntactical rules.

Recourse to representations tends to emphasize a separation between the mind and its environment: the former creates the domain that stands for the latter. The purely formal, syntactic manipulation of representations preserves their semantic value, and representations allow for the dynamic growth of an increasingly articulated representation of the world. Typical properties of representational structures are, in fact, systematicity and compositionality: the capacity to produce or understand a structure of symbols is systematically related, in ways that depend on its semantic content, to the capacity to produce or understand other structures of symbols (Fodor and Pylyshyn, 1988).

While we lack a satisfactory formal definition of intelligence, intelligent agents are expected to behave ‘rationally’ (Russell and Norvig, 2003). One of the contributions of AI has been the redefinition of what should be meant by ‘rational’. Direct experience showed that the classical idea of perfect rationality, i.e. agents that behave so as to maximize their expected utility, was unrealistic in sufficiently complex environments, characterized by higher levels of uncertainty and time constraints. This class of problem domains constitutes the actual challenge of present and future AI. Looser concepts of rationality are required to deal with such situations, where the agent might simply not find enough information or time for computing a complete, perfectly rational solution to its problem. In the 1950s, Nobel laureate Herbert Simon radically transformed the theory of microeconomics, classically based on the concept of optimization, by introducing the biologically inspired concept of bounded rationality. In his theory, satisficing consumer choices, i.e. decisions that are simply ‘good enough’, took the traditional place of optimal and perfectly informed consumer decisions. Trying to formalize Simon’s original intuition, Russell and Norvig suggested the concept of ‘bounded optimality’ for a feasible agent theory. A bounded optimal agent ‘behaves as well as possible’, based on a metric that compares its performance to that of other agents provided with similar computational capacity under similar environmental constraints, even when its reasoning or search is incomplete or unsound (Russell and Norvig, 2003, p. 973).
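The contrast between satisficing and perfect rationality can be made concrete with a minimal sketch (invented for illustration, not a formalization from Simon or from the thesis; the prices, the utility function and the aspiration level are all made up):

```python
def satisfice(options, utility, aspiration):
    """Simon-style satisficing: accept the first option that is
    'good enough' (utility >= aspiration), examining as few
    options as possible."""
    examined = 0
    for opt in options:
        examined += 1
        if utility(opt) >= aspiration:
            return opt, examined
    return None, examined  # no option met the aspiration level

def optimize(options, utility):
    """Perfect rationality: examine everything, pick the maximum."""
    return max(options, key=utility), len(options)

prices = [90, 70, 55, 60, 40]     # hypothetical consumer choices
u = lambda price: 100 - price     # cheaper is better
print(satisfice(prices, u, aspiration=40))  # stops after 3 options
print(optimize(prices, u))                  # examines all 5
```

The satisficer settles for a merely ‘good enough’ option at a fraction of the search cost, which is exactly the trade-off that makes bounded rationality viable under time pressure.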

The promising first implementations of general-purpose AI systems showed convincing performance when faced with the so-called ‘microworlds’ that constituted their simplified problem domains. The idealized goal behind these first tests was coping with open-ended tasks, i.e. tasks in natural and dynamically changing scenarios, not fully known in advance (Newell and Simon, 1976). A deeper understanding was achieved when it became clear that scaling up from this stage to more complex scenarios, e.g. producing real-time action in realistic environments, could typically be achieved only at the price of equipping the AI systems with domain-specific knowledge provided at design time. At a practical level, this implied the fragmentation of the original generalist concept of AI into special subdomains. Different branches of this initially unified discipline withdrew into highly specialized niches, each dealing with very specific subproblems and relying on very distinctive sets of ad hoc methods. The relative, growing isolation of topics such as ‘vision’ and ‘robotics’ from the rest of the discipline is a clear sign of this drift (Russell and Norvig, 2003).

2.5 The conceptual siege

The explosive growth and success of computer science over recent decades has had powerful effects on establishing the field of cognitive science. This theoretical stance proved extraordinarily productive, as it offered clear, testable hypotheses about the unconscious processing of information (Newell and Simon, 1976; LeDoux, 1996; Chemero, 2009). The different disciplines within cognitive science cross-fertilized to a large extent, and applied cognitive science provided several practical solutions. For example, cognitive studies in linguistics translated into support for children with speech disorders, while expert systems, probably the most successful achievement of AI, flank humans in critical decision making (e.g. in medicine and nuclear plants).

The use of the computer as a research tool became prevalent. The computer was not simply a (questionable) metaphor of the mind. The wide availability of relatively cheap digital computers encouraged the good habit of translating cognitive theories into computer programs, thus thoroughly testing the consequences of a theory in simulation. This helped researchers to discover the omissions, inaccuracies and ambiguities that may unconsciously creep into theoretical formulations. The level of accuracy required for coding a computer program makes it a powerful tool for the effective exploration and debugging of a theory (e.g. see Section 2.5.1 below).

We should not forget that classical cognitive science also developed a characteristic set of experimental methods for the validation of its models (Shapiro, 2011; Stillings et al., 1995). For example, the method of reaction time, based on measures of the delay between the stimulus and the subject’s response during the execution of a simple task, had been used since the early days of experimental psychology in the 19th century (Sternberg, 1969). This method typically relies on two basic assumptions. First, the stage theory, which states that mental processes are organized as a sequence of elementary operations, each starting as soon as the preceding one has been accomplished. Second, the assumption of pure insertion, which affirms the relative independence of the different subtasks that comprise a more complex task (Sternberg, 1969). In cognitive psychology, the systematic comparison of human and computer-generated reaction times is still often used to corroborate the cognitive hypothesis behind the computational model implemented in the program. Sternberg (1969) asked his subjects to memorize short lists of items and measured their reaction times as he asked questions such as whether a given item was present in the list. The length of the list was one of the experimental parameters. The search algorithm that best fitted the data was exhaustive serial search, in which the set of items is scanned one at a time from first to last, regardless of whether the item has already been found. An interesting observation is that this result does not correlate with our conscious experience: all subjects reported either that they stopped the search as soon as they found the item, or that they simply ‘knew’ that the object was or was not in the list, with no search whatsoever (Sternberg, 1969).
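The exhaustive serial model that fitted Sternberg’s data can be sketched as follows (a minimal illustration, not Sternberg’s own code; the per-item cost is an assumed illustrative constant, loosely based on the often-cited slope of roughly 38 ms per item):

```python
def exhaustive_serial_search(memory_list, probe, step_cost=0.038):
    """Exhaustive serial search: scan the whole list one item at a
    time, never terminating early, then report. The predicted
    reaction time therefore grows linearly with list length, and is
    the same whether or not the probe is present."""
    found = False
    comparisons = 0
    for item in memory_list:
        comparisons += 1      # every item is checked...
        if item == probe:
            found = True      # ...even after a match is found
    return found, comparisons * step_cost

print(exhaustive_serial_search([3, 8, 1, 5], 8))  # probe present
print(exhaustive_serial_search([3, 8, 1, 5], 9))  # probe absent, same cost
```

The counter-intuitive signature of this model, which the data supported, is that present and absent probes cost the same: a self-terminating search would on average stop halfway through on present trials.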

These and other experimental methods increased confidence in the general validity of the perspective of cognitive science. Nevertheless, just as it was attracting broad public attention, the field soon came under several serious conceptual attacks, which offered explanations for the slow progress of a large body of well-funded scientific research. Among the most influential are the frame problem, the Chinese room argument, the related symbol grounding problem and the common-sense knowledge problem.

2.5.1 The frame problem

In his entertaining description of the frame problem, philosopher D. Dennett introduces us to a robot under life threat (Dennett, 1987): its spare battery is locked in a room with a time-bomb ready to blast. Several generations of increasingly ‘smarter’ control systems result in the robot’s repeated explosions, which in turn inspire yet more sophisticated controllers. The combinatorial explosion of facts that must be explicitly considered for the solution of a seemingly trivial problem dooms the control system to failure in the presence of even a mild time constraint.

Dennett was the first to influentially import the frame problem into the philosophical domain (Shanahan, 2009). In fact, the problem was first identified in its technical form in classical logic-based robotics (McCarthy and Hayes, 1969). How can an action and its consequences be represented in a formal language, without the need to also explicitly represent a potentially infinite number of trivially evident ‘non-effects’? If Dennett’s robot pulls its battery out of the room (action), can it avoid explicitly representing the facts that the blue walls of the room will not turn red and that the planet Jupiter’s trajectory will not be affected (as mere examples of an infinite number of likely non-effects)? Given the pragmatically inspired mission of AI (i.e. the creation of working artifacts displaying ‘intelligent’ properties), within a few decades the technical problem received satisfactory solutions for most cases (Shanahan, 2009, 1997). In general, the solutions rely on two assumptions: (i) the common sense law of inertia, i.e. the common sense observation that an action does not modify most of the properties used to describe its context, unless we have specific reasons to expect an exception; (ii) the use of non-monotonic logic, i.e. a logic that, differently from monotonic logic, assumes default reasoning and allows for rules with open-ended sets of exceptions (Shanahan, 1997).
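The common sense law of inertia can be given a toy operational form (a sketch invented for illustration, not McCarthy and Hayes’ situation-calculus formalism): an action’s description lists only the properties it changes, and every unlisted property persists by default, with no explicit ‘non-effect’ axioms.

```python
def apply_action(state, effects):
    """Common sense law of inertia: an action specifies only the
    fluents it changes; everything else carries over unchanged,
    so no 'non-effects' ever need to be represented."""
    new_state = dict(state)    # all unlisted properties persist
    new_state.update(effects)  # only the declared effects change
    return new_state

# Invented toy world echoing Dennett's examples.
world = {"battery_location": "room", "wall_color": "blue",
         "jupiter_orbit": "unchanged"}
world = apply_action(world, {"battery_location": "outside"})
print(world["battery_location"], world["wall_color"])
```

The persistence of `wall_color` and `jupiter_orbit` costs nothing here, which is the pragmatic appeal of the inertia assumption; Dennett’s epistemological worry, discussed below, is untouched by such tricks.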

Nevertheless, this (partial) technical success did not in any sense affect the epistemological frame problem as raised by Dennett, for it rests on the very logical structure of classical cognitive science sketched in Section 2.4. Dennett’s argument attacks on basic grounds the view of cognition built in terms of representations and the manipulation of propositional structures. If the common sense law of inertia can satisfy the pragmatically engineering-minded, Dennett’s argument poses a deep epistemological problem that questions the possibility of a systematic and complete revision of those aspects of the knowledge base that are affected by a given action.

2.5.2 The Chinese room argument

To criticize the ascription of mental properties, such as understanding and intentionality, to ‘intelligent’ artifacts, a practice quite common back in the 1970s among the proponents of the so-called strong AI stance, Searle introduced his classical thought experiment, the Chinese room argument (Searle, 1980, 2004). The argument summons our capacity for introspection, in a scenario that is obviously reminiscent of the Turing test. In fact, an explicit assumption of the argument is to consider the AI program fully successful, i.e. AI engineers can produce programs reliably capable of passing the Turing test. The Chinese room argument investigates the implications of this scenario.

Imagine being isolated in a room with no form of contact with the external world other than a slot, through which you can receive or deliver a pad. Imagine that the pad you receive is covered with symbols in some mysterious language that is totally unintelligible to you. The Chinese language, chosen for the original formulation of the argument, would do for most Western readers. Unknown to you, the strings of symbols on the incoming pad carry specific questions. You are also provided with a ‘book of rules’ (in this metaphor of AI, to all effects the computer program) and a database of symbols. You only have to slavishly follow the rules that lead you to the generation of a brand new string of Chinese symbols. The book only gives you ‘recipes’, i.e. the syntactic rules for combining symbols, without shedding any light on the meaning of the symbols. Nevertheless, those outside the room who receive the pad from you, and who can understand Chinese, will be delighted to read the correct answer to their own question (the success of this procedure is guaranteed by the hypothesis that the AI research program has succeeded).

At this point of the argument, you are asked to appeal to your introspective awareness. Searle would provocatively ask: “Even though you delivered a correct answer, do you really feel that you understand Chinese at all?” If you agree that you do not, then no ‘intelligent’ machine based on the classic principles of the computational theory of mind can achieve any understanding, or even intentionality,² only in virtue of running the right ‘program’, i.e. the right sequence of merely syntactic rules. In fact, information processing of the kind performed by digital computers is completely devoid of intrinsic semantic value. Pure syntax is not sufficient to generate understanding, no matter whether the machine passed the Turing test or not. In short, the argument is meant to show that purely syntactic processing is not sufficient to produce a fundamental mental property such as intentionality. Therefore, it does not only attack the research program of classical cognitive science (which is expected to explain how mental properties are generated), but also undermines the more pragmatically oriented AI project, which should at least be able to reproduce basic capacities of the biological mind, such as directing itself towards a goal.

2.5.3 The symbol grounding and the common-sense knowledge problems

Searle’s Chinese room argument was mainly aimed at an appraisal of the possible level of ‘understanding’ achieved by an artifact capable of passing the Turing test. Harnad extended this scenario to investigate, within a symbol system, the capacity of symbols to carry meaning (Harnad, 1990). Harnad confronts us with the problem of learning Chinese as a second language, given that the only available prop is a Chinese-Chinese dictionary. Within the dictionary, the meaning of the Chinese symbol we would like to know (the definiendum) would be given in terms of a sequence of Chinese symbols (the definiens). This would most likely start a combinatorial explosion of searches for new entries, several of which would probably lead to circular definitions. Field experience with the decipherment of ancient languages shows that a code can hardly be broken unless at least a minimal key to its translation into a known language is available. A further, and more difficult, problem is learning Chinese as a first language from the same Chinese-Chinese dictionary. According to Harnad, this unsolvable problem is the situation actually faced by classical AI (Harnad, 1990): “How is symbol meaning to be grounded in something other than just more meaningless symbols?”

²Intentionality is a technical concept that in philosophy denotes the capacity of the mind to be directed towards a goal, to be about something, to stand for an external state of affairs.
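Harnad’s dictionary-go-round can be sketched in a few lines (a toy illustration invented for this purpose, not Harnad’s formalism; the ‘dictionary’ entries are made-up placeholder symbols): chasing a definiendum through its definiens, inside a dictionary that contains nothing but more symbols, inevitably closes a circle.

```python
def expand(symbol, dictionary, seen=None):
    """Chase a definiendum through its definiens. In a dictionary
    whose definitions contain only more dictionary symbols, the
    chase either explodes combinatorially or closes a circle;
    it never bottoms out in anything non-symbolic."""
    if seen is None:
        seen = []
    if symbol in seen:
        return seen + [symbol]  # circular definition detected
    for part in dictionary[symbol]:
        result = expand(part, dictionary, seen + [symbol])
        if result:
            return result
    return None

# Invented three-entry 'dictionary' of mutually defined symbols.
toy = {"zhi": ["dao"], "dao": ["xue"], "xue": ["zhi"]}
print(expand("zhi", toy))  # the chain closes on itself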

There seems to be a deep problem for the ‘atomistic’ symbol systems of classical AI. In these kinds of systems, knowledge is declarative, i.e. about the know-what of the world, specified at a linguistic level and detached from action. According to Dreyfus, the important features of intelligence have a ‘holistic’ nature and involve procedural knowledge, i.e. they regard the know-how of an engagement in action (Dreyfus, 1992). The characteristics of symbol systems make the acquisition of procedural and common-sense knowledge in symbolic systems problematic, as it technically involves a systematic conversion from procedural, bodily and action-based knowledge to declarative knowledge and vice versa. However, the formation of this common-sense knowledge seems fundamental in order to cope with even the simplest everyday situations. Further technical problems related to common-sense knowledge involve how this knowledge should be structured in order to support the inferential creation of new knowledge, and how the relevant information should be extracted from the knowledge base as required by the specific context (Dreyfus, 1992).

2.6 In search of meaning

The systematic treatment of the objections sketched above and of their counterarguments is a huge technical problem in itself, which goes far beyond the scope of this work. Nevertheless, it is important to observe the general nature of these conceptual attacks and how they appear to be, at least to some extent, related.

• Their target is the AI subfield of cognitive science. Nevertheless, they carry general relevance for the whole classical view in cognitive science, i.e. the classical computational theory of mind. Traditional AI creates physical models of general cognitive theories (which explicitly argue for their realization-independent nature). Therefore, the objections to AI seep down to the level of its underlying theoretical structure, despite the fact that AI is pragmatically committed to the creation of intelligent artifacts.

• A crucial common element might be traced to the arbitrariness of the relation between the ‘intelligent’ artifact and the universe of its experience. This is obvious in Searle’s Chinese room argument, as well as in the related symbol grounding problem. Yet the frame problem also refers to the lack of a structuring force in the creation of behavior-driving knowledge, in artifacts for which everything appears ungrounded, neutrally valued and meaningless. In all these examples, meaning only emerges in the eyes of an external human observer.

• They all argue on a conceptual basis. As strong as the philosophical intuitions on which they are founded might be, this kind of attack falls short of a robust explanation, as witnessed by the massive literature of reciprocal counter-objections that they triggered. Although highly inspirational, the philosophical debate does not leave us with the sensation of getting any closer to the answers that we are looking for (Chemero, 2009).

These and similar attacks led philosopher H. L. Dreyfus to label the classic AI view, following Lakatos’ definition, “a degenerating research program”, i.e. a scientific field that started with outstanding results and expectations, yet stagnated due to seemingly critical methodological limitations (Dreyfus, 1992). Indeed, these attacks had at least one major side effect: they broadened the horizon of cognitive science to issues that were initially largely disregarded but that are fundamentally important for the experimental work reported here. First, a research program addressing the role of the body as a non-trivial cognitive resource can nowadays be considered well established (Clark, 1997; Pfeifer and Bongard, 2006; Ziemke, Zlatev, and Frank, 2007; Chemero, 2009; Shapiro, 2011). Second, the importance of the environmental context as a direct support to the ongoing cognitive process was recognized. Third, advances in dynamical systems theory offered a new formal approach to the study of cognition, highly reminiscent of the pioneering work of the first cyberneticists. Finally, artificial neural networks, i.e. relatively abstract mathematical models inspired by biological nervous systems, together with dynamical systems theory, offered an alternative to classical computationalism as a novel non-symbolic computational paradigm. These new elements, which characterize a ‘second phase’ of cognitive science, will be discussed in the next chapters.


Chapter 3

Embodied Cognitive Science

The concept of embodiment extended the debate within cognitive science to the role of the body within the dynamics of cognition. Historically, this issue originated mainly with the work of philosophers, psychologists and biologists (among the most influential: Heidegger, Merleau-Ponty, Dreyfus, Vygotsky, Piaget and von Uexküll), and became more popular within the scientific community through the interdisciplinary work of Varela, Thompson and Rosch (see Varela, Thompson, and Rosch, 1991; Chrisley and Ziemke, 2002). Of course, the matter of the discussion is not unproblematic and far from settled. Shapiro observes that, to date, embodied cognition should still be considered a research program rather than a well-defined theory. Yet a pivotal one, as the same author maintains (Shapiro, 2011, p. 1):

“At stake in the debate between proponents of embodied cognition and standard cognitive science are nothing less than profound and entrenched ideas about what we are - about what it means to be a thinking thing.”

This chapter will explore the main aspects of this vast and vivid discussion by following the trace of some fundamental questions. What counts as ‘embodied’? What are the implications of embodiment? What does embodiment add to AI?

3.1 What counts as embodied?

3.1.1 Varieties of embodiment

There is at least one loose interpretation of embodiment that does not ignite discussions: the obvious observation that intelligence, in order to be deployed in the world, requires a physical substrate of some sort. This claim bears no threat to even the most classical AI positions, for existing in terms currently accepted by science requires a physical instantiation. Any computer program or virtual web agent is embodied in this sense, as it causally relies on a physical computer to be executed in the first place. Chrisley and Ziemke (2002) define this trivial connotation of embodiment as physical realization. They portray several other varieties of embodiment by drawing a hierarchy of nested subsets, defined by increasingly strict criteria. Physical embodiment requires the persistence over time of a “coherent, integral system”. Although a virtual web agent would not meet this criterion, due to its insubstantial and distributed nature, any robotic artifact would.

In Chrisley and Ziemke’s taxonomy, more interesting forms of embodiment closely refer to the properties of natural organisms. Organismoid embodiment requires sensorimotor capacities that are at least reminiscent of biological organisms. The mere ownership of a bodily frame, loaded with sensors and actuators in various configurations, is quite trivial in terms of embodied cognitive science, unless such sensorimotor richness can be coordinated in some interesting way. This is the level explicitly considered by most authors in embodied cognitive architectures. This is also the level where a non-trivial interplay between the physical instantiation of the body and its ‘control system’, the artificial analogue of the nervous system, becomes possible (e.g. see Pfeifer and Gomez, 2005). Finally, organismal embodiment denotes those systems that are also endowed with the characteristic causal properties of living systems.

3.1.2 The concept of embodiment

Having clarified what it means to be embodied, we can turn to claims about embodiment, which generally argue that the phenomenon of cognition emerges from a dynamic interweaving of the properties of the coupled body, brain and environment (Varela, Thompson, and Rosch, 1991; Clark, 1997; Chrisley and Ziemke, 2002; Pfeifer and Gomez, 2005). Thelen et al. express the extent of the implications in a straightforward way (Thelen et al., 2001):

“To say that cognition is embodied means that it arises from bodily interactions with the world. From this point of view, cognition depends on the kinds of experiences that come from having a body with particular perceptual and motor capabilities that are inseparably linked and that together form the matrix within which reasoning, memory, emotion, language, and all other aspects of mental life are meshed.”

From a classical cognitive science perspective, the body is merely the cohesive surface for the exchange of information, in the form of physical energy (mechanical, chemical and electromagnetic), between the agent and the external world. With some obvious limitations, the nervous system can spatially relocate and reorient the surface of this interaction by eliciting action in its body. Within a few microns around that boundary, the nervous system picks up sensory information, as the external world manifests itself in the sensory capabilities of the body through physicochemical interaction. Yet the border tends to remain a passive boundary between two metaphysically distinct entities. To the contrary, embodied cognition assumes the body to be an active resource for the emergence of cognitive phenomena at several different levels.

3.1.3 Examples of bodily processing

The body can be usefully seen as a pre- and post-processor of sensorimotor information (Chiel and Beer, 1997). As an example of pre-processing, Chiel and Beer point to the evolution of two resonant cavities in the thorax of the cricket Gryllus bimaculatus, selectively responsive to the frequencies of the cricket’s calling song. These cavities are connected through hollow tubes (acoustic tracheae) to the ipsi- and contralateral ears, located in the tibiae of the cricket’s front legs. An alteration of this bodily structure impairs the directional capacity of the animal in response to the calling song (Wendler and Lohe, 1993; Lohe and Kleindienst, 1994). When it comes to post-processing, Clark reports an example by control engineer Jim Nevins (Clark, 1997). Imagine a very tightly connected jigsaw puzzle, and a rigid robotic arm that must recompose it. Once the gross location of a piece is determined within the picture, fitting it requires dexterous control (whose accuracy is determined by the accuracy of the cutting process) of the position and orientation of the piece with respect to the picture. This is the solution that Clark attributes to ‘pure thought’. Now imagine using an arm endowed with some level of viscoelastic properties (just as ours is). The mechanical tension deriving from the interactions of the robotic arm with the incomplete picture (via the piece to be placed in the jigsaw) produces countless micro-adjustments that play the functional role of an intrinsically built-in feedback. In this situation the arm control can offload a large part of the computational effort onto the physical interaction, and the level of precision required in the second setup is significantly relaxed.

A further interesting example of post-processing is the biomechanics of human and animal movement. By virtue of the mechanical properties of biological materials, the level and dynamics of activation in the muscles (e.g. according to different gaits) and the different postures (i.e. the geometric configurations of the system) modify the structural quality of biomechanical systems. For example, tendons respond with high tension to sudden stretches; after a stretch they progressively relax following their intrinsic time scale (Fung, 1993). This implies that the structural properties of the coupled musculoskeletal system change quantitatively over time during motion, and qualitatively across different types of motion. They are a function of the specific dynamics, level of activation and fatigue of the muscle. At every instant, the stable points of the system are dynamically changing, and motion takes place within a transient scenario. Within this picture, as Chiel and Beer (1997) correctly observe, the physiological properties of the muscles act as a low-pass filter on the motor output from the nervous system. By contrast, mainstream humanoid robotics models the musculoskeletal system as a static actuator, constituted by rigid connecting elements, rigid joints with various degrees of freedom and, typically, electric motors. Biological biomechanical properties differ by orders of magnitude from those that characterize the materials of traditional robotics. There is nothing wrong with this level of abstraction. Yet, when it comes to studying the way humans and other biological agents control their limbs, we cannot fail to notice that the whole metaphor might simply not apply.

3.2 Embodied cognition is situated

As we have seen, the ownership of a body does not simply create a richly articulated interaction between the nervous system and the body itself. Apart from a few very special, and in the long term unviable, cases (e.g. environments specifically designed to induce sensory deprivation), it also necessarily translates into a massive physicochemical interaction with an environment. Does this interaction have any non-trivial cognitive consequence? The remainder of this section explores this issue.

3.2.1 Use of environmental physical factors for enhanced sensing

Stochastic resonance is a non-linear stochastic effect that exploits background noise to enhance sensitivity to weak signals (Wiesenfeld and Moss, 1995; Moss and Wiesenfeld, 1995). It offers an interesting example of how environmental characteristics can be used to augment sensitivity to external stimuli. Moss and Wiesenfeld described this notion through an intuitive mechanical model.

Imagine a system consisting of two adjacent wells and a marble ball free to move inside it, subject to friction and to a gravitational field. Also imagine that the only information that can be extracted from the system is the occurrence of the ball switching from one well to the other. A weak periodic signal modulates the depth of the two wells with respect to the ridge that separates them. ‘Weak’ means that the signal does not transfer to the ball enough kinetic energy to climb the potential gap up to the ridge and switch wells. Thus, under the influence of the signal, the ball keeps rolling inside one or the other well. In other words, the periodic signal is ‘under threshold’, as it lacks the energy to ‘kick’ the ball beyond the ridge of the well and, consequently, to be detectable. Finally, imagine that we can superimpose on the weak signal different levels of a random noise signal, whose intensity can be modulated and whose maximum possible level is strong enough to induce, on its own, random transitions of the ball.


Our problem is the detection of the weak periodic signal. The injection of minimal levels of random noise does not meet our goal: the energy of the ball is still too low to climb the ridge. Increasing the level of noise, though, will stochastically induce transitions. At first, it will be possible to observe a surprising regularity in the timing of the transitions, whose period matches that of the hidden signal. Increasing the level of noise further would simply produce random transitions of the ball (the injected noise is too energetic and masks the underlying signal). Therefore, stochastic resonance is a non-linear effect that can be exploited to detect under-threshold signals (Wiesenfeld and Moss, 1995; Moss and Wiesenfeld, 1995).
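The two-well model lends itself to a short numerical sketch (a minimal Langevin-style simulation invented for illustration, not from the cited papers; the potential, amplitude, frequency and noise values are assumed parameters chosen so that the periodic tilt alone is under threshold):

```python
import math
import random

def simulate_transitions(noise, steps=200_000, dt=0.01,
                         amp=0.1, freq=0.01, seed=1):
    """Overdamped motion in the double-well potential
    V(x) = -x**2/2 + x**4/4, tilted by a weak periodic signal that
    is, on its own, too weak to push the ball over the ridge.
    Gaussian noise of tunable intensity is superimposed; the only
    'observable' returned is the number of well-to-well transitions."""
    random.seed(seed)
    x, side, transitions = 1.0, 1, 0
    for i in range(steps):
        drift = x - x**3 + amp * math.sin(2 * math.pi * freq * i * dt)
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        new_side = 1 if x > 0 else -1
        if new_side != side:   # the ball switched wells
            transitions += 1
            side = new_side
    return transitions

# No noise: the under-threshold signal alone never causes a switch.
print(simulate_transitions(noise=0.0))
# Moderate noise: switches appear, loosely paced by the hidden signal.
print(simulate_transitions(noise=0.45))
```

Sweeping the `noise` argument reproduces the qualitative picture described above: no transitions at low noise, signal-paced transitions near an intermediate optimum, and purely random switching when the noise overwhelms the signal.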

How does this relate to cognition? Moss and Wiesenfeld studied the mechanoreceptor cells located on the tail fan of the ordinary crayfish (Procambarus clarkii). These sensory cells end in thin hairs, 25-100 microns long. When mechanically stimulated, they produce a 100-millivolt spike with a duration of 200 milliseconds. Anatomical considerations showed that this impulse propagates to the tail ganglion, where the incoming signals from all the mechanoreceptors are integrated and the escape reflex is triggered. Moss and Wiesenfeld mounted an excised piece of crayfish tail, containing several mechanoreceptors, their nerve cords and the tail ganglion, on an electromechanical device immersed in a saline solution. An intracellular electrode monitored the activation of one specific sensory neuron, whilst nanometric movements of the electromechanical device produced a periodic, under-threshold stimulation of the cells’ hairs. In this case, under threshold means that the monitored mechanoreceptor would not generate spikes. This weak periodic signal was meant to model an analogously weak signal coming from a nearby swimming predator. The superposition of different levels of environmental noise replicated with satisfactory accuracy the theoretical results derived from the mechanical analogue (Wiesenfeld and Moss, 1995; Moss and Wiesenfeld, 1995): the injection of a specific level of noise enhanced the sensitivity of the mechanoreceptor.

This shows how external random noise (ubiquitous in nature) can be effectively and efficiently exploited by biological organisms in order to enhance the sensory capacity of these mechanoreceptors. Still, an important question remains open: should environmental noise be considered a constitutive part of the crayfish sensory system, or as merely causally related to it? This will be further discussed in Section 3.3.

3.2.2 Action under time pressure and intrinsic dynamics

The fact that the ‘control’ (the nervous system) has to cope with a body that is also embedded in an environment opens further important issues. Meeting the strict constraints of time pressure is always crucial to real-time action (Chrisley and Ziemke, 2002; Wilson, 2002). The opening door of the elevator calls for immediate action: failing to step in shortly after the door has opened results in a frustrating experience, negative enough to dissolve whatever thought might have been keeping us waiting.

In classical disembodied AI systems, time does not constitute an implicit constraint. Classical disembodied computational paradigms are only incidentally related to time, i.e. there is no explicit timing in the Turing machine that describes computers at their abstract level (Bickhard, 2009). Timing in computers is nothing but a pragmatic detail, fully determined once the speed of the internal clock is set as a function of the available technology. Even when action in real environments is required, the classical AI agent, disengaged from the real world through its input-output interface whilst its cognitive processes algorithmically operate on disembodied representations, can only explicitly depend on time (i.e. time must be logged as an explicit parameter).

On the other hand, an embodied agent is intrinsically provided with time. The specific kind of embodiment and the environmental conditions dictate the limits of the sustainable dynamics. Gravitational force, mass distribution and geometric constraints (and not merely neural activity) intrinsically determine the possible dynamics of interaction between, say, ants rather than elephants and their environment. The swing of a pendulum depends on specific internal and external physical conditions (length of the pendulum, mass, gravitational force, air and mechanical friction) as much as the swing of a limb does. This is somewhat analogous to piloting a transatlantic ship versus a small offshore boat in bad sea conditions: the relative inertia due to the different mass makes the whole control experience quite different in the two cases. Synchronizing internal and external timing is a continuous implicit concern for an embodied system. The idea is that of a tight coupling of body, brain and environment, each mutually influencing and being influenced by the others and establishing, for all cognitive purposes, an indissoluble triad. This suggests that embodied and embedded systems call for a radically different language for the study of cognition, relying on the mathematical formulation of non-linear dynamical systems theory (Thelen and Smith, 1994; Kelso, 1995; Beer, 1997, 2000; Chemero, 2009).
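The intrinsic-timescale point can be made concrete with the pendulum just mentioned: its small-angle period depends only on length and gravity, not on any controller. The limb lengths below are invented round numbers, used purely for illustration:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Hypothetical scales, chosen only to contrast body sizes:
print(round(pendulum_period(0.005), 3))  # millimetre-range limb (ant-like): a fraction of a second
print(round(pendulum_period(2.0), 3))    # metre-range limb (elephant-like): a few seconds
```

Whatever the nervous system does, the ant-scale limb oscillates roughly twenty times faster than the elephant-scale one: the body itself sets the clock.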

3.2.3 Use of environmental physical factors for cheap computation

At the same time, the continuous interaction of brain, body and environment determines a rich mechanism of multiple feedback. This interaction creates constraints, but it also builds opportunities that the cognitive agent can exploit (Chiel and Beer, 1997; Wilson, 2002; Pfeifer and Gomez, 2005; Pfeifer and Bongard, 2006; Clark, 2008). In a complex interaction, and under tight time pressure, deploying action that is both computationally inexpensive and temporally efficient is of crucial importance.

A paradigmatic example of parsimonious action generation is given by the passive walker (McGeer, 1990). Originally, passive walkers were created as purely mechanical devices, ‘passive’ in the sense that they carried no motor actuators or traditional control system. Passive walkers now have a long tradition and have been presented in several varieties (see the review in Collins et al., 2005). Despite the sometimes surprisingly natural feel of their walking gait, they rely solely on external forces (gravity and friction) and mass distribution to generate movement. Passive walkers are naturally bound to their peculiar “niche”: a narrow microworld made of smooth slopes with appropriate negative gradients, and highly motivated researchers who carry their artifacts back to the top of the slope for the next ride. State-of-the-art research tries to remedy this limitation by adding relatively simple actuation and control to the original design. Interestingly, within the temporal constraints posed by walking dynamics, the combination of computational economy and energy minimization leads to the emergence of natural-looking walking gaits (Collins et al., 2005; Srinivasan and Ruina, 2006), particularly when compared to the previously mentioned AI humanoid robots.

Based on this and related evidence, Pfeifer and colleagues developed a list of guidelines for the design of robotic agents that exploit what they name morphological computation (Pfeifer, Iida, and Bongard, 2005; Pfeifer and Bongard, 2006). There is no need to aggravate the computational burden of a robotic agent’s control system when it is possible to rely on the properties of the materials, or of the particular assemblage, to drastically reduce it. The cited literature shows examples of how complex behavior can be achieved via minimalist control systems once we exploit properties of the physical interaction between the components of the robot’s body and its environment. This approach seems particularly powerful and promising once we allow the parallel co-adaptation of the agent’s morphology and of its control system, either in simulation (Bongard, 2011) or via rapid prototyping with 3D printers (Lipson, 2005).

3.2.4 Use of environmental physical resources for cognitive offloading

Interaction with the environment offers opportunities to reduce cognitive effort by offloading at least part of the cognitive work (Clark, 1999; Wilson, 2002; Clark, 2008). Paradigmatic examples of cognitive offloading are: solving hard arithmetic operations using pen and paper; using Venn diagrams to clarify logical relations between classes of objects; using calendars and notes; leaving a watering can in the center of our desk to remind us that tomorrow we really have to water that drying plant on the window. By executing specific actions we can alter the environment to alleviate our cognitive load and facilitate the solution of the problem at hand (Wilson, 2002).

Imagine you have been commissioned by some Russian billionaire to copy Henri Rousseau’s ‘The Dream’. Imagine painting as you stand in front of the original at the Museum of Modern Art in New York, rather than having a peek and then, trying to retain as much as possible in memory, frantically running home to paint the next bit. In the former case you are allowed to recruit externally available resources that massively facilitate your task.

3.2.5 Use of environmental physical factors for information structuring

The embodiment and embeddedness of a cognitive system also offer the possibility to structure multimodal information in cognitively useful ways. Robotic experiments provide quite powerful evidence for this case. Clark and Thornton (1997) introduced the notion of hard Type-2 problems, a class of problems where “regularities enjoy only an attenuated existence in a body of training data”. The problem is what kind of resources a learning machine requires in order to effectively adapt its behavior based on such marginal and scarce regularities. Nolfi (1997a) adapted an artificial neural network for robot control using evolutionary algorithms. The robot was asked to avoid walls and remain close to small cylindrical objects, a Type-2 problem from the perspective of a simple robot relying on coarse infrared sensory information. By coordinating its sensorimotor activity, though, the adapted robot proved capable of selecting a sequence of data that facilitated wall/cylinder disambiguation (Nolfi, 1997a; Nolfi and Floreano, 2002). Analogously, Metta and Fitzpatrick showed how visual techniques for object segmentation in complex environments are facilitated by their integration with basic object manipulation (e.g. poking) in a process of ‘active segmentation’ (Metta and Fitzpatrick, 2003). Suzuki and Floreano exploited a similar kind of integration, investigating how ‘active vision’, the co-development of perception and behavior, might facilitate some traditional tasks based on computer vision (Suzuki and Floreano, 2008).

This capacity to coordinate action and perception is central to the theory of perceptual experience formulated by O’Regan and Noë. These authors interpret perception as a matter of mastering the structure of sensorimotor contingencies: the capacity to anticipate the regularities between sensory and motor information is the key to understanding perception (O’Regan and Noë, 2001). The particular form that the body of an organism takes seems capable of directing and enhancing this integration. For example, the specific anatomical configuration of human upper limbs is such that the objects we grasp are quite naturally brought within the visual field, while we also receive haptic and proprioceptive information (Pfeifer, Iida, and Bongard, 2005). This example can be easily extended, for the very same gesture potentially promotes closer scrutiny across all other sensory modalities (smell, taste, sound). Thus, the typical pattern of movement that is possible for our upper limbs allows a global multimodal sensorimotor experience.


3.2.6 Social resources

We might wonder about the consequences of being embedded in a rich social context for organisms that display some level of social organization. In parallel with genetically determined traits, the social environment also molds the organism’s behavior, from social insects to humans. The organism’s form develops under the action of evolutionarily as well as socially determined forces. This interaction, though, does not necessarily have a unidirectional nature (i.e. from society to individual). For example, infant learning can occur as a transfer of cultural skills from adults to children, yet through a complex articulation of mutual feedback during imitation. This feedback is crucial for the effectiveness of the interaction (e.g. Smith and Gasser, 2005; Froese and Di Paolo, 2008).

3.2.7 Experimental implications

Finally, we have seen how the ambitious project of classical AI, the actualization of a general-purpose and implementation-independent artificial intelligence that could be deployed on a huge spectrum of problems and environments, actually splintered into a number of very specific, niche-oriented applications. An embodied reinterpretation of this ambition is probably not even possible (McFarland, 2008). In fact, taking the embodiment and embeddedness of a cognitive agent seriously implies focusing on its specific natural and developmental history within its niche. Cognition co-evolves and co-develops in close interaction with a body (Chiel and Beer, 1997), with the function of supporting the activities of the body (Chrisley and Ziemke, 2002; Wilson, 2002) and under the crucial influence of the environment. Accordingly, cognitive robotics can synthetically investigate specific instances of cognition, via a collection of dedicated artifacts designed for specific environmental niches. At best, such dedicated work may yield general indications. This evokes the words of W. Ross Ashby (see Section 5.1.2), eminent member of the pioneering cybernetics community, about the study of animal cognition (Ashby, 1960, p.42): “In practice one does not experiment on animals in general, one experiments on one of a particular species.”

3.3 What does embodiment bring to cognitive science?

Despite its intuitive meaning, the actual implications of embodiment are manifold and far from unproblematic. In his critical analysis of embodied cognition, Shapiro (2011) defines three main hypotheses and research directions. The first, the constitution hypothesis, explores the possibility that different kinds of embodiment and certain environmental characteristics might play a foundational, constitutive role in the deployment of cognitive capacities. The second, the replacement hypothesis, addresses the use of models that are alternatives to classical representational accounts. Is the rejection of classical representations possible at all? And what would that imply for cognitive science? Finally, the conceptualization hypothesis focuses on whether the specific embodiment determines the kind of concepts that can be developed by a cognitive agent. The coarse partitioning drawn by Shapiro is of practical use, and the remainder of this section will adopt and develop his tripartition.

3.3.1 The constitution hypothesis

The constitution hypothesis regards the claim that “The body or world plays a constitutive rather than merely causal role in cognitive processing” (Shapiro, 2011). The body of work that falls under the constitution hypothesis entails the extreme perspective of a body and environment that directly contribute to the cognitive process. Shapiro finds this specific branch of embodied cognition research particularly compelling and successful, for it opened a brand new research direction with respect to classical cognitive science. In fact, the intuition behind this view used to be explicitly discarded more often than merely overlooked. Therefore, the results achieved so far tend to cast some light on a still rather unexplored field.

Indeed, the most urgent conceptual problem is to outline the difference between having a causal effect on a phenomenon and being constitutive of it. Shapiro exemplifies a merely causal influence with the murder of Archduke Franz Ferdinand of Austria and his wife, killed in Sarajevo on 28 June 1914 by Gavrilo Princip. The interpretation that this assassination originated a chain of episodes that eventually led to World War I has enjoyed broad historical consensus ever since. Yet for our goal the relevant question is simply: is this episode causally related to World War I, or is it a constitutive part of it? If we identify World War I with a massive and relatively stable deployment of human and non-human resources in a bloody sequence of trench combats then, pace Archduke Franz, his sudden death should be considered a mere causal antecedent to the war itself. On the other hand, we have strong reasons in favor of identifying harsh battles and other military operations as constituents of World War I.

An even more subtle distinction is possible. In February 1665, the mathematician, astronomer and physicist Christiaan Huygens (1629-1695) mailed his father his latest serendipitous and surprising observation. Two pendulum clocks hanging in parallel orientation from the same beam of his ship cabin, or otherwise installed so as to allow a continuous mechanical path through relatively rigid materials, would synchronize, i.e. the swings of their pendulums would infallibly become consonant, either in phase or in counter-phase. Incidentally, this is due to their (extremely weak) mechanical coupling, which creates a negative feedback that forces the phase discrepancy to converge to one of the two possible values (Strogatz, 2004).
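Huygens’ observation admits a toy reduction. The sketch below is a deliberate caricature rather than the full mechanical model: it tracks only the phase difference phi between the two clocks and assumes (as an illustrative choice, with an arbitrary coupling strength K) that the negative feedback can be summarized as dphi/dt = -K*sin(2*phi), whose stable fixed points are phi = 0 (in phase) and phi = pi (counter-phase):

```python
import math

def settle_phase(phi0, K=0.05, dt=0.01, steps=20_000):
    """Integrate dphi/dt = -K*sin(2*phi) by forward Euler.

    The flow has stable fixed points at phi = 0 and phi = pi, and an
    unstable one at phi = pi/2: any initial phase discrepancy converges
    to in-phase or counter-phase lock, mimicking Huygens' clocks.
    """
    phi = phi0
    for _ in range(steps):
        phi -= K * math.sin(2 * phi) * dt
    return phi

print(round(settle_phase(0.3), 4))  # small initial discrepancy: locks near 0 (in phase)
print(round(settle_phase(2.0), 4))  # large initial discrepancy: locks near pi (counter-phase)
```

Which of the two attractors is reached depends only on which side of the unstable point pi/2 the initial discrepancy lies.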

The question that concerns us at this point is: should we consider each of these coupled clocks as a constituent of the other, or merely as causally related (Shapiro, 2011)? Indeed, the first interpretation seems less natural than the second. A second example of a coupled system, though, might invert this intuition. Consider a turbocharger coupled to an internal combustion engine (Clark, 2008; Shapiro, 2011). In this assemblage, a turbine derives its power from the exhaust fumes of the engine. This energy drives a compressor that boosts the pressure, temperature and density of the air injected into the engine (forced induction), thus increasing the efficiency and power of the combustion. The extra power increases the energy of the exhaust fumes, which feeds more energy to the turbine and, from there, back to the compressor. Positive feedback amplifies this sequence up to the point where friction naturally limits the escalation. The example generalizes to the class of phenomena characterized by physical resonance in different forms of energy (e.g. the critical increase in amplitude observed at the natural frequency of a mechanical or electrical oscillator).

According to Clark (2008), this sort of resonant coupling seems also to characterize the puzzling relation between speech and gesturing. The cognitive process dynamically produces outputs (speech, gestures, facial expressions and body postures) that in turn become inputs that support and “drive the system along”. Any attempt to isolate components of this process into a clear sequence of causal influences seems at odds with a sensible analysis. Gestures and other forms of bodily communication during speech occur even when the subject is not aware of being observed, i.e. when they have no apparent communicative role. The use of bodily communication seems to resonate with speech (Clark, 2008; Shapiro, 2011). Similar considerations apply quite naturally to the examples of the auditory system of crickets (see Section 3.1) and of the tail mechanoreceptors of crayfish (see Section 3.2). In the former case, the body (through the resonant cavity in the cricket’s thorax) enhances the quality of the acoustic information. In the latter, the characteristic environmental background noise enhances the sensitivity of the escape-reflex system. An incremental alteration of these bodily or environmental conditions will meet a critical point where the whole sensory experience is drastically altered and unfit for the organism’s biological needs.

It might be provocatively asked whether water itself should be similarly considered a constitutive part of the cognitive system of the crayfish. We all know that choosing a reference system for the mathematical description of a physical system is an arbitrary exercise. Nevertheless such a choice, although theoretically indifferent, has a huge practical impact. Analogously, drawing the boundary of a system is a matter of arbitrary choice. Whilst water could more naturally be seen as the niche for the crayfish, the issues highlighted by stochastic resonance demonstrate the impossibility of achieving a satisfactory characterization of the sensitivity of crayfish mechanoreceptors without taking into account the environmental noise. The latter might still be legitimately considered a mere external input to the system that we want to describe, yet doing so deprives the system under scrutiny of a foundational functional component.1

Such claims are and remain controversial, of course, and ignite debate even within the embodied cognition community. Rupert (2004) favors the explanatory power of the hypothesis of embedded cognition (i.e. “cognitive processes depend very heavily, in hitherto unexpected ways, on organismically external props and devices and on the structure of the external environment in which cognition takes place”, Rupert, 2004, p.393) over the less parsimonious hypothesis of extended cognition (“human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise - as wholes do their proper parts - elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject”, Rupert, 2004, p.389).

Similarly Wilson (2002), whilst agreeing that “the forces that drive cognitive activity do not reside solely inside the head of the individual, but instead are distributed across the individual and the situation as they interact”, takes a stance against a stronger, constitutive role of body and environment, in what appears to her a phenomenon merely causally distributed across different coupled systems. In Wilson’s own words (Wilson, 2002, p.630):

“The fact that causal control is distributed across the situation is not sufficient justification for the claim that we must study a distributed system. Science is not ultimately about explaining the causality of any particular event. Instead, it is about understanding fundamental principles of organization and function. [...] Distributed causality, then, is not sufficient to drive an argument for distributed cognition.”

Wilson grounds her argument in some general concepts of systems theory (e.g. Brogan, 1990). She defines a system as an aggregate of distinct elements, spatially, temporally or otherwise related, whose properties are affected by their participation in the system itself. In particular, she focuses her attention on three hierarchically related properties:

1. open systems, as opposed to closed systems, allow interactions through which they affect, or are affected by, the external environment (e.g. a flow of matter, energy or information);

2. the organization defines how the elements of the system are causally related;

3. this organization can either be facultative, when only temporarily sustained, or obligate, if it persists for a duration of the same order of magnitude as the lifetime of the system’s components.

1 A hypothesis is advanced here: that negative versus positive feedback might mark the difference between merely causal coupling and constitutive coupling. This conjecture will not be developed any further in this thesis.


It is worth repeating that the boundary of a system is always a matter of the observer’s arbitrary choice, often determined by the specific goal. Wilson argues that most extended cognitive systems are in fact examples of facultative open systems. Accordingly, she insists that a description in terms of distributed systems is more appropriate than making a case for extended cognition.

Upon closer scrutiny, though, Wilson’s point does not actually apply to our examples of the cognitively constitutive nature of some forms of embodiment and environmental interaction. Both the auditory system of crickets and the mechanoreceptors of crayfish rely on forms of ‘resonant coupling’ that appear far from facultative: the cricket develops its acoustic trachea as part of its own body, and a certain level of environmental noise is reliably present in the crayfish’s natural niche.

3.3.2 The replacement hypothesis

Chapter 2 has highlighted the central role that explicit representations of external entities, and their algorithmic manipulation, played and still play in the majority of cognitive architectures. In the mid 1980s, though, this emphasis started to be criticized. In Shapiro’s taxonomy, the replacement hypothesis investigates the possibility of rejecting the representation-based conception of cognitive architectures (Shapiro, 2011). Rodney Brooks was among the first influential voices within the cognitive robotics community to question the need for the proliferation of internal representations. His argument was based on practical considerations: building explicit internal representations of complex environments, and algorithmically operating on them, creates a ‘representational bottleneck’ that jeopardizes the expression of behavior compatible with the real-time dynamics imposed by the external world (Brooks, 1991, 1990).

Brooks developed several generations of agents based on the subsumption architecture. In this approach, the control system of the cognitive agent is structured in layers. Each layer is designed as a finite state machine responsible for a specific behavior. Each finite state machine sends and receives messages that asynchronously coordinate the whole set of possible behaviors by eliciting the appropriate state transitions. Sensing feeds directly into action. The machine operates with no central control, instead following pre-designed rules that coordinate which layers and states are currently active. Consistent and robust behavior can be generated by this method (for a good collection of Brooks’ early work see Brooks, 1999). This was revolutionary indeed, as classical AI was by then rather stereotypically fixed on cognitive architectures based on the sequence of sensing (capturing a sensory flow from the environment), planning (deriving decisions and motor plans) and acting (driving the planned action through the robot’s actuators).
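The layering idea can be caricatured in a few lines. The sketch below is a drastic simplification of the subsumption architecture (a fixed priority ordering stands in for Brooks’ asynchronous, message-passing finite state machines, and the behavior names and sensor keys are invented), but it conveys how sensing feeds directly into action with no central planner or world model:

```python
def avoid_obstacles(sensors):
    """Higher-priority layer: subsumes the layers below when an obstacle is near."""
    if sensors["front_distance"] < 0.2:
        return "turn_left"
    return None                          # defer to lower layers

def wander(sensors):
    """Lowest-priority layer: default behavior, always produces a command."""
    return "forward"

LAYERS = [avoid_obstacles, wander]       # ordered from highest to lowest priority

def control_step(sensors):
    """One sense-act cycle: the first layer that answers wins."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(control_step({"front_distance": 1.0}))  # -> forward
print(control_step({"front_distance": 0.1}))  # -> turn_left
```

Note that no layer consults a model of the world: each maps current sensor readings straight to a motor command, and robustness comes from the layering itself.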

In the last two decades, a number of experimentalists and theoreticians have withdrawn from the classical representation-centric view of cognition, in some cases explicitly denying any role for representations in cognition (e.g. Thelen and Smith, 1994; Kelso, 1995; Beer, 1995). Chemero labels such an attitude the metaphysical claim against representations. Nevertheless, it appears generally problematic, if not outright impossible, to support the metaphysical claim, which rejects the possibility of a representational account of any event whatsoever (Chemero, 2009; Shapiro, 2011). On the other hand, Chemero argues, we might have good reasons to defend an epistemological claim against representations. In fact, the explanatory and predictive power that we can achieve via representation-based models of cognitive processes may be unsatisfactory compared to the alternative computational paradigm offered by dynamical systems theory (see Section 5.2). In other words, Chemero argues that dynamical systems theory is the natural language and methodological tool of an embodied cognitive science, and yields our best models of cognitive phenomena. On this ground, several advocates of the dynamical systems perspective argue for a more or less radical rejection of the concept of representation (Thelen and Smith, 1994; Kelso, 1995; Chemero, 2009).

Less radical positions are also common among theoretical and experimental researchers (e.g. see Clark, 1997, 1999; Steels, 2003). In particular, Clark and Toribio observe the existence of a special class of problems: those where the object of interest can be temporarily decoupled from the cognitive system (e.g. when I close my eyes, the apple on my desk disappears) or is simply absent (when I engage in abstraction about what is not spatially and temporally present, or perhaps not even existent). How could we cope with such situations without the use of some form of representation? For this reason Clark and Toribio introduced the term representation-hungry problems (Clark and Toribio, 1994). Chemero agrees that representation-hungry problems constitute the relevant test-bed for different research approaches to cognition. Nevertheless, he points to relatively recent work (e.g. Rooij, Bongers, and Haselager, 2002) that seems to accurately model representation-hungry problems in dynamical systems terms (see also Section 5.2). Rooij and colleagues handed the participants in their experiment rods of systematically increasing or decreasing length. The subjects were then asked to imagine whether or not they could reach a distant object by using the rod. The results were in agreement with the extended HKB dynamic model (Tuller et al., 1994).
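For reference, the basic (unextended) HKB model (Haken, Kelso, and Bunz, 1985) describes the relative phase $\phi$ between two rhythmically coordinated components as gradient descent on a potential, with coupling parameters $a$ and $b$:

```latex
V(\phi) = -a\cos\phi - b\cos 2\phi,
\qquad
\dot{\phi} = -\frac{dV}{d\phi} = -a\sin\phi - 2b\sin 2\phi .
```

Its stable states at $\phi = 0$ and $\phi = \pm\pi$ capture in-phase and anti-phase coordination; as the ratio $b/a$ decreases, the anti-phase state loses stability. The extension used by Tuller et al. (1994) modifies this kind of potential landscape; its details are beyond this sketch.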

In any case, embodied cognition and the dynamical systems approach seem to inspire either a rejection of representations or a far-reaching change in their nature. Dynamical systems-inspired models tend to be fully explanatory of the phenomenon without invoking explicit object representations or other knowledge structures. Classical explicit representation, in its typical symbolic form, tends to be replaced by quite different entities. This is the case, for example, for emulators (Clark and Grush, 1999; Grush, 2004), or for the emergence of attractors or dynamic fields in a dynamical space (Skarda and Freeman, 1987; Thelen et al., 2001). In general terms, Clark portrays this as a shift from objective, action-independent representations (i.e. sentence-like representations implemented as symbol structures) to local, action-dependent ones (Clark, 1997). The latter kind of representation can be interpreted as more primitive than the former: it can directly generate effective behavior without needing to be embedded in a representational structure detailing the current state of the world and the cognitive system’s current goal (Chemero, 2009).

3.3.3 The conceptualization hypothesis

According to Shapiro, the research path that historically gave rise to the birth of embodied cognition is also the one that has achieved the least concrete results (Shapiro, 2011). The conceptualization hypothesis addresses the claim that the possession of a specific kind of embodiment determines the very type and nature of the concepts that the cognitive agent can autonomously acquire.

In their 1980 paper, Lakoff and Johnson ground the understanding of most everyday language in the use of a basic set of metaphors. These conceptual metaphors are often founded on embodied properties (physical and orientational metaphors) and cannot be understood without a direct experience of bodily spatial and temporal dynamics. For example, our symmetry with respect to the sagittal plane determines the arbitrary difference between left and right. Our asymmetry with respect to the frontal and transverse planes draws our attention, respectively, to what is in front of or behind us, and to what is above or below us. Quite naturally, our future lies in front of us (matching our natural direction of physical movement), whilst the past is behind our back. Thus, Lakoff and Johnson argue that the peculiar build of our body informs the basic concepts on which we erect our linguistic skills. Even highly abstract forms of cognition, such as mental imagery, reasoning and problem solving, might be deeply rooted in the body (Wilson, 2002).

Much of the innovative work by Humberto R. Maturana and Francisco J. Varela (see 5.3.4) is inspired by the observation that we cannot understand perception directly on the basis of the raw physical information offered by the external world (Maturana and Varela, 1980, 1992; Varela, Thompson, and Rosch, 1991). Our cognitive system is responsible for deleting the effect of the blind spot due to the retinal optic disk (the receptorless area of the retina where the optic nerve and blood vessels emerge), for masking eye saccades, and for creating a stable color experience when physical measurements show that no such stable experience can be physically sensed. Maturana and Varela conceived the biological mind as an informationally closed (yet thermodynamically open) system: it can exchange matter and energy with the external world, but it has to generate its own informational content, which cannot be passively foraged from its environment. They contributed largely to the philosophical position known as constructivism (Stewart, 1995). For example, Varela et al. argue that the color experience emerges following a history of meaningful coupling with the environment:

“[...] Color categories are not to be found in some pregiven world that is independent of our perceptual and cognitive capacities. The categories red, green, yellow, blue, purple, orange [...] are experiential, consensual, and embodied: they depend upon our biological and cultural history of structural coupling.” (Varela, Thompson, and Rosch, 1991, p.171)

We might embrace an objectivist stance and assume colors to be a property of the world alone. At the other extreme we might take a subjectivist view, thus denying the role of the world altogether. Alternatively, Varela et al. contend that the coupling between world and perceiver implies that world and subject mutually specify each other (Varela, Thompson, and Rosch, 1991).

3.4 A new perspective

Of course, none of the above should be interpreted in absolute terms (e.g. see Wilson, 2002). Loosely situated cognition might exist. For example, the engagement with imagined unreal, and sometimes unrealistic, events takes place 'offline'. Similarly, cognition under limited time pressure might exist. Sometimes we can afford to disengage from the pressing dynamics of events and take our time to find a better solution. By their very existence, mathematicians stubbornly keep reminding us that their often overlooked efforts are sound examples of symbolic reasoning. By offline symbolic reasoning we can solve quite outstanding problems, which not only proves the existence of this cognitive strategy but also gives good reasons for it.

To what degree does cognition follow the routes of embodiment? In other words, what is the extent of the phenomenon? Does it only involve the lowest, sensorimotor elements of cognition, or does it apply to every cognitive level? If not all, how much of higher-level cognition depends on embodied and embedded paths? To date, an answer to these questions is not possible. Brooks (1990) denied the actual need for representations up to the most abstract cognitive performance. At the opposite extreme, Kirsh (1991) argued that even rather simple sensorimotor activity benefits from the use of explicit internal representations. The highly technical body of specific ongoing research, unfortunately, goes beyond the scope of this work.

Even defining embodied cognition in its 'purest' form is controversial. Chemero (2009) considers radical embodied cognitive science (i.e. embodied cognitive science minus representational accounts) a natural descendant of Gibson's ecological psychology, and more mainstream embodied cognitive science a watering down of the former, due to the effort of maintaining compatibility with mainstream classical computationalism. However, supporters of the competing view (radical embodied cognitive science as a radicalization of embodied cognitive science) might also find good arguments. At the same time, Clark (1999) argues that dynamical systems and computational approaches are complementary, whilst others prefer more extreme stances.

What is certain is that embodiment has introduced a fundamentally new perspective with respect to the classical approach to cognitive science. With an intuitive metaphor, Chiel and Beer (1997) draw a distinction between a classical orchestra, where all the members have to follow, in a quite strict sense, the indications of a central controller (the conductor), and a jazz ensemble, where, in the absence of a clearly defined hierarchy, the different musicians self-organize their semi-improvised efforts in order to produce good music. Making sense of the metaphor, Chiel and Beer (1997) claim that rather than wondering how the nervous system affects adaptive behavior, we should ask how the different parts contribute to adaptive behavior. The holistic sense of this interaction is nicely portrayed by Shapiro (2011, p.61):

“Cognition is embodied insofar as it emerges not from an intricately unfolding cognitive program, but from a dynamic dance in which body, perception, and world guide each other's steps.”

The next chapter will explore a research track, currently regaining attention, that deeply roots principles of embodiment in the essential mechanisms of life.


Chapter 4

Inside the body

4.1 Internal Robotics

An article by Parisi (2004) questioned the traditional disregard, in cognitive science and cognitive robotics, for the rich set of non-neural processes taking place inside the body. His argument has a pivotal role in the conception of the experimental work described in this thesis. Parisi (2004) first highlighted the important role that cognitive robotics has in the study of the behavior of organisms, for “it makes entirely clear the role of an organism's body in determining the organism's behavior” (Parisi, 2004, p.325). Within this role, the main methodological concern of the author was the lack of attention that cognitive robotics models pay to the rich set of (non-neural) bodily processes taking place inside the body of an organism. Parisi therefore encouraged the integration of the current focus on the interaction between robots and their external environment (external robotics) with a broader perspective encompassing the interaction of the robot's control system with a representation (at some level of abstraction) of what happens inside an organism's body (internal robotics).

In Parisi's formulation of the problem, the nervous system interacts with “two worlds”, an external and an internal environment. In his paper, Parisi emphasizes the role of mediator between the external and the non-neural internal world played by the nervous system. Nevertheless, it should be noted that these two domains are clearly interconnected, i.e. external events can directly elicit internal effects, and vice versa, without any mediation of the nervous system (e.g. imagine thermodynamic interactions between the two). Of course, Parisi's stance opens a problematic dichotomy between 'internal' and 'external', for a distinction between the two is largely a matter of adopting a particular point of view (Varela, 1997; Moreno and Barandiaran, 2004; Barandiaran and Moreno, 2008). Whereas Moreno and Barandiaran (2004) developed a naturalized account of this problem from an autopoietic perspective (see Section 5.3.4), Parisi (2004) distinguished the interactions of the nervous system with the internal and the external environment through a phenomenological characterization (see Fig. 4.1).

                        external                       internal
    physicochemical     mainly physical                mainly chemical
                        neurotransmission:             neuromodulation:
                        fast, qualitative, specific    slow, quantitative, diffuse
    cognitive           evolution of the body          coevol. of body & nerv. syst.
                        high-frequency signals         low-frequency signals
                        public                         private
                        cognitive                      affective
                        voluntary                      involuntary

Figure 4.1: Parisi's phenomenological characterization of the interactions between the nervous system of an organism and its external/internal environment (Parisi, 2004).

The relation between the external environment and the nervous system tends to take a predominantly physical nature (Parisi, 2004). The organismic perception of the external world relies on mechanical (e.g. haptic and auditory perception), electromagnetic (visual perception) and chemical (olfactory and gustatory perception) forms of interaction. Similarly, the organism can use all of these modalities to influence its external environment. Neurotransmission is the typical form of information transfer in this domain: it is specific, fast and qualitative (Parisi, 2004). On the other hand, the interaction within the internal milieu largely relies on chemical forms of mediation. This is the domain of neuromodulation, whose nature is diffuse, slow and quantitative (Parisi, 2004).

From a more cognitive point of view, Parisi (2004) also listed significant differences. Whilst the environment tends to appear highly refractory to change for most organisms, the nervous system coevolves and adapts together with the body, and perceives the internal milieu as significantly more stable than the common sensorimotor dynamics of the external world. Furthermore, the perception of the internal mechanisms of the body determines a largely involuntary, private experience, and the foundation for affect. It should be noted that the different speeds of neuromodulation and neurotransmission, and the different stability properties of the internal and external environments, translate into a collection of different time scales that characterize the inputs of the nervous system.
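This two-timescale picture can be made concrete with a minimal sketch, not taken from Parisi (2004): a controller coupled to fast, specific exteroceptive readings and to a slow, diffuse internal energy variable that modulates the motor response like a crude neuromodulator. The class name, parameters and constants are all hypothetical, chosen only for illustration.

```python
import random

class TwoWorldsController:
    """Toy sketch of Parisi's 'two worlds': the controller is coupled both
    to fast external sensor readings and to a slowly changing internal
    energy variable. All parameters are illustrative, not from Parisi (2004)."""

    def __init__(self, energy=1.0, metabolic_cost=0.01):
        self.energy = energy              # slow internal ('internal milieu') state
        self.metabolic_cost = metabolic_cost

    def step(self, external_input):
        # Fast pathway: react to the current exteroceptive signal.
        motor_drive = external_input
        # Slow pathway: the internal energy level diffusely modulates the
        # gain of the motor response (a crude stand-in for neuromodulation).
        modulated = motor_drive * self.energy
        # The internal state changes on a much slower time scale.
        self.energy = max(0.0, self.energy - self.metabolic_cost)
        return modulated

controller = TwoWorldsController()
outputs = [controller.step(random.uniform(0.0, 1.0)) for _ in range(50)]
```

After 50 steps the internal variable has drifted from 1.0 to 0.5, so identical external inputs now yield weaker responses: the same stimulus is handled differently depending on the slowly evolving internal context, which is the point of the two-worlds distinction.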

Parisi's proposal, a challenge to cognitive robotics, informed an important aspect of the work reported in this thesis. It offered conceptual grounds for the transfer into cognitive robotics of a long scientific tradition in the study of biological emotions, namely the somatic theory of emotions - a tradition that aimed for the integration of cognition and non-neural bodily mechanisms.

4.2 Emotion

Pioneering American psychologist and philosopher William James (1842-1910), in parallel with and independently of Danish physiologist Carl Lange (1834-1900), proposed an account of emotional dynamics that overtly reverses the most established theories based on introspection and common sense (LeDoux, 1996; Walter, 1963). In his monumental 'Principles of Psychology', James (1890, p.449, original emphasis) specified his position:

“Our natural way of thinking about these coarser emotions is that the mental perception of some fact excites the mental affection called the emotion, and that this latter state of mind gives rise to the bodily expression. My theory, on the contrary, is that the bodily changes follow directly the perception of the exciting fact, and that our feeling of the same changes as they occur is the emotion.”

In James' own famous example, whilst common sense suggests that “we meet a bear, are frightened and run”, his theory questions the correctness of this sequence (James, 1890; LeDoux, 1996; Prinz, 2004). The mental state is not prodromal to the bodily engagement with the situation, but rather the opposite: the sight of the bear triggers fleeing as a specific bodily reaction. During the act of running, an escalation of physiological events takes place in the body (LeDoux, 1996, p.44): “blood pressure rises, heart rate increases, pupils dilate, palms sweat, muscles contract in certain ways.” The mental connotation of an emotion, its feeling, is “slave to its physiology” (LeDoux, 1996, p.44), i.e. the particular nuance of each feeling is determined by its specific physiological antecedents.

It might be the case that James' theory emerged from a general intuition (“Without the bodily states following on the perception, the latter would be purely cognitive in form, pale, colorless, destitute of emotional warmth.” James, 1890, p.450). Nevertheless, James grounded it in the physiological and psychological terms of his times, and his effort earned his theory a prominent position for more than 30 years. Ever since James, the theory of emotions has tried to clarify the causal sequence connecting the emotionally charged stimulus to the emergence of a conscious feeling (LeDoux, 1996).

4.2.1 What is an emotion?

Broadly conceived (for the lack of agreed definitions - see e.g. Pessoa, 2008; Lowe and Ziemke, 2011), an emotion is a composite reaction to a particular subset of stimuli, characterized by a crucial and urgent relation to organismic survival. Exemplary emotions are fear, anger, disgust and joy. Lists of emotions suggested by different authors may differ in the number and nature of their items (LeDoux, 1996). Nevertheless, the exact content of each list is mostly a matter of nuance and moderate variation.

Emotional states can be reduced to a number of basic components (Prinz, 2004; LeDoux, 1996; Damasio, 2000, 2003):

• The experience of an emotion is always about the perception of a bodily state. Our body reacts to fear with unmistakable and objectively measurable signals, e.g. autonomic nervous system responses induce increased heart rate and blood pressure, sweaty hands, tense muscles, effects on salivary glands, piloerection; slower hormonal responses release stress hormones, like adrenaline, adrenal steroids and peptides, into the bloodstream (LeDoux, 1996).

• Emotions determine action tendencies, as they seem to prepare our body for action. For example, fear sets animals in a fight-or-flight mode, i.e. their bodily adaptation prepares the organism for fleeing or, if the danger is perceived as too pressing, for defensive aggression.

• Modulations of mental processes are typically triggered by emotions. Examples include selective attentional mechanisms (e.g. enhanced focus on the salient aspects of a scenario during an accident) and dis-attentional mechanisms (e.g. pain suppression, which allows for the neglect of otherwise painful wounds during a fight).

• In conscious animals (i.e. at least in humans) emotions can be associated with peculiar conscious feelings. Depending on the theory, the distinction between emotions and feelings can range from virtually null (i.e. synonyms) to related yet different concepts.

• Emotions affect thoughts, in the sense that specific emotional states seem to cast a particular bias on our thinking. For example, sadness tends to color our thoughts with gloomy hues.

Animals that are capable of conscious awareness (e.g. humans and probably, at least to some extent, higher primates) integrate this capacity within the emotional brain mechanisms (LeDoux, 1996). Our direct experience as conscious emotional animals tells us that the relevant stimuli can be of an exogenous or endogenous nature. In the former case, we react to emotionally loaded stimuli in our environment. In the latter, the reaction is determined by our mental activity.

4.2.2 Theories of emotion

James' theory of emotions is far from being 'the only game in town'. As LeDoux (1996) observes, each of the five basic components of emotions (specified in Section 4.2.1) has inspired alternative theories of emotions in which that component is the main focus, or in which multiple components are emphasized in so-called 'hybrid theories of emotions' (LeDoux, 1996; Prinz, 2004).

    Somatic Theory (James, 1884; Lange, 1885; Damasio, 1994):
        stimulus ➔ bodily response ➔ bodily feedback ➔ feeling
    Cognitive Labeling Theory (Schachter & Singer, 1962):
        stimulus ➔ arousal ➔ cognition ➔ feeling
    Cognitive Cause Theory (Arnold, 1960; Lazarus, 1991):
        stimulus ➔ appraisal ➔ action-tendency ➔ feeling
    Affective Primacy Theory (Zajonc, 1980):
        stimulus ➔ unconscious-affect ➔ feeling

Figure 4.2: Comparison of the stimulus-to-feeling sequence in some contemporary theories of emotions (adapted from LeDoux, 1996; Prinz, 2004).

To exemplify the different lines of ongoing research, the following list briefly describes some of the currently most influential theories of emotions and their main features. The most significant difference between them lies in the interpretation of the causal sequence from the onset of the emotional stimulus to the emergence of feelings, which is summarized in Fig. 4.2.

• Somatic Theories: After playing a pivotal role in establishing the scientific study of emotions, James' analysis was largely rejected on the basis of the observations of physiologist Walter Cannon (LeDoux, 1996). According to Cannon, the set of bodily reactions foundational to James' theory failed to meet criteria of specificity (which would account for the rich emotional nuances) and timing (bodily information would travel too slowly to justify its role in the onset of emotional dynamics). In the light of current neurophysiological understanding, Cannon's critique now seems too harsh (e.g. see LeDoux, 1996; Friedman, 2010; Stephens, Christie, and Friedman, 2010), and somatic theories of emotions have regained relatively recent interest through the work of neuroscientists like Joseph LeDoux and Antonio Damasio (LeDoux, 1996; Damasio, 2000, 2003). In Damasio's theory, similar to James', emotionally competent stimuli (ECS) determine an unconscious, automatic reaction in the body, imagined as a complex network of interrelated physiological processes (broadly including the state of the musculoskeletal, respiratory, circulatory, digestive and endocrine systems, and of the 'internal milieu'). The change in the state of the body, mirrored via somatosensory signals in the neural state of the brain, constitutes the emotional reaction. Feelings emerge as the experience of these bodily changes, as (either unconscious or conscious) feelings of bodily changes (Damasio, 2000, 2003)1.

• Cognitive Labeling Theories: Stanley Schachter and Jerome Singer built their theory of emotions on the basis of experimental evidence collected in the early 1960s. They injected adrenaline into human subjects, and then directed each of them into a different environmental context. Whilst adrenaline aroused the body, the specific context (offering pleasant or unpleasant situations) seemed to play a crucial role in the determination of a specific emotional connotation. They concluded that cognitive processing interprets the situation and generates the emotional experience upon the unspecific bodily arousal triggered by an emotionally relevant stimulus (LeDoux, 1996; Prinz, 2004).

• Cognitive Cause Theories: Several theories appear a little elusive about the nature of the mechanism that triggers or generates the emotional response. Magda Arnold defined appraisal as the cognitive assessment of the potential harm or benefit of a situation. In her view, extremely influential in the contemporary psychology of emotions, emotion arises with the felt tendency towards or away from anything whose appraisal holds, respectively, a positive or negative undertone. Further dimensions of appraisal involve assessing whether the objects are present or absent, and whether those objects are difficult or easy to attain or avoid (Prinz, 2004). The appraisal process is entirely unconscious, but its outcome imbues the conscious dimension with emotional feelings. Thus, emotion builds up an action tendency (including a disposition to inner bodily reactions) rather than proper action, whereas the appraisal process distinguishes emotions from other conscious states (LeDoux, 1996).

• Affective Primacy Theories: In the 1980s Robert Zajonc brought new evidence to support those who failed to be persuaded that emotion necessarily involves conscious cognition. He showed how priming human subjects with subliminal images (presented below the time threshold for conscious perception and therefore unconsciously registered) would significantly bias their later choices (LeDoux, 1996). Considering preference as a simple emotional reaction, these mere exposure effects could apparently manipulate affective state.

Indeed, much in emotion research requires further investigation. Research programs that aim to ground psychological functions onto brain systems by objective criteria promise a rich harvest of information. Joseph LeDoux's work, briefly introduced in the following section, goes in this direction. His research appears relevant to this thesis, for it illuminates how in the brain various neural structures, all responding on different time scales, adapt and cooperate among themselves and with the body in the creation of complex cognitive functions having crucial impact on organismic survival.

1More about Damasio's theory can be found in Paper VI in Appendix F.

    emotional stimulus ➔ sensory thalamus ➔ amygdala ('low road')
    sensory thalamus ➔ sensory cortex ➔ amygdala ('high road')
    amygdala ➔ emotional responses

Figure 4.3: LeDoux's model of the emotional pathways of fear reactions (adapted from LeDoux, 1996). The model is partly derived by using neuroscience methods to follow the natural flow of information through the brain of animal models.

4.2.3 Exploring the cognitive/emotion divide

LeDoux (1996) focused his research on fear, a pervasive emotion, expressed with great similarity by humans and other animals. His focus on fear allowed LeDoux to rely on sounder neurophysiological methods than the study of less basic emotions would allow. Fear is also relevant to several psychopathological conditions, when maladaptation tends to degenerate into psychopathology. LeDoux's approach relies on the methods of cognitive neuroscience (LeDoux, 1996; Nicholls et al., 2001). Methodologically, LeDoux simplified the object of his study, the emotional response to fear, in at least two important ways. First, he made use of classical conditioning in order to clarify the pathway of the incoming stimulus. For example, by conditioning the association of a noxious stimulus (e.g. a light electric shock) with sound, he could follow the onset of the fear reaction based on this simple stimulus, along a well-known perceptual path that does not need much cognitive processing (LeDoux, 1996). Second, he followed the natural flow of unconscious electrophysiological signals, thus studying the neuroscience of elementary emotions without the burden of studying the neuroscience of consciousness (LeDoux, 1996). Whilst the latter is an inaccessible subjective experience, emotional responses are objectively measurable.

By using traditional methods of the study of neuroanatomy and neurophysiology, i.e. systematically dissecting neural paths and observing the (antero- or retrograde) propagation of action potentials across brain areas by dye-based (WGA-HRP) cell tracing techniques and electrophysiological signals, LeDoux mapped a candidate brain fear system (LeDoux, 1996; Gazzaniga, Ivry, and Mangun, 2008). The fear response brain system described by LeDoux is shown in Fig. 4.3. The emotional stimulus (e.g. the acoustic signal associated with a light electric shock) is distributed to the sensory (auditory) thalamus, from where it bifurcates. From there, a first path carrying coarsely coded information stimulates the lateral nucleus of the amygdala. Inside the amygdala, through intermediate stations, the signal propagates to the central nucleus, which in turn projects to distinct outputs, each one triggering or modulating the typical behavioral, autonomic and endocrine responses to fear (e.g. freezing, increased blood pressure, release of stress hormones). From the thalamus the trigger signal also propagates to the sensory cortex and, after cortical processing, reaches the amygdala, where it integrates with the previously described pathway. The former pathway, LeDoux's 'low road', is unconscious, automatic, quick and coarse; the latter, the 'high road', adds the flexibility of cortical processing and can rise to the level of conscious experience (LeDoux, 1996). As LeDoux puts it (LeDoux, 1996, p.175): “One of the reasons that cognition is so useful a part of the mental arsenal is that it allows this shift from reaction to actions.”
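As a heuristic caricature only (not a neural model: the stimulus features, thresholds and response labels are invented for the example), the dual-pathway logic can be sketched as a fast, coarse check that can trigger a response before a slower, finer analysis completes:

```python
def low_road(stimulus):
    """Fast, coarse check: reacts to a crude cue of the stimulus.
    The cue ('loud') is hypothetical, chosen only for illustration."""
    return stimulus.get("loud", False)

def high_road(stimulus):
    """Slower, 'cortical' analysis: inspects the stimulus in more detail.
    The 'category' field is likewise hypothetical."""
    return stimulus.get("category") == "threat"

def fear_response(stimulus):
    # The quick, coarse path can trigger an immediate reaction...
    immediate = "freeze" if low_road(stimulus) else None
    # ...while the slower, finer path later confirms or revises the appraisal.
    considered = "flee" if high_road(stimulus) else "relax"
    return immediate, considered

# A loud but harmless stimulus: the quick path fires, the slow path revises.
print(fear_response({"loud": True, "category": "harmless"}))  # ('freeze', 'relax')
```

The point of the sketch is the ordering: the coarse check commits to a cheap protective reaction first, and the detailed analysis only shifts the agent "from reaction to actions" afterwards.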

This is of course a simplified view, and several mechanisms are still poorly understood. We know that the amygdala receives rich connections from large portions of the neocortex, sending back an even richer set of projections (e.g. processed visual information can reach the amygdala, but the amygdala projects back to the different levels of visual processing). It is therefore capable of indirect modulation of cortical activity via acetylcholine-, noradrenaline-, dopamine- and serotonin-mediated arousal systems (LeDoux, 1996; Gazzaniga, Ivry, and Mangun, 2008). At the same time, the set of bodily changes systematically induced by the emotional response kicks in as a massive feedback that enriches the available information. Despite the fact that the particular behavior displayed during fear might be extremely different across species (e.g. flying, swimming or running), phylogenetically highly preserved brain systems seem to serve the common functional need (e.g. fleeing; LeDoux, 1996).

An important take-home message from LeDoux's work on the fear response is a holistic and evolutionary perspective on the biology of the emotional machine (LeDoux, 1996, p.40):

“But emotions did not evolve as conscious feelings. They evolved as behavioral and physiological specializations, bodily responses controlled by the brain, that allowed ancestral organisms to survive in hostile environments and procreate. If the biological machine of emotions, but not cognition, crucially includes the body, then the kind of machine that is needed to run emotion is different from the kind needed to run cognition. Even if the functionalist argument (that the hardware is irrelevant) could be accepted for the mind as cognition (and it is not clear that it can), it would not seem to work for the emotional aspects of the mind (since the hardware does seem to make a difference when it comes to emotion).”

Rather than interpreting the brain as a monolithic biological machine, LeDoux (1996, p.76) invites us to look at it as a multiplicity of finely tuned and largely unconscious mechanisms, where “functions are mediated by interconnected systems of brain regions working together rather than by individual areas working in isolation”. Nevertheless, the different functions evolved in isolation and at different times to cope with specific problems. Effective solutions in brain organization appear to be preserved across species (thus justifying the use of animal models to integrate studies in humans). 'Natural emotional triggers' seem to be automatically appraised in order to produce reactions that have proved effective (i.e. the reaction is likely to succeed in keeping the animal alive) over evolutionary time. For example, several prey species react to their natural predator at birth (LeDoux, 1996). The class of emotional triggers can be extended through experience ('learned triggers'), mediated via a multiplicity of memory systems (involving the hippocampus and areas in the temporal lobe) whose activity is directly or indirectly modulated by the amygdala (LeDoux, 1996). Importantly, emotional appraisal and emotional response are tightly and directly integrated.

According to LeDoux (1996, p.299), the conscious aspects of emotion should not be overrated: “the core of an emotion is not an introspectively accessible conscious representation.” Instead, emotion and cognition are best thought of as separate but interacting mental functions, mediated by separate but interacting brain systems. Feelings are the conscious experience of the unconscious biological machine that brings forth emotions. Ultimately, the mind unifies cognition and emotion (LeDoux, 1996). Nevertheless, LeDoux's work seems to build on the hypothesis of a clearer divide between cognition and emotion. Thus, by presenting LeDoux's work we have also introduced yet another perspective on the study of emotions with respect to the deeper integration advocated by the theoretical position of some researchers (see e.g. Damasio, 2003; Varela, 2005; Lewis, 2005; Phelps, 2006; Thompson, 2010; Ziemke and Lowe, 2009; Parisi and Petrosino, 2010) and by the original experimental work that is reported in the second part of this thesis.

4.2.4 Internal robotics revisited

Obviously, somatic theories of emotion should captivate the attention of a materialist research approach to cognition, one that is sensitive to Parisi's urge to also integrate the cognitive effect of non-neural internal processes in its research program (Section 4.1). In fact somatic theories of emotions, differently from other emotion theories, are crucially rooted in measurable physiological quantities. Part of the mission of the experimental work in cognitive robotics described in this thesis (see Chapter 6) is to produce a physically grounded, basic robotic model of neural and regulatory interactions.

Over recent years a growing body of work has been reconsidering the importance of non-neural internal processes as a constitutional component of cognition. Influential neuroscientists seem increasingly keen to abandon the brain-chauvinist stance in favor of a more integrated view of the mind. An argument that seems to be equally embraced from a neuroscientific (e.g. Damasio, 2003; Phelps, 2006) and a cognitive science perspective (e.g. Varela, 2005; Ziemke and Lowe, 2009; Parisi and Petrosino, 2010) defends a view of cognition that is “intertwined from early perception to complex reasoning” with emotion (Phelps, 2006, p.46). As Damasio (2003, p.128) puts it, evolution “has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it”.

For many years emotions were considered an embarrassing and undesirable burden for the rational mind. Currently a drastically different hypothesis seems to find growing support (for a critical review see Lowe and Ziemke, 2011): the set of emotional mechanisms supports organisms engaged in making decisions, in uncertain scenarios, that are crucial to their basic needs and motivations (e.g. eating, drinking, reproducing, escaping predators). Emotion provides these organisms with the capacity to respond more effectively and quickly to environmental challenges, thus promoting a longer life and a higher reproductive chance (e.g. Ziemke and Lowe, 2009; Parisi and Petrosino, 2010). Emotion is not simply functional to survival, but tends to pull survival within a region of 'well-being' (Damasio, 2000, 2003). To get a more direct feel for the bodily mechanisms involved in the emotional machinery, the next section will briefly explore basic non-neural internal mechanisms that support life.

4.3 Metabolism, energy, regulation

Metabolism is typically defined as the set of biochemical processes brought forth by an organism in order to sustain its own life (Hill, Wyse, and Anderson, 2008; Guyton and Hall, 2005). Catabolic processes break down complex biomolecules to produce energy and basic chemical building blocks. Anabolism uses part of the outcomes of catabolism (i.e. energy and simpler molecules) to synthesize complex molecules (e.g. proteins). In particular, energy metabolism represents a fundamental subclass of metabolic processes.

In a living organism, several different mechanisms coordinate the acquisition, transformation and use of energy (including its unavoidable dissipation), and involve different forms of energy (Hill, Wyse, and Anderson, 2008). Chemical-bond energy is the energy that can be liberated following the reconfiguration of the atoms of food molecules. Electrical energy is associated with the separation of positive and negative electrical charges (e.g. the transmembrane potential in muscle and neural cells). Mechanical energy is associated with organized motion, whenever a certain mass (e.g. an arm) macroscopically moves in a given direction. Heat can be interpreted as the statistical outcome of random microscopic molecular motion. Chemical-bond, electrical and mechanical energy are defined as ‘high-grade energy’, as they can be transformed to produce physiological work, i.e. they can be used to generate order (Hill, Wyse, and Anderson, 2008), thus counteracting the natural drift towards higher entropy in thermodynamically isolated systems (Reif, 1967). Chemical-bond energy is also defined as totipotent, for it can be transformed to generate any type of physiological work within the domain of possibilities of the specific organism. Heat, on the other hand, is ‘low-grade energy’, since it cannot be used for any form of physiological work (Hill, Wyse, and Anderson, 2008). This implies a qualitative regression of the different forms of energy, from its totipotent and therefore maximally versatile form (chemical-bond) to its maximally degraded one (heat). No living creature is capable of converting heat back into chemical-bond energy (Hill, Wyse, and Anderson, 2008).

By virtue of the second law of thermodynamics, if we consider the organism's body as a persistent form of organization based on the continuous flow of its constitutive matter (Hill, Wyse, and Anderson, 2008; Maturana and Varela, 1992), energy is necessary for the very maintenance of such organization (Hill, Wyse, and Anderson, 2008). Each animal species is characterized by a metabolic rate, i.e. the rate of conversion of chemical-bond energy into heat and external work (Hill, Wyse, and Anderson, 2008). Basic assessments of this quantity are typically measured for each organism during fasting and resting. Interestingly: (i) the weight-specific metabolic rate decays for species with higher body mass; (ii) although a reasonable estimate of the energetic cost of the brain is about 20% of the whole metabolic rate, intense mental activity does not seem to significantly increase this cost (Hill, Wyse, and Anderson, 2008).
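Point (i) is often summarized by Kleiber's empirical allometric law, under which whole-body metabolic rate scales roughly as the 3/4 power of body mass, so the weight-specific rate falls roughly as the -1/4 power. A minimal numeric sketch of this trend (the normalization constant and the mass values are illustrative assumptions, not data from the text):

```python
# Illustrative sketch of point (i): weight-specific metabolic rate
# decreases with body mass, assuming Kleiber's empirical scaling
# B = a * M**0.75 with an arbitrary constant a = 1.
masses = [0.02, 2.0, 70.0, 4000.0]        # kg (rough: mouse, cat, human, elephant)
per_kg_rates = []
for m in masses:
    whole_body_rate = m ** 0.75           # arbitrary units
    per_kg_rates.append(whole_body_rate / m)   # scales as M**-0.25
    print(f"M = {m:8.2f} kg   rate/kg = {per_kg_rates[-1]:.3f}")
```

Running the loop shows the per-kilogram rate shrinking monotonically as mass grows, which is all that point (i) asserts.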

Energy is the most powerful general concept that science currently possesses for allowing a synthetic description of integrated systems. Energy can be regarded as the common currency that spans every physiological process and level (Hill, Wyse, and Anderson, 2008). The richness of its informational content is unmatched across several physical domains (Hill, Wyse, and Anderson, 2008; Winter, 2009).

Organisms essentially display two different strategies to cope with the challenges of their environment (Hill, Wyse, and Anderson, 2008). The first, conformity, boils down to surrendering to the environment, thus avoiding the investment of energy resources that would be necessary to counteract the environmental strain. This implies that the organism's internal environment will simply follow the dynamics of its external environment. The second strategy is regulation, i.e. arranging a set of processes for the maintenance of the internal parameters of the body at an approximately steady state, irrespective of the external conditions. Indeed, this second solution is energetically expensive: regulation is a form of organization, and as such it requires energy (Hill, Wyse, and Anderson, 2008). Different species can adopt different strategies to face the same environmental challenges, and can freely select any combination of the two. For example, in extreme conditions some insects can allow their body to reach temperatures as low as -40 °C during polar winters. A salmon's body temperature closely matches the temperature of the surrounding water, whereas the level of chloride in its blood is strictly maintained over large ranges of external concentrations (Hill, Wyse, and Anderson, 2008).

The study of this family of phenomena translated into a key concept in the physiological study of organisms that deploy regulation: homeostasis (Guyton and Hall, 2005; Hill, Wyse, and Anderson, 2008). For example, the chemical distribution in the blood and tissues of healthy mammals is largely independent of their surroundings. Claude Bernard (1813-1878) was the first to point to the added freedom derived from the protection of internal tissue from external variability, via the maintenance of constant bodily parameters (Hill, Wyse, and Anderson, 2008). Walter B. Cannon (1871-1945) integrated the observation of internal constancy with the study of the mechanisms of the regulatory systems (Cannon, 1963; Hill, Wyse, and Anderson, 2008). Contemporary physiology tends to regard homeostatic mechanisms from a broad evolutionary perspective. Far from ranking species by the efficiency and complexity of their regulation, the message is that organisms can achieve equally effective adaptation to an environment by deploying drastically different strategies. An energetically expensive complexification of a body plan is not necessarily any better than an inexpensive simplification when both accomplish survival (Hill, Wyse, and Anderson, 2008).

As an extension of the concept of homeostasis, over the last four decades Sterling and Eyer introduced the notion of allostasis (Sterling and Eyer, 1988; Sterling, 2004). Historically, homeostasis had been tailored around the idea of negative feedback. The classical scheme of negative feedback (Fig. 4.4, panel a) requires an externally given reference (setpoint). If the output of the controlled system temporarily diverges from this value, the function of negative feedback is to drag the system's dynamics back towards its target (Wiener, 1961; Brogan, 1990). This form of control does not allow for much flexibility and, crucially, “presumes the existence of a capacity to represent a goal”, as implicit in the existence of the setpoint (Carver and Scheier, 2002, p.305). In its classical formulation, stability within a homeostatic system is achieved through constancy.
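The classical negative-feedback scheme just described can be sketched in a few lines; all names and constants here are illustrative assumptions, not values from the text:

```python
# Minimal sketch of negative feedback (Fig. 4.4, panel a): the
# effector corrects the controlled variable in proportion to its
# deviation from an externally given setpoint.
setpoint = 37.0                       # fixed external reference
variable = 40.0                       # perturbed initial state
gain = 0.5                            # effector gain (illustrative)

for _ in range(10):
    error = setpoint - variable       # comparator (Σ) output via the sensor
    variable += gain * error          # effector drags the system back
print(f"controlled variable: {variable:.3f}")   # back near the setpoint
```

Each iteration halves the residual error, so the controlled variable converges on the fixed target; nothing in the loop ever changes the setpoint itself, which is exactly the rigidity that the allostasis literature objects to.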

Figure 4.4: Panel a): schematic of a negative feedback system (setpoint, comparator Σ, effector, controlled variable, sensor). Panel b): allostatic system, in which prediction based on prior knowledge adjusts the setpoint. Adapted from (Sterling, 2004).

Allostasis takes the opposite stance: stability is achieved through change (Sterling, 2004). The long-term goal of regulation is to allow satisfactory fitness under natural selection. Accordingly, the regulatory system has the function of continuously modulating the bodily parameters so as to physiologically reconfigure the organism in a way that fits its ongoing environmental challenges (Sterling and Eyer, 1988; Sterling, 2004; McEwen and Wingfield, 2003; Schulkin, 2003). This reconfiguration is enabled by predictive mechanisms (Woods and Ramsay, 2007). Therefore, in the perspective of allostasis, the variation of the regulated quantity does not amount to a pathological condition, but simply displays the adaptive response of a healthy organism. It is the chronic maintenance of physiological patterns that allow coping with extreme stress factors that determines pathological states (Sterling, 2004; McEwen and Wingfield, 2003).
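The predictive character of allostatic regulation can be contrasted with reactive feedback in a toy sketch; the linear "prediction" rule, the names and all constants are illustrative assumptions, not a model from the literature:

```python
# Hedged sketch in the spirit of allostasis: prior knowledge forecasts
# a coming demand, and the regulator shifts its operating point
# *before* any error appears, instead of waiting for a deviation.
def allostatic_step(variable, demand_forecast, gain=0.5):
    # Prediction: the target itself is modulated by the forecast demand
    # (illustrative linear rule).
    target = 37.0 + 0.1 * demand_forecast
    return variable + gain * (target - variable), target

variable = 37.0
for forecast in [0, 0, 20, 20, 20, 0]:    # anticipated stressor profile
    variable, target = allostatic_step(variable, forecast)
    print(f"target={target:.1f}  variable={variable:.2f}")
```

Note how the regulated quantity deliberately moves away from its resting value while the stressor is anticipated and relaxes back afterwards: variation here is the healthy, adaptive response, not a control failure.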

Brain, body and behavior all contribute to the natural goal of a system that is challenged by a rather unpredictable and potentially hostile environment. Clearly, the general idea of allostasis is highly relevant to this thesis. This may not entirely apply to its emphasis on the role of the brain, imagined as a necessary ‘central mechanism’ (Sterling, 2004). A typical representation of allostasis is sketched in Fig. 4.4, panel b. With respect to the classical schematic of negative feedback (Fig. 4.4, panel a), the brain, including its capacity to use prior knowledge for effective prediction of future scenarios, is directly engaged in the task of setting the target. Nevertheless, it might be argued that this image is due more to the limited impact that the mathematics of non-linear dynamical systems still has on biological science. We have no evidence of fixed or brain-modulated setpoints in biological systems. Furthermore, any physiological regulatory system is better described as a nonlinear dynamical system, and the existence of a setpoint is not necessary in such systems in order to maintain stable behavior (e.g. see Carver and Scheier, 2002). Rather, what we know from non-linear dynamical systems makes more plausible the notion that a complex interplay between organism and environment, mediated by the high adaptivity of the nervous system and of the regulatory system itself, might determine the dynamics of the regulated variable (e.g. by dynamically creating and sustaining stable fixed points or other types of attractors).


Chapter 5

Cybernetics, dynamics, autonomy

5.1 The 1950s British cybernetics

As mentioned in Chapter 1, several ideas that have been introduced over the last two chapters were already anticipated, at a more or less developed stage, in the pioneering work of the cyberneticists. Cybernetics is a scientific movement that originated in the UK and USA during the 1940s, in a remarkably interdisciplinary context. Norbert Wiener (1894-1964) coined its name from the Greek for ‘steersman’ (Wiener, 1961). Historically, this new ‘science of steersmanship’ is the discipline from which system theory, control theory and information theory stemmed in their modern form. In the words of Wiener (1954, p.17): “It is the purpose of Cybernetics to develop a language and techniques that will enable us indeed to attack the problems of control and communication in general.” This level of abstraction allowed cyberneticists, for the first time in the history of science, to detach their ‘theory of machines’ from the physical details of the specific system under observation, and to focus abstractly on general properties instead (Ashby, 1956).

This new perspective raised an obvious concern about what constitutes a machine, i.e. which kinds of physical instantiation belong to the actual domain of cybernetics. The position of the first cyberneticists could not have been clearer or more unanimous. Consistent with a materialistic stance, traditional machines as well as living tissues, as different forms of organization of the same physical matter, can be effectively treated by the very same set of theoretical tools (Wiener, 1961; Ashby, 1956). This is one reason for the deep concern of cybernetics with the understanding of biological cognition, ever since its inception.

Given the reluctance of the early cybernetic movement to fall within traditional academic taxonomies, it comes as no surprise that both large similarities and differences marked this heterogeneous community. The American community developed into a more academically structured approach to control and communication theory. The British branch often originated extravagant and unconventional research paths. The differences between the two communities were significant, and their different scientific impact was fully perceived by their members, as made explicit by W. Grey Walter (1963, p.131) in the following lines:

“If the principles [of ‘mechanical life’] are preserved, no matter how elaborate the functions of the machine, its mimicry of life will be valid and illuminating. On the other hand, if the principles are abandoned in favor of programming the machine for special purposes, as Wiener foresees, the result may be productive, the machine may entirely supplant human labour in the factory, but it will be of little interest to the physiologist.”

Deeply committed to a scientific program, the exploration of the basic principles of biological cognition and their extension to complex systems, while remaining far from any marketable perspective, British cybernetics eventually dissolved with the death of its third-generation protagonists. Its extinction now seems ineluctable, as the movement largely lacked academic support, and thus the capacity to entrust a continuous intellectual flow with its legacy.

Nevertheless, Pickering (2010, p.32) grasps the intellectual scope of this movement, which extends well beyond the traditional concept of control. The practice of designing artifacts, traditionally the formulation of a plan to be imposed upon matter, was rejected in favor of a “continuous interaction with materials”, adaptively reshaped in a sort of evolutionary approach to design and responsive to environmental changes. This brought forth a whole new ontological perspective (Pickering, 2010, p.32):

“The entire task of cybernetics was to figure out how to get along in a world that was not enframable, that could not be subjugated to human design–how to build machines and construct systems that could adapt performatively to whatever happened to come their way.”

Stafford Beer (1926-2002) highlighted with powerful words the revolutionary spirit of the cybernetic movement, as he questioned the foundational notion of cybernetics itself (Beer, 1967):

“The fact is that our whole concept of control is naive, primitive and ridden with an almost retributive idea of causality. Control to most people (and what a reflection this is upon a sophisticated society) is a crude process of coercion.”

Beer's mention of society should not strike us as bizarre. Several cybernetic scientists conceived and applied their methods to very different scenarios and levels of aggregation, from the analysis of electromechanical artifacts to the study of the mind and of social organization (Wiener, 1961; Ashby, 1960; Walter, 1963; Beer, 1967). Broadly speaking, the ontology of British cybernetics drastically rejected the dualist academic taxonomy separating people from things, a science of matter (e.g. physics and chemistry) from a science of the human (e.g. psychology and sociology) (Pickering, 2010). Beer suggested a tripartition of all possible systems: “simple” (e.g. billiards and penny tossing), “complex” (e.g. digital computers, planetary systems, conditioned reflexes) and “exceedingly complex” (e.g. economies, brains, companies). Whilst the first two categories, largely knowable and predictable, can be studied with the traditional methods of science, the last is the elective domain of cybernetics, which he defined as “the science of the exceedingly complex systems”. Consistently, he also worked on the idea of a cybernetic factory, and collaborated with the government of Salvador Allende on the “cybernetic governance” of the Chilean economy from the summer of 1971 to the military coup in September 1973 (Pickering, 2010).

A common thread can be found throughout the cybernetics community: several of its members actively engaged in the construction of physical models of their theories. This attitude is well represented in the formulation of the law of uphill analysis and downhill invention by Valentino Braitenberg (1984). Whenever ‘opening the box’ that contains an unknown complex system is impractical, understanding the internal mechanisms that lead to an observable complex behavior, i.e. achieving a direct analysis of the intrinsic dynamics of the system, is often problematic. Human observers are biased by a tendency to overestimate the necessary complexity, for the induction of general laws from observations is a troublesome mental process. On the other hand, a synthetic approach often offers quick and simple solutions that can be used to form a useful basic reference. This synthetic approach is particularly advantageous for the study of biological minds, a case where the complexity of the black box is rarely matched. The remainder of this section will focus on the models implemented by two of the pioneers of British cybernetics, namely W. Grey Walter and W. Ross Ashby.

5.1.1 Walter: the brain as dynamic interaction

W. Grey Walter (1910-1977) was already well established as a brain scientist when he embarked on his first cybernetics experiences (Pickering, 2010). His success was largely due to his improvement on ‘Berger's machine’, destined to become the modern electroencephalograph, or EEG (Walter, 1963). His work contributed to the discrimination and systematic study of a variety of normal and pathological brain waves at characteristic frequencies. He applied the cybernetic concept of feedback to investigate the mental effects of ‘flickering’ and to applications of ‘biofeedback’ (Pickering, 2010).

Walter's tortoises represent his attempt to explore the biological brain by modeling some of its properties (Walter, 1963). Initially, Elsie and Elmer were two simple wheeled ‘electromechanical creatures’ of the species machina speculatrix, in Walter's own humorous taxonomy (Walter, 1950, 1951, 1963). Their sensory capabilities were limited to a revolving photosensitive cell, which continuously scanned for environmental light, and a bumper connected to a switch. Electric motors, their motor organs, drove their wheels, whilst relatively simple hardwired analogue electronics (based on valve-tube technology) connected sensing to action. Power was supplied by an on-board rechargeable battery. Finally, a small light bulb was placed on top of each machine. The control circuit was designed so that moderate lights would attract the machine, whereas intense lights would repel it. Electromechanical relays, controlled by the actual input, would select the current behavior: navigation in an open environment, back-and-forth oscillations to escape an obstacle, and locking at the distance of optimal light intensity from a light source.
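The relay-based behavior selection just described can be sketched as a simple discrete arbitration rule; the thresholds and behavior names below are illustrative assumptions, not values from Walter's circuit:

```python
# Hedged sketch of tortoise-style behavior selection: moderate light
# attracts, intense light repels, a bumper contact triggers escape
# oscillations (all thresholds are illustrative).
def select_behavior(light, bumper_hit, low=0.2, optimal=0.6, high=0.9):
    if bumper_hit:
        return "oscillate"            # back-and-forth obstacle escape
    if light > high:
        return "retreat"              # intense light repels
    if light >= optimal:
        return "lock"                 # hold at optimal light intensity
    if light >= low:
        return "approach"             # moderate light attracts
    return "explore"                  # open-environment navigation

print(select_behavior(0.05, False))   # -> explore
print(select_behavior(0.7, False))    # -> lock
print(select_behavior(0.7, True))     # -> oscillate
```

The point of the sketch is how little machinery behavior switching needs: a handful of thresholded conditions, with the richness of the resulting trajectories coming from the closed loop through the environment rather than from the controller itself.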

Elsie and Elmer's evolutions as a couple, or rather those of a lone machine engaged with its own image in front of a mirror, struck the popular imagination to the point of becoming an object of poetic inspiration (Queneau, 1950). From a scientific point of view, they embody a model that goes beyond the role of mere epistemological tool. Indeed, they seem to outline an ontological view of the mind in action. The tortoises were designed to be sensitive to specific interactions (environmental light distribution and mechanical contact with objects). Their actual behavior is causally determined by the fields of force due to the interplay between the environment and the machine itself. The metaphor of mind brought forth by the model is one of dynamic interaction between two systems, the environment and the machine. The environment was explicitly considered part of the feedback loop (Walter, 1963). In the analysis of Pickering (2010), this is the ontological view of a performative (and radically non-representational) biological brain, designed by natural evolution to produce effective action in its environment.

Complexity, in the form of partly unpredictable behavior, emerged from this interaction, rather than by virtue of a complex control circuitry. In fact, the design of machina speculatrix was extremely parsimonious, involving the use of only two valve tubes, assumed as ‘brain functional units’. In Walter's intentions, this models how the complexity and variety of biological behaviors should be achieved through richness in connections in the first place, rather than from a large number of functional units (Walter, 1963). Such simple machines were capable of alternating and balancing seemingly purposeful behaviors like positive tropism (approaching moderate light sources), negative tropism (avoidance of intense lights and obstacles), discernment of optimal light conditions, and convergence to a recharging station (landmarked by a light source) when an energy deficit was sensed. Furthermore, the tortoises engaged in characteristic behaviors during mutual recognition, as Elsie and Elmer were left free to interact with each other, and self-recognition, when a lone tortoise was released in front of a mirror (Walter, 1963).

The next generation of machina speculatrix, machina docilis, featured CORA (COnditioned Reflex Analogue). This masterpiece of analogue electronics detects correlated inputs and is capable of learning associations across multimodal sensory inputs. CORA transferred into hardware Walter's expertise in conditioned reflexes, derived from a tight collaboration with Ivan Pavlov's group. Pickering offers a nice description of machina docilis as a model of performative mind (Pickering, 2010, p.67):

“... the knowledge in machina docilis waxed and waned with its performance, integrating over its experience – associations between stimuli would be lost if the robot's expectations did not continue to be reinforced – and thus functioned as a heuristic guide, emerging from and returning to performance, not as any kind of controlling center. Docilis thus offers us a vision of knowledge as engaged in, and as part of, performance...”
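The waxing and waning of associations described in the quote can be sketched as a leaky associative weight; the update rule and all constants are illustrative assumptions, not CORA's actual analogue circuitry:

```python
# Illustrative sketch: an association strength between two stimuli
# grows when they co-occur and decays when the pairing is no longer
# reinforced, so the learned association waxes and wanes with
# experience.
def update(weight, s1, s2, rate=0.3, decay=0.05):
    weight += rate * s1 * s2          # strengthen on co-occurrence
    return weight * (1.0 - decay)     # passive forgetting

w = 0.0
for _ in range(20):                   # conditioning: stimuli paired
    w = update(w, 1.0, 1.0)
peak = w
for _ in range(60):                   # extinction: pairing withheld
    w = update(w, 0.0, 0.0)
print(f"after pairing: {peak:.2f}, after extinction: {w:.2f}")
```

The weight never reaches a permanent store: it is sustained only by ongoing reinforced performance, which is precisely the non-representational reading Pickering gives of machina docilis.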

Beyond metaphor, Walter considered CORA a sophisticated model of brain activity. He demonstrated analogies between electrical activity in CORA and brain rhythms as displayed by EEGs. Furthermore, CORA exhibited pathological behaviors (e.g. obsessions); as Walter (1963, p.184) suggests: “... the power to learn implies the danger of breakdown.” CORA as a model of the (pathological) brain reinforced confidence in psychiatric therapies later made obsolete by the introduction of psychoactive drugs. In fact, Walter (1963) used the model to show how a pathological system (either machina docilis or a human mind) could be brought back to normal behavior by: (i) simply waiting, thus viewing the pathological behavior as a transient state (prolonged sleep treatment); (ii) inducing instability (electroconvulsive therapy); (iii) severing the circuit (lobotomy).

5.1.2 Ashby: the brain as ongoing adaptation

W. Ross Ashby (1903-1972) had a reputable position as a clinical psychiatrist when he embarked on his exploration of cybernetics. The keyword in his cybernetic interpretation of the biological role of the brain is ‘adaptation’ (Ashby, 1960). What force can drive an organism, typically endowed with enough power to be potentially self-destructive, as it learns how to cope with new situations in an uncertain world?

Ashby grounded his answer, and his description of a self-coordinating brain, in the materialist tradition of British psychiatry (Ashby, 1960). His aim of transforming the study of the mind into an operationally rigorous activity, rooted in the coevally emerging concepts of control systems theory, is manifest throughout his books (Ashby, 1956, 1960). The Ashbian concepts that describe the mind can be defined and related through operational observations, i.e. in terms of measurable quantities. His research strategy is as systematic as his method. Ashby devised the use of a model of physical interaction between the brain and its environment. He argued for the same level of idealization that is proper to frictionless planes, leakless capacitors, perfect gases and massless springs. Such idealization is an invaluable epistemological tool in that (Ashby, 1960, p.29, original emphasis):

“Maybe it will be found eventually that not a single mechanism in the brain corresponds exactly to the types described here; nevertheless the work will not be wasted if a thorough knowledge of these idealised forms enables us to understand the workings of many mechanisms that resemble them only as approximations.”

According to Ashby, the brain is an organ made of common matter, and as such it deterministically follows the same physical laws as its environment. The brain is a specialized “means to survival” (Ashby, 1960), evolved to cope with the challenge of natural selection. The ‘whole’ constituted by the environment (functionally defined through the set of variables that affect the organism and are affected by the organism's behavior) and the organism is a state-determined feedback system (Ashby, 1960). Unlike more traditional approaches to the study of the mind (e.g. physiology and psychology), whose “experiments are deliberately arranged to avoid feedback” (Ashby, 1960, p.38), here the properties relevant to the study of the mind emerge from the whole. Ashby (1960, p.40) was well aware of the problems that such a choice might generate:

“As the organism and its environment are to be treated as a single [feedback] system, the dividing line between ‘organism’ and ‘environment’ becomes partly conceptual, and to that extent arbitrary. [...] we shall always treat the system as a whole, dividing it into parts in this unusual way merely for verbal convenience in description.”

Within the whole, essential variables (hereafter EVs) form a key concept in Ashby's work. EVs indicate a subset of the organism's variables that are critical to its viability (e.g. blood pressure, heart rate, blood concentration of hemoglobin and glucose). The species-dependent specificity of these variables, and of the ranges admitted for them, takes the idea of embodiment, applied to Ashby's thought, to rather extreme consequences (see Section 3.2.7). The relation between different EVs and lethality is species-specific and non-uniform. Adaptation (and survival) has to do with the maintenance of every EV within physiologically viable limits (Ashby, 1960).

There is an obvious relation between adaptation and stability. A stable feedback system is intrinsically goal-seeking, as it tends to control its error, i.e. the distance between its actual and desired state (Ashby, 1960). A few brief comments are necessary: (i) a feedback system is typically either in a stable or an unstable state; (ii) stability has nothing to do with fixity and rigidity (it is rather about the tendency of the system to converge towards an attractor); (iii) stability is a property of the whole; (iv) stability of the EVs might involve “vigorous changes” in other system variables (Ashby, 1960). Therefore, adaptive behavior amounts to the behavior of a stable system, and survival is the only criterion for adaptivity (Ashby, 1960).

Figure 5.1: Schematic of an ultrastable system: panel a) shows an organism (O) coupled to its environment (E); panel b) adds the second feedback loop through the EVs and the parameters (P) (see text; adapted from Ashby, 1960).

In the Ashbyan lexicon, the system's field is “the phase space containing all lines of behavior found by releasing the system from all possible initial states in a particular set of surrounding conditions” (Ashby, 1960, p.23). Whilst the EVs and the other variables determine the state of the system in its phase space, a change in parameters determines a change in the field.

On this basis, Ashby introduced his concept of ultrastability. Fig. 5.1a schematically represents the relation between two systems, e.g. an organism (O) and its environment (E). The relation between the two should actually be considered “fine grained”, in the sense that it is constituted by a network of multiple micro-interactions, extending within each subsystem and from one subsystem to the other (Ashby, 1960). A complete adaptive system, according to Ashby, is represented by a double feedback loop (Fig. 5.1b), in the simplified situation where only the environment can affect the EVs of the system. When any of the EVs escapes its physiological range, the event triggers a change in the system parameters (P) that directly affect the organism (but not the environment). This determines a change in the structure of the phase space (the field), thus triggering the switch to a different behavior. Ashby considered this kind of organization necessary and sufficient for adaptation (Ashby, 1960).

In the representation of Fig. 5.1b, O is an organism trying to control the output of an unknown and largely inaccessible environment (a black box). Its only possible strategy is to operate through ‘trial and error’. Whenever a behavior proves inviable, i.e. the environment pulls the organism's EVs out of range, the organism reacts with a drastic change in behavior. This exploration of new strategies is repeatedly deployed until the EVs are restored within physiological limits (i.e. the homeostatic regime).
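The ultrastable trial-and-error loop can be sketched with a scalar toy model; the ‘environment’ dynamics and all numbers are illustrative assumptions, not Ashby's:

```python
# Hedged sketch of ultrastable trial and error: whenever a behavior
# lets the essential variable (EV) escape its viable range, the
# organism draws a new random parameter value and tries again.
import random

random.seed(0)

def run_behavior(param, ev=0.5, steps=200):
    """Run the toy black-box environment; None means the EV escaped."""
    for _ in range(steps):
        ev = ev + param * ev          # unknown dynamics of E acting on O
        if not (-1.0 < ev < 1.0):
            return None               # EV out of physiological range
    return ev

param = 0.4                           # initial configuration is inviable
trials = 0
while run_behavior(param) is None:    # drastic change of behavior:
    param = random.uniform(-2.0, 1.0) # random parameter reconfiguration
    trials += 1
print(f"viable parameter found after {trials} reconfiguration(s)")
```

Note that the organism never inspects the environment's equations: it only senses whether its EV stayed in range, which is exactly the black-box constraint Ashby imposes.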


Figure 5.2: A four-unit homeostat (Ashby, 1960, with permission).

The homeostat (Fig. 5.2), Ashby's elegantly stylized model of mind, followed the same principle (Ashby, 1960; Pickering, 2010). The homeostat was meant as a proof of concept, abstracting away from the entertaining ostentation of Walter's machines. In fact (and not without irony), Walter (1963, p.124) referred to the homeostat as ‘machina sopora¹’ and synthesized its properties with a laconic “... it is all negative feedback ...”. Of course there is more to the homeostat than mere negative feedback, namely the implementation of an ultrastable system, as proved by its hectic activation during post-perturbation transients.

Ashby's homeostat is implemented through the interconnection of two or more basic units. Each unit generates a DC electric current as output, and receives as input the outputs of all the other units of the homeostat. The input currents, flowing in coils (A, B, C in Fig. 5.3), produce a variable magnetic field whose intensity depends on the algebraic sum of the currents through each coil. In each unit, the magnetic field deflects a pivoted magnetic bar (M), whose deviation from its central position linearly determines the output current. Between the input and the coils, though, a potentiometer (P) determines what percentage of the input current can reach the coil, and a commutator (X) adjusts its polarity. It can be shown that, if there is sufficient viscosity in the movement of the Ms, the homeostat is a state-determined system. For appropriate values of the parameters P, the system can reach a stable fixed point corresponding to all Ms in their central positions. For a different configuration, though, the system is unstable, and one or more Ms will eventually reach the ends of their troughs. The system described so far, whether stable or unstable, has no property whatsoever related to ultrastability.

¹From the Latin sopor, soporis, deep sleep.

Figure 5.3: Circuit diagram for a homeostat unit (Ashby, 1960, with permission).

To create an ultrastable system, switch S in Fig. 5.3 can be closed on a ‘uniselector’ (U), whose components act similarly to the sequence commutator X followed by potentiometer P, but have been deliberately randomized. Therefore the collection of available uniselector positions provides the homeostat with a rich set of random configurations for its parameters. In the homeostat abstraction, the values of all Fs (which get activated only when the output current of the relative unit exceeds a predetermined range for more than a predetermined interval of time) represent the EVs of the system. The activation of F triggers the selection of a new position in the uniselector of its unit. In other words, a disturbance that takes any EV out of its stable fixed point determines a sequence of changes in the system’s ‘field’ through a random reconfiguration of its parameters until a stable equilibrium is achieved again (Ashby, 1960).
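In simulation, this ultrastable loop can be sketched in a few lines. The following is a deliberately minimal sketch, not a model of Ashby’s actual circuit: each unit’s output relaxes toward a weighted sum of all outputs (standing in for the viscous magnet dynamics), each unit’s deviation plays the role of its essential variable, and whenever it leaves its bounds the unit’s incoming weights are redrawn at random, mimicking a uniselector step. The time delay on F’s activation is omitted, and all thresholds and gains are arbitrary choices of ours.

```python
import random

def step(x, W, dt=0.05, leak=1.0):
    # One Euler step of the viscous magnet dynamics:
    # dx_i/dt = -leak * x_i + sum_j W[i][j] * x[j]
    n = len(x)
    return [x[i] + dt * (-leak * x[i] + sum(W[i][j] * x[j] for j in range(n)))
            for i in range(n)]

def run_homeostat(n=4, bound=1.0, steps=20000, seed=1):
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]  # coil gains (P, X)
    x = [rng.uniform(-0.5, 0.5) for _ in range(n)]                  # magnet deviations (Ms)
    resets = 0
    for _ in range(steps):
        x = step(x, W)
        for i in range(n):
            if abs(x[i]) > bound:            # essential variable out of range: F fires
                W[i] = [rng.uniform(-1, 1) for _ in range(n)]  # uniselector jump, unit i
                x[i] = 0.0                   # magnet recentred
                resets += 1
    return x, resets

x, resets = run_homeostat()
```

Run with random initial weights, the system typically churns through a number of uniselector resets before it stumbles on a stable parameter configuration, after which all deviations decay toward zero — the homeostat’s ‘search for its own purpose’ in caricature.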

Ashby justified his choice of the term ‘ultrastable’ through an example (Ashby, 1960). The classical automatic pilot of an airplane is a device, based on negative feedback, whose function is to maintain a stable attitude of the aircraft in the air. If the airplane tends to roll to the right, the device has to compensate with a correction to the left. Now imagine suddenly inverting the relation between the command and the ailerons (i.e. the command used to compensate to the left amplifies the roll to the right instead). The system, whose feedback has been transformed from negative into positive, becomes unstable, i.e. incapable of maintaining the airplane’s attitude during its flight. On the other hand, the homeostat would be able to cope with analogous radical changes in the system, by randomly modifying its behavior until adaptation to the new situation has been achieved.

Ashby experimentally demonstrated the robustness of his homeostat.

The system would typically reach equilibrium after perturbation. It could also be trained according to protocols reminiscent of Pavlovian conditioning. For example, a three-unit homeostat could be trained to respond to a displacement (forced by a human operator) of the magnetic bar of Unit 1 (M1) to the right with a spontaneous displacement of M2 to the left, by forcing M3 to the end of its trough in case the system disobeyed this rule - an operational form of punishment (Ashby, 1960). Ashby also showed pathological conditions under which the homeostat would fail to reach its equilibrium. The homeostat could not adapt to an environment that manifests itself through sudden discontinuities or imposes a critical level of time pressure (Ashby, 1960).

Ashby’s homeostat extended the image of a brain continuously engaged in a dynamical interaction with its environment by adding to the system the means for a continuous adaptation to new and unexpected challenges. Wiener (1954, p.38), remarking on Ashby’s idea of an “...unpurposeful random mechanism which seeks for its own purpose through a process of learning...”, celebrated Ashby’s model as “...one of the great philosophical contributions of the present day...”.

Pickering (2010) observes that, consistent with their classical scientific careers, in neurology and psychiatry respectively, Walter and Ashby’s activity in cybernetics should be considered “securely within the realm of modern science”. Indeed, Walter and Ashby tried to open up the ‘black box’ constituted by the brain, using their (somewhat unconventional) models to disclose it to our “representational understanding”, as commonly done in science (e.g. see Shapiro, 2011). But Pickering also points to the innovative aspects that Walter and Ashby introduced into the scientific tradition of their age. Their explanations emerge from an “articulation of parts”, and aim at the discovery of complexity. Walter and Ashby’s surprise, in the face of the unexpected phenomena emerging from well-known basic components, reminds us that systemic understanding based on mere reductive knowledge “can be difficult to the point of impossible” (Pickering, 2010, p.29).

5.2 Dynamical systems and cognition

As we have seen, the early cyberneticists tackled the study of the mind from a systemic perspective. Building on the foundational concept of feedback (Wiener, 1961; von Bertalanffy, 1969; Brogan, 1990), they also pioneered the development of the new discipline of control theory. The latter, integrated with breakthrough mathematical developments for the study of non-linear differential equations, anticipated over a century ago by Henri Poincaré (1854-1912) and now made numerically tractable by computer technology, determined the conditions for the emergence of a (non-linear) dynamical systems theory.


Historically, the problematic analytical tractability of non-linear differential equations translated into their systematic linearization, in order to consider the linear form as a valid local approximation in the neighborhood of a single point. Not for a minute does this imply a negative opinion about this mathematical procedure, fully motivated by the limitation in technical resources and by the analytical insight that it provides. In fact, this move allows the analytical treatment of non-linear problems by applying the superposition of effects on a local scale. Nevertheless, this computational strategy, reductionist in spirit, implied the systematic disappearance of higher-order effects on larger scales.

The application of powerful numerical methods for the solution of systems of differential equations, made possible by electronic computers, allowed the beginning of a systematic exploration of non-linear phenomenology (within a huge literature at any possible level of mathematical rigor, see e.g. Guckenheimer and Holmes, 1983; Strogatz, 1994). The core of the theory deploys geometric methods for a qualitative grasp of the dynamic properties of the observed system (Arnold, 1992), whilst numerical methods support a thorough investigation of such dynamics. The insight of dynamical systems theory is that even formally simple and fully deterministic systems can give rise to surprisingly complex dynamics and chaos (Lorenz, 1963; Rössler, 1976, 1979; Strogatz, 1994). Whilst (unavoidable) small errors in the measurement of the starting position of the system are predictably propagated in a classical dynamical system, a system under a chaotic regime remains quantitatively predictable only for a very limited time interval. A complete description of the main classes of bifurcations, i.e. sudden macroscopic changes in the geometric properties of the system’s dynamics in response to the variation of some parameter, can be explored by means of a remarkably simple set of basic equations (Strogatz, 1994).
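This sensitivity to initial conditions is easy to exhibit numerically. The sketch below is our own illustration, using the Lorenz (1963) system with its conventional parameter values (σ = 10, ρ = 28, β = 8/3) and a crude explicit Euler integrator: two trajectories whose starting points differ by one part in a billion end up macroscopically far apart after a few tens of time units.

```python
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the Lorenz (1963) equations
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def distance(a, b):
    # Euclidean distance between two states
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

# Two trajectories whose initial conditions differ by one part in a billion
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)
for _ in range(3000):          # 30 time units at dt = 0.01
    a, b = lorenz_step(a), lorenz_step(b)
print(distance(a, b))          # macroscopic separation despite the tiny initial error
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself, which is exactly why quantitative prediction is limited to a short time horizon even though the equations are fully deterministic.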

If we were to choose the one lesson that dynamical systems theory can teach us, it would probably be the awareness that in a complex system the links among components, rather than the components alone, constitute a crucial property. Locally the system might settle into unstable dynamics, yet dissipative phenomena (e.g. friction), present in any actual physical system, will globally balance and therefore damp divergence. This will often ‘tame’ the dynamics of the system within limited regions (unless the system disintegrates along the transient). Dynamical systems theory gave us the opportunity to investigate dynamics that develop on this broader scale. Indeed, in modeling complex systems, reductionism applies, but not as the systematic decomposition of the system into more elementary components. Rather, each component is simplified to the limit that still captures the phenomenon under study, yet preserving intact the connections among all components.

This theoretical revolution benefited all fields of quantitative applied science, from physics to electronics, from meteorology to economics, just to name a few. In the late 1980s, a dynamical systems inspired approach to the study of cognition emerged on these grounds. For example, Kelso et al. applied dynamical systems principles to the study of simple problems of motor coordination (an overview is provided in Kelso, 1995). A neuroscience twist is nicely presented by Skarda and Freeman (1987), who modeled classical conditioning in the olfactory system using a set of non-linear ordinary differential equations. Their results, supported by electroencephalographic data, showed how chaotic dynamics might be used by the nervous system as a neutral state from which a specific attractor (i.e. a spatial pattern formed by the amplitude distribution of synchronous oscillations) would emerge in response to each conditioned odor stimulus.

In the field of cognitive robotics, the idea of a systemic approach is highlighted in the theoretical work by Beer (1995, 1997, 2000), which emphasized the interweaving dynamics of brain, body and environment as constitutional components of cognitive phenomena. The experimental work by the same author (e.g. Beer, 1995, 1997, 2003) can be considered exemplary, for it tries to explicitly flesh out the dynamic properties of simple robotic systems. Beer and collaborators try to implement a ‘cognitive frictionless plane’ to test their robots on tasks that require “the simplest behavior that raises cognitively interesting issues” (minimally cognitive behavior; see Beer, 1996; Slocum, Downey, and Beer, 2000) in simple, yet complete, simulated agents. Perception and discrimination emerge within their relation to ongoing action, are deployed over a continuous time interval and are holistically achieved via the exploitation of the multiple resources available within the agent-environment coupled system (e.g. Beer, 2003).

Models of minimally cognitive robotics will be the backbone of the original experimental work reported in Chapter 6. Simplicity should be interpreted as functional to a thorough analysis of the system, and to the isolation of possible basic cognitive dynamics from the ‘background noise’ of highly articulated and complex models. Therefore, despite the often criticized lack of technical complexity of the robotic architectures, this class of models should be considered as belonging to a sound methodology. The determination of the appropriate level of abstraction is a crucial step in the modeling of systems in general and cognitive systems in particular. Beer (1997) criticizes the ‘microworlds’ of early AI investigations not in virtue of their being abstractions, but for being the wrong kind of abstractions.

Beer follows Ashby in considering the division between an agent and its environment somehow arbitrary (Ashby, 1960; Beer, 1997). “Is an artificial limb part of the agent or part of the environment?”, asks Beer (1997). Nevertheless, the meaning of the constant interaction between an agent A and its environment E is formally specified in terms of coupled non-autonomous dynamical systems (Ashby, 1960; Beer, 1997). Here non-autonomous refers to the mathematical property that the system (be it the agent or its environment) receives inputs. Coupling implies that some of the parameters of each system depend on the state variables of the other. In more formal terms (Beer, 1997):


ẋA = A(xA; S(xE); u′A)     (5.1)

ẋE = E(xE; M(xA); u′E)     (5.2)

where, respectively for agent A and environment E, xA and xE are the internal states, and u′A and u′E are system parameters. S(xE) and M(xA) are the subsets of the system parameters that are given as input from the respective coupled system. S represents all environmental effects on the agent A (typically through sensory paths); analogously, M represents all effects of agent A on its environment E (typically through its effectors). The two coupled systems induce perturbations on each other, mutually modifying the dynamical structure of the coupled system, and therefore the quality of its possible dynamics.
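As a toy instantiation of this coupling scheme, the two flows can be integrated together, feeding each system’s state into the other as a parameter at every step. The right-hand sides below are arbitrary illustrative choices of our own, not Beer’s: the point is only the structure of the coupling, in which neither flow is defined without the other’s state.

```python
def simulate(steps=1000, dt=0.01):
    # Euler integration of two coupled non-autonomous systems in the spirit
    # of Eqs. (5.1)-(5.2).  The specific dynamics are arbitrary examples.
    xA, xE = 0.1, 0.5
    for _ in range(steps):
        S = xE                                   # sensory function S(xE)
        M = xA                                   # motor function M(xA)
        dxA = -xA + 2.0 * S * (1.0 - xA * xA)    # agent dynamics A(xA; S(xE); u'A)
        dxE = -0.5 * xE + 0.8 * M                # environment dynamics E(xE; M(xA); u'E)
        xA, xE = xA + dt * dxA, xE + dt * dxE
    return xA, xE

xA, xE = simulate()   # the coupled pair settles on a joint equilibrium
```

For these particular choices the coupled pair relaxes onto a joint fixed point; with other right-hand sides the very same coupling structure can produce oscillations or chaos, which is exactly why the agent’s behavior cannot be read off from either subsystem in isolation.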

Often, though, the dynamical systems inspired approach to the study of cognition has been limited to a mere phenomenological description. For example, this is the case for the pioneering work by Kelso (1995) and colleagues on motor coordination, for Schöner et al.’s model of the classical A-not-B problem (Thelen et al., 2001), for Thelen and Smith’s (1994) treatment of motor development in infants as parametrized by leg mass, or for the work by van Rooij, Bongers, and Haselager (2002) cited in Section 3.3.2. This is consistent with van Gelder’s specification of a high level of abstraction for the dynamical systems approach (van Gelder, 1998, p.619):

“Another noteworthy fact about these models is that the variables they posit are not low level (e.g., neural firing rates) but, rather, macroscopic quantities at roughly the level of the cognitive performance itself.”

According to van Gelder (1998), this means that dynamical systems inspired research is revealing to us the organization of cognitive systems “at the highest level relevant to an explanation of cognitive performances”. In outlining his hypothesis, van Gelder seems rather comfortable with his own detachment from a physically grounded world. He (generically) refers to models where the variables of interest are entities like ‘valence’ or ‘preference’. Indeed, at times this approach gains a surprising analytical insight into cognitive systems (Chemero, 2009). This perspective suggests to us that cognition can be described as the activity of a dynamical system.

In assessing this stance two observations are necessary. Firstly, dynamical systems theory is merely a mathematical methodology, and as such is metaphysics-free (Beer, 1997; Chemero, 2009). Its set of tools may correctly describe the kinematics of cognition; nevertheless, it does not have the power to penetrate the specific forces that mold the causal chain of biological cognition. The use of dynamical systems theory alone has nothing to tell about the actual dynamics underneath the cognitive phenomenon. Chemero, whilst suggesting dynamical systems theory as the natural language for embodied cognition, refers to this fact as a reinstantiation of ‘the problem of discovery’ made infamous by Boltzmann: as happened to physics, cognitive science might need an actual background theory to form its backbone and spell out what we still ignore about cognition (Chemero, 2009). Secondly, saying that a cognitive system can be described with dynamical systems concepts implies a basic assumption, i.e. that the basic laws of cognition can be described using the same kind of laws that we use to characterize the rest of physical nature, and that we do not need a special language to describe cognitive phenomena. Therefore, dynamical systems theory seems like the ideal bridge between mind and world, psychology and biology, and “specially well suited for situated and embodied approaches to cognition” (Barandiaran, 2006).

From this point of view the dynamical systems approach (as a specific methodology) seems to offer a radical alternative to the computational, representationalist stance. Several authors agree that arguments in favor of the total rejection of representations cannot be defended (e.g. Chemero, 2009). Nevertheless, the methods of dynamical systems theory are “neutral over whether to consider parameter or variable values as representations” (Chemero, 2009, p.26), and allow “no compromise with the representational theoretical primitives to model cognitive systems” (Barandiaran, 2006). This approach opens a new epistemological path to the study of the mind, as a representationalist translation of dynamical systems accounts seems to imply a loss of explanatory power, and therefore appears inappropriate if not mistaken (Beer, 1997; Chemero, 2009; Barandiaran, 2006). On these grounds, Chemero refers to the dynamical stance as “a methodological commitment to explaining perception, action, and cognition dynamically and without referring to representations” (Chemero, 2009, p.xi).

Nevertheless, a position paper, no matter how intriguing, can only have a limited impact on science: the devil’s homeland is still, stably, in the details. In his commentary to van Gelder (van Gelder, 1998), Jaeger points to the current limitations of dynamical systems theory, still in a phase of theoretical development and refinement, and still rather ineffective for the characterization of a large class of non-autonomous systems, i.e. systems that receive external inputs (see also Beer, 1995). Jaeger effectively summarizes the nature of the theoretical gap, as the complexity required for sensible models of complete cognitive systems is still well beyond our current capacity (commentary by H. Jaeger to van Gelder, 1998, p.644):

“Complete cognitive systems arguably have the following properties: (1) they are driven by stochastic input that varies on the system’s own characteristic timescales, (2) they are high-dimensional, with many variables developing according to different laws in different subsystems, and (3) they are nonstationary. Nonstationarity is here understood as resulting from a nonparametric change in the dynamical law itself, as occasioned, for example, by topological restructuring of neural connectivity, by evolutionary processes, or by growth. I will call systems having properties (1)-(3), wild systems.

Wild systems cannot be captured by today’s mathematicians. Even basic concepts such as attractors or bifurcations cease to be of much help in systems driven by fast, stochastic input. Currently high-dimensional systems can be approached only with respect to some kinds of collective parameters, as in synergetic systems or mean-field approaches. Finally, with nonstationarity of the strong kind noted in (3), we are simply lost. [...]

Hence, given the premises (1) cognition is irreducibly a property of complete, situated agents; (2) complete, situated agents are wild systems; and (3) current dynamical systems theory cannot catch wild systems, it follows that current dynamical systems theory cannot capture cognitive systems.”

5.3 Autonomy

Over recent years, autonomy (from the Greek for ‘self’ and ‘law’) has become a crucial research topic (e.g. Pfeifer and Gomez, 2005; Ziemke, 2008). As recently observed by Xavier Barandiaran2, a question about the nature of autonomous agency is relevant to a broad spectrum of disciplines ranging from the sciences of the living to social science, across biology, neuroscience, cognitive science, various approaches to artificial intelligence, psychology, economics and sociology. Despite its importance, the concept of autonomy often endures vague definitions, of the kind that generically refers to ‘an emancipation from exogenous control’.

Autonomy remains a controversial and hard philosophical problem. Some argue in favor of a sharp separation between the concepts of autonomy and agency (e.g. Muntean and Wright, 2007). In philosophy, the latter concept often refers to teleology and normativity. The former, once the term is pruned of its implicit connotation of autonomous agency, can be simplified from “doing something for oneself” into “doing something by oneself” (Muntean and Wright, 2007). Thus, according to Muntean and Wright (2007), autonomy can be reduced to a mere requirement of independent behavior or activity.

As anticipated in Chapter 1, the reference scenario in which autonomous robots are called to offer convincing performance is dominated by various levels of uncertainty within the situation of interest. The dynamics that characterize the environment, and the very environment-agent interactions that are possible, are not known a priori. Robot operations outside direct human control, yet under highly predictable conditions, are of little interest for research in autonomy.

2Talk at the ‘ECAL 2011 Workshop on Artificial Autonomy’, Paris.


A meritorious effort towards the creation of a metric capable of ranking autonomous systems has been attempted in some research areas. For example, Anderson et al. (1994) proposed a qualitative and subjective scale suitable for research in psychology and sociology. Recently, Seth (2010) has suggested a more objective measure for robotics, which backs an older approach that bloomed in technological applications, especially military unmanned aerial vehicles (e.g. see Barber and Martin, 1999; Huang et al., 2005). Time and further research will allow a clearer appraisal of the success of such initiatives.

5.3.1 McFarland’s levels of autonomy

Recently, McFarland (2008) introduced a pragmatic distinction among different levels of autonomy that are relevant to robots:

• Energy autonomy: “concerns the ability of the robot to obtain its own energy” (McFarland, 2008, p.15). Typically, state-of-the-art autonomous robots rely on the use of rechargeable chemical batteries. This availability liberates the robot from the constraints derived from a continuous, exogenous energy supply (e.g. an electric wire, which creates obvious limitations on the robot’s freedom of movement). A classical example of this kind of autonomy has already been encountered in the description of Walter’s ‘turtles’.

• Motivational autonomy: “freedom to utilize behavior that is in [the robot’s] own vital interest” (McFarland, 2008, p.19). For example, McFarland refers to the case of a recharging robot, which needs to disengage every now and then from its current activity in order to reach its energy supply (McFarland, 2008). This choice can be simply regulated by the internal information about its energy level. Nevertheless, a more sophisticated system might deploy more articulated behavior, displaying the capacity to consider other motivational alternatives and opportunistic behavior (e.g. delaying the disengagement from a particularly advantageous situation) (McFarland, 2008). Further issues relevant to motivational autonomy will be presented in Section 5.3.2.

• Mental autonomy: “freedom to think up better ways of doing things” (McFarland, 2008, p.22). Despite its obvious interest for those branches of AI dealing with the generation of “artificial thoughts”, this category goes beyond the scope of the present work and will not be explored any further.

A (probably problematic) philosophical analysis of McFarland’s tripartition of levels of autonomy is not within the scope of this dissertation. The three levels of autonomy portrayed by McFarland have been considered here because they introduce an empirically useful distinction.


Considering energy autonomy, the traditional approach using rechargeable batteries manifests an obvious limitation, derived from the necessity of an artificial infrastructure that provides a recharge for the exhausted batteries (a recharging station). Accordingly, this kind of solution is largely unfit for the exploration of vast and non-colonized areas over extended time scales. Several solutions can be deployed in order to overcome this limitation. Solar panel technology can be effectively used to recharge the robot’s batteries by exploiting solar energy. This solution is of course unfeasible in the case of submarine or underground applications, or whenever light is not available with the necessary timing to allow an effective recharge (e.g. the duration of the lunar day is slightly longer than 27 earth-days). Meteorological events can also jeopardize or significantly limit the originally designed energy potential. For example, experience has shown how months-long dust storms, by blocking up to 99% of direct sunlight, reduced the daily recharging capacity of the Mars rovers Opportunity and Spirit by almost a whole order of magnitude (from the expected 700 watt-hours to less than 130 watt-hours)3.

In recent years, several types of fuel cells have offered valid technological alternatives to wall sockets for the recharging of electric energy. Crucial parameters for robotic applications are the volume and weight of the whole fuel cell system (including the tank for the fuel), the availability of the fuel and the efficiency of the energy conversion (from chemical to electrical). Microbial fuel cells (see Paper II in Appendix B) offer a good compromise for robotic applications from all these points of view, and a possible alternative at least in environments naturally provided with reservoirs of water and biomass. McFarland reminds us of the variety of possible behaviors that are deployed in nature in order to cope with foraging. For example, some animals spend little time foraging but with high energy expenditure, while others use the opposite strategy. Some species consume their meal on the spot, whilst others prefer to store it in a reservoir. These choices seem to obey the distribution and availability of food in the environment (McFarland, 2008).

As already remarked, this work will not consider mental autonomy. More generally, it will not even consider mental categories (e.g. consciousness, intentional states, etc.). This is not to deny the importance of mental states, and thus does not imply a behaviorist stance. Indeed, consciousness and intentionality are fundamental elements for the adaptive life of a cognitive agent. Nevertheless, the level of complexity achieved during the reported studies seems unsuitable for the modeling of even an elementary level of mentality.

5.3.2 Action selection

The problem of action selection, i.e. the arbitration among possible competing behaviors within a given context, is obviously related to motivational autonomy as introduced in the previous section. McFarland observes that a robot capable of some level of self-control cannot be a purely reactive system: its behavior must be regulated by its internal state. Thus, motivations can be thought of as “a set of internal reversible processes that influence behavior” (e.g. hunger, thirst, sleep and sexual arousal), whose requirements cannot all be satisfied at the same time and which impose a criterion for the selection that is appropriate to the current circumstances (McFarland, 2008). The selected behavior has the effect of dynamically regulating the set of motivational processes according to a delicate balance of costs and benefits (McFarland, 2008). Indeed, behavior is inherently affected by costs and triggered by benefits, but its history also poses the conditions for both.

3 Information available at http://marsrover.nasa.gov/newsroom/pressreleases/20070720a.html as of November 27, 2011.

Seth (2007), making use of the models of Artificial Life (AL; see e.g. Varela, 1997), highlights the distinction between behavior (defined as “observed ongoing agent-environment interactivity”) and mechanism (“the agent-side structure subserving this interactivity”). The author refers to simple simulated robotic models of foraging behaviors, where the agent has to make choices between the approach or avoidance of different environmental resources (Seth, 1998, 1999, 2001). The analysis of the experiments reveals that (Seth, 2007, p.1548):

“[...] it is possible for a simple form of action selection to arise from the concurrent activity of multiple sensorimotor processes with no clear distinctions among sensation, internal processing or action, without internal behavioral correlates and without any explicit process of internal arbitration.”

In other words, a complex articulation of behavior can be dynamically achieved via distributed mechanisms, where tightly interwoven perception, action and neural control equally contribute to contingent behavioral selection4. On a similar note, Papers I (Appendix A) and III (Appendix C) explore in depth the nature of an analogous dynamic action selection mechanism, self-organized by evolutionary algorithms (Holland, 1992; Goldberg, 1989) in order to maintain an internal energy level.

This line of research represents a clear break with respect to: (i) optimal foraging theory (Stephens and Krebs, 1987), where adaptation is analytically presented, in normative form, in order to interpret and model behavioral data; and (ii) agent-based hierarchical models (either individual or collective) resting on an exogenously predefined and explicit value system, e.g. as in Brooks’ subsumption architectures (Brooks, 1999) or Edelman’s Darwin III (Franklin, 1997). Seth (2007) argues that AL methods, despite their lack of “analytical transparency”, usefully integrate and extend optimal foraging theories by allowing experimental scenarios that are not analytically tractable, and the systematic study of non-explicit internal mechanisms. Similarly to optimal foraging models, AL models that use evolutionary algorithms do apply optimization, and through the use of the fitness function they make use of a concept of “currency” and constrain the set of possible agent-environment interactions. Nevertheless, by renouncing the a priori use of a “decision variable”, they allow the exploration of a wider set of mechanisms (Seth, 2007).

4 Of course Seth (2007) also acknowledges the possibility that such models might overlook the actual complexity of biological action selection, or the evolutionary role of modularity.

Whilst action selection is a crucial contingent problem for the robotic system, the behavior of the robot needs to be projected and interpreted on a more global scale. McFarland’s concept of behavioral stability, i.e. the condition under which the robot avoids situations of “irrecoverable debt” (e.g. falling short of vital resources), relates the selected sequence of micro-behaviors to the temporal horizon of viability of the agent (McFarland, 2008). This issue becomes particularly pressing once the robot has to negotiate its limited time across different activities. For example, a service robot will have to mediate between access to resources that are crucial to its immediate viability (e.g. energy) and its need to somehow produce ‘utility’, i.e. to be perceived as useful by its owner, who would otherwise lose interest, quit maintenance and thus implicitly dictate the end of its operation. Interestingly, McFarland (2008, p.34) acknowledges that: “in the short term [...] some index of the long term costs and benefits is required, in order to guide the animal or robot along the ‘right path’.” One of the most ambitious goals of this thesis is, in fact, to demonstrate that the proprioception of metabolic information represents one of the possible crucial markers of exactly this kind of ‘right path’.
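The trade-off between viability and utility can be caricatured in a few lines of code. The sketch below is our own toy construction, not a model from McFarland: a simulated service robot alternates between working, which produces utility but drains its battery, and recharging; the two switching thresholds form a hysteresis band, so the robot neither chatters at a single threshold nor incurs the ‘irrecoverable debt’ of a fully drained battery. All numerical values are arbitrary.

```python
def run(steps=500, low=0.2, high=0.9):
    # Two-activity action selection regulated by an internal energy variable.
    # 'work' earns utility but costs energy; 'recharge' restores energy.
    # low/high are arbitrary hysteresis thresholds.
    energy, activity, utility, switches = 0.5, "work", 0.0, 0
    for _ in range(steps):
        if activity == "work":
            utility += 1.0
            energy -= 0.05
            if energy <= low:                 # disengage only when energy is critical
                activity, switches = "recharge", switches + 1
        else:
            energy += 0.1
            if energy >= high:                # resume work only when well charged
                activity, switches = "work", switches + 1
    return utility, switches

utility, switches = run()
```

Even this caricature makes the point of the quotation above: the momentary choice is regulated by a single internal variable, yet the thresholds encode an implicit index of long-term costs and benefits, since setting `low` too low (or the recharge rate too slow) would let the robot slide into irrecoverable debt.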

5.3.3 Biological causation in cognitive architectures

Emotions are powerful motivators of behavior (LeDoux, 1996). This observation alone would be sufficient to justify the growing interest in emotions within research on the generation of artificial behavior. Obviously, the actual form that this interest takes tends to reflect the paradigmatic differences in the study of cognition that have been sketched in the previous chapters. For example, classical AI first produced a number of schemata-based approaches to the deployment of artificial emotions (e.g. Grossberg, 1982; Sloman, 1987; Dyer, 1987), seemingly inspired by the theories that see emotions as a particular class of thoughts. More recent insights into the neuroanatomy of emotional behavior translated into computational neuroscience models (e.g. Armony et al., 1995; Lowe, Humphries, and Ziemke, 2009). Models of pre-programmed ‘emotional value systems’ have also been presented in the robotics literature (e.g. Ortony, Clore, and Collins, 1988; Avila-García, Cañamero, and te Boekhorst, 2003; Avila-García and Cañamero, 2004; Cañamero, 2005; Rolls, 2007; Becker-Asano, 2008).

Somatic theories of emotions (see Section 4.2.2) consider emotions as emerging from tangible processes, physically rooted in the body and objectively measurable. As discussed above (Section 4.2.4), this position is almost compelling for a cognitive robotics that takes the challenge of modeling biological embodiment seriously. Several theoretical positions reinforce the appeal of this line of research. For example, Searle (1980, p.422) argues that something is lacking in current cognitive architectures in order to convincingly replicate biological intentionality, assumed as the basic property for cognitive processes:

“It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality.”

According to Searle, cognition is a biological phenomenon and what is missing is “biological causal power”. In the absence of this specific quid, which belongs to the kind of embodiment characterizing living systems, the whole project of building intelligent machines is doomed (Searle, 1980, p.424):

“Could a machine think? My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.”

In a similar but more analytical vein, Bickhard (2009) describes the biological foundations for the emergence of cognition. According to his approach, namely interactivism, there are three basic necessary conditions for a cognitive system. Listed in order of increasingly strict constraints, they are (Bickhard, 2009):

1. the system must display sustained existence far from equilibrium, e.g. as in Rayleigh-Bénard convection, where regular patterns of convection cells form in a fluid maintained under an appropriate heat differential between its bottom and its open surface (see e.g. Strogatz, 1994);


2. the system must be self-maintenant, i.e., as in a candle flame, it must be capable of regenerating the conditions for the process to occur;

3. the system must be recursively self-maintenant, i.e. it must be capable of maintaining the property of being self-maintenant. Bickhard (2009) cites the example of a 'science-fictional candle flame' that can refuel when the candle is almost consumed.

From self-maintenant systems emerge functions, defined as “a contribution, or a tendency to contribute, to the maintenance of a far-from-equilibrium system” (e.g. the flame in the burning candle); from recursively self-maintenant systems emerge representations5, i.e., at their most primitive level, the predication associated with the selection of specific modalities of interaction between the system and its environment, having an anticipatory nature and, consequently, emergent truth value (a detailed set of references by the same author, spanning two decades of intellectual activity, can be found in Bickhard, 2009). Biological dynamics imbue functions and representations with normative properties, as they relate to the very existence of the living system.

According to Bickhard (2009), classical and connectionist computers can merely simulate but not instantiate cognitive properties, for, contrary to organisms, they have no significant properties of self-maintenance and openness (i.e. they do not significantly interact with their environment). On the contrary, the form that far-from-equilibrium existence takes in living systems is a deep ontological condition, rather than an incidental fact, due to the thermodynamic irreversibility of the process. In fact, once the far-from-equilibrium living process is sufficiently disturbed it cannot be restarted, differently from a machine operating far from equilibrium. Bickhard (2009) concedes that battery-operated robots can be considered, in some minimal and marginal sense, far from equilibrium, self-maintenant and recursively self-maintenant. But the reversibility of the machine process marginalizes the normative aspects of function and representation. As Bickhard (2009, p.82) expresses it:

“The degree of far-from-equilibrium status for the ‘body’ of the system involved indirectly affects the degree of normativity. Function and representation are normative to the extent that there is something that is ontologically at stake in their ‘successful’ functioning.”

5According to Bickhard's interactivism, the concept of representation does not follow the mainstream approach. Bickhard remains neutral with respect to the necessity or usefulness of correspondence-based representations. Yet he explicitly rejects the assimilation of representations to correspondences. For a detailed discussion we refer the reader to the cited reference.


5.3.4 Autonomy in autopoietic and enactive systems

In the light of the ideas presented in the previous section, the endeavor to critically reexamine the defining properties of living systems in terms of autopoiesis (from the Greek for 'self' and 'creation' or 'production'; Maturana and Varela, 1980) seems at least creditable. More recent theoretical contributions concur in fortifying this position (e.g. Ziemke, 2008; Ziemke and Lowe, 2009; Di Paolo, 2003). A further reason is that the work on autopoiesis by Humberto Maturana and Francisco Varela originated a more exacting definition of autonomy.

The theory of autopoiesis is a radical redefinition of the concept of living systems, outside mainstream biology, aiming to establish necessary and sufficient conditions for the organization of the living (Maturana and Varela, 1980). An autopoietic system is characterized by: (i) a network of processes that continuously renews the conditions for its self-production, thus (ii) defining in space and time a separate unity with respect to its environment (Maturana and Varela, 1980, 1992; Varela, 1997).

The organization of the system denotes the autopoietic network: it defines the set of interactions that must be satisfied in order to designate the system as a member of a specific class (Maturana and Varela, 1980, 1992). This designation is independent of the structure, i.e. the factual material implementation of the system (Maturana and Varela, 1980, 1992). In fact, the physical matter that constitutes the autopoietic system is in perpetual flux, for its structure is continuously regenerated whilst maintaining its stable organization. If homeostatic systems are defined as systems that stably maintain one or more of their variables within a given range (von Bertalanffy, 1969), then autopoietic systems form a subset of the former, where, crucially, the internal organization is one of the invariant properties (Maturana and Varela, 1980; Varela, 1997).

Therefore, according to Maturana and Varela, unity, identity and autonomy are constitutional rather than accessory properties of the living, and as such central to a research program committed to the exploration of the organization of the living. The centrality of autonomy in the organization of the living is explicitly stated in the primary literature on autopoiesis (e.g. Maturana and Varela, 1980; Varela, 1997). For example, Varela (1997, p.73) asserted:

“I want to start by declaring that I think that understanding of organisms and the living is possible, that defining these terms in a satisfactory manner is not a utopian dream, and that we even have a good deal of the road already charted. However, this is under a fundamental condition: that the autonomy of the living is highlighted instead of forgotten, as it has been...”

Autopoietic systems are obviously thermodynamically open, i.e. they allow a flux of energy and matter to and from their environment (e.g. energetic perturbations, which are the physical basis for perception). Nevertheless, they are operationally closed in their organization, in the sense that the operation of the autopoietic network of processes is self-referential, and as such it can only directly affect itself (Maturana and Varela, 1980, 1992). The autopoietic system interacts with its medium through a history of structural coupling, i.e. the medium and the organism act as two organizationally independent systems whilst inducing structural changes on each other. Adaptation allows for a sustained interaction that mutually preserves the organization of the systems. If this does not occur, the system disintegrates (Maturana and Varela, 1980).

The circular and self-referential nature of the organization of the living can also be identified in cognitive systems. According to Maturana and Varela, an autopoietic organization is actually necessary to explain the phenomenology of perception (Maturana and Varela, 1980, 1992). Accordingly, life and cognition appear as tightly interwoven. Cognition emerges with life as the other side of the same coin, along a continuously graded scale of complexity, spanning from the simplest to the most complex forms of living organization (Maturana and Varela, 1992; Stewart, 1995). Cognition is a structural adaptive mechanism aimed at, and constrained by, the maintenance of the autopoietic, operationally closed organization (Maturana and Varela, 1980). An implication of the operationally closed nature of cognition is that the theory of autopoiesis is “intrinsically and radically non-objectivist” (Stewart, 1995).

Varela (1997, p.73) synthesizes the meaning of giving autonomy a central role with two propositions: (i) “Organisms are fundamentally a process of constitution of an identity” and (ii) “The organism's emergent identity gives, logically and mechanistically, the point of reference for a domain of interactions”. Elaborating on these ideas, Varela draws a circular scheme, here presented in Fig. 5.4. Varela highlighted the nature of an autopoietic account of autonomy, emergent from the cyclic and “complementary” interweaving of the two main propositions, one describing the constitution of a unitary identity, and the other relating this identity to its domain of existence. The living unity introduces a fundamental asymmetry in its relation to the environment: since it plays the active role in this interaction, it quite naturally becomes the privileged point of reference. The organism, engaged in “the permanent, relentless action on what is lacking” for the maintenance of its internal organization, is recognized by the observer as the source of cognitive action. This activity marks the fundamental difference between the autopoietic system and its environment (Varela, 1997).

Within this cycle, meaning emerges spontaneously as an observable property, inherently nested within the tight interaction between the autopoietic system and its environment. “There is no food significance in sucrose except when a bacteria swims upgradient and its metabolism uses the molecule in a way that allows its identity to continue” (Varela, 1997, p.79). Obviously, Varela was not advocating a purely functional interpretation of the emergence of significance (i.e. swimming upgradient in itself has nothing to do with significance). The existence of the domain of interaction between the living unity and its environment, and the set of dynamics that constitute it, is the crucial condition under which we can recognize the significance of a relation.

[Figure: a circular scheme linking 'Identity' and 'Domain of Interactions' through 'Operational Closure' and 'Significance-Information', with relations labeled 'entails', 'intentional link' and 'causes emergence of'; examples listed: autopoiesis, sensorimotor loops, immune networks; cell signaling, perception-action, somatic recognition.]

Figure 5.4: Schematic of an autopoietic account for autonomy (adapted from Varela, 1997).

The legacy of autopoiesis has been carried on by an approach to the study of cognition named enactivism (Varela, Thompson, and Rosch, 1991). Varela himself realized that, despite the careful formalization of the theory of autopoiesis on scientific grounds, some aspects were still unsatisfactory and in need of further work, e.g. the level of detail in the description of evolution and reproduction. In a more recent paper, Froese and Stewart overview a number of “fundamental criticisms of the original concept of autopoiesis” that seem to suggest the necessity of moving beyond classical autopoiesis (Froese and Stewart, 2010). Nevertheless, it could be reasonably argued that the clarity of the powerful original intuition, as found in the primary literature, risks getting blurred by the at times convoluted or even questionable arguments of more recent attempts to expand the theory.

In Paper IV (Appendix D), the ideas expressed by Varela (1997) and Bickhard (2009; Section 5.3.3) will be reconsidered in order to assess a specific and original instantiation of an energetically and motivationally autonomous robotic system based on a hybrid bio-mechatronic architecture. The living component of the system extracts energy from food and water and provides the mechatronic component with power. In return, the latter provides the ability to sense and act in order to seek and fetch food and water from the environment.


Chapter 6

Experimental work

The goal of this chapter is to introduce the author's experimental work. Firstly, a few experimental methods that are essential to the full comprehension of the following papers, namely artificial neural networks, evolutionary algorithms and evolutionary robotics, are presented. Then a collection of selected papers is briefly introduced, to give the reader a general feel for the direction, motivation and main achievements of the author's doctoral research.

6.1 Artificial neural networks

Artificial neural networks (hereafter ANNs) have a long and well-established tradition as an alternative computational paradigm for the study of the mind. In ANNs, the sequential and centralized computational model of symbol manipulation that characterizes classical computationalism is rejected in favor of a parallel and distributed approach (connectionism). In the original experimental work reported in this chapter, ANNs are used as 'neurocontrollers', i.e. adaptive systems that represent, in abstract terms, the 'nervous system' of the robot.

An ANN is built by combining a number of atomic elements into an architecture. The general idea is that each basic unit receives an input vector. Each component of the input is scaled by a weight factor that amplifies or attenuates the corresponding input. Positive and negative weights determine, respectively, excitatory or inhibitory connections. The sum of all the weighted inputs is passed as the argument to an activation function, linear or non-linear, that produces the actual output of the unit. Several kinds of basic units, with similar computational power, have been described in the literature starting in the 1940s. Similarly, different kinds of feedforward and recurrent architectures, together with appropriate training algorithms for the adaptation of their weights to the specific task, have been presented. A general, broad and mathematically rigorous review is offered by Haykin (1998).
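The basic unit just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the cited literature:

```python
import math


def unit_output(inputs, weights, activation=math.tanh):
    """A single ANN unit: the weighted sum of the inputs is passed
    through an activation function (here tanh by default; a linear
    activation can be supplied instead).

    Positive weights act as excitatory connections, negative ones as
    inhibitory.
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return activation(weighted_sum)


# Excitatory and inhibitory contributions partially cancel:
y = unit_output([1.0, 0.5], [0.8, -0.4])  # tanh(0.8 - 0.2) = tanh(0.6)
```

With a linear activation (`activation=lambda s: s`) the unit reduces to a plain weighted sum.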


In addition to their growing technical use, ANNs can be used and interpreted as models of the biological nervous system. As with any abstract description, ANNs fail to incorporate several properties of the biological nervous system. For example, properties related to spatial arrangement (e.g. localization, the energetic cost of long inter-unit connections, spatial segregation in brain gyri) or biochemical inertial properties are not acknowledged by the most common models. Nevertheless, even extremely simple ANNs display peculiar capacities and idiosyncrasies that make them particularly appealing for the modeling and simulation of mental properties (Elman et al., 1997).

Despite the proliferation of applications of feedforward ANNs in connectionist research and technology, the most interesting developments, particularly relevant to robotics, exploit the properties of recurrent ANN architectures (RNNs). In recent years, particularly inspiring work has been brought forth by Jun Tani and his group. This body of work explores the modulation of the behavior generated by RNNs and the interaction of networks operating at different time scales (Tani and Nolfi, 1999; Tani and Ito, 2003; Tani, 2003; Paine and Tani, 2004, 2005; Ito and Tani, 2004; Ito et al., 2006; Yamashita and Tani, 2008).

Nevertheless, although extensively tested, RNNs will hardly be mentioned in the original experimental work presented in this thesis. Our point is that, in order to achieve interesting dynamics, we can exploit the complex dynamics of interaction between the agent and its environment. In other words, we observed the kind of behavior that is typically displayed by non-linear dynamical systems emerge from the non-linear nature of the motor-sensory mapping, rather than from a complexification of the sensorimotor map (which in our experiments was typically implemented as a trivial feedforward ANN with no hidden layers).
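For concreteness, a 'trivial' feedforward neurocontroller of this kind, with no hidden layers, can be sketched as follows. The weight, bias and sensor values are illustrative assumptions, not taken from the actual experiments:

```python
import math


def feedforward_controller(sensors, weights, biases):
    """A feedforward ANN with no hidden layers: the sensor vector is
    mapped directly onto motor activations, one weighted sum plus bias
    per motor, squashed by tanh into [-1, 1].
    """
    return [math.tanh(sum(s * w for s, w in zip(sensors, row)) + b)
            for row, b in zip(weights, biases)]


# Two motors driven by three (hypothetical) infrared readings in [0, 1]:
motors = feedforward_controller(
    [0.2, 0.9, 0.1],
    weights=[[1.0, -0.5, 0.3],   # weights of motor 1
             [-0.2, 0.8, 0.1]],  # weights of motor 2
    biases=[0.0, 0.1])
```

Despite its simplicity, such a map closed in a loop with the environment can produce the rich dynamics discussed above.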

6.2 Evolutionary algorithms

In our experiments the adaptation of the neurocontrollers is achieved via a standard evolutionary algorithm (EA; Holland, 1992; Goldberg, 1989). This class of algorithms is a powerful optimization tool in situations of uncertainty (when analytical or other heuristic methods fail or do not achieve satisfactory performance). With explicit reference to robotics, they have been nicely and intuitively presented by Braitenberg (1984). Imagine building a collection of actuated robots with a very simple random sensorimotor system. Once the population of robots is completed, drop it on a table and leave it free to move. Each time a robot falls from the table, pick another robot from the table, clone it (i.e. create an exact copy of it) and at the same time introduce a small change in the clone's sensorimotor circuit. Robots that, because of their specific sensorimotor controller, tend to spend more time on the table are more likely to be selected for cloning. Over a longer timescale this process will tend to create a population of robots that skillfully avoid falling off the table (Braitenberg, 1984).
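Braitenberg's thought experiment follows the standard EA loop of selection and cloning with modification. A minimal sketch, with a toy scalar genome and fitness function standing in for 'time spent on the table' (all names and parameters are ours, illustrative only):

```python
import random


def evolve(population, fitness, mutate, generations=50):
    """Minimal EA loop: the fitter half of the population is cloned,
    and each clone receives a small random modification."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:len(scored) // 2]  # fittest half survives intact
        clones = [mutate(random.choice(parents))
                  for _ in range(len(scored) - len(parents))]
        population = parents + clones
    return max(population, key=fitness)


# Toy problem: evolve a scalar 'weight' toward the target value 0.7.
random.seed(42)
best = evolve(population=[random.uniform(-1, 1) for _ in range(20)],
              fitness=lambda w: -abs(w - 0.7),
              mutate=lambda w: w + random.gauss(0, 0.05))
```

Since the best individual is always carried over, fitness never decreases across generations, mirroring the fact that long-surviving table robots keep getting cloned.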


As a computational metaphor of natural evolution, there are two essential components in EAs: (i) reproductive success of the fittest elements within the population; (ii) the capacity to create descent with modification. Typically, the former component implies a metric that explicitly quantifies performance (Beer, 1995).

LeDoux (1996) makes three important observations about natural evolution:

• a measure of fitness cannot be identified in natural evolution, where fitness is intrinsically inherent to the natural environment rather than externally accessible;

• natural evolution is terribly inefficient on a small time scale, whilst powerful on evolutionary time;

• natural evolution is a rather peculiar kind of optimization technique, for (LeDoux, 1996, p.123): “evolution should not be thought of as an ascending scale. It is more like a branching tree.”

We can translate these observations in terms of EAs. Firstly, 'evolutionary time' implies a procedure where a sufficiently large evolving population can be repeatedly tested over several generations. Secondly, in the next section we will see how the creation of more naturalistic agent-environment dynamics relaxes the need for explicit metrics. Finally, we should keep in mind that a well-balanced use of EAs in dynamic environments should preserve the tendency of natural evolution to maintain diversity within a population, so as to better perform in case of environmental changes.

A technique that combines ANNs and EAs, i.e. that makes use of EAs in order to adapt the neurocontroller of a robot, will be the subject of the next section.

6.3 Evolutionary robotics

Evolutionary robotics is a relatively recent “technique for automatic creation of autonomous robots” (Nolfi and Floreano, 2000). This methodology makes use of EAs to adapt (i.e. determine the free parameters of) the ANN-based neurocontroller of a robotic system. This research field was pioneered by the Research Group of Artificial Life in Rome, and by teams at the Swiss Federal Institute of Technology in Lausanne, the University of Sussex at Brighton and the University of Southern California (Nolfi and Floreano, 2000; Harvey et al., 1997, 2005).

Despite the simplicity of the robotic systems that are typically used, the presence of the robot strongly characterizes this method. Historically, connectionism developed as a disembodied alternative to classical AI. The evolutionary robotics approach emphasized at least rudimentary agent-environment dynamic interactions. This is a first crucial property of evolutionary techniques applied to robotics (Beer, 1995, p.192): “This ability to automatically tailor agent dynamics to fit the dynamical and statistical structure of a given environment is a significant advantage of automated agent design techniques.” At the same time, we have to acknowledge the challenging nature of the analysis of systems created with the methods of evolutionary robotics (Nolfi and Floreano, 2000, p.53): “Evolved controllers cannot be fully understood by isolating them from their environment and looking at their structure because their functioning has become intimately related to the physical interactions between the robot and the environment.” EAs have been used to study the interaction between evolutionary adaptation and lifelong learning (for a review see Nolfi and Floreano, 2000), and in recent years to coevolve the neurocontroller together with the body plan in real and simulated robots (e.g. Lipson, 2005; Bongard, 2011).

Within this context, EAs offered the designer the opportunity to be minimally invasive in the determination of the solution to the given task. A clear example of the avoidance of this 'designer's bias' is offered by the homing behavior described by Floreano and Mondada (1996). They introduced a simple (real) Khepera robot, provided with a simulated battery, into a square arena. Within the arena, a light turret marked a recharging area and permeated the environment with a light gradient. The robot could overcome the natural discharge of its battery by periodically entering the recharging area, thus instantaneously refilling the battery. A discharged battery would prevent further movement ('death'). The potential lifetime was three times the duration of a charged battery. The fitness function set by the authors simply rewarded motor activation (movement) without collision with the boundary walls of the arena (Floreano and Mondada, 1996). Although nothing had been explicitly mentioned about the recharging area or the need to recharge, this fitness function implicitly poses a requirement on the robot's capacity to get recharged. In fact, a life shorter than the available potential lifespan, due to battery discharge, will not allow for much action. In this way the authors only posed a minimal and naturalistic constraint on the selective process.
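The spirit of such an implicit fitness function can be sketched as follows. This is a paraphrase of the description above, not the authors' actual formula; the variable names and the per-step collision encoding are our own assumptions:

```python
def fitness(trial):
    """trial: a list of (motor_speed, collided) tuples, one per time step.

    Movement is rewarded and collisions suppress the reward; recharging
    is never mentioned. The requirement to recharge is only implicit:
    a robot whose battery dies early stops moving and so stops scoring.
    """
    return sum(speed for speed, collided in trial if not collided)


# A three-step life: two clean moves, then a collision step scores nothing.
score = fitness([(1.0, False), (0.5, False), (1.0, True)])
```

Note how nothing in the function refers to the battery; selection pressure toward recharging emerges from the length of the scored life alone.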

Here we can observe a further crucial property of evolutionary robotics at work. Nolfi and Floreano (2000) introduce the concept of fitness space to emphasize the general properties of fitness functions (Fig. 6.1). The fitness space, in which a generic fitness function can be roughly characterized as a point, can be represented as an abstract three-dimensional space with the following dimensions:

• the external-internal dimension determines to what extent the fitness function makes use of information that is available to an external observer (e.g. absolute distance from the recharging area) or rather internally available to the robot (e.g. activation of the battery sensor);

• the functional-behavioral dimension quantifies how much the fitness function poses a functional constraint (e.g. maintaining a predetermined timing and trajectory to the recharging area) or a behavioral one (e.g. reaching the recharging area);

[Figure: a three-dimensional fitness space with axes labeled functional-behavioral, external-internal and implicit-explicit; one corner is marked 'traditional optimization', the opposite corner 'self-organization'.]

Figure 6.1: Fitness space (see text; adapted from Nolfi and Floreano, 2000).

• the explicit-implicit dimension estimates how much analytical detail is required in the fitness function (e.g. an explicit fitness function might reward the maintenance of a certain range of energy level, speed, etc., whilst an implicit one might select robots that do not run out of energy after n time steps of their life). A naturalistic implicit requirement is viability, maintained up to the moment of reproduction.

A fitness function that combines explicit, external and functional characteristics takes the use of EAs closer to a traditional optimization technique. The choice of an implicit, internal and behavioral fitness function promotes the emergence of self-organized solutions, where the designer is maximally detached from the solutions to the task returned by the algorithm. In this case (Nolfi and Floreano, 2000, p.118): “Artificial evolution: a) tends to produce a variety of different solutions for a single problem; b) tends to select solutions which rely on emergent behavior resulting from the interaction between the agent and the environment, and less on internal mechanisms (i.e. robots that are rather simple in their control system).” Similar points have recently been raised by Nelson, Barlow, and Doitsidis (2009).
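The explicit-implicit contrast in the examples above can be made concrete with a sketch. The thresholds and variable names are illustrative assumptions; `energy_trace` stands for the robot's energy level recorded at each time step of its life:

```python
def explicit_fitness(energy_trace, lo=0.4, hi=0.8):
    """Explicit: reward every step spent inside a prescribed energy range.

    The designer analytically specifies what 'good' looks like.
    """
    return sum(1 for e in energy_trace if lo <= e <= hi)


def implicit_fitness(energy_trace, n=100):
    """Implicit: all-or-nothing viability after n time steps of life.

    Nothing is said about how the robot should manage its energy;
    any strategy that keeps it alive scores equally well.
    """
    alive = len(energy_trace) >= n and min(energy_trace[:n]) > 0
    return 1.0 if alive else 0.0
```

The implicit variant leaves the solution space maximally open, which is exactly what favors the self-organized solutions discussed above.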

We would like to conclude this section by briefly reporting three results in evolutionary robotics. We consider their message so clear and important that it should belong to the basic knowledge of any engineer, rather than only to the specialized community of evolutionary robotics. The first result comes from a discussion of the problem of perceptual aliasing, i.e. the situation where remarkably similar inputs require different outputs (Nolfi and Floreano, 2000; see also Section 3.2.5). For example, the disambiguation between a wall and a small cylinder might be a non-trivial task for a feedforward ANN if the sensory input only comes from a small array of infrared sensors (Nolfi, 1997a; Nolfi and Floreano, 2000). Similar problems led Clark and Thornton (1997) to draw a distinction between type-1 and type-2 problems. In the former case, the regularities in the data set facilitate the production of a correct output. In the latter, regularities are marginal and hidden within the statistics of the inputs, and that makes the production of the correct output problematic (Scheier, Pfeifer, and Kuniyoshi, 1998; Clark and Thornton, 1997; Nolfi, 1997a; Nolfi and Floreano, 2000). In the mentioned case (disambiguation between walls and cylinders), the variation in sensory information induced by one and the same object from different angles and distances can be larger than the variation coming from different objects. Therefore, achieving a correct classification based on the static sensory input is a hard task for a feedforward ANN with no hidden layers (Nolfi, 1997a; Nolfi and Floreano, 2000). Nevertheless, the problem can be drastically simplified if we allow behavior to structure the sensory input into a time sequence. In other words, a robot that can also act in its environment during disambiguation, rather than statically sense, will face a simpler problem. In our example, a feedforward ANN with no hidden layers controlling a simple moving robot can solve a problem that called for further computational resources in the static case. This is a first example where traditional engineering methods based on the divide et impera metaphor (partitioning the problem into subtasks, e.g. solving the problem of perception separately from the problem of navigation) might be suboptimal.

A second example of how the divide et impera approach might be suboptimal comes from an experiment where a simple robot was asked to track, grasp and clear small cylinders from an arena (Nolfi, 1997b; Nolfi and Floreano, 2000). The author adapted different neurocontrollers by EA. Among others, he compared a hardwired modular architecture with a self-organized one. In the former, one module took control of the robot when the gripper was empty (thus being responsible for the ability to avoid walls, find cylinders and grasp them); the other module kicked in when the gripper was full (hence providing the ability to avoid other cylinders, find a wall, stop in front of it and release the cylinder). The latter architecture self-organized its own arbitration mechanism: two different modules were allowed to drive the robot's motors and alternate in controlling the actuation. The 'emergent' modular architecture outperformed all the other neurocontrollers by accelerating evolution, achieving higher fitness and developing more robust controllers. Interestingly, the analysis showed a lack of correspondence between the use of the two modules and the distal behaviors that a human observer would intuitively describe (Nolfi, 1997b; Nolfi and Floreano, 2000).

The final result comes from the evolution of the homing behavior already mentioned as a paradigmatic example of an implicit fitness function (Floreano and Mondada, 1996). In the analysis of an evolved architecture, the authors showed how the activation of the internal units of the recurrent neurocontroller implemented a rather impressive 'topological map' of the salient properties of the robot's environment (Floreano and Mondada, 1996). This suggests that detailed representations might emerge from an adaptive process that is centered on the machine's perception of the task, rather than being directly imposed, via pre-wiring or predefined learning, by the human designer.

The following section will briefly introduce a selection of articles reporting the original experimental work in evolutionary robotics developed by the author.

6.4 Overview of included publications

6.4.1 Paper I

Published as: A. Montebelli, C. Herrera, and T. Ziemke (2008). On cognition as dynamical coupling: An analysis of behavioral attractor dynamics. Adaptive Behavior, 16(2-3):182-195.

In February 2007, we started to work on an experimental setup where physical energy, representing the outcome of an abstract metabolic mechanism, would play a central role. Our work started with the replication in simulation of the results presented some ten years earlier by Floreano and Mondada (1996). The main elements that triggered our interest in this work were: (i) energy as a motivational determinant of behavior; (ii) self-organization of the system via basic requirements for survival. In fact, with reference to the latter, the system adapted through a standard EA (Holland, 1992; Goldberg, 1989), whose fitness function simply rewarded the capacity to find and exploit the available energy source to recharge the simulated battery of the robot, without explicit indications of what should be done and how (Floreano and Mondada, 1996).
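The flavor of such an adaptive loop can be sketched in a few lines. This is an illustrative toy, not the setup of Floreano and Mondada (1996) or of Paper I: the genome length, the selection scheme and above all the placeholder fitness (which stands in for a full robot simulation) are assumptions chosen only to show how a fitness function can reward battery level without prescribing any behavior.

```python
import random

random.seed(0)  # for reproducibility of this sketch

GENOME_LEN = 30   # weights of a hypothetical neurocontroller
POP_SIZE = 20
GENERATIONS = 10

def evaluate(genome):
    """Placeholder fitness: the real setup runs a full robot simulation and
    rewards only the accumulated battery level, with no explicit indication
    of what should be done or how."""
    battery, fitness = 1.0, 0.0
    for step in range(100):
        battery -= 0.01                        # constant energy drain
        if genome[step % GENOME_LEN] > 0.0:    # stand-in for 'reached the charger'
            battery = min(1.0, battery + 0.02)
        fitness += max(battery, 0.0)           # reward staying charged
    return fitness

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate, reverse=True)
    elite = ranked[:POP_SIZE // 4]             # truncation selection
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=evaluate)
```

Note that nothing in the fitness function mentions walls, chargers or trajectories: whatever strategy keeps the battery charged is selected for, which is precisely the sense in which the behavior self-organizes.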

Soon we produced our own setup, and our work spanned different domains of investigation, ranging from a more high-level perspective on affect and emotion (Herrera, Montebelli, and Ziemke, 2007) to the quantitative analysis that resulted in this paper. The evolved robot was obviously capable of handling the elementary task at hand. Nevertheless, I felt compelled to understand how the system organized its behavioral potential during adaptation in order to achieve effective behavior. Patient observations slowly translated into a map, systematically relating different classes of behavior to different energy levels.

The main contribution of this work, which resulted in two publications (Montebelli, Herrera, and Ziemke, 2007, 2008), is a methodology that allowed us to expose the dynamic structure of the system’s phase space as a function of the energy level (Fig. 6.2), where the system is defined as the environment-agent coupling. This method highlights the nature of the self-organized dynamic action selection mechanism, adapted via EA and achieved without explicit rules. We showed that there is no static behavioral repertoire, but rather a continuous remodulation of the phase space that creates, reshapes and destroys behavioral attractors. At the same time we emphasized the importance of the interactions between multiple time scales: the fast time scale of normal sensorimotor interactions as contrasted with the slow time scale of the arbitrary energy mechanism.

Figure 6.2: Update of Fig. 5 in Paper I, correctly integrating into the statistics of attractor ‘H’ the instances of the non-existing attractor ‘B’ (see text). The information is represented as in the original picture. Top graph: relative frequency of the nine observed behavioral attractors over all 500 replications. Bottom graph: the intensity of the pixels in each column (corresponding to attractors A-I) represents the relative frequency of the behavioral attractor over all ten replications at each level of energy (adapted from Montebelli, Herrera, and Ziemke, 2008).

As in any debut, some naivete can easily be spotted in this paper, in its at times fussy prose. Unfortunately, a more serious problem emerged with time in the classification of the behavioral attractors. What in the paper is called attractor ‘B’ is, in fact, but an instance of attractor ‘H’. This became clear during the further experimental work that resulted in Paper V (Appendix E) and Paper VI (Appendix F). In the updated setup we forced strong perturbations in the simulated robot’s sensory flow (see details in Appendix E and F). We observed how perturbations quite systematically induced instances of attractor ‘H’ at the values of energy where we would expect attractor ‘B’ instead.

Accordingly, we can now redraw a more correct version of Fig. 5 in Paper I for the lowest levels of energy (Fig. 6.2). This mistake does not affect the core of the result; rather, it concealed an even richer and more interesting dynamic landscape. For each energy level, a higher number of concurrent attractors than we visualized in Fig. 6.2 might actually be hiding in the phase space. In most cases their expression as actual attractors of the system might be highly improbable, due to their extremely limited basins of attraction.

It seems interesting to comment on this picture in the light of what Parisi and Petrosino (2010) call the motivational and the cognitive level of behavior. In our scenario, the energy level plays the functional role of the control parameter. Its different levels set the system in different motivational states. For low levels of energy, the behavior of the robot is biased towards a broad exploration of its environment in search of the source of recharge. Conversely, for high levels of energy, the robot is motivated to remain near the area where it had recently found a recharge, thus adopting local behaviors. Within this main motivational drive, the robot’s dynamics select the actual behavior according to the laws of dynamical systems, since a number of behavioral attractors compete with each other at each level of energy.

The contributions of the various authors of this paper should be specified in more detail. Carlos Herrera designed the final experimental setup and ran initial simulations, including the ‘energy binary-switch’ scenario. He also inspired the analysis resulting in Figure 6. Furthermore, the first section of the paper, ‘Introduction and background’, is mostly his work. Tom Ziemke inspired our general motivations. In particular, he primed us with the idea of internal robotics (discussed in Chapter 4). Finally, he contributed to several parts of the manuscript. Within this collaboration, the rest of the paper should be considered as entirely due to the author of this thesis.

6.4.2 Paper II

Currently under review: A. Montebelli, I. Ieropoulos, R. Lowe, C. Melhuish, J. Greenman, T. Ziemke (2011). An oxygen-diffusion cathode MFC model for simulation of energy-autonomous robots. Submitted for journal publication.

The arbitrariness of the energy mechanism in the work reported in Paper I had been an obvious reason for partial discontent. We had shown that non-neural internal mechanisms could be crucially involved in the determination of motivated behavior, yet in doing so we had to rely on a simple and artifactual energy mechanism. The challenge was clear: could we find a sound reference for a more realistic energy dynamic?

In October 2009, Robert Lowe and I visited the Bristol Robotics Lab in Bristol, UK. Their prototype microbial fuel cell (MFC) powered robot (EcoBot-III, see Ieropoulos et al., 2010) was exactly the kind of artificial metabolic system we were after. Accordingly, we started to develop a model of the energy generation dynamics in MFCs. A low computational overhead on the already computationally demanding robot simulator seemed a necessary precondition for the ideal model. This ruled out the few existing mathematical models of MFCs and suggested focusing on macroscopic quantities at a high level of abstraction.

A first version of the model was completed by spring 2010 and was presented at the ALife XII conference in August 2010 (see the following paper in Section 6.4.3; Montebelli et al., 2010). The design of the model was complicated by the fact that the available data were far from optimal. Several variables that are crucial for modeling purposes were largely out of control in the experimental conditions under which the available data were collected, and this complicated the identification of the model parameters. We necessarily had to filter out as much background noise as possible, and still try to extract a satisfactory representation of the actual MFC power dynamics. This process translated into a drastic revision of the basic equations of the model, completed by fall 2010, which has now reached the stable form presented here.
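To give a flavor of what a macroscopic, low-overhead energy model of this kind might look like: the sketch below is purely illustrative, not the model of Paper II. The coefficients, variable names and functional form are assumptions; the only faithful feature is the level of abstraction, i.e. a few coupled macroscopic quantities updated at negligible computational cost.

```python
def mfc_step(substrate, hydration, dt=1.0,
             k_power=0.05, k_consume=0.01, k_evap=0.001):
    """One update of a toy MFC-like cell (hypothetical dynamics): power output
    scales with available substrate ('food') and hydration, substrate is
    consumed as energy is drawn, and hydration slowly evaporates."""
    power = k_power * substrate * hydration
    substrate = max(0.0, substrate - k_consume * power * dt)
    hydration = max(0.0, hydration - k_evap * dt)
    return power, substrate, hydration

# Running the cell without refuelling: power decays as resources deplete.
s, h = 1.0, 1.0
trace = []
for _ in range(200):
    p, s, h = mfc_step(s, h)
    trace.append(p)
```

A model of this shape costs a handful of arithmetic operations per simulation step, which is what makes it suitable for running in parallel with an already demanding robot simulator.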

One might question the reason for investing time and energy in the modeling of MFCs, especially considering the main focus of this work on cognitive robotics. Would it not be sufficient to simply implement some reasonable yet arbitrary artificial metabolism? I will use a metaphor to make my position clear. Consider all biological organisms. In the history of life on our planet many different strategies have been deployed, refined, upgraded, discarded, rediscovered. They might have been suboptimal, yet viable strategies can be traced back several thousand, million or even billion years. However, some classes of solutions, no matter how efficient, never emerged in natural history, and probably never will. Skating and cycling represent two energy-efficient forms of movement in space, yet they are human cultural constructs, since no macroscopic biological organism (to our knowledge) ever developed anything similar. There might be something intrinsic and structural in the organizational possibilities of biological matter that prevents the generation of, e.g., wheeled animal extremities.

Similarly, we can easily imagine more or less biomimetic artificial metabolisms and provide our robotic models with their functional support. Operatively, the implications of such solutions might lead to similar or even better general performance with respect to what is achieved with the use of an MFC model. Nevertheless, MFCs keep us in contact with an implementable (and actually implemented) technology, and as such with the domain of the real, of the objectively ‘possible’. In deciding to invest our efforts in MFC modeling, we were fully aware that this choice would ground our work on the track of physically realistic hybrid systems, rather than blur our reference into an arbitrary, purely synthetic world.

Ioannis Ieropoulos is the author of the experiments that provided us with real MFC data. He also significantly improved the general presentation of MFC technologies (Section 2, ‘Microbial fuel cells’). Long conversations with Robert Lowe led us to a clearer conceptual framework for the development of the model. He is the main author of Section 4, ‘A robotic demonstration’, and, together with Ioannis, also offered precious feedback on the manuscript. Chris Melhuish, John Greenman and Tom Ziemke were involved in a few yet enlightening and inspiring discussions, and also helped improve the manuscript. The remainder of the work, and in particular the analysis of the data, the development of the model and most of the writing, was independently accomplished by the author of this thesis.

6.4.3 Paper III

Published as: A. Montebelli, R. Lowe, I. Ieropoulos, C. Melhuish, J. Greenman, T. Ziemke (2010). Microbial Fuel Cell driven behavioral dynamics in robot simulations. Proceedings of the 12th International Conference on the Synthesis and Simulation of Living Systems (ALife XII), pages 749-756, The MIT Press.

The first version of the MFC model was used for this simple experiment, presented at the ALife XII conference (Montebelli et al., 2010). The simulated MFC-powered robot was expected to exploit and negotiate between the two environmental resources (water and food) necessary for its energy production and behavioral stability. The system displayed a motivational modulation of behavior similar to the one demonstrated in Paper I. In this case two control parameters (levels of chemical energy in the substrate) dictated the local subset of behavioral options from which the actual behavior emerged.

Also, the limitation on the available energy resulted in interesting examples of behavioral diversity: different agents achieved similar performance by continuous or pulsed behavior. In nature, the pygmy three-toed sloth (Bradypus pygmaeus) is an arboreal mammal populating small areas of Central America, living and feeding on red mangroves. It is known for its extremely slow yet persistent movements. On the other hand, crocodiles are prototypical animals that display an explosive release of energy, followed by long periods during which they stay dormant, thus conserving energy.

The present paper was broadly created within the same network of collaborations described for the previous one. Nevertheless, the practical experimental work and the writing reporting it were carried out independently by the author of this thesis.


6.4.4 Paper IV

Accepted for publication: A. Montebelli, R. Lowe and T. Ziemke (2012). Towards Metabolic Robotics: Insights from Modeling Embodied Cognition in a Bio-mechatronic Symbiont. Artificial Life, in press.

This paper is the least technical of this whole collection. It is based on a large part of my experimental work and puts it in a broader perspective. The general question that it explores is: what are the general consequences of choosing a robotic system with the characteristics detailed in the previous papers? A first element to consider is the low rate of energy made available by the MFCs powering the simulated robot. What are the current and foreseeable limits of such a system? And what is peculiar about it? What remains to differentiate it from more traditional robotic systems once we remove any energy constraint, so that the simulated robot can be provided with virtually infinite energy?

Energy constraints force both MFC- and traditional battery-powered robots to create a tight interaction with their environment. Nevertheless, an MFC can couple to a natural rather than an artificial environment (where by ‘artificial’ we mean a human environmental artifact, e.g. one provided with power sockets for battery recharge), and it necessarily has to establish and maintain an ecological balance with it.

This paper also explores in greater detail the role of signals at different time scales. For years I have looked with interest at the elegant experiments with recurrent neural networks with parametric bias (RNNPB) developed by the group of Jun Tani (Tani, 2003; Tani and Ito, 2003; Ito et al., 2006). Apparently, the parametric signal, with its slow variations compared to the sensorimotor flow, plays in Tani’s robots a role very similar to that of hydration and chemical energy in our setup. Nevertheless, we should not overlook that the low-frequency control signals emerging from our artificial metabolism are deeply rooted in a mechanism that determines the system’s ‘health’.

Low-frequency metabolic dynamics measure the health and viability of the relation between the robot and its environment. They associate the high-frequency sensorimotor flow, which characterizes the interaction in a given environment at a more contingent level, with the essence of adaptivity: organismic well-being. By neglecting this fact, cognitive science risks remaining trapped in contingent and local dynamics. Cognitive architectures are deprived of a potentially powerful mechanism for providing them with intrinsic motivations. Neglect of internal bodily dynamics trivializes and marginalizes the role of the body within the cognitive and adaptive process, and this is probably the crucial claim of this whole thesis.

For this and all the remaining papers, the author of this thesis was mainly responsible for the ideas, the writing and (where it applies) the production of experimental material. Long discussions with Robert Lowe were fundamental for the creation of a clearer theoretical frame of reference. Tom Ziemke often offered precious suggestions and helped improve the quality of the final manuscripts.

6.4.5 Paper V and VI

Paper V, published as: A. Montebelli, R. Lowe and T. Ziemke (2009). The cognitive body: from dynamic modulation to anticipation. In G. Pezzulo, M. Butz, O. Sigaud, and G. Baldassarre, editors, Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2009), pages 132-151, Springer.

Paper VI, published as: A. Montebelli, R. Lowe and T. Ziemke (2010). More from the Body: Embodied anticipation for swift re-adaptation in neurocomputational cognitive architectures for robotic agents. In J. Gray, and S. Nefti-Meziani, editors, Advances in Cognitive Systems, pages 249-270, IET.

The results described in the paper in Section 6.4.1 clearly taught us that, through the non-neural internal artificial metabolism, a lot of ‘knowledge’ was actually stored within the system. This knowledge could be easily evoked and exploited with a much broader scope than the original task, e.g. when the robot was faced with novel situations. Furthermore, we realized that the set of system control parameters that can be extracted from the artificial metabolism represents a drastic dimensional reduction with respect to the number of parameters that can be adapted in the artificial neural network. How could these two ideas be put to work?

Paper V focuses more on the experimental results, whilst Paper VI (Appendix F) aimed at organizing the material into a broader theoretical picture. The robot’s task, initially limited to finding and exploiting the recharging areas available in the environment, was further extended: during intervals of perceptual perturbation (see details in Paper V), the recharging areas produced a negative effect (energy drain) rather than a positive one. The system (a simple cognitive architecture with the capacity to modulate its metabolic dynamics), initially trained on the basic task, demonstrated its capacity to readapt swiftly, via random changes, to the extended and novel scenario.
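The readaptation idea can be sketched in a minimal form. This is an illustrative toy, not the thesis architecture: the objective function, the parameter names and the hill-climbing scheme are assumptions. The point it illustrates is the dimensional reduction: instead of re-adapting a full neural weight vector, only a couple of non-neural ‘metabolic’ parameters are perturbed at random until performance on the novel task recovers.

```python
import random

random.seed(1)  # for reproducibility of this sketch

def performance(metabolic_params):
    """Toy stand-in for a robot trial on the novel task: performance peaks
    when the two metabolic parameters hit a (hypothetical) viable point."""
    target = (0.3, -0.6)
    return -sum((p - t) ** 2 for p, t in zip(metabolic_params, target))

def readapt(params, sigma=0.2, trials=500):
    """Random hill climbing restricted to the low-dimensional bodily space;
    the (frozen) neurocontroller weights are never touched."""
    best, best_perf = list(params), performance(params)
    for _ in range(trials):
        candidate = [p + random.gauss(0, sigma) for p in best]
        perf = performance(candidate)
        if perf > best_perf:
            best, best_perf = candidate, perf
    return best, best_perf

adapted, perf = readapt([0.0, 0.0])
```

Two parameters, rather than the hundreds of weights of a neurocontroller, are enough to steer the system back towards viability; this is the sense of the dimensional reduction referred to above.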

During the creation of the manuscripts, though, a remarkable similarity emerged with the work of cyberneticist Ross Ashby (see Chapter 5). Despite obvious technical differences, and a different emphasis on some specific aspects (e.g. information compression), our work fits well within the theoretical background drawn some six decades earlier by Ashby (1960).


Chapter 7

Conclusions

In a recent book, Hutter (2005) presented a formal theory of problem solving for unknown goals and environments, namely universal artificial intelligence. The author offers a mathematical proof of the optimality of the algorithm; in other words, he proves it to be the fastest possible algorithm for well-defined problems. In mathematics, a well-defined expression implies a completely unambiguous definition, i.e. a single possible interpretation. Indeed, there is no reason to question the intellectual importance of such a result, or the soundness of a method that takes the formal approach to the point of providing optimal solutions. However, the crucial point is the specific stance in relation to cognition. Which fraction of the problems that biological cognizers face during their daily roaming in the wild takes a mathematically well-defined form? The intuition that underlies this thesis is that a sensible answer would be: “a small fraction”. The hypothesis of well-definedness appears hardly compatible with natural organism-environment dynamics. Despite the popularity of artificial intelligence approaches, it seems reasonable to look for a solution to the problem of survival and reproduction in the domain where such a solution has apparently been found and put on trial: the realm of biological living systems.

The research reported in these pages starts with the hypothesis that biological mechanisms must be foundational to cognition. Cognitive science should take basic biological mechanisms into account in order to achieve a deeper understanding of cognitive phenomena. To better understand cognition we have to take biology more seriously. Biology experienced a similar process, achieving breakthrough advances once it started to consider physicochemical mechanisms as its sound and solid conceptual substrate. Drawing a chain of physical causal relations is paramount to a thorough understanding of the underlying mechanisms of cognition, and a conceptual step that cannot be ignored or bypassed.

Crucially, the needs of the biological body do not just constitute a burden of constraints. The method of artificial evolution, the best theoretical model we have for studying natural evolution and its dynamics, has shown how the process of evolution can rather systematically turn limitations, and other supposedly unrelated phenomena available in the environment, into opportunities. For example, an electronic oscillator with no internal clock, designed by an evolutionary algorithm, might surprise its human designers by discovering and exploiting the internal clock of a nearby computer (Bird and Layzell, 2002).

7.1 Metabolism as a motivational engine

It should be added that internal bodily mechanisms are not merely incidental phenomena accessible to cognition. They constitute the basic dynamics that keep organisms alive and healthy. Metabolic variables are fundamentally informative of the organism’s state and well-being, and of its chances to survive and successfully contribute to the maintenance of its species by reproduction.

Once this has been clarified, it should come as almost no surprise that metabolic information proved to be a powerful motivational mechanism for our simulated robotic systems. This is particularly clear in the analysis outlined in Paper I (Appendix A), where the energy level determines the general behavioral attitude of the robot. During energy deficit, and when the adapted modalities of interaction with the environment apparently do not provide what is needed (i.e. energy recharge), the robot is biased towards exploration, i.e. searching for what is necessary and not available. For high energy levels, the robot tends to remain in the proximity of safe places, which arguably provided energy in the recent past and could potentially keep doing so in the future. A similar mechanism, presented in Paper III (Appendix C), applied to a two-dimensional scenario, modulating access to ‘food’ and water, resources that were crucial to the robot’s availability of energy.

The main contribution that emerges from this thesis is the observation that, through evolutionary adaptation, the slow dynamics of metabolic information play a fundamental role in the determination of the robot’s behavior. We can distinguish two levels:

1. Metabolic signals ground motivational forces that structure the sensorimotor flow. Such a capacity is functional to the accomplishment of the robot’s mission. For example, with reference to Paper I, the same sensory information triggers drastically different behaviors according to the current energy level. The sight of an apple pie biases our behavior in peculiar ways, depending on whether we feel hungry or completely satiated.

2. Even more crucially, metabolic information puts sensorimotor information under a new perspective. It associates the contingent modality of agent-environment sensorimotor engagement with the very dynamics that lead to the agent’s well-being. We can consider this integration as a pivotal cognitive capacity, for cognition is the deployment of powerful and flexible means for survival.

This potential for the integration of different levels of information, and their grounding in the dynamics of survival, seems to specifically require a (natural or artificial) metabolism. For example, in Section 6.4.4 we mentioned how the parameter in RNNPB architectures (Tani, 2003; Tani and Ito, 2003; Ito et al., 2006) might be highly reminiscent of the control parameters in our artificial metabolic systems. Admittedly, RNNPB-based architectures possess the first property in the list just outlined, i.e. the capacity to provide motivational forces that redraw the phase space according to the current context. For example, the parametric bias can switch the robot’s behavior from tossing a ball from the left to the right hand and vice versa to lifting and dropping it (Ito et al., 2006). Nevertheless, such a system intrinsically lacks the second and more interesting property: the two classes of behavior are completely neutral in terms of the robot’s viability. It is this second property that grounds the robot’s behavior.

7.2 Further scientific contributions

In parallel with the achievements highlighted in the previous section, the original work presented in this thesis has explored and consolidated results from several other points of view. The present section briefly overviews the further main scientific contributions that can be ascribed to the collected papers. Figure 1.1 in Section 1.2 illustrates the relation between each paper and the list of scientific contributions.

• Analysis of behavioral attractors: In Paper I (Appendix A) we devised a particular form of analysis inspired by neurophysiological and control-system methods. By clamping the value of the control parameter and letting the robot roam freely in its environment (i.e. neutralizing all direct interactions between environment and control parameter) we could systematically explore the set of behavioral attractors created during evolutionary adaptation, and map them as a function of the control parameter (energy level). This method provided a clear description of the distribution of the behavioral attractors for different levels of energy. The same method was applied to the two-dimensional space of control parameters in Paper III (Appendix C).

• Self-organized dynamic action selection mechanism: The method of clamped analysis allowed us to clarify how specific behavioral attractors are selected. The set of behavioral attractors, self-organized through the use of a standard evolutionary algorithm with a generic fitness function, was dynamically exploited as a function of the current level of energy. The final attractor was selected on the basis of the current level of energy, the initial position of the robot and the integrated effects of noise. Papers I and III both explore this aspect. Papers V and VI (Appendix E and F) build on and extend the results of Paper I.

• Modeling and analysis of MFCs as an artificial metabolism: We created a model of a physical microbial fuel cell implementation, adapted to power autonomous terrestrial robots (Paper II, Appendix B). The model was developed at a high level of abstraction. Accordingly, it resulted in extremely compact code, particularly suitable for running in parallel with robot simulations. The model provided our simulated robots with realistic MFC energy dynamics, which can be thought of as a first approximation of an artificial metabolism (see Paper III and IV in Appendix C and D for an experimental and a more theoretical analysis of the model, respectively).

• Readaptation to novel tasks via bodily modulation: The adapted agent masters a huge cognitive potential. In Papers V and VI, we showed how part of its knowledge can actually be easily elicited by acting on non-neural bodily parameters, and used to effectively readapt the robot to novel tasks.

• Bodily data compression: An interesting byproduct of the previous point is that non-neural bodily quantities can be part of a mechanism for massive compression of information. During readaptation to novel tasks, we showed how the manipulation of a minimal subset of system parameters, rather than the vast number of neurally related ones, could effectively modify the robot’s behavior (Papers V and VI).
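The clamped analysis in the first point above can be sketched as follows. This is an illustrative toy, not the thesis code: a one-dimensional flow stands in for the robot-environment dynamics, and ‘energy’ for the clamped control parameter; what is faithful is the procedure — freeze the parameter, release the system from many initial conditions, and histogram the attractors it lands on, yielding a frequency map of the kind shown in Fig. 6.2.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def run_clamped(energy, trials=100, steps=3000, dt=0.01):
    """Clamp 'energy' and count the attractors reached from random starts."""
    counts = {}
    for _ in range(trials):
        x = random.uniform(-1.0, 1.0)    # random initial condition
        for _ in range(steps):
            x += dt * (-4 * x**3 + 2 * energy * x)   # toy phase-space flow
        label = round(x, 2)              # trajectories cluster onto attractors
        counts[label] = counts.get(label, 0) + 1
    return counts

# At high clamped energy two attractors share the trials between them;
# at low clamped energy a single attractor collects all of them.
high = run_clamped(energy=1.0)
low = run_clamped(energy=-1.0)
```

Clamping removes the feedback from behavior to the control parameter, so the map of attractor frequencies can be drawn systematically, one parameter value at a time.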

7.3 Final notes

Taking biology seriously does not imply an unlimited commitment to biological issues. Modeling ineluctably introduces some level of abstraction. Furthermore, a golden rule in modeling is to create the simplest model that displays the phenomena of interest. This means that something must be interpreted as a peripheral component of the phenomenon under observation, and therefore neglected. Accordingly, biology should be considered a crucial source of inspiration for the design of cognitive systems, although literal and detailed biological modeling might not be a necessity.

Neural and non-neural bodily mechanisms, seen as two components of biological cognition, do not seem to fit the image of two independent and sequentially interacting layers. The amalgamation of neural and non-neural bodily mechanisms that is outlined and experimentally explored in these pages adds a further and serious source of complexity. In fact, each component appears tightly interwoven with the others, bringing forth a history of circular causation.

If logic seems to be the natural language for the more classical approach to AI, this new, complex perspective entails a different computational paradigm, namely dynamical systems theory. The experimental work reported in this thesis has made extensive use of a dynamical systems approach and of minimalist models of cognition. Despite the opposition it normally meets, the minimalist choice still appears scientifically fecund. The major objection systematically raised against such an approach is founded on its problematic scalability to higher dimensions and more complex problems. Nevertheless, minimalist models seem to effectively isolate phenomena and explain them in terms of bottom-up causal chains of elementary dynamical interactions. This can be considered a highly desirable property compared to more complex models, where much more stringent assumptions often do not allow a sufficient level of analytical transparency.

It could also be argued that the angle of attack in terms of the scalability of this approach simply misses the point. Is the problem really in the systematic exploration of basic cognitive dynamics? Or does it lie in our current inability to express the appropriate catalyst for the scaling up of such dynamics? For example, in classical physics we can calculate the moment of inertia for a microscopic volume of known mass within a macroscopic body. By mathematical integration we can then obtain the moment of inertia of the whole body. Back to minimal models of cognition: is the problem in the modeling approach itself, or in our incapacity to deploy a similarly effective mechanism for the integration of microscopic quantities?
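The moment-of-inertia analogy can be made explicit: for a rigid body of mass density $\rho$, the microscopic contributions $r^2\,\mathrm{d}m$ integrate into the macroscopic quantity

```latex
I \;=\; \int r^{2}\, \mathrm{d}m \;=\; \iiint_{V} \rho(\mathbf{r})\, r^{2}\, \mathrm{d}V ,
```

where $r$ is the distance of the volume element from the rotation axis. The open question is whether an analogous integration principle exists for composing elementary cognitive dynamics into macroscopic cognition.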

A final admission seems appropriate. We do not simply lack a general theory of mind; we are also still far from a satisfactory answer to several much simpler questions. This thesis addresses some of these questions, yet does not fill the intellectual gap that currently separates science from a satisfactory solution to the problem of the mind. The somewhat original and unconventional research path reported here, and the results achieved so far, appear relevant and promising. The declared ambition of this thesis is to illustrate the potential of this line of research. My hope is that the research described here will contribute to an increased understanding of biological cognitive systems and to the design of more effective artificial ones. How long such a process will take, and what it will require, nobody really knows. What is certain is that the combined effort of a heterogeneous community of researchers, each investigating different research paths and led by different personal viewpoints and sensitivities, will create the conditions for significant advances.


 

The articles have been removed due to copyright issues.


Department of Computer and Information Science

Linköpings universitet

Dissertations

Linköping Studies in Science and Technology

Linköping Studies in Arts and Science Linköping Studies in Statistics

Linköpings Studies in Informatics

Linköping Studies in Science and Technology

No 14 Anders Haraldsson: A Program Manipulation

System Based on Partial Evaluation, 1977, ISBN 91-

7372-144-1.

No 17 Bengt Magnhagen: Probability Based Verification of

Time Margins in Digital Designs, 1977, ISBN 91-7372-

157-3.

No 18 Mats Cedwall: Semantisk analys av process-

beskrivningar i naturligt språk, 1977, ISBN 91- 7372-

168-9.

No 22 Jaak Urmi: A Machine Independent LISP Compiler

and its Implications for Ideal Hardware, 1978, ISBN

91-7372-188-3.

No 33 Tore Risch: Compilation of Multiple File Queries in

a Meta-Database System 1978, ISBN 91- 7372-232-4.

No 51 Erland Jungert: Synthesizing Database Structures

from a User Oriented Data Model, 1980, ISBN 91-

7372-387-8.

No 54 Sture Hägglund: Contributions to the Development

of Methods and Tools for Interactive Design of

Applications Software, 1980, ISBN 91-7372-404-1.

No 55 Pär Emanuelson: Performance Enhancement in a Well-Structured Pattern Matcher through Partial Evaluation, 1980, ISBN 91-7372-403-3.
No 58 Bengt Johnsson, Bertil Andersson: The Human-Computer Interface in Commercial Systems, 1981, ISBN 91-7372-414-9.
No 69 H. Jan Komorowski: A Specification of an Abstract Prolog Machine and its Application to Partial Evaluation, 1981, ISBN 91-7372-479-3.
No 71 René Reboh: Knowledge Engineering Techniques and Tools for Expert Systems, 1981, ISBN 91-7372-489-0.
No 77 Östen Oskarsson: Mechanisms of Modifiability in Large Software Systems, 1982, ISBN 91-7372-527-7.
No 94 Hans Lunell: Code Generator Writing Systems, 1983, ISBN 91-7372-652-4.
No 97 Andrzej Lingas: Advances in Minimum Weight Triangulation, 1983, ISBN 91-7372-660-5.
No 109 Peter Fritzson: Towards a Distributed Programming Environment based on Incremental Compilation, 1984, ISBN 91-7372-801-2.
No 111 Erik Tengvald: The Design of Expert Planning Systems. An Experimental Operations Planning System for Turning, 1984, ISBN 91-7372-805-5.
No 155 Christos Levcopoulos: Heuristics for Minimum Decompositions of Polygons, 1987, ISBN 91-7870-133-3.
No 165 James W. Goodwin: A Theory and System for Non-Monotonic Reasoning, 1987, ISBN 91-7870-183-X.
No 170 Zebo Peng: A Formal Methodology for Automated Synthesis of VLSI Systems, 1987, ISBN 91-7870-225-9.
No 174 Johan Fagerström: A Paradigm and System for Design of Distributed Systems, 1988, ISBN 91-7870-301-8.
No 192 Dimiter Driankov: Towards a Many Valued Logic of Quantified Belief, 1988, ISBN 91-7870-374-3.
No 213 Lin Padgham: Non-Monotonic Inheritance for an Object Oriented Knowledge Base, 1989, ISBN 91-7870-485-5.
No 214 Tony Larsson: A Formal Hardware Description and Verification Method, 1989, ISBN 91-7870-517-7.
No 221 Michael Reinfrank: Fundamentals and Logical Foundations of Truth Maintenance, 1989, ISBN 91-7870-546-0.
No 239 Jonas Löwgren: Knowledge-Based Design Support and Discourse Management in User Interface Management Systems, 1991, ISBN 91-7870-720-X.
No 244 Henrik Eriksson: Meta-Tool Support for Knowledge Acquisition, 1991, ISBN 91-7870-746-3.
No 252 Peter Eklund: An Epistemic Approach to Interactive Design in Multiple Inheritance Hierarchies, 1991, ISBN 91-7870-784-6.
No 258 Patrick Doherty: NML3 - A Non-Monotonic Formalism with Explicit Defaults, 1991, ISBN 91-7870-816-8.
No 260 Nahid Shahmehri: Generalized Algorithmic Debugging, 1991, ISBN 91-7870-828-1.
No 264 Nils Dahlbäck: Representation of Discourse - Cognitive and Computational Aspects, 1992, ISBN 91-7870-850-8.
No 265 Ulf Nilsson: Abstract Interpretations and Abstract Machines: Contributions to a Methodology for the Implementation of Logic Programs, 1992, ISBN 91-7870-858-3.
No 270 Ralph Rönnquist: Theory and Practice of Tense-bound Object References, 1992, ISBN 91-7870-873-7.
No 273 Björn Fjellborg: Pipeline Extraction for VLSI Data Path Synthesis, 1992, ISBN 91-7870-880-X.
No 276 Staffan Bonnier: A Formal Basis for Horn Clause Logic with External Polymorphic Functions, 1992, ISBN 91-7870-896-6.
No 277 Kristian Sandahl: Developing Knowledge Management Systems with an Active Expert Methodology, 1992, ISBN 91-7870-897-4.
No 281 Christer Bäckström: Computational Complexity of Reasoning about Plans, 1992, ISBN 91-7870-979-2.
No 292 Mats Wirén: Studies in Incremental Natural Language Analysis, 1992, ISBN 91-7871-027-8.
No 297 Mariam Kamkar: Interprocedural Dynamic Slicing with Applications to Debugging and Testing, 1993, ISBN 91-7871-065-0.
No 302 Tingting Zhang: A Study in Diagnosis Using Classification and Defaults, 1993, ISBN 91-7871-078-2.
No 312 Arne Jönsson: Dialogue Management for Natural Language Interfaces - An Empirical Approach, 1993, ISBN 91-7871-110-X.
No 338 Simin Nadjm-Tehrani: Reactive Systems in Physical Environments: Compositional Modelling and Framework for Verification, 1994, ISBN 91-7871-237-8.
No 371 Bengt Savén: Business Models for Decision Support and Learning. A Study of Discrete-Event Manufacturing Simulation at Asea/ABB 1968-1993, 1995, ISBN 91-7871-494-X.


No 375 Ulf Söderman: Conceptual Modelling of Mode Switching Physical Systems, 1995, ISBN 91-7871-516-4.
No 383 Andreas Kågedal: Exploiting Groundness in Logic Programs, 1995, ISBN 91-7871-538-5.
No 396 George Fodor: Ontological Control, Description, Identification and Recovery from Problematic Control Situations, 1995, ISBN 91-7871-603-9.
No 413 Mikael Pettersson: Compiling Natural Semantics, 1995, ISBN 91-7871-641-1.
No 414 Xinli Gu: RT Level Testability Improvement by Testability Analysis and Transformations, 1996, ISBN 91-7871-654-3.
No 416 Hua Shu: Distributed Default Reasoning, 1996, ISBN 91-7871-665-9.
No 429 Jaime Villegas: Simulation Supported Industrial Training from an Organisational Learning Perspective - Development and Evaluation of the SSIT Method, 1996, ISBN 91-7871-700-0.
No 431 Peter Jonsson: Studies in Action Planning: Algorithms and Complexity, 1996, ISBN 91-7871-704-3.
No 437 Johan Boye: Directional Types in Logic Programming, 1996, ISBN 91-7871-725-6.
No 439 Cecilia Sjöberg: Activities, Voices and Arenas: Participatory Design in Practice, 1996, ISBN 91-7871-728-0.
No 448 Patrick Lambrix: Part-Whole Reasoning in Description Logics, 1996, ISBN 91-7871-820-1.
No 452 Kjell Orsborn: On Extensible and Object-Relational Database Technology for Finite Element Analysis Applications, 1996, ISBN 91-7871-827-9.
No 459 Olof Johansson: Development Environments for Complex Product Models, 1996, ISBN 91-7871-855-4.
No 461 Lena Strömbäck: User-Defined Constructions in Unification-Based Formalisms, 1997, ISBN 91-7871-857-0.
No 462 Lars Degerstedt: Tabulation-based Logic Programming: A Multi-Level View of Query Answering, 1996, ISBN 91-7871-858-9.
No 475 Fredrik Nilsson: Strategi och ekonomisk styrning - En studie av hur ekonomiska styrsystem utformas och används efter företagsförvärv, 1997, ISBN 91-7871-914-3.
No 480 Mikael Lindvall: An Empirical Study of Requirements-Driven Impact Analysis in Object-Oriented Software Evolution, 1997, ISBN 91-7871-927-5.
No 485 Göran Forslund: Opinion-Based Systems: The Cooperative Perspective on Knowledge-Based Decision Support, 1997, ISBN 91-7871-938-0.
No 494 Martin Sköld: Active Database Management Systems for Monitoring and Control, 1997, ISBN 91-7219-002-7.
No 495 Hans Olsén: Automatic Verification of Petri Nets in a CLP framework, 1997, ISBN 91-7219-011-6.
No 498 Thomas Drakengren: Algorithms and Complexity for Temporal and Spatial Formalisms, 1997, ISBN 91-7219-019-1.
No 502 Jakob Axelsson: Analysis and Synthesis of Heterogeneous Real-Time Systems, 1997, ISBN 91-7219-035-3.
No 503 Johan Ringström: Compiler Generation for Data-Parallel Programming Languages from Two-Level Semantics Specifications, 1997, ISBN 91-7219-045-0.
No 512 Anna Moberg: Närhet och distans - Studier av kommunikationsmönster i satellitkontor och flexibla kontor, 1997, ISBN 91-7219-119-8.
No 520 Mikael Ronström: Design and Modelling of a Parallel Data Server for Telecom Applications, 1998, ISBN 91-7219-169-4.
No 522 Niclas Ohlsson: Towards Effective Fault Prevention - An Empirical Study in Software Engineering, 1998, ISBN 91-7219-176-7.
No 526 Joachim Karlsson: A Systematic Approach for Prioritizing Software Requirements, 1998, ISBN 91-7219-184-8.
No 530 Henrik Nilsson: Declarative Debugging for Lazy Functional Languages, 1998, ISBN 91-7219-197-X.
No 555 Jonas Hallberg: Timing Issues in High-Level Synthesis, 1998, ISBN 91-7219-369-7.
No 561 Ling Lin: Management of 1-D Sequence Data - From Discrete to Continuous, 1999, ISBN 91-7219-402-2.
No 563 Eva L Ragnemalm: Student Modelling based on Collaborative Dialogue with a Learning Companion, 1999, ISBN 91-7219-412-X.
No 567 Jörgen Lindström: Does Distance matter? On geographical dispersion in organisations, 1999, ISBN 91-7219-439-1.
No 582 Vanja Josifovski: Design, Implementation and Evaluation of a Distributed Mediator System for Data Integration, 1999, ISBN 91-7219-482-0.
No 589 Rita Kovordányi: Modeling and Simulating Inhibitory Mechanisms in Mental Image Reinterpretation - Towards Cooperative Human-Computer Creativity, 1999, ISBN 91-7219-506-1.
No 592 Mikael Ericsson: Supporting the Use of Design Knowledge - An Assessment of Commenting Agents, 1999, ISBN 91-7219-532-0.
No 593 Lars Karlsson: Actions, Interactions and Narratives, 1999, ISBN 91-7219-534-7.
No 594 C. G. Mikael Johansson: Social and Organizational Aspects of Requirements Engineering Methods - A practice-oriented approach, 1999, ISBN 91-7219-541-X.
No 595 Jörgen Hansson: Value-Driven Multi-Class Overload Management in Real-Time Database Systems, 1999, ISBN 91-7219-542-8.
No 596 Niklas Hallberg: Incorporating User Values in the Design of Information Systems and Services in the Public Sector: A Methods Approach, 1999, ISBN 91-7219-543-6.
No 597 Vivian Vimarlund: An Economic Perspective on the Analysis of Impacts of Information Technology: From Case Studies in Health-Care towards General Models and Theories, 1999, ISBN 91-7219-544-4.
No 598 Johan Jenvald: Methods and Tools in Computer-Supported Taskforce Training, 1999, ISBN 91-7219-547-9.
No 607 Magnus Merkel: Understanding and enhancing translation by parallel text processing, 1999, ISBN 91-7219-614-9.
No 611 Silvia Coradeschi: Anchoring symbols to sensory data, 1999, ISBN 91-7219-623-8.
No 613 Man Lin: Analysis and Synthesis of Reactive Systems: A Generic Layered Architecture Perspective, 1999, ISBN 91-7219-630-0.
No 618 Jimmy Tjäder: Systemimplementering i praktiken - En studie av logiker i fyra projekt, 1999, ISBN 91-7219-657-2.
No 627 Vadim Engelson: Tools for Design, Interactive Simulation, and Visualization of Object-Oriented Models in Scientific Computing, 2000, ISBN 91-7219-709-9.


No 637 Esa Falkenroth: Database Technology for Control and Simulation, 2000, ISBN 91-7219-766-8.
No 639 Per-Arne Persson: Bringing Power and Knowledge Together: Information Systems Design for Autonomy and Control in Command Work, 2000, ISBN 91-7219-796-X.
No 660 Erik Larsson: An Integrated System-Level Design for Testability Methodology, 2000, ISBN 91-7219-890-7.
No 688 Marcus Bjäreland: Model-based Execution Monitoring, 2001, ISBN 91-7373-016-5.
No 689 Joakim Gustafsson: Extending Temporal Action Logic, 2001, ISBN 91-7373-017-3.
No 720 Carl-Johan Petri: Organizational Information Provision - Managing Mandatory and Discretionary Use of Information Technology, 2001, ISBN 91-7373-126-9.
No 724 Paul Scerri: Designing Agents for Systems with Adjustable Autonomy, 2001, ISBN 91-7373-207-9.
No 725 Tim Heyer: Semantic Inspection of Software Artifacts: From Theory to Practice, 2001, ISBN 91-7373-208-7.
No 726 Pär Carlshamre: A Usability Perspective on Requirements Engineering - From Methodology to Product Development, 2001, ISBN 91-7373-212-5.
No 732 Juha Takkinen: From Information Management to Task Management in Electronic Mail, 2002, ISBN 91-7373-258-3.
No 745 Johan Åberg: Live Help Systems: An Approach to Intelligent Help for Web Information Systems, 2002, ISBN 91-7373-311-3.
No 746 Rego Granlund: Monitoring Distributed Teamwork Training, 2002, ISBN 91-7373-312-1.
No 757 Henrik André-Jönsson: Indexing Strategies for Time Series Data, 2002, ISBN 91-7373-346-6.
No 747 Anneli Hagdahl: Development of IT-supported Interorganisational Collaboration - A Case Study in the Swedish Public Sector, 2002, ISBN 91-7373-314-8.
No 749 Sofie Pilemalm: Information Technology for Non-Profit Organisations - Extended Participatory Design of an Information System for Trade Union Shop Stewards, 2002, ISBN 91-7373-318-0.
No 765 Stefan Holmlid: Adapting users: Towards a theory of use quality, 2002, ISBN 91-7373-397-0.
No 771 Magnus Morin: Multimedia Representations of Distributed Tactical Operations, 2002, ISBN 91-7373-421-7.
No 772 Pawel Pietrzak: A Type-Based Framework for Locating Errors in Constraint Logic Programs, 2002, ISBN 91-7373-422-5.
No 758 Erik Berglund: Library Communication Among Programmers Worldwide, 2002, ISBN 91-7373-349-0.
No 774 Choong-ho Yi: Modelling Object-Oriented Dynamic Systems Using a Logic-Based Framework, 2002, ISBN 91-7373-424-1.
No 779 Mathias Broxvall: A Study in the Computational Complexity of Temporal Reasoning, 2002, ISBN 91-7373-440-3.
No 793 Asmus Pandikow: A Generic Principle for Enabling Interoperability of Structured and Object-Oriented Analysis and Design Tools, 2002, ISBN 91-7373-479-9.
No 785 Lars Hult: Publika Informationstjänster. En studie av den Internetbaserade encyklopedins bruksegenskaper, 2003, ISBN 91-7373-461-6.
No 800 Lars Taxén: A Framework for the Coordination of Complex Systems' Development, 2003, ISBN 91-7373-604-X.
No 808 Klas Gäre: Tre perspektiv på förväntningar och förändringar i samband med införande av informationssystem, 2003, ISBN 91-7373-618-X.
No 821 Mikael Kindborg: Concurrent Comics - programming of social agents by children, 2003, ISBN 91-7373-651-1.
No 823 Christina Ölvingson: On Development of Information Systems with GIS Functionality in Public Health Informatics: A Requirements Engineering Approach, 2003, ISBN 91-7373-656-2.
No 828 Tobias Ritzau: Memory Efficient Hard Real-Time Garbage Collection, 2003, ISBN 91-7373-666-X.
No 833 Paul Pop: Analysis and Synthesis of Communication-Intensive Heterogeneous Real-Time Systems, 2003, ISBN 91-7373-683-X.
No 852 Johan Moe: Observing the Dynamic Behaviour of Large Distributed Systems to Improve Development and Testing – An Empirical Study in Software Engineering, 2003, ISBN 91-7373-779-8.
No 867 Erik Herzog: An Approach to Systems Engineering Tool Data Representation and Exchange, 2004, ISBN 91-7373-929-4.
No 872 Aseel Berglund: Augmenting the Remote Control: Studies in Complex Information Navigation for Digital TV, 2004, ISBN 91-7373-940-5.
No 869 Jo Skåmedal: Telecommuting's Implications on Travel and Travel Patterns, 2004, ISBN 91-7373-935-9.
No 870 Linda Askenäs: The Roles of IT - Studies of Organising when Implementing and Using Enterprise Systems, 2004, ISBN 91-7373-936-7.
No 874 Annika Flycht-Eriksson: Design and Use of Ontologies in Information-Providing Dialogue Systems, 2004, ISBN 91-7373-947-2.
No 873 Peter Bunus: Debugging Techniques for Equation-Based Languages, 2004, ISBN 91-7373-941-3.
No 876 Jonas Mellin: Resource-Predictable and Efficient Monitoring of Events, 2004, ISBN 91-7373-956-1.
No 883 Magnus Bång: Computing at the Speed of Paper: Ubiquitous Computing Environments for Healthcare Professionals, 2004, ISBN 91-7373-971-5.
No 882 Robert Eklund: Disfluency in Swedish human-human and human-machine travel booking dialogues, 2004, ISBN 91-7373-966-9.
No 887 Anders Lindström: English and other Foreign Linguistic Elements in Spoken Swedish. Studies of Productive Processes and their Modelling using Finite-State Tools, 2004, ISBN 91-7373-981-2.
No 889 Zhiping Wang: Capacity-Constrained Production-inventory systems - Modelling and Analysis in both a traditional and an e-business context, 2004, ISBN 91-85295-08-6.
No 893 Pernilla Qvarfordt: Eyes on Multimodal Interaction, 2004, ISBN 91-85295-30-2.
No 910 Magnus Kald: In the Borderland between Strategy and Management Control - Theoretical Framework and Empirical Evidence, 2004, ISBN 91-85295-82-5.
No 918 Jonas Lundberg: Shaping Electronic News: Genre Perspectives on Interaction Design, 2004, ISBN 91-85297-14-3.
No 900 Mattias Arvola: Shades of use: The dynamics of interaction design for sociable use, 2004, ISBN 91-85295-42-6.
No 920 Luis Alejandro Cortés: Verification and Scheduling Techniques for Real-Time Embedded Systems, 2004, ISBN 91-85297-21-6.
No 929 Diana Szentivanyi: Performance Studies of Fault-Tolerant Middleware, 2005, ISBN 91-85297-58-5.


No 933 Mikael Cäker: Management Accounting as Constructing and Opposing Customer Focus: Three Case Studies on Management Accounting and Customer Relations, 2005, ISBN 91-85297-64-X.
No 937 Jonas Kvarnström: TALplanner and Other Extensions to Temporal Action Logic, 2005, ISBN 91-85297-75-5.
No 938 Bourhane Kadmiry: Fuzzy Gain-Scheduled Visual Servoing for Unmanned Helicopter, 2005, ISBN 91-85297-76-3.
No 945 Gert Jervan: Hybrid Built-In Self-Test and Test Generation Techniques for Digital Systems, 2005, ISBN 91-85297-97-6.
No 946 Anders Arpteg: Intelligent Semi-Structured Information Extraction, 2005, ISBN 91-85297-98-4.
No 947 Ola Angelsmark: Constructing Algorithms for Constraint Satisfaction and Related Problems - Methods and Applications, 2005, ISBN 91-85297-99-2.
No 963 Calin Curescu: Utility-based Optimisation of Resource Allocation for Wireless Networks, 2005, ISBN 91-85457-07-8.
No 972 Björn Johansson: Joint Control in Dynamic Situations, 2005, ISBN 91-85457-31-0.
No 974 Dan Lawesson: An Approach to Diagnosability Analysis for Interacting Finite State Systems, 2005, ISBN 91-85457-39-6.
No 979 Claudiu Duma: Security and Trust Mechanisms for Groups in Distributed Services, 2005, ISBN 91-85457-54-X.
No 983 Sorin Manolache: Analysis and Optimisation of Real-Time Systems with Stochastic Behaviour, 2005, ISBN 91-85457-60-4.
No 986 Yuxiao Zhao: Standards-Based Application Integration for Business-to-Business Communications, 2005, ISBN 91-85457-66-3.
No 1004 Patrik Haslum: Admissible Heuristics for Automated Planning, 2006, ISBN 91-85497-28-2.
No 1005 Aleksandra Tešanovic: Developing Reusable and Reconfigurable Real-Time Software using Aspects and Components, 2006, ISBN 91-85497-29-0.
No 1008 David Dinka: Role, Identity and Work: Extending the design and development agenda, 2006, ISBN 91-85497-42-8.
No 1009 Iakov Nakhimovski: Contributions to the Modeling and Simulation of Mechanical Systems with Detailed Contact Analysis, 2006, ISBN 91-85497-43-X.
No 1013 Wilhelm Dahllöf: Exact Algorithms for Exact Satisfiability Problems, 2006, ISBN 91-85523-97-6.
No 1016 Levon Saldamli: PDEModelica - A High-Level Language for Modeling with Partial Differential Equations, 2006, ISBN 91-85523-84-4.
No 1017 Daniel Karlsson: Verification of Component-based Embedded System Designs, 2006, ISBN 91-85523-79-8.
No 1018 Ioan Chisalita: Communication and Networking Techniques for Traffic Safety Systems, 2006, ISBN 91-85523-77-1.
No 1019 Tarja Susi: The Puzzle of Social Activity - The Significance of Tools in Cognition and Cooperation, 2006, ISBN 91-85523-71-2.
No 1021 Andrzej Bednarski: Integrated Optimal Code Generation for Digital Signal Processors, 2006, ISBN 91-85523-69-0.
No 1022 Peter Aronsson: Automatic Parallelization of Equation-Based Simulation Programs, 2006, ISBN 91-85523-68-2.
No 1030 Robert Nilsson: A Mutation-based Framework for Automated Testing of Timeliness, 2006, ISBN 91-85523-35-6.
No 1034 Jon Edvardsson: Techniques for Automatic Generation of Tests from Programs and Specifications, 2006, ISBN 91-85523-31-3.
No 1035 Vaida Jakoniene: Integration of Biological Data, 2006, ISBN 91-85523-28-3.
No 1045 Genevieve Gorrell: Generalized Hebbian Algorithms for Dimensionality Reduction in Natural Language Processing, 2006, ISBN 91-85643-88-2.
No 1051 Yu-Hsing Huang: Having a New Pair of Glasses - Applying Systemic Accident Models on Road Safety, 2006, ISBN 91-85643-64-5.
No 1054 Åsa Hedenskog: Perceive those things which cannot be seen - A Cognitive Systems Engineering perspective on requirements management, 2006, ISBN 91-85643-57-2.
No 1061 Cécile Åberg: An Evaluation Platform for Semantic Web Technology, 2007, ISBN 91-85643-31-9.
No 1073 Mats Grindal: Handling Combinatorial Explosion in Software Testing, 2007, ISBN 978-91-85715-74-9.
No 1075 Almut Herzog: Usable Security Policies for Runtime Environments, 2007, ISBN 978-91-85715-65-7.
No 1079 Magnus Wahlström: Algorithms, measures, and upper bounds for Satisfiability and related problems, 2007, ISBN 978-91-85715-55-8.
No 1083 Jesper Andersson: Dynamic Software Architectures, 2007, ISBN 978-91-85715-46-6.
No 1086 Ulf Johansson: Obtaining Accurate and Comprehensible Data Mining Models - An Evolutionary Approach, 2007, ISBN 978-91-85715-34-3.
No 1089 Traian Pop: Analysis and Optimisation of Distributed Embedded Systems with Heterogeneous Scheduling Policies, 2007, ISBN 978-91-85715-27-5.
No 1091 Gustav Nordh: Complexity Dichotomies for CSP-related Problems, 2007, ISBN 978-91-85715-20-6.
No 1106 Per Ola Kristensson: Discrete and Continuous Shape Writing for Text Entry and Control, 2007, ISBN 978-91-85831-77-7.
No 1110 He Tan: Aligning Biomedical Ontologies, 2007, ISBN 978-91-85831-56-2.
No 1112 Jessica Lindblom: Minding the body - Interacting socially through embodied action, 2007, ISBN 978-91-85831-48-7.
No 1113 Pontus Wärnestål: Dialogue Behavior Management in Conversational Recommender Systems, 2007, ISBN 978-91-85831-47-0.
No 1120 Thomas Gustafsson: Management of Real-Time Data Consistency and Transient Overloads in Embedded Systems, 2007, ISBN 978-91-85831-33-3.
No 1127 Alexandru Andrei: Energy Efficient and Predictable Design of Real-time Embedded Systems, 2007, ISBN 978-91-85831-06-7.
No 1139 Per Wikberg: Eliciting Knowledge from Experts in Modeling of Complex Systems: Managing Variation and Interactions, 2007, ISBN 978-91-85895-66-3.
No 1143 Mehdi Amirijoo: QoS Control of Real-Time Data Services under Uncertain Workload, 2007, ISBN 978-91-85895-49-6.
No 1150 Sanny Syberfeldt: Optimistic Replication with Forward Conflict Resolution in Distributed Real-Time Databases, 2007, ISBN 978-91-85895-27-4.
No 1155 Beatrice Alenljung: Envisioning a Future Decision Support System for Requirements Engineering - A Holistic and Human-centred Perspective, 2008, ISBN 978-91-85895-11-3.


No 1156 Artur Wilk: Types for XML with Application to Xcerpt, 2008, ISBN 978-91-85895-08-3.
No 1183 Adrian Pop: Integrated Model-Driven Development Environments for Equation-Based Object-Oriented Languages, 2008, ISBN 978-91-7393-895-2.
No 1185 Jörgen Skågeby: Gifting Technologies - Ethnographic Studies of End-users and Social Media Sharing, 2008, ISBN 978-91-7393-892-1.
No 1187 Imad-Eldin Ali Abugessaisa: Analytical tools and information-sharing methods supporting road safety organizations, 2008, ISBN 978-91-7393-887-7.
No 1204 H. Joe Steinhauer: A Representation Scheme for Description and Reconstruction of Object Configurations Based on Qualitative Relations, 2008, ISBN 978-91-7393-823-5.
No 1222 Anders Larsson: Test Optimization for Core-based System-on-Chip, 2008, ISBN 978-91-7393-768-9.
No 1238 Andreas Borg: Processes and Models for Capacity Requirements in Telecommunication Systems, 2009, ISBN 978-91-7393-700-9.
No 1240 Fredrik Heintz: DyKnow: A Stream-Based Knowledge Processing Middleware Framework, 2009, ISBN 978-91-7393-696-5.
No 1241 Birgitta Lindström: Testability of Dynamic Real-Time Systems, 2009, ISBN 978-91-7393-695-8.
No 1244 Eva Blomqvist: Semi-automatic Ontology Construction based on Patterns, 2009, ISBN 978-91-7393-683-5.
No 1249 Rogier Woltjer: Functional Modeling of Constraint Management in Aviation Safety and Command and Control, 2009, ISBN 978-91-7393-659-0.
No 1260 Gianpaolo Conte: Vision-Based Localization and Guidance for Unmanned Aerial Vehicles, 2009, ISBN 978-91-7393-603-3.
No 1262 AnnMarie Ericsson: Enabling Tool Support for Formal Analysis of ECA Rules, 2009, ISBN 978-91-7393-598-2.
No 1266 Jiri Trnka: Exploring Tactical Command and Control: A Role-Playing Simulation Approach, 2009, ISBN 978-91-7393-571-5.
No 1268 Bahlol Rahimi: Supporting Collaborative Work through ICT - How End-users Think of and Adopt Integrated Health Information Systems, 2009, ISBN 978-91-7393-550-0.
No 1274 Fredrik Kuivinen: Algorithms and Hardness Results for Some Valued CSPs, 2009, ISBN 978-91-7393-525-8.
No 1281 Gunnar Mathiason: Virtual Full Replication for Scalable Distributed Real-Time Databases, 2009, ISBN 978-91-7393-503-6.
No 1290 Viacheslav Izosimov: Scheduling and Optimization of Fault-Tolerant Distributed Embedded Systems, 2009, ISBN 978-91-7393-482-4.
No 1294 Johan Thapper: Aspects of a Constraint Optimisation Problem, 2010, ISBN 978-91-7393-464-0.
No 1306 Susanna Nilsson: Augmentation in the Wild: User Centered Development and Evaluation of Augmented Reality Applications, 2010, ISBN 978-91-7393-416-9.
No 1313 Christer Thörn: On the Quality of Feature Models, 2010, ISBN 978-91-7393-394-0.
No 1321 Zhiyuan He: Temperature Aware and Defect-Probability Driven Test Scheduling for System-on-Chip, 2010, ISBN 978-91-7393-378-0.
No 1333 David Broman: Meta-Languages and Semantics for Equation-Based Modeling and Simulation, 2010, ISBN 978-91-7393-335-3.
No 1337 Alexander Siemers: Contributions to Modelling and Visualisation of Multibody Systems Simulations with Detailed Contact Analysis, 2010, ISBN 978-91-7393-317-9.
No 1354 Mikael Asplund: Disconnected Discoveries: Availability Studies in Partitioned Networks, 2010, ISBN 978-91-7393-278-3.
No 1359 Jana Rambusch: Mind Games Extended: Understanding Gameplay as Situated Activity, 2010, ISBN 978-91-7393-252-3.
No 1373 Sonia Sangari: Head Movement Correlates to Focus Assignment in Swedish, 2011, ISBN 978-91-7393-154-0.
No 1374 Jan-Erik Källhammer: Using False Alarms when Developing Automotive Active Safety Systems, 2011, ISBN 978-91-7393-153-3.
No 1375 Mattias Eriksson: Integrated Code Generation, 2011, ISBN 978-91-7393-147-2.
No 1381 Ola Leifler: Affordances and Constraints of Intelligent Decision Support for Military Command and Control – Three Case Studies of Support Systems, 2011, ISBN 978-91-7393-133-5.
No 1386 Soheil Samii: Quality-Driven Synthesis and Optimization of Embedded Control Systems, 2011, ISBN 978-91-7393-102-1.
No 1419 Erik Kuiper: Geographic Routing in Intermittently-connected Mobile Ad Hoc Networks: Algorithms and Performance Models, 2012, ISBN 978-91-7519-981-8.
No 1451 Sara Stymne: Text Harmonization Strategies for Phrase-Based Statistical Machine Translation, 2012, ISBN 978-91-7519-887-3.
No 1455 Alberto Montebelli: Modeling the Role of Energy Management in Embodied Cognition, 2012, ISBN 978-91-7519-882-8.

Linköping Studies in Arts and Science

No 504 Ing-Marie Jonsson: Social and Emotional Characteristics of Speech-based In-Vehicle Information Systems: Impact on Attitude and Driving Behaviour, 2009, ISBN 978-91-7393-478-7.

Linköping Studies in Statistics

No 9 Davood Shahsavani: Computer Experiments Designed to Explore and Approximate Complex Deterministic Models, 2008, ISBN 978-91-7393-976-8.
No 10 Karl Wahlin: Roadmap for Trend Detection and Assessment of Data Quality, 2008, ISBN 978-91-7393-792-4.
No 11 Oleg Sysoev: Monotonic regression for large multivariate datasets, 2010, ISBN 978-91-7393-412-1.
No 13 Agné Burauskaite-Harju: Characterizing Temporal Change and Inter-Site Correlations in Daily and Sub-daily Precipitation Extremes, 2011, ISBN 978-91-7393-110-6.

Linköping Studies in Information Science

No 1 Karin Axelsson: Metodisk systemstrukturering - att skapa samstämmighet mellan informationssystemarkitektur och verksamhet, 1998, ISBN 91-7219-296-8.
No 2 Stefan Cronholm: Metodverktyg och användbarhet - en studie av datorstödd metodbaserad systemutveckling, 1998, ISBN 91-7219-299-2.
No 3 Anders Avdic: Användare och utvecklare - om anveckling med kalkylprogram, 1999, ISBN 91-7219-606-8.
No 4 Owen Eriksson: Kommunikationskvalitet hos informationssystem och affärsprocesser, 2000, ISBN 91-7219-811-7.


No 5 Mikael Lind: Från system till process - kriterier för processbestämning vid verksamhetsanalys, 2001, ISBN 91-7373-067-X.
No 6 Ulf Melin: Koordination och informationssystem i företag och nätverk, 2002, ISBN 91-7373-278-8.
No 7 Pär J. Ågerfalk: Information Systems Actability - Understanding Information Technology as a Tool for Business Action and Communication, 2003, ISBN 91-7373-628-7.
No 8 Ulf Seigerroth: Att förstå och förändra systemutvecklingsverksamheter - en taxonomi för metautveckling, 2003, ISBN 91-7373-736-4.
No 9 Karin Hedström: Spår av datoriseringens värden – Effekter av IT i äldreomsorg, 2004, ISBN 91-7373-963-4.
No 10 Ewa Braf: Knowledge Demanded for Action - Studies on Knowledge Mediation in Organisations, 2004, ISBN 91-85295-47-7.
No 11 Fredrik Karlsson: Method Configuration method and computerized tool support, 2005, ISBN 91-85297-48-8.
No 12 Malin Nordström: Styrbar systemförvaltning - Att organisera systemförvaltningsverksamhet med hjälp av effektiva förvaltningsobjekt, 2005, ISBN 91-85297-60-7.
No 13 Stefan Holgersson: Yrke: POLIS - Yrkeskunskap, motivation, IT-system och andra förutsättningar för polisarbete, 2005, ISBN 91-85299-43-X.
No 14 Benneth Christiansson, Marie-Therese Christiansson: Mötet mellan process och komponent - mot ett ramverk för en verksamhetsnära kravspecifikation vid anskaffning av komponentbaserade informationssystem, 2006, ISBN 91-85643-22-X.