

Dynamical Systems: A Framework for Man Machine Interaction (wseas.us/e-library/conferences/skiathos2001/papers/190.pdf)


Dynamical Systems: A Framework for Man Machine Interaction

IOANNIS IOSSIFIDIS (1) and AXEL STEINHAGE (2)

(1) Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany. Phone: +49 234 3225567, Email: [email protected]

(2) Infineon Technologies AG, CPR ST, 81739 München, Germany. Phone: +49 89 234 55181, Email: Axel.Steinhage@infineon.com

Abstract: We present an architecture to generate behavior for anthropomorphic robots. The goal is to equip the robots with the capacity to interact naturally with a human, sharing the same interaction channels. Motivated by the research on biological systems, our basic assumption is that the behavior to perform determines the external and internal structure of the behaving system. We describe the anthropomorphic design of our robots and present a distributed control system that generates human-like navigation and manipulation behavior. As the mathematical framework for this purpose we have developed a control system which is entirely based on dynamical systems in the form of instantiated dynamics and neural fields.

Key-Words: Anthropomorphic Robot, Dynamical systems, Dynamic Approach

1 Introduction

The behavior of living organisms is based on their own sensory information. As the acquisition of sensor information is an active process which changes the state of the system within the environment, the behaving system directly receives feedback from the external world. This feedback is the basis for the high flexibility of living systems, which allows them to autonomously react to changing conditions in their surroundings.

Within their environment, biological organisms always have to interact with other living systems. In particular among human beings, this interaction can have the form of a partnership which allows them to solve a given task more efficiently. As biological organisms are the result of a long-term evolution, their internal behavioral control mechanisms as well as their bodily shapes are optimized with respect to the tasks they have to solve.

Optimizing the solution of certain tasks has always been the major goal of robotics research, too. Within the field of industrial robotics, this has led to machines which are highly specialized for their specific manufacturing purpose. The "interaction" between robots and humans is restricted to supervision or supply by the human. Industrial robots are not built to behave flexibly in dynamically changing environments. In most cases their environment, i.e. the factory cells, is specifically designed to allow for a reliable operation of the preprogrammed robot.

Another line of robotics research has concentrated on artificial autonomous systems. It turned out that closing the feedback loop between the robot and the external world can be achieved for low-level behaviors based on low-level sensor information [5]. Along with this goes the idea to omit central high-level representations of the robot's behavioral state and its environment in favor of decentralized sensor-near representations that are just sufficient for the behavior to be generated. This view has been motivated by findings of the sciences investigating nervous systems. The analysis of Braitenberg [4], for instance, has offered a perspective on how the structure of nervous systems embodies behaviors in a rather distributed and low-level fashion.

Along the way of trying to translate the basic principles of biological behavior generation into an architecture applicable to artificial robot systems, surprisingly complex behavior has already been achieved [12]. However, in these applications autonomy is still restricted to a reaction of the robot to changes in the lifeless world.

Hence, our goal is to design a system which has the capacity to interact with human beings as partners in solving common tasks. Based on the biologically motivated conviction that the behavior to perform determines the external and internal structure of the behaving system, we use robots which have an anthropomorphic design. As a formal framework for translating basic principles of biological information processing for behavior generation, we choose the so-called dynamic approach to robotics [11], which is based on the mathematical theory of dynamical systems.

This paper demonstrates our approach for a number of behaviors implemented on the anthropomorphic robots Arnold (Fig. 1) and CoRA. The selection of these behaviors is directed towards an interaction between the robot and the human being.

Fig. 1. Robot Arnold: The active vision head and the anthropomorphic arm are mounted on top of a slim body. The rotation axes of the shoulder joint, the vehicle and the head's pan joint are identical, simplifying the coordination of the coupled control systems.

1.1 Anthropomorphic Robots

Every robot needs to be adapted to its environment and task - just like evolution adapts creatures to their environment and ecological niche. Our first project objective was to develop a robot for service tasks like bringing and fetching objects in an indoor environment or assisting a human in assembly tasks. Adapting to these tasks means that the robot has to move on flat surfaces in sufficiently lit rooms and to use passive photodetectors (cameras) as its main sensors. Doors will have a certain minimum size and the robot will have to avoid both static and dynamic obstacles. It also will have to manipulate objects that are positioned on tables or other horizontal surfaces, in locations that are comfortable for a standing or sitting human to work in.

In fact, the case of an indoor environment makes designing an appropriate robot much easier, since this specific environment is artificial in that it is highly adapted to the specifics of our human anatomy, physiology and skills.

In our projects NEUROS (Neural Robot Skills) and MORPHA we designed and built the mobile robot Arnold and the assembly assistant robot CoRA as development and demonstration platforms.

1.1.1 The Mobile Robot Arnold

Arnold's main components are the camera head, the arm and the mobile base (Fig. 1). The camera head is designed to perform well for Arnold's typical tasks, which include orientation and locomotion in an indoor environment as well as recognizing, locating and grasping objects. While for navigation tasks a wide field of view is necessary to avoid obstacles in cluttered spaces, recognition and visually controlled manipulation of small objects require a sufficiently high sensor resolution. We adopted the basic principles of the human visual system within the limits of currently available hardware by using two pairs of cameras, one with a small field of view and one with wide-angle optics. This mimics the varying resolution of the human eye, which results in foveal and peripheral vision [7]. The use of stereo vision - i.e. depth perception based on triangulation between a sensor pair - for manipulation tasks is another analogy to the biological model: due to the accuracy limitations of our eye distance and retina resolution, humans use stereo-based depth mainly up to a distance of about one meter, which also defines the space in which we can do manipulation tasks [6].

The general shape of the body has to enable movement in cluttered environments and especially through tight passages like door frames. Navigating in the close vicinity of walls and obstacles leads to the requirement to see these places without self-occlusion while wandering around. Since we did not use a biped for technical reasons, we chose to employ a pyramidal shape for the robot's body to allow for visually controlled movement in tight spaces.

With a height of 1.35 meters, Arnold is about as tall as a sitting adult, an ideal size to manipulate objects on tables. The arm with its seven degrees of freedom is a good approximation of the human arm within the limits of currently available hardware. The maximum grasping radius is about one meter with a maximum load of 1.5 kilograms. Since only six degrees of freedom are necessary to grasp an object at any location and in any orientation, the seventh degree of freedom results in redundancy: the angle of the elbow relative to the shoulder-end-effector axis can be chosen independently of the end-effector position and orientation. Similar to the human elbow, this can be used either to satisfy additional constraints like avoiding obstacles or to generate simpler and smoother trajectories. The anthropomorphic assembly with the arm sideways and below the head ensures that the arm does not occlude the object to grasp.

Arnold's mobile platform can be controlled by setting the forward velocity and the rotation speed around the center axis, meaning that the robot is able to rotate in place. The platform's square footprint is small enough to pass doors and to navigate in common office environments.

2 The Assistance Robot Cora

Cora (= Cooperative Robot Assistant, see Fig. 2) was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks. The research on this platform ranges from investigating the behavioral aspects of man-machine interaction, visual scene representation and interpretation to behavior generation and behavioral organization.

Fig. 2. The service and assistance robot Cora. A seven DoF manipulator arm is mounted on a one DoF trunk which is fixed on a table. The robot possesses a two DoF stereo camera head with microphones. The labeled components are the stereo camera system, the shoulder, elbow and wrist joints, the tactile sensor skin and the trunk.

In our opinion, successful man-machine interaction requires that both the robot and the human partner possess similar sensor and effector equipment. Therefore, we designed Cora anthropomorphically: the seven DoF manipulator is connected to a one DoF trunk which is fixed on the edge of a table. The arm-trunk configuration consists of a modular system manufactured by Amtec. Each module carries a micro-controller with a CAN-bus interface. For communication with the arm, a PCI card implementing the CAN-bus protocol is used. Above the trunk we assembled a two DoF pan/tilt unit carrying a stereo color camera system (Sony XC-999P) and microphones.

By turning the trunk joint, the robot can change its configuration from left- to right-handed. Two of the manipulator arm modules are covered with a touch-sensitive, so-called artificial skin. When a human partner is sitting at the opposite side of the table, the robot and the human partner share the same eye level. Relying on the stereo camera, the microphones and the artificial skin, Cora uses sensor channels similar to those available to the human partner. The restriction to audio, vision and touch and the redundant configuration of the arm make great demands on the control structure of the robot. The goal of our research is a robot system that can perform the following tasks: a) visual identification of objects presented by a human teacher, b) recognition of these objects in a complicated scene, c) grasping and handing over objects, d) acting out simple assembly tasks. All behaviors should be realized under acoustical and haptic control by the human partner. This user interaction consists of speech commands and correction of the manipulator configuration via touch.

The computational power is provided by a Fast Ethernet network of five PCs with 600 MHz Pentium Celeron CPUs running under Linux and the real-time operating system QNX.

3 Dynamic Approach

Controlling anthropomorphic robots of Arnold's and Cora's complexity requires an advanced control architecture. The main requirements for this architecture are the following:

stability: to avoid hazards when interacting with humans, the behavior of the robot must be stable at all times
adaptivity: as the human's behavior is unpredictable, the robot must be able to quickly adapt to new situations
flexibility: to be a helpful partner, the robot must possess a large behavioral repertoire so that it can react appropriately in very different situations
scalability: to minimize the effort when implementing new sensors or behaviors, the control architecture must possess a certain degree of modularity
dynamics: as the interaction between man and machine happens in real time, the control system must be fast

All these requirements are met by the dynamic approach to robotics, which was first developed by Schöner ten years ago [11] and then extended by Steinhage and others (see e.g. [12] and [3]). The basic idea is to represent behavioral states of the robot by stable states of dynamical systems. The output of the robot's sensors controls the parameters of these dynamical systems so that the stable states of the dynamics, and hence the behavior of the robot, change. Qualitative changes of the behavior are brought about by bifurcating from one stable state to another. To allow for a large repertoire of different behaviors, the dynamics must possess multiple different stable states; therefore we use nonlinear dynamics. The robot's behavior corresponds to a trajectory in the high dimensional state space of a nonlinear dynamical system.

In the following we will briefly present the main steps for designing a behavior generating dynamical system. Then we will give two practical examples of dynamical systems which generate behavior for our two robots Cora and Arnold.

3.1 Behavior Generating Dynamics

The first step towards a mathematical formulation of a behavior generating dynamics is the specification of the corresponding behavioral variables. These variables can be scalar or vector valued. In some simple cases a behavioral variable directly parameterizes the elementary behavior to be generated. As an example of such an instantiated dynamics we describe the target acquisition behavior of our robot [9] (see [13] for further examples). This behavior can be parameterized by the robot's current heading direction ϕ(t), measured relative to an arbitrary but fixed global reference direction ϕ0 = 0. Varying the heading direction while keeping the forward velocity v constant generates a trajectory. The variable ϕ(t) therefore describes the behavioral state of the system with respect to the behavior of locomotion. The basic idea of the dynamic approach is to map the behavioral state of the system onto the mathematical state of a dynamics described by an appropriate differential equation. For the example of locomotion, a trajectory can be generated by integrating the dynamical system

τ_ϕ dϕ/dt = F(ϕ, t, p)   (1)

in time. Here, dϕ/dt is the temporal derivative of the heading direction. The parameter τ_ϕ is the time scale of the dynamics, which controls how fast the behavioral variable can change. The function F describes how the heading direction changes in time depending on the current value of ϕ, the time t and a vector of parameters p.
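As a concrete illustration, eq. (1) can be integrated with a simple Euler scheme. The following sketch is ours, with purely illustrative values for the attractor direction ψ, the strength λ and the step size; it relaxes the heading direction into a single attractive fixed point:

```python
import math

# One attractive contribution: F(phi) = -lam * sin(phi - psi)
# has a stable fixed point (attractor) at phi = psi.
psi, lam, tau = 0.7, 2.0, 1.0

def F(phi):
    return -lam * math.sin(phi - psi)

phi, dt = -1.0, 0.01          # initial heading, Euler step size
for _ in range(2000):         # integrate eq. (1) for 20 time units
    phi += dt * F(phi) / tau
```

After the integration the heading has relaxed into the attractor at ψ; the time scale τ_ϕ (here τ) sets how quickly this happens.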

3.2 Target Acquisition and Obstacle Avoidance

To generate behavior, the dynamical system is designed such that the behavioral tasks correspond to stable fixed points, or attractors, of (1). An example of such a behavioral task is the direction φ_t = arctan((y_t − y)/(x_t − x)) towards a target at location (x_t, y_t). The actual values of the fixed points are specified by the current sensor information through forcelets [3], i.e. additive components of the vector field F in eq. (1). The value of the behavioral variable is attracted by these forces and relaxes into one of the fixed points. A vector field which generates target acquisition in a navigation task with N targets is

F_t = Σ_{i=1}^{N} λ_i sin(φ_{i,t} − φ_h) exp(−(1 − cos(φ_{i,t} − φ_h)) / σ_i)   (2)

A solution φ_h(t) of dφ_h/dt = F_t generates a trajectory towards a target if, in the simplest case, the forward velocity v is a positive constant.

Fig. 3. Phase plot of the dynamics (2) for different distances to two targets (labeled "Ziel 1" and "Ziel 2", i.e. target 1 and 2) seen from the robot ("Roboter"). A zero crossing of F_t with negative slope ∂F_t/∂φ_h = −λ characterizes a stable fixed point (attractor); a positive slope defines an unstable fixed point (repellor). While approaching two nearby targets of different size, a phase transition or bifurcation occurs in (1): for big distances (position 1), the ranges of the two target contributions overlap and the robot's heading direction relaxes to an attractor at an averaged position between the two single attractors. When approaching the targets (position 2), the ranges start to separate, and for small distances (position 3) there are two attractors with a repellor in between. The robot "decides" for one of the two targets.

In (2), λ_i defines the strength of the target attractors, i.e. the time scale τ_i = 1/λ_i on which the dynamics relaxes into that fixed point. The strength measures the importance of the task, for example its distance to the agent or its size. The width σ_i of the gaussian term in (2) defines the range of the attractor, i.e. the range of influence of the task: if a target 20° off the heading direction of the agent should not be considered, σ_i < 20° must be chosen. The range also defines the angular distance up to which two target directions should be averaged and above which the agent should decide for one of them. This already implements the feature of decision making: at large distances, two target directions are averaged; at small distances, the agent decides for one of them (Fig. 3).

Obstacle avoidance is combined with target acquisition simply by treating obstacles as "negative" targets: their strengths λ_o are made negative so that the directions of the obstacles φ_{o,i} are unstable fixed points, or repellors, of the dynamics. A good choice is to make the strength λ_o of the obstacles an inverse exponential function of their distance to the agent, measured with an appropriate distance sensor (in our case the stereo camera system), such that near obstacles have a large repelling force. The contribution F_o of the obstacle avoidance, which is the sum of the obstacle repellors, is then added to the target acquisition contribution F_t to yield dφ_h/dt = F_t + F_o.

We have implemented this mechanism on our robot Arnold so that it is able to avoid moving obstacles and to approach arbitrary targets.
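A minimal sketch of the combined dynamics dφ_h/dt = F_t + F_o (again with hand-picked, purely illustrative bearings, strengths and ranges): a target at 0.8 rad attracts the heading while an obstacle at 0.2 rad, entered as a negative target, repels it:

```python
import math

def forcelet(phi, psi, lam, sigma):
    """One additive contribution to the heading dynamics: an attractor
    at psi for lam > 0, a repellor at psi for lam < 0."""
    return lam * math.sin(psi - phi) * math.exp((math.cos(psi - phi) - 1.0) / sigma)

contributions = [(0.8,  1.0, 1.0),   # target: bearing, strength, range
                 (0.2, -1.0, 0.3)]   # obstacle as a "negative" target

phi, dt = 0.0, 0.02
for _ in range(5000):                # Euler integration of phi' = Ft + Fo
    phi += dt * sum(forcelet(phi, psi, lam, sig)
                    for (psi, lam, sig) in contributions)
```

The heading settles slightly beyond the target bearing: the repellor shifts the attractor state of the summed vector field away from the obstacle side, which is exactly the deflection that steers the robot around the obstacle.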

3.3 Controlling the Manipulator

A second, more complex example of the application of dynamical systems is the control of Cora's manipulator arm. To generate grasping behavior, we apply the concept of Neural Field Dynamics to the control of the manipulator's joints [8].

These fields were initially invented by Amari [1] to model information processing in cortical layers. They are equivalent to continuous neural networks embedded in a low dimensional space, in which the "units", i.e. locations ϕ within the field, are laterally coupled through an interaction kernel ω(ϕ, ϕ′) with a defined metric in that space, and receive external input S(ϕ, t). In the Amari equation for the field activation u(ϕ, t)

τ du(ϕ, t)/dt = −u(ϕ, t) + S(ϕ, t) + h + ∫_{−π}^{+π} ω(ϕ, ϕ′) σ(u(ϕ′, t)) dϕ′   (3)

the constant h defines the global mean activation level within the field and the transfer function σ(u) controls the local threshold of activation. Depending on the parameter h and the form of S, σ and ω, the activation dynamics (3) can have different types of solutions [1]. From the point of view of behavior generation, the homogeneous solution u(ϕ, t) = h and the localized solutions are of particular interest. The first represents the state in which no value of ϕ is specified, i.e. where no behavioral goal is given. The second represents the state in which one or more values ϕ_i are specified through the maxima of localized peaks of activation at the positions ϕ_i in the field, representing one or more behavioral goals.
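Both solution types can be observed in a small simulation. The sketch below discretizes a one-dimensional field of type (3) with a Mexican-hat kernel (local excitation, global inhibition); all numerical values are our own illustrative choices. A localized input creates a peak that survives after the input is removed, while without input the field stays in the homogeneous resting state:

```python
import numpy as np

# Discretized 1-D field dynamics of type (3); all parameters illustrative.
N = 101
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

k, sig, H0 = 1.0, 1.0, 0.15      # kernel: local excitation, global inhibition
h, tau, c = -0.5, 1.0, 10.0      # resting level h < 0, time scale, sigmoid gain

D = x[:, None] - x[None, :]                       # all pairwise offsets
W = k * np.exp(-D**2 / (2.0 * sig**2)) - H0       # interaction kernel omega

def simulate(u, S, steps, dt=0.05):
    """Euler steps of tau*du/dt = -u + S + h + int omega(x-x') sigma(u(x'))."""
    for _ in range(steps):
        u = u + dt / tau * (-u + S + h + W @ (1.0 / (1.0 + np.exp(-c * u))) * dx)
    return u

u_rest = simulate(np.full(N, h), np.zeros(N), steps=200)   # stays homogeneous

S_loc = 3.0 * np.exp(-(x + 4.0)**2 / 2.0)      # localized input at x = -4
u = simulate(np.full(N, h), S_loc, steps=200)  # a peak forms at the input site
u = simulate(u, np.zeros(N), steps=400)        # input off: the peak persists
```

The persistence of the peak after the input is switched off is the self-stabilizing property that the manipulator control below relies on.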

In our approach the activity distribution within the field does not describe neural activation: we describe the position of the robot's end-effector within a two dimensional horizontal plane parallel to the table by a local excitation peak (see Fig. 4).

By means of the mathematical concept of an asymmetric interaction kernel [10], we generate a grasping trajectory by setting the dynamic parameters of the neural field such that the peak moves within the field in the direction of the target object to grasp. By a simple transient dynamics, the vertical position of the field plane is driven towards the vertical position of the target object.

Obstacles inject negative input into the neural field depending on their position on the table and on their intersection with the current vertical position of the field plane. These regions are automatically avoided by the moving peak solution on the way to the target position.

Fig. 4. Generating grasping trajectories: the position of the end-effector is specified by a peak solution of a neural field. The plot shows the neural field over the table plane with the peak, the target position, the negative input injected by an obstacle, and the resulting trajectory.

Given a current end-effector position r_EEF = (x, y, z) in the three dimensional work space of the robot, we span a two dimensional space (x_f, y_f) parallel to the table. The vertical position of this plane is given by the z-coordinate of the end-effector (i.e. the height above the table). The origin r_{f,0} = (x_f, y_f) = (0, 0) of this space is horizontally fixed in the work space. In this two dimensional coordinate system we erect a neural field which is governed by the following equation:

τ du(r_f, t)/dt = −u(r_f, t) + s(r_f, t) + h + ∫_{−∞}^{∞} ω(r_f − r_f′) f(u(r_f′, t)) dr_f′   (4)

Herein, u(r_f, t) is the activity of the neural field at the position r_f, and ω(r_f − r_f′) is the interaction kernel, which defines the interaction between different field sites r_f and r_f′. The function f(u) = 1/(e^{−cu} + 1) with a positive constant c is a sigmoidal function which guarantees that only active field sites r_f with sufficiently positive activity u(r_f) are considered in the convolution in (4). The constant h < 0 defines a global activation level for the neural field. The input s(r_f, t) stems from the sensor system: if an obstacle in the workspace intersects the horizontal plane (x_f, y_f), the intersecting area specifies a negative input s < 0 for the field (see Fig. 4).

Initially, the interaction kernel ω(r_f − r_f′) is chosen as a Mexican-hat-shaped, rotationally symmetric Gaussian

ω_s(r_f − r_f′) = k exp(−|r_f − r_f′|² / (2σ²)) − H_0   (5)

with constants k, H_0 > 0. Using this interaction kernel, the neural field equation (4) allows for the so-called self-stabilizing monostable peak solution: if we inject a positive excitation into the field at the position of the end-effector, this peak will remain there even after the input has been switched off. Therefore, we use the position of such a peak to control the current end-effector position r_EEF.

To generate a movement of the peak, we use a method developed in [10]: we make the interaction kernel slightly asymmetric by overlaying its own spatial derivative

ω = ω_s + n_x ∂ω_s/∂e_x + n_y ∂ω_s/∂e_y.   (6)

Herein, ∂ω_s/∂e_{x,y} is the spatial derivative of ω_s (from equation (5)) taken in the direction of the elementary coordinate vectors e_x, e_y. The values n_x, n_y ∈ [−1, 1] define the strength of asymmetry in the interaction kernel. The effect of the kernel's asymmetry on the peak in the neural field is the following: the peak moves in the direction of the asymmetry, avoiding the areas of negative input s < 0, i.e. the areas where obstacles are located. Therefore, we set n_x, n_y such that this movement is directed towards the target location (x_t, y_t):

n_x = tanh(x_t − x_EEF),  n_y = tanh(y_t − y_EEF)   (7)

Herein, (x_EEF, y_EEF) specifies the current position of the end-effector and (x_t, y_t) the target position. The use of the hyperbolic tangent function ensures that |n_x|, |n_y| ≤ 1, which is needed to keep the Amari equation (4) stable.

How this mechanism works can be seen in the following simulation (Fig. 5):

In the initial position A, the distance to the target is large. Therefore, a strong asymmetry is introduced into the interaction kernel, which leads to a movement of the peak towards the target, while a simple transient dynamics moves the vertical position of the neural field plane towards the vertical position of the target object.

On this trajectory, the peak avoids the areas of negative input, i.e. the positions of the obstacles. In D the z-position of the neural field plane reaches the height of the obstacle, which injects negative input into the neural field according to its dimensions. Once the target position is reached, the asymmetry is switched off and the peak stops.

We use the position of the peak within the field to set the x, y-position of Cora's end-effector. The vertical z-position is obtained by driving the vertical position z_EEF from the initial location to the target location.

This results in a constant vertical lowering of the field plane if the initial position of the arm was vertically higher than the target position.

Fig. 5. Simulation of the neural field with asymmetric interaction kernel; the panels A to F show the field at t = 0, 1, 2, 5, 8 and 11 s, with the field plane at Z = 400, 360, 320, 280, 240 and 200 mm. The peak moves from the starting position to the target position (peak position in F), avoiding the negative inputs (obstacles) on the way.

By means of this lowering, the intersection of obstacles with the field plane, and thus the position of the negative inputs s into the field dynamics, may change (compare C and D in figure 5). However, the end-effector is able to avoid arising obstacles by moving in the horizontal plane.

The locations of the obstacles are obtained from a vision processing module by analysing the images captured by Cora's stereo camera system. However, a description of this visual processing is far beyond the scope of this article.

4 Results

The neural field dynamics described so far have been used to control a simulated robot in a MatLab simulation (see figure 4) and to control the real robot Cora. In this section we present our results on Cora by means of figure 6.

The dimensions of the 2D neural field are 61 neurons for x_f and 31 neurons for y_f (see section V): with a resolution of 40 mm per neuron, the neural field represents a workspace of 2400 mm x 1200 mm. The large size of the workspace is needed to represent every possible grasp position for right- and left-handed configurations. The processing time of 60 ms for one iteration is also a prerequisite for a real-time application like this.
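The quoted discretization can be checked with a short calculation: measuring the workspace between the outermost neuron centers, 61 and 31 neurons at 40 mm spacing span the stated extent.

```python
# Sanity check of the field discretization quoted above:
# spacing between neuron centers is 40 mm, so n neurons span
# (n - 1) * 40 mm between the outermost centers.
res_mm = 40
nx, ny = 61, 31
span_x = (nx - 1) * res_mm   # extent along x_f in mm
span_y = (ny - 1) * res_mm   # extent along y_f in mm
```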

In figure 6 a series of photos has been taken as the robot's arm moved towards the target object. The corresponding representation of the end-effector can be seen in the plots of figure 5.


Fig. 6. Cora's end-effector (panels A to F) follows the excitation peak of the neural field, which generates a trajectory to the target avoiding the obstacle.

The movement starts with the initial arm configuration at the position shown in A. In B and C a linear movement on the way to the target is shown. In D the end-effector penetrates the range of the obstacle and begins to change the direction of motion in order to avoid it (compare with figure 5 D, where the obstacle injects negative input into the neural field depending on its position and its dimensions).

At E and F the obstacle avoidance is completed and the target is reached.

5 Conclusion and Outlook

By means of two examples, navigation and manipulation, we have demonstrated the feasibility of the dynamic approach to robotics for controlling complex anthropomorphic robots. It should be noted that navigation and manipulator control are only two small parts of the overall behavioral repertoire of our robots, ranging from trajectory planning, exploration and acoustical guidance on Arnold [2] to gaze recognition, hand tracking and manipulation on Cora. Each elementary behavior, and also the mechanism for organizing all the behaviors into complex behavioral sequences, works on the basis of explicitly formulated nonlinear dynamics. Future research will deal with the problem of internal representation and simulation of behavior, which will also be based on the dynamic approach.

Acknowledgment

This work is supported by the BMBF grant MORPHA (925 52 12).

References

[1] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27:77–87, 1977.

[2] T. Bergener, C. Bruckhoff, P. Dahm, H. Janßen, F. Joublin, R. Menzner, A. Steinhage, and W. von Seelen. Complex behavior by means of dynamical systems for an anthropomorphic robot. Neural Networks, 12:1087–1099, 1999.

[3] E. Bicho and G. Schöner. The dynamic approach to autonomous robotics demonstrated on a low-level vehicle platform. Robotics and Autonomous Systems, 21:23–35, 1997.

[4] V. Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, Mass., 1984.

[5] R. A. Brooks. New approaches to robotics. Science, 253:1227–1232, 1991.

[6] H. Collewijn and C. Erkelens. Binocular eye movements and the perception of depth. In E. Kowler, editor, Eye Movements and Their Role in Visual and Cognitive Processes, pages 213–261. Elsevier Science Publishers B.V., Amsterdam, 1990.

[7] R. DeValois and K. DeValois. Spatial Vision. Oxford Psychology Series No. 14. Oxford University Press, Oxford, 1988.

[8] I. Iossifidis and A. Steinhage. Control of an 8 dof manipulator by means of neural fields. In FSR2001, International Conference on Field and Service Robotics, Helsinki, Finland, 2001. Submitted.

[9] I. Iossifidis. Visuelle Navigation auf einem mobilen autonomen Roboter. Master's thesis, Ruhr-Universität Bochum, Institut für Neuroinformatik, Lehrstuhl für Theoretische Biologie, 1999.

[10] R. Menzner, A. Steinhage, and W. Erlhagen. Generating interactive robot behavior: A mathematical approach. In J.-A. Meyer, A. Berthoz, D. Floreano, H. Roitblat, and S. W. Wilson, editors, From Animals to Animats 6: Proceedings of the Sixth International Conference on Simulation of Adaptive Behavior, pages 135–144. The MIT Press/Bradford Books, 2000.

[11] G. Schöner, M. Dose, and C. Engels. Dynamics of behavior: Theory and applications for autonomous robot architectures. Robotics and Autonomous Systems, 16:213–245, 1995.

[12] A. Steinhage. Dynamical Systems Generate Navigation Behavior. Ph.D. thesis, Berichte aus der Physik, SHAKER-Verlag, Aachen, Germany, 1998. ISBN 3-8265-3508-1.

[13] A. Steinhage and G. Schöner. Self-calibration based on invariant view recognition: Dynamic approach to navigation. Robotics and Autonomous Systems, 20:133–156, 1997.