springer complexity · 2013-07-19 · instabilities, nonlinearity, stochastic processes, chaos,...


Springer Complexity

Springer Complexity is a publication program, cutting across all traditional disciplines of sciences as well as engineering, economics, medicine, psychology and computer sciences, which is aimed at researchers, students and practitioners working in the field of complex systems. Complex systems are systems that comprise many interacting parts with the ability to generate a new quality of macroscopic collective behavior through self-organization, e.g., the spontaneous formation of temporal, spatial or functional structures. The recognition that the collective behavior of the whole system cannot simply be inferred from an understanding of the behavior of the individual components has led to various new concepts and sophisticated tools of complexity. The main concepts and tools – with sometimes overlapping contents and methodologies – are the theories of self-organization, complex systems, synergetics, dynamical systems, turbulence, catastrophes, instabilities, nonlinearity, stochastic processes, chaos, neural networks, cellular automata, adaptive systems, and genetic algorithms.

The topics treated within Springer Complexity are as diverse as lasers or fluids in physics, machine cutting phenomena of workpieces or electric circuits with feedback in engineering, growth of crystals or pattern formation in chemistry, morphogenesis in biology, brain function in neurology, behavior of stock exchange rates in economics, or the formation of public opinion in sociology. All these seemingly quite different kinds of structure formation have a number of important features and underlying structures in common. These deep structural similarities can be exploited to transfer analytical methods and understanding from one field to another. The Springer Complexity program therefore seeks to foster cross-fertilization between the disciplines and a dialogue between theoreticians and experimentalists for a deeper understanding of the general structure and behavior of complex systems.

The program consists of individual books, book series such as “Springer Series in Synergetics”, “Institute of Nonlinear Science”, “Physics of Neural Networks”, and “Understanding Complex Systems”, as well as various journals.


Springer Series in Synergetics

Series Editor

Hermann Haken
Institut für Theoretische Physik und Synergetik
der Universität Stuttgart
70550 Stuttgart, Germany

and

Center for Complex Systems
Florida Atlantic University
Boca Raton, FL 33431, USA

Members of the Editorial Board

Åke Andersson, Stockholm, Sweden
Gerhard Ertl, Berlin, Germany
Bernold Fiedler, Berlin, Germany
Yoshiki Kuramoto, Sapporo, Japan
Jürgen Kurths, Potsdam, Germany
Luigi Lugiato, Milan, Italy
Jürgen Parisi, Oldenburg, Germany
Peter Schuster, Wien, Austria
Frank Schweitzer, Zürich, Switzerland
Didier Sornette, Nice, France and Zürich, Switzerland
Manuel G. Velarde, Madrid, Spain

SSSyn – An Interdisciplinary Series on Complex Systems

The success of the Springer Series in Synergetics has been made possible by the contributions of outstanding authors who presented their quite often pioneering results to the science community well beyond the borders of a special discipline. Indeed, interdisciplinarity is one of the main features of this series. But interdisciplinarity is not enough: the main goal is the search for common features of self-organizing systems in a great variety of seemingly quite different systems, or, still more precisely speaking, the search for general principles underlying the spontaneous formation of spatial, temporal or functional structures. The topics treated may be as diverse as lasers and fluids in physics, pattern formation in chemistry, morphogenesis in biology, brain functions in neurology or self-organization in a city. As is witnessed by several volumes, great attention is being paid to the pivotal interplay between deterministic and stochastic processes, as well as to the dialogue between theoreticians and experimentalists. All this has contributed to a remarkable cross-fertilization between disciplines and to a deeper understanding of complex systems. The timeliness and potential of such an approach are also mirrored – among other indicators – by numerous interdisciplinary workshops and conferences all over the world.


Vadim S. Anishchenko
Vladimir Astakhov
Alexander Neiman
Tatjana Vadivasova
Lutz Schimansky-Geier

Nonlinear Dynamics of Chaotic and Stochastic Systems
Tutorial and Modern Developments

Second Edition

With 222 Figures



Professor Dr. Vadim S. Anishchenko
Professor Dr. Vladimir Astakhov
Professor Dr. Tatjana Vadivasova
Saratov State University
Faculty of Physics
Astrakhanskaya ul. 83
410026 Saratov, Russia
E-mail: [email protected]
[email protected]
[email protected]@chaos.ssu.runnet.ru

Professor Dr. Alexander Neiman
University of Missouri at St. Louis
Center for Neurodynamics
8001 Natural Bridge Road
St. Louis, MO 63121, USA
E-mail: [email protected]

Professor Dr. Lutz Schimansky-Geier
Humboldt-Universität Berlin
Institut Physik
LS Stochastische Prozesse
Newtonstr. 15
12489 Berlin, Germany
E-mail: [email protected]

Library of Congress Control Number: 2006935959

ISSN 0172-7389
ISBN-10 3-540-38164-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-38164-8 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2007

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: by the author and techbooks using a Springer LaTeX macro package
Cover design: Erich Kirchner, Heidelberg

Printed on acid-free paper SPIN: 11833901 54/techbooks 5 4 3 2 1 0


To our teachers and friends:

Werner Ebeling, Yuri L. Klimontovich and Frank Moss


Preface to the Second Edition

We present an improved and enlarged version of our book Nonlinear Dynamics of Chaotic and Stochastic Systems, published by Springer in 2002. In its overall structure, the new edition follows the first version. While preparing this edition we clarified several sections and corrected the misprints noticed in some formulas. In addition, three new sections have been added to Chapter 2: “Statistical Properties of Dynamical Chaos,” “Effects of Synchronization in Extended Self-Sustained Oscillatory Systems,” and “Synchronization in Living Systems.” These sections reflect the most interesting results obtained by the authors after the publication of the first edition.

We hope that the new edition of the book will be of great interest to a wide circle of readers, both specialists and those beginning research in the fields of nonlinear oscillation and wave theory, dynamical chaos, synchronization, and the theory of stochastic processes.

Saratov, Berlin, and St. Louis
November 2006

V.S. Anishchenko
A.B. Neiman
T.E. Vadivasova
V.V. Astakhov
L. Schimansky-Geier


Preface to the First Edition

This book is devoted to the classical background and to contemporary results on the nonlinear dynamics of deterministic and stochastic systems. Considerable attention is given to the effects of noise on various regimes of dynamic systems with noise-induced order.

On the one hand, there exists a rich literature of excellent books on nonlinear dynamics and chaos; on the other hand, there are many marvelous monographs and textbooks on the statistical physics of far-from-equilibrium systems and stochastic processes. This book is an attempt to combine the approach of nonlinear dynamics, based on deterministic evolution equations, with the approach of statistical physics, based on stochastic or kinetic equations. One of our main aims is to show the important role of noise in the organization and properties of dynamic regimes of nonlinear dissipative systems.

We cover a limited region in the interesting and still expanding field of nonlinear dynamics. Nowadays the variety of topics concerning deterministic and stochastic dynamic systems is extremely large. Three main criteria guided the writing of the book and shaped a reasonable, self-contained presentation: (i) the dynamic model should be minimal, that is, most transparent in the physical and mathematical sense; (ii) the model should be the simplest one that nevertheless clearly demonstrates the most important features of the phenomenon under consideration; and (iii) most attention is paid to models and phenomena with which the authors have gained extensive experience in recent years.

The book consists of three chapters. The first chapter serves as a brief introduction, giving the fundamental background of the theory of nonlinear deterministic and stochastic systems and the classical theory of the synchronization of periodic oscillations. All basic definitions and notions necessary for studying the subsequent chapters without referring to special literature are presented.

The second chapter is devoted to deterministic chaos. We discuss various scenarios of the onset of chaos, including the problem of the destruction of two- and three-frequency quasiperiodic motion. Different aspects of synchronization and chaos control, as well as the methods of reconstruction of attractors and dynamic systems from experimental time series, are also discussed.


The third chapter is concerned with stochastic systems whose dynamics essentially depend on the influence of noise. Several nonlinear phenomena are discussed: stochastic resonance in dynamic systems subjected to harmonic and complex signals and noise, stochastic synchronization, and stochastic ratchets, which are the noise-induced ordered and directed transport of Brownian particles moving in bistable and periodic potentials. Special attention is given to the role of noise in excitable dynamics.

The book is directed at a large circle of possible readers in the natural sciences. The first chapter will be helpful for undergraduate and graduate students in physics, chemistry, biology and economics, as well as for lecturers in these fields interested in modern problems of nonlinear dynamics. Specialists in nonlinear dynamics may use this part as an extended dictionary. The second and third chapters of the book are addressed to specialists in the field of mathematical modeling of the complex dynamics of nonlinear systems in the presence of noise.

We tried to write this book in such a manner that each of the three chapters can be understood, in most parts, independently of the others. In particular, each chapter has its own list of references. This choice is based on the desire to be helpful to the reader. Undoubtedly, the lists of references are incomplete, since there exists an enormously large number of publications devoted to the topics considered in this book.

This book is a result of the long-term collaboration of the Nonlinear Dynamics Laboratory at Saratov State University, the group of Applied Stochastic Processes of Humboldt University at Berlin, and the Center for Neurodynamics at the University of Missouri at St. Louis. We want to express our deep gratitude to W. Ebeling, Yu.L. Klimontovich and F. Moss for their support, scientific exchange and constant interest. We acknowledge fruitful discussions with C. van den Broeck, P. Hänggi, J. Kurths, A. Longtin, A. Pikovski and Yu.M. Romanovski. The book has benefited a lot from our coauthors of the original literature. We wish to thank A. Balanov, R. Bartussek, V. Bucholtz, I. Diksthein, J.A. Freund, J. García-Ojalvo, M. Hasler, N. Janson, T. Kapitaniak, I. Khovanov, M. Kostur, P.S. Landa, B. Lindner, P. McClintock, E. Mosekilde, A. Pavlov, T. Pöschel, D. Postnov, P. Reimann, R. Rozenfeld, P. Ruszczynsky, A. Shabunin, B. Shulgin, U. Siewert, A. Silchenko, O. Sosnovtseva, A. Zaikin and C. Zülicke for regular and fruitful discussions, criticism and valuable remarks which gave us deeper insight into the problems we study.

We acknowledge the Series Editor H. Haken for fruitful comments on the manuscript. We thank P. Talkner, F. Marchesoni, M. Santos and coworkers and students from the group at Humboldt University of Berlin for helpful remarks and comments during proofreading.

We are especially grateful to Ms. Galina Strelkova for her great work in preparing the manuscript and for translating several parts of this book into English, and to A. Klimshin for technical assistance.


V.S. Anishchenko, T.E. Vadivasova and V.V. Astakhov acknowledge support from the US Civilian Research and Development Foundation under Grant No. REC-006, and from the Russian Foundation for Basic Research; V.S. Anishchenko acknowledges support from the Alexander von Humboldt Foundation. A.B. Neiman was supported by the Fetzer Institute and by the US Office of Naval Research, Physics Division. L. Schimansky-Geier acknowledges support from the Deutsche Forschungsgemeinschaft (Sfb 555 and GK 268).

Saratov, Berlin and St. Louis
December 2001

V.S. Anishchenko
A.B. Neiman
T.E. Vadivasova
V.V. Astakhov
L. Schimansky-Geier


Contents

1. Tutorial . . . 1
   1.1 Dynamical Systems . . . 1
       1.1.1 Introduction . . . 1
       1.1.2 The Dynamical System and Its Mathematical Model . . . 1
       1.1.3 Stability – Linear Approach . . . 8
       1.1.4 Bifurcations of Dynamical Systems, Catastrophes . . . 17
       1.1.5 Attractors of Dynamical Systems. Deterministic Chaos . . . 29
       1.1.6 Summary . . . 36
   1.2 Fluctuations in Dynamical Systems . . . 37
       1.2.1 Introduction . . . 37
       1.2.2 Basic Concepts of Stochastic Dynamics . . . 38
       1.2.3 Noise in Dynamical Systems . . . 46
       1.2.4 The Fokker–Planck Equation . . . 55
       1.2.5 Stochastic Oscillators . . . 63
       1.2.6 The Escape Problem . . . 70
       1.2.7 Summary . . . 80
   1.3 Synchronization of Periodic Systems . . . 80
       1.3.1 Introduction . . . 80
       1.3.2 Resonance in Periodically Driven Linear Dissipative Oscillators . . . 82
       1.3.3 Synchronization of the Van der Pol Oscillator. Classical Theory . . . 84
       1.3.4 Synchronization in the Presence of Noise. Effective Synchronization . . . 93
       1.3.5 Phase Description . . . 97
       1.3.6 Summary . . . 101
   References . . . 102

2. Dynamical Chaos . . . 109
   2.1 Routes to Chaos . . . 109
       2.1.1 Introduction . . . 109
       2.1.2 Period-Doubling Cascade Route. Feigenbaum Universality . . . 110
       2.1.3 Crisis and Intermittency . . . 118
       2.1.4 Route to Chaos via Two-Dimensional Torus Destruction . . . 121
       2.1.5 Route to Chaos via a Three-Dimensional Torus. Chaos on T³. Chaotic Nonstrange Attractors . . . 130
       2.1.6 Route to Chaos via Ergodic Torus Destruction. Strange Nonchaotic Attractors . . . 133
       2.1.7 Summary . . . 139
   2.2 Statistical Properties of Dynamical Chaos . . . 139
       2.2.1 Introduction . . . 139
       2.2.2 Diagnosis of Hyperbolicity in Chaotic Systems . . . 141
       2.2.3 Chaos in the Presence of Noise . . . 143
       2.2.4 Relaxation to a Stationary Probability Distribution for Chaotic Attractors in the Presence of Noise . . . 144
       2.2.5 Spectral-Correlation Analysis of Dynamical Chaos . . . 150
       2.2.6 Phase Diffusion in an Active Inhomogeneous Medium Described by the Ginzburg–Landau Equation . . . 155
       2.2.7 The Autocorrelation Function and Power Spectrum of Spiral Chaos in Physical Experiments . . . 160
       2.2.8 Summary . . . 163
   2.3 Synchronization of Chaos . . . 164
       2.3.1 Introduction . . . 164
       2.3.2 Phase–Frequency Synchronization of Chaos. The Classical Approach . . . 165
       2.3.3 Complete and Partial Synchronization of Chaos . . . 171
       2.3.4 Phase Multistability in the Region of Chaos Synchronization . . . 176
       2.3.5 Bifurcation Mechanisms of Partial and Complete Chaos Synchronization Loss . . . 180
       2.3.6 Summary . . . 184
   2.4 Effects of Synchronization in Extended Self-Sustained Oscillatory Systems . . . 185
       2.4.1 Introduction . . . 185
       2.4.2 Cluster Synchronization in an Inhomogeneous Chain of Quasiharmonic Oscillators . . . 187
       2.4.3 Effect of Noise on Cluster Synchronization in a Chain of Quasiharmonic Oscillators . . . 190
       2.4.4 Cluster Synchronization in an Inhomogeneous Self-Sustained Oscillatory Medium . . . 195
       2.4.5 Cluster Synchronization in Interacting Inhomogeneous Media . . . 200
       2.4.6 Forced Synchronization of a Chain of Chaotic Self-Sustained Oscillators . . . 203
       2.4.7 Synchronization and Multistability in a Ring of Anharmonical Oscillators . . . 207
       2.4.8 Synchronization and Multistability in a Ring of Oscillators with Period Doubling . . . 217
       2.4.9 Summary . . . 224
   2.5 Synchronization in Living Systems . . . 225
       2.5.1 Introduction . . . 225
       2.5.2 Stochastic Synchronization of Electroreceptors in the Paddlefish . . . 225
       2.5.3 Synchronization of Cardiorhythm . . . 228
       2.5.4 Summary . . . 232
   2.6 Controlling Chaos . . . 233
       2.6.1 Introduction . . . 233
       2.6.2 Controlled Anti-Phase Synchronization of Chaos in Coupled Cubic Maps . . . 235
       2.6.3 Control and Synchronization of Chaos in a System of Mutually Coupled Oscillators . . . 242
       2.6.4 Controlled Chaos Synchronization by Means of Periodic Parametric Perturbations . . . 248
       2.6.5 Stabilization of Spatio-Homogeneous Motions by Parametric Perturbations . . . 251
       2.6.6 Controlling Chaos in Coupled Map Lattices . . . 254
       2.6.7 Summary . . . 262
   2.7 Reconstruction of Dynamical Systems . . . 264
       2.7.1 Introduction . . . 264
       2.7.2 Reconstruction of Attractors from Time Series . . . 265
       2.7.3 Global Reconstruction of DS . . . 275
       2.7.4 Reconstruction from Biological Data . . . 281
       2.7.5 Global Reconstruction in Application to Confidential Communication . . . 286
       2.7.6 Summary . . . 291
   References . . . 292

3. Stochastic Dynamics . . . 307
   3.1 Stochastic Resonance . . . 307
       3.1.1 Introduction . . . 307
       3.1.2 Stochastic Resonance: Physical Background . . . 309
       3.1.3 Characteristics of Stochastic Resonance . . . 311
       3.1.4 Response to a Weak Signal. Theoretical Approaches . . . 313
       3.1.5 Array-Enhanced Stochastic Resonance . . . 320
       3.1.6 Doubly Stochastic Resonance in Systems with Noise-Induced Phase Transition . . . 331
       3.1.7 Stochastic Resonance for Signals with a Complex Spectrum . . . 336
       3.1.8 Stochastic Resonance in Chaotic Systems with Coexisting Attractors . . . 343
       3.1.9 Analog Simulation . . . 348
       3.1.10 Summary . . . 350
   3.2 Synchronization of Stochastic Systems . . . 351
       3.2.1 Introduction . . . 351
       3.2.2 Synchronization and Stochastic Resonance . . . 352
       3.2.3 Forced Stochastic Synchronization of the Schmitt Trigger . . . 360
       3.2.4 Mutual Stochastic Synchronization of Coupled Bistable Systems . . . 364
       3.2.5 Forced and Mutual Synchronization of Switchings in Chaotic Systems . . . 367
       3.2.6 Stochastic Synchronization of Ensembles of Stochastic Resonators . . . 372
       3.2.7 Stochastic Synchronization as Noise-Enhanced Order . . . 377
       3.2.8 Summary . . . 380
   3.3 The Beneficial Role of Noise in Excitable Systems . . . 381
       3.3.1 Coherence Resonance Near Bifurcations of Periodic Solutions of a Dynamical System . . . 381
       3.3.2 Coherence Resonance in Excitable Dynamics . . . 383
       3.3.3 Noise-Enhanced Synchronization of Coupled Excitable Systems . . . 394
       3.3.4 Summary . . . 398
   3.4 Noise-Induced Transport . . . 399
       3.4.1 Introduction . . . 399
       3.4.2 Flashing and Rocking Ratchets . . . 401
       3.4.3 The Adiabatic Approach . . . 404
       3.4.4 The Overdamped Correlation Ratchet . . . 407
       3.4.5 Particle Separation by Ratchets Driven by Colored Noise . . . 409
       3.4.6 Two-Dimensional Ratchets . . . 414
       3.4.7 Discrete Ratchets . . . 418
       3.4.8 Sawtooth-like Media . . . 425
       3.4.9 Making Spatial Structures Using Ratchets . . . 428
       3.4.10 Summary . . . 433
   References . . . 434

Index . . . 445


1. Tutorial

1.1 Dynamical Systems

1.1.1 Introduction

Nonlinear dynamics is based on the notion of a dynamical system (DS). A DS may be thought of as an object of any nature whose state evolves in time according to some dynamical law, i.e., as a result of the action of a deterministic evolution operator. Thus, the notion of a DS is the result of a certain amount of idealization, in which the random factors inevitably present in any real system are neglected.

The theory of DS is a wide and independent field of scientific research. The present section addresses only those parts which are used in the subsequent chapters of this book. The main attention is paid to a linear analysis of the stability of solutions of ordinary differential equations. We also describe local and nonlocal bifurcations of typical limit sets and present a classification of attractors of DS.

The structure of chaotic attractors defines the properties of regimes of deterministic chaos in DS. It is known that the classical understanding of dynamical chaos is based on the properties of robust hyperbolic (strange) attractors. Besides hyperbolic attractors, we also consider in more detail the peculiarities of nonhyperbolic attractors (quasiattractors). This sort of chaotic attractor reflects to a great extent the properties of deterministic chaos in real systems and serves as the mathematical image of experimentally observed chaos.
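As an illustration of ours (not an example from the book), the standard quantitative signature of deterministic chaos is a positive largest Lyapunov exponent. A minimal sketch, using the logistic map as an illustrative system, estimates the exponent by averaging log |f′(x)| along a trajectory:

```python
import math

def logistic(x, r=4.0):
    """One step of the logistic map f(x) = r x (1 - x), a textbook chaotic system."""
    return r * x * (1.0 - x)

def largest_lyapunov(x0=0.1, r=4.0, n_transient=1_000, n_steps=100_000):
    """Estimate the largest Lyapunov exponent as the trajectory average of log|f'(x)|."""
    x = x0
    for _ in range(n_transient):  # let the orbit settle onto the attractor
        x = logistic(x, r)
    total = 0.0
    for _ in range(n_steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)| for the logistic map
        x = logistic(x, r)
    return total / n_steps

print(largest_lyapunov())  # positive, close to ln 2 at r = 4
```

A positive value indicates exponential divergence of nearby trajectories, the hallmark of the chaotic attractors discussed above.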

1.1.2 The Dynamical System and Its Mathematical Model

A DS has an associated mathematical model. The latter is considered to be defined if the system state, as a set of some quantities or functions, is determined and an evolution operator is specified which gives a correspondence between the initial state of the system and a unique state at each subsequent time moment. The evolution operator may be represented, for example, as a set of differential, integral and integro-differential equations, of discrete maps, or in the form of matrices, graphs, etc. The form of the mathematical model of the DS under study depends on which method of description is chosen.
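The "state plus evolution operator" view can be made concrete in a few lines of code. The following Python sketch is our own illustration, not an example from the text; the logistic map and the parameter value a = 2.5 are assumed purely for demonstration of a discrete-time evolution operator.

```python
# A minimal sketch of a DS as "state + deterministic evolution operator".
# The logistic map x -> a*x*(1 - x) is a hypothetical example of a
# discrete-time evolution law (not taken from the text above).

def evolve(state, steps, a=2.5):
    """Apply the evolution operator `steps` times to an initial state."""
    x = state
    for _ in range(steps):
        x = a * x * (1.0 - x)   # deterministic law: next state from current
    return x

# The operator maps every initial state to a unique state at each later time:
x_final = evolve(0.1, 100)      # for a = 2.5 the orbit settles near 0.6
```

For this choice of `a` the orbit approaches the fixed point x = 1 − 1/a = 0.6, which already hints at the limit sets discussed later in the section.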


Depending on the approximation degree and on the problem to be studied, the same real system can be associated with principally different mathematical models, e.g., a pendulum with and without friction. Moreover, from a qualitative viewpoint, we can often introduce into consideration a DS, e.g., the cardiovascular system of a living organism, but it is not always possible to define its mathematical model.

DS are classified based on the form of the state definition and on the properties and method of description of the evolution operator. The set of some quantities xj, j = 1, 2, . . . , N, or functions xj(r), r ∈ RM, determines the state of a system. Here, the xj are referred to as dynamical variables; they are directly related to the quantitative characteristics observed and measured in real systems (current, voltage, velocity, temperature, concentration, population size, etc.). The set of all possible states of the system is called its phase space. If the xj are variables and not functions and their number N is finite, the system phase space RN has a finite dimension. Systems with finite-dimensional phase space are referred to as systems with lumped parameters, because their parameters are not functions of the spatial coordinates. Such systems are described by ordinary differential equations or return maps.

However, there is a wide class of systems with infinite-dimensional phase space. If the dynamical variables xj of a system are functions of some variables rk, k = 1, 2, . . . , M, the system phase space is infinite-dimensional. As a rule, the rk represent spatial coordinates, and thus the system parameters depend on a point in space. Such systems are called distributed parameter or simply distributed systems. They are often represented by partial differential equations or integral equations. One more example of a system with infinite-dimensional phase space is a system whose evolution operator includes a time delay Td. In this case the system state is also defined by a set of functions xj(t), t ∈ [0, Td].

Several classes of DS can be distinguished depending on the properties of the evolution operator. If the evolution operator obeys the superposition property, i.e., it is linear, then the corresponding system is linear; otherwise the system is nonlinear. If the system state and the evolution operator are specified for any time moment, we deal with a time-continuous system. If the system state is defined only at separate (discrete) time moments, we have a system with discrete time (a map, or cascade). For cascades, the evolution operator is usually defined by the first return function, or return map. If the evolution operator depends only implicitly on time, the corresponding system is autonomous, i.e., it contains no additive or multiplicative external forces depending explicitly on time; otherwise we deal with a nonautonomous system. Two further kinds of DS are distinguished, namely, conservative and nonconservative. For a conservative system, the volume in phase space is conserved as time evolves. For a nonconservative system, the volume is usually contracted. The contraction of phase volume in mechanical systems corresponds to a loss of energy as a result of dissipation. A growth of phase volume implies a supply of energy to the system, which can be called negative dissipation. Therefore, DS in which the energy or phase volume varies are called dissipative systems.

Among the wide class of DS, a special place is occupied by systems which can demonstrate oscillations, i.e., processes showing full or partial repetition. Oscillatory systems, as well as DS in general, are divided into linear and nonlinear, lumped and distributed, conservative and dissipative, and autonomous and nonautonomous. A special class is formed by the so-called self-sustained systems.

Nonlinear dissipative systems in which nondecaying oscillations can appear and be sustained without any external force are called self-sustained systems, and the oscillations themselves in such systems are called self-sustained oscillations. The energy lost through dissipation in a self-sustained system is compensated by an external source. A peculiarity of self-sustained oscillations is that their characteristics (amplitude, frequency, waveform, etc.) do not depend on the properties of the power source and are preserved under (at least small) variations of the initial conditions [1].

Phase Portraits of Dynamical Systems. A method for analyzing oscillations of DS by means of their graphical representation in phase space was introduced into the theory of oscillations by L.I. Mandelstam and A.A. Andronov [1]. Since then, this method has become a customary tool for studying various oscillatory phenomena. When oscillations of complex form, i.e., dynamical chaos, were discovered, this method increased in importance. The analysis of phase portraits of complex oscillatory processes allows one to judge the topological structure of a chaotic limit set and sometimes to make valid guesses and assumptions which prove valuable in further investigations [2].

Let the DS under study be described by ordinary differential equations

ẋj = fj(x1, x2, . . . , xN ) , (1.1)

where j = 1, 2, . . . , N , or in vector form

ẋ = F (x) . (1.2)

x represents a vector with components xj, the index j runs over j = 1, 2, . . . , N, and F (x) is a vector function with components fj(x). The set of N dynamical variables xj, or the N-dimensional vector x, determines the system state, which can be viewed as a point in the state space RN. This point is called a representative or phase point, and the state space RN is called the phase space of the DS. The motion of a phase point corresponds to the time evolution of a state of the system. The trajectory of a phase point, starting from some initial point x0 = x(t0) and followed as t → ±∞, represents a phase trajectory. The similar notion of integral curves is sometimes used. These curves are described by equations dxj/dxk = Φ(x1, x2, . . . , xN ), where xk is one of the dynamical variables. An integral curve and a phase trajectory often coincide, but the integral curve may consist of several phase trajectories if it passes through a singular point. The right-hand side of (1.2) defines the velocity vector field F (x) of a phase point in the system phase space. Points in phase space for which fj(x) = 0, j = 1, 2, . . . , N, remain unchanged with time. They are called fixed points, singular points or equilibrium points of the DS. A set of characteristic phase trajectories in the phase space represents the phase portrait of the DS.

Besides the phase space dimension N, the number of degrees of freedom n = N/2 is often introduced. This tradition came from mechanics, where a system is considered as a set of mass points, each being described by a second-order equation of motion. n generalized coordinates and n generalized momenta are introduced, so that the total number of dynamical variables N = 2n appears to be even and the number of independent generalized coordinates n (the number of degrees of freedom) is an integer. For an arbitrary DS (1.1) the number of degrees of freedom will be, in general, a multiple of 0.5.

Consider the harmonic oscillator

ẍ + ω0²x = 0 . (1.3)

Its phase portrait is shown in Fig. 1.1a and represents a family of concentric ellipses (in the case ω0 = 1, circles) in the plane x1 = x, x2 = ẋ, centered at the origin of coordinates:

ω0²x1²/2 + x2²/2 = H(x1, x2) = const . (1.4)

Each value of the total energy H(x1, x2) corresponds to its own ellipse. At the origin we have the equilibrium state called a center. When dissipation is added to the linear oscillator, phase trajectories starting from any point in the phase plane approach the equilibrium at the origin in the limit as t → ∞. If the dissipation is low, the phase trajectories look like spirals twisting towards the origin (Fig. 1.1b), and the solutions of the damped harmonic oscillator correspond to decaying oscillations. In this case the equilibrium state is a stable focus. With increasing damping coefficient, the solutions become aperiodic and correspond to the phase portrait shown in Fig. 1.1c, with the equilibrium called a stable node.
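The two pictures above can be checked numerically. In the following sketch (our own illustration, with ω0 = 1 and a damping coefficient 0.2 assumed) the energy H of (1.4) stays constant along an orbit of (1.3), so the orbit is an ellipse around the center, while adding dissipation makes H decay and the trajectory spiral into the stable focus at the origin.

```python
# Sketch (illustrative, omega0 = 1 assumed): energy H along trajectories of
# the harmonic oscillator (1.3) with and without dissipation.

def rk4(f, y, dt):
    # one classical Runge-Kutta step for a 2-component system
    k1 = f(y); k2 = f((y[0]+dt/2*k1[0], y[1]+dt/2*k1[1]))
    k3 = f((y[0]+dt/2*k2[0], y[1]+dt/2*k2[1]))
    k4 = f((y[0]+dt*k3[0], y[1]+dt*k3[1]))
    return (y[0] + dt/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            y[1] + dt/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

def H(y):
    return 0.5 * y[0]**2 + 0.5 * y[1]**2          # (1.4) with omega0 = 1

conservative = lambda y: (y[1], -y[0])            # x1' = x2, x2' = -x1
damped = lambda y: (y[1], -y[0] - 0.2 * y[1])     # weak dissipation added

ya, yb, dt = (1.0, 0.0), (1.0, 0.0), 0.01
for _ in range(5000):                              # integrate to t = 50
    ya, yb = rk4(conservative, ya, dt), rk4(damped, yb, dt)

drift = abs(H(ya) - 0.5)   # remains ~ 0: orbit stays on its ellipse (center)
decay = H(yb)              # decays to ~ 0: spiral into the stable focus
```

The conserved H confirms that the closed curves of Fig. 1.1a are level sets of (1.4), while the damped run reproduces the contraction of phase volume described above.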

By using a potential function U(x), it is easy to construct qualitatively the phase portrait of a nonlinear conservative oscillator governed by

ẍ + ∂U(x)/∂x = 0 .

An example of such a construction is given in Fig. 1.2. Minima of the potential function correspond to center-type equilibrium states. In a potential well about each center, a family of closed curves is arranged which correspond to different values of the integral of energy H(x, ẋ). In the nearest neighborhood


Fig. 1.1. Phase portraits of linear oscillators: (a) without dissipation, (b) with low dissipation, and (c) with high dissipation

of the center these curves have an ellipse-like shape, but they are deformed as one moves away from the center. Maxima of U(x) correspond to equilibria called saddles. Such equilibrium states are unstable. Phase trajectories tending to the saddle Q (Fig. 1.2) as t → ±∞ belong to singular integral curves called separatrices of the saddle Q. The pair of trajectories approaching the saddle forward in time forms its stable manifold W^s_Q, and the pair of trajectories tending to the saddle backward in time forms its unstable manifold W^u_Q. Separatrices divide the phase space into regions with principally different behavior of phase trajectories. They can also close, forming separatrix loops (Fig. 1.2).

Fig. 1.2. Qualitative construction of the phase portrait of a nonlinear conservative oscillator using the potential function U(x)

Phase portraits of nonautonomous systems have some peculiarities. One of them is that phase trajectories are not located within a certain bounded region of the phase space, since the time variable t varies from −∞ to +∞. A periodically driven nonautonomous system can be reduced to an autonomous one by introducing the phase of the external forcing, Ψ = Ωt, and by adding the equation Ψ̇ = Ω. However, if Ψ is left unwrapped, such a new variable yields nothing and the phase trajectories remain unbounded as before. For the phase trajectories to be bounded, one needs to introduce a cylindrical phase space (in general, a multi-dimensional one), taking into account that Ψ ∈ [0, 2π]. Figure 1.3 represents phase trajectories of a nonautonomous system lying on a cylinder (for visualization purposes only one dynamical variable, x, is shown).

Consider the Van der Pol oscillator

ẍ − (α − x²)ẋ + ω0²x = 0 . (1.5)

For α > 0 and as t → ∞, the self-sustained system (1.5) demonstrates periodic oscillations independent of the initial conditions. These oscillations follow a closed isolated curve called the Andronov–Poincaré limit cycle [1]. All phase trajectories of (1.5), starting from different points of the phase plane, approach the limit cycle as t → ∞. The only exception is the equilibrium at the origin. For small α the limit cycle resembles an ellipse in shape and the equilibrium at the origin is an unstable focus. The corresponding phase portrait and time dependence x(t) are shown in Fig. 1.4a. With increasing α the limit cycle is distorted and the character of the equilibrium changes. For α > 2ω0, the unstable focus is transformed into an unstable node and the duration of the transient process significantly decreases (Fig. 1.4b).

Fig. 1.3. Phase portraits of the nonautonomous system ẋ = f(x) + B sin Ωt: (a) for Ψ = Ωt defined in the interval (−∞, +∞) and (b) for Ψ ∈ [0, 2π]


Fig. 1.4. Phase portraits and waveforms of the Van der Pol oscillator (1.5) with ω0 = 1: (a) α = 0.1; (b) α = 10. Here x1 = x, x2 = ẋ

Phase portraits of three-dimensional systems are not so illustrative. In this case it is reasonable to introduce a plane or a surface of section such that all trajectories intersect it at a nonzero angle. On the secant plane we obtain a set of points corresponding to different phase trajectories of the original system, which can give us an idea of the system phase portrait. The points usually considered are those which appear when the system trajectory intersects the secant plane in one direction, as shown in Fig. 1.5. The evolution operator uniquely (but not one-to-one) determines a map of the secant plane to itself, called the first return function or the Poincaré map [3]. The Poincaré map reduces the dimension of the sets being considered by one and thus makes the system phase portrait more descriptive. Finite sequences of points (periodic orbits of the map) correspond to closed curves (limit cycles of the initial system), and infinite sequences of points correspond to non-periodic trajectories.

Fig. 1.5. The Poincaré section
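A return map can already be computed for a planar example. In the sketch below (our own illustration, with damping g = 0.1 assumed) the damped oscillator ẍ + 2gẋ + x = 0 is sampled on the section x2 = 0 crossed downwards (the maxima of x1); successive section points obey a linear return map x1 → q·x1 with q = exp(−2πg/√(1 − g²)), a contraction whose fixed point is the stable focus.

```python
# Sketch: first return map of the damped oscillator x'' + 2*g*x' + x = 0
# on the section x2 = 0 (crossed from positive to negative). g = 0.1 assumed.
import math

def step(y, dt, g=0.1):
    def f(s):
        return (s[1], -2.0 * g * s[1] - s[0])
    k1 = f(y); k2 = f((y[0]+dt/2*k1[0], y[1]+dt/2*k1[1]))
    k3 = f((y[0]+dt/2*k2[0], y[1]+dt/2*k2[1]))
    k4 = f((y[0]+dt*k3[0], y[1]+dt*k3[1]))
    return (y[0] + dt/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            y[1] + dt/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

y, dt, crossings = (1.0, 0.0), 0.001, []
for _ in range(60000):                      # t in [0, 60]
    y_new = step(y, dt)
    if y[1] > 0.0 and y_new[1] <= 0.0:      # x2 changes sign: maximum of x1
        crossings.append(y_new[0])          # a point of the return map
    y = y_new

ratios = [b / a for a, b in zip(crossings, crossings[1:])]
q_theory = math.exp(-2 * math.pi * 0.1 / math.sqrt(1 - 0.01))
```

The ratios of successive section points are all close to q_theory, showing how the one-dimensional Poincaré map captures the contraction toward the focus in a lower-dimensional picture.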

1.1.3 Stability – Linear Approach

The property of stability or instability is one of the most important characteristics of the motions realized in a system. There are several notions of stability, namely, stability according to Lyapunov, asymptotic stability, orbital stability and stability according to Poisson [2, 4, 5]. The motion x∗(t) is said to be stable according to Lyapunov if for any arbitrarily small ε > 0 there exists δ(ε) > 0 such that for any motion x(t) for which ‖x(t0) − x∗(t0)‖ < δ, the inequality ‖x(t) − x∗(t)‖ < ε is satisfied for all t > t0. Here ‖·‖ denotes the vector norm. Thus, a small initial perturbation does not grow for a motion that is stable according to Lyapunov. If, additionally, the small perturbation vanishes as time goes on, i.e., ‖x(t) − x∗(t)‖ → 0 as t → ∞, the motion possesses a stronger stability property, namely, asymptotic stability. Any asymptotically stable motion is stable according to Lyapunov but, in the general case, the opposite is not true.

The definition of orbital stability is somewhat different from that of stability according to Lyapunov. In the latter case the distance between phase points of the studied and the perturbed motions is considered at the same time moments. Orbital stability takes into account the minimal distance between the phase point of the perturbed trajectory at a given time moment t and the orbit Γ∗ corresponding to the motion under study. A motion that is stable according to Lyapunov is always orbitally stable. In the general case, the opposite statement is not valid.

The weakest requirement is the stability of the motion x∗(t) according to Poisson. Its definition assumes that the phase trajectory does not leave a bounded region of the phase space as t → ∞. Spending an infinitely long period of time inside this region, the phase trajectory inevitably returns to an arbitrarily small neighborhood of the initial point. The return times may correspond to the period or quasiperiod of a regular motion, or may represent a random sequence in the regime of dynamical chaos.

The stability properties of phase trajectories belonging to limit sets, e.g., attractors, are of special importance for understanding the system dynamics. A change in the stability character of one or another limit set can lead to a considerable modification of the system phase portrait.

Limit Sets of a Dynamical System. Let the state of a system be specified by vector x0 at the time moment t0, and by vector x(t) = T∆t x0 at the instant t, where T∆t is the evolution operator on the interval ∆t = t − t0. Assume that there are two sets, V and L ∈ V, where V is the set of all points x0 in the phase space for which x(t) ∈ L as t → +∞ or t → −∞. We call L the limit set of the DS.

Consider the possible types of limit sets of a dissipative DS. If all points of the set V belong to L in the limit as t → +∞, then the limit set L is attracting, i.e., an attractor. Consequently, V is the basin of attraction of the attractor L. If points of V belong to L in the limit as t → −∞, the set L is repelling, i.e., a repeller. If the set V consists of two subsets, V = W^s ∪ W^u, and points belonging to W^s approach L forward in time, while points belonging to W^u tend to L backward in time, then L is called a saddle limit set or simply a saddle. The sets W^s and W^u are the stable and unstable manifolds of the saddle, respectively. Under time inversion, t → −t, regular attractors of the system become repellers, repellers are transformed into attractors, and the roles of stable and unstable manifolds of saddles are interchanged [6].

The simplest limit set of a DS is an equilibrium. There are equilibria which can be attractors (stable focus and stable node), repellers (unstable focus and unstable node), and saddles (a simple saddle or a saddle-focus, realized in phase space of dimension N ≥ 3). A center-type point is neither an attractor nor a repeller nor a saddle, because there is no set of points tending to the center forward or backward in time. This is a particular case of the limit set for which V = L.

A limit set in the form of a closed curve, a limit cycle, can also be an attractor, a repeller or a saddle. The toroidal limit sets, corresponding to quasiperiodic oscillations, and the chaotic limit sets are subclassified similarly.

Linear Analysis of Stability – Basic Concepts. Stability according to Lyapunov and asymptotic stability are determined by the time evolution of a small perturbation of a trajectory, namely, whether the perturbation decreases, grows or remains bounded with time. The introduction of a perturbation allows one to linearize the evolution operator in the vicinity of the trajectory being studied and to perform a linear analysis of its stability.

Let us have an autonomous DS in the form

ẋ = F (x, α) , (1.6)

where x ∈ RN, and α ∈ RM is the parameter vector. We are interested in analyzing the stability of a solution x0(t). Introduce a small perturbation y = x(t) − x0(t), for which we may write

ẏ = F (x0 + y) − F (x0) . (1.7)

Expanding F (x0 + y) into a series in the neighborhood of x0 and taking into account the fact that the perturbation is small, we arrive at the following linearized equation with respect to y:

ẏ = A(t)y , (1.8)

where A is a matrix with elements

a_{j,k} = ∂fj/∂xk |_{x=x0} , j, k = 1, 2, . . . , N , (1.9)


called the linearization matrix of the system in the neighborhood of the solution x0(t); here the fj are the components of the function F. The matrix A is characterized by eigenvectors ei and eigenvalues ρi:

A ei = ρi ei , i = 1, 2, . . . , N . (1.10)

The eigenvalues ρi are the roots of the characteristic equation

Det [A − ρE] = 0 , (1.11)

where E is the unit matrix. An initial perturbation specified at the time moment t∗ along the ith eigenvector evolves as follows:

yi(t) = yi(t∗) exp[(t − t∗)ρi] . (1.12)

The growth or decrease of the magnitude of the perturbation ‖yi(t)‖ is determined by the sign of the real part of ρi. In general, A is a matrix with time-varying elements and, consequently, its eigenvalues and eigenvectors also change. Therefore, (1.12) is fulfilled only in the limit t − t∗ → 0, i.e., locally, in the neighborhood of x0(t∗). As t∗ is varied, the exponent ρi takes a different value. Hence, in general, it is plausible that a small perturbation y(t) = ∑_{i=1}^{N} yi(t) grows exponentially at some points of the trajectory under study x0(t), whereas it decreases at others.

Assume that at the time moment t0 we have an infinitesimal initial perturbation yi(t0) of the trajectory being studied along the ith eigenvector of the matrix A, and a perturbation yi(t) at an arbitrary time moment t. The stability of the trajectory along the eigenvector ei(t) is characterized by the Lyapunov characteristic exponent λi,

λi = lim_{t→∞} [1/(t − t0)] ln ‖yi(t)‖ , (1.13)

where the limit is understood as the upper limit (lim sup). If the trajectory x0(t) belongs to an N-dimensional phase space, the linearization matrix has dimension N × N and thus N eigenvectors. In this case the trajectory stability is defined by a set of N Lyapunov exponents. The set of N numbers arranged in decreasing order, λ1 ≥ λ2 ≥ . . . ≥ λN, forms the so-called Lyapunov characteristic exponent spectrum (LCE spectrum) of the phase trajectory x0(t).
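Definition (1.13) can be made concrete with a discrete-time example. The sketch below (our own; the logistic map x → 4x(1 − x) is not treated in the text but its largest Lyapunov exponent is known to be ln 2) estimates the exponent as the long-time average of ln |f′(xn)| along an orbit, the one-dimensional analogue of (1.13).

```python
# Sketch: largest Lyapunov exponent of the logistic map x -> 4x(1-x),
# estimated as the orbit average of ln |f'(x)| with f'(x) = 4 - 8x.
# (Hypothetical example; expected value is ln 2 ≈ 0.6931.)
import math

def lyapunov_logistic(x0=0.2, n=200000, burn=1000):
    x, acc = x0, 0.0
    for _ in range(burn):                        # discard the transient
        x = 4.0 * x * (1.0 - x)
    for _ in range(n):
        # guard against the (measure-zero) point x = 0.5 where f' = 0
        acc += math.log(max(abs(4.0 - 8.0 * x), 1e-300))
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()                        # close to ln 2
```

A positive exponent here quantifies exactly the exponential divergence of nearby perturbations that the LCE spectrum describes for flows.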

Let us elucidate how the Lyapunov exponents are related to the eigenvalues ρi(t) of the linearization matrix. Consider (1.12) at the initial time moment t∗ = t0, assuming the interval ∆t = t − t0 to be small. Now we move to the point x(t0 + ∆t) and take the following as the initial perturbation:

yi(t0 + ∆t) = yi(t0) exp[ρi(t0)∆t] .

We suppose that since ∆t is small, the direction of the eigenvectors ei does not change over this time interval, and the vector yi(t0 + ∆t) may be viewed as directed along the ith eigenvector. The initial perturbation yi(t0) is taken to be so small that it can also be considered small at subsequent time moments. Moving each time along the curve x0(t) with small step ∆t, we can obtain an approximate expression describing the evolution of the small perturbation along the ith eigenvector:

yi(t) ≈ yi(t0) exp[ ∑_k ρi(tk)∆t ] . (1.14)

Passing to the limits ‖yi(t0)‖ → 0 and ∆t → 0, we deduce the rigorous equality

yi(t) = yi(t0) exp[ ∫_{t0}^{t} ρi(t′) dt′ ] . (1.15)

Substituting (1.15) into (1.13) we derive

λi = lim_{t→∞} [1/(t − t0)] ∫_{t0}^{t} Re ρi(t′) dt′ . (1.16)

Thus, the ith Lyapunov exponent λi can be thought of as the real part of the eigenvalue ρi of the linearization matrix A(t), averaged over the trajectory under study. It shows what happens, on average, to the corresponding component of the initial perturbation along the trajectory. The divergence of the flow and, consequently, the phase volume evolution are determined by the sum of the Lyapunov exponents. It can be shown that

∑_{i=1}^{N} λi = lim_{t→∞} [1/(t − t0)] ∫_{t0}^{t} div F (t′) dt′ . (1.17)
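Relation (1.17) is easy to verify for a linear system ẏ = Ay with constant A (an arbitrarily chosen example matrix below, our own): the area spanned by two evolved basis perturbations grows as exp(t · Sp A), so the sum of the Lyapunov exponents equals div F = Sp A.

```python
# Numerical check of (1.17) for y' = A y with a constant matrix
# (example matrix chosen for illustration; Sp A = -0.5, dissipative).
import math

A = ((-0.25, 1.0),
     (-1.0, -0.25))

def rk4(y, dt):
    def f(s):
        return (A[0][0]*s[0] + A[0][1]*s[1], A[1][0]*s[0] + A[1][1]*s[1])
    k1 = f(y); k2 = f((y[0]+dt/2*k1[0], y[1]+dt/2*k1[1]))
    k3 = f((y[0]+dt/2*k2[0], y[1]+dt/2*k2[1]))
    k4 = f((y[0]+dt*k3[0], y[1]+dt*k3[1]))
    return (y[0] + dt/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
            y[1] + dt/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))

u, v, dt, t = (1.0, 0.0), (0.0, 1.0), 0.01, 10.0
for _ in range(int(t / dt)):             # evolve two basis perturbations
    u, v = rk4(u, dt), rk4(v, dt)

area = u[0]*v[1] - u[1]*v[0]             # phase-area element at time t
sum_lambda = math.log(area) / t          # ≈ Sp A = -0.5
```

The contraction rate of the area element recovers the trace of A, illustrating why the sum of exponents of a dissipative system is negative, as stated next in (1.18).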

If the trajectory x0(t) is stable according to Lyapunov, then an arbitrary initial perturbation y(t0) does not grow, on average, along the trajectory. In this case it is necessary and sufficient for the LCE spectrum to contain no positive exponents. If an arbitrary bounded trajectory x0(t) of the autonomous system (1.6) is not an equilibrium or a saddle separatrix, then at least one of the Lyapunov exponents is always equal to zero [5, 7]. This is explained by the fact that the small perturbation remains unchanged along the direction tangent to the trajectory. A phase volume element must be contracted for phase trajectories on an attractor. Consequently, the system has negative divergence and the sum of the Lyapunov exponents satisfies the following inequality:

∑_{i=1}^{N} λi < 0 . (1.18)


Stability of Equilibrium States. If the particular solution x0(t) of system (1.6) is an equilibrium, i.e., F (x0, α) = 0, the linearization matrix A is considered at only one point of phase space and, consequently, is a matrix with constant elements ai,j. The eigenvectors and eigenvalues of the matrix A are then constant in time, and the Lyapunov exponents coincide with the real parts of the eigenvalues, λi = Re ρi. The signature of the LCE spectrum indicates whether the equilibrium is stable or not. To analyze the behavior of phase trajectories in a local neighborhood of the equilibrium one also needs to know the imaginary parts of the linearization matrix eigenvalues. In the phase plane, N = 2, the equilibrium is characterized by the two eigenvalues ρ1 and ρ2 of the matrix A. The following cases can be realized in the phase plane: (i) ρ1 and ρ2 are real negative numbers; the equilibrium is a stable node. (ii) ρ1 and ρ2 are real positive numbers; the equilibrium is an unstable node. (iii) ρ1 and ρ2 are real numbers of different signs; the equilibrium is a saddle. (iv) ρ1 and ρ2 are complex conjugates with Re ρ1,2 < 0; the equilibrium is a stable focus. (v) ρ1 and ρ2 are complex conjugates with Re ρ1,2 > 0; the equilibrium is an unstable focus. (vi) ρ1 and ρ2 are purely imaginary, ρ1,2 = ±iω; the equilibrium is a center. Figure 1.6 shows a diagram of the equilibria realized in the plane depending on the determinant and the trace of the matrix A (respectively, Det A = ρ1ρ2 and Sp A = ρ1 + ρ2).
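The classification (i)–(vi) can be read off directly from Det A, Sp A and the discriminant, as in Fig. 1.6. The sketch below is our own (the function name and return labels are not from the text); degenerate boundary cases (Det A = 0) are not treated, as they lie on the borders of the diagram.

```python
# Sketch of the classification in Fig. 1.6: equilibrium type of a planar
# linear system read off from Det A and Sp A.

def classify(a11, a12, a21, a22):
    det = a11 * a22 - a12 * a21          # Det A = rho1 * rho2
    tr = a11 + a22                       # Sp A  = rho1 + rho2
    if det < 0:
        return "saddle"                  # real eigenvalues of different signs
    if tr == 0 and det > 0:
        return "center"                  # purely imaginary eigenvalues
    disc = tr * tr - 4.0 * det           # sign decides node vs focus
    kind = "node" if disc >= 0 else "focus"
    return ("stable " if tr < 0 else "unstable ") + kind

# Examples: classify(0, 1, -1, 0) gives a center, classify(0, 1, -1, -0.2)
# a stable focus, classify(2, 0, 0, -1) a saddle.
```

The parabola (Sp A)²/4 = Det A in Fig. 1.6 corresponds to `disc == 0`, the boundary between nodes and foci.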

Besides the aforementioned states, in phase space of dimension N ≥ 3 other kinds of equilibria are possible, e.g., an equilibrium state that is unstable according to Lyapunov, called a saddle-focus. Figure 1.7 shows

Fig. 1.6. Diagram of equilibria in the plane (phase portraits are shown in transformed coordinates [1])


Fig. 1.7. Saddle-foci in three-dimensional phase space: (a) ρ1 is real and negative, ρ2,3 are complex conjugate with Re ρ2,3 > 0; (b) ρ1 is real and positive, ρ2,3 are complex conjugate with Re ρ2,3 < 0

two possible types of saddle-focus in three-dimensional phase space, which are distinguished by the dimensions of their stable and unstable manifolds.

To identify which type of limit set the equilibrium is, it is enough to know the Lyapunov exponents. The equilibrium is an attractor if it is asymptotically stable in all directions, i.e., its LCE spectrum consists of negative exponents only (stable node and focus). If the equilibrium is unstable in all directions, it is a repeller (unstable node and focus). If the LCE spectrum includes both positive and negative exponents, the equilibrium is of saddle type (a simple saddle or a saddle-focus). In addition, the number of exponents λi ≥ 0 (λi ≤ 0) determines the dimension of the unstable (stable) manifold.

Stability of Periodic Solutions. Any periodic solution x0(t) of system (1.6) satisfies the condition

x0(t) = x0(t + T ) , (1.19)

where T is the period of the solution. The linearization matrix A(t) calculated at points of the trajectory corresponding to the periodic solution x0(t) is also periodic:

A(t) = A(t + T ) . (1.20)

In this case the equation for perturbations (1.8) is a linear equation with periodic coefficients. The stability of a periodic solution can be estimated if it is known how a small perturbation y(t0) evolves over the period T. Its evolution can be represented as follows [8]:

y(t0 + T ) = MT y(t0) , (1.21)

where MT is the monodromy matrix; MT is independent of time. The eigenvalues of the monodromy matrix, i.e., the roots of the characteristic equation

Det[MT − µE] = 0, (1.22)

Page 27: Springer Complexity · 2013-07-19 · instabilities, nonlinearity, stochastic processes, chaos, neural networks, cellular automata, adaptive systems, and genetic algorithms. The topics

14 1. Tutorial

are called the multipliers of the periodic solution x0(t) and define its stability. Indeed, the action of the monodromy operator (1.21) is as follows: the initial perturbation of the periodic solution, considered in projections onto the eigenvectors of the matrix A(t0), is multiplied by the corresponding multiplier µi over the period T. Thus, a necessary and sufficient condition for the periodic solution x0(t) to be stable according to Lyapunov is that its multipliers satisfy |µi| ≤ 1, i = 1, 2, . . . , N. At least one of the multipliers is always equal to +1. The multipliers, being the eigenvalues of the monodromy matrix, obey the following relations:

∑_{i=1}^{N} µi = Sp MT , ∏_{i=1}^{N} µi = Det MT . (1.23)

They are related to the Lyapunov exponents of the periodic solution as follows:

λi = (1/T ) ln |µi| . (1.24)

One of the exponents of the LCE spectrum of a limit cycle is always zero and corresponds to the unit multiplier. The limit cycle is an attractor if all the other exponents are negative. If the LCE spectrum includes exponents of different signs, the limit cycle is a saddle. The dimension of its unstable manifold is equal to the number of non-negative exponents in the LCE spectrum, and the dimension of its stable manifold is equal to the number of exponents for which λi ≤ 0. If all the other exponents are positive, the limit cycle is a repeller.
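Relation (1.24) can be illustrated on a hypothetical self-sustained system written in polar coordinates as ṙ = r(1 − r²), θ̇ = 1 (our own example, not from the text). Its limit cycle r = 1 has period T = 2π; linearizing about r = 1 gives δṙ = −2δr, so the nontrivial multiplier is µ = exp(−2T) and λ = (1/T) ln |µ| = −2, while the exponent along the cycle is zero. The sketch checks the nontrivial exponent numerically.

```python
# Sketch: nontrivial multiplier and Lyapunov exponent of the limit cycle
# r = 1 of the hypothetical system r' = r(1 - r^2), theta' = 1.
import math

def rk4_r(r, dt):
    f = lambda s: s * (1.0 - s * s)          # radial equation r' = r(1 - r^2)
    k1 = f(r); k2 = f(r + dt/2*k1); k3 = f(r + dt/2*k2); k4 = f(r + dt*k3)
    return r + dt/6*(k1 + 2*k2 + 2*k3 + k4)

T = 2.0 * math.pi                            # period of the cycle
r, dt = 1.001, 1e-4                          # small radial perturbation
for _ in range(int(T / dt)):                 # integrate over one period
    r = rk4_r(r, dt)

mu = (r - 1.0) / 0.001                       # perturbation ratio: multiplier
lam = math.log(abs(mu)) / T                  # (1.24): should be close to -2
```

Since |µ| < 1 the radial perturbation contracts every period and the cycle is an attractor, in agreement with the criterion stated above.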

Stability of Quasiperiodic Solutions. Let a particular solution x0(t) of system (1.6) correspond to quasiperiodic oscillations with k independent frequencies ωj, j = 1, 2, . . . , k. Then the following is valid:

x0(t) = x0(ϕ1(t), ϕ2(t), . . . , ϕk(t)) = x0(ϕ1(t) + 2πm, ϕ2(t) + 2πm, . . . , ϕk(t) + 2πm) , (1.25)

where m is an arbitrary integer and ϕj(t) = ωjt, j = 1, 2, . . . , k. The stability of the quasiperiodic solution is characterized by the LCE spectrum. The linearization matrix A(t) is quasiperiodic, and the Lyapunov exponents are strictly defined only in the limit t → ∞. The periodicity of the solution with respect to each of the arguments ϕj in the case of ergodic quasiperiodic oscillations results in the LCE spectrum containing k zero exponents. If all the other exponents are negative, the toroidal k-dimensional hypersurface (we shall use the term "k-dimensional torus" for simplicity) on which the quasiperiodic trajectory under study lies is an attractor. When all the other exponents are positive, the k-dimensional torus is a repeller. The torus is said to be a saddle¹ if the LCE spectrum of quasiperiodic trajectories on the torus contains, besides the zero exponents, both positive and negative ones.

¹ This situation should be distinguished from the case of chaos on a k-dimensional torus, which is observed for k ≥ 3.
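Property (1.25) is easy to illustrate numerically. In the minimal sketch below, the observable and the frequency pair are my own illustrative choices: with incommensurate frequencies ω1 = 1 and ω2 = √2, any function that is 2π-periodic in each phase takes the same value when both phases are shifted by 2πm:

```python
import math

w1, w2 = 1.0, math.sqrt(2.0)   # an incommensurate frequency pair (assumption)

def x0(phi1, phi2):
    # sample observable, 2*pi-periodic in each phase, as required by (1.25)
    return math.cos(phi1) + 0.5 * math.sin(phi2)

t, m = 3.7, 5
a = x0(w1 * t, w2 * t)
b = x0(w1 * t + 2 * math.pi * m, w2 * t + 2 * math.pi * m)
# a and b agree up to floating-point rounding
```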

Stability of Chaotic Solutions. A chaotic trajectory, whether it belongs to a chaotic attractor, a repeller or a saddle, is always unstable in at least one direction. The LCE spectrum of a chaotic solution always has at least one positive Lyapunov exponent. The instability of phase trajectories and the attracting character of the limit set to which they belong do not contradict one another. Phase trajectories starting from close initial points in the basin of attraction tend to the attractor but are separated on it. Hence, chaotic trajectories are unstable according to Lyapunov but stable according to Poisson.

Such behavior of trajectories is typical for attractors with a complicated geometrical structure, which are therefore called strange attractors. A strange attractor can be modeled by the limit set arising in the so-called horseshoe map (Smale's map) [9]: a unit square is contracted in one direction and stretched in another, so that its area decreases. The strip obtained in this way is bent in the form of a horseshoe and folded again into the initial square, as shown in Fig. 1.8. This procedure is repeated many times. In the limit, a set of zero area is formed which is not a countable collection of points or lines and has a Cantor structure in its cross-section.

Fig. 1.8. Appearance of a strange attractor in Smale’s horseshoe map

Stability of Phase Trajectories in Discrete Time Systems. Let a discrete time system be described by the return map

x(n + 1) = P (x(n),α) , (1.26)

where x ∈ R^N is the state vector, n is the discrete time variable, P(x) is a vector function with components Pj, j = 1, 2, . . . , N, and α ∈ R^M is the vector of system parameters. Let us analyze the stability of an arbitrary solution x0(n). Introducing a small perturbation y(n) = x(n) − x0(n) and linearizing the map in the vicinity of the solution x0(n), we obtain the linear equation for the perturbation:

y(n + 1) = M(n)y(n) , (1.27)

where M(n) is the linearization matrix with elements

\[ m_{j,k} = \left. \frac{\partial P_j(x, \alpha)}{\partial x_k} \right|_{x = x_0(n)} . \tag{1.28} \]

It follows from (1.27) that the initial perturbation evolves according to the law

y(n + 1) = M(n)M(n − 1) . . . M(1)y(1) . (1.29)
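As an illustration of (1.27)–(1.29), consider a sketch with my own choice of map and parameters: for the Hénon map x(n+1) = 1 − a x(n)² + y(n), y(n+1) = b x(n), the linearization matrix at each point of the orbit is known explicitly, and the product M(n) . . . M(1) applied to a tiny initial perturbation should agree, to first order, with the actual separation of two nearby orbits of the full nonlinear map:

```python
import math

a, b = 1.4, 0.3            # canonical Henon parameters

def P(x, y):               # the return map itself, as in (1.26)
    return 1 - a*x*x + y, b*x

def M(x, y):               # linearization matrix (1.28) at a point of the orbit
    return [[-2*a*x, 1.0],
            [b,      0.0]]

x, y = 0.1, 0.1            # reference orbit x0(n)
dx, dy = 1e-8, 0.0         # initial perturbation y(1), evolved by (1.29)
xp, yp = x + dx, y + dy    # nearby orbit of the full nonlinear map

for _ in range(5):
    m = M(x, y)
    dx, dy = m[0][0]*dx + m[0][1]*dy, m[1][0]*dx + m[1][1]*dy
    x, y = P(x, y)
    xp, yp = P(xp, yp)

lin = math.hypot(dx, dy)             # |y(n+1)| from the linearized law (1.29)
full = math.hypot(xp - x, yp - y)    # actual separation of the two orbits
```

For a perturbation of size 10⁻⁸ the two separations agree to many digits over a few steps; over long times the agreement degrades, since linearization holds only while the perturbation stays small.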

By analogy with differential systems, we introduce the Lyapunov exponents of the solution x0(n):

\[ \lambda_i = \lim_{n \to \infty} \frac{1}{n} \ln \| y_i(n) \| . \tag{1.30} \]

Taking into account the fact that

M(n)yi(n) = µi(n)yi(n) , (1.31)

where µi(n) is the eigenvalue of the matrix M(n) corresponding to the ith eigenvector, from (1.29) and (1.30) we derive

\[ \lambda_i = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} \ln |\mu_i(k)| . \tag{1.32} \]
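For a one-dimensional map, formula (1.32) reduces to the time average of ln|dP/dx| along the orbit. A minimal sketch for the logistic map x(n+1) = r x(n)(1 − x(n)) at r = 4 (the example is my own choice; the exact value λ = ln 2 is a known result for this case):

```python
import math

r = 4.0
f = lambda x: r * x * (1 - x)        # logistic map
df = lambda x: r * (1 - 2 * x)       # its derivative, the 1x1 matrix M(n)

x = 0.3
for _ in range(1000):                # discard the transient
    x = f(x)

n, s = 500_000, 0.0
for _ in range(n):
    # max(..., 1e-300) guards log(0) on the off chance the orbit hits x = 0.5
    s += math.log(max(abs(df(x)), 1e-300))
    x = f(x)
lam = s / n                          # converges to ln 2 ~ 0.693 at r = 4
```

The positive exponent is precisely the signature of chaotic instability discussed above.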

The stability of fixed points and limit cycles of the map is characterized by multipliers. The sequence of states x^0_1, x^0_2, . . . , x^0_l is called a period-l cycle of the map, or simply an l-cycle, if the following condition is fulfilled:

\[ x_1^0 = P(x_l^0) . \tag{1.33} \]

If l = 1, i.e.,

\[ x^0 = P(x^0) , \tag{1.34} \]

the state x^0 is a fixed point, or a period-1 cycle. The linearization matrix M along the periodic solution x0(n) is periodic, i.e., M(n + l) = M(n). The perturbation component yi(1) is transformed over the period l as follows:

yi(l + 1) = M(l)M(l − 1) . . . M(1)yi(1) = Mlyi(1) . (1.35)

The matrix M_l does not depend on the initial point and is an analogue of the monodromy matrix in a differential system. The eigenvalues µ^l_i of the matrix M_l are called the multipliers of the l-cycle of the map. They characterize how the projections of the perturbation vector onto the eigenvectors of the linearization matrix M change over the period l. The multipliers µ^l_i are connected with the Lyapunov exponents by the relation

\[ \lambda_i = \frac{1}{l} \ln |\mu_i^l| . \tag{1.36} \]

The l-cycle of the map is asymptotically stable if its multipliers satisfy |µ^l_i| < 1, i = 1, 2, . . . , N. Thus, the LCE spectrum involves negative numbers only.

If the map under study, with phase space dimension (N − 1), is the Poincaré map in a section of some N-dimensional continuous-time system, then it possesses the following property: the eigenvalues µ^l_i, i = 1, 2, . . . , (N − 1), of the matrix M_l of the l-cycle, complemented by the unit multiplier µ^l_N = 1, are strictly equal to the eigenvalues of the monodromy matrix of the corresponding limit cycle in the continuous-time system. On this basis, the stability of periodic oscillations in differential systems can be quantitatively described by the multipliers of the relevant cycle in the Poincaré map.
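This correspondence can be illustrated on a system simple enough to solve exactly (my own example, not from the text): in polar coordinates, ṙ = r(1 − r²), θ̇ = 1 has the limit cycle r = 1 with period T = 2π, and the eigenvalues of its monodromy matrix are {1, e^(−4π)}. On the section {θ = 0} the Poincaré map P(r) can be written in closed form, and the multiplier of its fixed point r = 1, estimated by a central difference, reproduces the nontrivial eigenvalue e^(−4π):

```python
import math

def P(r0):
    # exact return map on the section {theta = 0}: dr/dt = r(1 - r^2) integrates
    # to r(t)^2 = r0^2 e^{2t} / (1 + r0^2 (e^{2t} - 1)); the return time is 2*pi
    e = math.exp(4 * math.pi)
    return r0 * math.exp(2 * math.pi) / math.sqrt(1 + r0 * r0 * (e - 1))

h = 1e-4
mu = (P(1 + h) - P(1 - h)) / (2 * h)   # multiplier of the fixed point r = 1
```

The fixed point of P is the intersection of the limit cycle with the section, and its single multiplier, together with the unit multiplier along the cycle, recovers the full monodromy spectrum.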

1.1.4 Bifurcations of Dynamical Systems, Catastrophes

Mathematical modeling of practical problems in nonlinear dynamics is most often accomplished by using differential equations which depend on a number of parameters. Variation of a system parameter may cause a qualitative change of the system phase portrait, called a bifurcation [2, 6, 7, 10]. By a qualitative change of the phase portrait we mean its structural rebuilding, which breaks the topological equivalence. The parameter value at which the bifurcation takes place is called the bifurcation value, or the bifurcation point. Besides the phase space, a DS is also characterized by its parameter space. A particular set of parameter values α1, α2, . . . , αM is associated with a radius-vector α in this space. In the multi-dimensional parameter space of the system, bifurcation values may correspond to certain sets representing points, lines, surfaces, etc. A bifurcation is characterized by conditions which impose certain requirements on the system parameters. The number of such conditions is called the codimension of the bifurcation. For example, codimension 1 means that there is only one bifurcation condition.

Local and nonlocal bifurcations of DS can be distinguished. Local bifurca-tions are associated with the local neighborhood of a trajectory on a limit set.They reflect the change of stability of individual trajectories as well as of theentire limit set or the disappearance of the limit set through merging withanother one. All the above-listed phenomena can be found in the frameworkof linear analysis of stability. For example, when one of the Lyapunov expo-nents of a trajectory on a limit set changes its sign, this testifies to a localbifurcation of the limit set. Nonlocal bifurcations are related to the behaviorof manifolds of saddle limit sets, particularly, formation of separatrix loops,homoclinic and heteroclinic curves, and tangency between an attractor andseparatrix curves or surfaces. Such effects cannot be detected by using thelinear approach only. In this situation one must also take into considerationthe nonlinear properties of the system under study.

Bifurcations can happen to any limit sets, but of particular interest are bifurcations of attractors, because they cause experimentally observed regimes to change. Attractor bifurcations are usually classified into internal (soft) bifurcations and crises (hard bifurcations) [6, 11]. Internal bifurcations result in topological changes of attracting limit sets but do not affect their