


Control: A Perspective ⋆

K. J. Åström a and P. R. Kumar b

a Department of Automatic Control, Lund University, Lund, Sweden

b Department of Electrical & Computer Engineering, Texas A&M University, College Station, USA

Abstract

Feedback is an ancient idea, but feedback control is a young field. Nature long ago discovered feedback since it is essential for homeostasis and life. It was the key for harnessing power in the industrial revolution and is today found everywhere around us. Its development as a field involved contributions from engineers, mathematicians, economists and physicists. It is the first systems discipline; it represented a paradigm shift because it cut across the traditional engineering disciplines of aeronautical, chemical, civil, electrical and mechanical engineering, as well as economics and operations research. The scope of control makes it the quintessential multidisciplinary field. Its complex story of evolution is fascinating, and a perspective on its growth is presented in this paper. The interplay of industry, applications, technology, theory and research is discussed.

Key words: Feedback, control, computing, communication, theory, applications.

1 Introduction

Nature discovered feedback long ago. It created mechanisms for and exploits feedback at all levels, which is central to homeostasis and life. As a technology, control dates back at least two millennia. There are many examples of control from ancient times [460]. Ktesibios (285–222 BC) developed a feedback mechanism to regulate flow to improve the accuracy of water clocks. In the modern era, James Watt's use of the centrifugal governor for the steam engine was fundamental to the industrial revolution. Since then, automatic control has emerged as a key enabler for the engineered systems of the 19th and 20th centuries: generation and distribution of electricity, telecommunication, process control, steering of ships, control of vehicles and airplanes, operation of production and inventory systems, and regulation of packet flows in the Internet. It is routinely employed with individual components like sensors and actuators in large systems. Today control is ubiquitous in our homes, cars and infrastructure. In the modern post-genomic era, a key goal of researchers in systems biology is to understand how to disrupt the feedback of harmful biological pathways that cause disease. Theory and applications of control are growing rapidly in all areas.

⋆ This paper was not presented at any IFAC meeting. Corresponding author K. J. Åström. Tel. +46 46 222 8781. Fax +46 46 138118. This paper is partially based on work supported by the Swedish Research Foundation LCCC Linnaeus Center, the ELLIIT Excellence Center, the NSF under the Science and Technology Center Grant CCF-0939370 and Contract Nos. CNS-1232602, CNS-1302182, and CPS-1239116, and the AFOSR under contract No. FA 9550-13-1-0008.

Email addresses: [email protected] (K. J. Åström), [email protected] (P. R. Kumar).

The evolution of control from an ancient technology to a modern field is a fascinating microcosm of the growth of the modern technological society. In addition to being of intrinsic interest, its study also provides insight into the nuances of how theories, technologies and applications can interact in the development of a discipline. This paper provides a perspective on the development of control, how it emerged and developed. It is by no means encyclopedic. To describe the field, we have, somewhat arbitrarily, chosen the years 1940, 1960 and 2000 as separators of four periods, which are covered in sections with the titles: Tasting the Power of Feedback Control: before 1940, The Field Emerges: 1940–1960, The Golden Age: 1960–2000, and Systems of the Future: after 2000. We provide a reflection on the complexity of the interplay of theory and applications in a subsequent section.

It was only in the mid 20th century that automatic control emerged as a separate, though multidisciplinary, discipline. The International Federation of Automatic Control (IFAC) was formed in 1956 [513,443,354], the first IFAC World Congress was held in Moscow in 1960, and the journal Automatica appeared in 1962 [42,149]. By 2000 IFAC had grown to 66 Technical Committees. As a key enabler of several technological fields, control is quintessentially multidisciplinary. This is clearly reflected in the diverse organizations, AIAA, AIChE, ASCE, ASME, IEEE, ISA, SCS and SIAM, that are included in the American Automatic Control Council (AACC) and IFAC.

Preprint submitted to Automatica 17 October 2013

There is yet another sense in which control has been multidisciplinary – in its search for theories and principles physicists, engineers, mathematicians, economists, and others have all contributed to its development. The physicist Maxwell laid the theoretical foundation for governors [456]. Later, one of the first books [343] was written by a physicist, a mathematician and an engineer. The mathematicians Richard Bellman [59], Solomon Lefschetz [269], and L. S. Pontryagin [534] contributed to the early development of modern control theory. Indeed, respect for mathematical rigor has been a hallmark of control systems research, perhaps an inheritance from circuit theory [272,95].

Control theory, like many other branches of engineering science, has developed in the same pattern as the natural sciences. Although there are strong similarities between natural and engineering science, there are also some fundamental differences. The goal of natural science is to understand phenomena in nature. A central goal has been to find natural laws, success being rewarded by fame and Nobel prizes. There has been a strong emphasis on reductionism, requiring isolation of specific phenomena, an extreme case being particle physics. The goal of engineering science, on the other hand, is to understand, invent, design and maintain man-made engineered systems. A primary challenge is to find system principles that make it possible to effectively understand and design complex physical systems. Feedback, which is at the core of control, is such a principle. While pure reductionism has been tremendously successful in natural science, control is more complex since interaction is a key element of engineered systems.

Many overviews of control have been presented in connection with various anniversaries. IFAC held a workshop in Heidelberg in September 2006 to celebrate its 50th anniversary [324]. Automatica celebrates its 50th anniversary in 2014 [149]. A comprehensive overview of sensors and industrial controllers was published on the 50th anniversary of the International Society of Automation (ISA) [632]. The American Society of Mechanical Engineers (ASME) published a series of papers on the history of control in connection with the 50th anniversary of the Journal of Dynamic Systems, Measurement, and Control in 1993 [552]. The IEEE Control Systems Society sponsored the reprint of 25 seminal papers on control theory, selected by an editorial board [50]. The European Journal of Control published a special issue, On the Dawn and Development of Control Science in the XX-th Century, in January 2007, in which researchers reflected on their view of its development [82]. A special issue on the history of control systems engineering [43] was published in 1984 at the centennial of the IEEE. The IEEE Control Systems Society organized a workshop on the Impact of Control: Past, Present and Future in Berchtesgaden, Germany, in 2009. Material from the workshop was combined with an extensive collection of success stories and grand challenges in a comprehensive report [576]. The National Academy of Engineering published two studies about the future of engineering at the turn of the century [488,489]. They point out the growing importance of systems and the role of modeling and simulation for computer-based design and engineering. The US Air Force Office of Scientific Research (AFOSR) sponsored a panel to study future directions in control, dynamics and systems, which resulted in a comprehensive report [485], summarized in [486].

The field of control is even attracting the attention of historians, perhaps an indication that it has had a complex development process that needs to be brought to light. There are books on the history of control [65,66,80], on individual researchers [318], and on organizations and projects [448,472,471]. There are sessions on the history of the field at many control conferences.

Paradoxically, in spite of its widespread use, control is not very much talked about outside a group of specialists; in fact it is sometimes called the “hidden technology” [30]. One reason could be its very success, which makes it invisible so that all the attention is riveted to the end-product device. It is more difficult to talk about ideas like feedback than to talk about devices. Another reason is that control scientists have not paid enough attention to popular writing; a notable exception is the 1952 issue of Scientific American which was devoted to Automatic Control [657,122].

By 1940 control was used extensively for electrical systems, process control, telecommunication and ship steering. Thousands of governors, controllers for process control, gyro-compasses and gyro-pilots were manufactured. Controllers were implemented as special-purpose analog devices based on mechanical, hydraulic, pneumatic and electric technology. Feedback was used extensively to obtain robust linear behavior from nonlinear components. Electronic analog computing was emerging; it had originally been invented to simulate control systems [307]. Communication was driven by the need for centralized control rooms in process control and fire control systems. The benefits derived from the power of control were the driving force.

Although the principles were very similar in the diverse industries, the commonality of the systems was not widely understood. A striking illustration is that features like integral and derivative action were reinvented and patented many times in different application fields.


The theoretical bases were linearized models and the Routh-Hurwitz stability criterion. A few textbooks were available [644,351]. Research and development were primarily conducted in industry.

Control was an established field by 1960 because of its development during the Second World War. Servomechanism theory was the theoretical foundation. Tools for modeling from data, using frequency response, together with methods for analysis and synthesis, were available. Analog computing was used both as a technology for implementation of controllers and as a tool for simulation. Much of the development had been driven by requirements from applications and practical needs. After a long and complex evolution there finally had emerged a holistic view of theory and applications, along with many applications in diverse fields. Control systems were mass produced, large companies had control departments, and there were companies which specialized in control. An international organization, IFAC, had been created, and its first World Congress was held in Moscow in 1960. Most of the research and development had been done in research institutes and industries, with collaborations with a few universities. By 1960 more than 60 books on control had been published.

However, many changes began occurring around 1960; the digital computer, dynamic programming [59], the state space approach to control [360], and the linear quadratic regulator [358] had appeared, with the Kalman filter just around the corner [359]. There commenced a very dynamic development of control, which we have dubbed the Golden Age. There were challenges from the space race and from introduction of computer control in the process industry as well as in many other applications such as automobiles and cellular telephones. There was a rapid growth of applications and a very dynamic development of theory, and many subspecialties were developed. University education expanded rapidly both at the undergraduate and the graduate levels. One consequence was that the parity that had been achieved between theory and practice after many decades was once again breached, this time in the reverse direction. Pure theory seized attention to a significant extent and there emerged a perception among some that there was a “gap” [41], and that the holistic view had been lost [73].

It is of course difficult to have a good perspective on recent events, but our opinion is that there are indications that yet another major development and spurt is now in progress. By around 2000, there had occurred a phase transition in technology, due to the emergence and proliferation of wireline and wireless networking, and the development of sensors, powerful computers, and complex software. At the turn of the century there were therefore new challenges: control of networks and control over networks, design of provably safe embedded systems, and autonomy and model-based design of complex systems.

The dramatic growth in technological capabilities thus provided many opportunities but also presented many challenges that require a tight integration of control with computer science and communication. This recognition led to the creation of many major research programs, such as ARTIST2 [20] and ArtistDesign [21] focused on embedded systems in the EU, and Cyber-Physical Systems in the US [44].

Closer interaction with physics, biology and medicine is also occurring. Control is a key ingredient in devices such as adaptive optics and atomic force microscopes. Control of quantum and molecular systems is being explored. The need for, and interest in, using ideas from systems and control to obtain deeper insight into biological systems has increased. The field of systems biology has emerged and groups with control scientists and biologists have been created; noteworthy are the departments of bioengineering in engineering schools.

2 Tasting the Power of Feedback Control

The industrial revolution required power, and control was essential to harness steam power. A major development of control therefore coincided with the industrial revolution. Feedback control was a powerful tool. It made it possible to reduce the effect of disturbances and process variations, to make good systems from bad components, and to stabilize unstable systems. The major drawback was that feedback could cause instabilities. Recognition and solution of these problems led to major advances in control. As the industrial revolution progressed, the emergence of new technical innovations and industries made control an essential and central part of the electrical, chemical, telephone and other industries. The evolution of control and industry have been strongly connected ever since.

2.1 The Centrifugal Governor

The need for control devices appeared already in the operation of windmills. The centrifugal governor, which dates back to 1745, was invented to keep windmills running at constant speed [460]. Similar requirements appeared when steam power was used in the textile industry to keep looms and other machines running at constant speed. James Watt successfully adapted the centrifugal governor to fit the steam engine and patented it in 1788. The centrifugal governor combines sensing, actuation and control.

Designing a governor was a compromise; heavy balls are needed to create strong actuation forces but they also result in sluggish response. Other practical difficulties were created by friction and backlash in the mechanical devices. The basic governor yields proportional action because the change in the angle is proportional to the change in velocity. Such a governor gives a steady-state error. A controller with integral action has the remarkable property that it always approaches the correct steady state if the closed loop system is stable. Integral action was introduced around 1790 in a governor designed by the Perrier brothers. They used a hydraulic device where the inflow to a vessel was proportional to the velocity and the steam valve was driven by the level [460, p 110–113]. In 1845 Werner and William Siemens introduced integral action by using differential gears [65, p 21–22]. The Siemens brothers also introduced derivative action based on an inertia wheel. The governor became an integral part of all steam engines. The governor was further developed over a 200-year period stretching from the late 1700s, as is well described in [65].
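The contrast between proportional and integral action described above can be illustrated with a small simulation. The sketch below is not from the paper; the first-order "engine speed" model and all numerical values are illustrative assumptions. Under a constant load disturbance, proportional control settles with a steady-state error, while adding integral action drives the speed to the setpoint.

```python
# Sketch (illustrative assumptions, not from the paper): a first-order speed
# model dv/dt = -a*v + b*u + d under a constant load disturbance d.
# Proportional control alone leaves a steady-state error; integral action
# removes it, provided the closed loop is stable.

def simulate(kp, ki, a=1.0, b=1.0, d=0.5, setpoint=1.0, dt=0.001, t_end=20.0):
    v, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - v
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        v += (-a * v + b * u + d) * dt      # Euler step of the plant
    return v                                # speed after transients settle

v_p = simulate(kp=5.0, ki=0.0)    # proportional only: settles below the setpoint
v_pi = simulate(kp=5.0, ki=2.0)   # with integral action: settles at the setpoint
print(round(v_p, 3), round(v_pi, 3))
```

With these numbers the proportional-only loop settles near 0.917 rather than 1, exactly the steady-state error the governor paragraph describes, while the PI loop reaches the setpoint.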

Theoretical investigation of governors started with the paper by Maxwell [456]. He analyzed linearized models and demonstrated the benefits of integral action. He also found that the stability of the closed loop system could be determined by analyzing the roots of an algebraic equation. Maxwell derived a stability criterion for third order systems and turned to his colleague Routh, who solved the general problem [569]. Vyshnegradskii analyzed a steam engine with a governor independently of Maxwell [670], and also developed a stability criterion for third order systems. Vyshnegradskii's results were engineering oriented and strongly coupled to the design of governors. He had been trained as a mathematician, and was director of St. Petersburg's Technological Institute, where he pioneered courses in machine-building with a strong science base. He ended his career as Minister of Finance of the Russian empire [14].

Vyshnegradskii's results were used by Stodola [626] to design water turbine governors. He used more complicated models and turned to his colleague Hurwitz at Eidgenössische Technische Hochschule (ETH), Zürich, for help with stability analysis. Hurwitz developed a general stability criterion using techniques other than those used by Routh [320]. Today we know the result as the Routh-Hurwitz criterion. Stodola also introduced dimension-free variables and time constants. Interesting perspectives on the work of Maxwell and Vyshnegradskii are given in [548,14,65,77]. There was little interaction between the scientists [239, p 172–173]. Routh and Hurwitz were not aware of each other's contributions and they used different mathematical techniques [445]. Stodola only mentioned Routh in his later papers [14].
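For the third-order case treated by Maxwell and Vyshnegradskii, the Routh-Hurwitz conditions reduce to a few inequalities on the coefficients. The short sketch below checks them; the example polynomials are our own illustrations, not from the paper.

```python
# Sketch: Routh-Hurwitz test for a third-order characteristic polynomial
#   s^3 + a2*s^2 + a1*s + a0,
# the case Maxwell and Vyshnegradskii studied. All roots have negative real
# parts iff all coefficients are positive and a2*a1 > a0.

def stable_third_order(a2, a1, a0):
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0

print(stable_third_order(3.0, 3.0, 1.0))   # (s+1)^3, all roots at -1: True
print(stable_third_order(1.0, 1.0, 5.0))   # a2*a1 < a0, unstable: False
```

The inequality a2·a1 > a0 is the second Hurwitz determinant; for third-order systems it is the whole story once the coefficients are positive.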

At the beginning of the 20th century there was a firmly established engineering base for controlling machines with governors. Many companies invented and manufactured governors. According to Bennett [65, p 74], there were more than 75000 governors installed in England in 1868. Proportional, integral and derivative actions were understood and implemented by mechanical or hydraulic devices. The theoretical foundation was based on work by Maxwell and Vyshnegradskii and the Routh-Hurwitz criterion. Education in control started at a few universities. Tolle compiled the results in a textbook [644], "Die Regelung der Kraftmaschinen (Control of Power Machines)," in 1905. Analysis and design were based on linearization and examination of the roots of the characteristic equation. The aerodynamicist Joukowski at Moscow University published the first Russian book [351] on control, "The Theory of Regulating the Motion of Machines," in 1909.

2.2 Generation and Transmission of Electricity

The electric power industry emerged in the late 19th century and grew rapidly at the beginning of the 20th century. Electricity was generated by water turbines or by boiler-turbine units, and was originally distributed locally with DC networks. Control of turbines and boilers was a major application area, as discussed in Section 2.1. While the early development of governors was largely empirical, the demands from the electric industry required a more systematic approach, and theory started to be developed and applied. Vyshnegradskii's paper [670] had a strong influence on engineering practice and was widely used in control systems for turbine control [14]. Tolle's book [644] was reprinted in 1909 and 1929, and remained a standard work on control of electrical machines for a long time.

New control problems emerged when the distance between generation and consumption increased. Many generators were connected in large networks to supply sufficient power and to increase reliability. Challenges arose when the electrical networks expanded. The generators all had to be synchronized when the transmission switched from DC to AC. Stability problems were encountered in the control of frequency and voltage. For safe operation it was necessary to understand the response of generators to disturbances such as load changes, faults, lightning strikes, etc. Charles Steinmetz had developed the foundations of alternating current analysis, with his introduction of "complex imaginary quantities" and phasors in the late 1800s [624]. This work addressed only steady-state behavior and could not deal with dynamics. Motivated by this, Harold Hazen, working under Vannevar Bush at the Massachusetts Institute of Technology (MIT), built a "network analyzer" in the late 1920s. The analyzer was a laboratory model of a power system, built using phase-shifting transformers and transmission lines, and was reconfigurable using a plug board from a telephone exchange. The system was replicated at General Electric and other power companies.

The emergence of the electrical industry was a game changer because it was developed by large industries in collaboration with public and state utilities which were large actors. Due to the requirement of operating large networks safely, utilities and electric companies built groups for research and development to understand, design and operate them. Research and development teams were created in companies like General Electric, Westinghouse, ASEA, BBC, Alstom, and in many public and state utilities like ENEL, EDF and the Swedish State Power Board in Europe. One example is the General Electric Research Laboratory that was created by Thomas Edison, Willis R. Whitney, and Charles Steinmetz in 1900. It was the first industrial research laboratory in the US.

2.3 Industrial Process Control

Automation of process and manufacturing industries evolved from the late 19th century and accelerated at the beginning of the 20th century. The production processes in the chemical, petroleum, pulp and paper, and pharmaceutical industries required accurate control of pressure, temperature and flow to achieve good product quality. Boilers, reactors, distillation columns, mixers and blenders were the typical processes. A wide variety of sensors for different physical variables was developed. The actuators were typically valves and pumps. Pneumatics became the common technology to implement the sensing, actuation and control functions. Sensors and actuators had to be located at the process. Originally the controllers were attached to the process equipment; they communicated with sensors and actuators via pressure signals. The connectors, the pressure tubes and the signal levels (3 to 15 psi) were standardized, permitting equipment from different vendors to be combined. The controllers were later moved to a central control room where recorders for signals were also provided, greatly simplifying the work and the working environment of the operator.

Development of the controllers was driven by engineering insight rather than theory. The effects of integral and derivative action were rediscovered by tinkering. An interview [91] with John Ziegler, from the Taylor Instrument Company, provides a perspective:

Someone in the research department was tinkering with the Fulscope (a pneumatic PI controller) and somehow had got a restriction in the feedback line to the capsule that made the follow-up in the bellows. He noted that this gave a strange kicking action to the output. They tried it on the rayon shredders and it gave perfect control of the temperature.

The controller components were also used as pneumatic analog controllers to simulate processes. Since the simulator used pneumatic signals it could easily be connected to a pneumatic controller. Feedback was used extensively in the sensors and actuators, and the controllers themselves. The key idea was to create good linear behavior by combining passive components in the form of volumes with restrictions with pneumatic amplifiers that had high gain, very similar to the feedback amplifiers discussed later in Section 2.6.
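The principle mentioned here, wrapping a crude high-gain amplifier in passive feedback to get well-behaved linear components, can be made concrete with one formula. The sketch below is our illustration, not from the paper: the closed-loop gain A/(1 + Aβ) approaches 1/β once the loop gain Aβ is large, so an amplifier whose raw gain A varies by a factor of 100 still yields a nearly constant overall gain.

```python
# Sketch (our illustration): closed-loop gain of an amplifier with forward
# gain A and passive feedback fraction beta. For large A*beta the result
# approaches 1/beta, almost independent of the exact value of A.

def closed_loop_gain(A, beta):
    return A / (1 + A * beta)

# A spans a factor of 100, yet the closed-loop gain stays near 1/beta = 10.
gains = [closed_loop_gain(A, beta=0.1) for A in (1e3, 1e4, 1e5)]
print([round(g, 3) for g in gains])
```

This insensitivity to component variation is exactly why feedback produced "good linear behavior" from nonlinear pneumatic (and later electronic) amplifiers.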

The controllers became standardized general-purpose devices, not built for a specific process like the governor, and they were equipped with dials that permitted adjustment of the parameters of the PID controller. The first general-purpose PID controller was the Stabilog, developed by Foxboro; the gain could be adjusted between 0.7 and 100. It appeared in 1931; soon after, other manufacturers developed similar products. Since there could be many controllers in a process, there was a need for methods for finding good values of the controller parameters for different processes. Ziegler and Nichols [703] developed tuning rules where the controller parameters could be determined from simple experiments on the process.
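The classic Ziegler-Nichols closed-loop rules compute PID settings from two numbers measured in a simple experiment: the ultimate gain Ku at which the loop oscillates steadily under pure proportional control, and the oscillation period Tu. A minimal sketch (the numerical Ku and Tu values are made up for illustration):

```python
# Sketch: the classic Ziegler-Nichols closed-loop PID tuning rules.
# Inputs come from an experiment: ultimate gain ku and oscillation period tu.

def ziegler_nichols_pid(ku, tu):
    kp = 0.6 * ku          # proportional gain
    ti = 0.5 * tu          # integral time
    td = 0.125 * tu        # derivative time
    return kp, ti, td

kp, ti, td = ziegler_nichols_pid(ku=4.0, tu=2.0)
print(kp, ti, td)   # 2.4 1.0 0.25
```

The rules deliberately trade simplicity for performance: a single proportional-gain experiment replaces any explicit process model.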

The emergence of sensors, instruments and controllers led to the creation of new companies. The industry was highly diversified, and by the mid 1930s there were more than 600 control companies, with Bailey, Brown, Fisher & Porter, Foxboro, Honeywell, Kent, Leeds & Northrup, Siemens, Taylor Instruments, and Yokogawa among the leading ones [632]. Bennett [66, p 28] estimates that about 75000 controllers were sold in the US in the period 1925–35.

The industrial structure for process control differed from that in the communications and power industries. Ideas were not disseminated but protected as proprietary secrets. In process control there were a large number of companies. The concentrated resources that were available in the communications and power industries were lacking, as was a theoretical foundation.

2.4 Ship Steering

There were many interesting developments in ship steering. Actuation was a major issue because of the large forces required to turn the rudder of a large ship. The word "servo motor" was coined by the French engineer Farcot, who developed hydraulic steering engines [65]. These devices, which provided actuation, were important ingredients in the automation of ship steering. Control also benefited from advances in sensors.

Major advances in ship steering were inspired by exploitation of gyroscopic action. The collection of ideas and devices based on gyroscopic action had a major impact, and has been labeled "the gyro culture" [448,472,471].

The first gyro compass was developed by Anschütz-Kaempfe, who started the company Anschütz in 1905. The company collaborated with Max Schuler, who was head of the Institute of Applied Mechanics at the University of Göttingen. Schuler invented a clever technique to make the gyro compass insensitive to the motion of the ship (Schuler tuning) [586]. Schuler also taught a control course at the university [585,450].

In 1910 Sperry started the Sperry Gyroscope Company to develop a gyro compass and many other devices based on gyroscopes. The company Brown also developed a gyro compass a few years later, and there were court battles with Anschütz about intellectual property rights [448]. Sperry combined the gyro compass with an electric motor connected to the steering wheel to obtain a gyro-pilot. By observing experienced pilots Sperry had found that:

An experienced helmsman should 'meet' the helm, that is, back off the helm and put it over the other way to prevent the angular momentum of the ship carrying it past its desired heading.

Sperry tried to create an electro-mechanical device with this behavior. The design, which is well documented in [65,318,472], is a typical PID controller; the function of meeting the helm is obtained by derivative action. Integral action is obtained by the motor which drives the steering wheel. Amplification was often based on on-off devices, and feedback was exploited to obtain linear behavior. Sperry’s gyro-pilot relieved the helmsman of the tedious job of adjusting the rudder to keep the course. The gyro-pilot had adjustments to set the desired course and to change the controller parameters. There was also a lever to connect and disconnect it. Sperry’s gyro-pilot, which was nicknamed “Metal-Mike,” was very successful. Sperry also provided recorders so that the steering errors of automatic and manual control could be compared. In 1932 there were more than 400 systems installed [318].
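The behavior Sperry implemented electro-mechanically can be sketched as a textbook PID law. The following is a modern reconstruction with illustrative gains, not Sperry’s actual mechanism; note how the derivative term “meets the helm” by reversing the rudder command while the heading error is still positive but shrinking.

```python
# Minimal discrete-time PID sketch (illustrative gains, not Sperry's design).
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}
    def pid(error):
        state["integral"] += error * dt                       # integral action
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            derivative = (error - state["prev_error"]) / dt   # derivative action
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return pid

pid = make_pid(kp=2.0, ki=0.1, kd=5.0, dt=0.1)
u1 = pid(1.0)   # large heading error: strong corrective rudder
u2 = pid(0.5)   # error still positive but shrinking fast: the derivative
                # term dominates and the rudder is put over the other way
```

With these numbers u1 is positive while u2 is negative, which is exactly the “meeting the helm” behavior quoted above.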

There were interesting theoretical developments in ship steering by Minorsky [473], who was educated at the St. Petersburg Imperial Technical Institute. He presented a taxonomy of controllers and recommended the use of PID control for ship steering. His design method, based on a simplified linear model, is what is today called pole placement. Minorsky built an autopilot which was tested, but it did not lead to a product, and he sold his patents to Bendix [66]. Later Minorsky became a professor at Stanford University and wrote a book on nonlinear oscillations [474]. In Bennett’s book [65, pp. 147–148] and in [68] there are interesting discussions of the contributions of Sperry, Minorsky and Anschütz, and their impact on actual autopilot design.

New control problems in ship steering appeared in the First World War in connection with the intensive program for modernization of the navies [52]:

Touched off by the gyro-compass and its repeaters of data transmitters, the possibilities of transmitting target bearings, turret angles, true azimuth, and ship’s heading automatically from topside to plotting rooms to guns opened a vast new field.

The orientation and distance to the target were measured by optical devices, typically by observers aft and forward on the ship. The future position of the target was computed, and the large gun turrets were oriented by servos. Self-synchronous motors (synchros) transmitted the information from the optical devices to the computer, and from the analog computer to the servos. The computers were analog electro-mechanical devices using wheel-and-disk integrators [472]; they were manufactured by the Ford Instrument Company, General Electric, and Sperry.

2.5 Flight Control

There were many experiments with manned flight in the 18th century. One reason why the Wright brothers succeeded was that they understood the relations between dynamics and control. Wilbur Wright expressed it in the following way when lecturing to the Western Society of Engineers in 1901 [461]:

Men already know how to construct wings ... Men also know how to build engines ... Inability to balance and steer still confronts students of the flying problem. ... When this one feature has been worked out, the age of flying will have arrived, for all other difficulties are of minor importance.

By combining their insight with skilled experiments, the Wright brothers made the first successful flight in 1905. An interesting perspective on their success is given in the 43rd Wilbur Wright Memorial Lecture delivered by Charles Stark Draper at the Royal Aeronautical Society on May 19, 1955 [175]:

The Wright Brothers rejected the principle that aircraft should be made inherently so stable that the human pilot would only have to steer the vehicle, playing no part in stabilization. Instead they deliberately made their airplane with negative stability and depended on the human pilot to operate the movable surface controls so that the flying system - pilot and machine - would be stable. This resulted in an increase in maneuverability and controllability.

The fact that the Wright Flyer was unstable stimulated development of autopilots [318]. Sperry used his understanding of gyroscopes and autopilots for ships to design an autopilot for airplanes. The deviations in orientation were sensed by gyroscopes, and the rudder and ailerons were actuated pneumatically. There was a spectacular demonstration of the autopilot in a competition in Paris in 1914. Sperry’s son Lawrence flew close to the ground with his hands in the air while his mechanic walked on the wing to demonstrate that the autopilot could cope with disturbances.

The success of the Wright brothers is an early example of what we today call integrated process and control design. The key idea is that automatic control gives the designer extra degrees of freedom. The Wright brothers made a maneuverable airplane and relied on the pilot to stabilize it. Minorsky was well aware of these issues. He captured it in the phrase [473]: It is an old adage that a stable ship is difficult to steer. It is interesting to observe that modern high-performance fighters are designed to be unstable; they rely on a control system for stabilization.

There was also a strong gyro culture in Germany associated with the development of autopilots [516]. Lufthansa had international long-distance flights in the 1920s. There was a demand for directional control to fly safely in all weather conditions. The German companies Askania, Siemens and Moller-Patin developed autopilots that competed with Sperry’s equipment.

Sperry continued to develop autopilots; a refined model A-2 used air-driven gyroscopes and pneumatic-hydraulic actuators. A spectacular demonstration of the benefits of autopilots came when the Sperry A-2 autopilot was used in Wiley Post’s solo flight around the world in 1933. Airlines started to introduce autopilots in the early 1930s, and companies like Bendix and Honeywell started to make autopilots.

The autopilots made extensive use of feedback, both in the individual components and in the systems. Although there was a good theoretical understanding of flight dynamics based on linearized equations and analysis of the characteristic equation as early as 1911, the theoretical work did not have any impact on practical autopilot design until the mid-1950s [464]. One reason was the lack of computational tools. As in the case of ship steering, engineering ability was more important than theory.

2.6 Long Distance Telephony

Alexander Graham Bell patented the telephone in 1876. Originally the phones were connected with wires to a central location with a switchboard. The number of phones grew rapidly. Many phone calls were transmitted over the same wire using frequency separation. The telephone industry was highly centralized, more so than the electric industries, because it was driven by private or state monopolies and large industries.

One driver of communications, and indirectly but profoundly of control, was the growth of transcontinental telephony in the USA [472]. Around 1887, Oliver Heaviside showed that adding inductance to lines could be used to reduce distortion. In 1899, Mihajlo Pupin of Columbia University patented the loading coil [550], while, at about the same time, George Campbell of AT&T developed it and implemented it on a telephone cable in 1900. This was subsequently used in long-distance lines and cables. Transmission of signals over long distances was, however, passive, and the loading coil technique reached its limits in terms of allowable distortion and attenuation around 1911 with its implementation in the New York – Denver line. In 1913, AT&T bought the rights to the triode, which Lee de Forest [421] had invented in 1907, and had it further studied and developed by Harold Arnold. The transcontinental line used eight repeaters (amplifiers) to connect New York and San Francisco, extending the line from Denver to California. The number of repeaters increased as more cities were interconnected, but distortion then became a major problem, as noted by Bode [95]:

Most of you with hi-fi systems are no doubt proud of your audio amplifiers, but I doubt whether many of you would care to listen to the sound after the signal had gone in succession through several dozen or several hundred even of your fine amplifiers.

Consequently, there was great impetus to increase the capacity of telephone lines by using carrier multiplexing, which, together with the employment of cable, greatly increased the number of repeaters needed. This required high-quality amplifiers with low distortion. The electronic tube was the prime device for amplification at the time, but it had severe drawbacks, such as a nonlinear characteristic that changed with time. There were many efforts, but no real progress was made until Harold Black of Bell Labs developed the negative feedback amplifier in 1927 [83]. The critical idea was to provide an amplifier with feedback via passive linear elements to reduce the distortion in amplifiers. We quote from Bode [95]:

The causes of distortion were of various sorts. They included power supply noises, variations in gain and so on; the dominant problem, however, was the intermodulation due to the slight nonlinearity in the characteristics of the last tube. Various efforts were made to improve this situation, by the selection of tubes, by careful biasing, by the use of matched tubes in push-pull to provide compensating characteristics, and so on. Until Black’s invention, however, nothing made a radical improvement of the situation.

It should be noted that Black used the word “stable” to describe constancy of the amplifier gain in spite of temperature changes, rain, weather, component aging, etc., but not its immunity to “singing” or oscillation [83]. Feedback was an enabler which made it possible to make a good amplifier even while employing components with many undesirable features. A perspective on the invention is given in Black’s paper [84], which was written 50 years after the invention:


I suddenly realized that if I fed the amplifier output back to the input, in reverse phase, and kept the device from oscillating (singing, as we called it then), I would have exactly what I wanted: a means of canceling out the distortion in the output. ... By building an amplifier whose gain is deliberately made, say 40 decibels higher than necessary and then feeding the output back on the input in such a way as to throw away the excess gain, it had been found possible to effect extraordinary improvement in constancy of amplification and freedom from non-linearity.

It took nine years for Black’s patent to be granted, partially because the patent officers refused to believe that the amplifier would work. They did not believe that it was possible to have a stable feedback loop with a loop gain of several hundred [84].
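The arithmetic behind Black’s idea of “throwing away the excess gain” is simple to check numerically: with raw gain A and feedback fraction β, the closed-loop gain is A/(1 + Aβ) ≈ 1/β, set by the passive feedback network rather than by the tube. The numbers below are purely illustrative.

```python
# Sketch of Black's feedback amplifier principle: wrapping a high-gain,
# drifting amplifier A in negative feedback beta makes the overall gain
# approximately 1/beta, nearly independent of A.
def closed_loop_gain(A, beta):
    return A / (1.0 + A * beta)

beta = 0.01                                       # passive, stable network
g_nominal = closed_loop_gain(100000.0, beta)      # nominal tube gain
g_degraded = closed_loop_gain(50000.0, beta)      # tube gain halved by aging

variation = abs(g_degraded - g_nominal) / g_nominal
# A 50% drop in raw gain changes the closed-loop gain by only about 0.1%,
# while the closed-loop gain itself stays close to 1/beta = 100.
```

This is the “extraordinary improvement in constancy of amplification” of Black’s quote: the 60 dB of excess raw gain is traded for insensitivity to the tube’s drift.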

Instability or “singing” was frequently encountered when experimenting with feedback amplifiers. Thus the technological challenge of long-distance telephonic communication led to the issue of stability of the feedback loop. Harry Nyquist encountered the problem in 1932, when he participated in a joint project with Black to test the negative feedback amplifiers in a new carrier system. To address this, Nyquist used ideas that were very different from the stability results of Maxwell and Vyshnegradskii. Instead of analyzing the characteristic equation, he explored how sinusoidal signals propagated around the control loop, resulting in the “Nyquist criterion” [507]. Stability of electronic amplifiers was independently investigated by Küpfmüller [402]. He introduced signal-flow diagrams and analyzed the circuits using integral equations [517].
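In modern textbook form (not Nyquist’s original phrasing), the criterion can be stated as follows: if the loop transfer function $L(s)$ has $P$ poles in the right half plane and the curve $L(j\omega)$ encircles the point $-1$ a net $N$ times clockwise, then the number $Z$ of right-half-plane closed-loop poles satisfies

```latex
Z = N + P ,
```

so the closed loop is stable exactly when $N = -P$, i.e. the curve encircles $-1$ counter-clockwise $P$ times; for a stable open loop ($P = 0$) the Nyquist curve must not encircle $-1$ at all.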

The performance requirements of communication required further advances in the design of feedback loops. While working on the design of an equalizer network in 1934, Hendrik Bode developed a deep insight into feedback amplifiers. He investigated the relationship between attenuation and phase and introduced the concepts of gain and phase margin and the notion of minimum phase [93]. He also proved that there are fundamental limitations to control system design. In particular, he showed that the integral of the logarithm of the magnitude of the sensitivity function is constant, which means that control is inherently a compromise; making the sensitivity smaller for one frequency increases it at other frequencies. He also showed that there were even more stringent limitations if systems are not minimum phase. Bode also developed tools to design feedback amplifiers based on graphical methods (Bode plots) that we today call loop shaping. A particular difficulty was to deal with the large variations in the gain of the triode. He showed that a constant phase margin can be maintained for very large gain variations by shaping the loop transfer function so that its Nyquist curve is close to a straight line through the origin, which he called the “ideal cut-off characteristic.”

Bode’s design method was the first example of robust control. His results were based on the theory of complex variables and are summarized in the seminal book [94].
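The conservation law mentioned above is today called Bode’s sensitivity integral. In a standard modern statement (not Bode’s original notation), for a loop transfer function $L(s)$ that is stable and rolls off faster than $1/s$,

```latex
\int_0^{\infty} \ln \bigl| S(j\omega) \bigr| \, d\omega = 0,
\qquad S(s) = \frac{1}{1 + L(s)} ,
```

so any frequency band where $|S| < 1$ (disturbances attenuated) must be paid for by a band where $|S| > 1$ (disturbances amplified), the so-called waterbed effect; with open-loop poles $p_k$ in the right half plane the integral instead equals $\pi \sum_k \operatorname{Re} p_k$, making the trade-off strictly worse.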

The AT&T Company started an industrial research laboratory as part of its strategy of controlling American telecommunications. To implement the strategy, the company wanted to control the rate and direction of technical change by obtaining, or preventing others from obtaining, key patents. The research laboratories played a major part in ensuring that AT&T kept control of the technology and the patent rights [66, pp. 70–71]. The environment at Bell Labs, which had a mix of scientists like Bode, Shannon and Nyquist and engineers like Black, was a very fertile ground for technology development and basic research. The lab has had 13 Nobel Laureates. Insight into the personalities and the research environment is presented in Mindell’s book [472].

A major difference between the telecommunications industry and the other industries where control was used was that the industry was supported by a research laboratory with many qualified researchers. Theory was interleaved with the practical development, and repeaters for land lines and underwater cables were mass produced.

2.7 Early Electro-Mechanical Computers

It was recognized early on that computers could be used to simulate, and thereby understand, the behavior of dynamic systems in the absence of a mathematical solution. In fact, mechanical devices for integrating differential equations had been designed as early as 1876 by William Thomson (Lord Kelvin) [639,640], who used a ball-and-disk integrator to perform integration. Motivated by the problems of simulating power system networks, Vannevar Bush improved the mechanical design significantly, and also designed a torque amplifier to avoid loading [524]. Bush’s first mechanical differential analyzer had six integrators [125]. The differential analyzer at MIT was used for a variety of applications beyond power systems.

A first step was the “product integraph,” a device for integrating the product of two functions [126], which was an important element in network analysis. This required human tracking of each of the input waveforms of the functions, which then generated an electrical signal fed to a watt-hour meter whose output was a turning wheel. If the output of this calculation was to be used as the input to a next stage, then, to avoid loading the wheel, a servo-motor was used to replicate the movement. It served as the mechanical analog of the amplifier repeater in the telephone network. The next stage of evolution, in 1931, was to feed the output signals of the integrators after the servos back to the inputs, which provided the capability to solve differential equations. Servomechanisms played the crucial role in connecting the stages of computation. Thus control played a central role in the construction of this early electro-mechanical analog computer.
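The principle of feeding integrator outputs back to their inputs can be sketched numerically. Here a hypothetical two-integrator loop solves the harmonic oscillator x'' = -x, with a simple accumulator standing in for each wheel-and-disk integrator; this is a modern illustration of the wiring idea, not a model of Bush’s machine.

```python
# Differential-analyzer principle: two integrators in a feedback loop
# solve x'' = -x. Each "integrator" is a discrete accumulator.
def simulate(steps, dt):
    x, v = 1.0, 0.0            # x(0) = 1, x'(0) = 0, so x(t) = cos(t)
    for _ in range(steps):
        a = -x                 # feedback wiring: acceleration = -position
        v += a * dt            # first integrator: acceleration -> velocity
        x += v * dt            # second integrator: velocity -> position
    return x

x_pi = simulate(steps=31416, dt=1e-4)   # integrate out to t close to pi
# Since cos(pi) = -1, x_pi should come out close to -1.
```

The same wiring pattern, with more integrators and adders, solves any linear ordinary differential equation, which is what made the differential analyzer so broadly useful.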

In turn, the development of this “computer” stimulated Hazen to pursue further work on servomechanisms [290]. However, this work did not explicitly make the connection with the earlier work of Nyquist and Bode. It did, however, cite the earlier work of Minorsky, who had introduced the PID controller in connection with steering of US Navy ships [473]. Early work on servomechanisms was also done at Bell Labs [100,446].

The next generation of the differential analyzer was the “Rockefeller Differential Analyzer,” which transmitted data electronically and thus allowed reconfiguration of the system by resetting telephone switches rather than by the more time-consuming process of mechanically rotating shafts. This project was funded at MIT by Warren Weaver of the Rockefeller Foundation, a partnership which played a very important role in the subsequent development of anti-aircraft fire control. Punched paper tape could be used to “program” this computer, making it a “hybrid” digital/analog system. Motivated by this, Claude Shannon examined in his MIT Master’s Thesis the problem of switching circuits and showed how Boolean algebra can be used for design [597]. Subsequently, George Stibitz built on this work in making progress towards the digital computer. Shannon later investigated the class of problems that could be solved by the differential analyzer [598].

Copies of the differential analyzers were built by several universities and research organizations. Nichols used the differential analyzer at MIT when he developed the tuning rules for PID control [91]. The analog computer at the University of Manchester was used to analyze control of systems with time delays [130].

In 1938 George Philbrick of Foxboro invented an electronic analog computer called Polyphemus for simulation of process control systems [307]. This system was used extensively at Foxboro for training and demonstration. Analog computing would later have a major impact on control.

3 The Field Emerges

Control emerged as a discipline after the Second World War. Prior to the war it was realized that science could have a dramatic impact on the outcome of the war. Fire-control systems, gun-sights, autopilots for ships, airplanes, and torpedoes were developed. Significant progress was also made in process control. There was close collaboration between military agencies, industry, research labs, and universities [517,472]. Engineers and researchers with experience of control systems from different specialties were brought together in cross-disciplinary teams. It was recognized that there was a common foundation for all control problems, even if the application areas were very diverse.

Fire control was one of the major challenges. Cities, factories and ships needed guns to protect them from attacking aircraft. Radar was used as a sensor; electric or hydraulic motors were used to direct the guns. Communication was required because the radar and the guns were physically separated. An additional difficulty was that the radar signal was noisy. Fire control for ships also had to deal with the motion of the ships. Early fire control systems used manual control, which became infeasible when the speed of aircraft increased. Automated aiming was implemented using mechanical analog computers. Feedback was used extensively, both at the system level and at the component level.

Germany had a strong tradition in control; Tolle’s textbook [644] appeared as early as 1905. By 1940 there was control expertise at many companies, such as Askania, Kreiselgeräte and Siemens, and at some universities. The VDI (Verein Deutscher Ingenieure, Association of German Engineers) had recognized the importance of control and had organized a committee on control engineering in 1939, with Hermann Schmidt as chairman and Gerhard Ruppel as secretary.

The Versailles treaty placed severe restrictions on German military activities; for example, Germany was not allowed to have an air force. In spite of this there were many secret projects. Autopilots for aircraft and missiles were developed [61,517]. The navy established a secret company, Kreiselgeräte (Gyro Devices), in 1926. The company played a central role in navigation and guidance throughout the Second World War [252,448]. Several companies manufactured autopilots in 1940. According to Oppelt [516], thousands of autopilots were produced every month. Siemens alone had manufactured 35,000 systems by the end of the war. The autopilots were based on gyroscopes and analog computing using pneumatic, hydraulic, and electro-mechanical technologies.

The German army secretly created a Ballistics Council to develop military rockets. The program, which was led by Walter Dornberger, started in 1930, and it was transferred to Peenemünde in 1937. At that time the group had about 90 people [385,61]. Guidance and control were critical elements. Several missiles were developed, among them the cruise missile V-1 and the ballistic missile V-2. Much research and development was required for the guidance systems. Askania developed and produced the autopilot for the V-1. The V-2 missile and its guidance system were developed by a team led by Wernher von Braun [61]. More than 8,000 V-1s and 3,000 V-2s were launched during the war. The German rocket scientists subsequently went to the USA and the USSR after the war and played leading roles in the development of missile technology. The USSR launched the first artificial Earth satellite, Sputnik, in 1957, triggering the Space Race. The first rocket to reach the Moon was the Soviet Union’s Luna 2 mission in 1959.

Research in the USSR was highly centralized [403,79]. The Academy of Sciences directed the research, and there were large engineering institutes for applications: Electrotechnical, Boiler and Turbine, Power Engineering, Naval and Aviation. The USSR had a long tradition in automatic control going back to Vyshnegradskii, Joukowski, and Lyapunov; recall that Joukowski’s book was published in 1909. Control also benefited from a strong tradition in nonlinear dynamics, with schools in Moscow, led by Mandelstam and Andronov [15], and in Kiev, led by Krylov and Bogolyubov [397]. A technical commission on remote control and automation was created in 1934, with A. A. Chernyshov as chairman. An All-Union Conference on automatic control and dispatch design was organized in 1936, with about 600 participants [403]. The Institute of Automation and Remote Control was founded in Moscow in 1939 [16]. It became a powerhouse for control systems research with many prominent researchers, including A. A. Andronov, M. A. Aizerman, A. A. Butkovsky, A. A. Feldbaum, N. N. Krasovski, B. Ya. Kogan, A. Ya. Lerner, B. N. Petrov, V. V. Solodovnikov, Ya. Z. Tsypkin, and S. V. Yemelyanov. The institute published the journal Avtomatika i Telemekhanika, which was translated into English and widely read in the West. In 1944 Andronov organized a research seminar at the Institute of Automation and Remote Control with a group of very talented researchers [652]. He correctly predicted a grand era of automation [422]. Mathematicians like Pontryagin and Gamkrelidze from the Steklov Institute of Mathematics made significant contributions (the Maximum Principle). There were also institutes in many other cities, for example Leningrad, Sverdlovsk, and Kiev.

In the US a group of scientists, including Karl T. Compton (president of MIT), James B. Conant (president of Harvard), and Frank Jewett (director of Bell Labs), led by Vannevar Bush, petitioned President Roosevelt to form an organization that could exploit scientific and technical expertise for the war effort [684, pp. 182–184]. The result was the formation of the National Defense Research Committee (NDRC) in 1940, with Bush as its chair. Within a year the NDRC became a part of the Office of Scientific Research and Development (OSRD). Its director, Bush, reported directly to the President. NDRC built on laboratories around MIT and Bell Labs. The Instrumentation Laboratory at MIT had been created in 1930 [166,17] by Charles Stark Draper with the mission of making precise measurements of velocity and angular rate. Pioneering work on servomechanisms had been done by Harold Hazen in the 1930s [290,289]. In 1939 the US Navy requested a special course on servomechanisms and fire control. The course was given by Gordon Brown, who shortly afterwards created the Servomechanisms Laboratory [684, pp. 212–217]. NDRC also created the Radiation Laboratory at MIT, which at one time had about 4,000 researchers. The laboratories had an interdisciplinary staff with a wide range of academic and industrial backgrounds [472]. There were fertile interactions between engineers and scientists in the different groups and engineers in industry [472,448].

The Bureau of Ordnance of the US Navy funded joint projects between the Servomechanisms and Instrumentation Laboratories at MIT. Gordon Brown developed improved hydraulic systems to turn the turrets, and Draper designed the Mark 14 gun-sight based on gyros. The gun-sight was manufactured by Sperry, and more than 85,000 systems were produced by the end of the war [472].

Inertial navigation and guidance based on gyros and accelerometers are key technologies for fire control. After his success with the Mark 14 gun-sight, Draper started an intensive program to reduce the drift of the gyroscopes and to develop inertial guidance systems. By 1950 there were successful flight tests of inertial navigators from the Instrumentation Laboratory and from Autonetics [176,448]. To prevent accelerometers from misinterpreting gravity as an acceleration, it was essential to keep them aligned orthogonally to the gravity vector. Schuler had shown that this could be accomplished by designing a feedback loop with a natural period of 84 minutes [586].
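Schuler’s 84-minute figure is the period of a hypothetical pendulum whose length equals the Earth’s radius; a loop tuned to this period does not tilt when the vehicle accelerates. A quick check with nominal values for the Earth’s radius and gravity:

```python
import math

# Schuler period T = 2*pi*sqrt(R/g): the natural period of a pendulum
# of length equal to the Earth's radius. Nominal values; the exact
# figure varies slightly with the values used for R and g.
R = 6.371e6          # mean Earth radius, meters
g = 9.81             # gravitational acceleration, m/s^2

T_seconds = 2 * math.pi * math.sqrt(R / g)
T_minutes = T_seconds / 60.0
# T_minutes comes out at roughly 84.4 minutes, Schuler's figure.
```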

The Instrumentation Laboratory was renamed the Draper Laboratory in 1970 and became a not-for-profit research organization in 1973. The Servomechanisms Laboratory remained part of MIT and is now the Laboratory for Information and Decision Systems [477].

The Radiation Laboratory was dissolved after the war, but it was decided to publish the research in a series of 28 volumes. We quote from the foreword to the series:

The tremendous research and development effort that went into the development of radar and related techniques during World War II resulted not only in hundreds of radar sets for military use but also in a great body of information and techniques... Because this basic material may be of great value to science and engineering, it seemed most important to publish it as soon as security permitted. The Radiation Laboratory of MIT, ..., undertook the great task of preparing these volumes. The work described herein, however, is the collective result of work done at many laboratories, Army, Navy, university, and industrial, both in this country and in England, Canada, and other Dominions. ... The entire staff agreed to remain at work at MIT for six months or more after the work of the Radiation Laboratory was complete.

Most of the volumes deal with radar and microwaves, but at least two of them are highly relevant to control: Volume 27, Computing Mechanisms and Linkages, and particularly Volume 25, Theory of Servomechanisms. Although there are earlier books on servomechanisms [100,283,279,446,94], Volume 25 [343] is unquestionably a landmark. The multidisciplinary nature of control is illustrated by the fact that the prime authors include Hubert James, a physics professor at Purdue, Nathaniel Nichols, director of research at Taylor Instrument Company, and Ralph Phillips, a professor of mathematics at the University of Southern California. The book was followed by others written by authors from the Servomechanisms Laboratory [121].

Before the outbreak of the war, research and development in control in the UK was carried out by the Admiralty Research Laboratory, the Royal Aircraft Establishment, the Telecommunications Research Establishment, the National Physical Laboratory, and by companies in the shipbuilding, chemical, and electrical industries [539,67]. A committee under the chairmanship of Sir Henry Tizard, Rector of Imperial College London, was created in 1935 to examine the problem of the defense of Britain from air attack. Many schemes were explored, and it was decided to focus on the development of radar. Successful experiments were carried out in late 1935. Working ground stations were available by 1940 and airborne stations in the spring of 1941 [684, pp. 193–194]. When Churchill became prime minister he selected Professor Frederick Lindemann (Viscount Cherwell) as his scientific advisor, and there were frequent conflicts between Tizard and Lindemann [608,147]. There was an extensive exchange of ideas and hardware with the USA [459] [684, p. 195].

The Admiralty explored the use of radar for naval gunnery. The development was done at companies like Metropolitan-Vickers, where Arnold Tustin was one of the leading researchers. The company had experience in servo systems and analog computing because it had built a mechanical differential analyzer in 1935. Tustin also chaired a group of companies working for the Admiralty [78]. A Servo Panel was formed in 1942 with Hartree as chairman and Porter as secretary [539]. The mission of the panel was to exchange experiences of servo systems; Tustin and Whiteley were among the members. The Servo Panel was followed by a more formal organization, the Interdepartmental Committee on Servomechanisms and Related Devices (ICSR), established by the Ministry of Supply in 1944. Its mission was to follow research in the field, to advise the Ministry, and to act as an advisory body on servomechanisms to any firm engaged in Government work.

There were also activities in many other countries, for example France [223], Italy [271], Japan [384] and China [139], and even in small countries such as Sweden [32], where the Research Institute of National Defense (FOA) was created in 1945.

3.1 The Development of Servomechanism Theory

The early work on control had a rudimentary theory based on linear differential equations and the Routh-Hurwitz stability criterion. The frequency response method developed by Bode [93] and Nyquist [507] was a paradigm shift. Bode [95] expressed the differences between process control and telecommunications as follows:

The two fields are radically different in character and emphasis. ... The fields also differ radically in their mathematical flavor. The typical regulator system can frequently be described, in essentials, by differential equations of no more than perhaps the second, third or fourth order. On the other hand, the system is usually highly nonlinear, so that even at this level of complexity the difficulties of analysis may be very great. ... As a matter of idle curiosity, I once counted to find out what the order of the set of equations in an amplifier I had just designed would have been, if I had worked with the differential equations directly. It turned out to be 55.

The fire control problems were particularly challenging because they involved radar and optical sensing, prediction, and servoing. Servomechanisms had been investigated early at MIT by Hazen in connection with work on network analyzers and Bush’s differential analyzer [290], as described in Section 2.7. By combining this work with the ideas of Bode and Nyquist, it was refined into a coherent method to analyze and design control systems at the Radiation Laboratory. Many applications centered around servo problems; typical examples were gun-sights and radar. One of the pioneers, Hall of MIT, expressed it as follows [280]:

Real progress results from strong stimulation. ... The war brought three problems to the controls engineer. The first was handling problems and systems of considerable dynamic complexity dictated by the requirements of more accurate and rapid fire-control systems. The second was that of designing systems that would cope with large amounts of noise, occasioned by the use of radar in fire control. The third problem, raised by the guided missile, was that of designing so accurately a dynamic system that it could be used successfully almost at once with negligible field trials.

The key elements of servomechanism theory are block diagrams, transfer functions, frequency response, analog computing, stochastic processes and sampling. The mathematical foundation was based on linear systems, complex variables, and Laplace transforms.

A block diagram is an abstraction for information hiding, where systems are represented by blocks with inputs and outputs. The internal behavior of the systems is hidden in the blocks. The behavior of the blocks was described by ordinary differential equations, or transfer functions derived using Laplace transforms. A central idea was that relations between signals in a block diagram could be found by algebra instead of manipulation of differential equations, an idea which goes back to Heaviside. Block diagrams and transfer functions allowed a compact representation of complex systems. An important consequence was that the similarity of many different control systems became apparent because their block diagrams revealed that they had the same structure.
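The block-diagram algebra lends itself to mechanization. As an illustrative sketch (the plant, the gain, and the function name are our own, not taken from the sources above), the closed-loop poles of a unity-feedback loop with loop transfer function L(s) = P(s)C(s) are the roots of den(L) + num(L):

```python
import numpy as np

def closed_loop_poles(num_pc, den_pc):
    """Closed-loop poles of the unity-feedback loop T = PC/(1+PC):
    the roots of den_pc(s) + num_pc(s). Coefficients are listed
    with the highest power of s first."""
    num = np.zeros(len(den_pc))
    num[len(den_pc) - len(num_pc):] = num_pc
    return np.roots(np.asarray(den_pc, dtype=float) + num)

# Example: P(s) = 1/(s(s+1)) with proportional gain C(s) = 2, so
# L(s) = 2/(s^2 + s) and the characteristic polynomial is s^2 + s + 2.
poles = closed_loop_poles([2.0], [1.0, 1.0, 0.0])
print(poles)
```

The same algebra works for any proper loop transfer function, which is exactly what made the block-diagram calculus so powerful.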

An important factor that significantly contributed to the success of servomechanism theory was that the transfer function of a system could be determined experimentally by investigating the response to sinusoidal inputs. In this way it was possible to deal with systems whose physical modeling was difficult. Control engineers were fearless in finding models of technical systems by injecting sinusoidal perturbations and observing the responses. An example is given in [8,510], where in an attempt to determine the dynamics of the Swedish power network, the full output of a major power station was used to perturb the system. Special frequency analyzers were also developed to generate sinusoidal signals and to compute the transfer functions.
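The principle behind those frequency analyzers can be sketched in a few lines. The first-order process below is a hypothetical stand-in for the plant; the point is the correlation of the steady-state output with sine and cosine at the excitation frequency:

```python
import numpy as np

def measure_frequency_response(w, T=200.0, dt=1e-3):
    """Estimate G(jw) of a hypothetical first-order process
    dy/dt = -y + u by injecting u = sin(w*t) and correlating the
    steady-state output with sin and cos, as the frequency
    analyzers of the era did."""
    t = np.arange(0.0, T, dt)
    u = np.sin(w * t)
    y = np.zeros_like(t)
    for k in range(len(t) - 1):           # crude Euler simulation
        y[k + 1] = y[k] + dt * (-y[k] + u[k])
    n_per = int(2 * np.pi / (w * dt))     # samples per period
    ys, ts = y[-10 * n_per:], t[-10 * n_per:]  # last 10 periods only
    a = 2 * np.mean(ys * np.sin(w * ts))  # in-phase component
    b = 2 * np.mean(ys * np.cos(w * ts))  # quadrature component
    return complex(a, b)                  # estimate of G(jw)

G = measure_frequency_response(1.0)       # true G(j1) = 1/(1+j)
print(abs(G), np.degrees(np.angle(G)))    # near 0.707 and -45 degrees
```

Sweeping `w` over a grid and plotting the results gives exactly the Bode plot that engineers of the period measured point by point.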

Graphical methods for controller design were based on shaping the frequency response of the loop transfer function (loop shaping). The design method yielded controllers in the form of rational transfer functions; they were not restricted to PID controllers. Compensators were often obtained as combinations of lead and lag networks. The limitations caused by process dynamics that are not minimum phase were apparent in the design procedure. The graphical representations of Bode and Nichols charts were easy for engineers to use since they also provided significant physical insight, as is illustrated by the following quote from an engineer in ASEA's research department [528,242]:
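The lead network, the workhorse of loop shaping, can be sized with two lines of trigonometry. A sketch, with illustrative corner frequencies of our own choosing: the maximum phase lead of C(s) = (s+z)/(s+p) occurs at the geometric mean of zero and pole, and satisfies sin φmax = (p−z)/(p+z).

```python
import math

def lead_phase_deg(w, z, p):
    """Phase lead (degrees) of the network C(s) = (s + z)/(s + p)
    at frequency w rad/s."""
    return math.degrees(math.atan2(w, z) - math.atan2(w, p))

# Hypothetical design: zero at 1 rad/s, pole at 10 rad/s
z, p = 1.0, 10.0
w_max = math.sqrt(z * p)    # maximum lead at the geometric mean
phi_max = math.degrees(math.asin((p - z) / (p + z)))
print(w_max, lead_phase_deg(w_max, z, p), phi_max)  # both angles ~54.9
```

A designer would place `w_max` near the desired gain crossover so the phase boost lands where the margin is needed.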

We had designed controllers by making simplified models, applying intuition and analyzing stability by solving the characteristic equation. At that time, around 1950, solving the characteristic equation with a mechanical calculator was itself an ordeal. If the system was unstable we were at a loss; we did not know how to modify the controller to make the system stable. The Nyquist theorem was a revolution for us. By drawing the Nyquist curve we got a very effective way to design the system because we knew the frequency range which was critical and we got a good feel for how the controller should be modified to make the system stable. We could either add a compensator or we could use an extra sensor.

The design methods were originally developed for systems with one input and one output; they could be extended to systems with several inputs and outputs by combining the Nyquist plots for different loops [243].

Disturbances are a key ingredient in a control problem; without disturbances and process uncertainties there is no need for feedback. Modeling of disturbances is therefore important. In servomechanism theory, it was proposed to model disturbances as stochastic processes [343,614,655]. The book [343] has formulas for calculating the mean square error for linear systems driven by stochastic processes. A key problem in fire control was to predict the future motion of an aircraft. Solutions to this problem were given independently by Wiener [683] and Kolmogorov [389]. The work had no impact on the fire control systems during the war [472, p 280–283]. Newton, Gould and Kaiser [500] used Wiener's prediction theory to design control systems that minimize mean square fluctuations. An interesting feature of their approach is that they converted the feedback problem to an equivalent feedforward problem, which was much easier to solve. Today we call this approach Youla parameterization. Other books on control of systems with random processes are [674,615,412,163].
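The flavor of mean-square prediction can be conveyed by a toy discrete-time example, far simpler than the Wiener-Kolmogorov theory and entirely our own illustration: for a first-order autoregressive signal, the predictor that minimizes the mean square error propagates the model one step, and the residual error is just the driving noise.

```python
import random
random.seed(1)

# Hypothetical signal model: x[t+1] = a*x[t] + e[t], e white noise
a, n = 0.9, 100000
x = [0.0]
for _ in range(n):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))

# Minimum mean-square one-step predictor: xhat[t+1] = a*x[t].
# Its error is the driving noise, so its MSE is close to 1;
# the naive predictor xhat[t+1] = x[t] must do worse.
mse_opt = sum((x[t + 1] - a * x[t]) ** 2 for t in range(n)) / n
mse_naive = sum((x[t + 1] - x[t]) ** 2 for t in range(n)) / n
print(mse_opt, mse_naive)
```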

The Semi-Automatic Ground Environment (SAGE), a semi-automatic system for detecting missiles approaching North America, was developed in the late 1950s at the Lincoln Laboratory [561]. The system consisted of a network of radar, computers and command and control centers. The scanning radar stations provided periodic samples of missile position; this spawned much research in sampled data systems. Significant contributions were made by Franklin and Jury in the control group at Columbia University led by Ragazzini [553,352]. There was also significant research on sampled data systems by Tustin in the UK [655], and by Tsypkin at the Institute of Automation and Remote Control in the USSR [651]. Earlier Oldenburger and Sartorius [511] had worked on sampling motivated by chopper-bar systems used in process control.
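Sampled-data design eventually standardized on discretizations such as the bilinear transformation that carries Tustin's name. As a sketch (the first-order lag is chosen purely for illustration), substituting s → (2/h)(z−1)/(z+1) into G(s) = 1/(τs+1) yields a simple difference equation whose step response tracks the continuous one:

```python
import math

def tustin_first_order_lag(tau, h):
    """Coefficients of the difference equation obtained by applying
    Tustin's substitution s -> (2/h)(z-1)/(z+1) to G(s) = 1/(tau*s+1):
    y[k] = a*y[k-1] + b*(u[k] + u[k-1])."""
    return (2 * tau - h) / (2 * tau + h), h / (2 * tau + h)

tau, h = 1.0, 0.01
a, b = tustin_first_order_lag(tau, h)

# Unit-step response of the discrete filter vs. 1 - exp(-t/tau)
y, u_prev = 0.0, 0.0
for _ in range(int(3 * tau / h)):    # simulate up to t = 3*tau
    y = a * y + b * (1.0 + u_prev)   # u[k] = 1 for k >= 0
    u_prev = 1.0
print(y, 1 - math.exp(-3.0))         # the two values nearly agree
```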

Since fire and flight control systems involved a human in the loop it was natural to investigate the dynamic characteristics of a human in a feedback loop [656], [518], [90, Chapter 13]. Partially inspired by this, Norbert Wiener coined the term Cybernetics (Control and Communication in the Animal and the Machine) in the book [682] published in 1948. Wiener emphasized interdisciplinary research and the convergence of control, communication, biology and system theory. Ross Ashby explored the origin of the adaptive ability of the nervous system in the book [26], resonating with the idea of cybernetics [27]. An engineering view of cybernetics was however given in Tsien's book Engineering Cybernetics [646], which anticipated much of the development of control after 1954. Cybernetics caught the imagination of both professionals and the public in general, but eventually it fell into disrepute, perhaps because of a lack of any significant research outcome, over-promising, and over-exploitation. The word survived in some institutions: Yakubovich founded the Department of Theoretical Cybernetics in Leningrad in 1970, and the control department at the Norwegian Institute of Technology was named Teknisk Kybernetikk.

Information about servomechanisms was spread in many conferences, leading to the formation of the International Federation of Automatic Control in 1956. The Department of Scientific and Industrial Research in the UK arranged a conference in Cranfield in July 1951. The proceedings [658] were edited by Tustin, a central person in control research in the UK. Another conference was arranged by ASME in December 1953. The conference proceedings were edited by Rufus Oldenburger, Director of Research of the Woodward Governor Company, and were dedicated to Harry Nyquist [514]. (The highest ASME award in control systems is the Oldenburger Medal.) The Italian National Research Council arranged a series of meetings in Milan, culminating in an International Congress on the Problems of Automation in April 1956 with more than 1000 attendees [150].

3.2 The Wide Applicability of Servomechanism Theory

Although servomechanism theory was developed primarily for the fire control problem, it quickly became clear that the theory had wide applicability to practically any control problem. All fields where control had been used earlier were invigorated by the influx of ideas from servomechanism theory. The associated systems engineering methodology, which had been developed to deal with complex systems, also had very wide applicability.

Pioneering work on numerically controlled machine tools was done at MIT's Servomechanism Laboratory [684, p. 218–225]. A numerically controlled three-axis milling machine was demonstrated in 1952. The first version of APT, a language for programming the machines, was later developed. APT was widely used through the 1970s and is still an international standard.

Servomechanism theory had a strong impact on process control. Oldenburger and Sartorius of Siemens showed that concepts and methods from servomechanism theory were useful for process control [511]. Smith [607] and Eckman [184] made similar observations. Equipment for process control was also improved. Electronics replaced pneumatics, but valve actuation was still pneumatic because of the forces required. One consequence was that the delay in the pneumatic lines used for signal transmission was reduced significantly. The linearity and precision of sensors and actuators were improved significantly by using force feedback. Feedback was also used extensively to improve the electronic controllers.

Drive systems with electric motors were improved significantly when the thyristor became available in the mid-1950s. There were major developments in power systems as electric power networks increased in size and complexity. High voltage DC transmission systems were developed. They required sophisticated electronics and control systems for AC to DC and DC to AC conversions. The first system was a 20 MW 100 kV transmission from mainland Sweden to the island of Gotland in 1954 [410].

The systems engineering capability required to build complex systems became an important part of control during the war. A dramatic demonstration of the advances of control was made in September 1947 when the aircraft "Robert E. Lee" made a completely autonomous transatlantic flight [464]:

The aircraft had a Sperry A-12 autopilot with approach coupler and a Bendix automatic throttle control. ... It also had some special purpose IBM equipment that permitted commands to its automatic control to be stored on punch cards fed automatically. From the time that the brakes were released for takeoff from Stephenville, Newfoundland, until the landing was completed at Brize-Norton, England the next day, no human had touched the control. The selection of radio station, course, speed, flap setting, landing gear position, and the final application of wheel brakes were all accomplished from a program stored on punched cards. The complete automation of aircraft flight appeared to be at hand.

3.3 From Mechanical to Electronic Computers

Controllers developed before 1940 were special purpose analog computers based on mechanical, pneumatic or electrical technology. There was a breakthrough when mechanical technology was replaced by electronics. The invention of the operational amplifier [554,128,529,307] was the key. By providing the operational amplifiers with input and feedback impedances it was possible to create components that could add and integrate signals. Adding multipliers and function generators made it possible to develop powerful computing devices for implementation of control systems.

Electronic analog computing had significant advantages over electro-mechanical devices, particularly in airborne equipment where low weight was important. The operational amplifier was also used to build general purpose analog computers. They were fast because operation was parallel. It was even possible to run them in repetitive operation so that effects of parameter variations could be visualized instantaneously. The number of differential equations that could be solved was equal to the number of integrators; large installations had more than 100 integrators. The computers were programmed with a detachable patch panel. Problems of oscillations arose if there was an algebraic loop, i.e., a closed loop without an integrator.
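The patch-panel style of programming can be mimicked digitally in a few lines: each integrator becomes a running sum, and the "wiring" is the order in which the sums feed one another. A sketch (the harmonic oscillator is our illustrative choice):

```python
import math

# Two "integrators" patched in a loop solve x'' = -x; each Euler
# update below stands in for one integrator of the patch panel.
dt = 1e-4
x, v = 1.0, 0.0                       # initial conditions on the integrators
for _ in range(int(math.pi / dt)):    # run for half a period
    v += dt * (-x)                    # first integrator: v accumulates -x
    x += dt * v                       # second integrator: x accumulates v
print(x)                              # near cos(pi) = -1
```

Note the loop contains an integrator at every stage, so there is no algebraic loop of the kind that caused oscillations on real machines.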

The analog computer became a popular tool for research institutes, and electrical, aerospace and chemical companies. The computers were typically run by well-staffed computing centers that attended to the hardware and assisted with programming. There were also smaller analog computers that could be placed on a desk top. Several institutes built their own systems, and universities also acquired analog computers.

The analog computers made it possible to simulate large systems. For the first time it was possible to use mathematical models to explore the behavior of systems under a wide range of operating conditions. Analog computers could also be used for hardware-in-the-loop simulation where real components were combined with simulation. Analog computing became an academic subject [316,442].

Digital computing emerged with the ENIAC, developed in the mid-1940s by Mauchly and Eckert of the University of Pennsylvania's Moore School of Electrical Engineering. Mauchly and Eckert left the university and formed a company that became Univac. The first computer Univac 701 appeared in 1951. A year later IBM announced the IBM 701. Several companies entered the computer business, but by 1960 IBM totally dominated the industry.

In 1944 the Servomechanism Laboratory at MIT got a contract from the US Navy to develop a general purpose simulator for training naval bombers. Originally it was attempted to base the simulator on analog computing, but the program shifted to digital computing inspired by the emerging new technology. The computer was called "Whirlwind" after the name of the project [560]. The project changed direction several times. In the beginning of the 1950s it was used in the SAGE program, where Whirlwind became an early example of real-time computing. It was connected to radar stations for feasibility studies in the SAGE program. Whirlwind was designed as a 16-bit machine with 2K of memory. When experimenting with memory, Forrester explored magnetic cores in 1949, and core memory was installed two years later [219]. Forrester and others patented the technology, which became the standard random-access memory for a twenty year period. Ken Olsen worked on the core memory in the Whirlwind team as a student. Later he moved to Lincoln Labs to make the TX-0, a transistorized version of the Whirlwind. In 1957 he founded Digital Equipment Corporation (DEC). DEC's PDP-1, which appeared in 1959, was the first of a long string of successful minicomputers [136].

3.4 Communication

There was a need for centralization of control rooms, both in fire-control systems and in process control. The precision of the synchros that were used for fire control was improved and standardized. There were significant advances in synchros for communication of angles in fire-control systems, and the synchros and associated equipment became standard commodities.

In process control the pneumatic tubes that were used for communication were replaced by electrical systems. Signal levels were standardized to 4–20 mA. The fact that the zero signal corresponds to a nonzero current was used for diagnostics. The electric systems reduced the time delay caused by the limited speed of sound in the pneumatic systems. Cabinets with maneuvering equipment, controllers and recorders improved significantly. They were also augmented by relay panels for automatic start-up and shutdown and for safety interlocks. Centralized control rooms became common in process control.
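The live-zero convention is easy to state in code. A sketch (the 3.8 mA fault threshold is a common convention we assume here, not a value from the text):

```python
def current_to_value(i_ma, lo, hi):
    """Convert a 4-20 mA transmitter signal to engineering units.
    The live zero at 4 mA means a reading near 0 mA indicates a
    broken wire or dead transmitter rather than a zero measurement."""
    if i_ma < 3.8:   # below the live zero: assume signal loss
        raise ValueError("signal loss: %.1f mA" % i_ma)
    return lo + (i_ma - 4.0) / 16.0 * (hi - lo)

print(current_to_value(12.0, 0.0, 100.0))  # mid-scale reads 50.0
```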

There was a seminal breakthrough in communication theory with the publication of Shannon's paper on information theory in 1948 [599]. Shannon defined the "capacity" of a communication link, showed what the appropriate tools to study it are, and characterized it in terms of the mutual information. Shannon also studied whether feedback could be used to increase capacity, and showed that for a discrete memoryless channel it could not; however its implication for control is limited since it does not address delay or a finite horizon. These themes are being revisited currently, as detailed in Section 5.2.
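For the binary symmetric channel the capacity has a closed form, C = 1 − H(p), and by Shannon's result feedback does not increase it. A small illustration:

```python
import math

def bsc_capacity(p):
    """Capacity C = 1 - H(p) of the binary symmetric channel with
    crossover probability p, in bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(bsc_capacity(0.5))   # a coin-flip channel carries nothing: 0.0
print(bsc_capacity(0.11))  # about half a bit per channel use
```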

3.5 The Growth of Institutions and Research Labs

Control was nurtured in large laboratories that were created during the Second World War, such as the laboratories around MIT and in Moscow. The Radiation Laboratory was closed after the war, but some of the MIT labs such as the Draper Lab and the Instrumentation Lab continued to operate. Lincoln Lab at MIT was established in 1951 to build the air defense system SAGE, many of the engineers having previously worked at the Radiation Lab. There were also significant control groups at General Electric, Hughes Aircraft, Bell Labs, Minneapolis Honeywell, Westinghouse and Leeds and Northrup.

There was a strong control group at Columbia University under the leadership of John Ragazzini and Lotfi Zadeh, created around 1950. Among the graduate students were future leaders like Rudolf Kalman, John Bertram, Gene Franklin, and Eliahu Jury. Seminal work on sampled data systems was conducted; there was a weekly seminar dominated by Kalman and Bertram [232]. The group at Columbia dissolved in the late 1950s when Jury and Zadeh moved to Berkeley, Franklin to Stanford, Kalman to RIAS, and Bertram to IBM.

The RAND Corporation in the US was created as a think tank, operated by the Douglas Aircraft Company and financed by the Air Force. In the 1950s it carried out significant research related to control. George Dantzig developed linear programming [162]. Bellman, who had done his PhD under Solomon Lefschetz at Princeton, developed dynamic programming [56,58,59].


Solomon Lefschetz had established a center for research in nonlinear differential equations and dynamics at Princeton in the late 1940s. In 1955 the Glenn Martin Company created the Research Institute for Advanced Study (RIAS) in Baltimore with very close relations to the Princeton group. Lefschetz and many of his group members joined RIAS; among them were Bellman, Bhatia, Hale, Kalman, Kushner, LaSalle, Lee, and Marcus, who all would make contributions to control. Lefschetz and many of his colleagues moved to Brown University in 1964 to form the Lefschetz Center for Dynamical Systems. Lawrence Marcus moved to the University of Minnesota to create the Center for Control Science and Dynamical Systems.

In the late 1950s IBM and other computer manufacturers saw the potential for using computers for process control. IBM started a research group in control in the Department of Mathematics at the T. J. Watson Research Center in Yorktown Heights, with Kalman as the first leader [185]. Kalman left after a short time and John Bertram, a classmate of Kalman at Columbia, took over as the leader. The group later moved to San Jose. IBM also started laboratories in Europe; the IBM Nordic Laboratory in Stockholm was devoted to process control.

In England several of the major researchers moved to universities. Tustin became head of Electrical Engineering at Imperial College in 1953, where Westcott already was a lecturer, and Coales moved to Cambridge in 1953 [67,459,677]. The National Physical Laboratory in England started a research group in control.

China had a long tradition in science. Professor Hsue-shen Tsien had worked with von Karman at Caltech and the Jet Propulsion Laboratory on missile guidance. In 1954 he wrote the remarkable book "Engineering Cybernetics" [646]. Tsien returned to China in 1955, where he gave lectures on control and proposed to establish research facilities for aeronautics and missile development, the Fifth Academy of the Ministry of National Defense [137]. The Chinese Academy of Sciences created the Institute of Automation in 1956. The mathematician Z. Z. Guan established a research laboratory in control theory at the Institute of Mathematics, Chinese Academy of Sciences in 1962. The Chinese Association of Automation (CAA) was founded in 1961 after substantial preparatory work [139].

There were similar activities in many other countries, with growth of control research in industry and academia [82]. Research institutes were also created by the academies of science in Budapest and Prague. The Research Institute of National Defense in Stockholm had one group for analog computing and another for missile guidance and control theory. In 1955 Saab created a new division called R-System, patterned after the RAND Corporation and sponsored by the Swedish Air Force [32].

3.6 The Onset of Control Education

Most of the research in control had been done in industry and research institutes and at a few universities. When servomechanism theory emerged it was recognized as a useful technology that could be widely applied. Control courses were introduced at practically all engineering schools. Control groups were created in many companies, and new industrial enterprises specializing in control were established. Many textbooks were written. In addition to [607,446,184,343], other popular US books were [121,143,637,103,645]. Among the books from the USSR we can mention [616,669,5,391]. Books were also published in Germany [515,512,585], the UK [538,449,676] and France [254]. A list of early textbooks on control was compiled in connection with the 50th anniversary of IFAC [250]. The list includes 33 books published in 1960 and earlier. The book by Truxal [645] is representative of the state of the art of control education in the mid-1950s. The topics covered included linear systems theory based on Laplace transforms, the root locus method, stochastic processes, sampled data systems, and analysis of nonlinear systems based on phase-plane and describing function methods. The book summarized many of the results and presented a systematic method for controller design inspired by circuit theory [272,659].

3.7 The Emergence of Professional Control Societies

The American Society of Mechanical Engineers (ASME) created a division for instruments and regulators in 1943. The Instrument Society of America (ISA) was founded in 1946 by companies interested in industrial instruments. In 1954 it started publishing a journal that was later called InTech.

Much of the early work in automatic control was classified because of its military connection. After the war there was a need for more open interaction. The IRE (now IEEE) formed a Professional Group on Automatic Control in 1954. A journal that was to become the IEEE Transactions on Automatic Control was started in 1954 with George Axelby as the editor.

There were also international activities. In 1956 there were plans for no less than eight national meetings on automatic control in Europe. Wise leadership resulted in the formation of IFAC, which became the international forum for the field of control [142,443,354]. Many organizational issues were settled in a meeting in Heidelberg in 1956 with participants from 19 countries. An organizational structure was set up with triennial World Congresses, symposia, and workshops. Harold Chestnut from the General Electric Research Laboratory was elected as the first president and it was decided to hold the first World Congress in Moscow in 1960. This conference had a great impact because it provided an opportunity for researchers who had been working in isolation to meet with colleagues who had worked on similar problems.

4 The Golden Age

Any field would have been proud of the accomplishments that control had achieved by 1960, but more was to come. The space race and the use of digital computers to implement control systems triggered new developments. Servomechanism theory was not well suited for systems with many inputs and many outputs, performance had to be optimized, and computer control gave rise to new challenges. Modeling based on injection of sinusoidal signals was time consuming for process control. These challenges required new tools, and control scientists eagerly turned to mathematics for new ideas. Many subspecialties were explored, which required focused and deep dives into applied mathematics. In contrast to the previous era when theory lagged applications, in this era the theoretical investigation went ahead of practice. Many ideas were investigated in an open-loop manner, without the benefit of feedback from implementation. In some cases, computers were not yet powerful enough, or networking had yet to emerge, to permit testing of the ideas. Research and education expanded significantly and there was ample funding. The development was also heavily influenced by the advances in computing. In 1960 computers were slow, bulky, unreliable and expensive. In 2000 they were fast, small, reliable and cheap.

The appropriate theory was state-space based rather than frequency domain based [360]. The earlier work of Aleksandr Lyapunov in the USSR on stability of differential equations [445] was found to be very useful in addressing the problem of stability of systems described by differential equations [367]. In the USSR, the problem of optimal control of systems based on differential equations was investigated by Pontryagin and his coworkers [534,99], and by researchers at the Institute of Control Sciences. This was a generalization of the earlier work on calculus of variations [204,363]. Rudolf Kalman laid a broad foundation for linear systems [357,360,364,361,362]. The state-space theory found immediate application. Swerling [634], Kalman [368], and Kalman and Bucy [364] extended the filtering theory of Wiener so that it addressed transient behavior as well as time-varying systems. Richard Bellman developed dynamic programming for the optimization of both deterministic as well as stochastic systems, including a foundation for Bayesian adaptive control [58,59,55]. In the ensuing five decades, all these efforts were thoroughly investigated, and a grand edifice of "systems theory" was developed. The concepts of linear systems, optimal control, dynamic programming, partially observed systems, system identification, adaptive control, nonlinear estimation, robust control, nonlinear systems, distributed parameter systems, decentralized systems, discrete-event systems, etc., were all explored. What is very interesting is that many of the ideas were investigated at a time when the technology was not yet available for their implementation.
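The flavor of the Kalman-Bucy filtering theory is conveyed by its scalar special case. The sketch below uses illustrative noise variances of our own choosing; the predict-correct recursion and the Riccati update are the standard ones:

```python
import random
random.seed(0)

# Scalar system x[k+1] = a*x[k] + w, measurement y[k] = x[k] + v,
# with illustrative variances q (process) and r (measurement).
a, q, r = 1.0, 0.01, 1.0
x, xhat, p = 0.0, 0.0, 1.0    # true state, estimate, error variance

for _ in range(500):
    x = a * x + random.gauss(0.0, q ** 0.5)    # simulate the system
    y = x + random.gauss(0.0, r ** 0.5)        # noisy measurement
    xhat, p = a * xhat, a * a * p + q          # predict
    k_gain = p / (p + r)                       # Kalman gain
    xhat += k_gain * (y - xhat)                # correct
    p *= 1.0 - k_gain
print(p)   # converges to the steady-state Riccati solution
```

The variance recursion is independent of the data, so `p` settles at the positive root of the scalar algebraic Riccati equation p² + qp − qr = 0 (for a = 1).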

The aerospace industry has always been at the frontier of technology with extreme demands on safety and performance. During 1960–1980 process control was a strong driver for computer control, while around 1980 the automotive industry became a technology driver. Problems of manufacturing and queuing also drove the development of control with applications in operations research.

The golden age was a very prolific period; our treatment is by no means complete and we apologize for omissions. In particular we do not adequately cover mechatronics, robotics, distributed parameter control (PDEs), and Hamiltonian control, to mention just a few of many such examples.

4.1 The Space Race

Space travel and ballistic missiles posed many challenges. There were guidance, control and estimation problems. How to make effective use of moderate sized rockets to put a satellite in orbit? How to find efficient trajectories for interplanetary travel? How to minimize heating at reentry into the earth's atmosphere? How to control rockets during launch, coasting and reentry? How to determine position, velocity and orientation from accelerometers, gyroscopes and star sights?

The Soviet program was led by Sergei Korolev, with German engineers and scientists from Peenemunde as consultants. The first rocket, R-7 Semyorka, was based on the V2 with a new control system. Semyorka was used to launch Sputnik in 1957. Four years later Yuri Gagarin became the first astronaut. Wernher von Braun with several coworkers joined the Army Ballistic Missile Agency in Huntsville, Alabama. Sputnik caused much consternation in the US. A new agency, NASA, was created in 1958. In 1961 President Kennedy announced the goal of landing a man on the moon within 10 years. NASA received significant funding and quickly grew to 8000 persons. Much research and development was sub-contracted to industry and universities.

The new challenges in the aerospace industry could not be met by servomechanism theory, and many new avenues were explored. Large resources were focused, with highly diversified groups, to solve specific engineering problems. Control research benefited dramatically from a strong influx of ideas from applications, mathematics and computing.

Inertial navigation was an enabler for intercontinental missiles and space flight; it required significant development of gyroscopes, accelerometers, computers and guidance theory. The Instrumentation Laboratory at MIT led by Charles Stark Draper was a major player, working closely with industry and serving as a major contractor for several systems [448].

4.2 Computer Control

The emergence of the digital computer spawned speculations about its use for control; indeed the Whirlwind computer (see Section 3.3) was designed for that very purpose. Today it is hard to grasp the state of computers in the 1950s. We illustrate with the following quote from a 1958 paper of Kalman [357] where he described an attempt to implement an adaptive controller:

In practical applications, however, a general-purpose digital computer is an expensive, bulky, extremely complex, and somewhat awkward piece of equipment. ... For these reasons, a small special-purpose computer was constructed.

A perspective on the tremendous impact of computing is illustrated by the following quote of Herman Goldstine, Head of the Mathematics Department at IBM Research in Yorktown Heights, delivered at a staff meeting in 1962:

When things change by two orders of magnitude it is revolution, not evolution.

Combining Goldstine's statement with Moore's law, which posits a doubling of computing power roughly every eighteen months to two years, two orders of magnitude accumulate in about a decade. It follows that from 1971 onwards computers have enjoyed a revolution every 10 years. There has been a tremendous impact on how control systems are designed and implemented.

The poor capability and the poor reliability of general purpose computers were the reason why the Polaris ICBM used a digital differential analyzer (DDA), an emulation of an analog computer [471, p 98], [448]. The computer was developed at the Instrumentation Laboratory. It was followed by the Apollo Guidance Computer (AGC), which was implemented using integrated circuits with a conventional computer architecture [471, ch 6]. The first version of the computer, Block I, had a core memory of 1K 16-bit words and a read only memory of 24K words. The clock speed was 1 MHz. Versions of the AGC were later used to show the feasibility of fly-by-wire. By the time Block I flew in August 1966, computers had been controlling industrial processes for 7 years.

There were major developments in industrial process control. Even if the computers were slow, bulky and expensive, their capabilities matched the basic requirements of process control. Process companies saw potential for improved operation, and computer companies saw business opportunities. Control groups were formed in the process industries and feasibility studies were executed jointly with computer companies [286]. The first system in operation was a Ramo-Wooldridge RW-300 computer at the Port Arthur refinery in Texas. The early installations used supervisory control where the computer provided set-points to PID controllers that handled the basic control loops.
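The basic loops mentioned here were PID loops. A minimal positional PID in discrete time, closed around a hypothetical first-order process (the gains, sample time, and process are our illustration, not values from [286]):

```python
def make_pid(kp, ki, kd, dt):
    """Textbook positional PID: u = kp*e + integral + derivative."""
    state = {"i": 0.0, "e_prev": 0.0}
    def pid(setpoint, measurement):
        e = setpoint - measurement
        state["i"] += ki * e * dt                 # integral term
        d = kd * (e - state["e_prev"]) / dt       # derivative term
        state["e_prev"] = e
        return kp * e + state["i"] + d
    return pid

# Close the loop around a hypothetical first-order process
# dy/dt = (-y + u)/tau, with a unit set-point step.
dt, tau, y = 0.01, 1.0, 0.0
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=dt)
for _ in range(int(20.0 / dt)):    # 20 s of simulated time
    u = pid(1.0, y)
    y += dt * (-y + u) / tau
print(y)   # settles near the set-point 1.0
```

The integral term is what drives the steady-state error to zero, which is why supervisory set-point changes could simply be handed to such loops.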

When IBM entered the field they used a small transistorized scientific computer, the IBM 1620, as a base. An interesting aside is that Ted Hoff was inspired by the IBM 1620 when he developed the first microcomputer. The IBM 1720 was based on the 1620 [287]. It had variable word length, one hardware interrupt, and analog and digital inputs and outputs. An upgraded version was announced as the Process Control Computer System IBM 1710 in 1961. A typical configuration was a CPU with a core memory of 40K decimal digits (80 K-bytes), a hard disk with 2M decimal digits (4 M-bytes), 80 analog inputs, 22 pulse counter inputs, 100 digital outputs and 45 analog outputs. The computer had a clock rate of 100 kHz. Control was done in a supervisory mode. Typical installations performed supervisory control of many loops, production planning, quality control and production supervision [188]. In 1964 the IBM 1800 was announced. It was the first computer designed for real time process control applications. The machine was successful and several thousand machines were delivered [287]. Many computer companies entered the field.

When computers became more powerful and more reliable it was possible to let them control actuators directly. A systems architecture called Direct Digital Control (DDC) emerged in 1962 when Imperial Chemical Industries (ICI) in England used a Ferranti Argus computer to control a soda ash plant. Computer control was used for all control functions including the low level loops. There were sensors for 224 variables and the computer controlled 129 valves directly. Computer control permitted operator panels to be simplified, and the system could be reconfigured by programming instead of re-wiring.

Computerized process control developed rapidly as technology went through the phases of special purpose machines, mini-computers and microcomputers, and there were many actors. Computer companies started to withdraw from the field, which was taken over by instrumentation companies. It was attractive to distribute computing. In 1975 Honeywell and Yokogawa introduced distributed control systems (DCS), the TDC 2000 and the CENTUM. The systems permit direct digital control in functionally and spatially distributed units. The systems have standardized units for interaction with the process, with analog and digital signals and human-machine interfaces. Several manufacturers followed, and DCS became the standard for process control systems.

Use of computer control in the process industry expanded rapidly as distributed control systems (DCS) based on mini- and micro-computers appeared. In March 1962 there were 159 systems, increasing to 5000 by 1970 and a million systems by 1980. Computer control for the process industry became a major business with many diverse vendors; the companies ABB, Emerson, Honeywell, Siemens, Rockwell and Yokogawa emerged as the dominating suppliers.

Traditionally process control systems had two types of equipment: a control panel with controllers, recorders and displays, and a relay cabinet for start and stop sequences and safety interlocks. When minicomputers emerged the control panel was replaced by a DCS system. There was a similar development of the relay systems that were also used for automation in the manufacturing industry. General Motors challenged the electronics industry with requirements for a standard machine controller that could replace the relays. Several companies responded to the challenge. A system from Digital Equipment based on a mini-computer was rejected. A successful demonstration of a special purpose system was made by Bedford Associates and Modicon in 1969. The unit was rugged with conductive cooling and no fans. In 1971 Allen Bradley developed a device called the Programmable Logic Controller (PLC). The system architecture was based on round robin schedulers with different cycle rates. PLCs were originally programmed in a graphical language called ladder diagrams (LD), which emulated the ladder logic used to describe relay circuits. Later several different programming styles were standardized [322,425]: function block diagrams (FBD), sequential function charts (SFC) and structured text (ST). The PLCs developed rapidly and became a standard tool for automation.

Process control systems are typically widely distributed. Wires from sensors and actuators, typically 4–20 mA current loops, were brought to a central cabinet and distributed to the computer. These systems were expensive and difficult to maintain and upgrade; the systems had a lifetime of tens of years. When networks appeared in the 1970s it was natural to replace expensive wiring with networks, and several different systems emerged. National standards were developed in Germany (PROFIBUS [547]) and in France (FIP [690]), and in the US the manufacturers formed the consortium FOUNDATION Fieldbus [205], which absorbed FIP. There were divergent opinions driven by the commercial interests of the vendors [203]. After more than a decade the IEC in 2000 introduced a standard, IEC 61784, which included many of the different suppliers' features. Similar standards appeared in the building industry. Some vendors used Ethernet instead.

4.3 Automotive Applications

The automotive area is an important application area for control. It is a strong technology driver because of its scale; about 40 million cars were manufactured in the year 2000. By providing a large market, the automotive industry contributed strongly to the development of the micro-controller, a microprocessor with integrated analog and digital inputs and outputs. The automotive industry also stimulated the development of inexpensive emission sensors, accelerometers and gyroscopes. Together with the aerospace industry it was an early adopter of model based design, and provided a fertile ground for research in modeling, integrated process and control design, and implementation of control systems [379,275]. The impact of the automotive industry on control became stronger towards the turn of the century and even stronger in the 21st century.

Environmental concerns and recurring oil crises created demands for reduced emissions and reduced fuel consumption. In 1967 California established the California Air Resources Board, and requirements on automotive exhaust emissions became federal US law in 1970. Feedback emission control made it possible to satisfy the new laws. The control system used a new oxygen sensor (lambda sensor), a catalytic converter, and a feedback system which kept oxygen levels at the converter very close to the stoichiometric condition. General Motors was one of the early adopters; we quote John Cassidy, who was head of the control group at GM:

I recall a meeting with Ed Cole, an engineer by background, who was then president of GM. A workable closed loop system was possible using a fairly simple circuit based on an operational amplifier. Mr. Cole made the decision at that meeting that GM would take an advanced technical approach based on the newly emergent microprocessor technology. Others in the industry followed.

Systems went into production in the late 1970s. Once computer based feedback control was introduced in cars, its use expanded rapidly into many other functions. Anti-lock braking systems (ABS) were introduced to prevent the wheels from locking up. Electronic braking systems (EBS) and electronic stability control (ESC) controlled the brakes individually to improve stability and steering. These systems used accelerometers and gyroscopes as sensors. Automatic cruise control had been used earlier, but implementation by computer control was much more convenient. A consequence is that cruise control is now a standard feature. Adaptive cruise control, based on radar sensors, was introduced to maintain a constant distance to the car in front. The excellent experience with these systems inspired car manufacturers to introduce more sophisticated systems such as collision avoidance, parking assist and autonomous driving [133].

In the beginning, control systems were typically add-on features. Over time there has been a move towards integrated design of mechanics and control. Control of turbochargers permits smaller engines. Hybrid and electric vehicles are even more prominent examples of co-design of systems and control.

In 1986 Pravin Varaiya initiated an ambitious research project, the Program for Advanced Technology for Highways (PATH), at the University of California, Berkeley, in collaboration with Caltrans [523]. Platooning of cars that were linked electronically was explored. In 1997, the program demonstrated platooning of cars traveling at 60 mph separated by 21 feet on a San Diego freeway, and showed that capacity could be doubled. The PATH program still continues with much effort directed towards control of traffic flow. Platooning is particularly efficient for heavy duty vehicles [427,7].

4.4 Optimal Control

The early rockets did not have great thrust, and so a crucial problem was to launch the rocket most effectively. Attempts to solve problems of this type led to the development of optimal control theory. Major contributions were made by mathematicians and control engineers. There was a revitalization of the classical calculus of variations, which has its origins in the Brachistochrone problem posed by Bernoulli in 1696 [246]. Pontryagin and his coworkers in Moscow followed the tradition of Euler and Lagrange and developed the maximum principle [534]. They were awarded the 1962 Lenin Prize for Science and Technology. In the United States, Bellman instead followed the ideas of Hamilton and Jacobi and developed dynamic programming [59,56].

The case of linear systems with quadratic criteria was solved by Bellman in special cases [56], and a complete solution was provided by Kalman [358]. The books by Athans and Falb [38] and Bryson and Ho [123] presented the results in a form that was easily accessible to engineers; they also dealt with computational issues. A spectacular demonstration of the power of optimal control was given by Bryson [124]. He calculated the optimal trajectory for flying an aircraft from sea level to 20 km, and found that it could be done in 332 seconds. The optimal trajectory was flight tested and the plane reached 20 km in 330 seconds. The traditional quasi-steady analysis had predicted that the airplane could not get up to 20 km. Optimal control grew rapidly; many books were written and courses were introduced in control curricula [418,12,424].

Another computational approach to optimal control, model predictive control, which emerged from industry, is now widely used [148,447,551,131,559,563]. The paper [457] was selected for the first High Impact Paper Award at the IFAC World Congress in Milan in 2011.
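The receding-horizon idea behind model predictive control can be illustrated for an unconstrained linear system, where each step reduces to a finite-horizon least-squares problem whose first input is applied. The sketch below is a minimal illustration; the double-integrator model, horizon and weights are invented for the example and not taken from any of the cited works.

```python
import numpy as np

# Minimal receding-horizon (MPC) sketch for an unconstrained linear system
# x[k+1] = A x[k] + B u[k]; all numbers are illustrative.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator, unit sample time
B = np.array([[0.5], [1.0]])
N = 10                                    # prediction horizon

# Stack the predictions over the horizon: x = F x0 + G u
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

Q = np.kron(np.eye(N), np.diag([1.0, 0.1]))   # state weights over the horizon
R = 0.01 * np.eye(N)                          # input weights

x = np.array([5.0, 0.0])                      # initial state
for _ in range(30):
    # Solve the finite-horizon least-squares problem, apply the first input
    H = G.T @ Q @ G + R
    u = np.linalg.solve(H, -G.T @ Q @ F @ x)
    x = A @ x + B.flatten() * u[0]

print(np.round(x, 3))   # state regulated toward the origin
```

Industrial MPC adds what this sketch omits: input and state constraints, which turn each step into a quadratic program rather than a linear solve.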

4.5 Dynamic Programming

Multi-stage decision making was a problem of interest to the RAND Corporation, supported by the U.S. Air Force, in the 1950s. Richard Bellman was attracted to this problem. He initiated the field of dynamic programming and developed the principle of optimality [59]. It is of particular interest in the case of stochastic systems since it provides optimal policies in state-feedback form. Howard developed the policy iteration algorithm [314], a very efficient algorithm to determine optimal policies when the number of states and actions is finite. It has become very popular in operations research and industrial engineering; see Section 4.14. This was further sharpened by Blackwell [86], who comprehensively showed the differences arising in the infinite horizon case from positive and negative cost functions as well as the case of discounted cost functions [87–89,629]. The continuous time version of the dynamic programming equation is the Hamilton-Jacobi-Bellman equation for the optimal cost-to-go.
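Howard's policy iteration alternates exact policy evaluation (a linear solve) with greedy policy improvement. A minimal sketch on a toy discounted Markov decision process follows; the transition probabilities, costs and discount factor are invented for illustration.

```python
import numpy as np

# Policy iteration on a toy 2-state, 2-action discounted MDP (minimize cost).
# All numbers are illustrative.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s][a] = next-state distribution
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[2.0, 0.5],                   # c[s][a] = one-step cost
              [1.0, 3.0]])
beta = 0.9                                  # discount factor

policy = np.zeros(2, dtype=int)
while True:
    # Policy evaluation: solve (I - beta * P_pi) V = c_pi exactly
    P_pi = np.array([P[s, policy[s]] for s in range(2)])
    c_pi = np.array([c[s, policy[s]] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - beta * P_pi, c_pi)
    # Policy improvement: greedy one-step lookahead
    Q = c + beta * P @ V
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break                               # converged to an optimal policy
    policy = new_policy

print(policy, np.round(V, 2))
```

At termination the value function satisfies the Bellman optimality equation, which is why the algorithm converges in finitely many steps for finite state and action sets.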

Dynamic programming is, however, computationally complex; it suffers from the "curse of dimensionality." With the advent of fast computers, methods to approximate the cost-to-go function by nonlinear functions, e.g., neural networks, have received attention. In 1995, TD-Gammon, a temporal difference based learning scheme using a neural network trained by self-play [636], played at the level of a world class human player.

Dynamic programming has also become useful as a method to establish qualitative properties of the optimal solution. This has been found to be extremely useful in areas such as inventory control and production planning [662,75], as described in Section 4.14. The teaching of Markov Decision Processes, which is dynamic programming for finite state stochastic systems, is a standard part of the curriculum of operations research and industrial engineering departments.

Dynamic programming has found wide applicability. In the Internet, the distributed Bellman-Ford algorithm for determining the shortest path between two nodes on a graph is a key element of distance-vector based routing algorithms such as RIP [292,452] and IGRP [328]. With increasing interest in fast computational methods for machine learning and artificial intelligence, the ideas of dynamic programming are becoming widely used.
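The relaxation step at the heart of the Bellman-Ford algorithm, applied round-robin in the centralized case, can be stated in a few lines; the graph below is an invented example (the distributed, per-router version exchanges distance vectors instead of iterating over a global edge list).

```python
# Bellman-Ford shortest paths: the dynamic programming recursion behind
# distance-vector routing. The example graph is illustrative.
def bellman_ford(n, edges, source):
    """n nodes labeled 0..n-1, edges as (u, v, weight); returns distances."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):            # n-1 rounds of relaxation suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w # relax edge (u, v)
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))      # → [0, 3, 1, 4]
```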

4.6 Dynamic Games

Game theory was pioneered by John von Neumann in his attempt to develop a foundation for economic behavior [667]. He analyzed both static two person zero-sum games, where one agent's cost is the negative of that of the other agent, as well as static teams, where all the agents have the same cost criterion that they are seeking to minimize. For two person zero-sum "matrix games" where each agent has only a finite number of choices, he showed that there is a saddle-point in randomized strategies [668]. Subsequently, Nash [495] showed a similar result for static nonzero-sum games.

At the same time that Bellman was developing dynamic programming at RAND, Rufus Isaacs was studying dynamic continuous time two-person zero-sum games. The "Isaacs equation" is a two-sided version of the Hamilton-Jacobi equation [337,338]. This differential game theory was applied to military problems such as dog-fights and tank battles [694], [301], and later to robust control [48].

At around the same time, Shapley [600] and Everett [199] were also investigating discrete-time games. Zachrisson [694] provided a particularly cogent treatment of Markov games. Interest continued in the subsequent decades with the investigation of Nash equilibria, Pareto optimality, Stackelberg solutions and incentives in dynamic games [621,602,305,49].

4.7 Linear Systems

Linear approximations have been extremely useful for analysis and design of control systems. Differential equations were used in the early development, but there was a switch to frequency response when servomechanism theory was introduced. In the 1960s there was a return to differential equations because frequency response was not well suited for numerical computations, and it was inconvenient for systems with many inputs and many outputs. The return to differential equations became known as "the state space approach," because Newton's notion of state played a central role. It was also called "modern control theory" to separate it from servomechanism theory. The mathematical sophistication of the research, and consequently also of textbooks, increased. The books by Zadeh and Desoer [695], Brockett [114] and Kailath [355] were popular.

The reformulation of the models naturally raised two questions: can all states be reached by appropriate choices of the control signal, and can the state be reconstructed from measurements of the outputs? Kalman posed these questions and defined the notions of reachability and observability [360,366,362,253]. Kalman's results also provided clear insight into the relationship between the linear differential equations and the associated transfer functions, which cleared up a classical question on the effect of cancellation of poles and zeros in a transfer function [92].

There was also work on the structure of linear feedback systems in a classic setting. Horowitz [308] introduced a controller architecture, with two degrees of freedom, that combined feedback and feedforward so that requirements on command signal following could be separated from requirements on robustness and disturbance attenuation. The servomechanism problem was analyzed in the state-space framework in [165].

The theory of linear systems drew heavily on linear algebra, matrix theory and polynomial matrices. Results from numerical linear algebra could also be exploited for computations [415]. The size of textbooks grew to 700 pages and more when chapters on state-space theory were added to the classic material on servomechanisms [229,401,508,169].

In standard state space theory, the state space is a Euclidean space and time is a real variable. Extensions to systems over rings were also established [365]. A uniform framework for linear systems, finite state machines and automata can be established. The introductory signals and systems book by Lee and Varaiya [420] is written in this spirit. A theory of discrete event systems was initiated in [556] to address control theoretic notions of controllability, observability, aggregation, decentralized and hierarchical control for automata and formal language models [557,587,98]. Lately there has been significant interest in hybrid systems [117,451,257], which have a combination of continuous and discrete behavior.

Singular perturbation theory [388] and descriptor systems [177] were introduced to deal with systems having widely differing time scales. Differential algebraic systems were used to model large electrical circuits [244]. Inspired by circuit theory, Willems [533] introduced system models called behavioral systems, which deemphasize the role of inputs and outputs and are also described as differential algebraic systems. Differential algebra is a natural framework for modeling physical systems, and it is the mathematical framework behind the modeling language Modelica [643]. There is an extensive body of literature on infinite dimensional dynamical systems [429,156,69,47]; control of fluid flow is one application area [1].

The field of linear systems has been declared many times to be "mature" from a research point of view, but interest has repeatedly been renewed by new viewpoints and the introduction of new theories.

4.8 State Feedback, Kalman Filtering and LQG

When state-space theory is used for design, it is natural to use state feedback because the state contains all relevant information about the past. A linear controller can then be represented by a matrix which maps state variables to control variables. Kalman formulated the design problem for state models as an optimization problem where the criterion to be minimized is a quadratic form in states and control variables, the so-called LQ problem. He solved the problem elegantly and showed that the optimal feedback is given by a solution to a Riccati equation. To quote from [358]:

One may separate the problem of physical realization into two stages:
(A) Computation of the "best approximation" x̂(t1) of the state x(t1) from knowledge of (the output) y(s) for s ≤ t1.
(B) Computing (the control) u(t1) given x̂(t1).
... Somewhat surprisingly, the theory of Problem (A), which includes as a special case Wiener's theory of filtering and prediction of time series, turns out to be analogous to the theory of Problem (B) developed in this paper. This assertion follows from the duality theorem discovered by the author.

Kalman's solution also applies to linear time-varying systems. The corresponding problem for difference equations is very similar, and led to a reformulation of the theory of sampled systems. The condition for a solution is that the system is reachable. A remarkable property of the solution is that it gives a closed loop system with infinite gain margin and a phase margin of 60◦. Glad extended these results on robustness of the LQ controller to nonlinear systems [256]; this was further generalized in [590].
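In standard modern notation (a reconstruction for the reader's convenience, not a formula quoted from the sources above), the continuous-time infinite-horizon LQ problem and its Riccati-equation solution read:

```latex
\min_{u}\int_0^{\infty}\bigl(x^{T}Qx + u^{T}Ru\bigr)\,dt,
\qquad \dot{x} = Ax + Bu,
\qquad u = -Kx,\quad K = R^{-1}B^{T}S,
\qquad A^{T}S + SA - SBR^{-1}B^{T}S + Q = 0,
```

where S is the stabilizing solution of the algebraic Riccati equation on the right. The gain and phase margins quoted above are properties of the loop transfer function of this state-feedback law.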

Kalman also showed that the optimal filter for a linear system with Gaussian noise is a process model driven by the measured observations, with the gain specified by a Riccati equation. The condition for solvability is that the system is observable. The optimality of the controller consisting of state feedback and a Kalman filter, which is known as the LQG controller, was first proven in a special case by the economist Simon [603]. There are some subtleties about the separation that have only recently been sorted out [247].
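The predictor-corrector structure of the Kalman filter, with the gain given by a (here scalar) Riccati recursion, can be sketched in a few lines; the model, the noise variances and the constant level being tracked are illustrative assumptions, not taken from the sources above.

```python
import numpy as np

# Scalar Kalman filter tracking a constant level in noise: a minimal sketch
# of the predictor-corrector structure. All numbers are illustrative.
rng = np.random.default_rng(0)
a, cmeas = 1.0, 1.0          # model: x[k+1] = a x[k] + w,  y[k] = cmeas x[k] + v
q, r = 1e-4, 1.0             # process and measurement noise variances

x_true = 2.0                 # constant level being tracked
xhat, p = 0.0, 10.0          # initial estimate and its variance
for _ in range(200):
    y = cmeas * x_true + rng.normal(0.0, np.sqrt(r))
    # Predict through the model
    xhat, p = a * xhat, a * p * a + q
    # Correct: gain from the (scalar) Riccati recursion
    k = p * cmeas / (cmeas * p * cmeas + r)
    xhat = xhat + k * (y - cmeas * xhat)
    p = (1 - k * cmeas) * p

print(round(xhat, 2), round(p, 4))   # estimate near 2, small variance
```

In the vector case xhat, p, k become the state estimate, covariance matrix and gain matrix, but the predict-correct cycle is unchanged.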

The controllers obtained by servomechanism theory can be viewed as compensators that shape the frequency response of the loop transfer function. The LQG controllers have a very different interpretation. They have two elements, a state feedback and a Kalman filter or an observer. The dynamics of the controller comes from the observer, which is a dynamic model of the process and its environment. This idea is captured by the internal model principle introduced by Francis and Wonham [228]. A reference signal generator can be added to the LQG controller to provide command signal following using an architecture with two degrees of freedom [558, Section 7.5]. The LQG controller is very well suited for systems with many inputs and many outputs. The computations required for design are based on solid algorithms from numerical linear algebra. The LQG controller does not automatically provide integral action, illustrating the fact that it is important to capture all aspects when formulating an optimization problem. Integral action can be provided by augmenting the process model with a model of the disturbances.

The LQG paradigm has proved to be a useful tool for iteratively designing linear control systems due to the explicit form of the solution, as well as the well developed asymptotic theory for the infinite horizon case. It is a standard tool for the design of control systems [424,12].

The important issue of what information is available to a decision maker in a system was studied by Witsenhausen [686]. He showed that even in a linear Gaussian system with a quadratic cost, if there is no memory of what observation was made in a previous stage, then a linear control law is not optimal. He also exhibited the several complexities that arise depending on the information available to agents in a distributed system at the time that they have to take a decision [688,687]. Information structures that lead to tractable solutions were further investigated in [304].

4.9 Nonlinear Systems

Linear theory has, somewhat surprisingly, been extremely useful for analysis and synthesis of control systems even though most real systems are nonlinear. The necessity of considering nonlinear effects was part of classical control theory; to quote from Truxal [645, p. viii]:

Fifth, the designer must be acquainted with the basic techniques available for considering nonlinear systems. He must be able to analyze the effects of unwanted nonlinearities in the system and to synthesize nonlinearities into the system to improve dynamic performance.

Typical nonlinearities he mentions are friction, backlash, saturation, and hysteresis [514,264,39].

Approximate methods for analyzing nonlinearities were developed in nonlinear dynamics [397,15,474]. One method to explore limit cycles, called harmonic balance, consisted of investigating the propagation of the first harmonic, similar to Nyquist's analysis of linear systems. A version of this method became known as the describing function method [654,386]. On-off control was popular in the early days of control because it was possible to obtain high gain with simple devices; significant theory was also developed [647,648,217,650].

Lyapunov stability theory was used extensively in the USSR [453]. Much research was stimulated in the West when it was popularized by Kalman and Bertram [367], who had picked up the ideas from Lefschetz at Princeton. Useful extensions were provided by Krasovskii and LaSalle [414,392]. Willems showed that the notions of energy and dissipation are closely related to Lyapunov theory and developed a theory for dissipative systems [685]. Lyapunov theory is now commonly used both for analysis and design [230]. The notions of control Lyapunov functions and input-to-state stability introduced by Sontag [618] have proven useful. Khalil's book [375] is a popular standard text.

The problem of the stability of a system obtained by feedback around a memory-less nonlinearity and a linear feedback system was proposed by Lurie and Postnikov [444]. Aizerman conjectured that the closed loop system would be stable if the nonlinearity was sector bounded and if the linear system was stable for any linear gain in the sector [4]. The conjecture was false, but it stimulated much creative research. Originally the problem was approached by Lyapunov theory, but a major breakthrough was made by Popov, who provided a stability condition in terms of a restriction of the Nyquist plot of the linear part [535,536]. Yakubovich [692] showed that Popov's results could be expressed and extended in terms of linear matrix inequalities (LMIs).

Yet another approach to stability was presented by Sandberg [577] and Zames [697] at the National Electronics Conference in 1964. The presentations were later followed by detailed publications [578,579,698,699]. Zames focused on input-output properties and avoided the notion of state space. He had picked up functional analysis from Singer at MIT, and he introduced the small gain theorem and the passivity theorem. These concepts generalize the notions of gain and phase for linear systems. The ideas garnered much following and quickly became part of the core of control theory [167,663].

In the 1970s there was also an influx of ideas from differential geometry [101], leading to the development of geometric control theory. Brockett, Jurdjevic, Hermann, Krener, Lobry, and Sussmann were key researchers who drove the research agenda. The notions of controllability and observability of nonlinear systems were investigated for systems which are affine in the control [299,288,115,633,393,437,438,116,298]; the criteria were based on Lie algebra. Feedback linearization was introduced as a technique for design of nonlinear systems [319]. Fliess used differential algebra to define the notion of differential flatness, which became a powerful method to design feedforward and tracking [212–214]. Computer algebra was used to compute Lie brackets. Isidori and Byrnes introduced the notion of zero dynamics, an extension of the zeros of a linear system [340]. There are many interesting applications of geometric control theory, e.g., attitude control of spacecraft [601], aircraft flying at high angles of attack [625, Section 7.4], backing of trailers [211], walking robots [678], and quantum systems [317,377]. Geometric control theory is part of the core of nonlinear control theory, with several books [339,503].

4.10 Stochastic Systems

Dynamic programming can be used even when the state of the system is only noisily observed, by considering the conditional probability distribution of the state as the "hyperstate" [29]. The optimality of separated policies was thoroughly investigated by Striebel [631].

For linear Gaussian systems, by the separation theorem, the hyperstate is finite dimensional since the conditional probability distribution is Gaussian and thus described completely by the conditional mean and conditional covariance. As described in Section 4.8, when the cost function is further taken to be a quadratic function of the state and control, one obtains the separation theorem with certainty equivalence [603,638,349,540]. The cost function consisting of the expected value of the exponential of a quadratic cost can also be solved explicitly since it is multiplicatively decomposable [342]. It can be used to model risk-averting or risk-seeking behavior, and also has connections to differential games and robust control.

Bellman also expounded on the fact that dynamic programming could be used to develop adaptive controllers for systems whose parameters are unknown, by viewing the conditional distribution of the unknown parameters as the hyperstate [55]. In this case control serves a dual purpose, as a tool for exciting the system and determining its characteristics, and also as a tool to move the state to a desirable region. This was dubbed "dual control" by Feldbaum [202].

Conceptually it is very attractive to formulate and solve the adaptive control problem using dynamic programming. There are, however, significant computational problems because of the large state space – the curse of dimensionality. For that reason an alternative non-Bayesian certainty equivalence approach was pursued, resulting in the self-tuning approach; see Section 4.12. An early Bayesian attempt was to approximate the loss function locally by a quadratic function [458]; another approach is to estimate the cost-to-go using Monte Carlo methods [74].

One special adaptive control problem, which captures the quintessential tradeoff implied by the dual roles of control, is the multi-armed bandit problem. In a more useful incarnation it models the problem of testing drugs whose efficacies are unknown. In the bandit version, it features several slot machines with unknown probabilities of rewards, with the probabilities themselves modeled as random variables with a prior probability distribution. A compulsive gambler has to play one arm each day, with the goal of maximizing the expected total reward obtained by playing the arms. This problem has the special structure that nothing is learned about an arm if it is not played on a given day; hence its hyperstate remains unchanged. For the case of discounted rewards, this celebrated problem was shown to have a very appealing structure by Gittins and Jones [255]. Every arm has an index, defined by its hyperstate, and the optimal policy is to just play the arm with the highest index. The index of an arm is the maximal expected discounted reward up to a stopping time divided by the discounted time.

With the advent of powerful computation, the problem of "partially observed Markov decision processes" (POMDPs) [606] has acquired great attention as a methodology for modeling and solving problems in machine learning and artificial intelligence [531,619,501,641,493,245,596].

Beginning in the late 1950s, there was great interest in developing optimal filters for nonlinear systems. In the discrete-time case, obtaining the conditional distribution of the state of the system given past noisy measurements amounts simply to an application of Bayes rule. By allowing for unnormalized distributions, where the denominator in Bayes rule is ignored, one can obtain linear recursive equations for the conditional distribution [400]. In the continuous time case featuring nonlinear stochastic differential equations, the optimal filtering equations are also nonlinear [628,404,405,235]. However, by propagating the unnormalized probability distribution, the resulting equations are linear [178,181,696,483]. The central difficulty is that except in special cases [63] the filters are generally not finite-dimensional. As in the case of dynamic programming, with the availability of increasingly fast computers, one can judiciously exploit the capability to perform simulations to approximate unknown distributions; an example in this vein is particle filtering [282,261], which is useful for nonlinear non-Gaussian systems.
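A bootstrap particle filter in this spirit propagates samples through the dynamics, weights them by the measurement likelihood (Bayes rule with the normalizing denominator absorbed into the weight normalization), and resamples. The scalar nonlinear model below is an invented illustration, not taken from the cited works.

```python
import numpy as np

# Bootstrap particle filter sketch: nonlinear scalar dynamics with a linear
# noisy measurement. Model and tuning are illustrative.
rng = np.random.default_rng(1)
N = 2000                                    # number of particles
qstd, rstd = 1.0, 1.0                       # noise standard deviations

def f(x, k):
    # illustrative nonlinear dynamics
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

x_true = 0.1
particles = rng.normal(0.0, 2.0, N)
for k in range(50):
    x_true = f(x_true, k) + rng.normal(0, qstd)
    y = x_true + rng.normal(0, rstd)        # noisy linear measurement
    # 1) Propagate particles through the dynamics (prediction)
    particles = f(particles, k) + rng.normal(0, qstd, N)
    # 2) Weight by the measurement likelihood (unnormalized Bayes rule)
    w = np.exp(-0.5 * ((y - particles) / rstd) ** 2)
    w /= w.sum()
    # 3) Resample to combat weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

print(round(particles.mean(), 2), round(x_true, 2))
```

The particle cloud is a sampled approximation of the conditional distribution; its mean serves as the state estimate, and no finite-dimensional filter is assumed.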

Early in the 1960s there was already interest in developing stochastic control theory for continuous time systems [215,209]. There was a great effort in the 1960s and 1970s in developing a theory of optimal control of continuous nonlinear stochastic systems described by stochastic differential equations for partially observed systems. This work has found application principally in mathematical finance [469], as noted in [478]. There were deep mathematical challenges, and several control researchers delved into the field and conducted frontline mathematical research into stochastic differential equations and martingale theory. Issues related to the nature of solutions of stochastic differential equations, existence of optimal solutions, representation of the optimal solution in the case of partial (i.e., noisy) observations, etc., were investigated [216,62,179,180,146,164,208]. A good account is available in [102]. The problem of existence of solutions to the Hamilton-Jacobi-Bellman equations was addressed by the viscosity approach [153,431,430,432].

Motivated originally by problems in biology, a filtering theory for counting processes was developed by Snyder [609]. The problem of interest was to estimate the underlying intensity of a process given measurement of "ticks." This spurred much mathematical work in stochastic processes [109,661,97]. It has found application in queuing systems [108]. As one example, it has been used to analyze flows of customers in queuing networks [671]. The stochastic control of point processes was also investigated [96].

4.11 Identification

One factor that contributed to the success of servomechanism theory was that the transfer function of a process could be obtained empirically by frequency response. Frequency response was, however, not suitable for process control because the processes were typically slow and it took a very long time to perform the experiments. It was also desirable to obtain models that additionally captured noise characteristics, for example to apply LQG controllers.

For computer control it was natural to use discrete-time models. Much inspiration came from time series analysis, where Box and Jenkins [104] had developed methods of estimating parameters in time series. Three popular models are auto-regressive (AR), moving average (MA) and auto-regressive moving average (ARMA) models. These models are difference equations driven by discrete-time white noise. The models do not have inputs, and for control applications it was necessary to extend the models by adding controlled inputs. The presence of inputs also raised interesting problems of finding input signals that provide a sufficiently rich excitation. By combining ideas from probability theory, statistics and time series analysis, it was possible to obtain powerful methods with good statistical properties. An early application was to determine paper machine dynamics and to design control laws that minimized fluctuations in quality variables [34,31]. Research in this area, which became known as system identification, started in the 1960s. Identification brings control engineers, probabilists, statisticians and econometricians together. Typical issues are not only statistical issues such as consistency and efficiency but also control-inspired problems such as input selection and experiments in open and closed loop [251]. Several books were written as the research progressed [400,506,435,611]. The Matlab toolbox developed by Ljung has led to system identification techniques being widely used in industry and academia. The IFAC symposium series on System Identification, which started in Prague in 1967, is still continuing.
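
The core least-squares idea can be sketched for an ARX model, the simplest input-output extension of an AR model. The plant, input and noise level below are illustrative assumptions used only to exercise the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def arx_least_squares(u, y, na, nb):
    """Least-squares estimate of the ARX model
    y(t) + a1*y(t-1) + ... + a_na*y(t-na) = b1*u(t-1) + ... + b_nb*u(t-nb) + e(t)."""
    n = max(na, nb)
    # regressor matrix: past outputs (negated) and past inputs
    cols = [-y[n - i: len(y) - i] for i in range(1, na + 1)] \
         + [u[n - j: len(u) - j] for j in range(1, nb + 1)]
    phi = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta                      # [a1, ..., a_na, b1, ..., b_nb]

# simulate a second-order plant excited by a random binary input
N = 500
u = np.sign(rng.standard_normal(N))
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + u[t-1] + 0.5 * u[t-2] \
           + 0.1 * rng.standard_normal()

theta_hat = arx_least_squares(u, y, na=2, nb=2)
# theta_hat is close to the true parameters [-1.5, 0.7, 1.0, 0.5]
```

The random binary input illustrates the excitation requirement mentioned above: without a sufficiently rich input the regressor matrix becomes ill-conditioned and the estimates are poor.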

4.12 Adaptive Control

Adaptive control emerged in the 1950s in flight and process control [268,226,357]. Supersonic flight and ballistic missiles posed new challenges because the dynamic behavior of air vehicles changes drastically with altitude and Mach number. Autopilots based on constant-gain, linear feedback can be designed to work well in one flight condition but not for the whole flight envelope. Many adaptive flight control systems were proposed and flight tested [268,475]. Interest in adaptive flight control diminished toward the end of the 1960s. One reason was the crash of a rocket-powered X-15 with an adaptive controller [182]. Another was the success of gain scheduling based on air-data sensors [622]. There was also early interest in adaptation in process control [226,357].

Research in the 1950s and early 1960s contributed to a conceptual understanding of Bayesian adaptive control, as described in Section 4.10. However, as noted there, due to its complexity, an alternative non-Bayesian certainty equivalence approach was pursued, resulting in the self-tuning approach.

Draper and Li investigated on-line optimization of aircraft engines and developed a self-optimizing controller that would drive the system towards optimal operation. The system was successfully flight tested [85,174] and initiated the field of extremal control. Tsypkin showed that schemes for learning and adaptation could be captured in a common framework [649].

Interest in adaptive control resurged in the 1970s. There was significant research on model reference adaptive control (MRAC) [679]. MRAC automatically adjusts the parameters of a controller so that the response to command signals is close to that given by a reference model. The original MRAC, which was based on a gradient scheme called the MIT rule, was improved by applying Lyapunov theory to derive adaptation laws with guaranteed stability [127,521,411]. Variations of the algorithm were introduced using the augmented error [479,482]. The MRAC was extended to nonlinear systems using back-stepping [394]; Lyapunov stability and passivity were essential ingredients in developing the control laws.
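
The MIT rule can be illustrated with the classic example of adapting a feedforward gain. Everything below (the first-order process, the adaptation gain, the square-wave command, the Euler discretization) is an illustrative assumption, a sketch rather than any flight-tested scheme.

```python
import numpy as np

# Adaptation of a feedforward gain by the MIT rule (gradient scheme):
#   dtheta/dt = -gamma * e * ym,   e = y - ym.
# Process G(s) = k/(s+1) with "unknown" gain k; the reference model uses k0.
dt, gamma = 0.01, 1.0
k, k0 = 2.0, 1.0
y = ym = theta = 0.0
for step in range(60000):                   # simulate 600 time units
    uc = np.sign(np.sin(0.1 * step * dt))   # square-wave command signal
    u = theta * uc                          # adjustable feedforward controller
    y += dt * (-y + k * u)                  # process state (Euler step)
    ym += dt * (-ym + k0 * uc)              # reference model state
    e = y - ym                              # model-following error
    theta += dt * (-gamma * e * ym)         # MIT rule update
# theta converges towards k0 / k = 0.5, at which y tracks ym
```

The gradient nature of the rule is visible in the update: the error is correlated with the sensitivity of the output with respect to the adjustable parameter, here proportional to the model output ym.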

One motivation for using adaptation for process control is that system identification experiments on real plants are tedious and time consuming, besides also requiring skilled personnel. It was therefore attractive to explore if an adaptive controller could be used instead. The self-tuning regulator (STR) estimates the process parameters and finds controller parameters that minimize a criterion, for example the variance of the process output. Steady-state regulation is a typical problem which can be modeled by an ARMAX process. Estimation of parameters in such a model is a complex nonlinear problem. A surprising result in [36] showed that a controller based on least-squares estimation and minimum-variance control could converge to the desired controller. Industrial use was demonstrated [36,356,411,64] and a number of applications ensued: autopilots for ship steering, rolling mills, continuous casting, distillation columns, chemical reactors and ore crushers [64,259,37,25,206]. Many variations and generalizations evolved to consider different control objectives for noisy systems.
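
The certainty-equivalence idea behind the self-tuning regulator can be sketched for a first-order plant: estimate the parameters recursively, then control as if the estimates were true. The plant, dither signal and safeguards below are illustrative assumptions and not the algorithm of the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Unknown" plant: y(t+1) = a*y(t) + b*u(t) + e(t+1)
a, b = 0.9, 1.0
theta = np.array([0.0, 0.5])          # parameter estimates [a_hat, b_hat]
P = 100.0 * np.eye(2)                 # recursive least-squares covariance
y = u = 0.0
for t in range(2000):
    y_next = a * y + b * u + 0.1 * rng.standard_normal()
    phi = np.array([y, u])            # regressor from the previous step
    # recursive least-squares update of the estimates
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y_next - phi @ theta)
    P = P - np.outer(k, phi @ P)
    y = y_next
    # certainty-equivalence minimum-variance control, with a small dither
    # for excitation and crude safeguards against division by ~0 and blow-up
    u = -theta[0] * y / max(theta[1], 0.1) + 0.1 * rng.standard_normal()
    u = float(np.clip(u, -10.0, 10.0))
```

The estimator and the controller form a nonlinear stochastic feedback loop, which is exactly what made the convergence analysis discussed below so delicate.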

The self-tuning regulator stimulated a great deal of theoretical work. The problem was complicated by both the nonlinearity and the stochastic nature of the overall system. Similar issues had arisen in the analysis of recursive algorithms such as stochastic approximation and recursive identification of ARMAX systems; the prior work paved the way for the analysis of stochastic adaptive control systems [436,613,407,409,140,406]. Proofs of stability, convergence, self-optimality and self-tuning took several years to come [258,260,54,273]. The similarities between the MRAC and the STR were also studied [187].

Early on, Egardt [187] had shown that even small bounded disturbances can cause adaptive controllers to lose stability. Ioannou and Kokotovic analyzed the effects of unmodeled high-frequency dynamics and bounded disturbances on adaptive control schemes [335]. An investigation by Rohrs of robustness to unmodeled dynamics [566] stimulated much research that provided insight into modified algorithms. Stability proofs required bounded estimates. Normalization of signals [542,541] was proved to guarantee stability. Stability could also be achieved by projection alone [693,491].

Adaptive control was extended to feedback linearizable nonlinear systems [369]. It was also extended to include nonlinearities of the type commonly encountered in applications, such as dead-zone, backlash and hysteresis [635]. Adaptive control design methodologies such as backstepping became an integral part of the design of nonlinear control systems [395]. The increased knowledge in adaptive control that came from all this work is well documented in books [187,259,11,400,37,494,580,334].

Variations of adaptive algorithms are still appearing. The L1 adaptive controller is one example; it inherits features of both the STR and the MRAC. The model-free controller by Fliess [210] is another example, which is related to the self-tuning regulator.

Products use MRAC and STR to tune controllers on demand, to generate gain schedules and for continuous adaptation. There are systems that have been in operation for more than 30 years, for example for ship steering and rolling mills [206,270]. Automatic tuning of PID controllers is widely used; virtually all new single-loop controllers have some form of automatic tuning. Automatic tuning is also used to build gain schedules semi-automatically [35].

There are strong similarities between adaptive filtering and adaptive control. Gabor worked on adaptive filtering [236] and Widrow developed an analog neural network (Adaline) for adaptive control [680,681]. The adaptation mechanisms were inspired by Hebbian learning in biological systems [291]. Today noise cancellation and adaptive equalization are widespread implementations of adaptation in consumer electronics products.

There is a renewed interest in adaptive control in the aerospace industry, both for aircraft and missiles. Good results in flight tests have been reported both using MRAC [416] and the L1 adaptive controller [313]. In the future, adaptive control may be an important component of emerging autonomous systems.

4.13 Robust Control

Horowitz followed Bode in exploring feedback in the context of linear systems in the book [308]. He investigated the cost of feedback and introduced a controller architecture having two degrees of freedom.

Bode had designed feedback systems that were robust to variations in the amplifier gain. He had shown that the open-loop gain had to be much larger than the closed-loop gain in order to obtain a robust amplifier. Robustness is thus obtained at the cost of a gain reduction. Horowitz, who was Bode's intellectual grandson via Guillemin, extended this observation and introduced the notion of the cost of feedback in general feedback systems [308, pp. 280-284]. Horowitz also generalized Bode's robust design technique to more general process variations. The method is called QFT (Quantitative Feedback Theory) [311,309]. It is based on graphical constructs using Nyquist or Nichols plots.

There was a significant development of robust control in the state space framework, which had the advantage of leading to techniques that are well suited to numerical computations. The LQ controller, with state feedback, has amazing robustness properties, as noted in Section 4.8. In the 1970s much research was devoted to exploring if the robustness could be extended to the LQG controller, which employs output feedback. The only condition required for solvability is that the system is reachable and observable. Researchers schooled in servomechanism theory did not understand why the classical limitations imposed by non-minimum phase dynamics did not show up [568,310]. Much work was done at the MIT Electronic Systems Laboratory and at Honeywell. An insightful summary is given by Safonov [571,572]. A key observation was that robustness measures should be based on the singular values of the loop transfer function and not on the eigenvalues. The main result is that the LQG controller is not robust. A simple counterexample is given in the paper by Doyle entitled "Guaranteed Margins for LQG Regulators" [170], with the somewhat provocative abstract "There are none." Several attempts were made to impose constraints on LQG control design, but the real solution would come later from a different direction.
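
The singular-value robustness measure is straightforward to compute: the smallest singular value of I + L(jw) over frequency indicates how close the return difference of a multivariable loop comes to singularity. The 2x2 loop transfer function below is an assumed example for illustration only.

```python
import numpy as np

# An assumed 2x2 loop transfer function L(s), stable and illustrative
def L(s):
    return np.array([[1.0 / (s + 1), 0.5 / (s + 2)],
                     [0.2 / (s + 3), 1.0 / (s + 1)]])

freqs = np.logspace(-2, 2, 200)
# smallest singular value of I + L(jw) at each frequency: a multivariable
# generalization of the distance from the Nyquist curve to the -1 point
margins = [np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False)[-1]
           for w in freqs]
stability_margin = min(margins)    # a small value indicates poor robustness
```

Eigenvalues of the loop transfer matrix can look benign while a singular value reveals a nearly singular return difference, which is why the singular-value measure became the standard.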

In 1981 George Zames published a paper [700] which laid the foundation for H∞ control. Following Bode's ideas he considered input-output descriptions and designed controllers that minimized the H∞-norm of the sensitivity function for systems with right half-plane zeros. Zames used functional analysis and interpolation theory to solve the problem. Zames's work has a strong following, with many extensions and generalizations. The so-called four-block problem, consisting of addressing all four sensitivity functions, became a standard formulation. The paper [172] by Doyle, Glover, Khargonekar and Francis was a major advance because it showed that the H∞ problem could be solved by state space methods, and that feedback and observer gains were given by Riccati equations. The controller obtained has the same architecture as the LQG controller but with different filter and feedback gains. McFarlane and Glover generalized classic loop shaping to multivariable systems and showed the relations to H∞ control [462]. H∞ control developed into a standard design method with books [171,266,604,702,701,383] and Matlab toolboxes.

A side effect of H∞ control was a renewed interest in fundamental limitations [604,591]. It was shown that a system with right half-plane zeros and time delays could not be controlled robustly if the bandwidth is too high, that robust control of a system with right half-plane poles requires high bandwidth, and that systems with right half-plane poles and zeros could not be controlled robustly if the poles and zeros were too close. A striking example of the difficulties is given in [373]; a practical illustration is a bicycle with rear-wheel steering. The paper [373] illustrates the need and importance of carefully investigating to what extent the end result of any design is fragile.

Zames also investigated the problem of finding norms that are suitable for comparing systems. The problem is straightforward for stable systems; simply compare the outputs for a given input. For unstable systems he introduced the gap metric [189], which only admits inputs that do not result in unbounded outputs. Vidyasagar provided an alternative graph metric [664]. Georgiou and Smith showed that robustness optimization in the gap metric is equivalent to robustness optimization for normalized coprime factor perturbations [248]; they also obtained results for nonlinear systems [249]. Vinnicombe introduced the ν-gap metric that was adapted to robust stabilization [666].

Doyle and co-workers introduced the structured singular value (mu-analysis) to demonstrate that conservatism of gain arguments can be drastically reduced by optimization of frequency weights [173]. They used this effectively for analysis of systems with both parametric uncertainty and uncertain linear dynamics. The work was a pioneering application of convex optimization in control. It was extended to nonlinear components in the work on Integral Quadratic Constraints by Megretski and Rantzer [467]. This generalized the methods of Zames, Yakubovich and Willems from the 1960s and 70s and integrated them with mu-analysis and semi-definite programming.

Linear matrix inequalities became a useful design tool when efficient computational procedures based on interior point methods were developed [498]. Many design problems can be captured by convex optimization and LMIs, as was shown by Boyd and others [105,237,519,583,468,370,129].
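
As a minimal computational example in this spirit, the Lyapunov inequality A'P + PA < 0 can be checked by solving the corresponding Lyapunov equation with plain linear algebra rather than an interior-point LMI solver. The matrix A below is an illustrative assumption.

```python
import numpy as np

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization:
#   vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P).
# A positive definite solution P certifies stability of dx/dt = A x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # stable: eigenvalues -1 and -2
Q = np.eye(2)
n = A.shape[0]
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2                  # symmetrize against round-off
```

For a fixed A the feasibility question is a linear equation; what the LMI machinery adds is the ability to search over controller parameters, scalings and multipliers simultaneously under semidefinite constraints.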

Basar and Bernhard [48] formulated the problem of robust control as a game problem. The task of the controller is to deliver good performance even against an opponent who tries to perturb the system in the worst possible way. They showed that in the case of linear systems the optimal controller is the H∞ controller.

4.14 Control in Operations Research: Inventory, Manufacturing and Queuing Systems

Control is widely used in dynamic system problems that arise in operations research. Many applications can be modeled as problems involving the control of Markov chains over an infinite horizon with a discounted cost or long term average cost criterion, called Markov Decision Processes. One way to solve them is by the "value iteration method" that consists of determining the infinite horizon optimal cost as the limit of finite horizon costs [57].
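
Value iteration can be sketched on a toy discounted problem; the 2-state, 2-action data below are an illustrative assumption.

```python
import numpy as np

# Value iteration for a discounted Markov Decision Process (MDP):
#   V_{k+1}(x) = min_u [ c(x, u) + beta * sum_y P(y | x, u) * V_k(y) ]
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[x, u, :] = next-state distribution
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],                 # c[x, u] = one-step cost
              [0.5, 0.0]])
beta = 0.9                                # discount factor

V = np.zeros(2)
for _ in range(1000):
    Q = c + beta * P @ V                  # state-action costs Q[x, u]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
policy = Q.argmin(axis=1)                 # greedy policy for the limit
```

Since the iteration is a contraction with modulus beta, the finite-horizon costs converge geometrically to the infinite-horizon optimal cost.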

In the late 1950s, when confronted with the problem of optimizing which customers should be mailed Sears catalogs based on profits from previous purchase history, Howard [314] developed the policy iteration method that converges in finite time for finite state and control sets [315]:

This all took place in the days when computers still had vacuum tubes. And so the runs were fairly time-consuming ... The optimum policy balanced ... return with the effect on future state transitions. The net result was a predicted few percent increase in the profitability of the catalog operation, which, however, amounted to several million dollars per year.
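
Howard's method alternates exact policy evaluation with greedy improvement; a sketch on an assumed toy Markov decision process:

```python
import numpy as np

# Policy iteration: evaluate the current policy exactly by solving a linear
# system, then improve it greedily. For finite state and control sets it
# terminates after finitely many sweeps. The MDP data are illustrative.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[x, u, :] = next-state distribution
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],
              [0.5, 0.0]])                # c[x, u] = one-step cost
beta = 0.9
n = P.shape[0]

policy = np.zeros(n, dtype=int)
while True:
    # policy evaluation: V = c_pi + beta * P_pi V, i.e. (I - beta*P_pi) V = c_pi
    P_pi = P[np.arange(n), policy]
    c_pi = c[np.arange(n), policy]
    V = np.linalg.solve(np.eye(n) - beta * P_pi, c_pi)
    # policy improvement: greedy with respect to the evaluated V
    improved = (c + beta * P @ V).argmin(axis=1)
    if np.array_equal(improved, policy):
        break                             # no improvement: policy is optimal
    policy = improved
```

Each sweep strictly improves the policy until the optimum is reached, which is why the method converges in finitely many steps even when value iteration would only converge in the limit.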

Dynamic programming has been very useful in inventory problems. A celebrated result of Scarf [581] generalized the work of Arrow, Harris and Marschak [18]. He analyzed a general model where the cost of an order is affine (i.e., constant plus linear) in the number of units ordered, and where there are costs both for holding inventory as well as for shortages. He showed that if the demand is random, and there is a lag in fulfilling orders, then the optimal policy is of the (S, s)-type: if the level of inventory is less than s, then order up to inventory level S. Extension of this type of result is still an ongoing area of operations research [691].
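
The (S, s) rule itself is easy to simulate. The sketch below assumes Poisson demand, zero order lead time and lost sales; none of these specifics come from Scarf's model, they just make the policy mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(2)

# (S, s) ordering policy: when the inventory level drops below s,
# order up to S.
S, s = 20, 5
inventory = S
orders = shortages = 0
for day in range(1000):
    demand = rng.poisson(3)            # random daily demand (assumed)
    if demand > inventory:
        shortages += demand - inventory
        inventory = 0                  # unmet demand is lost (assumed)
    else:
        inventory -= demand
    if inventory < s:                  # reorder point reached
        orders += 1
        inventory = S                  # order up to S, delivered immediately
```

The appeal of the result is that the optimal policy is summarized by just the two numbers S and s, however complicated the underlying dynamic program.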

Of great recent research interest is supply chain management of material flow over a network, coupling several agents who order from upstream suppliers and deliver to downstream customers, possibly also involving assembly, with the goal of minimizing the cost of holding inventory or the cost of shortages; see [672] for a recent review. Interestingly, an early investigator in this area was Forrester (see Section 3.3), who moved to the MIT Sloan School of Management and started a research program in System Dynamics in 1956. His book Industrial Dynamics [220] explored the dynamics of storage of goods in the chain from manufacturer to consumer via wholesalers. He developed the simulator Stella [220,565], which is still available. Motivated by "what if" questions, Ho and coworkers developed the perturbation analysis method to obtain sensitivities to parameters of queuing, inventory, and other discrete-event systems, from simulations or traces of evolution [303,302].

It is interesting to note that Forrester continued to explore dynamics in broader contexts; in 1969 he published Urban Dynamics [221] that modeled population, housing and industry in an urban area, and in 1971 he published the book World Dynamics [222] that modeled population, energy and pollution in the whole world. The book caught the attention of the newly founded Club of Rome [525], which funded a more detailed study, "Limits to Growth" [466]. Forrester's original model, consisting of four differential equations, was expanded to about 1000. The book predicted that growth was limited by natural resources. It was controversial because of many unvalidated assumptions; however, more than 12 million copies were sold, boosted by the 1973 oil crisis. Its central contention, though, is currently of great topical importance with respect to global warming as well as other environmental and ecological matters.

In an influential paper, Kimemia and Gershwin [382] formulated the problem of short-term scheduling of flexible manufacturing systems, where machines are subject to random failures and repairs, as a stochastic control problem, and exhibited an interesting switching structure of the solution. In some cases the resulting stochastic optimal control problems have been explicitly solved to determine the optimal hedging point policies [6,75]. Kimemia and Gershwin also articulated a dynamic system approach to manufacturing systems, and proposed a hierarchical time-scale decomposition of the overall manufacturing problem, ranging from long term capacity planning at the higher end to very short term part loading issues at the lower end.

The dynamic systems viewpoint was further developed in [526], emphasizing the importance of scheduling policies that maintain stability of buffer levels. Counterexamples showed that even simple networks could be destabilized by scheduling policies when there was effective two-way interaction between machines, i.e., "feedback" [399,441,588,106]. There was much effort to understand the stability of manufacturing systems and queuing networks. A powerful approach to establishing the stability of queuing networks, the fluid limit approach, was developed [570,161] as a complement to the direct Lyapunov-type analysis of the original stochastic system via Foster's criterion for positive recurrence of Markov chains [224]. Another powerful approach to study performance, Brownian network models, was developed based on Brownian motion models of queuing networks [284]. They can be used to approximate heavy traffic behavior [325,326] of queuing networks. Fluid limits are analogous to the law of large numbers that provides information on the mean, while Brownian limits are analogous to the central limit theorem that provides information on the variance. A particular motivating system for this work was semiconductor manufacturing plants that feature re-entrant material flow [675,398], i.e., loops that create feedback effects. Policies based on the approach of viewing manufacturing systems as dynamic stochastic systems [440] were implemented on IBM's 200mm wafer fab [481]. There is much current interest in stochastic processing networks [285]. They allow modeling of more general systems than queuing networks, allowing complex interactions between buffers, resources and activities. They encompass models not only of manufacturing systems but also of packet switches, call centers, etc.

The cumulative impact of all these control-related developments was transformative in terms of emphasizing the dynamic stochastic nature of manufacturing and other such systems, in contrast to static deterministic models. With respect to queuing systems, the first wave of work in the early 1900s, due to Erlang [197,119], was motivated by problems of telephony; the second wave in the 1950s, due to Jackson [341], was motivated by problems of job shops; and the third wave was motivated by problems of computer systems [51]. The fourth wave, motivated by problems of semiconductor manufacturing, and the most recent wave, aiming to integrate very general problems of resource scheduling, have been heavily influenced by control theory.

There are also significant advantages in integrating the business systems for supply chain management and enterprise resource planning (ERP) with the process control systems on the shop floor. This makes it possible to match process control with business objectives. Typical objectives are increased throughput, reduced energy consumption, improved capacity utilization, and reduced quality variability. DCS and PLC systems are used for process control, and business systems like ERP (Enterprise Resource Planning), MRP (Material Resource Planning) and master planning systems, delivered by companies like SAP and IBM, are used for plant management and business operations. To support interoperability between the business systems and the process control system, an intermediate layer referred to as MES (Manufacturing Execution System) is often used. The international standard IEC 62264 [323], also known as ISA95, provides support for enterprise-control system integration [107,584].

4.15 Simulation, Computing and Modeling

Simulation is useful because it allows exploration of the behavior of complex systems in a safe setting. The mechanical differential analyzer was driven by the need to understand power systems, and the electronic analog computer was invented to simulate control systems, as noted in Section 2.7. By 1960 analog computing was available at a number of industries and at some universities. At the turn of the century simulation was readily available on the desks of all engineers and students. Simulators were also combined with hardware to test controllers before they were delivered, so-called hardware-in-the-loop simulation.

When digital computers appeared it was natural to use them for simulation [560]. The development was triggered by a paper of Selfridge [589] that showed how a digital computer could emulate a differential analyzer. Intense activity [642,111] was stimulated by advances in numerical integration of ordinary differential equations [160,294,201]. By 1967 there were more than 20 different programs available, e.g., CSMP [112] from IBM. The Simulation Council Inc (SCI) created the CSSL standard [630], a major milestone because it unified concepts and notation. The program ACSL from Mitchell and Gauthier Associates [476], which was based on CSSL, became the de facto standard. Like its predecessors, ACSL was implemented as a preprocessor to Fortran; the code for integration was interleaved with the code representing the model. It was easy to include Fortran statements as part of the model, but documentation and maintenance were difficult. Another limitation was that computations were represented using the low-level elements of analog computing. ACSL was a batch program; recompilation was required when initial conditions or parameters were changed. The human-machine interaction was significantly inferior to traditional analog computing. The system Simnon [192], from 1972, admitted changes of parameters and initial conditions interactively without recompilation. The model was described in a special language with a formal definition; a simple language was also used for the interaction. Many other simulators appeared with the personal computer.
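
The computational core of all these simulators is numerical integration of ordinary differential equations. A minimal sketch of the classical fixed-step fourth-order Runge-Kutta rule (early packages used this and related rules; the test system is an illustrative assumption):

```python
import math

def rk4_step(f, t, x, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate dx/dt = -x, x(0) = 1 over one time unit
x, h = 1.0, 0.01
for i in range(100):
    x = rk4_step(lambda t, x: -x, i * h, x, h)
# x is now close to exp(-1)
```

Production codes differ mainly in step-size and order control, stiffness handling and event detection, but the basic structure of stepping a state forward is the same.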

The general availability of computers in the 1970s inspired the development of tools for analysis and design of control systems. Computer-Aided Control System Design became a subspecialty with symposia and conferences. Initially, industry and universities developed in-house systems. The appearance of personal computers and graphics in the mid 1980s stimulated a new generation of software. The state of the art circa 1985 is well summarized in the book [344].


Since design calculations are based on numerical algorithms, collaboration with researchers in numerical mathematics emerged. Two areas of particular importance were numerical linear algebra and the integration of differential and differential-algebraic equations. Numerical analysts developed reliable computer code for solving Lyapunov and Riccati equations [415], and for integrating differential and differential-algebraic equations [244,277,276,278,24,274].

The advent of Matlab, created by Cleve Moler in 1981, was a game changer. Moler had participated in the development of the LINPACK and EISPACK software libraries for numerical linear algebra, and he wanted to have a simple way to test the programs. He designed an interpretive programming language in which it was very easy to enter matrices and perform the calculations by typing simple commands. Moler also added functions and macros (scripts) which allowed the user to extend the language.

Matlab was picked up by the control community, and tools for control system design were developed. Pioneering work was done by two companies in Palo Alto: Systems Control developed CTRL-C [433] and Integrated Systems developed MatrixX and SystemBuild [595]; both systems were based on Moler's Matlab. John Little, who worked for Systems Control, obtained the rights to develop a PC version and teamed up with Moler and Bangert to found the company MathWorks. MathWorks developed the simulator SIMULINK [263] (originally called SIMULAB), integrated with Matlab, and Stateflow, a simulator for finite state machines [281]. MATLAB and Simulink are the dominant products, but there is other similar software. The program Sysquake [532] is highly interactive, and executable files can be distributed freely. There are two public domain products, Octave [183] and Scilab [330]. Tools for control system design are also being developed for the scripting language Python [612].

John Little encouraged control researchers to develop toolboxes for solving control problems, and much of the work on computer-aided control system design migrated to MATLAB. The toolboxes provided a convenient way to package theory and make it widely available. MathWorks also developed software for generating code for embedded systems from SIMULINK.

National Instruments (NI) supplied computer interfaces for instrumentation. In 1986 Kodosky of NI developed the program LabVIEW, which allowed flexible configuration of instruments with nice graphical panels [387,350]. The program was based on data flow programming. It was originally intended for emulation of electronic instruments but it also became popular for control applications. National Instruments acquired MatrixX, and features from it were gradually migrated to LabVIEW.

Simulation requires models of processes and controllers. Because of the wide range of applications, control engineers need models in many different domains. Even if modeling software for specific domains is available, it is difficult to combine the packages. It is therefore highly desirable to have a unified approach to modeling that cuts across different domains.

A simple and general approach to modeling is to split a system into subsystems, define interfaces, write the balance equations for the subsystems, and add constitutive equations. This approach yields a description that is general, close to physics, and convenient for building libraries. A drawback is that much manual work is required to assemble the subsystems into a model which is suitable for simulation or optimization. Much of the work can be automated using computer algebra and object-oriented programming. The procedure results in models that are differential-algebraic equations. In the 1980s there had been significant advances in the numerical solution of such equations [244,110,24,278]. The modeling method had been used for electronic circuits [490]. The language Dymola, developed by Elmqvist [193] in 1978, extended the method to general physical domains. Dymola had a formally defined syntax and it was implemented in Simula [76], the only object-oriented environment available at the time. Many other object-oriented modeling languages were developed later when more memory and computing power became available, for example [191,455,454,504,509,574,234,665,345,113,346,530]. In 1992 Elmqvist started the company Dynasim to market a modern implementation of Dymola. The program quickly gained industrial acceptance; it was, for example, used to develop the Toyota Prius. Dynasim was later acquired by Dassault Systemes.

A collaborative effort to develop a language for physical modeling was started in Europe in 1996. It was carried out by a diverse group with a broad range of experiences: modelers from many domains, software engineers, computer scientists and numerical analysts. Practically all European modeling groups participated. The effort resulted in the formation of the Modelica Association [28]. The first task was a formal definition of a modeling language; the first version was available in 1997 [194]. The Modelica language has many useful features such as units of variables, matrices and matrix equations, functions, hybrid modeling features and class parameters. A significant effort has been devoted to developing model libraries. There are libraries for many different fields, e.g., control systems, multi-body systems, electrical circuits, hydraulic systems, and thermal systems. The open source Modelica Standard Library contains about 1000 model components and more than 500 functions from many domains. The Modelica activity has expanded; there are groups for advanced development, language specification, and libraries. Textbooks have appeared [643,233]. The 80th design meeting was held in 2013 and the 10th Modelica conference was held in 2014.


Several Modelica simulation environments are available commercially and there are also open source versions [28]. Models developed in Modelica can be exported to SIMULINK.

4.16 The Organizations Promoting Control

The International Federation of Automatic Control (IFAC), see Section 3.7, provided a global arena for control. Since IFAC operated through national member organizations, it strongly contributed to the global spread of control. The national member organizations also organized conferences locally [81,537,186]. IFAC maneuvered very skillfully to maintain a world-wide control community in spite of political tensions during the cold war. The triennial IFAC World Congress has been operating since 1960 [354]. IFAC also arranges workshops and symposia. Participation in IFAC activities and committees was a good training experience, particularly for control engineers from small countries. Automatica became an IFAC journal in 1969 [149] with George Axelby [42] as the editor. IFAC's activities have expanded substantially and today there are IFAC meetings almost every week. Later IFAC started several journals: Annual Reviews in Control (1977), Engineering Applications of Artificial Intelligence (1988), Journal of Process Control (1991), Mechatronics (1991) and Control Engineering Practice (1993).

There are also significant activities organized by other engineering organizations. The Instrument Society of America (ISA), formed in 1946, was renamed the International Society of Automation in 2000. It organizes a yearly Automation Week as well as conferences and symposia. ISA also publishes books and the journals InTech and ISA Transactions. The American Society of Mechanical Engineers (ASME) created a division for instruments and regulators in 1943. The Journal of Dynamic Systems, Measurement and Control was started in 1971. The division changed its name from Automatic Control to Dynamic Systems, Measurement and Control in 1978. The AIAA started the Journal of Guidance, Control, and Dynamics in 1978.

The IEEE Control Systems Society was formed in 1971, see Section 3.7. The long running Symposium on Adaptive Processes (1963–1970) became the IEEE Conference on Decision and Control (CDC). Interestingly, it did so just as research in adaptive control began to take off. The CDC had generous acceptance practices for conference papers, which encouraged researchers to submit their latest research and attend the annual conference. It became a fertile meeting ground with a large umbrella. The IEEE Transactions on Automatic Control, with a dynamic editorial board organized along topical areas and regularly rotated with fresh talent, became a major publisher of theoretical research papers.

The American Automatic Control Council, which is the national member organization of IFAC in the USA, organizes the yearly American Control Conference in collaboration with many engineering societies: AIAA, AIChE, ASCE, ASME, IEEE, ISA, and SCS. The European Control Conference, which now meets every year, started with a meeting in Grenoble in 1991. The Asian Control Conference, launched in 1994, now meets regularly every other year. The organization MTNS focuses on theoretical issues in system theory and organizes biennial conferences.

There are also strong organizations in China, England, France, Germany, Japan and many other countries which organize symposia and publish journals.

Some organizations created during the war, like the Radiation Laboratory at MIT, were dismantled; others, like LIDS at MIT [477], the Coordinated Science Laboratory at the University of Illinois, the Institute of Control Sciences in Moscow, and the institutes run by the academies of sciences in Hungary, China, Czechoslovakia and Poland, flourished after 1960. New institutions were also created. In Japan there were large national programs for Fourth Generation Computers and Fuzzy Control.

The Institute for Research in Computer Science and Control (IRIA) was started by the French Ministries of Research and Industry in 1967 as part of General de Gaulle's Plan Calcul. It was one of the first research institutes that combined control and computer science. The institute was originally in Rocquencourt outside Paris. It became a national institute and was renamed INRIA in 1979, and it has since expanded with 8 regional research centers. The institute employs close to 4000 people, among them about 1000 PhDs and 500 postdocs. It became a powerhouse for research under superb leaders, among them the mathematicians Jacques-Louis Lions and Alain Bensoussan. INRIA has strong interactions with industry and has spun off about 100 companies. It pioneered work on control of systems governed by partial differential equations, and software like Scilab and Esterel was created at INRIA.

After the Second World War, there was a major expansion of research worldwide, and a great growth of major research universities. Research funding increased significantly. The National Science Foundation was created in the US, and its mode of peer review of proposals leveled the playing field for researchers irrespective of location. After the experience with the fire control efforts and the Manhattan Project during the Second World War, there was a great infusion of funding to universities by the Department of Defense in the USA, often operating in a peer review mode. The European Union started major research programs, as did the Japanese government. Control research was a major beneficiary of all these developments in the period after 1960. Research from universities in the area of control grew tremendously. There was a great expansion in hiring of control faculty. There was also a strong internationalization; students and teachers moved between different countries. The US benefited strongly from immigration of students and scientific talent from other countries. The EU established the Erasmus Programme in 1987, followed by the Socrates and Lifelong Learning Programmes and the Marie Curie program for experienced researchers.

5 Widening the Horizon

Around 2000 there were indications that control was entering a new era. Traditional applications were exploding because of the shrinking cost of computing, while new applications were emerging. The applications ranged from micro- and nano-scale devices to large-scale systems such as smart national power grids and global communication systems. The expansion of the Internet and the cellular networks were strong technology drivers, as was the desire for systems with increased autonomy. A sign of the importance is that the inaugural Queen Elizabeth Prize for Engineering was awarded to Louis Pouzin, Robert Kahn, Vint Cerf, Tim Berners-Lee and Marc Andreessen in 2013 for "the ground-breaking work, starting in 1970, which led to the internet and worldwide web. The internet and worldwide web initiated a communications revolution which has changed the world" [225]. It is an educated guess that it will also have a very strong impact on automatic control.

There was also a pressing need to develop methodologies for mass producing complex control systems efficiently. In the Golden Age, control had benefitted strongly from interactions with mathematics. In this next phase, stronger interaction with communication engineers and computer scientists started to develop. Interactions with physics, biology and economics are also increasing. In this section we provide an overview of some of the trends. Our treatment of what lies ahead is necessarily speculative.

5.1 Advances in Computing and Networks

Computer hardware, following Moore's law, is incomparably more powerful now than it was in 1960. The Cray-1, delivered to Los Alamos National Laboratory in 1976, weighed over five tons but delivered only 250 megaflops, while a current Mac Pro desktop is several hundred times faster, delivering 90 gigaflops. In the past fifty years, embedded computers have also proliferated. Indeed, already by 1998, only 2% of all processors were workstations while 98% were for embedded systems [620].

Software engineering has made great advances. Experience based on large and complex projects has been codified and made reusable in the form of design patterns, software frameworks and development processes [238,543].

One of the most noticeable changes is the birth and growth of communication networking. Telephony, which had originated around 1877, was based on a circuit-switched network. In 1969, the US Advanced Research Projects Agency (ARPA) developed a packet-switched network that connected four university computers [72]. A flexible architecture was developed in 1974 [135] that allowed different, previously incompatible networks to be interconnected. It featured hierarchical addressing, gateway routers between networks, and TCP, a protocol to ensure reliable delivery of packets in an ordered fashion across networks. Later this was split into two protocols, together designated TCP/IP [134], with the TCP part running only on end hosts, while the IP part took care of packet passing between networks or within a network. This made it feasible to scale up the network.

At around the same time, packet radio networks were also emerging. In fact, one of the goals of TCP was to interconnect packet radio networks such as PRNET and SATNET with the ARPANET. In 1971 the ALOHAnet packet radio network was developed at the University of Hawaii. It was used to connect users across the Hawaiian islands with a computer in Oahu [2]. The key innovation was the random access protocol used to resolve contention between several users for the shared wireless medium. This was later the central feature of Ethernet [470], which was developed around 1973. Much later, the random access protocol was also adopted for use in wireless local area networks (WLANs) in the IEEE 802.11 standard, which has proliferated across offices and homes worldwide [154].

In a parallel development, cellular telephone systems have also proliferated. The first design for a US cellular telephony system, the Advanced Mobile Phone System (AMPS), was developed in 1971. The first portable mobile handset was developed in 1973. In Japan, the Nippon Telegraph and Telephone (NTT) company deployed the first integrated commercial cell phone system in 1979. By 1960 several Nordic countries had their own mobile systems. In 1981 the Nordic Mobile Telephone Network (NMT) made it possible to use mobile phones across the countries. NMT was followed in 1992 by the Global System for Mobile Communications (GSM), which permitted users to place and receive calls globally. Researchers from AT&T, NMT, NTT, and Motorola were awarded the 2013 Draper Prize [487] for these developments. Cellular technology and the World Wide Web of interlinked hypertext documents, developed in 1990, have created a revolution in terms of connectivity and information access across the globe.

Today there are more than 5 billion wirelessly connected mobile devices, and the number is expected to increase by one or two orders of magnitude by 2020 [196,145]. The end-to-end latency is of particular interest for control. In the current LTE/4G system it is around 100 milliseconds, but it is expected to be down to a few milliseconds in the 5G system planned for 2020. Such small latencies will significantly expand opportunities for control applications over the network.

In 1998, the Smart Dust project at the University of California at Berkeley [353] developed tiny devices called "Motes" that could compute, communicate wirelessly, and to which sensors could be connected. The Rene Mote developed by Crossbow Technologies in 1999 had an ATMEL CPU, 512 bytes of RAM, 8K of flash memory and a packet radio that could communicate at about 10 kbps [300]. A key development was the TinyOS open source operating system [423,155], which has facilitated much experimentation by academic researchers. Since their original development there have been several generations of Motes. They can be used to form relatively large "sensor networks" with widely distributed nodes that communicate wirelessly and perform computation on the data they receive.

If one attaches actuators to sensor networks, then one obtains what some computer science communities call "sensor-actuator networks," a notion familiar to control engineers. Of particular interest for control is WirelessHART, which is designed as a communication standard for process control [151,617]. Also of interest for control is ISA100.11a, developed by the International Society of Automation (ISA) [336].

5.2 Control of and over Networks

Information between multiple sensors and actuators can be transported over a packet-based network. Thus, the advances in networking make it possible to deploy control systems on a large scale, giving rise to "control over networks," or what is dubbed "networked control" [45]. Since loops are closed over the communication network, the network plays an important role in overall stability and performance. Networks themselves also require control to provide good performance, an area called "control of networks."

Congestion control is an early example of the use of feedback in a network. The rate of injection of packets into a network is regulated to avoid congestion while maintaining a high throughput [660]. In fact TCP, later the TCP/IP protocol, which does this, is at the heart of the Internet and is one of the reasons why the Internet has proliferated so rapidly. More generally, control principles and loops are needed at several levels for the operation of networks. The early ALOHA protocol [2], a component of WiFi, is a feedback control scheme which attempts to throttle nodes when they are causing too many packet "collisions," in a manner similar to James Watt's governor.
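The feedback rule at the core of TCP congestion control, additive-increase/multiplicative-decrease (AIMD), can be sketched in a few lines. The constants and the fixed link capacity below are illustrative, not taken from any particular TCP implementation:

```python
# Sketch of additive-increase/multiplicative-decrease (AIMD), the feedback
# rule underlying TCP congestion control. Constants are illustrative.

def aimd_step(cwnd, loss_detected, incr=1.0, decr=0.5):
    """One feedback update of the congestion window (in packets)."""
    if loss_detected:
        return max(1.0, cwnd * decr)  # multiplicative decrease on congestion
    return cwnd + incr                # additive increase otherwise

# Simulate: the window grows linearly and halves when it exceeds capacity,
# producing the familiar "sawtooth" around the available bandwidth.
capacity = 20.0
cwnd = 1.0
trace = []
for _ in range(60):
    cwnd = aimd_step(cwnd, loss_detected=(cwnd > capacity))
    trace.append(cwnd)
```

The trace oscillates just below the link capacity, which is precisely the throttling behavior described above.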

For control systems, it is important to design communication networks that provide the "quality of service" that control loops need. The networks will have to not only deliver packets from sensors to actuators at a specified throughput, but will also have to deliver them within a specified delay. The current Internet is what is called "best effort"; it does not provide such guarantees, but they are important if one is to close loops over networks. The CAN bus [144] and fieldbus systems [138] have been designed for control applications. A major challenge is to design wireless networks that provide such quality of service. For example, it is of interest to replace current wired intra-vehicular networks connecting about 75 sensors and 100 switches with a wireless access point serving them, since that can save weight, reduce complexity of manufacture, permit easier upgrades, etc. The problem of characterizing what types of quality of service access points can support, and how to do so, is of great interest [312].

Concerning control over networks, issues such as the design of the system, proofs of stability, or establishment of performance need to take into account the characteristics of the imperfect network over which information from sensors to actuators, or actuators to sensors, may be transported. This problem can be addressed at different granularities to take into account different aspects of the constraints posed by the network or communication channel involved [439].

Probably one of the earliest issues to confront with respect to the control system design is when to sample the system so as to reduce the data needing to be transported over the network. One could of course sample periodically or at given time points; this is reminiscent of the manner in which the Riemann integral is defined. However, it may result in the system being sampled unnecessarily even if nothing has changed. An alternative is to sample it on an event-driven basis, which is reminiscent of the manner in which the Lebesgue integral is defined [33].
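The two sampling philosophies can be contrasted with a small sketch; the signal, period and threshold below are illustrative:

```python
# Periodic ("Riemann") vs. event-triggered ("Lebesgue") sampling.

def periodic_samples(signal, period):
    """Sample at fixed time points, regardless of signal activity."""
    return list(range(0, len(signal), period))

def event_samples(signal, threshold):
    """Sample only when the signal has moved by more than `threshold`
    since the last sample."""
    idx = [0]
    last = signal[0]
    for k, x in enumerate(signal):
        if abs(x - last) > threshold:
            idx.append(k)
            last = x
    return idx

# A signal that is constant except for one step change:
signal = [0.0] * 50 + [1.0] * 50
print(len(periodic_samples(signal, 5)))  # 20 samples, though little happens
print(len(event_samples(signal, 0.5)))   # 2 samples: the start and the step
```

For a mostly quiescent signal, event-triggered sampling transmits an order of magnitude fewer packets while capturing the same information, which is exactly the motivation given above.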

From the viewpoint of transporting packets over the network, three important characteristics are the rate at which the network can handle incoming packets, the delay that packets may experience before they are delivered at the intended destination due to the traffic load on the network, and the probability with which the network may drop packets. All three aspects are of interest vis-à-vis their impact on the control system. For an LQG system, the stability of the associated Kalman filter depends on the probability that packets containing observations are lost [610]. For the control problem, it is of interest to determine the data rate needed to be provided by the channel in order to stabilize a given linear system [492]; this is also related to the problem of how to quantize measurements for the purpose of control [689,118]. At a more fundamental level, when one wishes to stabilize unstable systems over control loops containing noisy channels, there arise control-specific notions of information-theoretic capacity, such as "anytime capacity" [573].
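The flavor of such results can be shown with a toy scalar version, in the spirit of [610,492] but not taken from them. For a plant x⁺ = a·x + u + w whose dead-beat control packet is dropped independently with probability p, the second moment obeys m⁺ = p·a²·m + σ², which converges if and only if p < 1/a². A minimal numerical sketch, with illustrative parameters:

```python
# Toy networked-control calculation: scalar unstable plant x+ = a*x + u + w,
# dead-beat control packet dropped i.i.d. with probability p.
# Second-moment recursion: m+ = p * a^2 * m + sigma2, stable iff p < 1/a^2.

def steady_state_var(p, a, sigma2, steps=200):
    """Iterate the second-moment recursion from zero initial variance."""
    m = 0.0
    for _ in range(steps):
        m = p * a * a * m + sigma2
    return m

a = 2.0       # unstable pole, so the critical drop probability is 1/a^2 = 0.25
sigma2 = 1.0  # process noise variance

print(steady_state_var(0.10, a, sigma2))  # p < 0.25: converges to sigma2/(1 - p*a^2)
# steady_state_var(0.50, a, sigma2) grows without bound, since 0.50 > 0.25
```

The sharp threshold on the drop probability is the scalar analog of the stability conditions established in the cited work.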

5.3 Computing and Control

There has been increasing interest in control by the computer science community, and vice versa. One reason is the increasing employment of feedback in computing systems. Another is the proliferation of embedded computing systems. The large number of computer control systems in cars has driven the need to automate design, production and testing of control systems. Also, with the increased complexity of systems featuring computers in the feedback loop, there is a need for methodologies and tools for reliable system design. In many complex applications, it is important to design systems with guaranteed safety.

Similar to "control of networks," control is also increasingly being applied to computing systems [293]. The main drivers are increased flexibility, better quality of service, and the desire to save energy and cost. Feedback-based techniques have been considered for dynamic management and scheduling of resources such as CPU time, memory, IO bandwidth, and power, in systems ranging from embedded computers used in, e.g., smartphones, to data centers hosting server-based cloud applications. Successful applications have been developed in web storage systems, high-performance server systems, real-time databases, software performance tuning, and multimedia streaming, to name a few.

In the reverse direction, the interaction of some subcommunities in computer science with control has been longstanding. A prime example is the real-time community. In 1961, the IBM 1720 Process Control Computer System was installed in three plants [287]. The theoretical foundation of real-time systems started with the work of Liu and Layland [434]. This pioneering work considered the problem of how to schedule a CPU to serve several tasks, where jobs in each task are periodic and require a certain execution time. The Rate Monotonic policy developed by them prioritizes jobs according to the frequency with which jobs of that task arrive. It is a particularly simple static priority policy that has seen widespread implementation. For a large number of tasks, rate monotonic scheduling guarantees that all tasks will be executed properly provided that the CPU utilization is less than ln 2 ≈ 0.69. This conservatism can be reduced by applying scheduling algorithms based on feedback [22,594]. A prominent example of the importance of real-time computing considerations in control systems is the priority inversion problem in the real-time computation system that occurred in 1997 on the Mars Rover [348,562]. Researchers in real-time systems have also begun addressing the problem of robustness of control loops, where the robustness includes errors in implementation. The so-called "Simplex" architecture of [592,593] addresses the problem of robustness to software bugs in the implementation of new control algorithms.

An important aspect anticipated of future control systems is the interaction between the physical world, often modeled by differential equations, and the logical dynamics of the computational world. Typically, one would like to establish the properties of the composite system comprising both. The emerging field of hybrid systems is one attempt to address these challenges [451,297,419,71]. It is an interesting meeting place of control and computer science [70]. Hybrid automata models have been used for this purpose [295]. It is of interest to determine the reach-set [10]. For example, one would like to determine if the system is "safe," i.e., that it never reaches an unsafe state. Software tools for computing such quantities are useful, e.g., [413]. However, determining that is undecidable for general models, and it is of interest to characterize what is decidable [9,296]. More generally, one would like to establish properties such as safety and liveness of an entire system, such as an automated distributed transportation system [265,380].
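A toy illustration of a hybrid automaton is a thermostat with two discrete modes and continuous temperature dynamics; all numbers below are illustrative, and real verification tools compute reach-sets symbolically rather than by simulation as done here:

```python
# Toy hybrid automaton: a thermostat with discrete modes ON/OFF (logical
# dynamics) and continuous temperature flow (physical dynamics).
# Forward simulation checks that an unsafe set is never reached.

def simulate(t0=20.0, steps=2000, dt=0.01):
    mode, temp = "ON", t0
    trace = []
    for _ in range(steps):
        # Continuous flow: heat in ON mode, cool in OFF mode.
        rate = 2.0 if mode == "ON" else -1.0
        temp += rate * dt
        # Guard conditions trigger the discrete mode transitions.
        if mode == "ON" and temp >= 22.0:
            mode = "OFF"
        elif mode == "OFF" and temp <= 18.0:
            mode = "ON"
        trace.append(temp)
    return trace

trace = simulate()
# Safety check: the temperature stays inside the envelope [17.9, 22.1].
assert all(17.9 <= x <= 22.1 for x in trace)
```

The simulation explores only one trajectory; establishing safety for all initial conditions and disturbances is exactly the reach-set computation that the cited tools address, and that is undecidable in general.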

More broadly, "time" is an essential matter for control systems, in contrast to, say, general purpose computing [549]. That is, for safety-critical systems, timeliness of interactions is important for maintaining safety, stability, etc. The time-triggered architecture is an approach to developing distributed embedded systems that seeks to attain reliability by temporal coordination [390]. When closing loops over a wireless network, there are certain limitations to synchronizing clocks; certain combinations of delays and clock offsets cannot be resolved [231].

Control systems are often safety critical. Their security is therefore a major concern. They may be vulnerable to attacks occurring over the network to which they are connected. The recent Stuxnet worm specifically attacked control systems [200,463,141]. There have been other less reported attacks on a natural gas pipeline system [582], a water system [198], a Supervisory Control and Data Acquisition system [605], trams [426] and power utilities [267]. Defense of ordinary data networks is already problematic, and defense of complex control systems is even more so, since attacks can take advantage of complex interactions between the networking, computational, control systems, and physical layers. Much needs to be done in the area of security [499]. There are some standardization efforts under way [497,627,333].

Automatic control has a strong base in mathematics. The interaction goes in both directions; a wide range of mathematics has found use in control, and control has occasionally stimulated the development of mathematics. Some examples are system theory, optimal control, stochastic control, and nonlinear control [207,485]. With the convergence of communication, control and computing, newer theoretical areas such as hybrid systems and real-time information theory have emerged. Theories of stability, performance or safety will also need to straddle different areas, since it is the overall system that is ultimately the determinant of performance. In some specific systems one can provide holistic handcrafted proofs of performance of the overall system that include discrete event dynamics, real-time scheduling, kinematics, etc. [265]. However, as we build more complex systems such as automated air transportation systems, it is useful to be able to automate the proofs of safety. Complexity, however, is a major challenge for computational procedures, so control researchers will need to develop new theories that permit tractable ways of modeling.

5.4 Autonomy

Research on systems with adaptation and learning has been well developed for a long time, as noted in Section 4.12. However, higher levels of autonomy, which include cognition and reasoning, will be required in the future. It is pointed out in an NAE study [488] that:

Everything will, in some sense, be smart; that is, everyproduct, every service, and every bit of infrastructurewill be attuned to the needs of the humans it serves andwill adapt its behavior to those needs.

Interesting experiments with robot cars were performed by Ernst Dickmanns at the end of the last century. He equipped cars with cameras and other sensors [168]. In 1994 he demonstrated autonomous driving on a highway near Paris, and in 1995 one of his cars drove autonomously (though with human supervision) from Munich to Copenhagen at speeds up to 175 km/hour. The car was able to overtake and change lanes. More recently, in 2007, several autonomous cars competed in a deserted city in the DARPA Urban Challenge. One of the rules required that the cars follow the traffic rules of California. Since then, Google has developed a driverless car [195]. Carnegie Mellon's Boss is another autonomous vehicle [555]. Even if full autonomy is not introduced on a massive scale, elements of it, such as collision avoidance and parking assist, are now available in new cars. Autonomous air vehicles are in operation, and an unmanned cargo mission has recently been performed with a Black Hawk helicopter [673].

Humanoid robots are other examples of systems with a high degree of autonomy. Research in Japan has been particularly dominant in this area. Several generations of robots have been developed. Toyota recently announced a violin-playing robot. Humanoid robots that act as patients have been developed for training dentists.

5.5 Model Based Design

The automotive industry started to have an impact on computer-aided design of control systems when computer control was introduced in cars in the 1970s, a development that accelerated when the systems became more complex. More than 80 million cars were produced in 2012; ordinary cars may have ten or more electronic control units (ECUs) while advanced cars may have over 100 ECUs. At this scale it is important to have efficient engineering procedures for design and manufacturing of the systems. Often the controller is co-designed with the plant. There are similar needs in many other industries even if the numbers are smaller.

Development typically includes the following tasks: requirements, modeling, control design, code generation, implementation, hardware-in-the-loop simulation, commissioning, operation and reconfiguration. Validation, verification and testing are inserted between the different tasks, since it is expensive to find errors late in the design process. Since models are key elements of the procedure, it has become known as model-based design (MBD). The advantage of using models is that fewer prototypes have to be built; this is particularly important when building new systems like hybrid cars. The aerospace and automotive industries were early adopters of MBD, which is currently developing rapidly [379,275]. Use of MBD is endorsed by the following quote from an NAE study [488]:

There will be growth in areas of simulation and modeling around the creation of new engineering structures. Computer-based design-build engineering ... will become the norm for most product designs, accelerating the creation of complex structures for which multiple subsystems combine to form a final product.

An example of the use of MBD is that suppliers of components for climate control systems for cars in Germany now provide not only hardware but also validated dynamic models of their equipment [428]. This makes it possible for car manufacturers to simulate the complete system and to explore the consequences of using components from different suppliers on fuel consumption and comfort.

In system design it is desirable to explore design choices and to investigate several process configurations. A cardinal sin of automatic control is to believe that the system to be controlled is given a priori. Control problems that are difficult can be alleviated by modification of the process or the system architecture. Integrated design of a process and its controller is highly desirable. As mentioned in Section 2.5, the Wright brothers succeeded where others failed because they deliberately designed an unstable airplane that was maneuverable, with a pilot used for stabilization. There are still substantial advantages in having an unstable aircraft that relies on a control system for stabilization. Modern fighters obtain their performance in this way.

There are tools for some phases of the design process: e.g., DOORS for requirements [321], CAD programs for equipment design, Modelica for modeling, MATLAB for control design, and SIMULINK for simulation and code generation. Systems for documentation and version control are also available. Even if it is desirable to have a complete design suite, it is unlikely that a single software package can serve all the needs. Software tools therefore have to be designed so that they can be combined. To give one example, Dassault Systèmes is combining Dymola/Modelica with its CAD program CATIA. This means that 3D geometry data, masses and inertias are available directly from the CAD system. High quality 3D rendering is also available to animate the simulation results.

A recent development in the industrial simulation community is the introduction of the Functional Mock-up Interface (FMI), designed to facilitate tool interoperability at the level of compiled dynamic models. FMI specifies an XML schema for model metadata, such as names and units, and a C API for evaluation of the model equations. The first version of FMI was introduced in 2010, and since then a large number of tools have adopted the standard. Future versions of FMI will support communication of complete models in XML format, which is suitable for integration with symbolic tools that can explore the structure of models beyond evaluating model equations [522].

Car manufacturers typically buy systems consisting of sensors, actuators, computers and software as packages from suppliers. This approach worked very well when there were only a few electronic systems with little interaction. The situation became complicated when more control functions were added, because a sensor from one subsystem could be used in another subsystem. The automotive industry, including its suppliers and tool developers, therefore created the AUTomotive Open System ARchitecture (AUTOSAR), an open and standardized automotive software architecture for automotive electrical and electronic systems [40].

There is a wide range of interesting problems that appear after a control system is designed and implemented. First, the system has to be commissioned and all control loops have to be brought into operation. Then it is necessary to continuously supervise and assess that the system is running properly. Many of the problems occurring during this phase have only recently begun to be addressed in a systematic fashion. Typical issues are fault detection and diagnosis, but there are also many other interesting problems, such as loop assessment and performance assessment. Developments in this area are strongly motivated by the drive for safety and higher quality. Commissioning can be influenced substantially by a proper control design. Automatic tuners, for example, can drastically simplify the commissioning procedure.

When the automatic control system becomes a critical part of the process, it may also become mission critical, meaning that the system will fail if the control system fails. This places strong demands on the reliability of the control system. An interesting discussion of the consequences of this is found in the inaugural IEEE Bode Lecture by Gunter Stein [623].

5.6 Cyber-Physical Systems

The increased use of communication, and the increased sophistication and complexity of software in both control system design and operation, have led to closer interaction between control, computing and communication. It is also fostering the development of control systems of large scale. As was the case for the previous eras, this third potential platform revolution, after analog and digital control, is also creating a need for a framework for rigorous design.

This interaction has also been stimulated by several research funding agencies. DARPA launched a research program called Software Embedded Control in 1999 [575], which was followed by an NSF project on Embedded and Hybrid Systems. The European Union launched the ARTIST Accompanying Measure on Advanced Real-Time Systems in 2001 [19], and the Advanced Research and Technology for Embedded Intelligence Systems (ARTEMIS) program in 2004 [496]. In 2006, a group of researchers and program managers in the US coined the name "Cyber-Physical Systems (CPS)" to describe the increasingly tight coupling of control, computing, communication and networking. In the US, the National Science Foundation established a major research funding program in Cyber-Physical Systems [44]. In its strategic plan towards 2020 [331], INRIA emphasizes "the challenge of very large digital, embedded and buried systems, and of systems of systems." The Robert Bosch Center for Cyber-Physical Systems was established at the Indian Institute of Science in 2011. In Sweden, the Strategic Research Foundation supported 10-year Linnaeus Grants for three centers that support control research [3,484,417].

In 2008, a week-long annual event called "CPS Week" was launched, which has grown to include five collocated conferences: the ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), the Conference on High Confidence Networked Systems (HiCoNS), Hybrid Systems: Computation and Control (HSCC), the ACM International Conference on Information Processing in Sensor Networks (IPSN), and the IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). In 2013, the Proceedings of the IEEE celebrated the hundredth anniversary of the IRE (the IEEE was formed from the union of the IRE and the AIEE in 1963) and published a 100th Anniversary Issue in which Cyber-Physical Systems was one of nineteen topics chosen for inclusion [381]. There is also great interest in "The Internet of Things," focusing on connecting large numbers of general physical objects, from cattle to cars to coffee makers, expected to number 50 billion by 2020. The IEEE is starting two new journals, the IEEE Transactions on Control of Network Systems to commence in 2013, and the IEEE Internet of Things Journal to commence in 2014.

5.7 Complexity of Systems

Engineering systems are steadily becoming more complex. Complexity is created by many mechanisms; size, interaction and the complexity of the subsystems are three contributing factors.

Chemical process control systems can have many thousands of feedback loops. Recirculation schemes save energy and raw material and reduce pollution, but they introduce coupling from the output streams to the input streams, which generates interactions and complexity. Efficient systems for distributing goods globally using computer-assisted supply chains use complicated networks for the transport of goods and information. Astronomical telescopes with adaptive optics may have a large number of reflecting surfaces whose orientations are controlled individually by feedback systems. Even in a small system, due to the “curse of dimensionality,” the resulting size can be extremely large after discretization.
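The growth alluded to in the last sentence is easy to quantify: quantizing each of d state variables into n levels yields n^d discrete states, so the state count grows exponentially with dimension. A tiny illustration (the grid sizes are arbitrary):

```python
def grid_states(n_levels: int, dims: int) -> int:
    """Number of discrete states when each of `dims` state variables
    is quantized into `n_levels` levels: the curse of dimensionality."""
    return n_levels ** dims

# Even a coarse 10-level grid explodes with dimension:
for d in (2, 4, 8):
    print(d, grid_states(10, d))
```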

One of the great achievements of the last five decades has been the development of a rigorous foundation for the study of the complexity of computing as size increases [152,372], though some basic questions remain open. A fundamental challenge pervading many applied domains, including control system design and analysis, is to develop models that can result in tractable algorithms whose complexity scales polynomially, preferably of low degree, in the size of the system.

The Internet and the electricity grid are perhaps among the most complex systems that have been engineered. Both depend critically on feedback at several levels for their operation. As noted in Section 5.2, in the case of the former, this is what is meant by control of networks. Algorithms suitable for small systems may not be suitable for large systems. It is also of interest to determine scaling laws that provide insight into how system performance changes as size grows.

Another question attracting great interest is how to take advantage of the increasing availability of large amounts of data, called “big data.” This can be especially useful in understanding the behavior of large socio-economic-technological systems, whose “physics” is not well understood. An example is the control of demand response in the emerging smart grid, supplied by renewable energy and controlled by price signals.

Another factor that introduces complexity is that the systems are hybrid: continuous systems are mixed with logic and sequencing. Cruise control in cars is a simple example. Other examples are found in process control, where many continuous controllers are combined with systems for logic and sequencing. Such systems are very difficult to analyze and design. The modern car is an example of a complex system; it has several networks and up to 100 electronic control units. The specification, design, manufacturing, operation and upgrading of such systems is an increasingly complex task.
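The cruise-control example can be sketched to show the mix of continuous dynamics and discrete logic. Everything below (the first-order vehicle model, the PI gains, the braking event) is an illustrative assumption, not a description of any production system:

```python
def simulate_cruise(v0=20.0, v_set=25.0, steps=600, dt=0.1):
    """Hybrid sketch: a continuous PI speed controller combined with
    discrete logic (disengage on braking, re-engage afterwards)."""
    v, integral = v0, 0.0
    engaged = True
    kp, ki = 0.5, 0.05
    for k in range(steps):
        brake = 200 <= k < 250             # driver brakes briefly: a logic event
        if brake:
            engaged = False                # discrete mode switch: disengage
            u = -2.0                       # braking deceleration command
        else:
            if not engaged:
                engaged = True             # resume: re-engage, reset integrator
                integral = 0.0
            e = v_set - v
            integral += e * dt
            u = kp * e + ki * integral     # continuous PI law
        v += (u - 0.05 * v) * dt           # first-order vehicle dynamics with drag
    return v

v_final = simulate_cruise()                # speed recovers to the set point
```

The interplay of the two layers, a mode switch plus an integrator reset on top of a continuous control law, is exactly what makes such systems hard to analyze as a whole.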

5.8 Physics

Interactions between physicists and engineers are increasing [53]. Feedback control systems have played a critical role in instruments for physics, more so with the increasing complexity of the experiments. For example, governors were used to track the motion of planets in early telescopes [456], feedback was an essential element of early spectrometers [502], and the 1912 Nobel Prize in Physics was awarded to Gustaf Dalén for the “invention of automatic regulators for use in conjunction with gas accumulators for illuminating lighthouses and buoys” [505]. The Dutch engineer van der Meer shared the 1984 Nobel Prize in Physics for a clever feedback system for generating a high-density beam in a particle accelerator [505].

Feedback has also proven crucial for physics experiments. Large telescopes use adaptive optics to reduce the disturbances caused by density variations in the atmosphere. Control systems are also widely used at the micro and nano scales [262,190]. Binnig and Rohrer shared the 1986 Nobel Prize in Physics for the invention of the scanning tunneling microscope. A variation, the atomic force microscope, is now a standard tool for biologists and material scientists, capable of providing images with sub-nanoscale resolution. The control systems in these instruments are critical; improved control gives immediate benefits in terms of sharper and faster imaging. Great challenges faced by modern control engineering were overcome in making the Large Hadron Collider (LHC) operational. Many interacting system components function on time scales that differ by several orders of magnitude, from nanoseconds for particle beam steering to months for cooling large electromagnets.

Control has also had impact at a more fundamental level. Early attempts were made to explain shear flow turbulence by linearizing the Navier-Stokes equations. The hypothesis was that the linearized equations would become unstable as the Reynolds number increased, which, somewhat surprisingly, did not happen. Another attempt based on linear analysis was made by Dahleh and Bamieh [46]. They computed the gain of the operator mapping surface irregularities to velocity based on a linearized model, and found that the gain increased with the Reynolds number in a way compatible with experimental data.

Control has also been applied to quantum systems [317,377,227], NMR imaging [378] being one application. Another spectacular application is given in [120], where it is proposed to break molecules into ions by applying very fast laser pulses. The problem can be formulated as an optimal control problem for the Schrödinger equation of the molecule, where the cost criterion is to break the molecule with minimal energy.

5.9 Biology and medicine

In 1932 the physiologist Walter Cannon wrote the book The Wisdom of the Body [132], in the introduction of which he says:

Our bodies are made of extraordinarily unstable material. Pulses of energy, so minute that very delicate methods are required to measure them, course along our nerves. ... The instability of bodily structure is shown also by its quick change when conditions are altered. ... The ability of living mechanisms to maintain their own constancy has long impressed biologists. ... Here, then, is a striking phenomenon.

He then went on to say:

Organisms, composed of material which is characterized by the utmost inconstancy and unsteadiness, have somehow learned the methods of maintaining constancy and keeping steady in the presence of conditions which might reasonably be expected to prove profoundly disturbing.

Cannon’s book is based on insights obtained by careful observations and experiments. In our terminology we can summarize the above statements as: the human body has fantastic control systems. It is, however, a long way from this qualitative statement to quantitative results based on models and analysis, as illustrated by the following quote from the book The Way Life Works [306]:

Feedback is a central feature of life. All organisms have the ability to sense how they are doing and to make necessary modifications. The process of feedback governs how we grow, respond to stress and challenge, and regulate factors such as body temperature, blood pressure and cholesterol level. The mechanisms operate at every level, from the interaction of proteins in cells to the interaction of organisms in complex ecologies.

Feedback has been used extensively when investigating biological systems. Hodgkin and Huxley received the 1963 Nobel Prize in Medicine for “their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane.” They used a clever feedback system to investigate the propagation of action potentials in the axon. The measurement technique was further refined by Neher and Sakmann, who received the 1991 Nobel Prize in Medicine “for their discoveries concerning the function of single ion channels in cells.”

Today, there are many efforts to develop efficient tools for patients and doctors and to augment the body’s natural feedback systems when they fail [329]. Robotic surgery is now well established. Mechanical hearts are already in use. Experiments with on-line control of blood sugar are being performed [520,332], as is automatic control of anesthesia, to mention a few examples.

Control subsystems in animals are being explored. The behaviors of insects and birds are being investigated by wind-tunnel and free-flight experiments. Attempts have also been made to build artificial devices that mimic animals. Some of the more basic functions, such as standing, walking and running, can now be performed by robots [678]. Some of these functions are simple control tasks involving stabilization and regulation, but there are many more complicated tasks that require cognition. Interesting efforts in this direction have been made by Professor Hiroshi Ishiguro at Osaka University, who has designed several humanoid robots [23]. Experiments with synthetic biology are also being performed at the molecular level [13].

There has been increasing interest in control and systems biology [376,327,157,240]. Rather than taking a reductionist approach consisting of studying an isolated entity, systems biology, which originated around 1988, aims to understand how the components interact as dynamical systems, whether at the cell or organ level. It thereby aims to unravel the complexity of biological and disease networks. A quote from the Institute for Systems Biology [218] summarizes it thus:

Even the simplest living cell is an incredibly complex molecular machine. It contains long strands of DNA and RNA that encode the information essential to the cell’s functioning and reproduction. Large and intricately folded protein molecules catalyze the biochemical reactions of life, including cellular organization and physiology. Smaller molecules shuttle information, energy, and raw materials within and between cells, and are chemically transformed during metabolism. Viewed as a whole, a cell is like an immense city filled with people and objects and buzzing with activity.

The interaction between control engineers and biologists is increasing, and new academic departments and educational programs are being established at major universities.


5.10 Economics

There are common interests between economists and control engineers in game theory, input-output models, stochastic control, optimization and system identification (econometrics). The economists Simon, Nash and Arrow have already been mentioned in this paper.

Early work in economics was done by Adam Smith and John Maynard Keynes. Keynes’s work was largely conceptual, but he also introduced simple models that were important for emerging from the Great Depression in the 1930s, such as the notion of multipliers, which indicate the impact of government investment on GDP. One way to assess research in economics is to look at the works that have been awarded the Economics Prize since the first one in 1969. Several have strong connections to control: the 1978 and 1986 Prizes on decision-making, the 1994 and 2005 Prizes for game theory, the 1997 Prize on the evaluation of derivatives, the 2004 Prize for the driving forces behind business cycles, and the 1969, 1980, 1989 and 2003 Prizes for modeling and time series analysis.

Economics influences us all and requires our attention, and economists have been well aware of the role of control for a long time [374]. However, the economic system is a large, complex, global, distributed, dynamic system with many actors: governments, banks, investment banks, companies and individuals. Governments control by laws and taxes, central banks set interest rates and control the money supply, and companies and individuals buy, sell and invest. The different actors have widely different goals. Behavioral economics shows that individual decisions can be based on emotional and social aspects. The system oscillates in business cycles and there are occasional crises, clearly pointing to control problems.

Krugman, who received the 2008 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (the Economics Prize), says the following in his book [396]:

“We have magneto trouble,” said John Maynard Keynes at the start of the Great Depression. Most of the economic engine was in good shape, but a crucial component, the financial system, wasn’t working. He also said this: “We have involved ourselves in a colossal muddle, having blundered in the control of a delicate machine, the workings of which we do not understand.” Both statements are as true now as they were then.

Comparing this with the quote by Wilbur Wright on the difficulties of balancing and steering airplanes in Section 2.5, it is clear that leading economists realize that there are severe control problems in economics.

6 The Interplay of Theory and Applications

Feedback control is a key component of an amazingly broad range of applications, in fact touching upon almost everything in the modern world. The theory of control is a similarly deep field, drawing upon a broad range of mathematics and sometimes even contributing to it. One can ask how such a broad range of applications and deep theory came to be. The answer is rich with texture, as expounded on in this paper.

Control systems need enabling technologies in order to be implementable. The availability of such technologies, and advances in technology, sometimes revolutionary advances, have played a key role in shaping the evolution of control. An important role has also sporadically been played by motivating grand-challenge applications that have led to great societal, i.e., government and industrial, investment in the development of both control technology and theory. The development of the modern research university system, with systematic research funding, has also played a key role in fostering academic research. And, of course, there have been visionaries and deep researchers who have been at the front line.

The evolution of the field has not been smooth. There were several fits and starts in the process that cumulatively resulted, over the long haul, in advances in both practice and theory. The “gap between theory and applications” has been a dynamic process which has frequently raised controversy in the community. In a nuanced editorial in 1964 [41], Axelby observes that:

Certainly some gap between theory and application should be maintained, for without it there would be no progress. ... It appears that the problem of the gap is a control problem in itself; it must be properly identified and optimized through proper action.

There were periods of time when applications were ahead of theory, with success not resulting until key theoretical breakthroughs had been made. At other times, the gap was in the reverse direction. Theory was developed in an open-loop fashion, without feedback from real-world implementation and application, investigating what might be feasible applications in the future and suggesting imaginative possibilities, even though the technology for implementation was not yet ripe enough or the constraints fully understood. The field has seen both application pull of theory and theory push of applications. One could say that this gap, in whichever direction, has been a source of creative tension that ultimately led to advances benefiting both theory and applications.

Understanding the interplay of these diverse interactions provides some insight into how engineering research, development and knowledge have evolved vis-à-vis control, a key pillar of the modern technological era.


There have been several periods when applications were developed without much theory. In 1933, one of the leading actors in process control, Ivanoff, said [65, p. 49]:

The science of the automatic regulation of temperature is at present in the anomalous position of having erected a vast practical edifice on negligible theoretical foundations.

Even today, PID control is enormously successful; it is one of the simplest ways to benefit from the power of feedback. In ship steering, the tinkerer Sperry outdid the theoretician Minorsky, as recounted in [65]. A similar situation prevailed in early flight control [465, p. 5]:

... they seemed to have made progress with a minimum amount of mathematics until after the end of the 1939-1945 war. ... During roughly the first 50 years of aviation’s history, the study of the dynamics of aircraft and their control systems was of negligible interest to designers, who learned to get by with rules of thumb ... This was in spite of the fact that a mathematical theory for the stability of the unattended motion and of the aircraft’s response to control was developed at an early date. Very fortunately, wartime pressures produced two developments that fundamentally altered techniques for the design of automatic flight control systems. The first of these was the theory of servomechanisms; the second was the electronic computer. Analysis and simulation are today the twin pillars on which the entablature of aircraft flight control system design stands.
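The PID law, mentioned above as one of the simplest ways to benefit from feedback, can itself be sketched in a few lines. This is a textbook positional form; the gains and the first-order test process are illustrative assumptions, not any particular industrial implementation:

```python
class PID:
    """Minimal textbook PID controller in positional form."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt                  # integral action removes bias
        d = 0.0 if self.prev_err is None else (e - self.prev_err) / self.dt
        self.prev_err = e
        return self.kp * e + self.ki * self.integral + self.kd * d

# Regulating a first-order process y' = -y + u to a setpoint of 1:
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)
    y += (-y + u) * 0.01        # Euler step of the plant
```

Even this bare version, without anti-windup or filtering of the derivative, shows why the “vast practical edifice” could be erected with so little theory: the law is simple to implement and to tune by hand.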

The gradual evolution over several decades from an application of feedback to a broad and deep theoretical framework for its analysis and design can be clearly seen in the case of the centrifugal governor. Used early on in windmills in 1745 [460], and subsequently by Watt in steam engines in the 1780s, it was originally a proportional controller, with integral and derivative action subsequently added [65]. About a century later, Vyshnegradskii [670] and Maxwell [456] initiated a theoretical investigation. This led to the work of Hurwitz [320] and Routh [569] on stability analysis.
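For a third-order characteristic polynomial, the kind that arises in the governor analyses of Maxwell and Vyshnegradskii, the Routh-Hurwitz conditions reduce to a one-line test; a minimal sketch:

```python
def hurwitz_stable_cubic(a2, a1, a0):
    """Routh-Hurwitz test for s^3 + a2*s^2 + a1*s + a0: all roots lie in
    the open left half-plane iff every coefficient is positive and
    a2*a1 > a0."""
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0

# (s + 1)^3 = s^3 + 3s^2 + 3s + 1 is stable:
print(hurwitz_stable_cubic(3, 3, 1))   # True
# s^3 + s^2 + s + 2 has a right-half-plane pair (a2*a1 = 1 < a0 = 2):
print(hurwitz_stable_cubic(1, 1, 2))   # False
```

The appeal of the criterion, then as now, is that stability is decided from the coefficients alone, without computing any roots.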

In a similar vein, the challenge of designing repeaters for long-distance telephony led to the invention of the feedback amplifier by Black [83] in 1927, though without a theory. In the absence of a fundamental theoretical understanding, this was a very difficult technology to employ. Difficulties with instability encountered in the lab inspired Bode and Nyquist to develop the theory of the feedback amplifier [507,93]. There was one other extremely important factor in this case: the presence of a powerful corporation with a research and development lab, AT&T. All three, Black, Nyquist and Bode, were its employees. This is a supreme example of the success of a large concentrated effort, in this case by a monopoly, in bringing to bear sufficient resources over a long period to solve fundamental application challenges.

Another example in the same vein was the development of fire control, originally for naval warfare and subsequently for anti-aircraft fire. The latter received sustained support from the US Government in a project led by Vannevar Bush, and eventually led to the development of servomechanism theory, as described in Section 3. In his earlier work on power system networks at MIT, Bush [684] had observed that:

Engineering can proceed no faster than the mathematical analysis on which it is based. Formal mathematics is frequently inadequate for numerous problems pressing for solution, and in the absence of radically new mathematics, a mechanical solution offers the most promising and powerful attack.

Similarly, the cold war air defense network Semi-Automatic Ground Environment (SAGE) eventually led to the work on sampled-data systems of Ragazzini, Jury and Franklin [553,352] (Section 3.1). Later, flight control with more stringent requirements, flowing from pushing the flight envelope, led to work on multivariable stability margins, i.e., robustness [571] (Section 4.13). More recently, the need to design safe and reliable embedded systems, e.g., for automobiles, is driving work on model-based development, validation and verification.

The development of model predictive control is another interesting instance of interaction, in this case serendipitous, between theory and practice. Richalet [564] solved a range of practical problems in a discrete-time setting by calculating the optimal trajectory, but using only the initial portion of it, and then recomputing the trajectory after each sample. This procedure was called receding horizon control; it had also been investigated in [408]. Later, two chemical engineers, Charlie Cutler and Brian Ramaker, were running a refinery during a strike. A key problem was that several control variables had to be set during a grade change. Cutler and Ramaker solved the problem by first determining the steady-state gain experimentally. Given the desired changes in the output, the appropriate changes in the controls were then obtained by matrix inversion. To make faster changes they measured the multivariable pulse response and computed a set of future controls that would change the state according to a desired trajectory. They applied the first control signal and repeated the procedure. Not being versed in control theory, they called the impulse response the “dynamic matrix,” and the design procedure was called dynamic matrix control (DMC) [158,546]. When the strike was over, Cutler and Ramaker returned to research and development, and started refining the method. They sponsored research and arranged workshops at Shell to interact with academia [545,544], which in turn initiated research on stability and robustness [241,480,457,60].
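The receding-horizon procedure described above can be sketched in miniature: predict the future output from measured step-response coefficients, fit a control move by least squares, apply only the first move, and repeat. The plant, the horizon and the restriction to a single move per step are illustrative assumptions, far simpler than industrial DMC:

```python
a, b = 0.8, 0.2          # first-order plant y[k+1] = a*y[k] + b*u[k]
P = 10                   # prediction horizon
# Step-response coefficients s_i = b*(1 - a**i)/(1 - a), here computed from
# the model; Cutler and Ramaker measured them experimentally.
s = [b * (1 - a**i) / (1 - a) for i in range(1, P + 1)]

def dmc_move(y_now, u_prev, r):
    """One receding-horizon step: least-squares fit of a single future
    control move to the predicted error, returning the new control."""
    # Free response: predicted outputs if the input is held at u_prev
    free = [a**i * y_now + b * u_prev * (1 - a**i) / (1 - a)
            for i in range(1, P + 1)]
    err = [r - f for f in free]
    # Minimize sum_i (err_i - s_i * du)^2 over the single move du
    du = sum(si * ei for si, ei in zip(s, err)) / sum(si * si for si in s)
    return u_prev + du

# Closed loop: drive the output to the setpoint r = 1, applying only the
# first computed move at each step and then re-solving.
y, u = 0.0, 0.0
for _ in range(50):
    u = dmc_move(y, u, 1.0)
    y = a * y + b * u
```

The essential idea survives even at this scale: the optimization is redone at every sample, so model error and disturbances are corrected by feedback rather than by the accuracy of any one plan.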


On the other hand, there was a long period in the second half of the twentieth century when theoretical research in control was exuberant. When the digital computer came to be introduced into the feedback loop, as noted in Section 4, it caused a platform revolution. It obviously needed a different kind of theory, state-space theory, from the frequency domain theory that had been so appropriate for harnessing the feedback amplifier. In fact, for his paper [360], Kalman claimed that “This paper initiates study of the pure theory of control.” The state-space theory found immediate application. Swerling applied his filter [634] to the estimation of satellite trajectories using ground-based measurements [269], and S. F. Schmidt of NASA’s Ames Research Center applied Kalman’s filter [368] to the circumlunar navigation problem, and developed it for real-time on-board navigation in the Apollo mission [269]. The development of state-space theory became a very active research topic in academia from about 1960 for almost four decades. The mathematics and control group at the Research Institute for Advanced Studies (RIAS) that Solomon Lefschetz began leading in 1957 played a major role, the center having been founded in 1955 to conduct work similar to what was being done in the Soviet Union [269]. Control researchers delved fully into studying linear differential/difference equations modeling linear systems, and beyond, into stochastic systems, nonlinear systems, decentralized systems, and distributed parameter systems. This was facilitated by well-funded research programs of the U.S. Government for university researchers, a legacy of Vannevar Bush’s wartime efforts in defense research and development and his subsequent advocacy for the creation of the National Science Foundation in the U.S.
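The structure of the Kalman filter applied in those trajectory-estimation problems is visible even in the scalar case. The sketch below, a random-walk state observed in noise with illustrative variances, shows the predict-and-correct cycle; it is of course nothing like the actual navigation code:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x[k+1] = x[k] + w, z[k] = x[k] + v,
    with process variance q and measurement variance r."""
    x, p = x0, p0
    for z in measurements:
        p = p + q                 # predict: uncertainty grows with time
        k = p / (p + r)           # Kalman gain trades model against data
        x = x + k * (z - x)       # correct with the measurement innovation
        p = (1 - k) * p           # posterior variance shrinks
    return x

random.seed(1)
true_value = 5.0
zs = [true_value + random.gauss(0.0, 0.5) for _ in range(300)]
estimate = kalman_1d(zs)          # converges close to the true value
```

The recursive form, updating only a state estimate and its variance at each measurement, is what made the filter implementable on the limited on-board computers of the era.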
Many imaginative possibilities were investigated. A rich theory of model-based systems began to emerge. Some important applications also emerged, such as system identification and adaptive control, as described in Section 4. This was particularly the case in process control, where there was a rich tradition of experimentation.

The theory at this time was in some respects ahead of technology, since many of the ideas explored could not yet be implemented, and had to await the development of powerful computing, networking, etc. The theory explored the limits of the feasible, whether due to the infinite-dimensionality of the resulting solution, the curse of dimensionality of dynamic programming, or more broadly the complexity of either the solution or its implementation. For example, the class of nonlinear filtering problems for which the optimal solution is finite dimensional was carefully investigated. Efforts such as this served a valuable purpose in calibrating what was feasible and were important in themselves, à la information theory, even though they did not result in major applications. However, theory that earlier showed the limits to explicit solution was revisited in subsequent decades after the available computational power had greatly increased. An example is the control of partially observed systems, which is now one of the mainstays of machine learning and artificial intelligence [596], as noted in Section 4.10.

For the reasons noted above, this extremely fertile period for theoretical research also led to control theory developing in an “open-loop” fashion, without constant feedback from real-world applications against which it could be tested. There was a time gap between theoretical ideas and their testing against reality, if they could be tested at all, for, in some cases, the implementation technology was not yet ripe. Important shortcomings were only discovered after the theory was tried in an application. An example is the lack of multivariable stability margins for linear quadratic Gaussian control [170], alluded to in Section 4.13, discovered in simulation testing of submarines [571]. In fact, robustness to model uncertainty was broadly one of the major shortcomings of early model-based state-space theories. That this was true even in adaptive control, when the model class within which parameters are fitted does not contain the true system, became a cause célèbre [567] that subsequently resulted in frenetic activity in “robustifying” adaptive control. Eventually there developed a theory that encompassed model uncertainty, culminating in a paper that won the IEEE W.R.G. Baker Award [172].

In the golden age of control theory research, control became well established in academia. There was a critical mass of theoretical researchers to dig deeply into many areas, facilitating the formation of a strong theory. A paper from Berkeley [73], entitled “Moving from Practice to Theory: Automatic Control after World War II,” describes this phenomenon in detail. Inevitably, there was an attitude of l’art pour l’art. Sometimes much effort was also devoted to less important problems, forgetting or oblivious of the following words from John von Neumann [347]:

I think that it is a relatively good approximation to truth - which is much too complicated to allow anything but approximations - that mathematical ideas originate in empirics. But, once they are conceived, the subject begins to live a peculiar life of its own and is ... governed by almost entirely aesthetical motivations. In other words, at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degradation. Whenever this stage is reached the only remedy seems to me to be the rejuvenating return to its source: the reinjection of more or less directly empirical ideas ...

An important factor influencing the evolution of control has of course been the availability of technology for implementation. While the digital computer did spawn a revolution, computational power was initially limited, and there was no data networking to any appreciable extent. Thus, in many respects, in this era the theoretical research led technology, as noted above, and naturally needed course corrections as technological possibilities and limitations became clearer. Nevertheless, the imaginative research left the field in a good creative state to pursue the opportunities that have since opened up after revolutions in computational power, software, data networking, sensors and actuators, following the microelectronics revolution realized by four incessant decades of Moore’s Law.

It is impossible to list the applications of control in their entirety; a crude random sampling yields the following: process control, telephony, cellular phones, power systems, aircraft and spacecraft, the Internet, computer control of fuel injection, emission control, cruise control, braking and cabin comfort in automobiles, production and inventory control, missile guidance, robotics, appliances, semiconductor wafer fabs, active noise canceling, automated highways, atomic force microscopes, quantum control, mass spectroscopy, and large space structures. At present, almost every technology has feedback control at its core. As an exemplar, a recent article [527] describing the efforts of the most recent awardee of the IEEE Medal of Honor, Irwin Jacobs, a co-founder of Qualcomm, has this to say:

... he envisioned a rapid-response system: CDMA phones would monitor the power of the signal coming in from the tower; if the signals suddenly dropped, say, when a user walked into a building, the phone would crank up its transmitting signal, figuring that if it was having trouble hearing the tower, then the tower would have trouble hearing the phone. Next, equipment at CDMA towers would take a handful of received bits and calculate an average signal strength; if that signal fell above or below a preset threshold, then the tower would prompt the phone to lower or raise its power. ... “Someone else might have looked at all the complexities and the concerns and concluded that it just wasn’t possible.”

It is not for nothing that control, though present everywhere, is called a hidden technology.
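The tower-commanded loop Jacobs describes can be sketched as a simple threshold feedback. All of the numbers below are illustrative assumptions, not Qualcomm's actual parameters:

```python
def power_control(path_loss_db, target_rx_db=7.0, steps=200, step_db=1.0):
    """Sketch of closed-loop power control: the tower compares averaged
    received signal strength to a threshold and commands the phone to
    raise or lower its transmit power by a fixed step."""
    tx_db = 0.0
    for _ in range(steps):
        rx_db = tx_db - path_loss_db       # level received at the tower
        if rx_db < target_rx_db:
            tx_db += step_db               # tower commands: raise power
        else:
            tx_db -= step_db               # tower commands: lower power
    return tx_db

# With 100 dB of path loss, transmit power climbs to about
# target + loss = 107 dB and then dithers one step around it.
tx = power_control(100.0)
```

Even this crude bang-bang version captures the point of the design: each phone is driven to transmit just enough power to clear the threshold, which is what limits mutual interference in the cell.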

Besides engineering and technology, there are many uses of feedback and feedforward in other areas too. In economics, central planning could perhaps be regarded as an example of feedforward, while a market economy could be regarded as an example of feedback. Tustin wrote a book [653] on applications of control to the economy as early as 1953. At least in technical systems, it is known that the best results are obtained by combining feedback and feedforward. Control is also entering unusual fields like internet advertising [371] and art [159].

And then there is biology, perhaps on the edge of a revolution, where the unraveling and harnessing of omnipresent feedback processes is the dream of mankind.

With all the aforementioned advances, the stage is set for large-scale system building. The twenty-first century could well be such an age of large-scale system building. Not only are we running into resource and environmental limitations, whether in energy or water, but at the same time we are also facing great demands for modern transportation, energy, water, health care services, etc., from large segments of the globe that did not previously have access to such services. Major efforts across the globe are targeted at grand challenges vis-à-vis the smart electricity grid, automated transportation, health care, etc., for all of which sensing and actuation, viz., control, is key.

7 Concluding Remarks

Control is a field with several unique characteristics. Its evolution is a veritable microcosm of the history of the modern technological world. It provides a fascinating interplay of people, projects, technology, and research.

Control transcends the boundaries of traditional engineering fields such as aeronautical, chemical, civil, electrical, industrial, mechanical and nuclear engineering. Its development was triggered not by a single technological area but by several technological projects, such as fire control, telephony, power systems, flight control, space exploration, and robotics, at different times in its evolution. Control has also had impact on several non-engineering fields, such as biology, economics, medicine and physics. Concepts and ideas have migrated between the fields.

The development of control has benefited greatly from several grand challenges, e.g., transcontinental telephony, fire control, and landing a man on the moon. Its development also benefited from the concentrated power of monopolistic industries and the government. It further benefited from the great post-Second World War boom of academic research.

Different application areas have emphasized different aspects of control, leading to the development of a rich framework for control. In turn, closing the loop, the evolution of control has influenced, radically in many cases, the development of each of these areas.

Control is the first systems discipline. It recognized the commonality of issues at the heart of many engineering problems. The systems viewpoint – make the output of a “plant” or entity behave in a desirable manner by manipulating its input – is a unifying viewpoint that provides great clarity to the design process irrespective of the field of application.
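To make the field-independence of that viewpoint concrete, here is a minimal sketch (our illustration, not from the paper; all numbers are invented) in which one and the same feedback law is wrapped around two unrelated "plants", seen purely through their input/output interface:

```python
# A minimal sketch (ours, not from the paper): the same proportional
# feedback law drives two unrelated "plants" -- a room-temperature model
# and a first-order motor model -- toward a setpoint. The controller
# knows nothing about either plant beyond its input and output.

def p_controller(setpoint, gain):
    """Proportional control: the input is proportional to the error."""
    return lambda y: gain * (setpoint - y)

def run(plant_step, x0, control, steps=200):
    """Close the loop: feed the measured output back into the plant input."""
    x = x0
    for _ in range(steps):
        x = plant_step(x, control(x))
    return x

# Two plants from different fields, with the same input/output interface.
thermal = lambda T, u: T + 0.05 * (20.0 - T) + 0.1 * u  # room heating
motor = lambda w, u: 0.95 * w + 0.2 * u                 # motor speed

ctrl = p_controller(setpoint=50.0, gain=2.0)
temp = run(thermal, 20.0, ctrl)   # settles below 50: pure P control
speed = run(motor, 0.0, ctrl)     # leaves a steady-state offset
```

Both loops settle toward the setpoint, with the steady-state offset typical of pure proportional control; the point is that the design abstraction – plant, input, output, error – is the same regardless of what physical system sits inside `plant_step`.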

The enabling technology for implementation had a major impact on the evolution of the techniques for control design and the underlying theory, as witnessed by the development of frequency domain theory in the age of analog computation, and later by the development of the state-space approach and multi-stage decision making in the era of digital computing.

Control is a field whose progress has been punctuated by several key theoretical contributions. These have involved a variety of mathematical sub-disciplines, such as complex analysis, differential equations, probability theory, differential geometry, optimization and graph theory. As such, control is currently one of the most mathematized fields of engineering.

The research in the field has resulted in an exceedingly rich collection of advanced and specialized books covering several subfields. The range includes adaptive control, classical control, discrete-event systems, differential games, digital control, distributed parameter systems, dynamic programming, estimation, identification, linear systems, multi-variable control, networked systems, nonlinear systems, optimal control, robust control, sliding mode control, stability, and stochastic control. There are even encyclopedias of control.

Control has become a central component of many modern technologies, even though often hidden from view. In fact it is hard to conceive of any technology dealing with dynamic phenomena that does not involve control.

There has been a dynamic gap between theory and practice. At times, applications consisted mainly of tinkering. At times it was the severe difficulties encountered in practice that led to dramatic theoretical breakthroughs which were extremely relevant and important in practice; an example being the work of Bode and Nyquist. At other times, incipient technological possibilities opened up new fields of theoretical research. This resulted in a broad exploration of systems theoretic concepts such as stability, controllability, information structures, optimality, complexity and robustness. At times, the exploration developed in an open-loop way without feedback from practical applications, and sometimes as a mathematical endeavor. In some cases, technology was not yet ripe to implement and test out some of the concepts being explored.

Where are we now, and what may we learn from history? How may the past provide some guidance and feedback for the future? We present our viewpoint.

On the technological side, with the dramatic evolution of sensors, actuators, networks, computational hardware and software, it has become feasible to deploy and implement large and complex systems. With this considerable strengthening of the implementation capabilities, the theory-practice gap needs to be narrowed. This may need to happen on both fronts – more theory to solve difficulties encountered in applications, as well as more experimentation to determine what the difficulties are and thereby identify problems that need a solution.

On the one hand, where the problems are well understood, there need to be strong theoretical efforts to develop solutions. An example is the need for formal methods. A good theory can obviate the need for massive simulation-based testing that is very expensive in both cost and time. Design, implementation, maintenance and upgrading of complex systems cannot be done safely without formal methods that go all the way from requirements to the final product.

On the other hand, there needs to be greater experimentation to understand the bottlenecks, calibrate purported solutions, and to understand what works or improves performance and what does not. An example is the goal of building autonomous systems, where prior distributions of uncertainties or model classes are not well understood. Experimentation in such situations is important for learning about the real world, and is intended to be revelatory. It can lead to a relevant theory.

Experimentation is different from demonstrations. Experimentation involves two-way dynamic interaction between theories or models and practice; i.e., a feedback loop. Demonstrations, on the other hand, are just that – they demonstrate that a particular solution performs as claimed. They are not a substitute for experiments or genuine laboratories.

It is important for research to investigate applications, being guided by them, by what works and what does not. Awareness of the real bottlenecks for performance, robustness and reliability, and of how to shorten the cycle of design and deployment, is important.

Control systems researchers should take full systems responsibility. In fact, as history has shown, for example in the case of the feedback amplifier, the recognition of what is really the problem is itself a major accomplishment in research. Such awareness can then lead to relevant advances that have deep impact on practice.

Pedagogy also needs to play an important role. The field should educate students who are capable of solving the whole problem from conceptual design to implementation and commissioning. Due to the convergence of control, communication and computing, students will also need to be knowledgeable across a broad front of all these fields, as well as mathematics. We must also leave space for students to acquire knowledge of fields such as biology, where advances have been extraordinarily rapid. How all this can be accomplished within the time-limited confines of an undergraduate curriculum requires a thorough examination. At the graduate level, one also has the additional challenge of preserving depth, since that is critical for research, and in fact has been an important strength of the field.

Book writing has an important role to play. The tremendous research advances of the past seven decades must be distilled with the benefit of hindsight into compact books. We need to compress current knowledge, emphasizing the fundamentals. This needs to be done at both the undergraduate and graduate levels.

There are also challenges with respect to how control is dispersed in several engineering departments. Since education and research in engineering grew out of specific technologies such as mining, building of roads and dams, construction of machines, generation and transmission of electricity, industrial use of chemistry, etc., it led to an organization of engineering schools based on departments of mining, civil engineering, mechanical engineering, electrical engineering, chemical engineering, etc. This served very well at the end of the 19th century and the beginning of the 20th century. But is this the optimal structure in the twenty-first century to teach an increasingly powerful systems discipline such as control that cuts across these areas?

The field of control has a bright future since there are many grand challenges. There is great planet-wide demand for advanced systems for transportation, health care, energy, water, etc., which have to be engineered in a resource-limited environment. Biology is another major frontier of research. The twenty-first century could well be the age of large system building.

Acknowledgements

The authors are grateful to Leif Andersson, Karl-Erik Arzen, Tamer Basar, John Baillieul, Bo Bernhardsson, John Cassidy, Helen Gill, Aniruddha Datta, Johan Eker, Y. C. Ho, Charlotta Johnsson, David Mayne, Petar Kokotovic, Sanjoy Mitter, Richard Murray, Anders Rantzer, Pravin Varaiya, Eva Westin and the several anonymous reviewers for input and feedback on this paper.

References

[1] O. M. Aamo and M. Krstic. Flow Control by Feedback. Springer Verlag, London, 2002.

[2] Norman Abramson. The ALOHA system: Another alternative for computer communications. In Proceedings of the November 17-19, 1970, Joint Computer Conference, pages 281–285, 1970.

[3] ACCESS. About ACCESS Linnaeus Centre, 2008. http://www.kth.se/en/ees/omskolan/organisation/centra/access/aboutaccess.

[4] M. A. Aizerman. On a problem concerning stability in the large of dynamical systems. Uspehi Mat. Nauk, 4:187–188, 1949.

[5] M. A. Aizerman. Lectures on the Theory of Automatic Control. Fizmatgiz, Moscow, 1958.

[6] R. Akella and P. R. Kumar. Optimal control of production rate in a failure prone manufacturing system. IEEE Transactions on Automatic Control, 31(2):116–126, 1986.

[7] A. Al Alam, A. Gattami, K. H. Johansson, and C. J. Tomlin. When is it fuel efficient for heavy duty vehicles to catch up with a platoon. In Proc. IFAC World Congress, Milan, Italy, 2013.

[8] K. Almstrom and A. Garde. Investigation of frequency analysis of the Swedish network. CIGRE, pages 23–36, 1950. Report 315.

[9] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine. The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138(1):3–34, 1995.

[10] Rajeev Alur and David L. Dill. A theory of timed automata. Theoretical Computer Science, 126:183–235, 1994.

[11] B. D. O. Anderson, R. R. Bitmead, C. R. Johnson, P. V. Kokotovic, R. L. Kosut, I. Mareels, L. Praly, and B. Riedle. Stability of Adaptive Systems. MIT Press, Cambridge, MA, 1986.

[12] B. D. O. Anderson and J. B. Moore. Linear Optimal Control. Prentice-Hall, Englewood Cliffs, NJ, 1971. Dover edition 2007.

[13] Ernesto Andrianantoandro, Subhayu Basu, David K. Karig, and Ron Weiss. Synthetic biology: new engineering rules for an emerging discipline. Molecular Systems Biology, 2(1), 2006.

[14] A. A. Andronov. A. Vyshnegradskii and his role in the creation of automatic control theory (100 years from the day of publication of I. A. Vyshnegradskii's work on automatic control theory). Avtomatika i Telemekhanika, 4:5–17, 1978.

[15] A. A. Andronov, A. A. Vitt, and S. E. Khaikin. Theory of Oscillators. Dover, New York, NY, 1937. Translation of second Russian edition by Dover 1966.

[16] Anon. Institute of Control Sciences. Formerly Institute of Automation and Remote Control.

[17] Anon. The Instrumentation Laboratory (Draper Lab).

[18] Kenneth J. Arrow, Theodore Harris, and Jacob Marschak. Optimal inventory policy. Econometrica: Journal of the Econometric Society, pages 250–272, 1951.

[19] ARTIST FP5. http://www.artist-embedded.org/artist/ARTIST-FP5.html, 2006.

[20] ARTIST2. A Network of Excellence on Embedded Systems Design.

[21] ArtistDesign. European Network of Excellence on Embedded Systems Design.

[22] Karl-Erik Arzen, Anton Cervin, Johan Eker, and Lui Sha. An introduction to control and scheduling co-design. In Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000.

[23] Minoru Asada, Karl F. MacDorman, Hiroshi Ishiguro, and Kuniyoshi. Cognitive developmental robotics as a new paradigm for the design of humanoid robots. Robotics and Automation, 37:185–193, 2001.

[24] Uri M. Ascher and L. R. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia, 1998.

[25] Asea. Supporting and replacing Novatune systems. http://www.firstcontrol.se/html/fn_novatune.html.

[26] W. Ross Ashby. Design for a Brain. Chapman & Hall, London, 1952.

[27] W. Ross Ashby. An Introduction to Cybernetics. Chapman & Hall, London, 1956.

[28] Modelica Association. Modelica. http://www.modelica.org.

[29] Karl J. Astrom. Optimal control of Markov decision processes with incomplete state estimation. Journal of Mathematical Analysis and Applications, 10(1):174–205, 1965.

[30] Karl J. Astrom. Automatic control - the hidden technology. In Paul Frank, editor, Advances in Control - Highlights of ECC'99, pages 1–29, London, 1999. European Control Council, Springer-Verlag.

[31] Karl Johan Astrom. Computer control of a paper machine—An application of linear stochastic control theory. IBM Journal of Research and Development, 11(4):389–405, 1967.

[32] Karl Johan Astrom. Early control development in Sweden. European Journal of Control, 13:7–19, 2007.

[33] Karl Johan Astrom and B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 2, pages 2011–2016, 2002.

[34] Karl Johan Astrom and Torsten Bohlin. Numerical identification of linear dynamic systems from normal operating records. In Proc. IFAC Conference on Self-Adaptive Control Systems, Teddington, UK, 1965.

[35] Karl Johan Astrom and Tore Hagglund. PID Controllers: Theory, Design, and Tuning. Instrument Society of America, Research Triangle Park, North Carolina, 1995.

[36] Karl Johan Astrom and Bjorn Wittenmark. On self-tuning regulators. Automatica, 9:185–199, 1973.

[37] Karl Johan Astrom and Bjorn Wittenmark. Adaptive Control. Addison-Wesley, Reading, Massachusetts, 1989. Second edition 1995, Dover edition 2008.

[38] M. Athans and P. L. Falb. Optimal Control. McGraw-Hill, New York, NY, 1966. Dover edition 2006.

[39] Derek P. Atherton. Nonlinear Control Engineering—Describing Function Analysis and Design. Van Nostrand Reinhold Co., London, UK, 1975.

[40] AUTOSAR. AUTOSAR: Automotive open systems architecture. http://www.autosar.org, 2013.

[41] George S. Axelby. The gap - Form and future (Editorial). IEEE Transactions on Automatic Control, 9(2):125–126, April 1964.

[42] George S. Axelby. The IFAC Journal - A New Automatica. Automatica, 5:5–6, 1969.

[43] George S. Axelby, Ole I. Franksen, and Nathaniel B. Nichols, editors. History of control systems engineering. IEEE Control Systems Magazine, 4:4–32, 1984.

[44] Radhakisan Baheti and Helen Gill. Cyber-physical systems. The Impact of Control Technology, pages 161–166, 2011.

[45] John Baillieul and Panos J. Antsaklis. Control and communication challenges in networked real-time systems. Proceedings of the IEEE, 95(1):9–28, 2007.

[46] Bassam Bamieh and Mohammed Dahleh. Energy amplification in channel flows with stochastic excitation. Phys. Fluids, 13:3258–3260, 2001.

[47] Thomas H. Banks, R. H. Fabiano, and Kiyosi Ito. Numerical Linear Algebra Techniques for Systems and Control. Soc. for Industrial and Applied Math, Providence, RI, 1993.

[48] Tamer Basar and Pierre Bernhard. H∞-Optimal Control and Related Minimax Design Problems - A Dynamic Game Approach. Birkhauser, Boston, MA, 1991. Second edition 1995.

[49] Tamer Basar and Geert Jan Olsder. Dynamic Noncooperative Game Theory. Academic Press, New York, NY, 1982. SIAM Classics in Applied Mathematics 1999, revised from the 2nd 1995 edition.

[50] Tamer Basar, editor. Control Theory, Twenty-Five Seminal Papers (1932-1981). IEEE Press, Piscataway, NJ, 2001.

[51] Forest Baskett, K. Mani Chandy, Richard R. Muntz, and Fernando G. Palacios. Open, closed, and mixed networks of queues with different classes of customers. Journal of the ACM (JACM), 22(2):248–260, 1975.

[52] Preston R. Bassett. Sperry's forty years in the progress of science. Sperryscope, 12(1), 1950.

[53] J. Bechhoefer. Feedback for physicists: A tutorial essay on control. Reviews of Modern Physics, 77:783–836, 2005.

[54] Arthur Becker Jr., P. R. Kumar, and Ching-Zong Wei. Adaptive control with the stochastic approximation algorithm: Geometry and convergence. IEEE Transactions on Automatic Control, 30(4):330–338, 1985.

[55] R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, Princeton, N. J., 1961.

[56] R. Bellman, I. Glicksberg, and O. A. Gross. Some aspects of the mathematical theory of control processes. Technical Report R-313, The RAND Corporation, Santa Monica, Calif., 1958.

[57] Richard Bellman. A Markovian decision process. Technical report, DTIC Document, 1957.

[58] Richard Ernest Bellman. An introduction to the theory of dynamic programming. Technical Report R-245, Rand Corporation, 1953.

[59] Richard Ernest Bellman. Dynamic Programming. Princeton University Press, New Jersey, 1957.

[60] Alberto Bemporad, Manfred Morari, Vivek Dua, and Efstratios N. Pistikopoulos. The explicit linear quadratic regulator for constrained systems. Automatica, 38(1):3–20, 2002.

[61] Th. Benecke and A. W. Quick. History of German Guided Missiles Development. Appelhans & Co, Brunswick, Germany, 1957.

[62] V. E. Benes. Existence of optimal stochastic control laws. SIAM Journal on Control, 9(3):446–472, 1971.

[63] V. E. Benes. Exact finite-dimensional filters for certain diffusions with nonlinear drift. Stochastics: An International Journal of Probability and Stochastic Processes, 5(1-2):65–92, 1981.

[64] G. Bengtsson and B. Egardt. Experiences with self-tuning control in the process industry. In Preprints 9th IFAC World Congress, pages XI:132–140, Budapest, Hungary, 1984.

[65] S. Bennett. A History of Control Engineering 1800-1930. Peter Peregrinus Ltd., London, 1979.

[66] S. Bennett. A History of Control Engineering 1930-1955. Peter Peregrinus Ltd., London, 1993.

[67] Stuart Bennett. The emergence of a discipline: Automatic control 1940-1960. Automatica, 12:113–121, 1976.

[68] Stuart Bennett. Nicolas Minorsky and the automatic steering of ships. IEEE Control Systems Magazine, 4:10–15, 1984.

[69] Alain Bensoussan, Giuseppe Da Prato, Michel C. Delfour, and Sanjoy K. Mitter. Representation and Control of Infinite-Dimensional Systems: Vol. I. Birkhauser, Boston, MA, 1992.

[70] A. Benveniste and K. J. Astrom. Meeting the challenge of computer science in the industrial applications of control: An introductory discussion to the Special Issue. Automatica, 29:1169–1175, 1993.

[71] Albert Benveniste, Timothy Bourke, Benoit Caillaud, and Marc Pouzet. Non-standard semantics of hybrid systems modelers. Journal of Computer and System Sciences, 78(3):877–910, 2012.

[72] Bolt Beranek and Newman. A history of the ARPANET: The first decade.

[73] Sarah Bergbreiter. Moving from practice to theory: Automatic control after World War II. History of Science, University of California, Berkeley, 2005.

[74] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Boston, MA, 1996.

[75] T. Bielecki and P. R. Kumar. Optimality of zero-inventory policies for unreliable manufacturing systems. Operations Research, 36(4):532–541, 1988.

[76] G. M. Birtwistle, Ole Johan Dahl, Bjørn Myhrhaug, and Kristen Nygaard. SIMULA BEGIN. Auerbach Publishers Inc., 1973.

[77] C. C. Bissell. Control engineering and much more: Aspects of the work of Aurel Stodola. Measurement and Control, 22:117–122, 1989.

[78] C. C. Bissell. Pioneers of control - an interview with Arnold Tustin. IEE Review, 38:223–226, 1992.

[79] C. C. Bissell. Russian and Soviet contributions to the development of control engineering: a celebration of the Lyapunov centenary. Trans. Inst. MC, 14:170–178, 1992.

[80] C. C. Bissell. A history of automatic control. In S. Y. Nof, editor, Springer Handbook of Automation. Springer, 2009.

[81] Sergio Bittanti, Claudio Bonivento, Eduardo Mosca, Alberto Isidori, Giorgio Picci, and Roberto Tempo. Italy in IFAC - From dawn to our days. Committee for IFAC Congress in Italy, Rome, Italy, 2003.

[82] Sergio Bittanti and Michel Gevers, editors. On the dawn and development of control science in the XX-th century. European Journal of Control, 13:5–81, 2007.

[83] H. S. Black. Stabilized feedback amplifiers. Bell System Technical Journal, 13:1–18, 1934.

[84] H. S. Black. Inventing the negative feedback amplifier. IEEE Spectrum, pages 55–60, December 1977.

[85] P. F. Blackman. Extremum-seeking regulators. In J. H. Westcott, editor, An Exposition of Adaptive Control. Pergamon Press, Oxford, UK, 1962.

[86] D. Blackwell. Discrete dynamic programming. The Annals of Mathematical Statistics, 33:719–726, 1962.

[87] David Blackwell. Discounted dynamic programming. Ann. Math. Statist., 36:226–235, 1965.

[88] David Blackwell. Positive dynamic programming. In Proc. 5th Berkeley Symp., pages 415–418, 1967.

[89] David Blackwell. On stationary policies. J. Royal Stat. Soc. A, 133:46–61, 1970.

[90] John H. Blakelock. Automatic Control of Aircraft and Missiles. John Wiley, New York, NY, 1981. Second edition 1991.

[91] George J. Blickley. Modern control started with Ziegler-Nichols tuning. Control Engineering, 90:11–17, 1990.

[92] Hans Blomberg. Algebraic Theory for Multivariable Linear Systems. Academic Press, 1983.

[93] H. W. Bode. Relations between attenuation and phase in feedback amplifier design. Bell System Technical Journal, 19:421–454, July 1940.

[94] H. W. Bode. Network Analysis and Feedback Amplifier Design. Van Nostrand, New York, 1945.

[95] H. W. Bode. Feedback – The history of an idea. In Proceedings of the Symposium on Active Networks and Feedback Systems. Polytechnic Institute of Brooklyn, 1960.

[96] R. Boel and P. Varaiya. Optimal control of jump processes. SIAM Journal on Control and Optimization, 15(1):92–119, 1977.

[97] R. Boel, P. Varaiya, and E. Wong. Martingales on jump processes. I: Representation results. SIAM Journal on Control, 13(5):999–1021, 1975.

[98] Rene Boel and Geert Stremersch. Discrete Event Systems: Analysis and Control. Springer, New York, NY, 2012.

[99] V. Boltyanskii, R. Gamkrelidze, and L. Pontryagin. On the theory of optimal processes. Reports of the Academy of Sciences of the USSR, 110(1):7–10, 1956.

[100] D. C. Bomberger and B. T. Weber. Stabilization of servomechanisms. Technical Report M.M-41-110-52, Bell Telephone Laboratory, December 1941.

[101] W. A. Boothby. An Introduction to Differentiable Manifolds and Riemannian Geometry. Academic Press, New York, NY, 1975.

[102] Vivek S. Borkar. Optimal Control of Diffusion Processes. Longman Scientific & Technical, England, 1989.

[103] John L. Bower and Peter M. Schultheiss. Introduction to the Design of Servomechanisms. Wiley, New York, 1958.

[104] George E. P. Box and Gwilym Jenkins. Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, 1970.

[105] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, 1994.

[106] Maury Bramson. Instability of FIFO queueing networks. The Annals of Applied Probability, pages 414–431, 1994.

[107] Dennis Brandl. Design Patterns for Flexible Manufacturing. International Society of Automation (ISA), 2006. ISBN 978-1-55617-998-3.

[108] Pierre Bremaud. Point Processes and Queues: Martingale Dynamics. Springer, 1981.

[109] Pierre Marie Bremaud. A martingale approach to point processes. Electronics Research Laboratory, University of California, 1972.

[110] K. E. Brenan, S. L. Campbell, and L. R. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. North-Holland, Amsterdam, 1989. Also available in SIAM's Classics in Applied Mathematics series, No. 14, 1996.

[111] R. D. Brennan and R. N. Linebarger. A survey of digital simulation—Digital analog simulator programs. Simulation, 3, 1964.

[112] R. D. Brennan and M. Y. Silberberg. The System/360 continuous system modeling program. Simulation, 11:301–308, 1968.

[113] Arno P. J. Breunese and Jan F. Broenink. Modeling mechatronic systems using the SIDOPS+ language. In Proceedings of ICBGM'97, 3rd International Conference on Bond Graph Modeling and Simulation, Simulation Series, Vol. 29, No. 1, pages 301–306. The Society for Computer Simulation International, January 1997.

[114] R. W. Brockett. Finite Dimensional Linear Systems. John Wiley and Sons, Inc, New York, 1970.

[115] Roger W. Brockett. System theory on group manifolds and coset spaces. SIAM Journal on Control, 10:265–284, 1972.

[116] Roger W. Brockett. Nonlinear systems and differential geometry. Proceedings of the IEEE, 64(1):61–72, 1976.

[117] Roger W. Brockett. Hybrid models for motion control systems. Springer, 1993.

[118] Roger W. Brockett and Daniel Liberzon. Quantized feedback stabilization of linear systems. IEEE Transactions on Automatic Control, 45(7):1279–1289, 2000.

[119] E. Brockmeyer, H. L. Halstrom, and Arne Jensen. The Life and Works of A. K. Erlang. Danish Academy of Technical Sciences, Copenhagen, 1948.

[120] E. Brown and H. Rabitz. Some mathematical and algorithmic challenges in the control of quantum dynamics phenomena. Journal of Mathematical Chemistry, 31:17–63, 2002.

[121] G. S. Brown and D. P. Campbell. Principles of Servomechanisms. Wiley & Sons, New York, 1948.

[122] Gordon S. Brown and Donald P. Campbell. Control systems. Scientific American, 187:56–64, 1952.

[123] A. E. Bryson and Y. C. Ho. Applied Optimal Control: Optimization, Estimation and Control. Blaisdell Publishing Company, 1969. Revised edition by Taylor & Francis 1975.

[124] Arthur E. Bryson. Optimal programming and control. In Proc. IBM Scientific Computing Symposium Control Theory and Applications, White Plains, NY, 1966.

[125] Vannevar Bush. The differential analyzer. A new machine for solving differential equations. Journal of the Franklin Institute, 212(4):447–488, 1931.

[126] Vannevar Bush, F. D. Gage, and H. R. Stewart. A continuous integraph. Journal of the Franklin Institute, 203(1):63–84, 1927.

[127] R. L. Butchart and B. Shackcloth. Synthesis of model reference adaptive control systems by Lyapunov's second method. In Proceedings 1965 IFAC Symposium on Adaptive Control, Teddington, UK, 1965.

[128] C. A. Lovell. Continuous electrical computations. Bell Labs Record, 25:114–118, 1948.

[129] Giuseppe C. Calafiore and Marco C. Campi. The scenario approach to robust control design. IEEE Transactions on Automatic Control, 51(5):742–753, 2006.

[130] A. Callender, D. R. Hartree, and A. Porter. Time-lag in a control system. Phil. Trans. Royal Society of London, 235:415–444, 1935.

[131] Eduardo F. Camacho and Carlos Bordons. Model Predictive Control. Springer London, 2004. Second edition 2007.

[132] Walter B. Cannon. The Wisdom of the Body. Norton, New York, NY, 1939.

[133] Derek Caveney. Cooperative vehicular safety applications. IEEE Control Systems Magazine, 30:38–53, 2010.

[134] Vinton G. Cerf. Protocols for interconnected packet networks. Computer Communication Review, 10(4):10–57, 1980.

[135] Vinton G. Cerf and Robert E. Kahn. A protocol for packet network intercommunication. ACM SIGCOMM Computer Communication Review, 35(2):71–82, 2005.

[136] Paul E. Ceruzzi. A History of Modern Computing. MIT Press, Cambridge, MA, 2003. Second edition.

[137] Iris Chang. Thread of the Silkworm. Basic Books, New York, NY, 1995.

[138] Andrew Chatha. Fieldbus: the foundation for field control systems. Control Engineering, 41(6):77–80, 1994.

[139] Han-Fu Chen and Cheng Daizhan. Early developments of control in China. European Journal of Control, 13:25–29, 2007.

[140] Han-Fu Chen and Lei Guo. Convergence rate of least-squares identification and adaptive control for stochastic systems. International Journal of Control, 44(5):1459–1476, 1986.

[141] Steven Cherry. How Stuxnet is rewriting the cyberterrorism playbook, 2010.

[142] H. Chestnut, editor. Impact of Automatic Control—Present and Future. VDI/VDE-Gesellschaft, Duesseldorf, 1982.

[143] Harold Chestnut and Robert W. Mayer. Servomechanisms and Regulating System Design, Vol I and II. Wiley, New York, 1951.

[144] CiA. CAN history.

[145] Cisco. Cisco Visual Networking Index Global Mobile Data Traffic Forecast Update 2012-2017, 2013.

[146] J. M. C. Clark. The design of robust approximations to the stochastic differential equations of nonlinear filtering. Communication Systems and Random Process Theory, 25:721–734, 1978.

[147] Ronald W. Clark. Tizard. MIT Press, Cambridge, MA, 1965.

[148] D. W. Clarke, editor. Advances in Model-Based Predictive Control. Oxford University Press, Oxford, 1994.

[149] John F. Coales. The birth of an IFAC journal. Automatica, 5:5–6, 1969.

[150] Gustavo Colonnetti. Convegno Internazionale sui Problemi dell'Automatismo (International Congress on the Problems of Automation). La Ricerca Scientifica, Rome, 1956.

[151] HART Communication Foundation. WirelessHART.

[152] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, pages 151–158. ACM, 1971.

[153] Michael G. Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1–42, 1983.

[154] B. P. Crow, I. Widjaja, L. G. Kim, and P. T. Sakai. IEEE 802.11 wireless local area networks. IEEE Communications Magazine, 35(9):116–126, 1997.

[155] David E. Culler. TinyOS: Operating system design for wireless sensor networks. Sensors, 2006.

[156] Ruth F. Curtain and Hans Zwart. An Introduction to Infinite-Dimensional Linear Systems Theory. Springer, New York, NY, 1991.

[157] Jose E. R. Cury and Fabio L. Baldissera. Systems biology, synthetic biology and control theory: A promising golden braid. Annual Reviews in Control, 2013.

[158] C. R. Cutler and B. C. Ramaker. Dynamic matrix control—A computer control algorithm. In Proceedings Joint Automatic Control Conference, San Francisco, California, 1980.

[159] Raffaello D'Andrea. Dynamic works - exhibition. http://raffaello.name/exhibitions.

[160] G. Dahlquist. Stability and error bounds in the numerical integration of ordinary differential equations. Transactions No. 130, The Royal Institute of Technology, Stockholm, Sweden, 1959.

[161] Jim G. Dai. On positive Harris recurrence of multiclass queueing networks: A unified approach via fluid limit models. The Annals of Applied Probability, 5(1):49–77, 1995.

[162] George Bernard Dantzig. Computational algorithm of the revised simplex method. Rand Corporation, 1953.

[163] W. B. Davenport and W. L. Root. An Introduction to the Theory of Random Processes. McGraw-Hill, New York, NY, 1958.

[164] M. H. A. Davis. On a multiplicative functional transformation arising in nonlinear filtering theory. Zeitschrift fur Wahrscheinlichkeitstheorie und verwandte Gebiete, 54(2):125–139, 1980.

[165] Edward Davison. The robust control of a servomechanism problem for linear time-invariant multivariable systems. IEEE Transactions on Automatic Control, 21(1):25–34, 1976.

[166] W. G. Denhard. The start of the laboratory: the beginnings of the MIT Instrumentation Laboratory. IEEE Aerospace and Electronic Systems Magazine, 7:6–13, 1992.

[167] C. A. Desoer and M. Vidyasagar. Feedback Systems: Input-Output Properties. Academic Press, New York, 1975.

[168] Ernst D. Dickmanns. Dynamic Vision for Perception and Control of Motion. Springer Verlag, Berlin, 2007.

[169] Richard C. Dorf. Modern Control Systems. Addison Wesley, New York, NY, 1980. 12th edition with Robert H. Bishop, 2010.

[170] John Doyle. Guaranteed margins for LQG regulators. IEEE Transactions on Automatic Control, AC-23:756–757, 1978.

[171] John C. Doyle, Bruce A. Francis, and Allen R. Tannenbaum. Feedback Control Theory. Macmillan, New York, NY, 1992.

[172] John C. Doyle, Keith Glover, Pramod P. Khargonekar, and Bruce A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831–847, 1989.

[173] John C. Doyle and A. Packard. The complex structured singular value. Automatica, 29:71–109, 1993.

[174] C. S. Draper and Y. T. Li. Principles of optimalizing control systems and an application to the internal combustion engine. In R. Oldenburger, editor, Optimal and Self-optimizing Control. MIT Press, Cambridge, MA, 1966.

[175] Charles Stark Draper. Flight control. Journal Royal Aeronautic Society, 59:451–477, 1955.

[176] C. S. Draper, W. Wrigley, and J. Hovorka. Inertial Guidance. Pergamon Press, Oxford, 1960.

[177] Guang-Ren Duan. Analysis and Design of Descriptor Linear Systems. Springer Verlag, 2010.

[178] T. E. Duncan. Probability densities for diffusion processes with applications to nonlinear filtering theory and detection theory. PhD thesis, Stanford University, 1967.

[179] Tyrone Duncan and Pravin Varaiya. On the solutions of a stochastic control system. SIAM Journal on Control, 9(3):354–371, 1971.

[180] Tyrone Duncan and Pravin Varaiya. On the solutions of a stochastic control system. II. SIAM Journal on Control, 13(5):1077–1092, 1975.

[181] Tyrone E. Duncan. On the nonlinear filtering problem. Technical report, DTIC Document, 1969.

[182] Zachary T. Dydek, Anuradha M. Annaswamy, and Eugene Lavretsky. Adaptive control and the NASA X-15-3 flight revisited - lessons learned and Lyapunov-stability-based design. IEEE Control Systems Magazine, 30(3):32–48, 2010.

[183] John W. Eaton. Octave. http://www.gnu.org/software/octave/, 1988.

[184] D. P. Eckman. Principles of industrial process control. Wiley, New York, NY, 1945.

[185] L. Robinson, editor. Proceedings IBM Scientific Computing Symposium - Control Theory and Applications. IBM Data Processing Division, White Plains, NY, 1966.

[186] Tamer Basar, editor. The American Automatic Control Council - AACC History and Collaboration with IFAC 1957-2011. American Automatic Control Council, 2011.

[187] Bo Egardt. Stability of Adaptive Controllers. Springer-Verlag, Berlin, FRG, 1979.

[188] A. Ekstrom, editor. Integrated Computer Control of a Kraft Paper Mill, Proceedings of the IBM Billerud. IBM, Stockholm, Sweden, 1966.

[189] A. El-Sakkary and G. Zames. Unstable systems and feedback: The gap metric. In Proc. 18th Allerton Conference, pages 380–385, 1980.

[190] Evangelos Eleftheriou and S. O. Reza Moheimani, editors. Control Technologies for Emerging Micro and Nanoscale Systems. Springer, Berlin, FRG, 2012.

[191] H. Elmqvist and Sven Erik Mattsson. A simulator for dynamical systems using graphics and equations for modelling. IEEE Control Systems Magazine, 9(1):53–58, 1989.

[192] Hilding Elmqvist. SIMNON — An interactive simulation program for nonlinear systems — User's manual. Technical Report TFRT-7502, Department of Automatic Control, September 1975.

[193] Hilding Elmqvist. A Structured Model Language for Large Continuous Systems. PhD thesis, Lund University, May 1978.

[194] Hilding Elmqvist, Sven Erik Mattsson, and Martin Otter. Modelica — The new object-oriented modeling language. In Proceedings of the 12th European Simulation Multiconference (ESM'98), Manchester, UK, 1998. SCS, The Society for Computer Simulation.

[195] Erico Guizzo. How Google's self-driving car works. IEEE Spectrum, 18 Oct 2011. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works.

[196] Ericsson. Ericsson Mobility Report - On the Pulse of the Networked Society, 2013.

[197] A. K. Erlang. On the rational determination of the number of circuits. The Life and Works of A. K. Erlang, pages 216–221, 1948.

[198] Richard Esposito. Hackers penetrate water system computers, 2006.

[199] H. Everett. Recursive games. Annals of Mathematical Studies, 39:47–78, 1957.

[200] Nicolas Falliere, Liam O'Murchu, and Eric Chien. W32.Stuxnet Dossier, 2011.

[201] E. Fehlberg. New high-order Runge-Kutta formulas with step size control for systems of first and second order differential equations. ZAMM, 44, 1964.

[202] A. A. Fel'dbaum. Dual control theory, Parts I and II. Automation and Remote Control, 21:874–880 and 1033–1039, April and May 1961.

[203] M. Felser. The fieldbus war: history or short break between battles? In Proc. 4th IEEE International Workshop on Factory Automation, pages 73–80, Vasteraas, Sweden, 2002.

[204] James Ferguson. A brief survey of the history of the calculus of variations and its applications. arXiv preprint math/0402357, 2004.


[205] Fieldbus Foundation. Foundation Fieldbus. http://www.fieldbus.org/, 1994.

[206] First Control. FirstLoop. http://www.firstcontrol.se/index.html, 2013.

[207] W. H. Fleming, editor. Future Directions in Control Theory — A Mathematical Perspective. Society for Industrial and Applied Mathematics, Philadelphia, 1988.

[208] Wendell H. Fleming and Etienne Pardoux. Optimal control for partially observed diffusions. SIAM Journal on Control and Optimization, 20(2):261–285, 1982.

[209] W. H. Fleming. Some Markovian optimization problems. J. Math. Mech., 12(1):131–140, 1963.

[210] Michel Fliess and Cedric Join. Model-free control. International Journal of Control, 86, 2013.

[211] Michel Fliess, Jean Levine, and Philippe Martin. Flatness, motion planning and trailer systems. In Proc. 32nd IEEE Conference on Decision and Control, pages 2700–2705, San Antonio, TX, 1993.

[212] Michel Fliess, Jean Levine, Philippe Martin, and Pierre Rouchon. Flatness and defect of nonlinear systems: Introductory theory and examples. International Journal of Control, 61:1327–1361, 1995.

[213] Michel Fliess, Jean Levine, Philippe Martin, and Pierre Rouchon. Sur les systemes non lineaires differentiellement plats (On differentially flat nonlinear systems). C. R. Acad. Sci. Paris Ser. I Math., 315:619–624, 1992.

[214] Michel Fliess, Jean Levine, F. Ollivier, and Pierre Rouchon. Flatness and dynamic feedback linearizability: Two approaches. In A. Isidori et al., editors, Proceedings of the Third European Control Conference, ECC 95, volume 3, pages 649–654, 1995.

[215] John J. Florentin. Optimal control of continuous time, Markov, stochastic systems. International Journal of Electronics, 10(6):473–488, 1961.

[216] John J. Florentin. Partial observability and optimal control. International Journal of Electronics, 13(3):263–279, 1962.

[217] Irmgard Flugge-Lotz. Discontinuous and Optimal Control. McGraw-Hill, New York, NY, 1968.

[218] Institute for Systems Biology. About systems biology. https://www.systemsbiology.org/about-systems-biology, 2012.

[219] Jay W. Forrester. Digital information in three dimensions using magnetic cores. Journal of Applied Physics, 22, 1951.

[220] Jay Wright Forrester. Industrial Dynamics. MIT Press, Cambridge, MA, 1961.

[221] Jay Wright Forrester. Urban Dynamics. MIT Press, Cambridge, MA, 1969.

[222] Jay Wright Forrester. World Dynamics. MIT Press, Cambridge, MA, 1971.

[223] A. J. Fossard. The dawn of control science in France. European Journal of Control, 13:30–35, 2007.

[224] F. G. Foster. On the stochastic matrices associated with certain queuing processes. The Annals of Mathematical Statistics, 24(3):355–360, 1953.

[225] Queen Elizabeth Prize Foundation. Queen Elizabeth Prize for Engineering, 2013.

[226] Foxboro. Control system with automatic response adjustment. US Patent 2,517,081, 1950.

[227] A. Fradkov. Cybernetical Physics: From Control of Chaos to Quantum Control. Springer, Berlin, 2007.

[228] Bruce A. Francis and Murray Wonham. The internal model principle of control theory. Automatica, 12:457–465, 1976.

[229] Gene F. Franklin, J. D. Powell, and A. Emami-Naeini. Feedback Control Systems. Addison-Wesley, Reading, MA, 1986. 6th edition 2009.

[230] Randy A. Freeman and Petar V. Kokotovic. Robust Nonlinear Control Design: State-Space and Lyapunov Techniques. Springer, 2008.

[231] N. M. Freris, S. R. Graham, and P. R. Kumar. Fundamental limits on synchronizing clocks over networks. IEEE Transactions on Automatic Control, 56(6):1352–1364, 2011.

[232] Bernhard Friedland. The golden age of control systems research at Columbia University. Seminar, Department of Automatic Control, Lund University, 1996.

[233] P. Fritzson. Introduction to Modeling and Simulation of Technical and Physical Systems with Modelica. Wiley, IEEE, Hoboken, NJ, 2011.

[234] Peter Fritzson, Lars Viklund, Dag Fritzson, and Johan Herber. High-level mathematical modeling and programming. IEEE Software, 12(3), July 1995.

[235] Masatoshi Fujisaki, Gopinath Kallianpur, and Hiroshi Kunita. Stochastic differential equations for the non linear filtering problem. Osaka Journal of Mathematics, 9(1):19–40, 1972.

[236] D. Gabor, W. P. L. Wilby, and R. Woodcock. A universal non-linear filter, predictor and simulator which optimizes itself by a learning process. Proc. Inst. Elect. Engrs, 108, Part B:1061–, 1959.

[237] P. Gahinet and P. Apkarian. A linear matrix inequality approach to H∞ control. International Journal of Robust and Nonlinear Control, 4:421–448, 1994.

[238] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Boston, MA, 1995.

[239] F. R. Gantmacher. The Theory of Matrices, volumes I and II. Chelsea, New York, 1960.

[240] Lu Gaohua and Hidenori Kimura. Theoretical biology and medical modelling. Theoretical Biology and Medical Modelling, 6:26, 2009.

[241] Carlos E. Garcia, David M. Prett, and Manfred Morari. Model predictive control: Theory and practice - a survey. Automatica, 25(3):335–348, 1989.

[242] A. Garde. The frequency analysis method applied to control problems. ASEA Journal, 21:128–138, 1948.

[243] A. Garde and E. Persson. Automatic depth control of submarines. ASEA Journal, 33:65–70, 1960.

[244] Charles W. Gear. Simultaneous numerical solution of differential-algebraic equations. IEEE Transactions on Circuit Theory, CT-18:217–225, 1971.

[245] Hector Geffner and Blai Bonet. Solving large POMDPs using real time dynamic programming. In Proc. AAAI Fall Symp. on POMDPs, 1998.

[246] I. M. Gelfand and S. V. Fomin. Calculus of Variations. Dover Publications, Mineola, NY, 2000.

[247] Tryphon T. Georgiou and Anders Lindquist. Revisiting the separation principle in stochastic control. In Proc. IEEE CDC 2012, 2012.

[248] Tryphon T. Georgiou and Malcolm C. Smith. Optimal robustness in the gap metric. IEEE Transactions on Automatic Control, 35:673–686, 1990.


[249] Tryphon T. Georgiou and Malcolm C. Smith. Robustness analysis of nonlinear feedback systems: An input-output approach. IEEE Transactions on Automatic Control, 42:1200–1221, 1997.

[250] Janos Gertler, editor. Historic Control Textbooks. Elsevier, Oxford, 2006.

[251] Michel Gevers. Towards a joint design of identification and control? In H. L. Trentelman and J. C. Willems, editors, Essays on Control: Perspectives in the Theory and Its Applications, pages 111–151, ECC'93, Groningen, 1993. Birkhauser.

[252] Johannes G. Gievers. Erinnerung an Kreiselgerate (Recollections of Kreiselgerate). Jahrbuch der Deutschen Gesellschaft fur Luft- und Raumfahrt, pages 263–291, 1971.

[253] E. G. Gilbert. Controllability and observability in multivariable control systems. SIAM Journal on Control, 1:128–151, 1963.

[254] J. C. Gille, M. J. Pelegrin, and P. Decaulne. Feedback Control Systems. McGraw-Hill, New York, 1959.

[255] J. C. Gittins and D. M. Jones. A dynamic allocation index for the sequential design of experiments. In J. Gani, editor, pages 241–266. North-Holland, Amsterdam, The Netherlands, 1974.

[256] Torkel Glad. On the gain margin of nonlinear and optimal regulators. IEEE Transactions on Automatic Control, 29(7):615–620, 1984.

[257] Rafal Goebel, Ricardo G. Sanfelice, and Andrew R. Teel. Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press, 2012.

[258] G. C. Goodwin, P. J. Ramadge, and P. E. Caines. Discrete-time multivariable adaptive control. IEEE Transactions on Automatic Control, AC-25:449–456, 1980.

[259] G. C. Goodwin and K. S. Sin. Adaptive Filtering, Prediction and Control. Prentice-Hall, Englewood Cliffs, New Jersey, 1984.

[260] Graham C. Goodwin, Peter J. Ramadge, and Peter E. Caines. Discrete time stochastic adaptive control. SIAM Journal on Control and Optimization, 19(6):829–853, 1981.

[261] Neil J. Gordon, David J. Salmond, and Adrian F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107–113. IET, 1993.

[262] Jason J. Gorman and Benjamin Shapiro, editors. Feedback Control of MEMS to Atoms. Springer, New York, NY, 2012.

[263] A. C. W. Grace. SIMULAB, an integrated environment for simulation and control. In Proceedings of the 1991 American Control Conference, pages 1015–1020. American Automatic Control Council, 1991.

[264] Dunstan Graham and Duane McRuer. Analysis of Nonlinear Control Systems. Wiley, New York, NY, 1961.

[265] S. Graham, G. Baliga, and P. R. Kumar. Abstractions, architecture, mechanisms, and a middleware for networked control. IEEE Transactions on Automatic Control, 54(7):1490–1503, 2009.

[266] Michael Green and David J. N. Limebeer. Linear Robust Control. Prentice Hall, Englewood Cliffs, NJ, 1995.

[267] Andy Greenberg. Hackers cut cities' power, 2008.

[268] P. C. Gregory, editor. Proc. Self Adaptive Flight Control Symposium. Wright Air Development Center, Wright-Patterson Air Force Base, Ohio, 1959.

[269] Mohinder S. Grewal and Angus P. Andrews. Applications of Kalman filtering in aerospace 1960 to the present [historical perspectives]. IEEE Control Systems Magazine, 30(3):69–78, 2010.

[270] Northrop Grumman. SteerMaster. http://www.srhmar.com/brochures/as/SPERRY%20SteerMaster%20Control%20System.pdf, 2005.

[271] Guido O. Guardabassi. The dawn of control science in Italy from intuitive engineering to modern control theory and automation technology. European Journal of Control, 13:36–48, 2007.

[272] E. A. Guillemin. Mathematics of Circuit Analysis. Wiley, New York, NY, 1940.

[273] L. Guo and H. F. Chen. The Astrom-Wittenmark self-tuning regulator revisited and ELS-based adaptive trackers. IEEE Transactions on Automatic Control, 36(7):802–812, 1991.

[274] K. Gustafsson. Object oriented implementation of software for solving ordinary differential equations. Scientific Programming, 2:217–225, 1993.

[275] Lino Guzzella and Antonio Sciarretta. Vehicle Propulsion Systems: Introduction to Modeling and Optimization. Springer, New York, NY, 2013. 3rd edition.

[276] E. Hairer, C. Lubich, and M. Roche. The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods. Lecture Notes in Mathematics No. 1409. Springer Verlag, Berlin, 1989.

[277] E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I — Nonstiff Problems. Computational Mathematics No. 8. Springer Verlag, Berlin, 1987.

[278] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II — Stiff and Differential-Algebraic Problems. Computational Mathematics No. 14. Springer Verlag, Berlin, 1991.

[279] A. C. Hall. The Analysis and Synthesis of Linear Servomechanism. The Technology Press, MIT, Cambridge, MA, 1943.

[280] A. C. Hall. Early history of the frequency response field. In Frequency Response, New York, NY, 1956. MacMillan.

[281] Gregoire Hamon and John Rushby. An operational semantics for Stateflow. In Fundamental Approaches to Software Engineering (FASE), Barcelona, Spain, pages 229–243. Springer Verlag, New York, NY, 2005.

[282] J. E. Handschin and David Q. Mayne. Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering. International Journal of Control, 9(5):547–559, 1969.

[283] H. J. Harris. The analysis and design of servomechanisms. Printed under auspices of the Fire Control Committee (Sec. D-2) of NDRC, OSRD, 1942.

[284] J. Michael Harrison. Brownian models of queueing networks with heterogeneous customer populations. In Stochastic Differential Systems, Stochastic Control Theory and Applications, pages 147–186. Springer, 1988.

[285] J. Michael Harrison. Brownian models of open processing networks: Canonical representation of workload. Annals of Applied Probability, pages 75–103, 2000.

[286] T. J. Harrison. Minicomputers in Industrial Control. Instrument Society of America, Pittsburgh, PA, 1978.

[287] T. J. Harrison, B. W. Landeck, and H. K. St. Clair. Evolution of small real-time IBM computer systems. IBM Journal of Research and Development, 25(5):441–451, 1981.


[288] G. W. Haynes and H. Hermes. Nonlinear controllability via Lie theory. SIAM Journal on Control, 8(4):450–460, 1970.

[289] H. L. Hazen. Design and test of a high performance servo-mechanism. Journal of the Franklin Institute, 218:543–580, 1934.

[290] Harold L. Hazen. Theory of servo-mechanisms. Journal of the Franklin Institute, 218(3):279–331, 1934.

[291] D. O. Hebb. The Organization of Behavior. Wiley, New York, NY, 1949.

[292] C. Hedrick. Routing Information Protocol. Request for Comments 1058. http://tools.ietf.org/html/rfc1058, 1988.

[293] Joseph L. Hellerstein, Yixin Diao, Sujay Parekh, and Dawn M. Tilbury. Feedback Control of Computing Systems. John Wiley & Sons, Hoboken, NJ, 2004.

[294] P. Henrici. Discrete Variable Methods in Ordinary Differential Equations. John Wiley & Sons, Inc., 1962.

[295] Thomas A. Henzinger. The theory of hybrid automata. In Proceedings of the 11th Annual IEEE Symposium on Logic in Computer Science, pages 278–292, 1996.

[296] Thomas A. Henzinger, Peter W. Kopke, Anuj Puri, and Pravin Varaiya. What's decidable about hybrid automata? In ACM Symposium on Theory of Computing, pages 373–382, 1995.

[297] Thomas A. Henzinger and Shankar Sastry, editors. Proceedings of the First International Workshop on Hybrid Systems: Computation and Control. Springer, 1998.

[298] R. Hermann and A. Krener. Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22:728–740, 1977.

[299] Robert Hermann. On the accessibility problem in control theory. In International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, pages 325–332. Academic Press, New York, 1963.

[300] Jason Hill, Mike Horton, Ralph Kling, and Lakshman Krishnamurthy. The platforms enabling wireless sensor networks. Commun. ACM, 47(6):41–46, June 2004.

[301] Y. Ho, A. Bryson, and S. Baron. Differential games and optimal pursuit-evasion strategies. IEEE Transactions on Automatic Control, 10(4):385–389, 1965.

[302] Yu-Chi Ho. Performance evaluation and perturbation analysis of discrete event dynamic systems. IEEE Transactions on Automatic Control, 32(7):563–572, 1987.

[303] Yu-Chi Ho and Xiren Cao. Perturbation analysis and optimization of queueing networks. Journal of Optimization Theory and Applications, 40(4):559–582, 1983.

[304] Yu-Chi Ho et al. Team decision theory and information structures in optimal control problems - Part I. IEEE Transactions on Automatic Control, 17(1):15–22, 1972.

[305] Yu-Chi Ho, P. Luh, and Ramal Muralidharan. Information structure, Stackelberg games, and incentive controllability. IEEE Transactions on Automatic Control, 26(2):454–460, 1981.

[306] M. B. Hoagland and B. Dodson. The Way Life Works. Times Books, New York, 1995.

[307] Per A. Holst. George A. Philbrick and Polyphemus - the first electronic training simulator. Annals of the History of Computing, 4(2):143–156, 1982.

[308] I. M. Horowitz. Synthesis of Feedback Systems. Academic Press, New York, NY, 1963.

[309] I. M. Horowitz. Quantitative Feedback Design Theory (QFT). QFT Publications, Boulder, Colorado, 1993.

[310] Isaac M. Horowitz and Uri Shaked. Superiority of transfer function over state-variable methods in linear time-invariant feedback system design. IEEE Transactions on Automatic Control, 20(1):84–97, 1975.

[311] Isaac M. Horowitz. Survey of quantitative feedback theory. International Journal of Control, 53:255–291, 1991.

[312] I-Hong Hou, Vivek Borkar, and P. R. Kumar. A theory of QoS in wireless. In Proceedings of the IEEE INFOCOM, pages 486–494, 2009.

[313] Naira Hovakimyan and Chengyu Cao. L1 Adaptive Control Theory. SIAM, Philadelphia, PA, 2010.

[314] R. A. Howard. Dynamic Programming and Markov Processes. Wiley, New York, NY, 1960.

[315] Ronald A. Howard. Comments on the origin and application of Markov decision processes. Operations Research, pages 100–102, 2002.

[316] Robert M. Howe. Design Fundamentals of Analog Computer Components. Van Nostrand, Princeton, NJ, 1961.

[317] Garng M. Huang, T. J. Tarn, and John W. Clark. On the controllability of quantum-mechanical systems. Journal of Mathematical Physics, 24:2608, 1983.

[318] T. P. Hughes. Elmer Sperry: Inventor and Engineer. The Johns Hopkins University Press, Baltimore, MD, 1993.

[319] L. R. Hunt, R. Su, and G. Meyer. Global transformations of nonlinear systems. IEEE Transactions on Automatic Control, 28:24–31, 1983.

[320] A. Hurwitz. On the conditions under which an equation has only roots with negative real parts. Mathematische Annalen, 46:273–284, 1895.

[321] IBM. Rational DOORS. http://www-03.ibm.com/software/products/us/en/ratidoor/, 2013.

[322] International Electrotechnical Commission (IEC). IEC 61131: IEC standard for programmable controllers, 2011.

[323] International Electrotechnical Commission (IEC). IEC 62264 ed2.0 (2013): Enterprise-control system integration, 2013.

[324] IFAC. IFAC anniversary day, Heidelberg, Germany, September 15. IEEE Trans. Automat. Contr., 5:1–3, 2006.

[325] Donald L. Iglehart and Ward Whitt. Multiple channel queues in heavy traffic. I. Advances in Applied Probability, 2(1):150–177, 1970.

[326] Donald L. Iglehart and Ward Whitt. Multiple channel queues in heavy traffic. II: Sequences, networks, and batches. Advances in Applied Probability, 2(2):355–369, 1970.

[327] Pablo A. Iglesias and Brian P. Ingalls, editors. Control Theory and Systems Biology. The MIT Press, Cambridge, 2009.

[328] IGRP. Interior Gateway Routing Protocol. http://docwiki.cisco.com/wiki/Interior_Gateway_Routing_Protocol, 2012.

[329] Francis J. Doyle III, B. Wayne Bequette, Rick Middleton, Babatunde Ogunnaike, Brad Paden, Robert S. Parker, and Mathukumalli Vidyasagar. Control in biological systems. In T. Samad and A. M. Annaswamy, editors, The Impact of Control Technology, Part 1. IEEE Control Systems Society, available at www.ieeecss.org, 2011.

[330] INRIA. Scilab. http://www.scilab.org/, 1990.

[331] INRIA. Towards INRIA 2020: Strategic plan. http://www.inria.fr/en/institute/strategy/strategic-plan, 2013.


[332] Sansum Diabetes Research Institute. Sansum Diabetes Research Institute, University of California, Santa Barbara conduct innovative artificial pancreas ultra-rapid-acting inhaled insulin trial funded by JDRF. http://www.sansum.org/sansum-diabetes-research-institute-university-california-santa-barbara-conduct-innovative-artificial-pancreas-ultra-rapid-acting-inhaled-insulin-trial-funded-jdrf/, September 23, 2013.

[333] International Society for Automation. ISA99 Committee on Industrial Automation and Control Systems Security. http://www.isa.org/isa99, 1999.

[334] P. A. Ioannou and J. Sun. Stable and Robust Adaptive Control. Prentice-Hall, Englewood Cliffs, NJ, 1995.

[335] Petros A. Ioannou and Petar V. Kokotovic. Instability analysis and improvement of robustness of adaptive control. Automatica, 20(5):583–594, 1984.

[336] ISA. ISA100.11a.

[337] R. Isaacs. Differential games I, II, III, IV. Rand Corporation Research Memoranda RM-1391, 1399, 1411, 1468, 1954–1956.

[338] R. Isaacs. Differential Games. Kruger Publishing Company, Huntington, NY, 2nd edition, 1975. First edition: Wiley, NY, 1965.

[339] A. Isidori. Nonlinear Control Systems. Springer Verlag, Berlin, 3rd edition, 1995.

[340] Alberto Isidori and Chris Byrnes. Output regulation of nonlinear systems. IEEE Transactions on Automatic Control, 35:131–140, 1990.

[341] James R. Jackson. Networks of waiting lines. Operations Research, 5(4):518–521, 1957.

[342] D. H. Jacobson. Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games. IEEE Transactions on Automatic Control, AC-18:124–131, 1973.

[343] H. M. James, N. B. Nichols, and R. S. Phillips. Theory of Servomechanisms. Volume 25, MIT Radiation Laboratory. McGraw-Hill, New York, NY, 1947.

[344] M. Jamshidi and C. J. Herget. Computer-Aided Control Systems Engineering. North-Holland, Amsterdam, The Netherlands, 1985.

[345] A. Jeandel, F. Boudaud, Ph. Ravier, and A. Buhsing. U.L.M: Un Langage de Modelisation, a modelling language. In Proceedings of the CESA'96 IMACS Multiconference, Lille, France, July 1996. IMACS.

[346] P. Jochum and M. Kloas. The dynamic simulation environment Smile. In G. Tsatsaronis, editor, Second Biennial European Conference on System Design & Analysis, pages 53–56. The American Society of Mechanical Engineers, 1994.

[347] John von Neumann. The mathematician. In R. B. Heywood, editor, The Works of the Mind, pages 180–196. University of Chicago Press, Chicago, 1947.

[348] Mike Jones. What really happened on Mars? http://research.microsoft.com/en-us/um/people/mbj/mars_pathfinder/Mars_Pathfinder.html, 1997.

[349] P. D. Joseph and J. T. Tou. On linear control theory. AIEE Trans. Applications and Industry, 80:193–196, 1961.

[350] Josifovska. The father of LabVIEW. IEE Review, 49(3):30–33, October 2003.

[351] N. E. Joukowski. The Theory of Regulating the Motion of Machines. Gosmashmetizdat, Moscow, 1909. 2nd edition 1933.

[352] E. I. Jury. Sampled-Data Control Systems. John Wiley, New York, 1958.

[353] J. M. Kahn, R. H. Katz, and K. S. J. Pister. Mobile networking for smart dust. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 99), 1999.

[354] S. Kahne. A history of the IFAC congress. IEEE CSM, 16(2):10–12, 78–83, 1996.

[355] Thomas Kailath. Linear Systems. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1980.

[356] C. G. Kallstrom, Karl Johan Astrom, N. E. Thorell, J. Eriksson, and L. Sten. Adaptive autopilots for tankers. Automatica, 15:241–254, 1979.

[357] R. E. Kalman. Design of a self-optimizing control system. Trans. ASME, 80:468–478, 1958.

[358] R. E. Kalman. Contributions to the theory of optimal control. Boletin de la Sociedad Matematica Mexicana, 5:102–119, 1960.

[359] R. E. Kalman. New methods and results in linear prediction and filtering theory. Technical Report 61-1, RIAS, February 1961. 135 pp.

[360] R. E. Kalman. On the general theory of control systems. In Proceedings First IFAC Congress on Automatic Control, Moscow, 1960, volume 1, pages 481–492, London, 1961. Butterworths.

[361] R. E. Kalman. Canonical structure of linear dynamical systems. Proceedings of the National Academy of Sciences of the United States of America, 48(4):596–600, 1962.

[362] R. E. Kalman. Mathematical description of linear dynamical systems. SIAM J. Control, 1:152–192, 1963.

[363] R. E. Kalman. The theory of optimal control and the calculus of variations. Mathematical Optimization Techniques, pages 309–331, 1963.

[364] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Trans. ASME (J. Basic Engineering), 83 D:95–108, 1961.

[365] R. E. Kalman, P. L. Falb, and M. A. Arbib. Topics in Mathematical System Theory. McGraw-Hill, New York, NY, 1969.

[366] R. E. Kalman, Y. C. Ho, and K. S. Narendra. Controllability of Linear Dynamical Systems, volume 1 of Contributions to Differential Equations. John Wiley & Sons, Inc., New York, NY, 1963.

[367] Rudolf E. Kalman and John E. Bertram. Control system analysis and design via the 'Second Method' of Lyapunov I. Continuous-time systems. Trans. ASME (J. Basic Engineering), 82 D:371–393, 1960.

[368] Rudolf Emil Kalman. A new approach to linear filtering and prediction problems. Trans. ASME (Journal of Basic Engineering), 82(1):35–45, 1960.

[369] Ioannis Kanellakopoulos, Petar V. Kokotovic, and A. Stephen Morse. Systematic design of adaptive controllers for feedback linearizable systems. IEEE Transactions on Automatic Control, 36(11):1241–1253, 1991.

[370] Chung-Yao Kao, Alexandre Megretski, Ulf Jonsson, and Anders Rantzer. A MATLAB toolbox for robustness analysis. In Proceedings of the IEEE Conference on Computer Aided Control System Design, pages 297–302, Taipei, Taiwan, 2004.

[371] Niklas Karlsson and Jianlong Zhang. Application of feedback control in online advertising. In 2013 American Control Conference, volume 6, pages 6028–6033, 2013.


[372] Richard M. Karp. Reducibility among combinatorial problems. Springer, 1972.

[373] L. H. Keel and S. P. Bhattacharyya. Robust, fragile or optimal? IEEE Transactions on Automatic Control, 42:1098–1105, 1997.

[374] David Kendrick. Stochastic Control for Econometric Models. McGraw-Hill, New York, NY, 1981.

[375] H. K. Khalil. Nonlinear Systems. MacMillan, New York, 1992.

[376] Mustafa Khammash and Hana El-Samad. Systems biology: from physiology to gene regulation. IEEE Control Systems Magazine, 24(4):62–76, 2004.

[377] Navin Khaneja, Roger Brockett, and Steffen J. Glaser. Time optimal control in spin systems. Phys. Rev. A, 63:032308, 2001.

[378] Navin Khaneja, Timo Reiss, Thomas Schulte-Herbruggen, and Steffen J. Glaser. Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms. J. Magnetic Resonance, 172:296–305, 2005.

[379] Uwe Kiencke and Lars Nielsen. Automotive Control Systems: For Engine, Driveline, and Vehicle. Springer, New York, NY, 2005. 2nd edition.

[380] Kyoung-Dae Kim. Collision free autonomous ground traffic: A model predictive control approach. In Proceedings of the Fourth International Conference on Cyber-Physical Systems (ICCPS 2013), Philadelphia, 2013.

[381] Kyoung-Dae Kim and P. R. Kumar. Cyber-physical systems: A perspective at the centennial. Proceedings of the IEEE, 100(13):1287–1308, 2012.

[382] Joseph Kimemia and Stanley B. Gershwin. An algorithm for the computer control of a flexible manufacturing system. AIIE Transactions, 15(4):353–362, 1983.

[383] Hidenori Kimura. Chain-Scattering Approach to H-infinity Control. Springer, 1997.

[384] T. Kitamori, N. Suda, E. Shimemura, M. Masubuchi, H. Kimura, and Y. Yoshitani. Control engineering in Japan: Past and present. IEEE Control Systems Magazine, 4:4–10, 1984.

[385] Ernst Klee, Otto Merk, and Wernher von Braun. The Birth of the Missile: The Secrets of Peenemunde. E. P. Dutton, New York, NY, 1965.

[386] R. J. Kochenburger. A frequency response method for analyzing and synthesizing contactor servomechanisms. Trans. AIEE, 69:270–283, 1950.

[387] J. Kodosky, J. MacCrisken, and G. Rymar. Visual programming using structured data flow. In IEEE Workshop on Visual Languages, pages 34–39, Kobe, Japan, 1991.

[388] Petar Kokotovic, Hassan K. Khalil, and John O'Reilly. Singular Perturbation Methods in Control: Analysis and Design. Academic Press, 1986. Reprinted by SIAM 1999.

[389] A. N. Kolmogorov. Interpolation and Extrapolation of Stationary Random Sequences. Math. 5. Bull. Moscow Univ., Moscow, 1941.

[390] Hermann Kopetz and Gunther Bauer. The time-triggered architecture. Proceedings of the IEEE, 91(1):112–126, 2003.

[391] N. N. Krasovskii. Some problems in the theory of stability of motion. Fizmatgiz, Moscow, 1959.

[392] N. N. Krasovskii. Stability of Motion. Stanford University Press, Stanford, CA, 1963.

[393] Arthur J. Krener. A generalization of Chow's theorem and the bang-bang theorem to nonlinear control problems. SIAM Journal on Control, 12(1):43–52, 1974.

[394] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic. Nonlinear and Adaptive Control Design. Prentice Hall, Englewood Cliffs, NJ, 1993.

[395] Miroslav Krstic, Ioannis Kanellakopoulos, Petar V. Kokotovic, et al. Nonlinear and Adaptive Control Design, volume 8. John Wiley & Sons, New York, 1995.

[396] Paul Krugman. The Return of Depression Economics. Penguin, London, UK, 2008.

[397] A. N. Krylov and N. N. Bogoliubov. Introduction to Non-linear Mechanics. Princeton University Press, Princeton, New Jersey, 1937.

[398] P. R. Kumar. Re-entrant lines. Queueing Systems, 13(1-3):87–110, 1993.

[399] P. R. Kumar and Thomas I. Seidman. Dynamic instabilities and stabilization methods in distributed real-time scheduling of manufacturing systems. IEEE Transactions on Automatic Control, 35(3):289–298, 1990.

[400] P. R. Kumar and P. Varaiya. Stochastic Systems: Estimation, Identification and Adaptive Control. Prentice Hall, Englewood Cliffs, NJ, 1986.

[401] Benjamin C. Kuo. Automatic Control Systems. Wiley, NewYork, NY, 1962. 9th edition 2009 with Farid Golnaraghi.

[402] Karl Kupfmuller. Uber die Dynamik der selbstattigenVerstarkungsregler. Electrische Nachrichtentechnik, 5:459–467, 1938.

[403] A. B. Kurzhanski. 50 years of coexistence in control - thecontributions of the Russian community. European Journalof Control, 13:41–60, 2007.

[404] Harold J Kushner. On the differential equations satisfied byconditional probablitity densities of Markov processes, withapplications. Journal of the Society for Industrial & AppliedMathematics, Series A: Control, 2(1):106–119, 1964.

[405] Harold J Kushner. Dynamical equations for optimalnonlinear filtering. Journal of Differential Equations, 3:179–190, 1967.

[406] Harold J Kushner and George Yin. Stochasticapproximation and recursive algorithms and applications,volume 35. Springer, 2003.

[407] HJ Kushner and DS Clark. Stochastic approximationmethods for constrained and unconstrained systems. 1978.

[408] W. H. Kwon and A. E. Pearson. A modified quadratic costproblem and feedback stabilization of a linear system. IEEETransactions on Automatic Control, 22(5):838–842, 1977.

[409] Tze Leung Lai and Ching Zong Wei. Least squaresestimates in stochastic regression models with applicationsto identification and control of dynamic systems. TheAnnals of Statistics, 10(1):154–166, 1982.

[410] Uno Lamm. Kraftoverforing med hogspand likstrom - dentidiga utvecklingen, jonventilen (Power transmission withHVDC). In Percy Barnevik, editor, Tekniken i ASEA(ASEA Technology). ASEA AB, Vasteras, 1983. ISBN 91-7260-765-3 in Swedish.

[411] I. D. Landau. Adaptive Control—The Model ReferenceApproach. Marcel Dekker, New York, NY, 1979.

[412] J. H. Laning and R. H. Battin. Random Processes inAutomatic Control. McGraw-Hill, New York, NY, 1956.

[413] Kim G. Larsen, Paul Pettersson, and Wang Yi. UPPAALin a nutshell. International Journal on Software Tools forTechnology Transfer, 1:134–152, 1997.


[414] J. P. LaSalle. Some extensions of Lyapunov's second method. IRE Transactions on Circuit Theory, CT-7(4):520–527, 1960.

[415] Alan J. Laub, Rajni V. Patel, and Paul M. Van Dooren, editors. Numerical Linear Algebra Techniques for Systems and Control. IEEE Press, Piscataway, NJ, 1994.

[416] Eugene Lavretsky and Kevin A. Wise. Robust and Adaptive Control with Aerospace Applications. Springer-Verlag, London, 2013.

[417] LCCC. Lund Center for Control of Complex Engineering Systems - a Linnaeus Centre, 2008. http://www.lccc.lth.se/.

[418] E. B. Lee and L. Markus. Foundations of Optimal Control Theory. Wiley, New York, NY, 1967.

[419] Edward A. Lee and Haiyang Zheng. Leveraging synchronous language principles for heterogeneous modeling and design of embedded systems. In Proceedings of the 7th ACM & IEEE International Conference on Embedded Software, pages 114–123. ACM, 2007.

[420] Edward Ashford Lee and Pravin Varaiya. Structure and Interpretation of Signals and Systems. Addison Wesley, 2003.

[421] Lee De Forest. http://en.wikipedia.org/wiki/Lee_De_Forest.

[422] A. Ya. Lerner. Soviet contributions to control theory. In Cornelius T. Leondes, editor, Control and Dynamic Systems: Advances in Theory and Applications, Vol. 11, pages 491–514. Academic Press, New York, NY, 1974.

[423] Philip Levis, Sam Madden, Joseph Polastre, Robert Szewczyk, Alec Woo, David Gay, Jason Hill, Matt Welsh, Eric Brewer, and David Culler. TinyOS: An operating system for sensor networks. In W. Weber, J. Rabaey, and E. Aarts, editors, Ambient Intelligence. Springer-Verlag, 2004.

[424] Frank L. Lewis. Applied Optimal Control & Estimation - Digital Design & Implementation. Prentice-Hall, 2012.

[425] R. W. Lewis. Programming Industrial Control Systems Using IEC 1131-3. The Institution of Electrical Engineers, London, UK, 1995.

[426] John Leyden. Polish teen derails tram after hacking train network, 2008.

[427] K.-Y. Liang, J. Mårtensson, K. H. Johansson, and C. J. Tomlin. Establishing safety for heavy duty vehicle platooning: A game theoretical approach. In Proc. IFAC AAC, Tokyo, Japan, 2013.

[428] D. Limperich, M. Braun, G. Schmitz, and Prölss. System simulation of automotive refrigeration cycles. In Gerhard Schmitz, editor, 4th International Modelica Conference, pages 193–199. Modelica Association, Hamburg, Germany, 2005.

[429] Jacques-Louis Lions. Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin, 1971.

[430] Pierre-Louis Lions. Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations. Part 2: Viscosity solutions and uniqueness. Communications in Partial Differential Equations, 8(11):1229–1276, 1983.

[431] Pierre-Louis Lions. Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations. Part I: The dynamic programming principle and applications. Communications in Partial Differential Equations, 8(10):1101–1174, 1983.

[432] Pierre-Louis Lions. Viscosity solutions of fully nonlinear second order equations and optimal stochastic control in infinite dimensions. Part II: Optimal control of Zakai's equation. In Stochastic Partial Differential Equations and Applications II, pages 147–170. Springer, 1989.

[433] J. N. Little, A. Emami-Naeini, and S. N. Bangert. CTRL-C and matrix environments for the computer-aided design of control systems. In M. Jamshidi and C. J. Herget, editors, Computer-Aided Control Systems Engineering. North-Holland, Amsterdam, 1985. ISBN 0-444-87779-7.

[434] C. L. Liu and James W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46–61, 1973.

[435] L. Ljung. System Identification—Theory for the User. Prentice Hall, Englewood Cliffs, New Jersey, 1987.

[436] Lennart Ljung. Analysis of recursive stochastic algorithms. IEEE Transactions on Automatic Control, 22(4):551–575, 1977.

[437] Claude Lobry. Contrôlabilité des systèmes non linéaires (Controllability of nonlinear systems). SIAM Journal on Control, 8(4):573–605, 1970.

[438] Claude Lobry. Controllability of nonlinear systems on compact manifolds. SIAM Journal on Control, 12(1):1–4, 1974.

[439] Steven H. Low, Fernando Paganini, and John C. Doyle. Internet congestion control. IEEE Control Systems Magazine, 22(1):28–43, 2002.

[440] Steve C. H. Lu, Deepa Ramaswamy, and P. R. Kumar. Efficient scheduling policies to reduce mean and variance of cycle-time in semiconductor manufacturing plants. IEEE Transactions on Semiconductor Manufacturing, 7(3):374–388, 1994.

[441] Steve H. Lu and P. R. Kumar. Distributed scheduling based on due dates and buffer priorities. IEEE Transactions on Automatic Control, 36(12):1406–1416, 1991.

[442] K. H. Lundberg. The history of analog computing: Introduction to the special section. IEEE Control Systems Magazine, 25(3):22–25, 2005.

[443] U. Luoto. 20 Years Old; 20 Years Young. An Anniversary Publication 1957–1977. Pergamon Press, 1978. IFAC International Federation of Automatic Control.

[444] A. I. Lurie and V. N. Postnikov. On the theory of stability of control systems. Prikl. Mat. i Mekh., 8:246–248, 1944.

[445] A. M. Lyapunov. The General Problem of the Stability of Motion (in Russian). Kharkov Mathematical Society (250 pp.), 1892. Collected Works II, 7. Republished by the University of Toulouse 1908 and by Princeton University Press 1949; republished in English by Int. J. Control, 1992.

[446] L. A. MacColl. Fundamental Theory of Servomechanisms. Van Nostrand, New York, NY, 1945. Dover edition 1968.

[447] J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall, Essex, England, 2002.

[448] Donald MacKenzie. Inventing Accuracy - A Historical Sociology of Nuclear Missile Guidance. MIT Press, Cambridge, MA, 1990.

[449] R. H. MacMillan. An Introduction to the Theory of Control in Mechanical Engineering. Cambridge University Press, Cambridge, 1951.

[450] K. Magnus. Zum 75. Geburtstag von Prof. Dr.-Ing. Max Schuler (On the 75th birthday of Prof. Dr.-Ing. Max Schuler). Regelungstechnik, 5:37–40, 1957.

[451] Oded Maler. Amir Pnueli and the dawn of hybrid systems. In Proceedings of the 13th ACM International Conference on Hybrid Systems: Computation and Control, pages 293–295. ACM, 2010.


[452] G. S. Malkin. Routing Information Protocol Version 2. Request for Comments: 2453. http://tools.ietf.org/html/rfc2453, 1998.

[453] I. G. Malkin. On the Stability of Motion in the Sense of Lyapunov. American Mathematical Society, Providence, RI, 1951.

[454] Sven Erik Mattsson and Mats Andersson. Omola—An object-oriented modeling language. In M. Jamshidi and C. J. Herget, editors, Recent Advances in Computer-Aided Control Systems Engineering, volume 9 of Studies in Automation and Control, pages 291–310. Elsevier Science Publishers, 1993.

[455] Sven Erik Mattsson, Mats Andersson, and Karl Johan Åström. Object-oriented modelling and simulation. In D. A. Linkens, editor, CAD for Control Systems, pages 31–69. Marcel Dekker, Inc., 1993.

[456] J. C. Maxwell. On governors. Proceedings of the Royal Society of London, 16:270–283, 1868. Also published in "Mathematical Trends in Control Theory" edited by R. Bellman and R. Kalaba, Dover Publications, New York 1964, pp. 3–17.

[457] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000.

[458] David H. Jacobson and David Q. Mayne. Differential Dynamic Programming. American Elsevier, New York, NY, 1970.

[459] David Q. Mayne. Personal impressions of the dawn of modern control. European Journal of Control, 13:61–70, 2007. Reprinted in Annual Reviews in Control, 35:153–159, 2011.

[460] O. Mayr. Zur Frühgeschichte der technischen Regelungen (On the early history of technical control). R. Oldenbourg, München, 1969.

[461] M. W. McFarland, editor. The Papers of Wilbur and Orville Wright. McGraw-Hill, New York, NY, 1953.

[462] Duncan McFarlane and Keith Glover. A loop shaping design procedure using H∞ synthesis. IEEE Transactions on Automatic Control, 37(6):759–769, 1992.

[463] Robert McMillan. Siemens: Stuxnet worm hit industrial systems, 2010.

[464] D. McRuer and D. Graham. Eighty years of flight control: Triumphs and pitfalls of the systems approach. J. Guidance and Control, 4(4):353–362, July–August 1981.

[465] Duane McRuer, Irving Ashkenas, and Dunstan Graham. Aircraft Dynamics and Automatic Control. Princeton University Press, Princeton, NJ, 1972.

[466] Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III. Limits to Growth. Signet, New York, NY, 1972.

[467] A. Megretski and A. Rantzer. System analysis via Integral Quadratic Constraints. IEEE Transactions on Automatic Control, 42(6):819–830, June 1997.

[468] Alexandre Megretski, Ulf Jönsson, Chung-Yao Kao, and Anders Rantzer. Integral quadratic constraints. In William Levine, editor, The Control Handbook, Second Edition. CRC Press (Taylor and Francis Group), January 2010.

[469] Robert C. Merton and Paul Anthony Samuelson. Continuous-Time Finance. Blackwell, Cambridge, 1990.

[470] Robert M. Metcalfe and David R. Boggs. Ethernet: Distributed packet switching for local computer networks. Communications of the ACM, 19(7):395–404, 1976.

[471] D. A. Mindell. Digital Apollo: Human and Machine in Spaceflight. MIT Press, Cambridge, MA, 2008.

[472] David A. Mindell. Between Human and Machine: Feedback, Control, and Computing before Cybernetics. Johns Hopkins University Press, Baltimore, MD, 2002.

[473] Nicholas Minorsky. Directional stability of automatically steered bodies. Journal American Society of Naval Engineers, 34:280–309, 1922.

[474] Nicholas Minorsky. Nonlinear Oscillations. Van Nostrand, Princeton, NJ, 1962.

[475] E. Mishkin and L. Braun. Adaptive Control Systems. McGraw-Hill, New York, 1961.

[476] Edward E. L. Mitchell and Joseph S. Gauthier. Advanced continuous simulation language (ACSL). Simulation, 26:72–78, 1976.

[477] S. K. Mitter. From servo loops to fiber nets: Systems, communication and control. In R. G. Gallager, editor, Fifty Years & Beyond. A Symposium in Celebration of the 50th Anniversary of the Laboratory for Information and Decision Systems. Massachusetts Institute of Technology, The Laboratory for Information and Decision Systems, 1990.

[478] Sanjoy K. Mitter. Filtering and stochastic control: A historical perspective. IEEE Control Systems, 16(3):67–76, 1996.

[479] R. V. Monopoli. Model reference adaptive control with an augmented error signal. IEEE Transactions on Automatic Control, AC-19:474–484, 1974.

[480] Manfred Morari and Jay H. Lee. Model predictive control: Past, present and future. Computers and Chemical Engineering, 23:667–682, 1999.

[481] James R. Morrison, Brian Campbell, Elizabeth Dews, and John LaFreniere. Implementation of a fluctuation smoothing production control policy in IBM's 200mm wafer fab. In Proceedings of the 44th IEEE Conference on Decision and Control and 2005 European Control Conference (CDC-ECC '05), pages 7732–7737. IEEE, 2005.

[482] A. S. Morse. Global stability of parameter-adaptive control systems. IEEE Transactions on Automatic Control, AC-25:433–439, 1980.

[483] R. E. Mortensen. Stochastic optimal control with noisy observations. International Journal of Control, 4(5):455–464, 1966.

[484] MOVIII. MOVIII - Modeling, Visualization and Integration - a center for decision support in complex systems, 2008. http://www.moviii.liu.se/.

[485] R. M. Murray, editor. Control in an Information Rich World: Report of the Panel on Future Directions in Control, Dynamics, and Systems. SIAM, Philadelphia, 2003. Available at http://www.cds.caltech.edu/~murray/cdspanel.

[486] Richard M. Murray, Karl Johan Åström, Stephen P. Boyd, R. W. Brockett, and G. Stein. Future directions in control in an information-rich world. IEEE Control Systems Magazine, 23(2):20–33, April 2003.

[487] NAE. 2013 Draper Prize. http://www.nae.edu/Projects/Awards/DraperPrize/67245.aspx.

[488] NAE. The Engineer of 2020: Visions of Engineering in the New Century. National Academy of Engineering, Washington, 2004.

[489] NAE. Educating the Engineer of 2020: Adapting Engineering Education to the New Century. National Academy of Engineering, Washington, 2005.


[490] L. W. Nagel and D. O. Pederson. SPICE (Simulation Program with Integrated Circuit Emphasis). Memorandum No. ERL-M382, University of California, Berkeley, 1973.

[491] Sanjeev M. Naik, P. R. Kumar, and B. Erik Ydstie. Robust continuous-time adaptive control by parameter projection. IEEE Transactions on Automatic Control, 37(2):182–197, 1992.

[492] Girish N. Nair and Robin J. Evans. Stabilizability of stochastic linear systems with finite feedback data rates. SIAM Journal on Control and Optimization, 43(2):413–436, 2004.

[493] Ranjit Nair, Milind Tambe, Makoto Yokoo, David Pynadath, and Stacy Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In International Joint Conference on Artificial Intelligence, volume 18, pages 705–711. Lawrence Erlbaum Associates Ltd., 2003.

[494] K. S. Narendra and A. M. Annaswamy. Stable Adaptive Systems. Prentice Hall, Englewood Cliffs, NJ, 1989.

[495] J. Nash. Non-cooperative games. Annals of Mathematics, 54(2):286–295, 1951.

[496] NDRC. The ARTEMIS embedded computing initiative. http://www.artemis-ju.eu.

[497] Critical infrastructure protection.

[498] Yurii Nesterov and Arkadii Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1994.

[499] C. Neuman. Challenges in security for cyber-physical systems. In DHS S&T Workshop on Future Directions in Cyber-physical Systems Security, 2009.

[500] G. C. Newton, L. A. Gould, and J. F. Kaiser. Analytical Design of Linear Feedback Controls. Wiley, New York, NY, 1957.

[501] Andrew Y. Ng and Michael Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 406–415. Morgan Kaufmann Publishers Inc., 2000.

[502] A. Nier. Device to compensate for magnetic field fluctuations in a mass spectrometer. Review of Scientific Instruments, 6:254–255, 1935.

[503] H. Nijmeijer and A. van der Schaft. Nonlinear Dynamical Control Systems. Springer Verlag, Berlin, 1990.

[504] Bernt Nilsson. Object-Oriented Modeling of Chemical Processes. PhD thesis, Lund University, August 1993.

[505] Nobelstiftelsen. All Nobel Prizes, 2013.

[506] J. P. Norton. An Introduction to Identification. Academic Press, London, UK, 1986.

[507] Harry Nyquist. Regeneration theory. Bell System Technical Journal, 11:126–147, 1932.

[508] Katsuhiko Ogata. Modern Control Engineering. Prentice-Hall, 1970. Fourth edition 2002.

[509] M. Oh and C. C. Pantelides. A modelling and simulation language for combined lumped and distributed parameter systems. Computers and Chemical Engineering, 20:611–633, 1996.

[510] V. Oja. Frequency-response method applied to the study of turbine regulation in the Swedish power system. In Rufus Oldenburger, editor, Frequency Response, pages 109–117. MacMillan, New York, 1956.

[511] R. C. Oldenbourg and H. Sartorius. Dynamik der selbsttätigen Regelungen (Dynamics of Automatic Control). Oldenbourg-Verlag, München, 1944. Reprinted 1949, 1951. Translated to English 1948, Russian 1949 and Japanese 1953.

[512] R. C. Oldenbourg and H. Sartorius. Dynamics of Automatic Controls. ASME, New York, NY, 1948.

[513] R. Oldenburger. IFAC, from idea to birth. Automatica, 5:697–703, 1969.

[514] Rufus Oldenburger. Frequency Response. MacMillan, New York, 1956.

[515] W. Oppelt. Grundgesetze der Regelung (Principles of Automatic Regulation). Wolfenbütteler Verlags-Anstalt, Hannover, Germany, 1947. In German.

[516] W. Oppelt. A historical review of autopilot development, research, and theory in Germany. Journal of Dynamic Systems, Measurement, and Control, 98:215–223, 1976.

[517] W. Oppelt. On the early growth of conceptual thinking in control system theory - the German role up to 1945. IEEE Control Systems Magazine, 4:16–22, 1984.

[518] Winfried Oppelt and G. Vossius, editors. Der Mensch als Regler (Man as a Regulator). VEB Verlag, Berlin, 1970.

[519] Andy Packard. Gain scheduling via linear fractional transformations. Systems and Control Letters, 22:79–92, 1994.

[520] Robert S. Parker, Francis J. Doyle III, and Nicholas A. Peppas. A model-based algorithm for blood glucose control in type I diabetic patients. IEEE Transactions on Biomedical Engineering, 46(2):148–157, 1999.

[521] P. C. Parks. Lyapunov redesign of model reference adaptive control systems. IEEE Transactions on Automatic Control, AC-11:362–367, 1966.

[522] Roberto Parrotto, Johan Åkesson, and Francesco Casella. An XML representation of DAE systems obtained from continuous-time Modelica models. In Third International Workshop on Equation-Based Object-Oriented Modeling Languages and Tools, September 2010.

[523] PATH. Partners for Advanced Transportation Technology. http://www.path.berkeley.edu/, 1986. Established by the Institute of Transportation Studies (ITS) UCB and Caltrans.

[524] Henry M. Paynter. The differential analyzer as an active mathematical instrument. IEEE Control Systems Magazine, 9:3–8, 1989.

[525] Aurelio Peccei and Alexander King. The Club of Rome. http://www.clubofrome.org/, 1968.

[526] James R. Perkins and P. R. Kumar. Stable, distributed, real-time scheduling of flexible manufacturing/assembly/disassembly systems. IEEE Transactions on Automatic Control, 34(2):139–148, 1989.

[527] Tekla S. Perry. Captain cellular. IEEE Spectrum, 50(5):52–55, 2013.

[528] Erik Persson. Methods for design of control systems, 1970. Lecture at LTH.

[529] George A. Philbrick. Designing industrial controllers by analog. Electronics, 21:108–111, 1948.

[530] P. C. Piela, T. G. Epperly, K. M. Westerberg, and A. W. Westerberg. ASCEND: An object-oriented computer environment for modeling and analysis: The modeling language. Computers and Chemical Engineering, 15(1):53–72, 1991.


[531] Joelle Pineau, Geoff Gordon, and Sebastian Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18, pages 1025–1032. Lawrence Erlbaum Associates Ltd., 2003.

[532] Yves Piquet. Sysquake. http://www.calerga.com/products/Sysquake/, 1998.

[533] J. W. Polderman and J. C. Willems. Introduction to Mathematical Systems Theory: A Behavioral Approach. Springer Verlag, 1998.

[534] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko. The Mathematical Theory of Optimal Processes. John Wiley, New York, 1962.

[535] V. M. Popov. Absolute stability of nonlinear systems of automatic control. Automation and Remote Control, 22:857–875, 1961.

[536] V. M. Popov. Hyperstability of Control Systems. Springer-Verlag, Berlin, FRG, 1973.

[537] Ilkka Porkka, editor. IFAC Activities in Finland 1956–2006. Finnish Society of Automation, Helsinki, Finland, 2006.

[538] A. Porter. Introduction to Servomechanisms. Methuen, London, 1950.

[539] Arthur Porter. The Servo Panel - a unique contribution to control systems engineering. Electronics & Power, 11:330–333, 1965.

[540] J. E. Potter. A guidance-navigation separation theorem. Rep. RE-11, Experimental Astronomy Laboratory, Massachusetts Institute of Technology, 1964.

[541] L. Praly. Robustness of model reference adaptive control. In Proceedings III Yale Workshop on Adaptive Systems, pages 224–226, 1983.

[542] L. Praly. Robust model reference adaptive controllers, Part I: Stability analysis. In Proc. 23rd IEEE Conf. on Decision and Control, pages 1009–1014, 1984.

[543] R. S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Science/Engineering/Math, New York, NY, 2004.

[544] David M. Prett, Carlos E. García, and Brian L. Ramaker. The Second Shell Process Control Workshop. Butterworths, Stoneham, MA, 1990.

[545] David M. Prett and Manfred Morari. Shell Process Control Workshop. Butterworths, 1987.

[546] David M. Prett, Brian L. Ramaker, and Charles R. Cutler. Dynamic matrix control method, September 14, 1982. US Patent 4,349,869.

[547] PROFIBUS. Process Field Bus. http://www.profibus.com/, 1986.

[548] P. Profos. Professor Stodola's contribution to control theory. Journal of Dynamic Systems, Measurement, and Control, 98:119–120, 1976.

[549] Claudius Ptolemaeus, editor. System Design, Modeling, and Simulation using Ptolemy II. Ptolemy.org, http://ptolemy.org/systems, 2014.

[550] Pupin. http://en.wikipedia.org/wiki/Mihajlo_Pupin.

[551] S. Joe Qin and Thomas A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11(7):733–764, 2003.

[552] Michael J. Rabins. 50th anniversary issue of DSCD. ASME Journal of Dynamic Systems, Measurement, and Control, 115:219, 1993.

[553] J. R. Ragazzini and G. F. Franklin. Sampled-Data Control Systems. McGraw-Hill, New York, 1958.

[554] J. R. Ragazzini, R. H. Randall, and F. A. Russell. Analysis of problems in dynamics by electronic circuits. Proc. IRE, 35:444–452, 1947.

[555] Raj Rajkumar. A cyber-physical future. Proceedings of the IEEE, 100:1309–1312, 2012.

[556] Peter J. Ramadge and W. Murray Wonham. Supervisory control of a class of discrete event processes. SIAM Journal on Control and Optimization, 25(1):206–230, 1987.

[557] Peter J. G. Ramadge and W. Murray Wonham. The control of discrete event systems. Proceedings of the IEEE, 77(1):81–98, 1989.

[558] Karl J. Åström and Richard M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, 2008.

[559] J. B. Rawlings and D. Q. Mayne. Model Predictive Control: Theory and Design. Nob Hill Publishing, Madison, Wisconsin, 2009.

[560] Kent C. Redmond and Thomas M. Smith. Project Whirlwind: The History of a Pioneer Computer. Digital Press, Bedford, MA, 1980.

[561] Kent C. Redmond and Thomas M. Smith. From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer. MIT Press, Cambridge, MA, 2000.

[562] Glenn E. Reeves. What really happened on Mars? http://research.microsoft.com/en-us/um/people/mbj/mars_pathfinder/Authoritative_Account.html, 1997.

[563] J. Richalet and D. O'Donovan. Predictive Functional Control: Principles and Industrial Applications. Springer, New York, NY, 2009.

[564] J. Richalet, A. Rault, J. L. Testud, and J. Papon. Model predictive heuristic control: Applications to industrial processes. Automatica, 14(5):413–428, 1978.

[565] Barry Richmond. STELLA. http://www.iseesystems.com/softwares/Education/StellaSoftware.aspx, 1985.

[566] C. Rohrs, L. S. Valavani, M. Athans, and G. Stein. Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics. IEEE Transactions on Automatic Control, AC-30:881–889, 1985.

[567] Charles E. Rohrs, Lena Valavani, Michael Athans, and Gunter Stein. Analytical verification of undesirable properties of direct model reference adaptive control algorithms. In Proceedings of the 20th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes, volume 20, pages 1272–1284. IEEE, 1981.

[568] H. H. Rosenbrock and P. D. McMorran. Good, bad or optimal? IEEE Transactions on Automatic Control, AC-16(6):529–552, 1971.

[569] E. J. Routh. A Treatise on the Stability of a Given State of Motion. Macmillan, London, 1877. Reprinted in Fuller (1975).

[570] Aleksandr Nikolaevich Rybko and A. L. Stolyar. Ergodicity of stochastic processes describing the operation of open queueing networks. Problemy Peredachi Informatsii, 28(3):3–26, 1992.

[571] M. G. Safonov and M. K. H. Fan. Editorial: Special issue on multivariable stability margin. International Journal of Robust and Nonlinear Control, 7:97–103, 1997.

[572] Michael G. Safonov. Origins of robust control: Early history and future speculations. Annual Reviews in Control, 36:173–181, 2012.


[573] A. Sahai and S. Mitter. The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link: Part I - Scalar systems. IEEE Transactions on Information Theory, 52(8):3369–3395, 2006.

[574] P. Sahlin, A. Bring, and E. F. Sowell. The Neutral Model Format for building simulation, Version 3.02. Technical report, Department of Building Sciences, The Royal Institute of Technology, Stockholm, Sweden, June 1996.

[575] Tariq Samad and Gary Balas. Software-Enabled Control. IEEE Press, Piscataway, New Jersey, 2003.

[576] Tariq Samad and Anuradha Annaswamy, editors. The Impact of Control Technology - Overview, Success Stories, and Research Challenges. IEEE Control Systems Society, 2011.

[577] Irwin W. Sandberg. Frequency-domain criteria for the stability of nonlinear feedback systems. Proc. National Electronics Conference, 20:737–740, 1964.

[578] Irwin W. Sandberg. A frequency-domain condition for stability of feedback systems containing a single time-varying nonlinear element. Bell System Technical Journal, 43:1601–1608, 1964.

[579] Irwin W. Sandberg. Some results on the theory of physical systems governed by nonlinear functional equations. Bell System Technical Journal, 44:871–898, 1965.

[580] Shankar Sastry and Marc Bodson. Adaptive Control: Stability, Convergence and Robustness. Prentice-Hall, Englewood Cliffs, NJ, 1989.

[581] H. Scarf. The optimality of (S, s) policies in the dynamic inventory problem. Mathematical Methods in the Social Sciences, pages 196–202, 1960.

[582] Stuart E. Schechter, Jaeyeon Jung, and Arthur W. Berger. Fast detection of scanning worm infections. In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection, pages 59–81, 2004.

[583] C. Scherer, P. Gahinet, and M. Chilali. Multiobjective output-feedback control via LMI optimization. IEEE Transactions on Automatic Control, 42:896–911, 1997.

[584] Bianca Scholten. The Road to Integration: A Guide to Applying the ISA-95 Standard in Manufacturing. International Society of Automation (ISA), 2007. ISBN-13: 978-0-9792343-8-5.

[585] Max Schuler. Einführung in die Theorie der selbsttätigen Regler. Akademie Verlag, Leipzig, 1956.

[586] Maximilian Schuler. Die Störung von Pendel- und Kreiselapparaten durch die Beschleunigung des Fahrzeuges (Disturbance of pendulous and gyroscopic devices by acceleration of the vehicle on which they are mounted). Phys. Z., 24:334–350, 1923.

[587] Carla Seatzu, Manuel Silva, and Jan van Schuppen, editors. Control of Discrete-Event Systems. Springer, New York, NY, 2012.

[588] Thomas I. Seidman. "First come, first served" can be unstable! IEEE Transactions on Automatic Control, 39(10):2166–2171, 1994.

[589] R. G. Selfridge. Coding a general purpose digital computer to operate as a differential analyzer. In Proceedings 1955 Western Joint Computer Conference, IRE, 1955.

[590] M. M. Seron, J. H. Braslavsky, Petar V. Kokotovic, and David Q. Mayne. Feedback limitations in nonlinear systems: From Bode integrals to cheap control. IEEE Transactions on Automatic Control, 44(4):829–833, 1999.

[591] María M. Seron, Julio H. Braslavsky, and Graham C. Goodwin. Fundamental Limitations in Filtering and Control. Springer-Verlag, London, UK, 1997.

[592] D. Seto, B. Krogh, L. Sha, and A. Chutinan. The Simplex architecture for safe online control system upgrades. In Proceedings of the 1998 American Control Conference, volume 6, pages 3504–3508. IEEE, 1998.

[593] Lui Sha. Using simplicity to control complexity. IEEE Software, 18(4):20–28, 2001.

[594] Lui Sha, Tarek Abdelzaher, Karl-Erik Årzén, Anton Cervin, Theodore Baker, Alan Burns, Giorgio Buttazzo, Marco Caccamo, John Lehoczky, and Aloysius K. Mok. Real-time scheduling theory: A historical perspective. Real-Time Systems, 28(2–3):101–155, November 2004.

[595] Sunil C. Shah, Michel A. Floyd, and Larry L. Lehman. MATRIXx: Control design and model building CAE capability. In M. Jamshidi and C. J. Herget, editors, Computer-Aided Control Systems Engineering, pages 181–207. Elsevier Science Publishers B.V. (North-Holland), 1985.

[596] Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27(1):1–51, 2013.

[597] Claude E. Shannon. A symbolic analysis of relay and switching circuits. Electrical Engineering, 57(12):713–723, 1938.

[598] Claude Elwood Shannon. Mathematical theory of the differential analyzer. Journal of Mathematics and Physics, 20:337–354, 1941.

[599] Claude Elwood Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.

[600] L. S. Shapley. Stochastic games. In Proceedings of the National Academy of Sciences, volume 39, pages 1095–1100, 1953.

[601] Marcel J. Sidi. Spacecraft Dynamics and Control. Cambridge University Press, Cambridge, UK, 1997.

[602] M. Simaan and J. B. Cruz, Jr. On the Stackelberg strategy in nonzero sum games. Journal of Optimization Theory and Applications, 11(6):533–555, 1973.

[603] H. A. Simon. Dynamic programming under uncertainty with a quadratic criterion function. Econometrica, 24:74–81, 1956.

[604] Sigurd Skogestad and Ian Postlethwaite. Multivariable Feedback Control: Analysis and Design. John Wiley, Chichester, England, 1996.

[605] Jill Slay and Michael Miller. Lessons learned from the Maroochy water breach. In Eric Goetz and Sujeet Shenoi, editors, Critical Infrastructure Protection, volume 253 of IFIP International Federation for Information Processing, chapter 6, pages 73–82. Springer Boston, Boston, MA, 2007.

[606] R. D. Smallwood and E. J. Sondik. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 21:1071–1088, 1973.

[607] E. S. Smith. Automatic Control Engineering. McGraw Hill, New York, NY, 1944.

[608] C. P. Snow. Science and Government - The Godkin Lectures at Harvard University 1960. The New American Library, New York, NY, 1962.

[609] D. L. Snyder. Filtering and detection for doubly stochastic Poisson processes. IEEE Transactions on Information Theory, 18(1):91–102, 1972.


[610] D. Snyder and P. Fishman. How to track a swarm of fireflies by observing their flashes (Corresp.). IEEE Transactions on Information Theory, 21(6):692–695, 1975.

[611] Torsten Söderström and Petre Stoica. System Identification. Prentice-Hall, London, UK, 1989.

[612] Python Software Foundation. Python. http://www.python.org/, 2001.

[613] V. Solo. The convergence of AML. IEEE Transactions on Automatic Control, 24(6):958–962, 1979.

[614] V. V. Solodovnikov. The frequency-response method in the theory of regulation. Avtomatika i Telemekhanika, 8:65–88, 1947.

[615] V. V. Solodovnikov. Introduction to the Statistical Dynamics of Automatic Control Systems. Dover, New York, NY, 1952. Dover edition 1960.

[616] V. V. Solodovnikov, editor. Foundations of Automatic Regulation: Theory. Mashgiz, Moscow, 1954.

[617] Jianping Song, Song Han, A. K. Mok, Deji Chen, M. Lucas, and M. Nixon. WirelessHART: Applying wireless technology in real-time industrial process control. In Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium, pages 377–386, 2008.

[618] Eduardo D. Sontag and Yuan Wang. On characterization of the input-to-state stability property. Systems & Control Letters, 24:351–359, 1995.

[619] Matthijs T. J. Spaan and Nikos Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24(1):195–220, 2005.

[620] John A. Stankovic. Vesta toolset for constructing and analyzing component based embedded systems. In Embedded Software, pages 390–402. Springer, 2001.

[621] A. W. Starr and Y. C. Ho. Nonzero-sum differential games. Journal of Optimization Theory and Applications, 3(3):184–206, 1969.

[622] G. Stein. Adaptive flight control: A pragmatic view. In K. S. Narendra and R. V. Monopoli, editors, Applications of Adaptive Control. Academic Press, New York, NY, 1980.

[623] G. Stein. Respect the unstable. IEEE Control Systems Magazine, 23(4):12–25, 2003.

[624] Charles Proteus Steinmetz. Theory and Calculation of Alternating Current Phenomena, volume 4. McGraw-Hill, New York, NY, 1916.

[625] Robert F. Stengel. Flight Dynamics. Princeton University Press, Princeton, NJ, 2004.

[626] A. B. Stodola. Über die Regulierung von Turbinen. Schweizerische Bauzeitung, 22:113–135, 1893.

[627] Keith Stouffer, Joe Falco, and Karen Scarfone. Guide to supervisory control and data acquisition (SCADA) and industrial control systems security.

[628] Ruslan L. Stratonovich. On the theory of optimal non-linear filtering of random functions. Theory of Probability and its Applications, 4:223–225, 1959.

[629] Ralph E. Strauch. Negative dynamic programming. Ann. Math. Statist., 37:871–890, 1966.

[630] J. C. Strauss, editor. The SCi continuous system simulation language (CSSL). Simulation, 9:281–303, 1967.

[631] C. Striebel. Sufficient statistics in the optimum control of stochastic systems. J. Mathematical Analysis and Applications, 12:579–592, 1965.

[632] J. Strothman. More than a century of measuring and controlling industrial processes. InTech, pages 52–78, June 1995.

[633] Hector J. Sussmann and Velimir Jurdjevic. Controllability of nonlinear systems. Journal of Differential Equations, 12:95–116, 1972.

[634] Peter Swerling. First-order error propagation in a stagewise smoothing procedure for satellite observations. Technical report, Rand Corporation, Santa Monica, CA, 1959.

[635] Gang Tao and Petar V. Kokotovic. Adaptive Control of Systems with Actuator and Sensor Nonlinearities. John Wiley & Sons, Inc., 1996.

[636] Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38:58–68, 1995.

[637] George J. Thaler and Robert G. Brown. Servomechanism Analysis. McGraw-Hill, New York, 1953.

[638] H. Theil. A note on certainty equivalence in dynamic programming. Econometrica, 25:346, 1959.

[639] William Thomson. Mechanical integration of linear equations of second order with variable coefficients. Proc. Roy. Soc., 24:269, 1876.

[640] William Thomson. Harmonic analyzer. Proc. Roy. Soc., 27:371–373, 1878.

[641] Sebastian Thrun. Monte Carlo POMDPs. Advances in Neural Information Processing Systems, 12(12):1064–1070, 2000.

[642] D. Teichroew, John Francis Lubin, and Thomas D. Truitt. Discussion of computer simulation and comparison of languages. Simulation, 9:181–190, 1967.

[643] M. Tiller. Introduction to Physical Modeling with Modelica. Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Norwell, MA, 2001. Second edition 2004.

[644] M. Tolle. Die Regelung der Kraftmaschinen. Springer-Verlag, Berlin, 1905. 2nd edition 1909, 3rd edition 1922.

[645] John Truxal. Automatic Feedback Control System Synthesis. McGraw-Hill, New York, NY, 1955.

[646] H. S. Tsien. Engineering Cybernetics. McGraw-Hill Book Company, Inc., New York, NY, 1954.

[647] Ya. Z. Tsypkin. Theory of intermittent control. Avtomatika i Telemekhanika, 1949, 1950. Vol. 10 (1949) Part I, pp. 189–224; Part II, pp. 342–361; Vol. 11 (1950) Part III, pp. 300–331.

[648] Ya. Z. Tsypkin. Theorie der Relais Systeme der Automatischen Regelung. R. Oldenbourg, Munich, Germany, 1958.

[649] Ya. Z. Tsypkin. Adaptation and Learning in Automatic Systems. Academic Press, New York, NY, 1971.

[650] Ya. Z. Tsypkin. Relay Control Systems. Cambridge University Press, Cambridge, UK, 1984.

[651] Ya. Z. Tsypkin. Theory of Impulse Systems. State Publisher for Physical Mathematical Literature, Moscow, 1958.

[652] Yakov Z. Tsypkin. A Russian life in control. IEE Review, 38:313–316, 1992.

[653] A. Tustin. Mechanisms of Economic Systems. Heinemann, London, 1953.

[654] Arnold Tustin. The effects of backlash and of speed dependent friction on the stability of closed-cycle control systems. Journal IEE Pt. IIa, 94:143–151, 1947.

[655] Arnold Tustin. A method of analysing the behaviour of linear systems in terms of time series. J. Institution of Electrical Engineers, 94:130–142, 1947.


[656] Arnold Tustin. The nature of the operator's response in manual control and its implications for controller design. J. Institution of Electrical Engineers, 94:190–207, 1947.

[657] Arnold Tustin. Automatic and Manual Control: Papers Contributed to the Conference at Cranfield, 1951. Butterworths, London, 1952.

[658] Arnold Tustin, editor. Automatic and Manual Control: Proceedings of the 1951 Cranfield Conference, New York, NY, 1952. Academic Press.

[659] Mac E. Van Valkenburg. Network Analysis. Prentice Hall, Englewood Cliffs, NJ, 1955. Third edition 1974.

[660] Van Jacobson. Congestion avoidance and control. ACM Computer Communication Review, pages 314–329, August 1988.

[661] J. H. van Schuppen. Filtering, prediction and smoothing for counting process observations, a martingale approach. SIAM Journal on Applied Mathematics, 32(3):552–570, 1977.

[662] Arthur F. Veinott. Optimal policy for a multi-product, dynamic, nonstationary inventory problem. Management Science, 12(3):206–222, 1965.

[663] M. Vidyasagar. Nonlinear Systems Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1978. Second edition, SIAM Classics in Applied Mathematics, 2002.

[664] Mathukumalli Vidyasagar. Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, MA, 1985.

[665] L. Viklund and P. Fritzson. ObjectMath: An object-oriented language and environment for symbolic and numerical processing in scientific computing. Scientific Programming, 4:229–250, 1995.

[666] Glenn Vinnicombe. Uncertainty and Feedback: H∞ Loop-Shaping and the ν-Gap Metric. Imperial College Press, 2001.

[667] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ, 2nd edition, 1947.

[668] John von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100:295–320, 1928. Translated as "On the Theory of Games of Strategy", pp. 13–42 in Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40) (A. W. Tucker and R. D. Luce, eds.), Princeton University Press, Princeton, 1959.

[669] A. A. Voronov. Elements of the Theory of Automatic Regulation. Voenizdat, Moscow, 1954.

[670] J. Vyshnegradskii. Sur la théorie générale des régulateurs. Compt. Rend. Acad. Sci. Paris, 83:318–321, 1876.

[671] J. Walrand and P. Varaiya. Flows in queueing networks: a martingale approach. Mathematics of Operations Research, 6(3):387–404, 1981.

[672] Qinan Wang. Control policies for multi-echelon inventory systems with stochastic demand. In Supply Chain Coordination under Uncertainty, pages 83–108. Springer, 2011.

[673] G. W. Washington. Matrix revolution: Sikorsky research program aims at autonomy for any vertical-lift vehicle, manned or unmanned. Aviation Week, 2013.

[674] Nelson Wax. Selected Papers on Noise and Stochastic Processes. Dover, New York, NY, 1954.

[675] Lawrence M. Wein. Scheduling semiconductor wafer fabrication. IEEE Transactions on Semiconductor Manufacturing, 1(3):115–130, 1988.

[676] J. C. West. Textbook of Servomechanisms. EUP, London, 1953.

[677] J. C. West. Forty years in control. Proc. IEE Pt. A, 132:1–8, 1985.

[678] Eric R. Westervelt, Jessy W. Grizzle, Christine Chevallereau, Jun Ho Choi, and Benjamin Morris. Feedback Control of Dynamic Bipedal Robot Locomotion. CRC Press, Taylor & Francis, Boca Raton, FL, 2007.

[679] H. P. Whitaker, J. Yamron, and A. Kezer. Design of model-reference adaptive control systems for aircraft. Report R-164, Instrumentation Laboratory, Massachusetts Institute of Technology, 1958.

[680] B. Widrow. Generalization and information storage in networks of Adaline neurons. In Yovits et al., editors, Self-Organizing Systems. Spartan Books, Washington, DC, 1962.

[681] B. Widrow and S. D. Stearns. Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1985.

[682] Norbert Wiener. Cybernetics: or Control and Communication in the Animal and the Machine. MIT Press, Cambridge, MA, 1948.

[683] Norbert Wiener. The Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications. Wiley, New York, NY, 1949. Originally issued as a classified MIT Rad. Lab. report in February, 1942.

[684] K. L. Wildes and N. A. Lindgren. A Century of Electrical Engineering and Computer Science at MIT, 1882–1982. MIT Press, Cambridge, MA, 1986.

[685] Jan C. Willems. Dissipative dynamical systems, Part I: General theory. Archive for Rational Mechanics and Analysis, 45:321–351, 1972.

[686] H. S. Witsenhausen. A counterexample in stochastic optimal control. SIAM Journal on Control, 6:131–147, 1968.

[687] H. S. Witsenhausen. On information structures, feedback and causality. SIAM Journal on Control, 9:149–160, 1971.

[688] H. S. Witsenhausen. Separation of estimation and control for discrete time systems. IEEE Proceedings, 59:1557–1566, 1971.

[689] Wing Shing Wong and Roger W. Brockett. Systems with finite communication bandwidth constraints, II: Stabilization with limited information feedback. IEEE Transactions on Automatic Control, 44(5):1049–1053, 1999.

[690] WorldFIP. Factory Instrumentation Protocol. http://www.worldfip.org/, 1982.

[691] Jingchen Wu and Xiuli Chao. Optimal control of a Brownian production/inventory system with average cost criterion. Mathematics of Operations Research, 2013.

[692] V. A. Yakubovich. Solution of certain matrix inequalities encountered in nonlinear control theory. Soviet Math. Doklady, 5:652–656, 1964.

[693] B. Erik Ydstie. Stability of discrete model reference adaptive control revisited. Systems & Control Letters, 13(5):429–438, 1989.

[694] Lars Erik Zachrisson. Markov games. In M. Dresher, L. S. Shapley, and A. W. Tucker, editors, Advances in Game Theory, pages 211–253. Princeton University Press, 1964.

[695] L. A. Zadeh and C. A. Desoer. Linear System Theory: The State Space Approach. McGraw-Hill Series in Systems Science. McGraw-Hill, Inc., New York, NY, 1963.

[696] Moshe Zakai. On the optimal filtering of diffusion processes. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 11(3):230–243, 1969.

[697] George Zames. On the stability of nonlinear time-varying feedback systems. Proc. National Electronics Conference, 20:725–730, 1964.


[698] George Zames. On the input-output stability of non-linear time-varying feedback systems, Part 1: Conditions derived using concepts of loop gain, conicity, and positivity. IEEE Trans. Automatic Control, AC-11:228–238, 1966.

[699] George Zames. On the input-output stability of non-linear time-varying feedback systems, Part 2: Conditions involving circles in the frequency plane and sector nonlinearities. IEEE Trans. Automatic Control, AC-11:465–477, 1966.

[700] George Zames. Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms and approximate inverses. IEEE Trans. Automatic Control, AC-26:301–320, 1981.

[701] Kemin Zhou and John Doyle. Essentials of Robust Control. Prentice Hall, Englewood Cliffs, NJ, 1997.

[702] Kemin Zhou, John Doyle, and Keith Glover. Robust and Optimal Control. Prentice Hall, Englewood Cliffs, NJ, 1996.

[703] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Trans. ASME, 64:759–768, 1942.
