Wegner, Peter - Why Interaction Is More Powerful Than Algorithms


82 May 1997/Vol. 40, No. 5 COMMUNICATIONS OF THE ACM

The folk wisdom that marriage contracts cannot be reduced to sales contracts is made precise by showing that interaction cannot be expressed by algorithms. Contracts over time include algorithms that instantaneously transform inputs to outputs as special cases.

Interactive tasks, like driving home from work, cannot be realized through algorithms. Algorithms that execute automatically without taking notice of their surroundings cannot handle traffic and other interactive events. Algorithms are surprisingly versatile, but problems are more difficult to solve when interaction is precluded (closed-book exams are more difficult than open-book exams). Interaction can simplify tasks when algorithms exist and is the only game in town for inherently interactive tasks, like driving or reserving a seat on an airline.

Smart bombs that interactively check observations of the terrain against a stored map of their routes are “smart” because they enhance algorithms with interaction. Smartness in mechanical devices is often realized through interaction that enhances dumb algorithms so they become smart agents. Algorithms are metaphorically dumb and blind because they cannot adapt interactively while they compute. They are autistic in performing tasks according to rules rather than through interaction. In contrast, interactive systems are grounded in an external reality both more demanding and richer in behavior than the rule-based world of noninteractive algorithms.

Objects can remember their past and interact with clients through an interface of operations that share a hidden state. An object’s operations are not algorithms, because their response to messages depends on a shared state accessed through nonlocal variables of operations. The effect of a bank account’s check-balancing operation is not uniquely determined by the operation alone, since it depends on changes of state by deposit and withdraw operations that cannot be predicted or controlled. An object’s operations return results that depend on changes of state controlled by unpredictable external actions.
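The bank-account example can be rendered as a short sketch (a minimal, hypothetical illustration; the class and method names are ours, not notation from the article):

```python
class Account:
    """A bank-account object: its operations share hidden state (the balance)."""

    def __init__(self):
        self._balance = 0  # hidden shared state, not a parameter of any operation

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        self._balance -= amount

    def check_balance(self):
        # Not an algorithm: the result depends on the history of deposits
        # and withdrawals by external clients, not on arguments alone.
        return self._balance


acct = Account()
print(acct.check_balance())  # 0
acct.deposit(100)            # an external, unpredictable client action
print(acct.check_balance())  # 100: same call, different result
```

The same call to `check_balance` returns different results at different times, which is exactly why no input-to-output function specifies the operation by itself.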

The growing pains of software technology are due to the fact that programming in the large is inherently interactive and cannot be expressed by or reduced to programming in the small. The behavior of airline reservation systems and other embedded systems cannot be expressed by algorithms. Fred Brooks’s persuasive argument [1] that there is no silver bullet for specifying complex systems is a consequence of the irreducibility of interactive systems to algorithms. If silver bullets are interpreted as formal (or algorithmic) system specifications, the nonexistence of silver bullets can actually be proved.

Artificial intelligence has undergone a paradigm shift from logic-based to interactive (agent-oriented) models paralleling that in software engineering. Interactive models provide a common framework for agent-oriented AI, software engineering, and system architecture [10].

Though object-based programming has become a dominant practical technology, its conceptual framework and theoretical foundations are still unsatisfactory; it is fashionable to say that everyone talks about it but no one knows what it is. “Knowing what it is” has proved elusive because of the implicit assumption that explanations must specify “what it is” in terms of algorithms. Accepting irreducibility as a fact of life has a liberating effect; “what it is” is more naturally defined in terms of interactive models.

From Turing Machines to Interaction Machines

The British mathematician, computer pioneer, and World War II code-breaker Alan Turing showed in the 1930s that algorithms in any programming language have the same transformation power as Turing machines [6]. We call the class of functions computable by algorithms and Turing machines the “computable functions.” This precise characterization of what can be computed established the respectability of computer science as a discipline. However, the inability to compute more than the computable functions by adding new primitives proved so frustrating that this limitation of Turing machines was also called the “Turing tarpit.” Interactive computing lets us escape from the gooey confines of the Turing tarpit.

Figure 1. A Turing machine is a closed, noninteractive system, shutting out the external world while it computes. [Diagram: a control device with a read/write head over an infinite tape of symbols.]

Figure 2. The ant’s path reflects the complexity of the beach.

Turing machines transform strings of input symbols on a tape into output strings by sequences of state transitions (see Figure 1). Each step reads a symbol from the tape, performs a state transition, writes a symbol on the tape, and moves the reading head. Turing machines cannot, however, accept external input while they compute; they shut out the external world and are therefore unable to model the passage of external time.
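The step cycle just described can be sketched as a closed loop (a toy machine with a made-up transition table; all names here are illustrative, not from the article):

```python
def run_turing_machine(transitions, tape, state="start", head=0, max_steps=100):
    """Run a closed, noninteractive Turing machine: once started, it
    accepts no further input from the outside world."""
    tape = dict(enumerate(tape))  # sparse tape; missing cells are blank '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        # one step: read a symbol, transition, write a symbol, move the head
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))


# A toy machine that flips bits until it reaches a blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip, "1001011"))  # 0110100_
```

Note that nothing in the loop consults the environment; the entire computation is determined by the initial tape and the transition table.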

The hypothesis (aka Church’s thesis) that the formal notion of computability by Turing machines corresponds to the intuitive notion of what is computable has been accepted as obviously true for 50 years. However, when the intuitive notion of what is computable is broadened to include interactive computations, Church’s thesis breaks down. Though the thesis is valid in the narrow sense that Turing machines express the behavior of algorithms, the broader assertion that algorithms capture the intuitive notion of what computers compute is invalid.

Turing machines extended by addition of input and output actions that support dynamic interaction with an external environment are called “interaction machines.” Though interaction machines are a simple and obvious extension of Turing machines, this small change increases expressiveness so it becomes too rich for nice mathematical models. Interaction machines may have single or multiple input streams and synchronous or asynchronous actions and can also differ along many other dimensions. Distinctions among interaction machines are examined in [9], but all forms of interaction transform closed systems to open systems and express behavior beyond that computable by algorithms, as indicated by the following informal argument:

Claim: Interaction-machine behavior is not reducible to Turing-machine behavior.

Informal evidence of richer behavior: Turing machines cannot handle the passage of time or interactive events that occur during the process of computation.

Formal evidence of irreducibility: Input streams of interaction machines are not expressible by finite tapes, since any finite representation can be dynamically extended by uncontrollable adversaries.

The radical view that Turing machines are not the most powerful computing mechanisms has a distinguished pedigree. It was accepted by Turing, who showed in 1939 that Turing machines with oracles (like the oracle at Delphi) were more powerful than Turing machines without oracles. Milner [3] noticed as early as 1975 that concurrent processes cannot be expressed by sequential algorithms, while Manna and Pnueli [2] showed that nonterminating reactive processes, like operating systems, cannot be modeled by algorithms. Gödel’s discovery that the integers cannot be described completely through logic, demonstrating the limitations of formalism in mathematics, may be adapted to show that interaction machines cannot be completely described by first-order logic.

Input and output actions are logical sensors and effectors that affect external data even when they have no physical effect. Objects and robots have similar interactive models of computation; robots differ from objects only in that their sensors and effectors have physical rather than logical effects. Interaction machines can model objects, software engineering applications, robots, intelligent agents, distributed systems, and networks, like the Internet and the World-Wide Web.

The observable behavior of interaction machines is specified by interaction histories describing the behavior of systems over time. In the case of simple objects, like bank accounts with deposit and withdraw operations, histories are described by streams of operations called traces. Operations whose effects depend on the time of their occurrence, as in interest-bearing bank accounts, require time-stamped traces. Objects with inherently nonsequential interfaces, like joint bank accounts accessed from multiple automatic teller machines, have inherently nonsequential interaction histories. Interaction histories of distributed systems, like the history in history books, consist of nonsequential events that may have duration and may interfere with each other.
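A time-stamped trace for an interest-bearing account might look like the following (a minimal sketch; the representation and rate are our own assumptions):

```python
# A time-stamped trace: the observable interaction history of an account.
trace = [
    (0,  "deposit",  100),  # (time in days, operation, amount)
    (30, "withdraw",  40),
    (60, "deposit",   25),
]


def replay(trace, daily_rate=0.001):
    """Replay a time-stamped trace; for an interest-bearing account the
    result depends on WHEN operations occur, not merely on their order."""
    balance, last_time = 0.0, 0
    for time, op, amount in trace:
        balance *= (1 + daily_rate) ** (time - last_time)  # interest accrues
        balance += amount if op == "deposit" else -amount
        last_time = time
    return round(balance, 2)


print(replay(trace))
```

Replaying the same operations with all timestamps set to zero yields a different balance, which is why plain operation streams suffice for simple accounts but interest-bearing accounts need time stamps.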

Whereas interaction histories express the external unfolding of events in time, instruction-execution histories express an ordering of inner events of an algorithm without any relation to the actual passage of time. Algorithmic time is intentionally measured by number of instructions executed, rather than by the actual time taken by execution, in order to provide a hardware-independent measure of logical complexity. In contrast, the duration and the time elapsing between the execution of operations may be interactively significant. Operation sequences are interactive streams with temporal as well as functional properties, while instruction sequences describe inner state-transition semantics.

Pure Interaction, Judo, and the Management Paradigm

Raw interactive power is captured by interactive-identity machines (IIMs) that output their input immediately without transforming it. IIMs are simple transducers that realize nonalgorithmic behavior by harnessing the computing power of the environment. They may be described in any of a number of equivalent programming-language notations:

loop input(message); output(message); endloop

or

while true do echo input end while

or

P = in(message).out(message).P
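All three notations describe the same echo behavior. A runnable sketch, modeling the environment as a Python iterator rather than a live input channel (an assumption we make for testability):

```python
def interactive_identity_machine(environment):
    """Echo every input back to the environment, unchanged.

    The machine performs no transformation at all; any 'intelligence'
    in its output stream comes entirely from the input stream it harnesses.
    """
    for message in environment:
        yield message


# The environment supplies the stream; the IIM merely reflects it.
moves = ["e4", "Nf3", "Bb5"]
print(list(interactive_identity_machine(iter(moves))))  # ['e4', 'Nf3', 'Bb5']
```

The body is a single echo step, yet the machine's observable behavior is as rich as whatever environment it is plugged into.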

IIMs employ the judo principle of using the weight of adversaries (or cooperating agents) to achieve a desired effect. They realize the management paradigm, coordinating and delegating tasks without necessarily understanding their technical details. Though IIMs are not inherently intelligent, they can behave intelligently by replicating intelligent inputs from the environment:

Claim: Interactive identity machines have richer behavior than Turing machines.

Evidence: An IIM can mimic any Turing machine and any input stream from the environment.

Interaction machines are not simply a theoretical trick. They embody the behavior of managers who rely on subordinates to perform substantive problem-solving tasks. A person ignorant of chess can win half the games in a simultaneous chess match against two masters by echoing the moves of one opponent on the board of the other. A chess machine M can make use of intelligent input actions through one interface to deliver intelligent outputs through another interface; though unintelligent by itself, M harnesses the intelligence of one player to respond intelligently to a second player. Clients of M, like player A, are unaware of the interactive help M receives through its interface to B. From A’s point of view, M is like von Kempelen’s 18th-century chess machine, whose magical mastery of chess was due to a hidden human chess master concealed in an inner compartment. From A’s viewpoint, B’s actions through a hidden interface are indistinguishable from those of a daemon hidden inside the machine.
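The simultaneous-match trick is itself a pure relay, which can be sketched in a few lines (an illustrative toy; the move lists and labels are invented for the example):

```python
def relay(moves_from_a, moves_from_b):
    """Play two opponents at once by echoing each one's moves to the other.

    The relay knows nothing about chess; each master is, in effect,
    playing the other, so the relay wins one game or draws both.
    """
    transcript = []
    for a_move, b_move in zip(moves_from_a, moves_from_b):
        transcript.append(("to B", a_move))  # A's move is replayed against B
        transcript.append(("to A", b_move))  # B's reply is replayed against A
    return transcript


print(relay(["e4", "Nf3"], ["e5", "Nc6"]))
```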

Simon’s well-known example [5] of the irregular behavior of an ant on a beach in finding its way home to an ant colony illustrates that complex environments cause simple interactive agents to exhibit complex behavior (see Figure 2). The computing mechanism of the ant is presumed to be simple, but the ant’s behavior reflects the complexity of the beach, where nonalgorithmic topography causes the ant to traverse a nonalgorithmic path. The behavior of ants on beaches cannot be described by algorithms because the set of all possible beaches cannot be so described.

Though interaction opens up limitless possibilities for harnessing the environment, it is entirely dependent on external resources, while machines with built-in algorithmic cleverness are not. High achievement, whether by machines or by people, can be realized either by self-sufficient inner cleverness or by harnessing the environment. The achievements of presidents of large corporations or of the president of the U.S. are dependent on the effective use of a supporting environmental infrastructure. Interaction scales up to very large problems better than inner cleverness, because it expresses delegation and coordination.

Interaction, Parallelism, Distribution, and Openness

Interaction, parallelism, and distribution are conceptually distinct:

• Interactive systems interact with an external environment they cannot control.


Figure 3. Design space for interactive computing. [Diagram: three axes: interactive (external inputs), parallel (simultaneous in time), and distributed (separated in space). Traditional sequential algorithms lie at the origin; noninteractive parallelism and distribution are algorithmic, while interaction is nonalgorithmic, even for identity machines.]

• Parallelism (concurrency) occurs when computations of a system overlap in time.

• Distribution occurs when components of a system are separated geographically or logically.

Parallel and distributed computation can, in the absence of interaction, be expressed by algorithms. Parallel algorithms, studied in many textbooks and university courses, are necessarily noninteractive. Distributed algorithms, in textbooks on distributed computing, are likewise noninteractive. The insight that interaction rather than parallelism or distribution is the key element in providing greater behavioral richness is nontrivial, requiring a fundamental reappraisal of the roles of parallelism and distribution in complex systems. The horizontal base plane of Figure 3 includes many interesting and important algorithmic systems; systems not in the base plane are nonalgorithmic, even when they are as simple as the interactive identity machine.

Agents in the environment can interact in a cooperative, neutral, or malicious way with an interaction machine. Adversaries interfering with algorithmic goals provide a measure of the limits of interaction-machine behavior. Synchronous adversaries control what inputs an agent receives, while asynchronous adversaries have additional power over when an input is received. Asynchronous adversaries who can decide when to zap an interaction machine are interactively more powerful than synchronous adversaries.

Interactiveness provides a natural and precise definition of the notion of open and closed systems. Open systems can be modeled by interaction machines, while closed systems are modeled by algorithms. Interaction machines provide a precise definition not only for open systems but for other fuzzy concepts, like empirical computer science and programming in the large. They robustly capture many alternative notions of interactive computing, just as Turing machines capture algorithmic computing [9].

Open systems have very rich behavior to handle all possible clients, while the individual interface demands of clients are often quite simple. Open systems whose unconstrained behavior is nonalgorithmic can become algorithmic by strongly constraining their interactive behavior [9], summarized as:

Supplied behavior of an interactive system >> demanded behavior at a given interface.

Interfaces are a primitive building block of interactive systems, playing the role of primitive instructions. Interactive programmers compose (plug together) interfaces, just as algorithmic programmers compose instructions. Interactive software technology for interoperability, patterns, and coordination can be expressed in terms of relations among interfaces [10]. Frameworks constrain the interface behavior of their constituent components to realize goal-directed behavior through graphical user interfaces [11]. Interfaces express the mode of use or pragmatics of an interactive system, complementing syntax and semantics (see Figure 4).

Closed systems with algorithmic behavior have open subsystems with nonalgorithmic behavior. For example, an engine of a car may behave unpredictably when a spark plug is removed. Animals (or people) with an established behavior routine may behave erratically in unfamiliar environments. Subsystems with predictable (algorithmic) constrained behavior have unpredictable (nonalgorithmic) behavior when the constraint is removed, and a greater range of possible behaviors must be considered in these situations.

Figure 4. A user’s view of computational modeling, using representation to express multiple meanings. [Three layers: Pragmatics (user view, or interface, including observation, use, control, prediction, design, description, and management of the modeled world); Syntax (representation that expresses static properties, including languages, machine formulas, and programs); Semantics (meaning that captures properties of objects in the modeled world).]

Interactiveness (openness) is nonmonotonic in that decomposition of systems may create interactive unpredictable subsystems, or equivalently, composition of interactive systems may produce noninteractive algorithms. In contrast, concurrency and distribution are monotonic; if a system is not concurrent or not distributed, all subsystems also have this property. Nonmonotonicity is an untidy formal property of interaction, since it implies that noninteractive systems with algorithmic behavior may have interactive subsystems whose behavior is nonalgorithmic.

Interfaces as Behavior Specifications

The negative result that interaction is not expressible by algorithms leads to positive new approaches to system modeling in terms of interfaces. Giving up the goal of complete behavior specification requires a psychological adjustment but makes respectable software-engineering methods of incomplete partial system specification by interfaces. Since a complete elephant cannot be specified, the focus shifts to specifying its parts and its forms of behavior (such as its trunk or its mode of eating peanuts). Complete specification must be replaced by partial specification of interfaces, views, and modes of use. Airline reservation systems can be partially specified by multiple interfaces:

• Travel agents: Making reservations on behalf of clients
• Passengers: Making direct reservations
• Airline desk employees: Making inquiries on behalf of clients and checking their tickets
• Flight attendants: Aiding passengers during the flight itself
• Accountants: Auditing and checking financial transactions
• System builders: Developing and modifying systems

Airline reservation systems have a large number of geographically distributed interfaces, each with a normal mode of use that may break down under abnormal overload conditions. The requirements of an airline reservation system may be specified by the set of all interfaces (modes of use) it should support. The description of systems by their modes of use is a starting point for system design; interfaces play a practical as well as a conceptual role in interactive-system technology.

A system satisfies its requirements if it supports specified modes of use, even though correct behavior for a given mode of use is not guaranteed and complete system behavior for all possible modes of use is unspecifiable. Though correctness of programs under carefully qualified conditions can still be proved, result checking is needed during execution to verify that results actually obtained are valid. Techniques for systematic online result checking will play an increasingly important practical and formal role as a supplement to off-line testing and verification. Result checking is performed automatically by people for such tasks as driving with visual feedback, but must be performed by instruments or programs as safeguards against airplane crashes and other costly embedded-system failures.
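Online result checking can be sketched as a wrapper that validates each result as it is produced (a generic illustration of the idea, not the article's own formulation; sorting is our choice of example):

```python
from collections import Counter


def checked_sort(xs, sort=sorted):
    """Run an untrusted sort, then check its result online: the output
    must be ordered and must be a permutation of the input."""
    result = sort(xs)
    assert all(a <= b for a, b in zip(result, result[1:])), "not ordered"
    assert Counter(result) == Counter(xs), "not a permutation of the input"
    return result


print(checked_sort([3, 1, 2]))  # [1, 2, 3]
try:
    checked_sort([3, 1, 2], sort=lambda xs: xs[:1])  # a buggy "sort"
except AssertionError as err:
    print("caught bad result:", err)
```

The check verifies only the result actually obtained on this run, which is exactly the modest guarantee the text calls for when full correctness proofs are unavailable.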

Interface descriptions are called harnesses, since they serve both to constrain system behavior (like the harness of a horse) and to harness behavior for useful purposes. Harnesses have a negative connotation as constraints and a positive connotation as specifications of useful behavior; interfaces focus primarily on the positive connotation. We distinguish between open harnesses, which permit interaction through other harnesses or exogenous events, and closed harnesses, which cause the system (together with its harness) to become closed and thereby algorithmic.

Airline reservation systems are naturally described by a collection of open harnesses. Microsoft’s Component Object Model (COM) describes components according to the interfaces they support. The key property possessed by every COM component is an interface directory through which the complete set of interfaces may be accessed. Every component has an interface, IUnknown, with a QueryInterface operation for checking the existence of an interface before it is used. Industrial-strength object-based models suggest that components with open-harness multiple interfaces are a more flexible framework than inheritance of interface functionality for modeling complex objects. Interaction machines conform more closely to industrial models like COM than to inheritance models.
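The interface-directory pattern can be sketched as follows (a simplified analogy in Python, not actual COM API usage; the interface names are invented):

```python
class Component:
    """A COM-style component: clients discover interfaces at runtime
    through a directory rather than relying on an inheritance hierarchy."""

    def __init__(self, interfaces):
        self._interfaces = dict(interfaces)  # interface name -> implementation

    def query_interface(self, name):
        # Analogous to COM's IUnknown::QueryInterface: check whether an
        # interface exists before using it; None signals "not supported".
        return self._interfaces.get(name)


printer = Component({"IPrint": lambda text: "printing " + text})
iface = printer.query_interface("IPrint")
print(iface("report"))                    # printing report
print(printer.query_interface("IScan"))   # None
```

A client that needs a capability asks for it by name and degrades gracefully when it is absent, which is the flexibility the text attributes to multiple open-harness interfaces.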

The goal of proving correctness for algorithms is replaced for interactive systems by the more modest goal of showing that components have collections of interfaces (harnesses) corresponding to desired forms of useful behavior. Interaction machines specifying software systems are described by multiple interfaces expressing functionality for different purposes. People are likewise better described as collections of interfaces; the behavioral effect of “thinking” is better modeled by interaction machines with multiple interfaces than by Turing machines.

From Rationalism to Empiricism

Plato’s parable of the cave, comparing humans to cave dwellers who observe only shadows of reality on their cave walls, not actual objects in the outside world, shows that observation cannot completely specify the inner structure or behavior of observed objects. For example, a person spending his or her entire life in the Blue Grotto sea cave in Capri, Italy, would imagine the outside world to be blue. Projections of light on our retinas serve as incomplete cues for constructing our world of solid tables and chairs.

Plato concluded that abstract ideas are more perfect and therefore more real than physical objects like tables and chairs. His skepticism concerning empirical science contributed to the 2,000-year hiatus in the evolution of empiricism. Empiricists accept the view that perceptions are reflections of reality but disagree with Plato on the nature of reality, believing that the physical world outside the cave is real but unknowable. Fortunately, complete knowledge is unnecessary for empirical models of physics because they achieve their pragmatic goals of prediction and control by dealing entirely with incomplete observable reflections.

Modern empirical science rejects Plato’s belief that incomplete knowledge is worthless, using partial descriptions (shadows) to control, predict, and understand the objects that shadows represent. Differential equations capture quantitative properties of phenomena they model without requiring a complete description. Similarly, computing systems can be specified by interfaces describing properties judged to be relevant while ignoring properties judged irrelevant. Plato’s cave, properly interpreted, is a metaphor for empirical abstraction in both natural science and computer science.

Turing machines correspond to Platonic ideals by focusing on mathematical models at the expense of empirical models. To realize logical completeness, they sacrifice the ability to model external interaction and real time. The extension from Turing to interaction machines, and from procedure-oriented to object-based systems, is the computational analog of liberation from the Platonic world view that led to development of empirical science. Interaction machines liberate computing from the Turing tarpit of algorithmic computation, providing a conceptual model for software engineering, AI agents, and the real (physical) world.

The contrast between algorithmic and interactive models parallels interdisciplinary distinctions in other domains of modeling, arising in its purest form in philosophy, where the argument between rationalists and empiricists has been central and passionate for more than 2,000 years. Descartes’ quest for certainty led to the rationalist credo “cogito ergo sum,” succinctly asserting that thinking is the basis of existence and implying that certain knowledge of the physical world is possible through inner processes of algorithmic thinking. Hume was called an empiricist because he showed that inductive inference and causality are not deductive and that rationalist models of the world are inadequate. Kant, “roused from his dogmatic slumbers” by Hume, wrote the Critique of Pure Reason to show that pure reasoning about necessarily true knowledge was inadequate to express contingently true knowledge of the real world.

Rationalism continued to have great appeal in spite of the triumph of empiricism in modern science. Hegel, whose “dialectical logic” extended reason beyond its legitimate domain, influenced political philosophers like Marx as well as mathematical thinkers like Russell. George Boole’s treatise The Laws of Thought demonstrated the influence of rationalism by equating logic with thought. Mathematical thought in the early 20th century was dominated by Russell’s and Hilbert’s rationalist reductions of mathematics to logic. Gödel’s incompleteness result showed the flaws of rationalist mathematics, but logicians and other mathematicians have not fully appreciated the limitations on formalism implied by Gödel’s work.

The term “fundamental,” as in “fundamental particles” or “foundations of mathematics,” is a code word for rationalist (reductionist) models in both physics and computing. The presence of this code word in terms like “religious fundamentalism” suggests that social and scientific rationalism have common roots in the deep-seated human desire for certainty. Rationalism is an alluring philosophy with many (dis)guises.


Though empiricism has displaced rationalism in the sciences, Turing machines reflect rationalist reasoning paradigms of logic rather than empirical paradigms of physics. Algorithms and Turing machines, like Cartesian thinkers, shut out the world during the process of problem solving. Turing was born in 1912 and matured at about the time Gödel delivered his coup de grace to formalist mathematics. But the effects of Gödel’s incompleteness result were slow to manifest themselves among such logicians as Church, Curry, and Turing who shaped the foundations of computer science.

Abstraction is a key tool in simplifying systems by focusing on subsets of relevant attributes and ignoring irrelevant ones. Incompleteness is the key distinguishing mechanism between rationalist, algorithmic abstraction and empiricist, interactive abstraction. The comfortable completeness and predictability of algorithms is inherently inadequate in modeling interactive computing tasks and physical systems. The sacrifice of completeness is frightening to theorists who work with formal models like Turing machines, just as it was for Plato and Descartes. But incomplete behavior is comfortably familiar to physicists and empirical model builders. Incompleteness is the essential ingredient distinguishing interactive from algorithmic models of computing and empirical from rationalist models of the physical world.

Models in Logic and Computation

Models in logic and computation aim to capture semantic properties of a domain of discourse or modeled world by syntactic representations for the pragmatic benefit of users. They express properties of physical or mathematical modeled worlds in a form that is pragmatically useful.

A model M = (R, W, I) is a representation R of a modeled world W interpreted by a human or mechanical interpreter I that determines semantic properties of W in terms of syntactic expressions of R. R, W, and I determine, respectively, the syntax, semantics, and pragmatics of the model, as in Figure 4.

Interactive models have multiple pragmatic modes of use, while algorithms have a single intended pragmatic interpretation determined by the syntax. The goal of expressing semantics by syntax is replaced by the interactive goal of expressing semantics by multiple pragmatic modes of use. The goal of complete behavior specification is replaced by the goal of harnessing useful forms of partial behavior through interfaces.

Logical proof involves step-by-step progress from a starting point to a result, as in the following correspondence:

logical system → programming language
well-formed formulae → programs
theorem to be proved → initial input
rules of inference → nondeterministic rules of computation
proofs → sequential algorithmic computations

Reasoning is weaker than interactive computing or physics for modeling and problem solving [7]. Hobbes was correct in saying that “reasoning is but reckoning,” but the converse assertion “reckoning is but reasoning” is false, since interactive reckoning is richer than reasoning.

The inherent incompleteness of interactive systems has the practical consequence that maximal goals of logic and functional programming and of formal methods cannot be achieved. The result that logic programming is too weak to model interactive systems, presented by the author at the closing session of the fifth-generation computing project in Tokyo in 1992 [7], showed that the project could not have achieved its software-engineering goals even with a tenfold increase in effort or a 10-year extension.

Algorithms and logical formulae take their meanings in the same semantic world of computable functions, but logic is a purer paradigm that expresses the relation between syntax and semantics with fewer distractions. Well-formed formulae are semantically interpreted as assertions about a modeled world that may be true or false. Formulae true in all interpretations are called tautologies.

A logic is sound if all syntactically provable formulae are tautologies and complete if all tautologies are provable. Soundness and completeness measure the adequacy of syntactic proofs in expressing semantic meaning. They relate the syntactic representation R of a logical model to its semantic modeled world W:

Soundness: Implies that syntactically proved theorems express meaning in the modeled world.

Completeness: Implies that all meanings can be syntactically captured as theorems.

Soundness ensures that representations correctly model behavior of their modeled worlds, while completeness ensures that all possible behavior is modeled. Soundness and completeness together ensure that (for each model) a representation is correct and that it captures all behavior in the world being modeled. But completeness restricts the kinds of modeled worlds completely expressible by a representation. Completeness constrains modeling so only modeled worlds whose semantics are completely expressible by syntax can be expressed.

Soundness ensures that proofs are semantically correct, while completeness measures the comprehensiveness of the proof system in expressing semantic meaning. Soundness and completeness together imply that W is reducible to R (W and R are isomorphic abstractions). Reducibility of W to R implies completeness of R in expressing properties of W, while incompleteness implies irreducibility.

Though soundness and completeness are desirable formal properties, they are often abandoned for practical reasons. For example, logics for finding errors in programs are sound if they generate error messages only when the program has an error and complete if they discover all errors:

Soundness: If error message, then error.
Completeness: If error, then error message.

In this case, insisting on soundness conservatively excludes useful error messages because they are occasionally wrong, while complete logics recklessly generate many spurious error messages in their quest for completeness. Practical logics are neither sound nor complete, generating some erroneous messages and missing some errors to strike a balance between caution and aggressiveness.
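A minimal sketch of this trade-off (the editor's illustration, not the article's; the checker and its rule are invented for the example) is a purely syntactic division-by-zero detector that is neither sound nor complete:

```python
# Illustrative sketch: a crude static checker for division-by-zero in
# tiny expressions.  Like the practical logics described above, it is
# neither sound nor complete.

def check_division(expr: str) -> bool:
    """Return True if the checker emits a possible-division-by-zero message.

    The rule is purely syntactic: flag any '/' whose right-hand side is
    not a nonzero integer literal.
    """
    _, sep, rhs = expr.partition("/")
    if not sep:
        return False                      # no division syntax, no message
    rhs = rhs.strip()
    return not (rhs.isdigit() and int(rhs) != 0)

# Unsound (false positive): the divisor (2 - 1) is never zero, yet the
# syntactic rule flags it because it is not a literal.
assert check_division("x / (2 - 1)") is True

# Incomplete (false negative): a division hidden behind a call has no
# '/' token, so a genuine runtime error goes unreported.
assert check_division("div(x, 0)") is False
```

Tightening the rule to remove the false positive would miss more real errors, and chasing every real error would flood the user with spurious messages, which is exactly the compromise between caution and aggressiveness the article describes.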

The goal of error analysis is to check that a syntactically defined error-detection system captures an independent semantic notion of error. Since the semantic notion of error in dynamically executed programs cannot be statically defined, error detection cannot be completely formalized, but the semantic notion of error can be syntactically approximated. In choosing an approximation, the extreme conservatism of soundness and the extreme permissiveness of completeness are avoided by compromising (in both the good and bad senses of the word) between conservatism and permissiveness.

Gödel’s incompleteness result for a particular mathematical domain (arithmetic over the integers) has an analog in computing. The key property of incomplete domains is irreducibility. Completeness is possible only for a restricted class of relatively trivial logics over semantic domains reducible to syntax. Completeness restricts behavior to that behavior describable by algorithmic proof rules. Models of the real world (and even of integers) sacrifice completeness in order to express autonomous (external) meanings. Incompleteness is a necessary price for modeling independent domains of discourse whose semantic properties are richer than the syntactic notation by which they are modeled, summarized as:

Open, empirical, falsifiable, or interactive systems are necessarily incomplete.

Mathematically, the set of true statements of a sound and complete logic can be enumerated as a set of theorems and is therefore recursively enumerable. Gödel showed incompleteness of the integers by showing that the set of true statements about integers was not recursively enumerable, using a diagonalization argument. Incompleteness of interaction machines can be proved by showing that the set of computations of an interaction machine cannot be enumerated. Incompleteness follows from the fact that dynamically generated input streams are mathematically modeled by infinite sequences. The set of inputs of an interactive input stream has the cardinality of the set of infinite sequences over a finite alphabet, which is not enumerable.

The proof of incompleteness of an interaction machine is actually simpler than Gödel’s incompleteness proof, since interaction machines are more strongly incomplete than the integers. Interaction-machine incompleteness follows from the nonenumerability of infinite sequences over a finite alphabet and does not require diagonalization.
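The nonenumerability claim rests on Cantor's diagonal construction, which can be sketched on finite prefixes (the editor's illustration, not the article's; the sample enumeration is invented for the example). Whatever enumeration of infinite binary sequences is proposed, the diagonal sequence escapes it:

```python
# Illustrative sketch: Cantor's diagonal construction, approximated on
# finite prefixes.  Given any candidate enumeration of infinite binary
# sequences, build a sequence that differs from the k-th enumerated
# sequence in position k.

def diagonal(enumeration, n):
    """First n bits of the sequence that flips enumeration(k)[k] for each k."""
    return [1 - enumeration(k)[k] for k in range(n)]

# A sample enumeration: the k-th "sequence" is all zeros except a one in
# position k (only a finite prefix is ever materialized).
def sample(k):
    return [1 if i == k else 0 for i in range(100)]

d = diagonal(sample, 10)
# The diagonal sequence disagrees with every enumerated sequence at the
# position where they are compared, so it cannot occur anywhere in the
# enumeration -- no enumeration covers all infinite binary sequences.
assert all(d[k] != sample(k)[k] for k in range(10))
```

As the article notes, the incompleteness of interaction machines needs only the cardinality fact this construction establishes, not a fresh diagonalization over a specific proof system as in Gödel's argument.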

Before Gödel, the conventional wisdom of computer scientists assumed that proving correctness was possible (in principle) and simply needed greater effort and better theorem provers. However, incompleteness implies that proving correctness of interactive models is not merely difficult but impossible. This impossibility result parallels the impossibility of realizing Russell and Hilbert’s programs of reducing mathematics to logic. The goals of research on formal methods must be modified to acknowledge this impossibility. Proofs of existence of correct behavior (type-1 correctness) are in many cases possible, while proofs of the nonexistence of incorrect behavior (type-2 correctness) are generally impossible.

The Interactive Turing Test

Turing [6] proposed a behavioral test, now called the Turing test, that answers the question “Can machines think?” affirmatively if the machine’s responses are indistinguishable from human responses to a broad range of questions. He not unexpectedly equated “machines” with Turing machines, assuming that machines always answer questions sequentially. Turing permits machines to delay their answers in games like chess to mimic the slower response time of humans but does not consider that machines may sometimes be inherently slower than humans or interact through hidden interfaces while answering questions.

Agents that can receive help from oracles, experts, and natural processes have greater question-answering ability than Turing machines. Though IIMs have less thinking power than clever algorithms, their range of potential behaviors dominates that of Turing machines.

Interaction machines can make use of external as well as inner resources to solve problems more quickly than disembodied machines. They are more expressive in solving inherently nonalgorithmic problems but also solve certain algorithmic problems more efficiently through interactive techniques. They can play chess, perform scene analysis of complex photographs, or even plan a business trip with partial help from an expert more efficiently and more expressively than Turing machines. Outside help can generally be obtained without knowledge of the client (questioner).

The Turing test constrains interaction to closed-book exam conditions, while interaction machines that support open-book exams can expect superior performance. Open-book exams that allow access through a computer to the Library of Congress and the Web amplify exam-taking power through interactive access to an open, evolving body of knowledge. Note that real open-book exams, which allow students to add a fixed set of textbooks, become closed exams for an augmented but closed body of knowledge, while exams with access to email and the Web are truly open and interactive and allow a larger, nonalgorithmic set of questions to be answered.

Interaction machines that solve problems through a combination of algorithmic and interactive techniques are more human in their approach to problem solving than Turing machines, and it is plausible to equate such interactive problem solving with thinking.

Skeptics who believe machines cannot think can be divided into two classes:

• Intensional skeptics who believe that machines that simulate thinking cannot think. Machine behavior does not capture inner awareness or understanding.

• Extensional skeptics who believe that machines have inherently weaker behavior than humans. Machines can inherently model neither physics nor consciousness.

Searle argued that passing the test did not constitute thinking because competence did not imply inner understanding, while Penrose [4] asserted that Turing machines are not as expressive as physical models. I agree with Penrose that Turing machines cannot model the real world but disagree that this implies extensional skepticism, because interaction machines can model physical and mental behavior.

Penrose builds an elaborate house of cards on the noncomputability of physics by Turing machines. However, the cards collapse if we accept that interactive computing can model physics. Penrose’s error in equating Turing machines with the intuitive notion of computing is similar to Plato’s identification of reflections on the walls of a cave with the intuitive richness of the real world. Penrose is a self-described Platonic rationalist, whose arguments, based on the acceptance of Church’s thesis, are disguised forms of rationalism, denying first-class status to empirical models of computation. Penrose’s argument that physical systems are subject to elusive noncomputable laws yet to be discovered is unnecessary, since interaction is sufficiently expressive to describe physical phenomena, like action at a distance, nondeterminism, and chaos [9], that Penrose cites as examples of physical behavior not expressible by computers.

Penrose’s dichotomy between computing on the one hand and physics and cognition on the other is based on a misconception concerning the nature of computing that was shared by Church and Turing and that has its historical roots in the rationalism of Plato and Descartes. The insights that the rationalism/empiricism dichotomy corresponds to algorithms and interactions and that “machines” can model physics and cognition through interaction allow computing to be classified as empirical, along with physics and cognition. By identifying interaction as the key ingredient distinguishing empiricism from rationalism and showing that interaction machines express empirical computer science, we can show that the arguments of Plato, Penrose, Church, Turing, and other rationalists are rooted in a common fallacy concerning the role of noninteractive algorithmic abstractions in modeling the real world.

Conclusions

The paradigm shift from algorithms to interaction is a consequence of converging changes in system architecture, software engineering, and human-computer interface technology. Interactive models provide a unifying framework for understanding the evolution of computing technology, as well as interdisciplinary connections to physics and philosophy.

The irreducibility of interaction to algorithms enhances the intellectual legitimacy of computer science as a discipline distinct from mathematics and, by clarifying the nature of empirical models of computation, provides a technical rationale for calling computer science a science. Interfaces of computing systems are the computational analog of shadows on the walls of Plato’s cave, providing a framework for system description that is more expressive than algorithms and that captures the essence of empirical computer science.

Trade-offs between formalizability and expressiveness arise in many disciplines but are especially significant in computer models. Overemphasis on formalizability at the expense of expressiveness in early models of computing is evident in principles like “Go to considered harmful” and the more sweeping “Assignment considered harmful” of functional programming. Functional and gotoless programming, though beneficial to formalizability, are harmful to expressiveness. However, both these forms of programming merely make certain kinds of programs more difficult to write without reducing algorithmic problem-solving power. The restriction of models of computation to Turing machines is a more seriously harmful consequence of formalization, reducing problem-solving power to that of algorithms so the behavior of objects, PCs, and network architectures cannot be fully expressed.

Computer science is a lingua franca for modeling, allowing applications in a variety of disciplines to be uniformly expressed in a common form. Interaction machines provide a unifying framework not only for modeling practical applications but for talking precisely about the conceptual foundations of model building, so fuzzy philosophical distinctions between rationalism and empiricism can be concretely expressed in computational terms. The crisp characterization of rationalist vs. empiricist models by “algorithms vs. interaction” expresses philosophical distinctions by concepts of computation, allowing the interdisciplinary intuition that empirical models are more expressive than rationalist models to be precisely stated and proved.

The insight that interactive models of empirical computer science have observably richer behavior than algorithms challenges accepted beliefs concerning the algorithmic nature of computing, allowing us to escape from the Turing tarpit and to develop a unifying interactive framework for models of software engineering, AI, and computer architecture.

References
1. Brooks, F. The Mythical Man-Month: Essays on Software Engineering, 2d ed. Addison-Wesley, Reading, Mass., 1995.
2. Manna, Z., and Pnueli, A. The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, New York, 1992.
3. Milner, R. Elements of interaction. Commun. ACM 36, 1 (Jan. 1993), 78–89.
4. Penrose, R. The Emperor’s New Mind. Oxford University Press, Oxford, England, 1989.
5. Simon, H. The Sciences of the Artificial, 2d ed. MIT Press, Cambridge, Mass., 1982.
6. Turing, A. Computing machinery and intelligence. Mind 59, 236 (1950), 33–60.
7. Wegner, P. Trade-offs between reasoning and modeling. In Research Directions in Concurrent Object-Oriented Programming, G. Agha, P. Wegner, and A. Yonezawa, Eds. MIT Press, Cambridge, Mass., 1993, pp. 22–41.
8. Wegner, P. Interactive foundations of object-based programming. IEEE Comput. 28, 10 (1995), 70–72.
9. Wegner, P. Interactive foundations of computing. To appear in Theoretical Computer Science (1997).
10. Wegner, P. Interactive software technology. In Handbook of Computer Science and Engineering, A. Tucker, Ed. CRC Press, Boca Raton, Fla., 1997.
11. Wegner, P. Frameworks for active compound documents. Brown Univ. tech. rep. CS-97-1; www.cs.brown.edu/people/pw.

Peter Wegner ([email protected]) is a professor of computer science at Brown University.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

© ACM 0002-0782/97/0500 $3.50

