Strategic Management Journal
Strat. Mgmt. J., 27: 201–214 (2006)

Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/smj.515

RATIONALITY, FOOLISHNESS, AND ADAPTIVE INTELLIGENCE

JAMES G. MARCH*
Stanford University, Stanford, California, U.S.A.

Technologies of model-based rationality are the core technologies of strategic management, having largely replaced earlier technologies that placed greater reliance on traditional practice or on communication either with the stars or with the gods. The technologies used by organizations in their pursuit of intelligence can be imagined to change over time as a result of responding to the successes and failures associated with the technologies. Although technologies of rationality seem clearly to be effective instruments of exploitation in relatively simple situations and to derive their adaptive advantage from those capabilities, their ventures in more complex explorations seem often to lead to huge mistakes and thus unlikely to be sustained by adaptive processes. Whether their survival as instruments of exploratory novelty in complex situations is desirable is a difficult question to answer, but it seems likely that any such survival may require hitchhiking on their successes in simpler worlds. Survival may also be served by the heroism of fools and the blindness of true believers. Their imperviousness to feedback is both the despair of adaptive intelligence and, conceivably, its salvation. Copyright 2006 John Wiley & Sons, Ltd.

Keywords: rationality; adaptation; exploration

*Correspondence to: James G. March, Stanford University, Cubberley Building, Room 71, 485 Lasuen Mall, Stanford, CA 94305-3096, U.S.A. E-mail: [email protected]

Received 30 August 2004; final revision received 22 July 2005

INTRODUCTION

Organizations pursue intelligence. That is, they can be described as seeking to adopt courses of action that lead them over the long run to outcomes that they find satisfactory, taking into account any modifications of hopes, beliefs, preferences, and interpretations that occur over time, as well as conflict over them. The pursuit of intelligence is an ordinary task. It is neither mysterious nor unusually difficult. It is carried out by ordinary organizations in ordinary ways every day in ways that permit most of them to survive from day to day. Since the earliest recorded times, however, the pursuit of intelligence has been pictured, particularly by the intelligentsia, as requiring exquisite talents and considerable training. In the view of academic theorists of organizational intelligence, the task involves trying to understand a complex and changing system of causal factors on the basis of incomplete, ambiguous, and contested information. It involves anticipating and shaping an environment that consists of other actors who are similarly and simultaneously anticipating and shaping their environments. It involves confronting inconsistencies in preferences across groups and across time and making interpersonal and intertemporal comparisons of desires (March, 1994: Ch. 6).

Over the years, the pursuit of intelligence in organizations, like other organizational activities (Gavetti and Rivkin, 2004), has increasingly become the responsibility of people with special competencies. It has been organized around concepts and functions such as strategic management, planning, and economic and decision analysis that professionalize the making of policies and decisions. This professionalization has been buttressed by the development of elaborate tools for guiding organizations toward favorable outcomes. These tools comprise the technologies of rationality that have come to be recognized as a major post-enlightenment contribution to Western civilization, supplementing and to a large extent replacing earlier technologies that placed greater reliance on traditional practice or on communication either with the stars or with the gods. The basic rational rubric has become an almost universal format for justification and interpretation of action and for the development of a set of procedures (e.g., budgeting, planning, economizing, operations analysis, strategic analysis and management) that are accepted as appropriate for organizations pursuing intelligence (Odiorne, 1984; Buffa and Sarin, 1987).

The preeminence of rationality as an interpretation of human action is obvious, but so also are complaints about it. Not everyone agrees that rational models comprehend human behavior adequately. They are seen as ignoring the limits to rationality, the emotionality of human existence, and alternative logics of actions (Langlois, 1986; Halpern and Stern, 1998; Elster, 1999; March and Olsen, 2005). The position of rationality as a norm of intelligent behavior is less subject to criticism, but it has not escaped entirely (March, 1978; Elster, 1983, 1984; Arrow, 1992: 46, 50). Indeed, criticism of purveyors of rational technologies fills history and literature. Tolstoy (1869) satirized the rational pretensions of German and French military strategists, and Camus (1951) argued that more evil has been based on rationality than on convention or religion.

This paper explores some aspects of experience with rational technologies in the pursuit of intelligence and the place of those technologies within a framework of feedback-based adaptive intelligence. In particular, it examines the role of rationality in the balance between exploitation and exploration by which adaptive systems sustain themselves.

RATIONALITY AND ITS CRITICS

The notion that human action is, or should be, rational in the sense of being derived from a model-based anticipation of consequences evaluated by prior preferences permeates contemporary Western thinking. The notion builds on a set of Western European ideas that trace in a general way to Plato and Aristotle, were resurrected by scholars and writers in the 16th and 17th centuries and made into the scripture of modernity by Voltaire, Descartes, Bentham, and Mill, and were converted in the 20th century into a technology of operational procedures by Savage, Morgenstern, and von Neumann and by contributors to the elaboration of operations, policy, and systems analysis and the micro-economics of choice in the second half of the 20th century (Gal, Stewart and Hanne, 1999).

Throughout modern history, the primacy of this technology of rational analysis and choice has been challenged, most conspicuously with respect to large system decision making in the debate over planning and rational system design in the 1930s (Hayek, 1935; Popper, 1966) and in the shadow of the collapse of the Soviet Empire (Campbell, 1992; Solnick, 1998), but also with respect to individual or organizational choice more generally (Sen, 2002) and the functioning of complex technical and organizational systems (Perrow, 1984). Nevertheless, these procedures, in various degrees of elaboration, are commonly seen as being both common and desirable bases for making choices as varied as those of public policy, consumer goods, mates, careers or jobs, investments, military strategies, and education. The technology and ideology of rationality jointly sustain standard procedures of action in modern Western organizations. They are the bases for justifications of action, teaching of decision making, and management of organizations. They are the conventional commandments of policy making, planning, strategy formation, risk taking, asset and attention allocation, decision making, and the management of life (Davis and Devinney, 1997; Lichbach, 2003).

For example, most discussions of strategic action in business firms emphasize the use of a model-based rational logic to assess alternative strategies for changes in product, process, market, resource, or capability mix in response to information about expectations, threats, opportunities, and goals in a competitive environment in which others are making decisions similarly (Porter, 1998; Walker, 2004). Modern academic treatments of strategic action sometimes subordinate a conception of strategic planning based on the comparison of expected values across pre-specified alternatives to a conception of designing methods for assuring capabilities to deal flexibly with an unfolding future, but the underlying notion is that strategic choices should be made by some kind of model-based assessment of the likelihoods of different possible future outcomes and of preferences among them (Vertinsky, 1986; Rumelt, Schendel, and Teece, 1991; Barnett and Burgelman, 1996; Williamson, 1999).

The technologies of rationality involve three components: first, abstractions, models of situations that identify sets of variables, their causal structures, and sets of action alternatives; second, collections of data capturing histories of the organization and the world in which it acts; third, decision rules that consider alternatives in terms of their expected consequences and select the alternative that has the best expected consequences from the point of view of the organization's values, desires, and time perspectives. The technologies are embedded in an ideology that holds that action should be a product of mind and choice, not tradition, rule, routine, or revelation; that choice should be derived from carefully considered expectations of future consequences, not from the dictates of habit, custom, identity, intuition, or emotion; that insight into the dynamics of histories can be obtained from abstract models of them; and that levels of intelligence superior to those produced by other procedures can be achieved through model-based rationality.
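
Read computationally, the three components reduce to a small recipe: a model that assigns probabilities to outcomes for each alternative, data that inform those assignments, and a rule that selects the alternative with the best expected consequences. The sketch below is only an illustration of that recipe; the alternative names, probabilities, and payoffs are invented rather than drawn from any application discussed here.

    # Expected-value sketch of the three components: a model (outcome probabilities per
    # alternative), data summarized in those probabilities, and a decision rule that picks
    # the alternative with the best expected consequences. All names and numbers are invented.
    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)        # outcomes: (probability, payoff) pairs

    def rational_choice(alternatives):
        return max(alternatives, key=lambda name: expected_value(alternatives[name]))

    alternatives = {
        "refine_existing_product": [(0.9, 10), (0.1, 0)],    # expected value 9.0
        "enter_new_market":        [(0.2, 60), (0.8, -5)],   # expected value 8.0
    }
    print(rational_choice(alternatives))              # prints "refine_existing_product"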

This combination of theory, ideology, technology, and theology has shaped thinking in the social and behavioral sciences in the last 100 years, as well as applications to fields such as medicine, law, and management. In particular, it has become the bedrock of modern economic theory (Arrow, 1974: 16) and of both the derivatives of economic theory in other disciplines, including strategic management, and of important critiques of the use of that theory (Lippman and Rumelt, 2003a, 2003b).

Rational models are common; but for many parts of the social science intellectual scene, rationality is less a sacred belief than a convenient bete noire, organizing those who reject it as much as those who accept it. This includes the three incomparable figures of the history of the social and behavioral sciences—Sigmund Freud, Karl Marx, and Charles Darwin—all of whom provided bases for challenges to the centrality of rationality. Modern critics of rationality generally eschew the grander critiques made by those giants in order to make two, not self-evidently consistent, complaints. On the one hand, rationality has been characterized as overly conventional, lacking in creative imagination except within the narrow confines of received knowledge, too tied to established institutions and creeds, thus incapable of generating revolutionary approaches or novel ideas (Steuerman, 1999). The analytical rigidity of rationality is seen as limiting it to refinements on what is already known, believed, or existent and is contrasted with the imaginative wildness of various forms of creativity. The argument is that a technology of rationality has to be balanced by other technologies that free action from the constraints of conventional knowledge and procedures and introduce elements of foolishness into action (March, 1988: Ch. 12).

On the other hand, technologies of rationality have been described as sources of huge mistakes, habitually misspecifying or encouraging the misspecification of situations and thereby producing disasters of major scope (Sagan, 1993; Vaughan, 1996; Albin and Foley, 1998). Although it is possible to demonstrate that actions taken within such technologies are effective in achieving desired outcomes in a wide range of simple decision situations, ranging from textbook problems to short-run investment decisions, it is harder to show empirically that applications of such procedures are reliably beneficial in the more complicated problems with which it sometimes deals.

The list of difficulties noted by critics of the theory is long and includes:

• Uncertainty. The future consequences are often quite obscure, and estimating them is confounded by inadequacies of information and biases introduced by desires, prejudices, and the limitations of experience (Knight, 1921; Shackle, 1961; Weick, 1969).

• Causal complexity. The systems being modeled and analyzed are substantially more complex than can be comprehended either by the analytical tools or the understandings of analysts. As a result, important variables and interactions among them are invariably overlooked or incorrectly specified (Albin and Foley, 1998).

• Confound of measurability and importance. Some things are more easily measured or estimated than others. Variables that can be measured tend to be treated as more 'real' than those that cannot, even though the ones that cannot be measured may be the more important (Halberstam, 1972; Wildavsky, 1979).

• Preference ambiguity. Preferences, in the sense of the values, wants, or utilities that are served by action, are unclear and inconsistent. Their summary and combination appear to demand metrics and procedures that are elusive (March, 1978; Winter, 2000; Greve, 2003). They change, partly endogenously (Lowenstein, Read, and Baumeister, 2003). Since consequences unfold over time, intelligence requires intertemporal trade-offs that are neither trivially specified nor easily accomplished.

• Interpersonal trade-offs. Different participants have different preferences and combining them requires some way of making interpersonal trade-offs. Such trade-offs are major problems for theories of multi-actor choice (Arrow, 1951; Pfeffer and Salancik, 1978).

• Strategic interaction. Outcomes, and therefore choices, of one organization depend on the choices of other organizations whose outcomes and choices are, in turn, simultaneously dependent on the first organization (Luce and Raiffa, 1957; Tirole, 1988; Gibbons, 1992; Ghemawat, 1997).

This partial disenchantment with rational models as a basis of intelligence in complex situations has come in parallel with a change in the adversaries of rationality within the social and behavioral sciences. Freud and Marx have been moved, temporarily perhaps, to those parts of the academic theater more closely allied with the humanities and critics of contemporary social and political regimes. Within social science, the principal current alternatives to rational models are more closely connected to Darwin. The metaphors, and to some extent the theories, of evolutionary change are as familiar to social and behavioral scientists today as those of Oedipus and the class struggle once were; and one of the primary discourses involves exploring how to think about processes of adaptation and theories that emphasize reacting to feedback from experience, rather than anticipating the future (Selten, 1991; Levinthal and Myatt, 1994; Borgers, 1996; Gregorius, 1997; Gavetti and Levinthal, 2000).

THEORIES OF FEEDBACK-BASED ADAPTATION

Contemporary theories that emphasize reacting to feedback include theories of:

• Experiential learning. A process by which the propensities to use certain procedures (or to have certain attributes) depend on the history of outcomes associated with previous uses in such a way that successes associated with a procedure in the past increase the propensity to use that procedure in the present (Cohen and Sproull, 1996; Lomi, Larsen, and Ginsberg, 1997; Greve, 2003).

• Learning from others (diffusion, imitation). A process by which procedures or attributes of one unit are reproduced in another, and the likelihood of reproducing a particular procedure depends (positively) on the successes of the units using it (Abrahamson, 1991; Mezias and Lant, 1994; Miner and Haunschild, 1995; Strang and Soule, 1998; Miner and Raghavan, 1999).

• Variation/selection. A process by which the procedures used by particular units are unchanging but more successful units are more likely to survive, grow, and reproduce than are less successful ones (Nelson and Winter, 1982; Hannan and Freeman, 1989; Aldrich, 1999).

Each of these is a theory of feedback-based change over time. They posit that procedures or attributes associated with successes are more likely to survive and to replicate at a more rapid rate than procedures or attributes associated with failures. The reproductive history is sensitive to variety and change in the environments, including parts of the environment that are simultaneously co-evolving. The adaptation may be imagined to occur at the individual unit (individual, rule, procedure, or organization) level or at the level of a population of units. The extent to which adaptation is seen as involving consciousness of its processes and opportunities for mindful intervention in them varies from one variant on the adaptive theme to another (Feldman, 2003).
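
The shared logic of these theories, that success breeds reuse, can be made concrete with a minimal propensity-updating loop. The success rates, starting propensities, and update rule below are assumptions chosen for illustration, not a model taken from any of the works cited.

    import random

    # Propensity-updating sketch, numbers invented: a success with a procedure raises the
    # propensity to use that procedure again, so use concentrates on what has worked.
    def learn(trials=200, seed=1):
        random.seed(seed)
        success_prob = {"A": 0.6, "B": 0.5}      # true success rates, unknown to the adapting unit
        propensity = {"A": 1.0, "B": 1.0}        # initial propensities to use each procedure
        for _ in range(trials):
            total = propensity["A"] + propensity["B"]
            choice = "A" if random.random() < propensity["A"] / total else "B"
            if random.random() < success_prob[choice]:   # feedback from experience
                propensity[choice] += 1.0                # success makes reuse more likely
        return propensity

    print(learn())   # the more successful procedure ends up with the larger propensity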

Such feedback-based adaptive processes have long been noted as relevant to the pursuit of human intelligence and to organizations (Cyert and March, 1963). However, it is also well known that they do not necessarily result in the timely achievement of global optima (Brehmer, 1980; Carroll and Harrison, 1994; Barron and Erev, 2003). There are numerous potential complications in using feedback-based adaptation to pursue intelligence. Some of those complications stem from properties of the settings in which adaptation occurs. Environments are often complicated, endogenous, subjective and contested. Some of the complications stem from properties of human actors. They are limited in their cognitive, attention, and memory capabilities and are dependent on a variety of well-known simplifications in historical interpretation and heuristics of judgment and action (Kahneman and Tversky, 2000; Camerer, Lowenstein, and Rabin, 2004). Some of the complications stem from properties of adaptive processes. Adaptive histories are inefficient in the sense that they proceed slowly and with error and can easily lead to stable equilibria that are far from global maxima. Except tautologically, the fittest do not always survive (Gould, 2002).

In discussions of these complications in recent years, one conspicuous difficulty has been noted repeatedly: the problem of maintaining adequate experimentation. Empirical observations of adaptive systems seem to suggest that they commonly fail to generate or adopt new ideas, technologies, strategies, or actions that would provide long run survival advantages. That issue arises in the context of discussions of innovation and the problems of stimulating it (Garud, Nayyar, and Shapira, 1997; Van de Ven, Angles, and Poole, 2000); in the context of discussions of entrepreneurship and new enterprises (Lant and Mezias, 1990); and in the context of discussions of diversity in species (Potvin, Kraenzel, and Seutin, 2001), work groups (Janis, 1982), and cultures (Martin, 1992).

The common observations are empirical, but it is not hard to derive the problem also from adaptive theory (Campbell, 1985; Baum and McKelvey, 1999). The first central requirement of adaptation is a reproductive process that replicates successes. The attributes associated with survival need to be reproduced more reliably than the attributes that are not. The second central requirement of an adaptive process is that it generate variety. Opportunities to experiment with new possibilities need to be provided. In order to meet these two requirements, adaptive processes engage in activities associated with exploitation—the refinement and implementation of what is known—and exploration—the pursuit of what might come to be known. Exploitation involves the application of established competence to problems. It yields reliable, good, standard answers, particularly in a stable world. Exploratory activities are sources of novel, unconventional, improbable, wild ideas and actions. Such ideas and actions become the bases for major innovations and responses to change when they prove to be right; they can lead to major disasters when they prove to be wrong.

In the face of an incompletely known and changing environment, adaptation requires both exploitation and exploration to achieve persistent success (Holland, 1975; March, 1991). Exploitation without exploration leads to stagnation and failure to discover new, useful directions. Exploration without exploitation leads to a cascade of experiments without the development of competence in any of them or discrimination among them. Specifying the optimal mix of exploitation and exploration is difficult or impossible (March, 1994: Ch. 6). In addition to the complications of dealing with a highly uncertain set of possibilities, determining an optimum involves trade-offs across time and space that are notoriously difficult to make. Although it can be shown that, everything else being equal, lower discount rates and longer time horizons make investment in exploration in standard bandit problems more advantageous than do higher discount rates or shorter time horizons (DeGroot, 1970: 398–399; Gittens, 1989: 82), such knowledge provides little precision in the identification of an optimum.
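
A toy two-armed bandit can illustrate the cited result without reproducing the formal treatments in DeGroot or Gittins. In the sketch below (all payoffs and the five-pull sampling policy are invented), a safe arm pays 0.55 every period, while an uncertain arm is equally likely to pay 1 with probability 0.8 or with probability 0.2; exploring is costly in the short run but pays off when the horizon is long and the future is discounted lightly.

    import random

    # Toy bandit, parameters invented: "exploit" always takes the safe arm; "explore"
    # samples the uncertain arm five times and keeps it only if at least three pulls paid off.
    def average_discounted_return(policy, discount, horizon, runs=10000, seed=2):
        random.seed(seed)
        total = 0.0
        for _ in range(runs):
            p_uncertain = 0.8 if random.random() < 0.5 else 0.2   # nature draws the arm's type
            wins = 0
            value = 0.0
            for t in range(horizon):
                if policy == "exploit" or (t >= 5 and wins <= 2):
                    payoff = 0.55                                  # the sure, familiar alternative
                else:
                    payoff = 1.0 if random.random() < p_uncertain else 0.0
                    if t < 5:
                        wins += int(payoff)                        # the exploratory sample
                value += (discount ** t) * payoff
            total += value
        return total / runs

    for discount, horizon in [(0.5, 10), (0.99, 100)]:             # impatient/short vs. patient/long
        print(discount, horizon,
              round(average_discounted_return("exploit", discount, horizon), 2),
              round(average_discounted_return("explore", discount, horizon), 2))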

Furthermore, achieving a desirable mix of exploitation and exploration is made difficult by the local character of adaptation. Adaptive mechanisms are myopic (Levinthal and March, 1993; Denrell and March, 2001). Learning from others, experiential learning, and differential reproduction and survival all privilege outcomes, threats, opportunities, and preferences in the temporal and spatial neighborhoods of an adaptive agent. There are cognitive, motivational, and physical necessities that dictate this myopia. It is not something that can be routinely abandoned. Moreover, it makes considerable adaptive sense. For the most part, a local focus contributes to organizational survival. An organization cannot survive in the long run if it fails to survive in the short run, and the circumstances under which the self-sacrificing failure of a local component contributes positively to global success are fairly special.

It is well known that the myopia of adaptive processes poses a problem for exploration and therefore for long-run viability. Since the outcomes from novel initiatives generally have greater variability, longer lead times, and lower average expectations than do outcomes from known best alternatives, feedback-based adaptation favors exploitation over exploration, and thus favors technologies that yield observable, short-run returns (March, 1991). In the name of adaptation, exploitation eliminates exploration; efficiency eliminates foolishness; unity eliminates diversity. As a result, adaptive processes are biased against alternatives that require practice or coordination with other alternatives before realizing their full potential, a bias that leads to the well-known competency trap (David, 1985; Levitt and March, 1988; Arthur, 1989). The myopia of adaptation also results in a bias against risky alternatives—those alternatives that have a relatively large variance in the probability distribution over possible outcomes (Denrell and March, 2001; Hertwig et al., 2004). Adaptive agents learn to be risk averse (March, 1996); and although not all risky alternatives involve exploration, exploration almost always involves variability in possible outcomes. A bias against risk is effectively a bias against exploration.
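
The bias against risky alternatives can be seen in a simulation in the spirit of the hot stove effect cited above (the payoff distributions are invented): two alternatives have the same mean, but the high-variance one is sampled less and less once a few bad draws push its estimate down, so the estimate is rarely corrected.

    import random

    # Hot-stove sketch, parameters invented: equal means, unequal variances, and an agent
    # that always chooses whichever alternative its own experience currently rates higher.
    def fraction_risky_choices(periods=50, runs=10000, seed=3):
        random.seed(seed)
        payoff = {"safe": lambda: random.gauss(0.0, 0.1),
                  "risky": lambda: random.gauss(0.0, 1.0)}
        risky = total = 0
        for _ in range(runs):
            mean = {a: payoff[a]() for a in payoff}                # one initial draw from each
            n = {a: 1 for a in payoff}
            for _ in range(periods):
                choice = max(mean, key=mean.get)
                x = payoff[choice]()
                n[choice] += 1
                mean[choice] += (x - mean[choice]) / n[choice]     # update the running average
                risky += (choice == "risky")
                total += 1
        return risky / total

    print(fraction_risky_choices())   # well below 0.5 despite identical true means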

Adaptive myopia underlies two classic sets of problems recognized in studies of collective choice. Within a myopic process, alternatives that provide benefits that are local in time or space and costs that are more distant are likely to be chosen more often than they should be. This leads to concerns about stimulating 'self-control,' the foregoing of local benefits because of more distant costs—something that adaptive systems have difficulty in accomplishing (Postrel and Rumelt, 1992; Elster, 2000). Conversely, alternatives that provide costs that are local and benefits that are more distant are likely to be chosen less often than they should be. This leads to concerns about stimulating 'altruism,' the acceptance of local costs in anticipation of subsequent or more global benefits—something that adaptive systems also have difficulty in accomplishing (Badcock, 1986; Cronin, 1991). The stimulation of altruism and self-control, when it is possible, counteracts problems stemming from the myopia of adaptive systems, but it introduces knotty conundrums of intertemporal and intergroup exchange and equity. Similar problems exist with other forms of diversity or variation, and efforts to shape adaptive processes to counteract their limitations often both conflict with other desires and are hard to implement.

THE SEARCH FOR MECHANISMS OF VARIATION

The ways in which adaptation itself extinguishes the exploration on which it depends are well understood; but theories of adaptation typically do not provide powerful understandings about the generation of new ideas, attributes, or actions, or about the ways in which persistence in novelty is supported. Adaptive theory generally leaves the endurance of exploratory initiatives and the mechanisms that produce them unexamined and unexplained. To a large extent, explorations in actions, attributes, and ideas, with their notorious low success rates, are accomplished by errors in the process. The underlying proposition is that nothing is perfect, that there will always be errors in the reproduction of success in any complex adaptive process, and that errors yield variation.

In most evolutionary theories, for example, variety is produced by sampling variation (e.g., the genetic combinatorics of mating) or arbitrary random error (e.g., mutation) (Harrison, 1988; Ayala, Fitch, and Clegg, 2000). In a similar way, students of organizational adaptation identify as sources of variation various forms of relatively random errors in adaptation (e.g., ignorance, failures of memory, emotion-based irrationalities). In addition, however, students of exploration in organizational settings have described several less random mechanisms. These include incentives that link exploration to immediate and near-neighborhood returns (e.g., competition for position), buffers of action from immediate feedback (e.g., organizational slack, inattention), and modes of action that are unresponsive to feedback on consequences (e.g., intuition, commitments to identities, and the tying of success to an adaptive aspiration level) (March and Shapira, 1992; Winter, 2000; Greve, 2003).

A more general basis for the endurance of exploratory mechanisms in a myopic adaptive process is found in the crudeness of adaptive discrimination. Adaptive processes operate on bundles of attributes rather than individual attributes. Organizational adaptation deals both with elementary components of action and with technologies that are amalgams of such components. It is possible that attributes associated with exploitation are so intermixed with attributes associated with exploration that the latter can be sustained by the replications fostered by the former. Surviving exploratory mechanisms are linked to instruments of selective efficiency.

For example, sexual reproduction is a fundamental instrument of mammalian replication of what is known to be related to success. Successful individuals mate, so the mating has elements of selective breeding; but successful individuals do not clone themselves exactly. There is substantial variation introduced by the arousal-based selection of mates and the sampling of genes. Since sexual arousal and receptivity are relatively unselective, they are simultaneously two of the main engines of the exploitative reproduction of success and sources of exploratory variation. The mechanisms of arousal and receptivity in mating among corporations, as well as those of reproduction, are different in detail from those associated with mammalian evolution; but it is easy to predict that selection among business firms is likely to have an imprecision comparable to that found among mammals.

Although it is recognized that mechanisms of variation can arise in these ways without conscious planning or intent, the designers of adaptive systems often proclaim a need for deliberately introducing more of them to supplement exploration. In their organizational manifestations, they advocate such things as foolishness, brainstorming, identity-based avoidance of the strictures of consequences, devil's advocacy, conflict, and weak memories (George, 1980; George and Stern, 2002; Sutton, 2002). They see potential advantages in organizational partitioning or cultural isolation, the creation of ideological, informational, intellectual, and emotional islands that inhibit convergence to a single world view (March, 2004). Whereas the mechanisms of exploitation involve connecting organizational behavior to revealed reality and shared understandings, the recommended mechanisms of exploration involve deliberately weakening those connections.

EXPLORATION AND TECHNOLOGIES OF RATIONALITY

Technologies of rationality are implicated in such stories. Organizations use a variety of technologies to generate actions. Some of those technologies are tied to rules and routines and depend on matching action appropriate to a particular identity or role to a particular situation. Some of those technologies accumulate knowledge through experience and attention to others and encode that knowledge in the form of action propensities. Some of those technologies use confrontation among contending interests and dialectical processes of conflict. Some of those technologies are based on logics of consequences, modeling, and rationality. The mix of technologies used by any particular organization or by families of organizations can be imagined to change over time as a result of some mix of adaptive processes, responding to indications of success or failure with the technologies (Chandler, 1962; Dutton, Thomas, and Butler, 1984).

The widespread replication of model-based rational choice as a technology of action and the sanctification of rational choice as a technique of problem solving testify, in part, to a record of successes of rationality as an instrument of intelligence. There seems little question that technologies of rationality are often effective instruments of exploitation. Model-based rational analysis has led to numerous improvements, for example the introduction of slippery water in the New York Fire Department, the development of single queues in multiple server facilities, the design of auctions, and the implementation of flexible timing of traffic signals and other regulators of queues. Rational technologies are now used routinely to solve numerous problems of focused relevance, limited temporal and spatial perspectives, and moderate complexity (RAND Review, 1998). Rationality and its handmaiden, well-established knowledge, both increase average return and reduce variance (unreliability). A feedback-based adaptive process supports and replicates technologies exhibiting such exploitative capabilities; and model-based rationality has been widely replicated as a technology of action.

These successes have, however, not been repeated reliably in more complex situations. As complexity is increased and temporal and spatial perspectives are extended, returns (both of alternatives that are adopted and of those that are rejected) are likely to be misestimated by huge amounts. This means that those alternatives that are adopted are likely to have been overestimated by huge amounts. There are many instances in which the use of a technology of rationality in a relatively complex situation has been described as leading to extraordinary, even catastrophic, failures. These include the extensive efforts of the Soviet Union to manage an economy through planning and rational analysis (Campbell, 1992; Solnick, 1998); the attempts of various American firms to use learning curves as a basis for investment, pricing, and output strategies (Ghemawat, 1985, 2002); the attempt by the American political and military establishment to use rational analysis to develop strategies for war in Vietnam (Halberstam, 1972); the attempt by Long-Term Capital Management to use rational theories to make money in the options market (Warde, 1998); and the wave of corporate mergers in the United States from 1998 to 2001 (Moeller, Schlingemann, and Stulz, 2003, 2005).
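
The claim that adopted alternatives tend to have been overestimated does not require any special perversity of analysts; it follows from selecting on noisy estimates. In the sketch below (a stylized illustration, with the noise standing in for compounding model error), all alternatives are in fact equally good, yet the one with the highest estimate is systematically a disappointment, and more so as the number of options and the noise grow.

    import random

    # Selection on noisy estimates, numbers invented: all alternatives are equally good,
    # yet the one with the highest estimate has, on average, the largest estimation error.
    def average_surprise(n_alternatives, noise_sd, runs=20000, seed=4):
        random.seed(seed)
        gap = 0.0
        for _ in range(runs):
            true_values = [0.0] * n_alternatives
            estimates = [v + random.gauss(0.0, noise_sd) for v in true_values]
            best = max(range(n_alternatives), key=lambda i: estimates[i])
            gap += estimates[best] - true_values[best]   # estimate minus true value of the chosen option
        return gap / runs

    for n, sd in [(3, 0.2), (10, 0.2), (10, 2.0)]:
        print(n, sd, round(average_surprise(n, sd), 2))  # grows with the number of options and with noise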

To some extent, the poor record of rational technologies in complex situations has been obscured by conventional gambits of argumentation and interpretation. The failures have been pictured as stemming not from the technologies but from some features of misguided use of them. It is sometimes claimed that the schemes generated by such technologies are good ones but have been frustrated by implementation problems, by the perversities or incompetence of individuals involved in bringing them to fruition. It is sometimes claimed that although the rhetoric justifying a particular action is explicitly rational, a rational technology has actually been used only as a justificatory vocabulary, not as a basis for the action, and thus did not produce the disaster. It is sometimes claimed that although the record is poor, it is at least as good as that of alternative technologies for dealing with complex situations.

Argumentation and interpretation cannot, however, conceal the essential problem: the unintended exploration produced through technologies of rationality in complex situations seems clearly to produce many more disasters than it produces glories of successful discovery. There is no magic by which this fundamental dilemma of adaptation can be eliminated. The use of rational technologies in complex situations may very well be maintained by generalization from successes in simpler worlds, by the enthusiasm of the industry that has grown up around it, and by the hubris and limited memories of decision makers (McMillan, 2004), but it seems unlikely to be sustained by experience in complex situations and feedback-based adaptation stemming from that experience.

This result, however, can be seen as a possibly unfortunate reflection of one of the more persistent problems associated with learning from history. Adaptation responds to the recorded events of history, not to the underlying distribution of possible events. In effect, adaptive processes treat realized outcomes of history as necessary ones. As a result, they essentially exaggerate the likelihood of what has happened and underestimate the likelihood of things that might have happened. If history is seen not as a fixed outcome of determinate forces but as a draw from a probability distribution over possibilities, then the lessons to be drawn from history are not contained simply in its realizations but extend to the hypothetical histories that might be imagined from simulating the inferred underlying historical process (Brenner, 1983; March, Sproull, and Tamuz, 1991; Ferguson, 1999; Tetlock, 1999). Any probabilistic historical process is subject to sampling error in its realizations, and historical processes involving small samples, highly skewed underlying distributions (such as those hypothesized to exist with respect to novel ideas), and sampling rates that are affected by sampling experience are particularly prone to being poorly represented by their realizations (Denrell and March, 2001; Hertwig et al., 2004; Denrell, 2005). The misrepresentation leads to faulty adaptation.
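
A small simulation (with an invented, highly skewed payoff distribution) illustrates why realized histories can misrepresent such processes: a novel idea that pays 100 with probability 0.02 has a true mean of 2, but most short histories of trying it contain no success at all, and adaptation that responds only to those realizations will usually write the idea off.

    import random

    # Skewed-history sketch, distribution invented: the idea pays 100 with probability 0.02
    # (true mean 2.0) and nothing otherwise; most short histories show no success at all.
    def realized_history(sample_size, histories=10000, seed=5):
        random.seed(seed)
        means, no_success = [], 0
        for _ in range(histories):
            draws = [100.0 if random.random() < 0.02 else 0.0 for _ in range(sample_size)]
            means.append(sum(draws) / sample_size)
            no_success += (max(draws) == 0.0)
        return sum(means) / histories, no_success / histories

    for size in (5, 20, 500):
        mean, share_without_success = realized_history(size)
        print(size, round(mean, 2), round(share_without_success, 2))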

In the present case, historical interpretations are likely to ignore the extent to which the legendary mistakes of rational technologies may be symptoms of exploratory power. History is filled with cases where the technologies of rationality have initiated or supported experimentation with new actions or strategies that offer novel approaches of great predicted value. Characteristically, the predicted values have failed to materialize. The same methods that identify new possibilities also contribute to substantial errors of estimation. The technology depends on abstract models of reality that reduce the complexity of any particular context to what are believed to be its essential components and relations. The models depend on strong assumptions about the extent to which present knowledge encompasses the causal structure of the world and the preference structures of human actors. Within such abstractions, the forecasts of rational calculation compound so that small errors or oversights multiply into large ones and multiply at an increasing rate as complexity increases. These errors are often costly, even deadly, in their consequences.

In a world in which great ideas are unlikely draws from a pool of apparently crazy ideas, the capability of rational abstractions to generate huge mistakes in their exploratory gambles is a potential gauge of their capability to discover dramatically useful surprises. Although it would be perverse to interpret all disasters as stemming from imagination, frequent cases of ventures that have turned out to be hugely misguided are one of the possible signs of a potential for generating a rich cache of wild ideas. The evidence from failures of rationality in complex situations, such as those identified above, suggests that imagination thrives on the power of rational technology and on the ambitions of utopian rational modelers. By a sophisticated (and optimistic) invocation of counterfactual histories, one can picture the capabilities of rational technologies for producing huge disasters as symptomatic of their capabilities for producing great discoveries. Thus, it is possible to imagine that the exploratory power of rational technologies might be sustained in an adaptive process. The sophistication, however, comes at a price that plagues counterfactual thinking. Learning from counterfactual histories blurs the distinction between data and theory. It thereby compromises the integrity of evidence and inferences from it. It solves one problem but creates another.

By this reading of history, however, technologies of rationality are not so much enemies of foolishness and exploration as they are agents of them. Those who seek instruments of variation, exploration, and radical change in organizations are often inclined to scorn rational technologies, seeing them as allies of conventionality and of the institutions, disciplines, and professions of the status quo. It is argued that the link between rationality and conventional knowledge keeps rational technologies reliable but inhibits creative imagination. This characterization seems plausible, but it probably underestimates the potential contribution of rational technologies to foolishness and radical visions. Technologies of rational choice, and the technologists of model-based rationality, are not simple instruments of exploitation but (partly) instruments of exploration hiding behind a facade of exploitation: revolutionaries in pin-stripe suits. As such, they should perhaps be seen less as stodgy agents of conventional knowledge than as dangerous fools, joining thereby the pool of dreamers out of which come great ideas as well as monstrous and idiotic ones.

Seeing the use of technologies of rationality in complex situations as a source of exploration, however, still leaves unresolved a classical normative question of exploration—whether the brilliant discoveries that might occasionally be achieved are adequate recompense for the many egregious stupidities committed. The mysteries of hypothetical histories and the conundrums of trade-offs across time and space make the answer to that question conspicuously indeterminate; but the outcome provided by adaptive processes is much less problematic. Whatever the potential value of rational technologies as a source of exploration, their adverse record in solving complex problems cannot be expected to sustain them in a myopic adaptive process that reproduces practices associated with local success and extinguishes practices that lead to local failure. Insofar as they have survived, they have done so because alternative technologies are not conspicuously better in complex situations and because of the effectiveness of rational technologies in simpler situations. The exploratory activities of rationality have hitchhiked on its exploitative successes.

As with all hitchhiking phenomena, and for the same reasons that exploration is disadvantaged by a myopic adaptive process, this kind of hitchhiking solution to the exploration problem is potentially ephemeral. The same adaptive process that selects among technologies also shapes the applications of the technologies, matching applications to situations of success. The differentiation among situations may be slow, but it seems reasonable to expect that in the long run, and in the absence of some other mechanism, organizations will learn to use technologies of rationality for simple problems, but not for complex ones. Learning to differentiate simple problems from complex problems and using the technologies only for the former would eliminate most of the exploratory output of rational technologies. There are folkloric indications of such a differentiation. Decision makers now frequently comment that technologies of rationality are not particularly useful in major complex decisions but should be focused on 'more narrowly defined problems' (RAND Review, 1998: 5). One of the architects of the attempt to model the competition between the United States and the U.S.S.R. during the Cold War era described the complications in these terms:

If we really are to understand the nature of the competition, the nature of their interaction process, we will need to understand much better than we do now the decision making processes within both the U.S. and Soviet political–military–industrial bureaucracies. We need an understanding of the processes that lead to the selection of specific R&D programs, to R&D budgets and their allocations, to procurement decisions, to the operation of the whole of the weapon system acquisition process. We would need to understand how the perceptions of what the other side is doing come about in various places within these complicated bureaucracies, and how these perceptions influence the behavior of the various organizations and the decision makers involved in the complex decision processes that drive the evolution of the several defense programs involved. (Marshall, 1971: 7)

Such an understanding is likely to be elusive within models of rational technology. As a result, attention to feedback from experience in complex situations discourages the use of the technology (Popper, 1961; Myrdal, 1971; Hirschman, 1981). For example, the imaginative and provocative set of ideas pioneered by John von Neumann and Oskar Morgenstern (1944) under the rubric of game theory has proven to be enormously useful in analyzing simple forms of conflict but arguably less powerful in more complex situations. From the late 1940s to the mid 1960s, academic scholars working at the RAND Corporation, including such luminaries as Kenneth Arrow, Merrill Flood, John Nash, Thomas Schelling, Martin Shubik, and Lloyd Shapley, produced research memoranda on game theory, its possible applications, and its pitfalls. By the end of that period, game theory was well established as a domain, and its applications to some (but not all) relatively simple situations were viewed as successful (Ghemawat, 1997); but the relevance of game theory for dealing with complex situations such as that involved in the confrontation of the United States and the U.S.S.R. was widely questioned. This negative adaptive experience with game theory as a basis for dealing with international conflict is not unique. It appears not to be a product of any particular application of any specific rational technology but an inherent characteristic of the technology itself.

It is possible to imagine that the feedback disadvantages of the wild ideas of rationality might be reduced by reshaping organizational experience. The most obvious possibility is to discover ways, at an early stage, to distinguish novel ideas that will be spectacularly successful from those that will be disasters, thus to change the success rate of experience. This possibility has supported considerable research on creativity. The evidence is strong, however, that such early discriminations are almost impossible to make, particularly when the novel ideas deviate substantially from received truths—precisely the case of interest. Historically, new ideas that have subsequently been shown to change things dramatically have generally looked distinctly unpromising initially (Aronson, 1977; David, 1990). Most attempts to distinguish creative instances of craziness from useless or dangerous ones at an early stage impose criteria of conventionality on craziness and thereby impose self-defeating filters that reduce novelty.

Alternatively, if it were possible to make small experiments with wild ideas, while retaining the possibility of diffusing those that prove to be good ones, the adaptive position of exploration would be strengthened (Romano, 2002; Holahan, Weil, and Wiener, 2003). Since structures that protect the system from the catastrophic consequences of wild ideas generally also inhibit the transfer of major discoveries (Cohen and Levinthal, 1989, 1990), there is no perfect solution to the problem. However, two kinds of strategies seem to have appeal: the first involves controlling the 'bet size' of developing ideas. If small-size experiments can be devised to examine an idea, and if they can be scaled up without loss of validity subsequently, bad ideas can be sorted from good ones at a cost that is less than the return generated by the few good ones when they are scaled up. There are difficulties associated with the problems of scaling up from small experiments to large implementations, particularly when one of the common complications is the failure to capture the full complexity of the system, but small experiments are a classic device.
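
The arithmetic of the 'bet size' strategy can be sketched with invented numbers, under the strong and admittedly optimistic assumption that a small experiment reveals an idea's quality and that scaling up preserves validity: when good ideas are rare, paying a small probing cost on every idea dominates scaling everything up blindly.

    import random

    # Bet-size sketch, all numbers hypothetical: ideas are good with probability 0.05; a good
    # idea scaled up returns 100, a bad one loses 20; a small probe costs 1 and is assumed
    # (optimistically) to reveal quality, so only good ideas are scaled up.
    def expected_profit(strategy, ideas=100, p_good=0.05, runs=10000, seed=6):
        random.seed(seed)
        total = 0.0
        for _ in range(runs):
            profit = 0.0
            for _ in range(ideas):
                good = random.random() < p_good
                if strategy == "scale_everything":
                    profit += 100.0 if good else -20.0
                else:                                  # "probe_first"
                    profit -= 1.0
                    if good:
                        profit += 100.0
            total += profit
        return total / runs

    print(round(expected_profit("scale_everything")), round(expected_profit("probe_first")))
    # roughly -1400 versus +400 per hundred ideas under these invented numbers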

A second strategy involves partitioning the audience into subgroups. Technologies of action spread so that practitioners and theorists share a common sense of what is true, just, and beautiful. The community is often fractured, however, into segregated subgroups pursuing different practices and ideas. Experience with the good or bad consequences of those practices is localized to the subgroups, and the resulting smaller sample sizes of experience increase the possibilities for a locally favorable result from a practice having a poor expected value (Dubins and Savage, 1965), and thus for local persistence in pursuing technologies that generate wild ideas. Although most of that persistence will turn out to be unproductive, the segregation of subgroups sustains experimentation in a few cases that are ultimately deemed successful. Provided the segregation of groups is not complete, these successes can spread more widely (March, 2004). There is, however, an obvious complication: subgroup segregation contributes to local differentiated persistence with ideas, but it also inhibits the diffusion of good ideas to groups other than the ones that originally discovered them. As a result, the optimum level of segregation is difficult to specify.
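
The Dubins and Savage point about small samples can be illustrated numerically (the payoff distribution is invented): a practice with a slightly negative expected value and high variance looks favorable in a sizable fraction of small subgroup histories, but almost never in the pooled experience of a large community.

    import random

    # Small-sample sketch, distribution invented: per-trial payoffs have mean -0.1 and
    # standard deviation 1, so the practice is poor in expectation; the chance that a
    # group's own sample of experience nevertheless looks favorable shrinks with sample size.
    def chance_locally_favorable(sample_size, runs=10000, seed=7):
        random.seed(seed)
        favorable = 0
        for _ in range(runs):
            avg = sum(random.gauss(-0.1, 1.0) for _ in range(sample_size)) / sample_size
            favorable += (avg > 0.0)
        return favorable / runs

    for size in (5, 25, 400):
        print(size, round(chance_locally_favorable(size), 2))      # roughly 0.41, 0.31, 0.02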

THE HEROISM OF FOOLS

Enthusiasts for rational technologies in complex situations, like other enthusiasts for variation in adaptive systems, generally proclaim an unverifiable confidence in two propositions. The first proposition is that the current level of exploration is less than would be optimal. The second proposition is that, in a long-term and global perspective, the threats to survival posed by the disasters of rational technologies applied to complex situations are less than those posed by failure to discover and exploit the beneficial ideas generated by the technologies. Neither proposition is obvious, nor is it obvious how one might easily confirm or disconfirm either of them.

We do know something, however. Exploratory foolishness may sometimes be desirable, and technologies of rationality may be important sources of exploration; but the use of rational technologies in complex situations is unlikely to be sustained by the main events and processes of history. Technologies and practices that produce wild ideas have large variances in their returns and relatively low means. Their positive returns are disproportionately realized at temporal and spatial distances from the point of action. These properties make them vulnerable to a myopic adaptive process. Some ameliorations of the dilemma are possible, but it cannot be eliminated. In the end, foolishness and exploration are the natural (and largely unlamented) victims of adaptive processes. Their sustenance requires errors of adaptation, in particular the errors produced by the heroism of fools and the blindness of faith.

A commitment to rationality as an instrument of exploration might be imagined to be proper for academic scholars of strategic management. Persistence in such a commitment is not, however, a likely product of experience outside the world of the mind. Foolishness in the service of exploration will usually turn out to be dangerously unwise: it is unlikely to be rewarded by the histories recorded in the temporal or spatial neighborhoods of the craziness, nor can it be guaranteed to be justified in a longer or broader perspective. Belief in the exploratory value of rational technologies requires a leap of faith about the virtue, joy, and beauty of imagination, the contribution of abstract thinking to fostering it, and the pleasures of deriving surprising implications from unrealistic assumptions. Such enthusiasm for exploratory rationality fulfills academic obligations to defend a utopia of the mind against the realism of experience, and academic purities of belief can endure within a relatively closed community of similar academics. Such faith-based imperviousness to feedback is both the despair of adaptive intelligence and, occasionally but only occasionally, its salvation.

ACKNOWLEDGEMENTS

This paper was originally presented as the 2004 Viipuri Lecture at the Lappeenranta (Finland) University of Technology, 26 May 2004. I am grateful for comments by the participants at that lecture and by Mie Augier, Jerker Denrell, Hans Hvide, W. Richard Scott, and two anonymous referees.

REFERENCES

Abrahamson E. 1991. Managerial fads and fashions: the diffusion and rejection of innovations. Academy of Management Review 16: 586–612.

Albin PS, Foley DK. 1998. Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems. Princeton University Press: Princeton, NJ.

Aldrich H. 1999. Organizations Evolving. Sage: London.

Aronson SH. 1977. Bell's electrical toy: what's the use? The sociology of early telephone usage. In The Social Impact of the Telephone, de Sola Pool I (ed). MIT Press: Cambridge, MA; 15–39.

Arrow KJ. 1951. Social Choice and Individual Values. Yale University Press: New Haven, CT.

Arrow KJ. 1974. Limits of Organization. W. W. Norton: New York.

Arrow KJ. 1992. I know a hawk from a handsaw. In Eminent Economists: Their Life and Philosophies, Szenberg M (ed). Cambridge University Press: Cambridge, U.K.; 42–50.

Arthur BW. 1989. Competing technologies, increasing returns, and lock-in by historical events. Economic Journal 99: 116–131.

Ayala FJ, Fitch WM, Clegg MT (eds). 2000. Variation and Evolution in Plants and Microorganisms: Toward a New Synthesis 50 Years after Stebbins. National Academy Press: Washington, DC.

Badcock CR. 1986. The Problem of Altruism: Freudian-Darwinian Solutions. Blackwell: Oxford, U.K.

Barnett WP, Burgelman RA. 1996. Evolutionary perspectives on strategy. Strategic Management Journal, Summer Special Issue 17: 5–19.

Barron G, Erev I. 2003. Small feedback-based decisions and their limited correspondence to description-based decisions. Journal of Behavioral Decision Making 16: 215–233.

Baum JAC, McKelvey W (eds). 1999. Variations inOrganization Science: Essays in Honor of Donald D.Campbell . Sage: Thousand Oaks, CA.

Borgers T. 1996. On the relevance of learning andevolution to economic theory. Economic Journal 106:1374–1385.

Brehmer B. 1980. In one word: not from experience. ActaPsychologica 45: 223–241.

Brenner R. 1983. History: The Human Gamble. Univer-sity of Chicago Press: Chicago, IL.

Buffa ES, Sarin RK. 1987. Modern Production Opera-tions Management (8th edn). Wiley: New York.

Camerer CF, Lowenstein G, Rabin M (eds). 2004.Advances in Behavioral Economics . Princeton Univer-sity Press: Princeton, NJ.

Campbell BG. 1985. Human Evolution: An Introductionto Man’s Adaptations (3rd edn). Aldine: Chicago, IL.

Campbell RW. 1992. The Failure of Soviet EconomicPlanning: System, Performance, Reform . IndianaUniversity Press: Bloomington, IN.

Camus A. 1951. L’Homme Revolte. Gallimard: Paris,France.

Carroll GR, Harrison JR. 1994. Historical efficiencyof competition between organizational populations.American Journal of Sociology 100: 720–749.

Chandler AD. 1962. Strategy and Structure: Chapters inthe History of the Industrial Enterprise. MIT Press:Cambridge, MA.

Cohen MD, Sproull LS (eds). 1996. OrganizationalLearning . Sage: Thousand Oaks, CA.

Cohen WM, Levinthal DA. 1989. Innovation and learn-ing: the two faces of R&D. Economic Journal 99:569–590.

Cohen WM, Levinthal DA. 1990. Absorptive capacity:a new perspective on learning and innovation.Administrative Science Quarterly 15: 128–152.

Cronin H. 1991. The Ant and the Peacock: Altruism andSexual Selection from Darwin to Today . CambridgeUniversity Press: New York.

Cyert RM, March JG. 1963. A Behavioral Theory of theFirm . Prentice-Hall: Englewood Cliffs, NJ.

David PA. 1985. Clio and the economics of QWERTY.American Economic Review 75: 332–337.

David PA. 1990. The hero and the herd in technologicalhistory: reflections on Thomas Edison and ‘TheBattle of the Systems’. In Economic Development

Past and Present: Opportunities and Constraints ,Higgonet P, Rosovsky H (eds). Harvard UniversityPress: Cambridge, MA; 72–119.

Davis JG, Devinney TM. 1997. The Essence of CorporateStrategy: Theory for Modern Decision Making . Allen& Unwin: St. Leonards, Australia.

DeGroot MH. 1970. Optimal Statistical Decisions .McGraw-Hill: New York.

Denrell J. 2005. Why most people disapprove ofme: experience sampling in impression formation.Psychological Review 112 (in press).

Denrell J, March JG. 2001. Adaptation as informationrestriction: the hot stove effect. Organization Science12: 523–538.

Dubins LE, Savage LJ. 1965. How to Gamble if YouMust: Inequalities for Stochastic Processes . McGraw-Hill: New York.

Dutton JM, Thomas A, Butler JE. 1984. The historyof progress functions as a managerial technology.Business History Review 58: 204–233.

Elster J. 1983. Sour Grapes: Studies in the Subversion ofRationality . Cambridge University Press: Cambridge,U.K.

Elster J. 1984. Ulysses and the Sirens: Studies inRationality and Irrationality (2nd edn). CambridgeUniversity Press: Cambridge, U.K.

Elster J. 1999. Alchemies of the Mind: Rationality and theEmotions . Cambridge University Press: Cambridge,U.K.

Elster J. 2000. Ulysses Unbound: Studies in Rationality,Precommitment, and Constraints . Cambridge Univer-sity Press: Cambridge, U.K.

Feldman MS. 2003. A performative perspective onstability and change in organizational routines.Industrial and Corporate Change 12: 727–752.

Ferguson N (ed). 1999. Virtual History: Alternatives andCounterfactuals . Basic Books: New York.

Gal T, Stewart TJ, Hanne T (eds). 1999. MulticriteriaDecision Making: Advances in MCDM Models, Algo-rithms, Theory, and Applications . Kluwer Academic:Boston, MA.

Garud R, Nayyar PR, Shapira ZB. 1997. TechnologicalInnovation: Oversights and Foresights . CambridgeUniversity Press: Cambridge, U.K.

Gavetti G, Levinthal D. 2000. Looking forward andlooking backward: cognitive and experiential search.Administrative Science Quarterly 45: 113–138.

Gavetti G, Rivkin JW. 2004. On the origin of strategy:action and cognition over time. Manuscript, HarvardBusiness School.

George AI. 1980. Presidential Decision Making inForeign Policy: The Effective Use of Information andAdvice. Westview: Boulder, CO.

George AL, Stern EK. 2002. Harnessing conflict in for-eign policy making: from devil’s to multiple advocacy.Presidential Studies Quarterly 32: 484–508.

Ghemawat P. 1985. Building strategy on the experiencecurve. Harvard Business Review 63(2): 143–149.

Ghemawat P. 1997. Games Businesses Play: Cases andModels . MIT Press: Cambridge, MA.

Copyright 2006 John Wiley & Sons, Ltd. Strat. Mgmt. J., 27: 201–214 (2006)

Page 13: Rationality Foolishness and Adaptive Intelligence

Rationality, Foolishness, and Adaptive Intelligence 213

Ghemawat P. 2002. Competition and business strategyin historical perspective. Business History Review 76:37–42.

Gibbons R. 1992. Game Theory for Applied Economists .Princeton University Press: Princeton, NJ.

Gittens JC. 1989. Multi-armed Bandit Allocation Indices .Wiley: New York.

Gould SJ. 2002. The Structure of Evolutionary Theory .Harvard University Press: Cambridge, MA.

Gregorius H-R. 1997. The adaptational system as adynamical feedback system. Journal of TheoreticalBiology 189: 97–105.

Greve H. 2003. Organizational Learning from Perfor-mance Feedback: A Behavioral Perspective on Innova-tion and Change. Cambridge University Press: Cam-bridge, U.K.

Halberstam D. 1972. The Best and the Brightest . RandomHouse: New York.

Halpern J, Stern RN (eds). 1998. Debating Rationality:Nonrational Aspects of Organizational DecisionMaking . Cornell University Press: Ithaca, NY.

Hannan MT, Freeman J. 1989. Organizational Ecology .Harvard University Press: Cambridge, MA.

Harrison GA. 1988. Human Biology: An Introduction toHuman Evolution, Variation, Growth, and Adaptabil-ity . Oxford University Press: Oxford, U.K.

Hayek FA (ed). 1935. Collectivist Economic Planning:Critical Studies on the Possibilities of Socialism . G.Routledge: London.

Hertwig R, Barron G, Weber EU, Erev I. 2004. Decisionsfrom experience and the effect of rare events in riskychoices. Manuscript. University of Basel.

Hirschman AO. 1981. Essay in Trespassing: Economicsto Politics and Beyond . Cambridge University Press:Cambridge, U.K.

Holahan J, Weil A, Wiener JM (eds). 2003. Federalismand Health Policy . Urban Institute Press: Washington,DC.

Holland JH. 1975. Adaptation in Natural and ArtificialSystems . University of Michigan Press: Ann Arbor,MI.

Janis IL. 1982. Groupthink: Psychological Studies ofPolicy Decisions and Fiascoes (2nd edn). HoughtonMifflin: Boston, MA.

Kahneman D, Tversky A (eds). 2000. Choice, Values,and Frames . Cambridge University Press: Cambridge,U.K.

Knight FH. 1921. Risk, Uncertainty, and Profit . Harper& Row: New York.

Langlois R. 1986. Rationality, institutions and explana-tion. In Essays in the New Institutional Economics ,Langlois R (ed). Cambridge University Press: NewYork; 225–255.

Lant TK, Mezias SJ. 1990. Managing discontinuouschange: a simulation study of organizational learningand entrepreneurship. Strategic Management Journal ,Summer Special Issue 11: 147–179.

Levinthal DA, March JG 1993. The myopia of learning.Strategic Management Journal , Winter Special Issue14: 95–112.

Levinthal DA, Myatt J. 1994. Co-evolution of capabili-ties and industry: the evolution of mutual fund pro-cessing. Strategic Management Journal , Winter Spe-cial Issue 15: 45–62.

Levitt B, March JG. 1988. Organizational learning.Annual Review of Sociology 14: 319–340.

Lichbach MI. 2003. Is Rational Choice All of SocialScience? University of Michigan Press: Ann Arbor,MI.

Lippman SA, Rumelt RP. 2003a. The payments perspec-tive: micro-foundations of rational analysis. StrategicManagement Journal , Special Issue 24(10): 903–927.

Lippman SA, Rumelt RP. 2003b. A bargaining perspec-tive on resource advantage. Strategic ManagementJournal 24(11): 1069–1086.

Lomi A, Larsen E, Ginsberg A. 1997. Adaptive learn-ing in organizations: a system dynamics-based explo-ration. Journal of Management 23: 561–582.

Lowenstein G, Read D, Baumeister R (eds). 2003.Time and Decision: Economic and PsychologicalPerspectives on Intertemporal Choice. Russell SageFoundation: New York.

Luce RD, Raiffa H. 1957. Games and Decisions . Wiley:New York.

March JG. 1978. Bounded rationality, ambiguity, and theengineering of choice. Bell Journal of Economics 9:587–608.

March JG. 1988. The Pursuit of Organizational Intelli-gence. Blackwell: Oxford, U.K.

March JG. 1991. Exploration and exploitation inorganizational learning. Organization Science 2:71–87.

March JG. 1994. A Primer on Decision Making: HowDecisions Happen . Free Press: New York.

March JG. 1996. Learning to be risk averse. Psychologi-cal Review 103: 308–319.

March JG. 2004. Parochialism in the evolution of aresearch community. Management and OrganizationReview 1: 5–22.

March JG, Olsen JP. 2005. The logic of appropriateness.In The Oxford Handbook of Public Policy , Rein M,Moran M, Goodin RE (eds). Oxford University Press:Oxford, U.K. (in press).

March JG, Shapira Z. 1992. Variable risk preferencesand the focus of attention. Psychological Review 99:172–183.

March JG, Sproull LS, Tamuz M. 1991. Learning fromsamples of one or fewer. Organization Science 2:1–13.

Marshall A. 1971. Long term competition with theSoviets: a framework for strategic analysis. RANDPaper P-21542. RAND Corporation: Santa Monica,CA.

Martin J. 1992. Cultures in Organizations: ThreePerspectives . Oxford University Press: New York.

McMillan J. 2004. Avoid hubris: and other lessons forreformers. Finance and Development 41: (September):1–4.

Mezias SJ, Lant TK. 1994. Mimetic learning andthe evolution of organizational populations. InEvolutionary Dynamics of Organizations , Baum JAC,

Copyright 2006 John Wiley & Sons, Ltd. Strat. Mgmt. J., 27: 201–214 (2006)

Page 14: Rationality Foolishness and Adaptive Intelligence

214 J. G. March

Singh JV (eds). Oxford University Press: New York;179–198.

Miner AS, Haunschild P. 1995. Population level learning.In Research in Organizational Behavior , Vol. 17,Cummings LL, Staw BM (eds). JAI Press: Greenwich,CT; 115–166.

Miner AS, Raghavan SV. 1999. Interorganizational imi-tation: a hidden engine of selection. In Variations inOrganization Science: In Honor of Donald T. Camp-bell , McKelvey W, Baum JAC (eds). Sage: London;35–62.

Moeller SB, Schlingemann FP, Stulz RM. 2003. Doshareholders of acquiring firms gain from acquisi-tions? National Bureau of Economic Research Work-ing Paper Number 9523.

Moeller SB, Schlingemann FP, Stulz RM. 2005. Wealthdestruction on a massive scale? A study of acquiring-firm returns in the recent merger wave. Journal ofFinance 60: 757–782.

Myrdal G. 1971. [1957]. Economic Theory and Underde-veloped Regions . Harper & Row: New York.

Nelson RR, Winter SG. 1982. An Evolutionary Theoryof Economic Change. Harvard University Press:Cambridge, MA.

Odiorne GS. 1984. Strategic Management of HumanResources . Jossey-Bass: San Francisco, CA.

Perrow C. 1984. Normal Accidents: Living with High-riskTechnologies . Basic Books: New York.

Pfeffer J, Salancik G. 1978. The External Control ofOrganizations: A Resource Dependence Perspective.Harper & Row: New York.

Popper KR. 1961. The Poverty of Historicism . Harper &Row: New York.

Popper KR. 1966. The Open Society and Its Enemies (5thedn). Princeton University Press: Princeton, NJ.

Porter ME. 1998. Competitive Strategy: Techniques forAnalyzing Industries and Competitors . Free Press:New York.

Postrel S, Rumelt RP. 1992. Incentives, routines, andself command. Industrial and Corporate Change 1:397–425.

Potvin C, Kraenzel M, Seutin G (eds). 2001. ProtectingBiological Diversity: Roles and Responsibilities .McGill–Queen’s University Press: Montreal.

RAND Review 1998. 50 years of looking forward. RANDReview, Fall.

Romano R. 2002. The Advantages of CompetitiveFederalism for Securities Regulation . AmericanEnterprise Institute Press: Washington, DC.

Rumelt RP, Schendel D, Teece DJ. 1991. Strategicmanagement and economics. Strategic ManagementJournal , Winter Special Issue 12: 5–29.

Sagan SD. 1993. The Limits of Safety: Organizations,Accidents, and Nuclear Weapons . Princeton UniversityPress: Princeton, NJ.

Selten R. 1991. Evolution, learning and economicbehavior. Games and Economic Behavior 3: 3–24.

Sen AK. 2002. Rationality and Freedom . HarvardUniversity Press (Belknap): Cambridge, MA.

Shackle GLS. 1961. Decision, Order and Time in HumanAffairs . Cambridge University Press: Cambridge, U.K.

Solnick SL. 1998. Stealing the State: Control andCollapse in Soviet Institutions . Harvard UniversityPress: Cambridge, MA.

Steuerman E. 1999. The Bounds of Reason: Habermas,Lyotard, and Melanie Klein on Rationality . Routledge:London.

Strang D, Soule SA. 1998. Diffusion in organizations andsocial movements: from hybrid corn to poison pills.Annual Review of Sociology 24: 265–290.

Sutton RI. 2002. Weird Ideas That Work: 11 1/2 Practicesfor Promoting, Managing, and Sustaining Innovation .Free Press: New York.

Tetlock PE. 1999. Theory-driven reasoning about possi-ble pasts and probable futures: are we prisoners of ourpreconceptions? American Journal of Political Science43: 335–366.

Tirole J. 1988. The Theory of Industrial Organization .MIT Press: Cambridge, MA.

Tolstoy L. 1869. (2000). Voina i Mir . Zakharov: Moscow.Van de Ven AH, Angles HL, Poole MS. 2000.

Research on the Management of Innovation: theMinnesota Studies . Oxford University Press: Oxford,U.K.

Vaughan D. 1996. The Challenger Launch Decision:Risky Technology, Culture, and Deviance at NASA.University of Chicago Press: Chicago, IL.

Vertinsky I. 1986. Economic growth and decision-makingin Japan: the duality of visible and invisible guidinghands. In Organizing Industrial Development , Wolff R(ed). Walter de Gruyter: Berlin; 41–57.

Von Neumann J, Morgenstern O. 1944. The Theory ofGames and Economic Behavior . Princeton UniversityPress: Princeton, NJ.

Walker G. 2004. Modern Competitive Strategy . McGraw-Hill: New York.

Warde I. 1998. Crony capitalism: LTCM, a hedgefund above suspicion. Le Monde Diplomatique 5November. http://mondediplo.com/1998/11/05warde2[15 August 2005].

Weick KE. 1969. The Social Psychology of Organizing .Random House: New York.

Wildavsky A. 1979. Speaking Truth to Power: The Artand Craft of Policy Analysis . Little Brown: Boston,MA.

Williamson OE. 1999. Strategy research: governanceand competence perspectives. Strategic ManagementJournal 20(12): 1087–1108.

Winter SG. 2000. The satisficing principle in capabilitylearning. Strategic Management Journal , Special Issue21(10–11): 981–996.

Copyright 2006 John Wiley & Sons, Ltd. Strat. Mgmt. J., 27: 201–214 (2006)