Mind, New Series, Vol. 94, No. 373 (Jan. 1985), pp. 53-63. Published by Oxford University Press on behalf of the Mind Association. Stable URL: http://www.jstor.org/stable/2254697

Causal Powers and Cognition

PATRICIA HANNA

I*

A good deal of controversy has been generated by the question whether there is any philosophical significance to be attached to recent efforts directed toward computer simulations of human cognition or human cognitive capacities. I wish here to discuss one very limited facet of these issues. In a well-known and vigorously argued paper, entitled 'Minds, Brains, and Programs',1 John Searle has contended that recent (and indeed any foreseeable) work in 'Artificial Intelligence' has virtually no relevance at all to leading issues in philosophical psychology. Although I am willing to concede that this may eventually turn out to be the case, my aim in this paper is to suggest that Searle's leading arguments, taken by themselves, have no tendency to show that this is so.

At the outset it is vital to distinguish between what Searle refers to as weak and strong Artificial Intelligence (hereafter AI). On Searle's account, weak AI consists in the claim that the main value of the computer in philosophical psychology is that it gives us a powerful tool, for example, with respect to formulating and testing hypotheses in a rigorous and precise manner. By contrast, strong AI is taken by Searle to involve the following two much stronger claims: first, that the appropriately programmed computer literally has cognitive states, and, second, that the programs in question explain human cognition in virtue of their possession of the cognitive states. Following Searle, I shall focus the present discussion on the strong AI thesis. I shall also concentrate on the first of the claims he attributes to the strong AI theorist, since he himself devotes most of his attention to this claim. Moreover, given the limitations of explanations in psychology, the second claim seems less central to the thesis of strong AI.

Searle's argument against strong AI is essentially very simple and is directed against the ostensible implications of the work of Roger Schank and his associates at Yale. It is necessary briefly to sketch this work as Searle understands it before turning to his main line of criticism.

* I wish to acknowledge extensive help from T. M. Reed with the writing of section I. I would also like to thank Bernard Harrison and Mendel Cohen for their comments and criticisms; and I would especially like to acknowledge Virgil Aldrich's insightful criticisms and constant encouragement.

1 This paper (hereafter MBP) originally appeared in The Behavioral and Brain Sciences, vol. 3, 1980. It has been reprinted in Douglas Hofstadter and Daniel C. Dennett, eds., The Mind's I, New York: Basic Books, 1981; all page references are to this printing.


As Searle explains it, the aim of Schank's project is to simulate the human ability to understand stories, specifically the story-understanding reflected in the human capacity to answer questions about the story even though the information that they request is never provided explicitly in the story. For example, we may suppose that you are presented with the following story: 'A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.' If you are asked the question 'Did the man eat the hamburger?', your answer, presumably, will be 'No, he did not'. Now as Searle points out, Schank's machines will print out answers of the sort we would expect human beings to supply if presented with stories such as this one. And as Searle construes the matter, proponents of strong AI contend that in such a question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and thus provide answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer appropriate questions about it.
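Schank's actual systems are of course far richer than this, but a toy sketch may fix ideas. The script structure, the event names, and the single inference rule below are my own illustrative assumptions, not Schank's code; the point is only to show how a stored script can license an answer that the story never states explicitly.

```python
# A toy script-based question answerer in the spirit of Schank's restaurant
# example. The script records the normal course of events; the inference
# rule exploits where the story departs from it.

RESTAURANT_SCRIPT = ["enter", "order", "receive food", "eat", "pay", "leave"]

def did_he_eat(story_events):
    """Answer 'Did the man eat the hamburger?' from events found in the story."""
    if "eat" in story_events:
        return "Yes, he did."
    # An angry exit blocks the default assumption that the script ran
    # to completion.
    if "storm out" in story_events:
        return "No, he did not."
    return "Presumably, yes."  # script assumed to run to completion

story = ["enter", "order", "receive food", "storm out"]  # burned hamburger
print(did_he_eat(story))  # -> No, he did not.
```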

Searle's response to the strong AI partisan is based on his famous 'Chinese Room' example, which runs very briefly as follows. Searle supposes himself locked in a room and presented with a 'large batch' of Chinese writing. The conditions which govern the case are as follows. First, Searle assumes, what is true, that he himself knows no Chinese and that Chinese writing is to him just a series of meaningless squiggles. Second, Searle is then given a second batch of Chinese script along with a set of rules, here in English, for correlating the second batch with the first batch of Chinese symbols. These rules enable him to correlate one set of formal symbols with another set, where 'formal' means that he is able to identify the symbols entirely by their shapes. Finally, Searle is supplied with a third batch of Chinese symbols, together with English instructions, which enable him to correlate elements of this third batch with the first two batches, where the rules instruct him how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes supplied to him in the third batch.

To complete the case. Unknown to Searle, the people providing him with these symbols call the first batch a 'script',2 the second batch a 'story', and the third batch 'questions'. In addition, they call his responses to the third batch 'answers to the questions' and the set of rules given to him in English the 'program'.

2 At this point, Searle has in fact seriously misrepresented Schank's proposal. I will not elaborate the misrepresentation in detail, but merely give a brief description of its character. According to Schank, the way in which a real AI program should work is as follows. The system is provided with a script in machine-'understandable' form, a story is 'read' and mapped onto parts of the script by the system; only then can questions about the story be answered. But this entails that Searle's use of a Chinese 'script' which he does not understand is fatally disanalogous to Schank's proposed use of scripts and cannot be considered use of a script.


To satisfy, or perhaps more than satisfy, the conditions of Schank's program, we are further to suppose that Searle becomes so proficient at manipulating the Chinese symbols that from the external point of view, i.e., that of persons outside the locked room, his answers are indistinguishable from those of native Chinese speakers; in other words, no one can determine just by looking at his answers that he cannot speak a word of Chinese.
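Since everything turns on the claim that the operator works on shapes alone, it is worth seeing how little machinery that involves. The following minimal sketch (mine, not Searle's; the rule table and the Chinese strings are mere placeholders) answers 'questions' by pure pattern matching, with no interpretation anywhere in the loop.

```python
# A deliberately 'formal' responder in the spirit of the Chinese room: input
# shapes are matched against stored patterns and the paired response is
# copied out. Nothing in the process assigns meaning to any symbol.

RULE_BOOK = {
    "他吃了汉堡吗": "没有，他没有吃",  # question shape -> answer shape
    "他生气吗": "是的，他很生气",
}

def chinese_room(symbols: str) -> str:
    # The operator compares character sequences purely by shape and hands
    # back the correlated string, understanding neither side.
    return RULE_BOOK.get(symbols, "不知道")

print(chinese_room("他吃了汉堡吗"))
```

As will emerge in section II, the live question is whether any purely syntactic rule book of this kind could in fact sustain behaviour indistinguishable from a native speaker's.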

Searle draws the following important and indeed crucial conclusions from the case thus developed. First, he produces his answers by manipulating uninterpreted formal symbols; hence, as far as Chinese is concerned, he merely behaves like a computer, in that he simply performs computational operations on formally specified elements and is thus only an instantiation of the computer program. Moreover, Searle alleges, it is clear that in the case described he does not understand a word of the Chinese stories, notwithstanding the fact that his inputs and outputs are indistinguishable from those of native Chinese speakers. Furthermore Searle claims to have falsified the view that any such program explains human understanding. For in the present case, the computer and program (taken together, these constitute the system)3 are functioning but there is no understanding (here of Chinese). Consequently, Searle suggests, Schank's computer understands nothing of any of the stories alluded to, for the computer possesses no more than does Searle himself in a clear case in which he grasps (i.e., understands) nothing whatsoever. Hence Searle concludes that the computer program is simply irrelevant to the understanding of stories, for in the Chinese room example Searle has everything that artificial intelligence can put into him by way of a program, and he nevertheless understands nothing. Thus, so Searle contends, computational operations on purely formally specified elements have no interesting connection with understanding or, more generally, with the possession of intentional states, where these involve (e.g.) that feature of certain mental states by which they are directed at or about objects or states of affairs in the world.

II

Before discussing Searle's main theoretical mistakes, it will be instructive to give a brief indication of some of his more or less technical errors and misunderstandings. These confusions and misapprehensions about the nature of research in AI help illuminate, and to some degree explain, some of Searle's more substantive confusions and mistakes. The types of misunderstanding I have in mind concern such things as the nature of the computational model; the role of semantics, as contrasted with syntax, in current AI research; and the distinction, however rough, between a program and an algorithm.4 I shall elaborate on each of these briefly.

3 Here, as well as in a number of other places, Searle displays a regrettable lack of knowledge of computer science and the terms in which real AI research is conducted.

In Minds and Mechanisms,5 Margaret Boden writes of the computational model that

[i]n general, cognitive scientists believe that a computational approach, based on the information-processing concepts developed within computer science and artificial intelligence, may be helpful or even necessary in articulating these multifarious complexities. These concepts were specifically developed to express qualitative distinctions between representations, and between symbol-manipulating procedures for effecting the transformations of representations, which is why they are potentially useful to cognitive science (p. 2, emphasis added).

From this, it is apparent that researchers in AI conceive the computational model as a much richer device than does Searle. Talk of qualitative distinctions and representations indicates that these AI researchers assume that the computational model is capable of modelling interpretive and/or interpreted processes, not just uninterpreted (or 'formal') symbols.

Even more striking in this connection is Aaron Sloman. In The Computer Revolution in Philosophy6 he claims that '. . . computational processes, which build up and deploy knowledge of the form and contents of the world' (p. 61) are the models currently in the forefront of AI research. Further, he alleges that AI researchers

... view complex processes as computational processes, including rich information flow between subprocesses and the construction and manipulating [of] symbolic structures within processes. This should supercede older paradigms, such as the paradigm which represents processes in terms of equations or correlations between numerical variables (p. 3).

From this it is obvious that when Searle says that '[a]s far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements' (MBP, p. 285), he is caught in the 'older paradigm' of a computational model. Insofar as his arguments rely on this paradigm, their force against much current AI research is considerably weakened.

Moreover, Searle's characterization of the Chinese room example presupposes that a purely syntactic specification of rules and symbol manipulation could, in principle, duplicate human behaviour.

4 Another source of confusion is Searle's terminology. There are a number of terms which have a technical use in both AI research and philosophy, e.g., 'knowledge', 'semantic', and 'representation'. Unfortunately, Searle fails to take account of the difference in the meanings assigned the terms by the two groups. As a result, issues are muddied even further. This tendency toward opacity is further heightened by Searle's use of such terms as 'causal' and 'intentional' which are not understood, at least in the sense he uses them, by the AI researchers he claims to be addressing.

5 Minds and Mechanisms, Hassocks: Harvester, 1981. For further discussions of AI see Margaret Boden, Artificial Intelligence and Natural Man, Hassocks: Harvester, 1977, and John Haugeland, Mind Design, Cambridge: MIT Press, 1981.

6 The Computer Revolution in Philosophy, Atlantic Highlands: Humanities, 1978.


While limiting the rules that can be given to the machine to the purely 'formal' or syntactic (see MBP, p. 284) may perhaps be ultimately justified, it is not a limitation which Schank would accept. Moreover, the grounds for such a restriction are not even hinted at by Searle. In 'Memory, Meaning, and Syntax', Schank et al. are explicit in their claim that, although one may have purely syntactic and purely semantic rules, these are always '. . . employed in an integrated control structure' (pp. 16-17). This constitutes a rejection of the Chomskian approach to language understanding, in which the syntactic, semantic, and phonetic components of the grammar operate entirely independently of one another to generate an interpretation of a sentence. Searle's attack is more appropriate to the Chomskian model than to Schank's, as I shall show.7
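What an 'integrated control structure' amounts to can be suggested with a toy example. The sketch below is my own illustration, not Schank's architecture: a classic prepositional-phrase attachment decision is made by consulting semantic expectations during parsing, rather than by letting an autonomous syntactic component run to completion first.

```python
# Toy 'integrated control structure': a syntactic attachment decision is
# resolved by consulting semantic expectations mid-parse, instead of running
# syntax and semantics as independent components.

SEMANTIC_EXPECTATIONS = {
    # Things that can plausibly serve as instruments of eating.
    "ate": {"instrument": {"fork", "spoon", "chopsticks"}},
}

def attach_pp(verb: str, obj: str, pp_object: str) -> str:
    """Decide whether 'with X' modifies the verb or its object."""
    expected = SEMANTIC_EXPECTATIONS.get(verb, {})
    if pp_object in expected.get("instrument", set()):
        return f"[{verb} with {pp_object}] {obj}"   # instrumental reading
    return f"{verb} [{obj} with {pp_object}]"        # modifier of the object

print(attach_pp("ate", "the hamburger", "fork"))    # attaches to the verb
print(attach_pp("ate", "the hamburger", "cheese"))  # attaches to the noun
```

On a strictly Chomskian picture the syntactic component would have to make, or leave unresolved, the attachment on its own; here the semantic rule informs the syntactic choice, which is the feature Schank insists on.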

Searle's discussion of the Chinese room example depends on his misrepresentation of Schank's avowed aims, for if Schank is correct, the very coherence of the example itself is called into question. Central to Schank's project is the assumption that duplication or even interesting approximation of human linguistic understanding could not be achieved by any program which did not take full account of semantic rules and interpretation. Thus, Searle is correct when he writes that Schank would accept his claim that merely being able to write '"squoggle-squoggle" after "squiggle-squiggle"' in appropriate contexts in no way guarantees understanding (see MBP, p. 290). However, Schank would go on to argue that this shows Searle's original specification of the behaviour of the Chinese room as '. . . indistinguishable from [that] of native Chinese speakers' (MBP, p. 285) to be incoherent. Schank would argue that Searle is faced with the following dilemma. If the room is to produce indistinguishable behaviour, we must posit an integrated syntactic/semantic component. If, on the other hand, we restrict the room to purely syntactic rules, the resulting behaviour will not mirror human behaviour. Searle is thus caught between two pictures which he cannot reconcile without abandoning at least one crucial feature of the Chinese room example.

Searle would reject the possibility of incorporating a semantic (intentional) component in an AI system. I shall argue that Searle's rejection rests on a faulty argument, as well as a misunderstanding of Schank's research.

To oversimplify the matter, in An Enquiry Concerning Human Understanding (Section II, 16), Hume considers a possible rejoinder to his empiricist claim that every idea requires a corresponding impression.

7 For an elaboration of these views, see Roger Schank and Lawrence Birnbaum, 'Memory, Meaning, and Syntax', Research Report 189, November 1980, Yale Department of Computer Science, and Roger Schank and Mark Burstein, 'Modelling Memory for Language Understanding', Research Report 220, February 1982, Yale Department of Computer Science. This is not the place to explore Schank's specific claims. I will only note that his examples of the necessity for allowing semantic and syntactic interpretation to aid and inform each other are prima facie persuasive. In addition, Schank's use of scripts for modelling human linguistic understanding provides AI researchers with a new tool for modelling human understanding; one which seems moreover to be very promising from the standpoint of mimicking human behaviour.


He concludes that one could come to have an idea of a missing shade of blue without having an impression of it, if allowed to extrapolate from the shades which one has experienced. It seems quite reasonable to suppose that a great deal of human knowledge is derived from such extrapolation. Schank clearly holds this view. The point of his research is to show how, from a semantic base, a system might construct such extrapolated knowledge.

Searle addresses the question of adding a semantic element, though only indirectly. He writes:

Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to [a] robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms (MBP, p. 362).

This seems promising since Searle characterizes the so-called 'robot reply' as tacitly conceding that cognition is not solely a matter of formal symbol manipulation. Here (at last) he appears to recognize that current AI research is not concerned with mere symbol manipulation; but in the end he disappoints by remaining firmly in the grasp of the Chomskian model.

At issue, it seems, is the source of this semantic component. In the case of ordinary human beings, it arises through causal interaction with the world; but machines cannot at present avail themselves of the ordinary channels. Schank proposes to 'gift' the system with a semantic element, i.e., the programmer will put it in the system.8 Once the element is in place, the system can use it to make inferences, answer questions, and so forth.
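A minimal sketch may make the 'gifting' proposal concrete. The knowledge base and inference rule below are invented for illustration; the point is only that a programmer-supplied semantic element, however acquired, can be used by the system to answer questions the input never states.

```python
# Toy version of a programmer-'gifted' semantic element: a small hand-coded
# knowledge base that the system exploits for inference, even though none of
# it was acquired through the system's own causal contact with the world.

FACTS = {
    "hamburger": {"is_a": "food", "served_at": "restaurant"},
}

# Rule: anything whose 'is_a' value is 'food' can be eaten.
RULES = [("is_a", "food", "can_be_eaten")]

def infer(term: str) -> set:
    """Derive properties of a term from the gifted facts and rules."""
    derived = set()
    for relation, value, conclusion in RULES:
        if FACTS.get(term, {}).get(relation) == value:
            derived.add(conclusion)
    return derived

print(infer("hamburger"))  # -> {'can_be_eaten'}
```

Whether such an element deserves to be called 'semantic' is of course exactly what is in dispute between Searle and the AI researchers; the sketch shows only what the proposal involves mechanically.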

In response to this, Searle claims that '[i]t is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts [cited above]' (MBP, p. 362). Here we have added some causal contact with the world; but, Searle claims, it is insufficient to produce intentionality; indeed, it is insufficient even to add a semantic element.

But this misses the force of the robot reply as conceived by current AI researchers. To suppose that vision is mimicked, let alone duplicated, simply by attaching a camera to a machine is implausible. It reflects the same confusion which leads Searle to claim elsewhere that a blind man who understands a flow chart for a computer vision program can actually see.9 Within the AI community, vision programs are also semantically endowed. By positing a completely non-semantic system, Searle guarantees that the system is indeed non-semantic (non-intentional).

8 See 'Memory, Meaning, and Syntax', cited above.

9 In 'Analytic Philosophy and Mental Phenomena', Midwest Studies in Philosophy, vol. 6, 1981, pp. 405-23, Searle implies that proponents of strong AI must maintain that merely by following a flow chart for computer vision, a blind man could come to see. Countering this implication he writes: '. . . all sorts of substances can instantiate the flow chart even though they are inappropriate for having visual experience, tickles, itches, and so on' (p. 417). This of course misses entirely the point concerning the distinction between an algorithm and a program running on some machine; without the relevant equipment (in this case functioning organs of vision), no instantiation could actually perform the task at hand.


The cost is, however, that his argument is circular. Additionally, it manages totally to miss the state of current AI research. Though they may be mistaken in their beliefs about what they can do, AI researchers are not committed to Chomsky. Searle, unfortunately, treats their work as though they were.

The final set of preliminary confusions which I wish to address concerns Searle's apparent failure to distinguish clearly between algorithms and programs. Roughly, an algorithm can be characterized as any systematic, effective procedure for solving a problem or carrying out a task. Thus, recipes for cakes and instructions for assembling a bicycle are algorithms. Although the efficiency, elegance, and/or simplicity of an algorithm can be affected by the computational environment in which it is implemented, one very important characteristic of an algorithm is that it is not machine- or language-specific. One can, for example, implement the same algorithm, with more or less ease, in any of an enormous number of computer languages. This is the point of contrast between an algorithm and a program. Although there can be ontological problems in connection with discussions of programs which parallel those in the philosophical debate over propositions, it is safe to say that programs are written in specific languages for specific machines.

Now it is clear that one and the same program, in the sense of 'piece of code', which compiles and runs on machine A, may not even compile on machine B. Thus, whatever Searle says of programs must always be understood as relativized to some specific machine. Unfortunately, Searle is not clear about this.
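The distinction is easy to exhibit. Euclid's algorithm for the greatest common divisor (my choice of example; the paper discusses none in particular) is a single abstract procedure; the snippet below is one program realizing it, tied to a particular language and machine environment, while a C or Lisp program realizing the very same algorithm would be a different program.

```python
# One algorithm, one of indefinitely many programs realizing it. The abstract
# procedure ('repeatedly replace the pair (a, b) by (b, a mod b) until b is
# zero') is not tied to Python; this particular piece of code is.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm for the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```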

To some extent, he may be merely careless in his choice of words: after all, his primary audience is philosophical, and philosophers do not really need all the gory details. However, this principle of charity cannot be overextended. At certain points in his paper he seems genuinely to confuse programs with algorithms, and this confusion is not benign since his claim that strong AI commits one to dualism rests on this confusion.

In strong AI ... what matters are programs, and programs are independent of their realization in machines . . . But I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance ... unless the mind is not only conceptually but empirically independent of the brain, you cannot carry out the project, for the program is completely independent of any realization (MBP, p. 304).

I would agree that it is programs which matter to strong AI, but it is algorithms, not programs, which are (in some important sense) independent of machine realization. Thus, one of Searle's more striking points, that AI is dualistic, rests on an elementary confusion. Programs, which provide the explanations of human cognition, not only need not be, but in fact cannot be, separated from their machine realizations.


III

We now turn to some of Searle's more theoretical confusions. The discussion will be limited to his treatment of the source of cognition. For the purpose of this discussion, I will consider the machine in question to be a robot of the sort discussed by Searle (cf. MBP, pp. 362-3). In connection with this, it will be illuminating to consider Searle's criticisms of the Turing test.

Although he is not perfectly clear on the character of his argument concerning the source of cognition, sometimes vacillating between a priori and empirical claims, there are two separable, though closely related, tendencies in the paper. The first consists in the claim that the material 'stuff' of the brain is necessary for cognition, or at least that material of a relevantly similar sort is necessary. The second, more plausible, argument is to the effect that it is the unique (though apparently in principle duplicable) causal powers of the brain which are necessary for cognition. In what follows, I shall argue that neither of these arguments taken alone is sufficient to establish Searle's conclusions; and that, in fact, when taken together the arguments are mutually destructive.

Part of the difficulty in assessing Searle's contentions arises from his tendency to conflate these two lines of argument. In some passages he appears to argue that, ceteris paribus, the mere fact of the brain's being composed of a certain kind of material is by itself sufficient to warrant the ascription of cognition. In other passages he seems merely to claim that the stuff itself is relevant only insofar as it is capable of producing certain causal effects, and that it is these causal powers which create (or are) cognition.

The first of these arguments indicates Searle's apparent desire to limit his discussion to the ontological foundations of cognition, while avoiding all epistemological considerations. In this purely ontological mode, reference to the 'stuff' of which the brain is made is peculiarly appropriate. However, another of Searle's concerns is to show that programs like that of Schank cannot explain cognition. But, if he wants to offer an alternative explanation, he must provide some account of why and how this 'stuff' produces cognition. And it is in this connection that he appeals to the causal powers of the brain, going so far as to admit that even if Martians were composed of material radically different from ours, we might as the consequence of an empirical investigation nevertheless conclude that they too are cognitive beings. Here Searle has moved from ontology to epistemology: the issue is now whether and how we can know that other beings have cognitive states.

Now, in fairness to Searle, one must note that he does mention the specific structure of the brain. And one might suppose that he holds that it is the effects produced by a brain (material) with the specific structure of the human brain that count as cognition. However, besides offering no reason in support of this claim, Searle's aforementioned views on Martian cognition run strongly counter to it.


If we grant that Martians are composed of material quite unrelated to human material, there is nothing conceptually odd in positing that the structure of their brains (or whatever) does not resemble the structure of a human brain. But if this is conceded there is no clear reason why machines could not be granted cognition as well.

Aside from a blind dogmatic commitment to some form of vitalism, there is no more reason to accept Searle's claims about the necessity of human brain 'stuff' for cognition than there is to accept Wittgenstein's related claims about the human form and language.10 It is incumbent on Searle to show that there is no reason for, and indeed conclusive reason against, the ascription of cognition to machines. Moreover, these reasons must be independent of the difference in material from which machines and humans are composed. Searle's argument must take the form of showing that humans do, but machines do not (and cannot), possess the relevant causal powers. He now turns to epistemological arguments and addresses the well-known Turing test.

As a general matter, one might well consider how we in fact determine whether something which is presumed to interact with its environment as our 'robot' does, does or does not have certain causal powers. It seems utterly unremarkable to maintain that we determine the causal powers of something by observing what effects it produces and then inferring the presence of certain causal powers from these effects. Seen in this light, and placed in its proper historical context, the Turing test is a far more plausible measure of cognition than Searle's characterization would suggest.

Turing's paper, 'Computing Machinery and Intelligence', in which he initially poses the so-called 'Turing test', was published in 1950.11 At that time the state of computer science was extremely primitive, and no one had as yet realized anything even as sophisticated as an expert system. Moreover, 'communication' between operator and machine was strictly confined to something like typescript. As a consequence, Searle misrepresents Turing's thesis when he characterizes it as a test based on some simple, mechanical consideration of inputs and outputs. This is further reflected in his rather too hasty dismissal of what might be called a 'Turing robot' (cf. section II). The point of Turing's proposal was to suggest a conceptual means of identifying intelligence; and to show that it was in principle possible that a system might satisfy the very criteria we apply to humans. If we are willing to ascribe intelligence to humans on the basis of their satisfaction of these criteria, Turing and others go on to argue, then we should ipso facto ascribe it to systems which also satisfy the criteria.12

10 One might, of course, find it easier to understand a being which resembled oneself than to understand some creature which was utterly unlike one in appearance. This is, however, a point about the pragmatic considerations pertaining to understanding, translation, etc., and not a point about the possibility of language itself.

11 Alan Turing, 'Computing Machinery and Intelligence', Mind, vol. LIX, no. 236, 1950.

12 If one rejects this interpretation of the Turing test, I will simply replace it with something exactly like the test described and call it the 'X-Turing test'. It is, in any case, the newly formed X-Turing test which is used by AI theorists.
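To fix ideas about the behavioural criterion at issue before turning to Searle's objection, here is a minimal sketch of an imitation-game trial. It is my own schematic rendering, not Turing's 1950 protocol in detail: the interrogator sees only labelled transcripts, and the machine 'passes' when the interrogator can do no better than chance at picking it out.

```python
# Schematic imitation game: the interrogator judges from transcripts alone,
# with no access to the innards of either respondent.

import random

def imitation_game(ask, judge, human, machine, rounds=20):
    """Return True if the machine is identified no better than chance."""
    identified = 0
    for _ in range(rounds):
        q = ask()
        pair = [("human", human(q)), ("machine", machine(q))]
        random.shuffle(pair)                               # hide which is which
        labels = {"A": pair[0], "B": pair[1]}
        guess = judge(q, labels["A"][1], labels["B"][1])   # 'A' or 'B'
        if labels[guess][0] == "machine":
            identified += 1
    return identified <= rounds // 2

# With behaviourally indistinguishable answers, the judge is reduced to chance.
canned = {"Did the man eat the hamburger?": "No, he did not."}
respond = lambda q: canned.get(q, "I would have to think about that.")
print(imitation_game(
    ask=lambda: "Did the man eat the hamburger?",
    judge=lambda q, a, b: random.choice(["A", "B"]),
    human=respond,
    machine=respond,
))
```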

This content downloaded from 46.243.173.21 on Sat, 28 Jun 2014 12:06:34 PMAll use subject to JSTOR Terms and Conditions

Page 11: Causal Powers and Cognition

62 Patricia Hanna

Searle's most forcefully presented case against acceptance of the Turing test consists in his claim that if a liver were to pass the Turing test, we would be forced to accept it as intelligent.

If we are to conclude that there must be cognition on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing ... it is hard to see how we avoid saying that stomach, heart, liver and so on are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that stomach understands (MBP, pp. 260-1).

This case is, however, misleading, and reflects Searle's distortion of the Turing test. In order that it 'pass' the Turing test, a liver, for example, would have to duplicate intelligent human behaviour. This requirement itself supplies the motivational difference which Searle seeks for distinguishing ordinary livers from 'minds'.

But, what if we did have a liver which somehow duplicated human intelligent behaviour, as contrasted with some animation in which raw liver is portrayed in glasses and a moustache dancing the tango? While I certainly find it difficult to imagine in any detail how a liver would thus duplicate any human behaviour sophisticated enough to count as intelligent, should it occur, I would feel compelled to recognize the liver as possessing cognition.13 Searle's case is so striking because he actually relies on our imagining a dancing liver (à la Walt Disney's dancing broomsticks) being presented as a candidate for cognition; however, if the case is to count against the Turing test in any way this is not an appropriate picture. Instead, we must imagine a liver which actually duplicates some range of intelligent behaviour. But if this is granted, then the consequences of the Turing test in no way undermine the plausibility of the test; though we may question the plausibility of the example. If, on the other hand, the liver merely displays (as it in fact does) a complex pattern of inputs and outputs which can be described by something analogous to a computer program, then we do not have anything which satisfies the Turing test. Hence, there is no reason for ascribing cognition to it in the first place.

These considerations help to bring into focus some of Searle's complaints about reference and intention14 in connection with machine intelligence. According to Searle the real problem with the Chinese room is that it cannot understand the story since it cannot know that 'hamburger' refers to hamburgers (for example). But of course if the room does behave exactly as we behave, i.e., if it can (somehow) pick out hamburgers, as contrasted with merely uttering the word 'hamburger', then how are we to continue denying the room understanding and intentionality? Searle's criticism of the Turing test does not, contrary to his apparent beliefs, provide an answer to this question.

13 There is, of course, a general question as to what does count as intelligent behaviour; however, this in no way affects the present case.

14 For a thorough discussion of his views on intentionality, see John Searle, Intentionality, New York: Cambridge University Press, 1983.



What now remains of Searle's claims? His insistence on the crucial role of the material of which the brain is made has been seen to be nothing more than a vitalist dogma, unargued and indeed undermined by his own tendency to allow ascription of cognition to Martians. His admittedly more plausible move toward emphasis on the causal powers of the brain is confused by his misunderstanding of the force of the Turing test. It appears in retrospect that it is Searle, not the AI researchers, who is committed to something like dualism; only, perhaps in Searle's case, it is something more akin to vitalism than dualism. Many find accepting such an archaic and vague doctrine even less appealing than admitting the possibility of AI itself.15

Department of Philosophy                                  PATRICIA HANNA
University of Utah
338 Orson Spencer Hall
Salt Lake City, Utah 84112
U.S.A.

15 I am indebted to Dudley Irish for his patient, detailed, informative, and prolonged discussions of the workings of computers, programs, software systems, and all the real 'stuff' of computer science.
