The Philosophy of Mind

© Andrew Bailey ([email protected])

Few things are simultaneously so familiar and so mysterious as the mind. Our own mental states—our thoughts, our sensations, our fears and desires—are more intimate to us than anything else; indeed, in some sense we live our whole lives ‘in’ our minds, immersed in a flow of mental content. On the other hand, the difference between things with minds and things without is one of the most profound distinctions we can make. The presence of mind can make the difference between life and death; between being an intelligent, creative, feeling entity and a mere mechanism; between being a locus of moral value and being a mere object. Our minds seem to be the sources of our selves—they are what make us who we are. Yet, for all their importance, minds are elusive: we ordinarily think that minds cannot be observed or studied directly, and although we all agree that there is a very important connection between our minds and our brains it is unclear to most of us, on reflection, just what the nature of that connection is.

This chapter focuses on the issue, central to the philosophy of mind, of the relationship between the mind and the body. We will survey the main philosophical positions on this question: substance dualism (mind as spirit), behaviourism (mind as a way of behaving), mind-brain identity theory (mind as brain), non-reductive materialism (mind as an ‘emergent’ physical property of brains), and property dualism (mind as a non-physical property of brains).

DUALISM

The dominant theory of the mind from the seventeenth to the early twentieth century was substance dualism. This is the view that the mind and the body (including the brain) are two quite distinct kinds of object. Bodies are lumps of matter (and, in more modern versions, fields of energy) located in three-dimensional space, mechanically obeying the laws of physics. Minds by contrast are ‘lumps’ of mind-stuff—a substance which is non-material, non-spatial and not subject to the laws of physics. Although not located in space, our minds are nevertheless, according to substance dualism, somehow ‘attached’ to our bodies; that is, at least while we are living in this world, our selves are made up of two, quite different and separable parts—a mind and a body.

INTERACTIONISM

Many, though not all, substance dualists believed that our minds and our bodies interact with each other. Our physical bodies are bombarded by signals from the physical environment: particles—what we today call photons—strike the retinas of our eyes, vibrations in the air affect the tympanic membranes in our ears, and so on. These physical sensory inputs are passed on to the brain by some physical mechanism: as vibrations in a very fine fluid called ‘vital spirit’ according to seventeenth-century science, or as electrical signals travelling along our nerves, as more modern physiology has discovered. Once in the brain, these inputs provoke extremely complex neural reactions as the brain processes the information it receives.

So far, even for the substance dualist, all of these processes are purely physical and mechanical. However, at this point, says the dualist, the information that the brain has received is passed out of the physical world—out of space—and into the realm of the mental, into the mind. Here the physical information from the senses is turned into a conscious mental image, composed of colour patches, shapes, sounds, smells, and all the other elements of our phenomenal mental life. These mental images, plus other more abstract ideas that we have stored in our minds (such as mathematical or religious concepts), are the materials of thought. Using these ideas we formulate beliefs about the way the world is, respond emotionally and cognitively to these representations, generate desires for ways the world should be, and construct plans for actions that will carry these desires into effect.

Many of the things our bodies do, they do automatically: for example, even substance dualists agree that breathing and balancing are capacities our bodies have automatically as physical organisms. However, according to the substance dualist, some of our actions are deliberate—they are willed by our minds, which communicate their instructions to our bodies by influencing the activities of our brains. Thus, certain influences return from the mental realm and—at least in the restricted realm of the brain—change the causal processes that are operating in the physical world in order to make our bodies behave in certain ways. Our minds put their plans into action by influencing the behaviour of our bodies.

This, according to classical dualism, is the difference between human beings and the lower animals. The lower animals are merely physical machines, and everything they do is automatic—governed by the mechanical laws of physics. Human beings, by contrast, have non-physical minds which can break free of mere physical law and cause our bodies to act freely and rationally. Put another way, lower animals do not have minds (at all, not even a little bit) and people do. Thus, although animals may behave as if they are in pain and wish to be relieved of it, unlike human beings they do not really feel pain or want it to stop.


TWO ARGUMENTS FOR SUBSTANCE DUALISM

Dualism has many initial attractions as a theory of the mind. There are several striking differences between mental and physical phenomena which give weight to the dualist intuition. Thoughts are seemingly spontaneous and resistant to mathematical description, unlike the behaviour of objects in the physical world; we are intimately aware—conscious—of our own thoughts in a way that is hard to square with the relations between physical things; and our thoughts, apparently unlike physical objects, are intrinsically meaningful. However, none of these differences are by themselves enough to show that the mental is a different substance than the physical, rather than merely an unusual variety of physical phenomenon (in rather the way that life, say, is a highly distinctive and outwardly mysterious but still physical process).

René Descartes (1596–1650), substance dualism’s best-known defender and the formulator of the classic version of the theory, mustered two arguments for the claim that mind and matter do indeed belong to quite different realms. Both of these arguments trade on a simple logical truism: if ‘two’ things are really the same thing (e.g. if the mind is really identical with some chunk of matter, such as the brain) then all the attributes of one must be attributes of ‘the other.’ This is simply because, of course, everything has all the same properties as itself. So if someone has two names—a regular name and a secret superhero identity, for example—then everything that is true of that person when they are picked out using the first name must still be true of them even if we use their other name. Conversely, if we can discover that the thing named A has some attribute that the thing named B does not have—say A was in Toronto at 11:30 am EST on October 4, 2004 while B was in Paris at exactly the same time—then we can be sure that the thing named A cannot possibly be the same as the thing named B.
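In the notation of predicate logic, this truism is the indiscernibility of identicals, often called Leibniz’s Law; the second formula below is the contrapositive form on which both of Descartes’ arguments turn:

    \forall x \, \forall y \, \bigl( x = y \rightarrow \forall F \, (Fx \leftrightarrow Fy) \bigr)

    \exists F \, (Fa \wedge \neg Fb) \rightarrow a \neq b

An argument of this shape therefore needs only a single property F that one thing demonstrably has and the other demonstrably lacks.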

So for Descartes to prove that minds are radically distinct from any material thing, all he has to do is prove that minds have a property that matter could not possibly have, or alternatively prove that matter has a property that minds could not possess. In his Meditations on First Philosophy (1641) he mounts arguments in both directions. His central example of a property that matter has that minds do not is that of being extended in space. Everything material, Descartes asserts, has spatial location and extent—it fills space. Furthermore, every material thing—even infinitesimally tiny things—can in principle be divided into smaller parts: the part filling the space on the left, say, and the part filling the space on the right. (This would be true even for material things that for some reason cannot in fact be split—subatomic particles say—as long as they at least have a non-zero volume.) So, for Descartes, every material thing can be divided.

But, according to Descartes, it is not even coherent to suppose that mental entities can be divided—what would it be for a belief to be cut into eighths, for example, or an emotion to be separated into two parts and the visual image of a cow inserted into the middle? The problem is not so much that we are not in practice able to cut the mind into parts as that it apparently does not even make sense to talk about doing so—minds are simply not the sorts of things that have spatial volume. So matter has a property that minds do not—three-dimensional extension. So, Descartes concludes, minds cannot be identified with anything material (such as brains).

Descartes’ argument that minds have an attribute that matter lacks goes as follows. Even when we try to doubt the existence of anything that can be doubted—the truth of anything that might even conceivably be false—even under these conditions of radical doubt there is something that cannot be doubted. This indubitable fact is Descartes’ famous “cogito ergo sum”: I think therefore I am. In other words, the fact of our own minds cannot be doubted by each of us, no matter what else we might call into question—even the very formulation of a doubt is the performance of a mental act, something we could not possibly do without a mind.

By contrast, everything material is open to this sort of radical doubt: the entire physical world I seem to see around me, including my own body, could be a giant illusion, and I might be nothing more than an immaterial self dreaming of—or being tricked into believing in—fictional material objects. So here, Descartes thought, is another important difference between mind and matter: matter is open to doubt, whereas mind is indubitable; we might be mistaken about the existence of our bodies but we could not possibly be mistaken about our own occurrent mental states. Hence, mind and matter differ in their properties; and so, once again, Descartes concludes that they must be different things—different substances.

PROBLEMS WITH THESE ARGUMENTS

Descartes’ arguments for dualism, however, are flawed. The problem with the first argument is that it begs the question—that is, it assumes just what it sets out to prove, and so should not convince someone who does not already believe the conclusion. The premise that thoughts cannot be chopped up depends upon the assumption that they have no spatial dimension … and this will be true only if thoughts do not occupy physical space, which is just what the argument purports to prove. Descartes’ first argument, then, only succeeds in establishing that if minds are non-spatial then minds are non-spatial; this is true, but unhelpful.

The flaw in the second argument is a little more subtle. The argument’s first premise is certainly correct: if two names or descriptions pick out things that have different properties then they must pick out two different things. And Descartes does manage to show that there are two things that have different properties: one which is indubitable, and the other which is not. The problem is that the things about which Descartes establishes a dualism are not minds and brains—for it is not minds and brains themselves which can be doubted or not. Rather, what is open to doubt is some proposition or fact about brains. You cannot, for example, ‘doubt my brain’—such a thing would be nonsensical as well as ungrammatical. What you can do is doubt that I have a brain. But here, what is being doubted is (roughly) the proposition expressed by the sentence “Andrew has a brain.” It is this proposition that has the property of being dubitable (open to doubt), rather than my brain itself. So Descartes has managed to show that the proposition “My mind exists” is different than the proposition “My brain exists,” since they differ in at least one of their properties … but of course, this is not very interesting. What he has not done is shown that minds themselves have some property that brains do not.

PROBLEMS WITH DUALISM

Even if there are no good arguments for substance dualism, though, it does not follow from this that dualism is false: something could perfectly well be true even though we have only bad reasons for believing it. However, there are reasons to think that substance dualism is not a fully coherent theory—that we cannot quite make sense of what the theory is telling us—and thus that it may be false (or meaningless). The two main problems for substance dualism are the problem of mind-brain causation and the problem of other minds.

The problem of mind-brain causation is this: according to substance dualism, brains are material things occupying physical space while minds are immaterial, non-spatial, not even located anywhere. Yet minds and brains seem to exert a causal influence on each other. When I stub my toe this sends physical signals to my brain which bring about a conscious sensation of pain. When I decide to get up and get a drink from the fridge, this mental action leads to changes in my brain which in turn cause my body to move. But how can these causal chains bridge the gap between mind and brain? What would it even be for something that has no location, no moving parts, no mass, no energy to somehow bring about physical changes in a lump of matter? Similarly, how could brains cause myriad intricate changes to their associated minds without passing any kind of energy or other influence ‘into’ them—there is simply nowhere to pass this energy to, since non-material minds occupy no spatial dimension. This two-way relation, even though we might call it ‘causation,’ bears no resemblance to any kind of causal interaction with which we are familiar in the natural world, and it seems utterly mysterious.

Gilbert Ryle (1900–1973), who coined the phrase “the ghost in the machine” in his influential critique of substance dualism The Concept of Mind (1949), suggested that substance dualists tended to think of the mind as being a sort of “para-mechanical” contraption—a ghostly, invisible clockwork connected to the physical machine of the brain. But, as Ryle suggests, to adopt this comforting conception of the mind is simply cheating, if you are a substance dualist. Non-physical minds can’t resemble physical machines in any meaningful way since they do not have spatially related parts to interact with each other (and also, presumably, because unlike material constructs they are not driven by mechanistic laws). So even the ‘internal’ workings of the mind—how one thought or sensation leads to another—are rendered mysterious by substance dualism.

The second problem for substance dualism is the problem of other minds. When we interact with other people we are usually very confident that we know something—often quite a lot—about the contents of their minds. When someone in the same room as us is angry or elated, we can often tell. We sometimes notice that someone we meet is intelligent or conniving or interested in stamp collecting. If you tell me that you had eggs for breakfast, typically I will assume that you believe you had eggs for breakfast, or at least that you intend for me to believe it. But the ease with which we ‘read’ the minds of others seems to be inconsistent with substance dualism—for, according to the dualist, the minds of others are totally inaccessible to us. Indeed, not only are they inaccessible to us in our everyday interactions with others, they are in principle impervious to any kind of psychological investigation since they cannot be detected by any possible scientific instrument.

Our knowledge of the minds of others then, it turns out, is really only knowledge of other people’s behaviours. And, according to substance dualism, we can have no evidence for the view that this behaviour is accompanied by appropriate mental states in any case except our own. I know that when I get poked with a sharp stick I feel a certain sensation, and that this sensation seems to give rise to certain behaviours. But why should this entitle me to believe that, when you get poked with a sharp stick and react in similar ways, you undergo the same sort of mental experience (or indeed any mental experience at all)? This problem is all the more vexed because, unless science turns out to be incomplete and full of explanatory gaps (and most substance dualists are prepared to grant that it will not), all of human behaviour can be fully accounted for naturalistically: all deliberate actions are caused by brain events, and all brain events are caused by certain prior physical events. There is thus no explanatory need to assume that other people have minds.

None of this amounts to conclusive proof that substance dualism is false. But what these considerations do do is call into question whether we can really make sense of substance dualism: what seemed to be a fairly clear and simple model of the mind has turned out to be mind-bendingly mysterious and strongly counter to some of our deep intuitions. In the absence of any good reason to believe dualism true, then, we should seek elsewhere for a theory that is at least as plausible or satisfying as substance dualism but without its problems.

THE REACTION AGAINST DUALISM—BEHAVIOURISM

The first response to dualism that we shall consider—analytic behaviourism—can be seen as a reaction to the problem of other minds. The conundrum is that we normally never see any object that could plausibly be called a mind in our interaction with others—all we are directly aware of is the way they behave—yet nevertheless we confidently make claims about their mental states. One way to reconcile these two data is to adopt the theory that the mind just is behaviour.

THE THEORY OF BEHAVIOURISM

The first and most important thing to notice about this theory is that it involves a change in the category into which we place the mind. According to behaviourism, the mind-body relation is not a relation between two things: minds are not things at all. Instead, minds are, as it were, what certain bodies do: they are patterns of behaviour exhibited by people and perhaps by certain other kinds of bodies (other species, aliens, artificial intelligences, and so on). Similarly, ‘mental states’ are not inner states of either our minds or our brains. In fact, it is perhaps most accurate to say that behaviourism does not believe in the existence of mental states at all. Instead, the language that we might have thought was about mental states—talk of pains, perceptions, emotions, dreams, beliefs and so on—is really about certain patterns of behaviour. For example, according to behaviourism, for Mary to feel thirsty is no more and no less than for her to seek out a drink, all things being equal; for Michael to feel jealous is exactly for him to behave in certain ways characteristic of jealousy; for Mandar to be visually aware of a tree just is for her to report that she sees a tree, walk around it if it is in her way, and so on.

There are three further crucial things to notice about this account of mentality. First, in talking about ‘patterns of behaviour’ behaviourists are really talking about very complex sets of dispositions to behave. Being thirsty, for instance, cannot be simply characterised as getting a drink—obviously, sometimes we are thirsty but we do not drink. We might not have the opportunity to drink, we might have more pressing things to do, we might believe (truly or falsely) that all the available liquids are poisonous, and so on. So the behaviour that characterises thirst is dependent upon circumstances—both the circumstances of our environment and the circumstances of our other ‘mental states’ (or dispositions to behave in certain ways). What remains constant across all these many and varied circumstances are the dispositions to behave: that is, I am thirsty if and only if, if I know that a drink is conveniently available and I have no more pressing thing to do and I believe that the drink will harmlessly quench my thirst (and any other conditions that need to be specified are specified), then I will get myself a drink. Hence it is these dispositions to behave, rather than actual behaviours, that are picked out by mental terms.
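Rendered schematically (a rough sketch only, with the predicate names invented for illustration and the ellipsis standing in for the unspecified further conditions), the dispositional analysis is a biconditional whose right-hand side is itself a conditional:

    \text{Thirsty}(x) \leftrightarrow \bigl[ \bigl( \text{DrinkAvailable}(x) \wedge \text{NothingMorePressing}(x) \wedge \text{BelievesDrinkSafe}(x) \wedge \ldots \bigr) \rightarrow \text{GetsDrink}(x) \bigr]

Note that the analysis does not say that thirsty people actually drink; it says only that they would drink were the stated conditions to obtain.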

A second thing to notice is that, once these accounts of ‘mental states’ are established, they are taken to be true by definition. This is why the version of behaviourism I am describing here is often called analytic behaviourism. According to behaviourism, we do not need to hunt for some hidden underlying essence of thirstiness in order to find out what thirst is—we already know what it is to be thirsty. We must already know what it is to be thirsty because otherwise we would not know when we could correctly describe someone as thirsty and when not; and patently we do know this. So in order to understand thirst we should examine the circumstances under which the term “thirsty” can be correctly applied—in other words, we should examine the meaning of the word “thirsty.” The philosophy of mind, from this viewpoint, is largely a matter of bringing to light the “logical geography” of mental concepts.

This same point can be turned around and approached in a different way. Again, the starting point is that we often do know when people are (for example) thirsty—when the term “thirsty” can be correctly applied to them. We find out whether people are thirsty by examining the ways in which they are disposed to behave—rather than, for example, by scanning their brains or by attempting vainly to access their non-physical mind. Hence mental terms must be about—must refer to—these dispositions to behave (rather than inner states of brains or souls).

This takes us to a third useful thing to notice about the behaviourist account of mentality. It is uncontroversial to suppose that certain kinds of external behaviour are evidence of mental states: everyone can agree on this simple fact, no matter what their theoretical stripe. One might even be attracted by the idea—sometimes called methodological or scientific behaviourism—that the best or only way to study the mind is to study the external behaviour of organisms, while treating their bodies and brains as ‘black boxes’ (psychologists J.B. Watson (1878–1958) and B.F. Skinner (1904–1990) are representatives of this view). However, behaviourism as a theory of the nature of the mind goes far beyond this. According to analytic behaviourism, behaviour is not merely evidence of further, internal mental states. Certain kinds of behaviour actually are aspects of mentality: thirst is not the inner cause of the disposition to drink—it is the disposition to drink. We wear our minds on the surface of our bodies as it were. A good way of putting this idea is to note that, for behaviourism, the word “thirst” does not refer to the unseen cause of drinking; it refers to (generally easily visible) drinking behaviour.

PROBLEMS WITH BEHAVIOURISM

Analytic behaviourism has its attractions as a theory of mind. It rests on what seems to be a sensible theory of the meaning of mental terms, and it handily does away with the two problems we identified above with substance dualism. The problem of mind-brain causation is dissolved simply because, according to behaviourism, minds properly understood are not the kinds of things to be causes at all—‘mental states’ are themselves characteristic patterns of behaviour rather than whatever it is that causes that behaviour. The problem of other minds is solved because minds are no longer treated as secret inner sanctums invisible to others—rather, our mental life is as readily identifiable as our dispositions to behave.

However, it is generally agreed that, like substance dualism, analytic behaviourism is seriously flawed as a theory of mind: just as with dualism, much of the initially attractive intuitiveness of the theory turns out to be illusory. Here are two problems with the theory.

Firstly, analytic behaviourism is committed to denying any kind of internalism about mental states, and this has various worrying consequences. It apparently means, for example, that there can in principle be no aspects of mentality that have no behavioural signature at all: according to behaviourism, dreams that we experience and then forget entirely about, or people who are completely paralysed but nevertheless have mental lives, or disembodied brains being tricked into illusory perceptions in some mad scientist’s laboratory, are all logically impossible. Furthermore, it means that—contrary to most people’s strong intuitions—our mental states do not cause or explain our behaviour (they cannot do so, since they are our behaviour). Thus, for example, any account of why we think or feel the way we do must necessarily leave the realm of psychology—it must be entirely an explanation in the realm of the non-mental. Further still, behaviourism is committed to denying that we have any kind of introspective access to our mental states—a kind of access to our own mental lives that we lack for the mental lives of others—simply because, for behaviourism, there is nothing mental to introspect. We do not discover mental states by ‘looking inside ourselves’: we discover them by observing the overt behaviour of ourselves and others. Thus behaviourism is faced with the difficult task of explaining why we think we have privileged access to our own mental states, or why we assume that pain, for example, is an inner mental state that feels a certain way to us (and to us alone—you cannot literally feel my pain).

Second, even in its own terms, it seems that behaviourism is doomed to fail. The central claim of behaviourism is that every aspect of mentality is no more and no less than a bundle of behaviour—but on examination it appears to be impossible to characterise even the simplest mental states in purely behavioural terms. Consider again the example of thirst, given above. Even the simplistic dispositional characterisation of that mental characteristic that we laid out there makes essential reference to other mental states: the behaviour that will characterise my thirst depends upon whether I am visually aware of the glass of liquid in front of me, whether I fear the liquid is poisonous, whether I desire to conceal my thirst from you, and so (more or less indefinitely) on. Furthermore, each of these mental states in turn can only be specified in ways that include reference to further mental states; the same is true of those mental states in their turn; and so it goes on. There seems to be no prospect at all, then, of reducing any mental state to pure behaviour; or, to put the same point in a different way, no prospect of translating ‘mental state’ talk into behavioural talk.

NATURALISTIC MENTAL REALISM—IDENTITY THEORY

We have so far considered two theories of the mind: the first identifies minds with non-physical entities; the second denies that minds belong to the category ‘entity’ at all, and urges that they are instead patterns of behaviour. Neither theory, we have discovered, is satisfactory. The next obvious place to look is at the brain. After all, our examination of behaviourism has emphasised that mental states are, pre-theoretically, the inner causes of our behaviour (rather than that behaviour itself), while our consideration of dualism raised the worry that non-physical minds could not plausibly play the role of those causes. Where better to look for these inner, causal events than inside the skull?

This theory of the mind is called the mind-brain identity theory. Its central claim is quite simple: mental states are brain states. For example, it might be that the mental state of pain is exactly the same thing as a certain kind of activation of a type of neuron called a pyramidal cell. The main thing to notice about this theory is that it goes beyond merely saying that mental states are correlated with or caused by brain states—the claim that mind and brain are correlated is not at all controversial, and indeed could easily have been accepted by classic substance dualists like Descartes. Identity theory goes further and says that the mind is the same thing as the brain. “Pain” and “pyramidal cell activity (pca),” for example, are according to identity theory simply two names for one and the same thing (just as Bob Dylan and Robert Zimmerman are two names for one and the same person, or water and H2O two ways of referring to the same substance). It follows from this that, since “pain” and “pca” pick out exactly the same thing, all the properties of pain must also be properties of pyramidal cell activity and vice versa. So, for example, it is a consequence of identity theory that pain states have a certain location, volume, mass and electro-chemical composition, while pyramidal cell activity generally feels unpleasant and can be sharp or dull, unbearable or merely nagging.

THE ADVERBIAL THEORY OF MENTAL STATES

One initial reaction to identity theory, reminiscent of Descartes’ first argument for dualism, is to object that it is simply too implausible to believe that mental states are nothing more than neural states. Imagine looking carefully at a ripe lemon in a well-lit room. That conscious visual perception of the lemon, it seems, has certain properties: among other things, it is bright yellow and lemon-shaped, and it is apparently radically private and unshareable (I could have my own very similar experience of the lemon but it seems to make no sense to say I could literally have your visual experience). Yet nothing in your brain has those attributes—none of your brain states are yellow and lemon-shaped, none of them are inherently private—so nothing in your brain could be identical to this perceptual state. Conversely, visual images of lemons seem to lack properties that neural states possess: the perceived lemon is in front of me, not inside my skull, and furthermore my lemon perceptions have no chemical composition, temperature, or other attributes of my brain states.

Defenders of the mind-brain identity theory—such as U.T. Place (1924–2000) and J.J.C. Smart (1920– )—were well aware of this apparent problem, but unfazed by it. Their response was essentially to deny that mental objects exist: according to identity theory, there is no such thing as ‘a perception of a lemon,’ and hence nothing which is yellow, private and so on … nothing with properties that differ from those of brains. But, unlike behaviourism, identity theory does not deny the inner reality of mental states. Instead, according to identity theory, we should learn to think of our mental life as made up of processes rather than objects.

Suppose I have a limp. The sentence “Andrew has a limp” looks grammatically similar to sentences like “Andrew has a computer” or “Andrew has a grey shirt.” But this similarity is misleading: to say that Andrew has a computer is to describe a relationship between two things, me and my computer. But if we say that Andrew has a limp, this does not describe a relationship between me and some other object, a limp. Limps are not things; they are ways of walking. Similarly, according to many identity theorists, mental states are not things, they are processes. When I say “I have a headache” what this really means is that my head aches, not that there is some entity—a headache—to which I am related in some way. When I see a ripe lemon, I perceive it—I do not literally have a perception of the lemon.

To say a limp has a certain property—for example, being unusual—is not to pick out a limp and call it unusual; it is to describe a way of limping. In the same way, according to this account of mental states, to have a nagging headache is to have a head that aches naggingly and to see a yellow lemon is to perceive a lemon yellowly. That is, it is to characterise a process rather than a thing. For this reason, this is often called the adverbial theory of mental states.

Finally, one might reasonably ask, what is it to perceive yellowly? According to identity theorists such as J.J.C. Smart, for a sensing to be a yellow sensing is just for “something to be going on” that is like what happens when we perceive something yellow. That is, the proper characterisation of mental events is what Smart calls topic neutral: we simply refer to inner processes that are like or unlike each other (e.g. one is like what happens when we see lemons, another is like what happens when we touch a candle flame, and so on) but we do not commit ourselves as to the intrinsic basis of these similarities. The way is thus left open for neuroscience to determine the actual characteristic properties of these mental processes.

THE REACTION AGAINST REDUCTIVE MATERIALISM

Despite all of this, it is hard to escape the lingering worry that mind-brain identity theory misses something crucial about our mental lives. When we see a ripe lemon or feel a sharp pain, it seems as if we are intimately acquainted with some mental event that has an intrinsic, qualitative nature. The yellowness of lemon perceptions, the painfulness of pain sensations—these are genuine features of mental states, and it is hard to see how any amount of information about neurons, neurotransmitters and oscillation frequencies could possibly account for them. Identity theory, that is, seems unable to account for the phenomenal aspect of consciousness.

This is a difficult issue, to which we will return towards the end of the chapter. As it happens, though, it was not this problem which ultimately caused the greatest philosophical dissatisfaction with identity theory; the argument that most philosophers came to feel was a decisive refutation of the theory was an appeal to what is called multiple realizability.

MULTIPLE REALIZABILITY

The mind-brain identity theory, recall, is interesting because it does not merely say that mental events and neural events are correlated—it commits itself to the claim that they are one and the same thing. But if ‘two things’ are really one and the same thing—more precisely, if two different names or definite descriptions in fact pick out the same referent—then it must be quite impossible for ‘one’ of those things to occur when ‘the other’ does not. Furthermore, this is not just a physical impossibility—it is a logical impossibility. There is simply no way that the following claims could all be true at once: a=b, a is the case, b is not the case.

Another important thing to notice at this stage is that mind-brain identity theory is a claim about types of mental and neural states, rather than particular or token states. For example, it says that pain—whenever and wherever it occurs—is pyramidal cell activity, rather than merely that this particular pain I am having right now is pca. (Similarly, it is true that this glass of water is H2O and also true that it is located in my office, but only the first claim is true of water as a type—and hence is scientifically interesting—while the latter is true only of this water token.) For this reason, another commonly used name for mind-brain identity theory is type identity theory.

Given all this, the argument from multiple realizability is quite simple. It consists in pointing out that, for mental and neural types, sometimes a and not-b, and therefore it is not true that a=b. The central idea behind this argument is that one and the same mental state type can be realized by more than one type of physical configuration.

For example, it is now a well-established fact that the human brain is highly plastic: that is, neural operations that in one person’s brain are accomplished in a particular region or system of the brain may be carried out in quite another place in someone else’s. A fairly extreme example of this is a condition called hydrocephalus, which is an abnormal accumulation of cerebrospinal fluid within cavities (called ventricles) inside the brain. In most instances, this is a congenital, life-long condition, and hydrocephalics typically have a normal lifespan and intelligence, and close-to-normal mental functioning. However, in some cases of hydrocephalus, the size, shape and structures of the brain are radically re-arranged by the pressure of the fluid inside.

This is just one example of an actual case where, plausibly, some mental state is realized by a neural state that is different from the normal case. Suppose that in most people the visual image of a lemon is realized by neural state P; but suppose that Bob, who happens to be hydrocephalic, has a lemon image that is psychologically just like yours or mine but is realized by a different neural state Q. This means that the mental state in question—seeing a lemon—cannot be the same thing as neural state P, simply because sometimes lemon-seeing occurs in the absence of P (just as being US President can’t be the same thing as being from Texas, since sometimes US presidents are not from Texas—and indeed, many Texans are not US presidents). Thus, this mental state cannot be identified with the brain state P, or with Q either for the same reasons, or with any other neural state—and there is no reason why this result could not be generalized to any other mental state as well. Hence, the mind-brain identity theory is refuted.

In case this kind of example is not convincing enough, Hilary Putnam (1926–), an American philosopher who was one of the originators of the multiple realizability argument, also suggested two other main kinds of problem case that the identity theory would have to surmount in order to be successful.

Other species plausibly have mental states at least some of which are like ours (e.g. pains), and sometimes members of those species have nervous systems that are not only structurally quite different from human brains but also even electrochemically different. Octopi may feel pain and see colours, for example, but they are members of the phylum Mollusca (a group that includes snails, clams, mussels, squids and so on) and their nervous system is importantly different from that of mammals—in particular, their nerve cells differ from some mammalian nerve cells in lacking a myelin sheath, and this changes their electrical conductance properties.

Furthermore, mind-brain identity theory, if it were true, would entail that no organism or system that lacks a brain could possibly have mental states like ours: e.g., the only things that can have a visual sensation of yellow, or feel anger, or believe that 2+2=4, are things with human-like brains. But this is implausible. It would seem to entail, for example, that computer-based artificial intelligence is by definition impossible, and that there could be recognisably sentient life on other planets only if carbon-based, neuronal nervous systems happened to have evolved on them.

Because of multiple realizability considerations, then, mind-brain identity theory has been widely (though not universally) abandoned, and with it the project of giving a neurophysiological reduction of the mind to the brain. That is, it is widely believed, mental properties are not neural (or other physical) properties—they cannot be identified with them, and thus must be different than them. What was not abandoned, however, was the methodological assumption that the mind is, in some sense, nothing more than a physical phenomenon: the fall of reductionism did not lead to a return to dualism.

Instead, the philosophy of mind moved in the direction of non-reductive materialism: the view that mental properties are not themselves physical properties but are nevertheless importantly dependent on them—that if we fix all the physical facts we thereby, and without any additional work, fix all the mental facts. This general metaphysical position has proven very stable over the past thirty years or so, and its main variant—a theory called functionalism—though certainly not without problems, is by far the most influential and widely accepted contemporary model of the mind.

NON-REDUCTIVE MATERIALISM—FUNCTIONALISM

Consider, again, the example given above of the difference between the way an average and a hydrocephalic brain might realize the visual sensation of yellow. In the former case it was realized by neural state P and in the latter by neural state Q; this led us to reject the view that seeing yellow might be identified with one or other of these states. It is natural, however, to go on to suppose that what characterises a mental state as being a sensation of yellow (as opposed to green, or pain) is something that P and Q have in common. We already know, from the set-up of the example, that they have no neural property in common—no specific commonality of brain composition or structure. But they may have some more abstract, or higher-level, features in common; in particular, it seems reasonable to suppose that both P and Q play the same kind of role in their respective brains.

For example, both will typically be caused by the presence of yellow things in the visual field, both will be related to other colour sensations in similar ways (e.g. both regular folk and hydrocephalics agree that yellow is a “brighter” colour than dark blue), both will typically bring about changes in colour-related cognitive states (e.g. causing the belief that I see a yellow thing), and both will be connected in similar ways to the controllers of various sorts of colour-influenced behaviour (e.g. choosing to buy a sports car of just that colour).

To turn this sensible-seeming idea into a theory, we can go one step further. We can say that having a sensation of yellow just is to be in a state that plays that role. That is, we can identify yellow sensations with a certain, rather abstract pattern of role-playing—we can commit ourselves to the view that anything at all that plays that role is a sensation of yellow (including, among other things, neural states P and Q), and that whenever there is a sensation of yellow there is a state with the same role.
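A toy illustration may help here (a minimal sketch in Python, with all class and function names invented for the purpose; a real perceptual role would of course involve vastly more inputs and outputs). Two ‘realizers’ with quite different internal workings both count as yellow-sensings, because each exhibits the same input-output profile:

    # Two physically different realizers of the same functional role.
    class NeuralStateP:                    # the typical human realizer
        def respond(self, wavelength_nm):
            return "yellow" if 570 <= wavelength_nm <= 590 else None

    class NeuralStateQ:                    # a differently wired (e.g. hydrocephalic) realizer
        def respond(self, wavelength_nm):
            # A different internal test, but an identical input-output profile.
            return "yellow" if abs(wavelength_nm - 580) <= 10 else None

    def plays_yellow_role(system, samples=(560, 575, 580, 590, 600)):
        """A state counts as a sensation of yellow just in case it reports
        'yellow' for yellow light (roughly 570-590 nm) and only for yellow
        light, whatever the state happens to be made of."""
        return all((system.respond(nm) == "yellow") == (570 <= nm <= 590)
                   for nm in samples)

    print(plays_yellow_role(NeuralStateP()))   # True
    print(plays_yellow_role(NeuralStateQ()))   # True: same role, different realizer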

This theory—or rather, family of theories—is called functionalism, for the following reason. The informal notion of a role can be usefully fleshed out as a certain kind of input-output relation; for example, sensations of yellow are characterised in terms of certain kinds of inputs (typically and roughly, light reflected from the surface of yellow things onto the retina), and certain kinds of outputs (such as causing colour beliefs and colour-related behaviours). A general name for an input-output relation is a function, as for example a mathematical function (such as x²) which takes numbers as inputs and produces others as outputs.

MACHINE FUNCTIONALISM

The input-output relation that, according to classical functionalism, characterises mental states has the following general structure. A typical input will be some sort of sensory stimulation, but the effects that that input will have will depend upon the existing internal state of the system. For example, the actions I might perform when ‘yellow’ light is flashed on my retina will depend on whether I am paying attention or day-dreaming, panning for gold or bird-watching, normally sighted or colour blind, and so on. The combination of a particular type of input with a given internal state results in some specific, more or less complex, output. This output can often usefully be thought of as involving two different elements—a change to the internal state of the system (such as coming to have new beliefs or desires), and overt behaviours (such as beginning a careful scan of the creek bed).

Mental states can thus be represented abstractly, using the following sort of table:

              State S1    State S2    State S3
    Input I1  B1, S3      B1, S1      B2, S2
    Input I2  B2, S1      B1, S3      B3, S1
    Input I3  B3, S1      B3, S2      B2, S3

(Each cell gives first the behaviour produced and then the new internal state.)

For example, whenever the system receives input I2 and is in state S3 it produces behaviour B3 and goes into internal state S1; if it then immediately undergoes I2 again, it will produce behaviour B2 (as it is now in internal state S1) but it will not change state.
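To make this concrete, here is a minimal sketch in Python (the state, input and behaviour labels are just those of the toy table above; nothing here is specific to any real psychological theory). The machine table becomes a simple lookup from (current state, input) pairs to (behaviour, next state) pairs:

    # The machine table above, transcribed as a lookup:
    # (current state, input) -> (behaviour produced, next internal state)
    TABLE = {
        ("S1", "I1"): ("B1", "S3"), ("S2", "I1"): ("B1", "S1"), ("S3", "I1"): ("B2", "S2"),
        ("S1", "I2"): ("B2", "S1"), ("S2", "I2"): ("B1", "S3"), ("S3", "I2"): ("B3", "S1"),
        ("S1", "I3"): ("B3", "S1"), ("S2", "I3"): ("B3", "S2"), ("S3", "I3"): ("B2", "S3"),
    }

    def run(state, inputs):
        """Feed a sequence of inputs to the system, reporting each behaviour."""
        for i in inputs:
            behaviour, state = TABLE[(state, i)]
            print(f"input {i}: behaviour {behaviour}, new state {state}")

    # Reproduces the worked example in the text: starting in S3, input I2
    # yields behaviour B3 and state S1; a second I2 yields B2, state unchanged.
    run("S3", ["I2", "I2"])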

Diagrams of this sort are often called machine tables. This is because, properly configured, they can take any (computable) function and break it down into a set of very simple instructions that could be carried out by an abstract device called a Turing machine (after Alan Turing (1912–1954), their inventor). A Turing machine simply consists of an infinite one-dimensional tape divided into cells on which inputs and outputs—usually 1s and 0s—are recorded by a read-write device that travels along the tape reading one cell at a time. The action of a Turing machine is completely determined by a table of transition rules—the “program” for the machine—that instructs the read-write head to behave in particular ways depending on (1) the current state of the machine, and (2) the symbol in the cell currently being scanned by the head.
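For concreteness, here is a minimal sketch of such a machine in Python (the example program, which appends a 1 to a block of 1s, i.e. unary increment, is invented purely for illustration):

    # The tape is a dict from cell index to symbol; unwritten cells read as the blank "0".
    def run_turing(program, tape, state="start", head=0):
        while state != "halt":
            symbol = tape.get(head, "0")
            write, move, state = program[(state, symbol)]  # look up the transition rule
            tape[head] = write                             # write a symbol ...
            head += 1 if move == "R" else -1               # ... and move the head
        return tape

    # Transition rules: scan right over the 1s; at the first blank, write a 1 and halt.
    increment = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "0"): ("1", "R", "halt"),
    }

    tape = {0: "1", 1: "1", 2: "1"}  # the number three, in unary
    print(sorted(run_turing(increment, tape).items()))
    # [(0, '1'), (1, '1'), (2, '1'), (3, '1')] -- i.e. four, in unary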

Turing intended this abstract formulation of a mechanical device to capture—to define—the notion of computation; that is, if Turing is right, any function whatsoever that can be computed at all can be computed by a Turing machine (and of course vice versa). Although this result—called the Church-Turing thesis—cannot be formally proven, it is generally accepted as true. So now we are in a position to draw the following connections: according to functionalism, all mental states are essentially functions. All functions (or at least all computable functions) can be understood as machine tables—as instruction sets for Turing machines. And machine tables can be thought of as computer programs: indeed Turing’s work on the mathematics of computation in the 1930s was one of the main building blocks that led to the advent of the electronic computer later in the century.

This suggests an intriguing and hugely influential idea: that the mind is best thought of as software, running on the brain which in turn is to be treated as a kind of computer—an information-processing machine.

FUNCTIONALISM COMPARED

A good way to get a sense of functionalism is to compare it to the other theories we have considered so far. In some ways functionalism is a descendant of behaviourism, in that it treats mental states as being nothing more than complex dispositions to behave in certain ways. However functionalism thinks of mental states as the inner causes of that behaviour, rather than the pattern of behaviour itself. To put it another way, functionalism, like behaviourism, characterises mental states as input-output relations, but for functionalism these input-output devices are inside the head and not only link environmental stimuli and overt behaviour but also connect mental states to each other.

Functionalism differs from mind-brain identity theory, as we have seen, in endorsing the multiple realizability of mental states and thus denying that they can be type-identified with any particular physical realization. It is important to notice that functional properties—and thus, according to functionalism, mental properties—are not physical properties. They are higher-level, more abstract properties than the physical. For the same reason, of course, functional properties are not what we might call soul properties—they are not properties of non-physical mind-stuff, and so functionalism is also incompatible with substance dualism as a theory of the nature of mind.

Nevertheless, perhaps surprisingly at first sight, functionalism is consistent with both materialism—the view that everything in the universe is physical—and the dualistic view that mind and matter are different substances. This is because functional states are realizable by any system capable of running the relevant function, and since this means anything able to replicate a Turing machine, it includes not just organic brains but also such things as silicon computers, conglomerations of individuals ‘acting out’ a computer program, or non-physical systems (such as the minds of ghosts or angels, or just Turing machines made of ‘ectoplasm’ instead of matter). Almost all contemporary functionalists are in fact physicalists, but this is a theoretical commitment additional to functionalism: it is the claim that as a matter of fact all functional processes in the actual world are carried out by physical systems.

VARIETIES OF FUNCTIONALISM

The idea that mental states are computations was central to the original introduction of functionalism, and it is still widely influential. On this view, the mind is not just like a computer program—it literally is a computer program, of just the same sort as (though vastly more complex than) the software that runs on personal computers. Thought is nothing more nor less than the manipulation of internal symbolic representations in accordance with pre-programmed (hard-wired or learned) algorithms. This paradigm for understanding the mind—often called the Computational-Representational Theory of Mind—is the foundation for the successful interdisciplinary field of cognitive science. It is also the basis for what has become known as the strong artificial intelligence project: if thought is computation then it ought to be possible to construct computer programs that do not merely simulate mental processes but actually are thinking (or seeing or feeling or hoping). Ultimately, it should be in principle possible to construct human-like minds realized by—‘running on’—non-biological hardware.

However, not all versions of functionalism remain so closely linked to the idea of computation. Another way to elaborate on the notion of function is not to introduce machine tables but to treat mental states as unobservable theoretical entities that need to be postulated by psychological theories in order to explain the observed connections between environmental stimuli and behaviour. On this understanding, functional specifications of mental states arise from the discovery that the most adequate empirical theory of human behaviour needs to make reference to internal functional states, like beliefs, desires, memories and so on. Similarly, the empirical study of chemistry in the late eighteenth and nineteenth centuries led to the postulation of unobserved theoretical entities called atoms and molecules in order to explain the behaviours of substances.

The central functionalist idea that mental states are input-output relations is preserved, but this brand of functionalism characterises functions as causal roles rather than pieces of computer code. Mental events are conceived as nodes in the complex causal network that connects sensory inputs with behavioural outputs. This version of functionalism is sometimes called causal-theoretical functionalism.

If mental states are theoretical entities, determined to be the nodes of the hidden causal network that determines our behaviour, then the key question becomes: what is the correct psychological theory? Different theories will have different ontological commitments—that is to say, they will postulate the existence of different unseen entities. For example, Freudian psychology includes a commitment to such entities as drives, egos, ids, dreams, hysteric impulses, and so on. Commonsense or so-called ‘folk’ psychology endorses some but not all of the Freudian categories, including beliefs, desires, emotions, dreams, etc. Some modern psychological theories not only introduce new categories of mental entities—such as working memory or schemata—but re-cast the causal role of familiar ‘folk’ concepts in light of the new theory.

It is clear from this that the nature of a causal-theoretical functionalist theory depends upon which psychological theory is taken to be most explanatory. There are two major schools of thought on this. Psycho-functionalism characterises mental states and processes as entities defined by their role in the best available scientific psychological theory. By contrast, analytic functionalism holds that analysis of everyday, ‘folk’ mental concepts—such as wanting a glass of water or having a fear of spiders—reveals that they characterise mental states by their causal roles. According to analytic functionalism this network of ‘folk psychological’ concepts makes up a pre-scientific theory about the mind, and (roughly) since this theory determines what we mean by, for example, ‘desire’ or ‘belief,’ it is this theory that should determine the proper content of functionalism.

Psycho-functionalism, at its most extreme, can lead to a doctrine called eliminative materialism. If functionalism is linked to the best scientific account of how brains cause behaviour and if, as some cognitive scientists think probable, our best scientific theory of the brain will do without—indeed, will be inconsistent with—such ‘folk psychological’ notions as belief or desire or visual image, then we can conclude that many of the mental states posited by common sense really do not exist. In their place, we will have spiking frequencies, spreading activations in connectionist networks, state-space vector codings, and other arcana.

Finally, another important distinction between species of functionalism is this: given that all varieties of functionalism characterise mental states as certain complex sorts of input-output relations, it is open to the functionalist to identify the mental state either with that input-output relation itself, or with the mechanism that realizes the input-output relation. Consider pain, for example. On the former view—sometimes called functional state identity theory—pain is a certain sort of functional role (a kind of abstract property); on the latter view, sometimes called functional specification theory, pain is whatever plays that role—for example, human pain may be a kind of pyramidal cell activity.

A helpful way to appreciate the difference between these two views is to think of the difference between second-order and first-order properties. Being red is an example of a first-order property: it is a property that some individual thing could have, such as a car or a rubber ball. Being coloured is a second-order property: it is a property that individuals could have only in virtue of having a first-order property—something could not be coloured without being red (or blue, or green, or …). First-order properties are properties of individuals; second-order properties are properties individuals have in virtue of the first-order properties they possess. According to functional state identity theory, pain (like other mental states) is a second-order property—a property we have in virtue of having some first-order property (such as undergoing pyramidal cell activity). According to functional specification theory, pain is itself a first-order property: indeed, in humans it turns out to be the very same property as pyramidal cell activity.
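
Readers familiar with programming may find the following analogy helpful (it is only an analogy, and the class names are invented for illustration): a functional role behaves like an abstract interface, while its realizers behave like the concrete classes that implement that interface.

    # Illustrative analogy only: the functional role as an abstract interface,
    # its realizers as concrete classes. All names are invented for the example.
    from abc import ABC, abstractmethod

    class PainRole(ABC):
        """The second-order specification: whatever plays this role counts as pain."""
        @abstractmethod
        def respond_to_damage(self) -> str: ...

    class PyramidalCellActivity(PainRole):
        """One first-order realizer of the role (the human case, per the example)."""
        def respond_to_damage(self) -> str:
            return "withdraw, complain, avoid the stimulus in future"

    class OctopusNeuralState(PainRole):
        """A different realizer of the same role (multiple realizability)."""
        def respond_to_damage(self) -> str:
            return "jet away, change colour, avoid the stimulus in future"

    # Functional state identity theory: pain = the role itself (PainRole).
    # Functional specification theory: human pain = whatever occupies the
    # role in humans (PyramidalCellActivity).
    print(PyramidalCellActivity().respond_to_damage())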

OBJECTIONS TO FUNCTIONALISM

For the final third of the twentieth century, functionalism was comfortably dominant as the best available theory of mental states. It is still very widely accepted, both by philosophers and by other cognitive scientists, but it now faces a range of quite sophisticated philosophical objections that have not yet been answered satisfactorily. Many of these objections can be fitted into one of two categories, which we shall consider in turn.

1. Chauvinism and Liberalism

First, there is the problem of threading a path between chauvinism and liberalism. As we have seen, functional roles are specified in terms of their characteristic inputs and outputs: for example, pain is the state typically caused by certain kinds of stimuli (sharp sticks, intense heat, electrical shocks and so on) and giving rise to certain kinds of internal and external behaviours (e.g. screaming and strongly desiring to avoid that stimulus in the future). It is tempting to classify these inputs and outputs as ‘painful stimuli’ and ‘pain behaviours’ and leave it at that: but that would be cheating. We are seeking to give a functional definition of pain, and it would be viciously circular to say that pain is the state caused by painful stimuli and giving rise to pain behaviours.

So we need to find some way of characterising the right kinds of inputs and outputs independently of their connection to pain. But this is very difficult. On one side of the coin, it is hard to avoid specifying inputs and outputs in a way that is chauvinistic: that is, specific to human beings and thus illicitly excluding other organisms that feel pain. For example, octopuses may feel pain but they do not scream or groan or verbally complain about it, so these behaviours cannot be part of what is definitive of pain. What is needed is some specification of aversive behaviours that is common, not only to all actual species that feel pain, but even to all possible (e.g. extraterrestrial) pain-feeling species. It is far from clear whether this is possible. Similar problems exist for specifying typical pain inputs: for example, bright light or salt on the skin might cause pain in some species but not others.

On the other side of the coin, functionalism must not specify the inputs and outputs for mental states in a way that is too liberal: that is, that includes systems that do not have the mental states in question. Consider, for example, the way we specified mental states as machine tables in our characterisation of machine functionalism. We noted there that this suggests that functional roles are equivalent to computer programs; there is much that is attractive about this idea, but one problem with it is that computer programs can, demonstrably, be realized by systems that seem to lack any glimmer of mentality. For example, any computer program at all—including, therefore, any computer program that is a specification of the functional role of a mental state—can be run by what is called a Universal Turing Machine. As we saw above, a Turing machine is fundamentally a very simple device: indeed one could, in principle, be constructed from a very long roll of toilet paper and a large number of small pebbles together with an appropriate ‘pebble placement’ device that follows the instructions on the machine table. Most people have the strong intuition that a system like this would not have human-like mental states; if this is right, then machine functionalism is too liberal a theory, and more psychologically specific functional characterisations must be sought.
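
To see how little machinery a machine table actually demands, here is a minimal Turing-machine interpreter in Python (a sketch with an invented toy table, not any particular psychological program). The table is nothing but a lookup from (state, symbol) to (new symbol, head movement, new state); any medium that can store the tape and follow the table (silicon, or toilet paper and pebbles) realizes the same program.

    # A minimal Turing machine interpreter. The machine table is just a
    # lookup table: (state, symbol) -> (new symbol, head move, new state).
    # Toy program: flip every 0/1 on the tape, then halt at the blank '_'.
    table = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),
    }

    def run(tape, state="flip", head=0):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            new_symbol, move, state = table[(state, symbol)]
            if head < len(tape):
                tape[head] = new_symbol
            head += move
        return "".join(tape)

    print(run("0110_"))  # prints "1001_"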

2. Qualia-Based Objections

The second category of problems with functionalism has to do with whether functionalism can capture a particular class of mental properties called qualia. Qualia are the aspects of mental states that give them a phenomenal feel—they are qualitative properties such as the painfulness of pain (the way pain feels rather than what it does), the yellowness of yellow sensations (the way yellow things look), the sad quality of a melancholy sensation, the offensiveness of the smell of rotten eggs, and so on. As it is often put, adopting a way of talking first popularised by Thomas Nagel (1937– ) in 1974, qualia are the aspects of mentality in virtue of which there is something it is like to be in a particular mental state.

Of course, describing qualia in this way—as the way certain mental states feel to their possessor, rather than as an aspect of the role of mental states—already biases the issue against functionalism. If qualia exist as intrinsic rather than relational mental properties, then functionalism—which treats all mental states as fundamentally relational—cannot be a complete theory of the mind. So one way of understanding the controversy is as a dispute over whether qualia, as I have characterised them, exist at all, or whether the qualitative aspect of consciousness can somehow be captured relationally.

Qualia-based arguments against functionalism proceed by granting the functionalist all the functional (i.e. relational) characteristics they wish, and then trying to show that systems specified in this way might have inverted or even completely absent qualia. To show this would be to show that functionalism leaves something out about the mental—indeed, according to these critics, something crucial: what it feels like to have a mind.

The inverted spectrum argument goes as follows. Imagine a pair of newborn identical twins, Sophia and Katerina. One of them, Sophia, has an operation performed on her optic nerve that switches the ‘red’ and ‘green’ messages from her retina to her brain: from this point on, it seems natural to say, things that look green to Katerina look red to Sophia, and vice versa (after all, when Sophia sees grass this causes in her the neural state that, in Katerina, is caused by seeing blood). Sophia and Katerina are raised together and both appear to be perfectly normal children: for example, they both use the word “red” to describe blood and “green” to describe grass, and they are both equally good at discriminating between different colours (such as different shades of red). So it seems that the twins, though physically slightly different, are functionally exactly the same when it comes to perceiving and thinking about colours.

In fact, we can say that the experienced colour spectrum has been inverted for Sophia, relative to Katerina: the space of colour hues has been flipped symmetrically around its axis so that all the relations between the colours stay exactly the same while the intrinsic natures of those hues have been reversed, so that red becomes green, blue becomes yellow, turquoise becomes orange, and so on.
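
The claim that inversion preserves every relation can itself be made precise. The following sketch is a simplification (real colour space is three-dimensional; here hue is modelled as a one-dimensional 360-degree circle, and the inversion is modelled as rotation by 180 degrees), but it checks that every pairwise ‘discrimination distance’ between hues survives the inversion, even though every individual hue value has changed.

    # Simplified model: hue as a point on a 360-degree circle; 'inversion'
    # as rotation by 180 degrees. Real colour space is richer, but the point
    # survives: every *relation* is preserved while every hue is changed.

    def invert(hue: int) -> int:
        return (hue + 180) % 360

    def distance(h1: int, h2: int) -> int:
        """Angular distance: a stand-in for discriminability between hues."""
        d = abs(h1 - h2) % 360
        return min(d, 360 - d)

    hues = {"red": 0, "yellow": 60, "green": 120, "blue": 240}
    for a in hues.values():
        for b in hues.values():
            assert distance(a, b) == distance(invert(a), invert(b))

    # Sophia and Katerina agree on every comparison ('these two hues are more
    # similar than those two'), so no behavioural test can tell them apart.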

The problem for functionalism is simply that a story like the one I just told seems perfectly coherent. It might be highly difficult to accomplish in practice, but it is not incoherent. Yet if functionalism were true, it ought to be incoherent: the mere fact that Sophia and Katerina are functionally identical would entail that every aspect of their mental life must be identical as well.

The absent qualia objection is similar in aim to the inverted spectrum argument, but instead of arguing that two functionally identical systems could have inverted qualia it tries to establish that, of two functionally identical systems, one might have qualia and the other entirely lack them. One argument of this sort, invented by Ned Block (1942– ), runs as follows. Take whatever is the best possible functional account of a human mind, broken down into all its simplest component input-output relations. Then gather together a large enough group of people—say, the population of China (over a billion people)—so that each person can be responsible for a single functional node. Finally, rig up some system of having all those people run through the appropriate input-output actions—perhaps connect them all together with walkie-talkies, or send visual code signals to them from a geosynchronous satellite. Such systems are often known as Blockheads, in honour of their originator. As long as we set things up right—and there’s no obvious reason in principle why we can’t—our Blockhead will be functionally identical to a person undergoing, say, a sensation of pink or a stabbing pain … but it seems extremely counterintuitive to suggest that the population of China, so organized, could have qualia. Hence, if this is right, functionalism has failed to capture the phenomenal aspect of our mental lives.

THE NORMATIVITY OF THE MENTAL

Though in one variant or another functionalism is by far the most popular version of non-reductive materialism, it is not the only one. The two kinds of objection to functionalism described above have tended to call into question non-reductive materialism itself. Liberalism-chauvinism objections can be seen as worries about the non-reductive part of non-reductive materialism. Qualia-based objections have been taken as an attack on the materialism part (and we will consider this more in the next section). A third class of objections to functionalism, however, has tended to generate alternative non-reductive theories: these objections generally have to do with what is called the normativity of thought.

In essence, the worry is this. By their very nature, functionalist theories of every stripe are causal theories—they characterise mental states by their causal roles in bringing about behaviour. However, some critics (such as Donald Davidson (1917–2003) or John McDowell (1942– )) assert that mental states need instead to be understood in terms of their role in justifying behaviour. The difference between a simple machine and a human being is that the machine’s behaviour can be adequately explained by talking about how it was caused; for a human, we need to explain the reasons for their behaviour—we need to give a rational rather than a causal explanation. The fact that, when I believe it is going to rain and I don’t want to get wet, I put on a rain jacket has to do essentially with the contents of my belief and desire, and the rationality of putting on a jacket to protect myself from the rain (given all the other things I believe). A psychological law that explains my behaviour must link my contentful mental states to my behaviour in a way that conforms to the prescriptive norms or ideals of practical and theoretical reasoning.

These “constitutive” normative relations among mental states, according to some philosophers, simply will not correspond to causal relations between our sensory stimulations, inner neural states, and overt behaviours. The sources of evidence and standards of correctness for rationalising explanations are different from those for empirical theories. So the normative level of the mental must differ from, and be irreducible to, the causal level of neurophysiology: we can give causal explanations for our behaviour by talking about the brain, but rational explanations only by talking about the mind.

DENNETT’S INTERPRETATIONISM

We can use a distinction laid out by Daniel Dennett (1942– ) to make this a little clearer, and to give an example of one type of non-reductive materialism motivated in part by these considerations—a view often called interpretationism.

In considering any sufficiently complex system, we can look at it in at least two different ways. First of all, we can ask ourselves how it functions—how its internal parts interact in such a way that they produce certain regular input-output patterns. This Dennett calls the design stance. Alternatively, we can look at the same system and treat it as if it were a rational agent with beliefs and desires: we can explain and predict its behaviour by assuming that in general it will do what it ought to do in a particular situation—what is sensible given its goals and perspective. This perspective is called the intentional stance.

For example, in interacting with a chess program, I might find it useful to have some knowledge of the structure of its programming, so that I can explain why it makes a particular move in a particular context. But once the chess program reaches a certain level of complexity, it will be easier for me to treat it as if it wants to win the game, knows the rules of chess and certain strategies, has certain beliefs about my own strategy in the game, and so on.

What’s important about this is that both perspectives—both stances—are applicable to many complex systems, including human beings. Both will provide a certain kind of explanation, and we can shift from one to the other depending on our explanatory interests. But if we adopt, in particular, the intentional stance, we do not need to be strongly realist about it: we do not need to insist that the chess program really does have internal beliefs and desires which interact to cause its behaviour (and, in the case of the chess program, it is obvious that it has no such internal states or mechanisms). All that needs to be the case is that the chess program can be truly described as if it had beliefs and desires—and this certainly is the case.
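
As a toy illustration of the two stances (the ‘engine’ and its lookup table are invented for the example; a real chess program is vastly more complex), consider predicting the same simple system in both ways:

    # Invented toy example: predicting one simple 'game engine' in two ways.

    class ToyEngine:
        # The actual mechanism is nothing but a lookup table.
        table = {"threatened": "retreat", "winning": "attack", "even": "develop"}
        def move(self, situation: str) -> str:
            return self.table[situation]

    engine = ToyEngine()

    def design_stance(situation: str) -> str:
        """Predict by inspecting the mechanism itself."""
        return engine.table[situation]

    def intentional_stance(situation: str) -> str:
        """Predict by ignoring the mechanism and asking what a rational agent
        that *wants* to win and *knows* the game ought to do."""
        what_a_rational_player_would_do = {
            "threatened": "retreat", "winning": "attack", "even": "develop"}
        return what_a_rational_player_would_do[situation]

    # For a well-designed system the two predictions coincide, which is why
    # the intentional stance is predictively useful without requiring 'strong
    # realism' about inner beliefs and desires.
    assert all(design_stance(s) == intentional_stance(s) for s in ToyEngine.table)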

When we apply the intentional stance to our fellow human beings, we get the same result: we can truly describe them as exhibiting patterns of behaviour that are those of rational agents whose behaviour is governed by internal beliefs and desires, and we can make correct predictions and explanations of their behaviour by using these patterns. As Dennett puts it, they are real patterns, no less real than the physical laws describing centres of gravity or the economic laws of markets. But, when we descend to the design level (or even further, to the physical level), we should not expect to see the categories and regularities of the intentional stance. Interpretationism thus attempts to balance realism about the mind with the claim that the categories of the mind are not reducible to those of the brain.

THE PROBLEM OF CONSCIOUSNESS AND THE RESURGENCE OF PROPERTY DUALISM

A recurring theme of our discussion has been the problem caused for materialist theories of the mind, both reductive (identity theory) and non-reductive (functionalist), by the intrinsic, qualitative nature of some mental states. In our discussion of functionalism we labelled these aspects of the mind qualia: properties like the painfulness of pain, the taste of strawberry, the vivid redness of the sight of a red balloon, the feeling of falling.

Mental states that have qualia, it is generally agreed, are states that can only be conscious states: that is, states that you have when you are awake or dreaming (as opposed to in a dreamless sleep) and that you are aware you are presently undergoing (as opposed to, say, the things you believe but are not currently thinking about, or perhaps ‘suppressed’ mental states such as a Freudian desire to kill your father). Many philosophers also hold that all conscious mental states are states that have qualia—indeed, that are conscious in virtue of their qualia. Consciousness in this sense is often called phenomenal consciousness (to distinguish it from other senses of the word “consciousness”); and another way of saying that qualia are a problem for materialism is to refer to the problem of consciousness.

The problem of consciousness is the problem of explaining how phenomenally conscious mental states can be fitted into the material world—what is sometimes called the problem of naturalising consciousness. Consider the problem of explaining the phenomenon of transparency. This is really just the problem of explaining how some materials allow light to pass through them while others do not; and this high-level, emergent property of some materials can be exhaustively accounted for by describing their atomic structure and how this structure interacts with photons. Furthermore, once we fully understand the atomic properties of these materials, it is no longer at all surprising that they are transparent—we would expect them to be transparent, and can correctly predict when a new, unseen material is transparent solely on the basis of knowing its micro-physical properties.

Although no one expects phenomenal consciousness to be naturalized in quite so straightforward a manner, the fundamental problem is the same: what we would hope for in a naturalistic theory of consciousness would be a story about the lower-level physical properties of the brain that, once understood, would allow us to understand how certain brain states have certain phenomenal properties and correctly predict novel conscious experiences from a knowledge of the brain states of their subjects. So for example, we should be able to understand why pyramidal cell activity feels ‘from the inside’ like pain and should be able to appreciate the similarities and differences between, say, how human and octopus pain feel, through a knowledge of the relevant neural states.

Notice, by the way, that the demand for a naturalistic account of consciousness is not the same as the demand for a reductive account. Being a mousetrap is not the same thing as—is not reducible to—some particular configuration of metal and wood; nevertheless, unless we can in principle understand how this particular mechanism functions physically to catch mice, we shall have to concede that it is a magical mousetrap working in a mysterious way. Similarly, the problem is not to show that consciousness is identical with brain states but to show how brain states could possibly be conscious.

The problem of naturalising consciousness is so difficult that some philosophers—notably David Chalmers (1966– )—have been led to argue that it is insoluble; and that it is insoluble because phenomenal consciousness, unlike transparency and indeed unlike every other non-abstract property of which we are aware, is not a physical phenomenon. Materialism—the doctrine that everything that exists is physical—is therefore false, on this view. However, this does not mark a return to the substance dualism of Descartes: most modern philosophers are sceptical of the view that there are any non-physical substances. Instead, modern dualism is typically property dualism: the view that not all the properties of physical things are themselves physical properties.

There are various arguments that purport to show that phenomenal consciousness cannot be part of the physical world. Two of the most important are the zombie argument, and the knowledge argument.

ZOMBIES

The zombie argument is rather similar to the absent qualia argument described in the section on functionalism, but is directed against materialism in general rather than the specific materialist theory of functionalism. Zombies, in philosophical rather than horror literature, are defined as organisms that are physically exactly like normal human beings—neuron for neuron, even atom for atom—but that completely lack any consciousness.

Zombies, therefore, behave precisely like regular people, since behaviour is a physical phenomenon—the movement of various body parts through space, for example. So if a zombie is poked with a sharp stick they will recoil, scream sharply, and perhaps become violent; zombies will profess to love their zombie spouses, and will eagerly report their opinion of movies they have seen; zombies will evince firm preferences for, say, chocolate over broccoli. But, by definition, no zombie ever actually feels pain or indignation, undergoes the emotions of love or excitement, or genuinely tastes the food they eat: they merely behave as if they do, and they behave in this way because their brains—and hence their behaviours—are exactly like those of non-zombies.

Zombies, of course, are fictional; no philosopher believes that zombies actually exist. The question is, though, whether they could have existed—whether zombies might in principle exist in a possible world, different from but rather like the actual one. If zombies are possible in this way, then consciousness is something over and above the physical world—we could remove all the consciousness in the world while leaving the physical exactly the same. That is, the mere possibility of zombies seems to show that consciousness is non-physical, hence that materialism is false, and hence that something like property dualism is true.
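
One standard way of setting out the structure of the argument explicitly is this (the regimentation, though not the argument, is a reconstruction):

i. If materialism is true, then any possible world that is physically exactly like the actual world also contains exactly the same consciousness as the actual world.
ii. A zombie world (a possible world physically exactly like the actual world but containing no consciousness at all) is possible.
iii. Therefore, materialism is false.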

To make this claim somewhat clearer, try to imagine making the case for zombie versions of various other higher-level properties: for example, could there be objects that are tranzparent—objects that have all the micro-physical properties of transparent objects but that are opaque to light? The answer is surely no. It is incoherent—not merely vastly unlikely or surprising but literally internally inconsistent—to suppose that a window has all the same atomic properties that it does, and photons are the same as they are in the actual world, and the laws of physics the same also, and yet that photons will not pass through the glass. So tranzparent—‘zombie transparent’—objects are logically impossible: in any possible world where (regular, clear) glass is not transparent there must be some physical difference, either to the glass itself or to the laws of physics that are true in that world. To put the same point slightly differently, if tranzparency were a possibility then the physical facts in the actual world would not be enough to explain why objects are transparent. It would mean those exact same physical facts could have held and yet windowpanes not be transparent, so there must be some further reason why windows are transparent in the actual world. So to say that there could be no such thing as tranzparency is also to say that physics completely explains transparency.

If zombies are possible, then, this has major philosophical significance. And the zombie argument is a powerful one insofar as zombies surely do seem to be possible—there is nothing obviously internally incoherent in the description I gave above. The debate turns on whether zombies really are possible, in the appropriate way, or whether they merely seem possible to us because of our various cognitive limitations (such as that we do not yet know enough about the brain). The bibliography at the end of this chapter will give you some starting points if you wish to explore this issue further.

THE KNOWLEDGE ARGUMENT

A second argument that phenomenal consciousness is not physical is usually called the knowledge argument; its main populariser was the Australian philosopher Frank Jackson (1943– ). It comes in several flavours, but the most influential one goes as follows. Imagine a scientist, usually called Mary, who is the most brilliant colour scientist in the history of the world: she literally knows everything physical that there is to know about colour, including all the relevant facts about the surfaces of objects and the interiors of transparent volumes, about light, and about the processing of colour information in the human visual system. However, Mary has spent her entire life in a totally colourless environment: since birth she has been locked in a black-and-white room and kept from being exposed to any colours, learning the science of colour from black-and-white textbooks and computer monitors. As with zombies, the point of this thought experiment is not that it is a likely or even a practically possible scenario: all that matters is that it is logically possible—that it is not an incoherent supposition.

The second stage of this argument is to ask what happens when Mary sees colour for the first time. Suppose one of her captors relents and, deciding to apologise for the appalling way she has been treated, brings her a bouquet of red roses. The most natural intuition about Mary’s reaction to seeing red for the first time is that she will be astonished by the experience—that she will say things like, “I already knew everything there was to know about the physics of redness, but I had no idea it looked like this!”

If this is right—if Mary would react with surprise—then materialism looks to be in trouble. The argument goes like this:

i. Mary knows all the physical facts about colour.
ii. When Mary sees colour for the first time, she learns something new.
iii. So there are facts about our conscious experience of colour that are not physical facts—call them, instead, phenomenal facts.
iv. Since the phenomenal facts outstrip the physical ones, consciousness is over and above the physical, and materialism is false.

There have been many attempts to refute the knowledge argument, and they tend to break down into two camps. Some philosophers have argued that Mary would not learn anything new when she sees colour for the first time—that she already knew what it was like to see red even though she had never herself seen it. (Analogously, an adequate account of transparency should tell us what objects are transparent even if we have never seen them.) Others have argued that Mary will learn something new but that what she learns is not a new fact—that she already knew all the facts about colour experience and has merely gained (something like) a new way of describing those facts. As with the zombie argument, this is a lively, on-going debate and the suggestions for further reading at the end of the chapter will help you get started with it.

QUESTIONS

1) According to classical substance dualism, no creature without a soul—and so, on this view, no non-human animal—undergoes any mental states at all (no pain, no hunger, no colour sensations, and so on); whereas all creatures that do have souls—such as humans—have rich sensory and intellectual mental lives. Does it seem plausible to you that consciousness has this absolute ‘on-off’ quality? Can this consequence be avoided by dualists, and if not does it cause problems for the theory—for example, problems in fitting the mind into an evolutionary story?

2) A favourite example of analytic behaviourists such as Gilbert Ryle was intelligence. Being intelligent is a mental property, but it seems odd to think of it as an internal state of our minds: instead, the behaviourists urged, it is more sensible to analyse intelligence as a set of ways of behaving—being good at chess and math, speaking several languages, doing crossword puzzles quickly, and so on. So behaviourism seems to work well as a model for intelligence. What do you think of this example? Does it show that we were too hasty in our rejection of analytic behaviourism?

3) An apparently decisive refutation of mind-brain identity theory, as we saw above, was the appeal to the phenomenon of multiple realizability. But there are other, apparently successful, cases of scientific reduction where multiple realizability seems just as prominent. For example, a textbook example of scientific reduction is the reduction of the notion of temperature in classical thermodynamics to a lower-level property. However, the relevant property differs for different types of substance: in gases temperature is mean molecular kinetic energy; in solids it is mean maximal molecular kinetic energy (as the molecules cannot move); in a plasma it is something else entirely, since the molecular constituents of a plasma have been ripped apart; and even a vacuum can have a (blackbody) temperature despite having no molecular constituents. What implications might this have for mind-brain identity theory? Why should we say that temperature is a physical property and deny that, say, pain is?

4) How plausible do you find the absent and inverted qualia objections against functionalism? If you think they succeed against functionalism, then do they work just as well against materialism (and if not, why not)?

5) Do you agree that rational and causal explanations of human behaviour are in tension? What exactly is the nature of this tension, and what significance might it have?

FURTHER READING

The main source for Descartes’ dualism is his Meditations on First Philosophy, first published in Latin in 1641 and now available in several good English translations, such as that by John Cottingham published by Cambridge University Press in 1996. A modern defence of substance dualism is John Foster’s The Immaterial Self (Routledge, 1991). Gilbert Ryle’s attack on dualism and defence of behaviourism are in his The Concept of Mind, first published in 1949 by Hutchinson. A seminal article on mind-brain identity theory is J.J.C. Smart’s “Sensations and Brain Processes” (The Philosophical Review 68, 1959), and the theory is developed in David Armstrong’s A Materialist Theory of the Mind, first published by Routledge in 1968. Jerome Shaffer attacked Smart’s article in “Mental Events and the Brain,” Journal of Philosophy 60 (1963). The multiple realizability argument is used to knock down identity theory and set up machine functionalism in Hilary Putnam’s “The Nature of Mental States” (1967), which is widely anthologised in, for example, Readings in Philosophy of Psychology, Volume 1, edited by Ned Block (Harvard University Press, 1980)—this Block volume is an excellent starting point for the original literature on functionalism. Dennett’s book The Intentional Stance (MIT Press, 1987) describes his form of non-reductive materialism, and another highly influential version of this sort of theory is Donald Davidson’s “anomalous monism,” expressed in the 1970 article “Mental Events,” which is reprinted in his Essays on Actions and Events (Oxford University Press, 2001). Finally, an excellent starting point for the investigation of property dualism is David Chalmers’ The Conscious Mind (Oxford University Press, 1996). The Journal of Consciousness Studies has been the focus for much of the debate about philosophical zombies since 1994, and volume 2, issue 4 (1995) contains an especially rich vein. The modern origin of the knowledge argument is Frank Jackson’s article “Epiphenomenal Qualia” (Philosophical Quarterly 32, 1982); two representative responses are David Lewis’s “What Experience Teaches” (1988) and Brian Loar’s “Phenomenal States” (1990), both reprinted in David Chalmers, ed., Philosophy of Mind (Oxford University Press, 2002).