The title of this essay will probably strike readers familiar with the philosophy of mind as somewhat curious. The multiple realizability argument against functionalism? Multiple realizability is standardly deployed as an argument for functionalism, not against it. In I, I outline the standard multiple realizability argument in the debate between functionalists and identity theorists. In II, I show how this argument can be turned against the functionalist, and in III, I deal with some objections.
– Kane Baker
The identity theory, outlined in Smart (1959), claims that every type of mental state is identical to a type of brain state. For instance, fear is identical to stimulation of the amygdala eliciting the release of adrenaline and cortisol; pain is identical to the firing of C-fibres (of course, fear and pain both involve much more than this, but these simplifications illustrate the point). Whenever there is fear or pain, there are these brain processes, and vice versa.
Putnam (1975) objects that this view fails to account for the fact that mental states are multiply realizable. This is the claim that there are, or at least could be, organisms with physiologies different from our own but which also have minds. It seems, for instance, that octopi have minds at least somewhat like ours, despite the fact that the human nervous system and the octopus nervous system are radically different. Suppose we discover exactly the brain states and processes that occur when humans experience e.g. pain; label these states and processes B. The identity theorist claims that pain is identical to B. Putnam’s response is simple: octopi experience pain; octopi don’t have B; therefore, pain is not identical to B. Two things can’t be identical if you can have one without the other. Such examples can be multiplied. Fodor (1981: 127) suggests that there could be aliens with minds, or that one day we might build machines with minds; these creatures wouldn’t even have neurons, let alone anything like a brain.
The multiple realizability argument provided the main impetus for functionalism. According to functionalism, mental states are identified not with some underlying physical structure, but with a particular causal profile. For instance, pain is a state that is caused by bodily damage and that causes distress and withdrawal from the harmful stimuli. What the human pain, the octopus pain, and the alien pain have in common is that they all share this causal profile.
Today, functionalism is probably the dominant theory in philosophy of mind. This is, however, rather peculiar, given that functionalism faces a multiple realizability problem just like the one explained above. As far as I know, this was first pointed out by Lewis (1980). Lewis supposes that there could be a madman for whom pain has totally deviant causes and effects. Rather than being caused by bodily injury, his pain is caused by “moderate exercise on an empty stomach.” Rather than being distracting, his pain facilitates concentration on mathematics, and he has no desire whatsoever to alleviate the pain. If this is right, the argument against functionalism is simple. The functionalist identifies pain with a certain functional state F (F is caused by bodily injury, causes distress, etc.). The madman experiences pain; the madman is not in state F; therefore, pain is not identical to F.
Putnam’s multiple realizability argument pretty much dismantled the identity theory. Lewis’s multiple realizability argument seems not to have made a dent in functionalism. One reason for this, I think, is that it seems plausible for the functionalist simply to reject Lewis’s example. Lewis takes it as just obvious that there could be a person who experiences pain, but for whom pain has totally deviant causes and effects. He doesn’t provide any actual argument for this. So is it really so obvious that there could be such a person? Perhaps Lewis’s intuition that there could be is just a failure of imagination: he’s not really imagining this madman in enough detail (as Dennett might suggest; cf. Dennett 1995). Could a state that causes no distress at all, that doesn’t distract us but actually helps us concentrate, etc. – could a state like that really be pain? It’s surely not as obvious as Lewis assumes.
But the problem can be pressed with more familiar examples (see Gozzano and Hill 2012: 10). For sufferers of psychosomatic pain, intense pain results not from bodily damage, but rather from stress, anxiety, and other emotional factors. Masochists are often disposed not to avoid pain but instead to actively seek it out; for different reasons, people with depression may seek pain through self-harm. People who are paralyzed and people under the influence of certain drugs may be unable to exhibit any of the behaviour usually associated with pain. For some people, pain may be a powerful motivator (the bodybuilder who pushes himself to beat the pain); for others, it may simply depress them, scare them, or otherwise interfere with their goals (the hypochondriac who interprets even the slightest pain as a sign of serious illness). Functional multiple realizability proliferates when we look beyond humans. Characteristic behavioural effects of pain – wincing, screaming, crying, searching for paracetamol – aren’t expressed in most other species.
These examples show that pain cannot be defined in terms of any single set of causes and effects. It should be easy to see how similar problems arise for other mental states. This is the multiple realizability argument against functionalism: each type of mental state can be realized by many types of functional state.
Let’s consider some possible replies.
(1) The functional multiple realizability argument is mistaken in that it assumes that the functionalist claims that pain necessarily has certain causes and effects. Rather, pain is defined in terms of its typical causes and effects (this is how Lewis (1980: 218) himself treats functionalism). The masochist’s pain is pain because it’s a state that typically causes avoidance and distress; i.e. for most people in this state, the state causes avoidance and distress. Psychosomatic pain is pain because it’s a state that’s typically caused by bodily damage; i.e. for most people in this state, the state has been caused by bodily damage.
The basic difficulty with this response is that it leaves open the question of what grounds there are for assuming that the masochist is in pain in the first place. Granted, assuming that we already know that the masochist is in pain, the functionalist can accommodate this pain by appealing to typical causes and typical effects. But why should we count it as pain in the first place on the functionalist view? In fact, the same question can now be asked even of standard cases of pain. The problem here is that once we retreat to typical causes and effects, the appeal to causal profile no longer provides a clear way to demarcate pains from non-pains. Provided that most of the things we call pain exhibit the right causal profile, anything whatsoever can be a pain or can fail to be a pain. The feeling I get when I eat chocolate cake, for instance, could be a pain; it’s just a pain with atypical causes and effects. Similarly, the feeling I get when I accidentally hit my thumb with a hammer and scream could be joy, just joy with atypical causes and effects.
(2) Definitions of mental states are context-specific. Although we often talk about pain without qualification, in fact all talk of pain is relative to specific contexts. In a sense, there’s no such thing as pain simpliciter, but only pain-in-x. Pain-in-the-normal-human is caused by bodily damage, causes distress, etc. Pain-in-the-masochist is caused by bodily damage, causes sexual arousal, etc. There are many different functional definitions of pain, each appropriate to different contexts. The fact that pain has different causes and effects for different people doesn’t show that pain can’t be defined as a state that has a certain causal profile; it only shows that there are different kinds of pain.
I find this response deeply unsatisfying. It entails that when we talk on the one hand about pain in normal circumstances, and on the other about pain for the masochist, we’re using the word “pain” in different ways. But this just seems wrong. Granted, it may be that the normal pain and the masochist’s pain are different in various ways, but there’s evidently something significant that normal pain and the masochist’s pain have in common, and this is what we generally use the word “pain” to refer to. One way to make this clear is to ask: in virtue of what are pain-in-the-normal-human and pain-in-the-masochist both called pains? Why call what the masochist is experiencing “pain-in-the-masochist” instead of something else? Why not “sadness-in-the-masochist” or “surprise-in-the-masochist”?
(3) Despite the causal differences pains can exhibit, there are some more abstract causal properties they share in virtue of which they’re all pains. We can solve the functional multiple realizability problem by making the functional definition more abstract. Consider, for instance: “pain causes screaming and wincing” vs “pain causes expressions of distress.” The latter is more abstract: expressions of distress include screaming and wincing but also much else besides. The latter is also clearly more plausible for a functional definition of pain: we wouldn’t want to define pain as something that causes screaming and wincing, since there are many pains that don’t make us scream or wince. Of course, “pain causes expressions of distress” still isn’t abstract enough – the masochist and the person completely paralyzed won’t be expressing distress. But perhaps some functional definition still more abstract will do the job.
So the question is, in light of the examples suggested in II, what exactly will this definition look like? We know that it can’t include “caused by bodily damage” (because this rules out psychosomatic pain) or “causes attempts to alleviate it” (this rules out the masochist’s pain), nor can we appeal to any causes or effects that would be specific to humans. When we consider the variety of causes and effects that pain can have or fail to have, it becomes difficult to imagine that a functional definition able to capture the many different causes and effects of pain could be specific enough to distinguish pain from other mental states. It seems like the best we’ll be able to get is an utterly trivial definition such as “pain is caused by processes in the body and causes other processes in the body.” And perhaps even this will turn out to be false, if we one day build computers without bodies that can be programmed to experience pains.
(4) Finally, the functionalist might simply bite the bullet, and insist that the masochist’s “pain,” the psychosomatic “pain,” etc., are not really pains after all. This view strikes me as absurd on its own terms, but the functionalist should find it especially painful to espouse. As we saw in I, one of the primary advantages of functionalism is that it allows us to attribute mental states to a whole host of different kinds of entities: not just humans, but also octopi, robots, and aliens might experience pains. But the functionalist who adopts this response to the functional multiple realizability problem is committed to an extreme conservatism about the mental. Only humans experience pain, and only the right kinds of humans.
An interesting point about responses (2), (3), and (4) is that the identity theorist can use exactly the same responses to deal with the multiple realizability argument against the identity theory, and these responses face exactly the same objections as those outlined above. The appeal to context-specificity is discussed in Churchland (1988: 41): we can save the identity theory by identifying pain-in-the-human with one kind of physical structure (brain states) and pain-in-the-robot with another (silicon-chip states, say). This solution is unsatisfying because it doesn’t tell us in virtue of what the human pain and the robot pain are both pains. (3) is suggested by Shapiro (2000: 643-646): a human brain and a robot “brain” may be different in various respects, but they might share the same physical structure in a more abstract sense. The challenge now is to explain what exactly this more abstract physical structure is supposed to look like. Finally, an identity theorist might simply deny that robots could experience pain, though this would be an extremely counterintuitive conclusion.
To conclude, what I hope to have shown is that functionalism faces an argument exactly analogous to Putnam’s multiple realizability argument against the identity theory. As far as multiple realizability is concerned, both theories are on a par.
Churchland, P.M. (1988) Matter and Consciousness, revised edition, Cambridge, Massachusetts; London, England: MIT Press.
Dennett, D. (1995) “The Unimagined Preposterousness of Zombies”, Journal of Consciousness Studies, vol. 2, no. 4, pp. 322-326.
Fodor, J. A. (1981) “The Mind-Body Problem”, Scientific American, vol. 244, no. 1, pp. 124-132.
Gozzano, S. and Hill, C. (2012) “Introduction”, in Gozzano, S. and Hill, C. (eds.) New Perspectives on Type Identity, Cambridge: Cambridge University Press, pp. 1-15.
Lewis, D. (1980) “Mad Pain and Martian Pain”, in Block, N. (ed.) Readings in the Philosophy of Psychology: Volume 1, London: Methuen, pp. 216-222.
Putnam, H. (1975) “The Nature of Mental States”, in Putnam, H. (ed.) Philosophical Papers, Volume 2: Mind, Language and Reality, Cambridge: Cambridge University Press, pp. 429-440.
Shapiro, L. (2000) “Multiple Realizations”, The Journal of Philosophy, vol. 97, no. 12, December, pp. 635-654.
Smart, J.J.C. (1959) “Sensations and Brain Processes”, The Philosophical Review, vol. 68, no. 2, April, pp. 141-156.