Harnad, S. (2000) Turing Indistinguishability and the Blind Watchmaker. In: Fetzer, J. & Mulhauser, G. (eds.) Evolving Consciousness. Amsterdam: John Benjamins (in press).

 TURING INDISTINGUISHABILITY AND THE BLIND WATCHMAKER

Stevan Harnad
Cognitive Sciences Center
Southampton University
Highfield, Southampton
SO17 1BJ United Kingdom
harnad@soton.ac.uk harnad@princeton.edu
http://www.princeton.edu/~harnad/intpub.html
http://cogsci.soton.ac.uk/~harnad/intpub.html

ABSTRACT: Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious organism and a functionally equivalent (Turing Indistinguishable) nonconscious "Zombie" organism: In other words, the Blind Watchmaker, a functionalist if ever there was one, is no more a mind reader than we are. Hence Turing-Indistinguishability = Darwin-Indistinguishability. It by no means follows from this, however, that human behavior is therefore to be explained only by the push-pull dynamics of Zombie determinism, as dictated by calculations of "inclusive fitness" and "evolutionarily stable strategies." We are conscious, and, more important, that consciousness is piggy-backing somehow on the vast complex of unobservable internal activity -- call it "cognition" -- that is really responsible for generating all of our behavioral capacities. Hence, except in the palpable presence of the irrational (e.g., our sexual urges) where distal Darwinian factors still have some proximal sway, it is as sensible to seek a Darwinian rather than a cognitive explanation for most of our current behavior as it is to seek a cosmological rather than an engineering explanation of an automobile's behavior. Let evolutionary theory explain what shaped our cognitive capacity (Steklis & Harnad 1976; Harnad 1996), but let cognitive theory explain our resulting behavior.

CONSCIOUSNESS CANNOT BE AN ADAPTATION

Here's an argument to try out on those of your adaptationist friends who think that there is an evolutionary story to be told about the "survival value" of consciousness: Tell me whatever you think the adaptive advantage of doing something consciously is, including the internal, causal mechanism that generates the capacity to do it, and then explain to me how that advantage would be lost in doing exactly the same thing unconsciously, with exactly the same causal mechanism.

Here are some examples: It is adaptive to feel pain when your leg is injured, because then you spare the leg and avoid the cause of the injury in future. (This account must be coupled, of course, with a causal account of the internal mechanism for detecting tissue damage and for learning to avoid similar circumstances in the future.) How would the advantage be lost if the tissue damage were detected unconsciously, the sparing of the leg were triggered and maintained unconsciously, and the circumstances to avoid were learned and avoided unconsciously? In other words: identical internal mechanisms of detection, learning, avoidance, but no consciousness?
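To make the shape of this argument concrete, here is a minimal, purely illustrative sketch (the class and method names are my own inventions, not anything from the literature) of the kind of functional mechanism the adaptive story actually trades in: damage is detected, the limb is spared, and the damaging circumstance is avoided in future; nowhere in the specification does a term for feeling anything appear or do any work.

```python
class AvoidanceLearner:
    """Toy damage-detection-and-avoidance mechanism. Every functional step the
    adaptive story needs (detect injury, spare the limb, avoid the circumstance
    in future) is here; no variable refers to anything being felt."""

    def __init__(self):
        self.known_damaging = set()   # circumstances learned to cause tissue damage

    def act(self, circumstance: str) -> str:
        # "Spare the leg": withdraw from circumstances already learned to be damaging.
        return "withdraw" if circumstance in self.known_damaging else "proceed"

    def register_outcome(self, circumstance: str, tissue_damage: bool) -> None:
        # The nociceptive signal is just a boolean; it drives learning directly.
        if tissue_damage:
            self.known_damaging.add(circumstance)


agent = AvoidanceLearner()
agent.register_outcome("hot_surface", tissue_damage=True)
assert agent.act("hot_surface") == "withdraw"   # the injurious circumstance is now avoided
assert agent.act("cool_surface") == "proceed"   # neutral circumstances are unaffected
```

The point of the sketch is not that organisms are this simple, but that the adaptive advantage cited in the story is exhausted by what such a specification does.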

Another example: It is adaptive to pay conscious selective attention to the most important of the many stimuli impinging on an organism at any one time. (This must be paired with a causal account of the mechanism for attending selectively to input and for detecting and weighting salient information.) How would that adaptive advantage be lost if the input selection and salience-detection were all taking place unconsciously? What is the advantage of conscious fear over unconscious vigilance, danger-detection and avoidance? Of conscious recall and retrieval from memory over mere recall and retrieval? Conscious inference over unconscious inference? Conscious discrimination over unconscious? Conscious communication? Conscious discourse? Conscious cognition? And all these comparisons are to be made, remember, in the context of an internal mechanism that is generating it all: the behavior, learning, memory, and the consciousness accompanying it.

The point that I hope is being brought out by these examples is this: An adaptive explanation must be based on differential consequences for an organism's success in surviving and reproducing. This can even include success in the stock market, so the problem is not with the abstractness or abstruseness of the adaptive function in question, it is with the need for differential consequences (Catania & Harnad 1988). Adaptive consequences are functional consequences. A difference that makes no functional difference is not an adaptive difference. The Blind Watchmaker (Dawkins 1986) is no more a mind-reader than any of the rest of us are. He can be guided by an organism's capacity to detect and avoid tissue injury, but not by its capacity or incapacity to feel pain while so doing. The same is true for conscious attention vs. unconscious selectivity, conscious fear vs. unconscious danger avoidance, conscious vs. unconscious memory, inference, discrimination, communication, discourse, cognition.

So for every story purporting to explain the adaptive advantage of doing something consciously, look at the alleged adaptive advantage itself more closely and it will turn out to be a functional advantage (consisting, in the case of cognition, of a performance capacity and the causal mechanism that generates it); and that exact same functional advantage will turn out to remain intact if you simply subtract the consciousness from it (Harnad 1982, 1991).

Indeed, although the comparison may seem paradoxical (since we all know that we are in fact conscious), those who have tried to claim an evolutionary advantage for consciousness are not unlike those uncritical computer scientists who are ready to impute minds even to the current generation of toy computational models and robots (Harnad 1989): There is an interesting similarity between claiming that a thermostat has (rudimentary) consciousness and claiming that an organism's (real) consciousness has an adaptive function. In both cases, it is a mentalistic interpretation that is misleading us: In the case of the organism that really is conscious, the interpretation of the organism's state as conscious happens to be correct. But the interpretation of that real consciousness as having adaptive function (over and above the adaptive function of its unconscious causal mechanism) is as gratuitous as the interpretation of the thermostat as having a consciousness at all, and for roughly the same reason: The conscious interpretation is not needed to explain the function.
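By way of a minimal illustration of that last point, the entire functional story of a thermostat can be written down in a few lines (a toy sketch, with parameter names of my own choosing); interpreting the device as having "rudimentary consciousness" adds nothing to, and explains nothing about, what the specification already fully determines.

```python
def thermostat(current_temp: float, setpoint: float = 20.0, hysteresis: float = 0.5) -> str:
    """Complete functional specification of a simple thermostat: compare the
    reading to the setpoint and switch the heater accordingly. Nothing here is
    explained, or improved, by interpreting the comparison as a conscious state."""
    if current_temp < setpoint - hysteresis:
        return "heater_on"
    if current_temp > setpoint + hysteresis:
        return "heater_off"
    return "hold"


print(thermostat(18.0))  # heater_on
print(thermostat(22.0))  # heater_off
```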

Why do we have the conviction that consciousness must have survival value? Well, in part it must be because evolutionary theory is not in the habit of viewing so prominent and ubiquitous a biological trait as consciousness as just a causal dangler like an appendix, a "spandrel" (Gould 1994), or worse. A partial reply here might be that there are reasons for believing that the mind could be rather special among biological traits (it is surely not a coincidence that centuries of philosophy have been devoted to the mind/body problem, not the "blue-eye/brown-eye" problem, or even the "phenotype/genotype" problem). But I suspect that the real reason we are so adaptationistic about consciousness has to do with our experience with and intuitions about free will (Dennett 1984). We are convinced that if/when we do something consciously, it's because we choose to do it, not because we are unconsciously impelled to do so by our neurophysiology (Libet 1985). So it's natural to want to establish an adaptive value for that trait (free will) too.

Yet it seems clear that there is no room for an independent free will in a causal, functional explanation unless we are prepared to be dualists, positing mental forces on a par with physical ones, and thereby, I think, putting all of physics and its conservation laws at risk (Alcock 1987). I don't think the mental lives of medium-sized objects making up the relatively minuscule biomass of one small planet in the universe warrant such a radical challenge to physics, so let us assume that our feeling of free will is caused by our brains, and that our brains are the real causes of what we do, and not our free wills.

This much explains why not many people are telling adaptive stories directly about free will: Because it leads to embarrassing problems with causality and physics. Yet I think our sense of free will is still behind the motivation to find an adaptive story for consciousness, and I think the latter is wrong-headed for about the same reason: If it is clear why it is not a good idea to say that there is a selective advantage for an organism that can will its actions (as opposed to having its brain cause them for it), it should be almost as clear why it is not good to say that there is a selective advantage for an organism that really sees as blue as opposed to merely detecting and responding to blue. We really see blue alright, but there's no point trying to squeeze an adaptive advantage out of that, since we have no idea HOW we manage to see, detect, or respond to blue. And once we do understand the causal substrate of that, then that causal substrate and the functional capacities it confers on our bodies will be the basis of any adaptive advantages, not the consciousness of blue.

REVERSE ENGINEERING AND TURING INDISTINGUISHABILITY

How are we to arrive at a scientific understanding of that causal substrate? First, I think we have to acknowledge that, as Dennett (1994, 1995) has suggested, the behavioral and cognitive sciences and large parts of biology are not basic sciences in the sense of physics and chemistry, but branches of "reverse engineering." Basic sciences study and explain the fundamental laws of nature. Forward engineering then applies these to designing and building useful things such as bridges, furnaces, and airplanes, with stipulated functional capacities. Reverse engineering, by contrast, inherits systems that have already been designed and built by the Blind Watchmaker with certain adaptive functional capacities, and its task is to study and explain the causal substrate of those capacities.

Clearly, what reverse engineering needs first is a methodology for finding that causal substrate: a set of empirical constraints that will reliably converge on it. The logician Alan Turing (1964) provided the basis for such a methodology, although, as you will see, his original proposal needs considerable modification because it turns out to be just one level of a ("Turing-") hierarchy of empirical constraints (Harnad 1994a).

According to Turing's Test, a machine has a mind if its performance capacity (i.e., what it can do) is indistinguishable from that of a person with a mind. In the original version of the Turing Test (T2), the machine was removed from sight so no bias would be introduced by its appearance (the indistinguishability had to be in performance, not in appearance). Then (although this is not how Turing put it), the machine had to be able to correspond (by exchanging letters) with real people for a lifetime in such a way that it could not be distinguished from a real pen-pal. There are accordingly two dimensions to the Turing Test: The candidate (1) must have all the performance capacities of a real person and (2) its performance must be indistinguishable FROM that of a real person TO (any) real person (for a lifetime -- I add this to emphasize that short-term tricks were never the issue: the goal was to really generate the total capacity; Harnad 1992b).
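As a rough structural sketch only (the names and the trivially short exchange are mine, and nothing in it addresses the lifetime-scale requirement just stressed), T2 can be pictured as a protocol in which a judge sees nothing but two streams of replies and must say which correspondent is the machine:

```python
import random
from typing import Callable, List

# A "pen-pal" is any function from the questions asked so far to the next reply;
# a judge maps the two reply transcripts to a guess ("A" or "B") about which is the machine.
PenPal = Callable[[List[str]], str]
Judge = Callable[[List[str], List[str]], str]

def t2_trial(questions: List[str], human: PenPal, candidate: PenPal, judge: Judge) -> bool:
    """One drastically shortened T2 exchange: the judge never sees the
    correspondents, only their replies. Returns True if the candidate
    escapes detection on this trial."""
    labels = random.sample(["A", "B"], 2)          # hide who is who
    pals = {labels[0]: human, labels[1]: candidate}
    transcripts = {"A": [], "B": []}
    for i, _ in enumerate(questions):
        for label, pal in pals.items():
            transcripts[label].append(pal(questions[: i + 1]))
    guess = judge(transcripts["A"], transcripts["B"])
    return pals[guess] is not candidate

# Toy usage: a transparently mechanical candidate and a judge who spots it.
human = lambda qs: f"About '{qs[-1]}': hard to say, it depends."
candidate = lambda qs: "BEEP"
judge = lambda a, b: "A" if a[-1] == "BEEP" else "B"
print(t2_trial(["How was your weekend?"], human, candidate, judge))  # False: detected
```

The sketch shows only the shape of the test; its force comes from scaling the exchange up to total, lifelong pen-pal capacity.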

T2 has been the subject of much discussion, most of it not pertinent here (Harnad 1989). What is pertinent is that the out-of-sight constraint, which was intended only to rule out irrelevant biases based on appearance, also inadvertently ruled out a lot of human performance: It ruled out all of our robotic capacity (Harnad 1995b), our capacity to discriminate, manipulate, and categorize those very objects, properties, events and states of affairs that the symbols in our pen-pal correspondence are about (Harnad 1987, 1992a; Harnad et al. 1995). So although the symbolic level of performance to which T2 is restricted is a very important one (and although there are even reasons to think that T2 could not be successfully passed without drawing indirectly upon robotic capacity), it is clear that human performance capacity amounts to a lot more than what can be tested directly by T2. Let us call a test that calls for Turing-Indistinguishable symbolic and robotic capacity the Total Turing Test, or T3. A T3 robot would have to be able to live among us and interact with the people and objects in the world Turing-Indistinguishably from the way we do.

At this point people always ask: Well, how indistinguishably? What about the question of appearance? Would it have to be able to shave? A little common sense is needed here, keeping in clear sight the fact that this is not about tricks or arbitrary stipulations (Harnad 1994a, 1995a). The point of Turing Testing is to generate functional capacities. What one aims for is generic capacities. Just as a plane has to be able to fly, but doesn't have to look like or fly exactly like any particular DC-10 -- it just has to have flying capacity Turing-indistinguishable from that of planes in general -- so a T3 robot would only have to have our generic robotic capacities (to discriminate, manipulate, categorize, etc.), not their fine-tuning as they may occur in any particular individual.

But T3 is not the top of the Turing hierarchy either, for there IS more that one could ask if one wanted to capture Turing-Indistinguishably every reverse engineering fact about us, for there are also the internal facts about the functions of our brains. A T4 candidate would be Turing indistinguishable from us not only in its symbolic and robotic capacities but also in its neuromolecular properties. And T4, need I point out, is as much as a scientist can ask, for the empirical story ends there.

So let's go for T4, you are no doubt straining to say. Why bother with T3 or T2 at all? Well, there are good reasons for aiming for something less than T4, if possible. For one thing, (1) we already know, in broad strokes, what our T3 capacity is. The T3 data are already in, so to speak, so there we can already get to work on the reverse engineering. Comparatively little is known so far about the brain's properties (apart from its T3 capacity, of course). Furthermore, it is not obvious that we should wait till all the brain data are in, or even that it would help to have them all, because (2) it is not at all clear which of the brain's properties are relevant to its T3 capacities. And interesting though they are in their own right, it is striking that (3) so far, T4 neuroscientific data have not yet been helpful in providing functional clues about how to reverse-engineer T3 capacity. Turing also had a valid insight, I think, in implicitly reminding us that we are not mind-readers with one another either, and that (4) our intuitive judgments about other people's minds are based largely on Turing Indistinguishable performance (i.e., T2 and T3), not on anything we know or think we know about brain function.[Footnote 1]

There is one further reason why T3 rather than T4 might be the right level of the Turing hierarchy for mind-modelling; it follows from our earlier discussion of the absence of selective advantages of consciousness: The Blind Watchmaker is likewise not a mind-reader, and is hence also guided only by T3. Indeed, T4 exists in the service of T3. T4 is one way of generating T3, but if there were other ways, evolution would be blind to the differences between them, for they would be functionally -- hence adaptively -- indistinguishable.

UNDERDETERMINATION OF THEORIES BY DATA

Are there other ways to pass T3, apart from T4? To answer that we first have to consider the general problem of "scientific underdetermination." In basic science, theories are underdetermined by data. Several rival theories in physics may account equally well for the same data. As long as the data that a particular theory accounts for are subtotal (just "toy" fragments of the whole empirical story -- what I call "t1" in the T-hierarchy), the theory can be further calibrated by "scaling it up" to account for more and more data, tightening its empirical degrees of freedom while trimming excesses with Occam's razor. Only the fittest theories will scale all the way up to T5, the "Grand Unified Theory of Everything," successfully accounting for all data, past, present and future; but it is not clear that there will be only one survivor at that level. All the "surviving" rival theories, being T5-indistinguishable, which is to say, completely indistinguishable empirically, will remain eternally underdetermined. The differences among them make no empirical difference; we will have no way of knowing which, if any, is the "right" theory of the way the world really is. Let us call this ordinary scientific underdetermination. It's an unresolvable level of uncertainty that even physicists have to live with, but it does not really cost them much, since it pertains to differences that do not make any palpable difference to anyone.

There is likewise underdetermination in the engineering range of the Turing hierarchy (T2 - T4). T2 is the level of symbolic, computational function, and here there are several forms of underdetermination: One corresponds to the various forms of computational equivalence, including Input/Output equivalence (also called Turing Equivalence) and Strong Equivalence (equivalence in every computational step) (Pylyshyn 1984). The other is the hardware-independence of computation itself: the fact that the same computer program can be physically implemented in countless radically different ways. This extreme form of underdetermination is both an advantage and a disadvantage. With it goes the full power of formal computation and the Church-Turing Thesis (Church 1936, Turing 1937) according to which everything can be simulated computationally. But it has some liabilities too, such as the symbol grounding problem (Harnad 1990, 1994b), because the meanings of symbols are not intrinsic to a symbol system; they are parasitic on the mind of an external interpreter. Hence, on pain of infinite regress, symbols and symbol manipulation cannot be a complete model for what is going on in the mind of the interpreter.
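To illustrate the first of these distinctions with a deliberately trivial example of my own (not drawn from Pylyshyn), here are two programs that are Input/Output (Turing) equivalent but not Strongly equivalent: they compute the same function by different sequences of steps, and each could in turn be physically implemented in countless different ways.

```python
def factorial_iterative(n: int) -> int:
    """Computes n! by looping upward through the factors."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n: int) -> int:
    """Computes n! by recursion: a different sequence of computational steps."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Input/Output (Turing) equivalence: identical behaviour on every input...
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
# ...but not Strong Equivalence: the step-by-step processes differ, and either
# program can be run on radically different hardware (hardware-independence).
```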

There is underdetermination at the T3 level too. Just as there is more than one way to transduce light (e.g., Limulus's ommatidia, the mammalian retina's rods and cones, and the photosensitive cell at your local bank; Fernald 1997) and more than one way to implement an airplane, so there may be more than one way to design a T3 robot. So there may well be T3-indistinguishable yet T4-distinguishable robots. The question is, will they all have a mind, or will only T4 robots have one? Note that the latter question concerns a form of underdetermination that is much more radical than any I have mentioned so far. For unlike T5 underdetermination in physics, or even T2 ungroundedness, T3 underdetermination in mind-modelling involves a second kind of difference, over and above ordinary empirical underdetermination, and that difference does make a palpable difference, but one that is palpable to only one individual, namely, the T3 candidate itself. This extra order of underdetermination is the mark of the mind/body problem and it too is unresolvable; so I propose that we pass over it in silence, noting only that, scientifically speaking, apart from this extra order of uncertainty, the T3-indistinguishable candidates for the mind are on a par with T5-indistinguishable candidates for the Grand Unified Theory of Everything, in that in both cases there is no way we can be any the wiser about whether or not they capture reality, given that each of them can account for all the data. [Footnote 2]

---

Figure 1 about here:

t1: toy fragment of human total capacity
T2: Total Indistinguishability in symbolic performance capacity
T3: Total Indistinguishability in robotic (including symbolic) performance capacity
T4: Total Indistinguishability in neural (including robotic) properties
T5: Total Physical Indistinguishability

---

Is T4 a way? In a sense it is, because it is certainly a tighter empirical approximation to ourselves than T3. But the extra order of underdetermination peculiar to the mind/body problem (the fact that, if you will, empiricism is no mind reader either!) applies to T4 as well. Only the T4 candidate itself can know whether or not it has a mind; and only the T3 candidates can know whether or not we would have been wrong to deny them a mind for failing T4, having passed T3.

The T-hierarchy is a hierarchy of empirical constraints. Each successive level tightens the degrees of freedom on the kinds of candidates that can pass successfully. The lowest, "toy" level, t1, is as underconstrained as can be because it captures only subtotal fragments of our total performance capacity. There are countless ways to generate chess-playing skills, arithmetic skills, etc.; the level of underdetermination for arbitrary fragments of our Total capacity is far greater than that of ordinary scientific underdetermination. T2 is still underconstrained, despite the formal power of computing and the expressive and intuitive power of linguistic communication, because of the symbol grounding problem (Harnad 1990) and also because T2 too leaves out the rest of our performance capacities. T4 is, as I suggested, overconstrained, because not all aspects of brain function are necessarily relevant to T3 capacity, and it is T3 capacity that was selected by evolution. So it is T3, I would suggest, that is the right level in the T-hierarchy for mind-modelling.
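The nesting just described can be summarised in a few lines (a sketch under my own naming, nothing more): from T2 upward each level includes the capacities tested by the level below it, so passing a tighter constraint entails passing the looser ones, while t1 sits beneath them all as mere fragments of total capacity.

```python
from enum import IntEnum

class TLevel(IntEnum):
    """The T-hierarchy as ordered empirical constraints. From T2 upward each
    level explicitly includes the one below it (robotic includes symbolic,
    neural includes robotic, physical includes neural); t1 is the sub-total
    'toy' level of arbitrary fragments of human performance capacity."""
    t1 = 1   # toy fragments of total human capacity
    T2 = 2   # total symbolic (pen-pal) indistinguishability
    T3 = 3   # total robotic (including symbolic) indistinguishability
    T4 = 4   # total neural (including robotic) indistinguishability
    T5 = 5   # total physical indistinguishability

def subsumes(tighter: TLevel, looser: TLevel) -> bool:
    # Within T2..T5, the tighter constraint entails the looser one.
    return tighter >= looser

assert subsumes(TLevel.T4, TLevel.T3)       # a T4 candidate is also T3-indistinguishable
assert not subsumes(TLevel.T2, TLevel.T3)   # a mere pen-pal need not pass T3 (the proposed target level)
```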

I could be wrong about this, of course, and let me describe how: First, the question of "appearance" that we set aside earlier has an evolutionary side too. Much more basic than the selection of the mechanisms underlying performance capacity is the selection of morphological traits, both external ones that we can see and respond to (such as plumage or facial expression) and internal ones (such as the macromorphology and the micromorphology [the physiology and the biochemistry] of our organs, including our brain). The Blind Watchmaker may be blind to T3-indistinguishable differences underlying our performance capacity, but he is not blind to morphological differences, if they make an adaptive difference. And then of course there is the question of "shape" in the evolutionary process itself: the shape of molecules, including the mechanism of heredity, and the causal role that that plays. And we must also consider the status of certain special and rather problematic "robotic" capacities, such as the capacity to reproduce. Morphological factors are certainly involved there, as they are involved in other basic robotic functions, such as eating and defecation; there might well prove to be an essential interdependency between cognitive and vegetative functions (Harnad 1993a).

But let us not forget the monumental constraints already exerted by T3 alone: A causal mechanism must be designed that can generate our full performance capacity. To suppose that this is not constraint enough is to suppose that there could be mindless T3 Zombies (Harnad 1995c), and that only the morphological constraints mentioned above could screen them out. But, as has been noted several times earlier, this would be a remarkable coincidence, because, even with "appearance" supplementing T3, indeed, even with the full force of T4, evolution is still not a mind-reader. It seems more plausible to me that T3 itself is the filter that excludes Zombies: that mindless mechanisms are not among the empirical possibilities, when it comes to T3-scale capacity.

In any case, even if I'm wrong, T3 seems to be a more realistic goal to aim for initially, because the constraints of T3 -- the requirement that our model generate our full performance capacity -- are positive ones: Your model must be able to do all of this (T3). The "constraints" of T4, in contrast, are, so far, only negative ones: They amount to a handicap: "However you may manage to get the T3 job done, it must be done in this brainlike way, rather than any old way." Yet at the same time, as I have suggested, no positive clue has yet come from T4 (neurobiological) research that has actually helped generate a t1 fragment of T3 capacity that one could not have generated in countless other ways already, without any T4 handicaps. So the optimal strategy for now seems to be a division of labor: Let mind-modelers do T3 modelling and let brain-modelers do T4.

Obviously, if T4 work unearths something that helps generate T3, then T3 researchers can help themselves to it; and of course if T4 research actually attains T4 first, then there is no need to continue with T3 at all, because T4 subsumes T3. But if T3 research should succeed in scaling up to a T3-passer first, we could then fine-tune it, if we liked, so it conforms more and more closely to T4 (just as we could calibrate it to include more of the fine-tuning of behavior mentioned earlier).

Or we could just stop right there, forget about the fine-tuning, and accord civil rights to the successful T3-passer. I, for one, would be prepared to do so, since, not being a mind-reader myself, I would really not feel that I had a more compelling basis for doubting that it feels pain than I do in the case of my natural fellow creatures.

Never mind. The purpose of this excursion into the Turing hierarchy was to look for methodological and empirical constraints on the reverse engineering of the mind, and we have certainly found them; but whether your preference is for T3 or T4, what the empirical task yields, once it is completed, is a causal mechanism: one that is capable of generating all of our T3 capacities (in a particular way, if you prefer T4). And as I have stressed repeatedly, neither T3 nor T4 can select directly for consciousness per se, for there is no Turing-distinguishable reason why anything that can be done consciously could not be done unconsciously just as well, particularly since what does the causal work cannot be the consciousness itself but the mechanism we have laboriously reverse-engineered till it scaled up to T3.

So, since, for all that T3 or the Blind Watchmaker can determine, the candidate might as well be a Zombie, does it follow that those who have been stressing the biological determinism of behavior (Dawkins 1989; Barkow et al. 1992) are closer to the truth than those who stress cognition, consciousness and choice?

ARE WE DRIVEN BY OUR DARWINIAN UNCONSCIOUS?

Let's consider specific examples. The following kind of suggestion has been made (e.g., by Shields & Shields 1983 and Thornhill & Thornhill 1992; and most recently by Baker 1996, in a curious juxtaposition of pornography and bio-psychodynamic hermeneutics that, had the writing been better, would be reminiscent of Freud): For reasons revealed by game-theoretic and inclusive-fitness assumptions and calculations, there are circumstances in which it is to every man's biological advantage to rape. Our brains accordingly perform this calculation (unconsciously, of course) in given circumstances, and when its outcome reveals that it is optimal to do so, we rape. Fortunately, our brains are also sensitive to certain cues that inhibit the tendency to rape because of the probability of punishment and its adverse consequences for fitness. This is also a result of an unconscious calculation. So we are rather like Zombies being impelled to or inhibited from raping according to the push and pull of these unconscious reckonings. If we see a potential victim defenceless and unprotected, and there is no indication that we will ever be caught or anyone will ever know, we feel inclined to rape. If we instead see the potential victim flanked by a pair of burly brothers in front of a police station, we feel inclined to abstain. If the penalties for rape are severe and sure, we abstain; if not, we rape.

Similarly, there is an unconscious inclusive fitness calculator that assesses the advantages of mating with one's sibling of the opposite sex (van den Berghe 1983). Ordinarily these advantages are vastly outweighed by the disadvantages arising from the maladaptive effects of inbreeding. However, under certain circumstances, the advantages of mating with a sibling outweigh the disadvantages of inbreeding, for example, when great wealth and status are involved, and the only alternative would be to marry down (as in the case of the Pharaohs). To put it dramatically, according to the function of this unconscious biological calculator, as we approach the pinnacle of wealth and status, my sister ought to be looking better and better to me.

These explanations and these hypothetical mechanisms would make sense, I suggest, if we really were Zombies, pushed and pulled directly by unconscious, dedicated "proximal mechanisms" of this kind. But what I think one would find in a T3-scale candidate, even a T3 Zombie, would not be such unconscious, dedicated proximal mechanisms, but other, much more sophisticated, powerful and general cognitive mechanisms, most of them likewise unconscious, and likewise evolved, but having more to do with general social and communicative skills and multipurpose problem-solving and planning skills than with any of the specifics of the circumstances described. These evolved and then learned T3 capacities would have next to nothing to do with dedicated fitness calculations of the kind described above (with the exception, perhaps, of basic sexual interest in the opposite sex itself, and its inhibition toward those with whom one has had long and early contact, i.e., close kin).

The place to search for Darwinian factors is in the origin of our T3 capacity itself, not in its actual deployment in a given individual lifetime. And that search will not yield mechanisms such as rape-inhibition-cue-detectors or status-dependent-incest-cue-detectors, but general mechanisms of social learning and communication, language, and reasoning. The unconscious substrate of our actual behavior in specific circumstances will be explained, not by simplistic local Darwinian considerations (t1 "toy" adaptationism, shall we call it?), but by the T3 causal mechanism eventually revealed by the reverse engineering of the mind. The determination of our behavior will be just as unconscious as biological determinism imagines it will be, but the actual constraints and proximal mechanisms will not be those dictated directly by Darwin but those dictated by Turing, his cognitive engineer.

What, then, is the role of the mind in all this unconscious, causally determined business? Or, to put it another way, why aren't we just Zombies? Concerns like these are symptomatic of the mind/body problem, and that, it seems to me, is going to beset us till the end of time -- or at least till the end of conscious time. What is the mind/body problem? It's a problem we all have with squaring the mental with the physical, with seeing how a mental state, such as feeling melancholy, can be the same as a physical state, such as certain activities in brain monoamine systems (Harnad 1993b).

The old-style "solution" to the mind/body problem was simply to state that the physical state and the mental state were the same thing. And we can certainly accept that (indeed, it's surely somehow true), but what we can't do is understand how it's true, and that's the real mind/body problem. Moreover, the sense in which we do not understand how it's true that, say, feeling blue is really being low in certain monoamines, is, I suggest, very different from the kinds of puzzlement we've had with other counterintuitive scientific truths. For, as Nagel (1974, 1986) has pointed out (quite correctly, I think), the understanding of all other counterintuitive scientific truths except those pertaining to the mind/body problem has always required us to translate one set of appearances into a second set of appearances that, at first blush, differed from the first, but that, upon reflection, we could come to see as the same thing after all: Examples include coming to see water as H2O, heat as mean kinetic energy, life as certain biomolecular properties, and so on.

The reason this substitution of one set of appearances for another was no problem (given sufficient evidence and a causal explanation) was that, although appearances changed, appearance itself was preserved in all previous cases of intuition-revision. We could come to see one kind of thing as another kind of thing, but we were still seeing (or picturing) it as something. But when we come to the mind/body problem, it is appearance itself that we are inquiring about: What are appearances? -- for mental states, if you think about it, are appearances. So when the answer is that appearances are really just, say, monoaminergic states, then that appearance-to-appearance revision mechanism (or "reduction" mechanism, if you prefer) that has stood us in such good stead time and time again in scientific explanation fails us completely. For what precedent is there for substituting for a previous appearance, not a new (though counterintuitive) appearance, but no appearance at all?

This, at least, is how Nagel evokes the lasting legacy of the mind/body problem. It's clearly more than just the problem of ordinary underdetermination, but it too is something we're going to have to live with. For whether your preference is for T3 or T4, it will always take a blind leap of faith to believe that the candidate has a mind. Turing Indistinguishability is the best we can ever do. Perhaps it's some consolation that the Blind Watchmaker could do no better.

FOOTNOTES:

1. The work of some authors on "theory of mind" in animals (Premack & Woodruff 1978) and children (Gopnik 1993) and of some adult theorists of the mind when they adopt the "intentional stance" (i.e., when they interpret others as having beliefs and desires; Dennett 1983) can be interpreted as posing a problem for the claim that consciousness cannot have had an adaptive advantage of its own. "Theory of mind" used in this nonstandard way (it is not what philosophers mean by the phrase) corresponds in many respects to Turing-Testing: Even though we are not mind-readers, we can tell pretty well what (if anything) is going on in the minds of others (adults, children, animals): We can tell when others are happy, sad, angry, hungry, menacing, trying to help us, trying to deceive us, etc. To be able to do this definitely has adaptive advantages. So would this not give the Blind Watchmaker an indirect way of favouring those who have mental states? Is it not adaptive to be able to infer the mental states of others?

The problem is that the adaptive value of mind-reading (Turing Testing) depends entirely on how (1) the appearance and behaviour of others, (2) the effects of our own appearance and behaviour on others, and (3) what it feels like to be in various mental states, covary and cue us about things that matter to our (or our genes') survival and reproduction. We need to know when someone else threatens us with harm, or when our offspring need our help. We recognise the cues and can also equate them with our own feelings when we emit such cues. The detection of internal state correlates of external cues like this is undeniably adaptive. But what is the adaptive value of actually feeling something in detecting and using these correlates? Nothing needs to be felt to distinguish real affection from feigned affection, in oneself or in others. And real and feigned affection need not be based on a difference in feelings, or on any feelings at all.

Dennett is the one who has argued most persuasively for the necessity and the utility of adopting the intentional stance, both in order to live adaptively among one's fellow creatures and in order to reverse-engineer them in the laboratory. But let us not forget that Dennett's first insight about this was based on how a chess-playing computer programme can only be understood if one assumes that it has beliefs and desires. Let us not entertain here the absurd possibility that a computer running a chess-playing programme really has beliefs and desires. It is not disputed that interpreting it as if it had beliefs and desires is useful. But then all that gives us is an adaptive rationale for the evolution of organisms that can act as if they had minds and as if they could read one another's minds; all that requires is a causal exchange of signals and cues. It provides no rationale for actually feeling while all that is going on.

2. We could decide to accept the theory that has the fewest parameters, but it is not clear that God used Occam's Razor in designing the universe. The Blind Watchmaker certainly seems to have been profligate in designing the biosphere, rarely designing the "optimal" system, which means that evolution leaves a lot of nonfunctional loose ends. It is not clear that one must duplicate every last one of them in reverse-engineering the mind.

REFERENCES

Alcock, J.E. 1987. "Parapsychology: Science of the anomalous or search for the soul?" Behavioral and Brain Sciences 10: 553-643.

Baker, R. 1996. Sperm wars: Infidelity, sexual conflict and other bedroom battles. London: Fourth Estate.

Barkow, J., Cosmides, L. & Tooby, J. (eds.) 1992. The Adapted Mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press.

Catania, A.C. & Harnad, S. (eds.) 1988. The Selection of Behavior: The Operant Behaviorism of B.F. Skinner: Comments and Consequences. New York: Cambridge University Press.

Church, A. 1936. "An unsolvable problem of elementary number theory." American Journal of Mathematics 58: 345-363.

Dawkins, R. 1989. The selfish gene. Oxford: Oxford University Press.

Dawkins, R. 1986. The blind watchmaker. New York: Norton.

Dennett, D. C. 1983. "Intentional systems in cognitive ethology: The "Panglossian paradigm" defended." Behavioral & Brain Sciences 6: 343-390.

Dennett, D.C. 1984. Elbow room: The varieties of free will worth wanting. Cambridge, MA: MIT Press.

Dennett, D.C. 1994. "Cognitive Science as Reverse Engineering: Several Meanings of 'Top Down' and 'Bottom Up'." In: Prawitz, D. & Westerstahl, D. (eds.) Proceedings of the 9th International Congress of Logic, Methodology and Philosophy of Science (1991). Dordrecht: Kluwer.

Dennett, D.C. 1995. Darwin's dangerous idea: Evolution and the meanings of life. London: Allen Lane.

Fernald, R.D. 1997. "The evolution of eyes". Brain, Behavior and Evolution 50: 253-259.

Gopnik, A. 1993. "How we know our minds: The illusion of first-person knowledge of intentionality". Behavioral & Brain Sciences 16: 29-113.

Gould, S.J. 1994. "The spandrels of San Marco and the panglossian paradigm: A critique of the adaptationist programme". In: Sober, E. (ed.) Conceptual issues in evolutionary biology, second edition. Cambridge, MA: MIT Press. Pp. 73-90. [Reprinted from Proceedings of the Royal Society of London B 205: 581-598, 1979.]

Harnad, S. 1982. "Consciousness: An afterthought". Cognition and Brain Theory 5: 29-47.

Harnad, S. 1987. (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. 1989. "Minds, Machines and Searle". Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. 1990. "The Symbol Grounding Problem". Physica D 42: 335-346.

Harnad, S. 1991. "Other bodies, Other minds: A machine incarnation of an old philosophical problem". Minds and Machines 1: 43-54.

Harnad, S. 1992a. Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (eds.) Connectionism in Context. Springer Verlag.

Harnad, S. 1992b. "The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion". SIGART Bulletin 3(4) (October): 9-10.

Harnad, S. 1993a. "Artificial Life: Synthetic Versus Virtual". Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI.

Harnad, S. 1993b. Discussion (passim). In: Bock, G.R. & Marsh, J. (eds.) Experimental and Theoretical Studies of Consciousness. CIBA Foundation Symposium 174. Chichester: Wiley.

Harnad, S. 1994a. "Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life". Artificial Life 1(3): 293-301.

Harnad, S. 1994b. "Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't". Special Issue on "What Is Computation". Minds and Machines 4: 379-390.

Harnad, S. 1995a. Does the Mind Piggy-Back on Robotic and Symbolic Capacity? In: H. Morowitz (ed.) The Mind, the Brain, and Complex Adaptive Systems. Santa Fe Institute Studies in the Sciences of Complexity. Volume XXII. Pp. 204-220.

Harnad, S. 1995b. Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. and R. Brooks (eds.) The Artificial Life Route to Artificial Intelligence: Building Embodied Situated Agents. New Haven: Lawrence Erlbaum. Pp. 277-286.

Harnad, S. 1995c. "Why and How We Are Not Zombies". Journal of Consciousness Studies 1: 164-167.

Harnad, S. 1996. The Origin of Words: A Psychophysical Hypothesis. In: Velichkovsky, B. & Rumbaugh, D. (eds.) Communicating Meaning: Evolution and Development of Language. NJ: Erlbaum. Pp. 27-44.

Harnad, S., Hanson, S.J. & Lubin, J. 1995. Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds.) Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. Academic Press. Pp. 191-206.

Libet, B. 1985. "Unconscious cerebral initiative and the role of conscious will in voluntary action". Behavioral and Brain Sciences 8: 529-566.

Nagel, T. 1974. "What is it like to be a bat?" Philosophical Review 83: 435 - 451.

Nagel, T. 1986. The view from nowhere. New York: Oxford University Press.

Premack, D. & Woodruff, G. 1978. "Does the chimpanzee have a theory of mind?" Behavioral & Brain Sciences 1: 515-526.

Pylyshyn, Z. W. 1984. Computation and cognition. Cambridge MA: MIT/Bradford

Shields, WM & Shields, LM. 1983. "Forcible rape: An evolutionary perspective". Ethology & Sociobiology 4: 115-136.

Steklis, H.D. & Harnad, S. 1976. From hand to mouth: Some critical stages in the evolution of language. In: Harnad, S., Steklis, H. D. & Lancaster, J. B. (eds.) Origins and Evolution of Language and Speech. Annals of the New York Academy of Sciences 280: 445 - 455.

Thornhill, R. & Thornhill, N.W. 1992. "The evolutionary psychology of men's coercive sexuality". Behavioral and Brain Sciences 15: 363-421.

Turing, A.M. 1937. "On computable numbers, with an application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2, 42: 230-265.

Turing, A.M. 1964. Computing machinery and intelligence. In: Anderson, A. (ed.) Minds and Machines. Englewood Cliffs, NJ: Prentice-Hall.

Van den Berghe, P.L. 1983. "Human inbreeding avoidance: Culture in nature". Behavioral and Brain Sciences 6: 91-123.