Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.
http://www.ecs.soton.ac.uk/~harnad/Temp/searlbook.htm
http://cogprints.org/1622/
Minds, Machines and Searle 2:
What's Right and Wrong About the Chinese Room Argument


Stevan Harnad
in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a 'To London' sign on the Dover cliffs pointing south . . . (Hexter 1979).

When in 1979 Zenon Pylyshyn, associate editor of The Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle (with an unprepossessing title that Zenon, John, and I have all since forgotten!), I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection" about why/how we are not computers -- do anything to upgrade that impression.

The paper pointed out that a "Chinese-Understanding" computer program would not really understand Chinese because someone who did not understand Chinese (e.g., Searle himself) could execute the same program while still not understanding Chinese; hence the computer executing the program would not be understanding Chinese either. The paper rebutted various prima facie counterarguments against this (mostly variants on the theme that it would not be Searle but "the system," of which Searle would only be a part, that would indeed be understanding Chinese when the program was being executed), but all of this seemed trivial to me: Yes, of course an inert program alone could not understand anything (so Searle was right about that), but surely an executing program might be part of what an understanding "system" like ourselves really does and is (so the "Systems Reply" was right too).

The paper was refereed (favorably), and was accepted under the revised title 'Minds, Brains, and Programs', circulated to a hundred potential commentators across disciplines and around the world, and then co-published in 1980 in BBS with twenty-seven commentaries and Searle's Response. Across the ensuing years, further commentaries and responses continued to flow as, much to my surprise, Searle's paper became BBS's most influential target article (and still is, to the present day) as well as something of a classic in cognitive science. (At the Rochester Conference on Cognitive Curricula (Lucas & Hayes 1982), Pat Hayes went so far as to define cognitive science as "the ongoing research program of showing Searle's Chinese Room Argument to be false" -- "and silly," I believe he added at the time).

As the arguments and counterarguments kept surging across the years I chafed at being the only one on the planet not entitled (ex officio, being the umpire) to have a go, even though I felt that I could settle Searle's wagon if I had a chance, and put an end to the rather repetitious and unresolved controversy. In the late 80's I was preparing a critique of my own, called 'Minds, Machines and Searle' (after 'Minds, Machines, and Gödel', by Lucas [1961], another philosopher arguing that we are not computers), though not sure where to publish it (BBS being out of the question). One of the charges that had been laid against Searle by his critics had been that his wrong-headed critique had squelched funding for Artificial Intelligence (AI), so the newly founded Journal of Experimental and Theoretical Artificial Intelligence (JETAI) seemed a reasonable locus for my own critique of Searle, which accordingly appeared there in 1989.

I never heard anything from Searle about my JETAI critique, even though we were still interacting regularly in connection with the unabating Continuing Commentary on his Chinese Room Argument (CRA) in BBS, as well as a brand new BBS target article (Searle 1990a) that he wrote specifically to mark the 10th anniversary of the CRA. This inability to enter the fray would have been a good deal more frustrating to me had not a radically new medium for Open Peer Commentary been opening up at the same time: It had been drawn to my attention that since the early 80's the CRA had been a prime topic on "comp.ai", a discussion group on Usenet. (That Global Graffiti Board for Trivial Pursuit was to have multiple influences on both me and BBS, and on the future course of Learned Inquiry and Learned Publication, but that is all another story [Harnad 1990a, 1991b; Hayes et al., 1992]; here we are only concerned with its influence on the Searle saga).

Tuning in to comp.ai in the mid-late 80's with the intention of trying to resolve the debate with my own somewhat ecumenical critique of Searle (Searle's right that an executing program cannot be all there is to being an understanding system, but wrong that an executing program cannot be part of an understanding system), I found, to my surprise, comp.ai choked with such a litany of unspeakably bad anti-Searle arguments that I had to spend all my air-time defending Searle against these non-starters instead of burying him, as I had intended to do. (Searle did take notice this time, for apparently he too tuned in to comp.ai in those days, encouraging me [off-line] to keep fighting the good fight -- which puzzled me, as I was convinced we were on opposite sides).

I never did get around to burying Searle, for when, after months of never getting past trying to clear the air by rebutting the bad rebuttals to the CRA, I begged Searle [off-line] to read my 'Minds, Machines and Searle' and see that we were adversaries rather than comrades-at-arms, despite contrary appearances on comp.ai, he wrote back to say that although my paper contained points on which reasonable men might agree to disagree, on the essential point, the one everyone else was busy disputing, I in fact agreed with him -- so why didn't I just come out and say so?

It was then that the token dropped. For there was something about the Chinese Room Argument that had just been obviously right to me all along, and hence I had quite taken that part for granted, focusing instead on where I thought Searle was wrong; yet that essential point of agreement was the very one that everybody was contesting! And make no mistake about it, if you took a poll -- in the first round of BBS Commentary, in the Continuing Commentary, on comp.ai, or in the secondary literature about the Chinese Room Argument that has been accumulating across both decades to the present day (and culminating in the present book) -- the overwhelming majority still think the Chinese Room Argument is dead wrong, even among those who agree that computers can't understand! In fact (I am open to correction on this), it is my impression that, apart from myself, the only ones who profess to accept the validity of the CRA seem to be those who are equally persuaded by what I earlier called "Granny Objections" -- the kinds of soft-headed friends that do even more mischief to one's case than one's foes.

So what is this CRA then, and what is right and wrong about it? Searle is certainly partly to blame for the two decades of misunderstandings about his argument about understanding. He did not always state things in the most perspicuous fashion. To begin with, he baptized as his target a position that no one was quite ready to acknowledge as his own: "Strong AI".

What on earth is "Strong AI"? As distilled from various successive incarnations of the CRA (oral and written: Searle 1980b, 1982, 1987, 1990b), proponents of Strong AI are those who believe three propositions:

(1*) The mind is a computer program.

(2*) The brain is irrelevant.

(3*) The Turing Test is decisive.
 
 

It was this trio of tenets that the CRA was intended to refute. (But of course all it could refute was their conjunction. Some of them could still be true even if the CRA were valid). I will now reformulate (1*) - (3*) so that they are the recognizable tenets of computationalism, a position (unlike "Strong AI") that is actually held by many thinkers, and hence one worth refuting, if it is wrong (Newell 1980; Pylyshyn 1984; Dietrich 1990).

Computationalism is the theory that cognition is computation, that mental states are just computational states. In fact, that is what tenet (1) should have been:

(1) Mental states are just implementations of (the right) computer program(s). (Otherwise put: Mental states are just computational states).
 
 
If (1*) had been formulated in this way in the first place, it would have pre-empted objections about inert code not being a mind: Of course the symbols on a piece of paper or on a disk are not mental states. The code -- the right code (assuming it exists) -- has to be executed in the form of a dynamical system if it is to be a mental state.

The second tenet has led to even more misunderstanding. How can the brain be irrelevant to mental states (especially its own!)? Are we to believe that if we remove the brain, its mental states somehow perdure somewhere, like the Cheshire Cat's grin? What Searle meant, of course, was just the bog-standard hardware/software distinction: A computational state is implementation-independent. Have we just contradicted tenet (1)?

(2) Computational states are implementation-independent. (Software is hardware-independent).
 
 
If we combine (1) and (2) we get: Mental states are just implementation-independent implementations of computer programs. This is not self-contradictory. The computer program has to be physically implemented as a dynamical system in order to become the corresponding computational state, but the physical details of the implementation are irrelevant to the computational state that they implement -- except that there has to be some form of physical implementation. Radically different physical systems can all be implementing one and the same computational system.
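To make the combined claim of (1) and (2) concrete, here is a toy sketch (mine, not Searle's, and certainly not a model of understanding) of what implementation-independence amounts to: one and the same formal symbol-manipulation system realized by two structurally different mechanisms. The "rulebook" and its symbol strings are hypothetical placeholders, standing in, absurdly simplistically, for the putative T2-passing program:

    # A toy illustration of tenet (2), implementation-independence: the same
    # symbol-manipulation system realized by two structurally different
    # "implementations" that are computationally indistinguishable.
    # The rulebook and its symbol strings are hypothetical placeholders.

    RULEBOOK = {  # purely formal input->output rules, meaningless to the executor
        "NI HAO MA": "WO HEN HAO",
        "NI CHI LE MA": "CHI LE, XIE XIE",
    }
    DEFAULT = "DUI BU QI, WO BU DONG"

    def implementation_a(symbols: str) -> str:
        """Implementation A: a hash-table lookup (think: an electronic computer)."""
        return RULEBOOK.get(symbols, DEFAULT)

    def implementation_b(symbols: str) -> str:
        """Implementation B: scanning the rules one by one, as if written on the
        walls (think: Searle matching symbol shapes by hand)."""
        for input_shape, output_shape in RULEBOOK.items():
            if symbols == input_shape:
                return output_shape
        return DEFAULT

    if __name__ == "__main__":
        for probe in ("NI HAO MA", "NI CHI LE MA", "SOMETHING ELSE"):
            assert implementation_a(probe) == implementation_b(probe)
        print("Two structurally different implementations, one computational system.")

Tenet (2) says that whatever purely computational properties one of these implementations has, the other has too; tenet (1) adds that being a mental state is just such a property. That conjunction is precisely what the CRA will exploit.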

Implementation-independence is indeed a part of both the letter and the spirit of computationalism. There was even a time when computationalists thought that the hardware/software distinction cast some light on (if it did not outright solve) the mind/body problem: The reason we have that long-standing problem in understanding how on earth mental states could be just physical states is that they are not! Mental states are just computational states, and computational states are implementation-independent. They have to be physically implemented, to be sure, but don't look for the mentality in the matter (the hardware): it's the software (the computer program) that matters.

If Searle had formulated the second tenet of computationalism in this explicit way, not only would most computationalists of the day have had to recognise themselves as his rightful target, not only would it have fended off red herrings about the irrelevance of brains to their own mental states, or about there being no need for a physical implementation at all, but it would have exposed clearly the soft underbelly of computationalism, and hence the real target of Searle's CRA: For it is precisely on the strength of implementation-independence that computationalism will stand or fall.

The critical property is transitivity: If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being a mental state is just a computational property). We will return to this. It is what I have dubbed "Searle's Periscope" on the normally impenetrable "other-minds" barrier (Harnad 1991a); it is also that soft underbelly of computationalism. But first we must fix tenet (3*).

Actually, verbatim, tenet (3*) is not so much misleading (in the way (1*) and (2*) were misleading) as it is incomplete. It should have read:

(3) There is no stronger empirical test for the presence of mental states than Turing-Indistinguishability; hence the Turing Test is the decisive test for a computationalist theory of mental states.
 
 
This does not imply that passing the Turing Test (TT) is a guarantee of having a mind or that failing it is a guarantee of lacking one. It just means that we cannot do any better than the TT, empirically speaking. Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of "reverse engineering" (Harnad 1994a), and reverse engineering has only two kinds of empirical data to go by: structure and function (the latter including all performance capacities). Because of tenet (2), computationalism has eschewed structure; that leaves only function. And the TT simply calls for functional equivalence (indeed, total functional indistinguishability) between the reverse-engineered candidate and the real thing.

Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally. No one could quarrel with a successfully reverse-engineered candidate like that; no one could deny that a complete understanding of how that candidate works would also amount to a complete understanding of how a real duck works. Indeed, no one could ask for more.

But one could ask for less: a functionalist might settle for only the walking, the swimming and the quacking (etc., including everything else that a duck can do), ignoring the structure, i.e., what it looks like on the inside or the outside, what material it is made of, etc. Let us call the first kind of reverse-engineered duck, the one that is completely indistinguishable from a real duck, both structurally and functionally, D4, and the one that is indistinguishable only functionally, D3.

Note, though, that even for D3 not all the structural details would be irrelevant: To walk like a duck, something roughly like two waddly appendages are needed, and to swim like one, they'd better be something like webbed ones too. But even with these structure/function coupling constraints, aiming for functional equivalence alone still leaves a lot of structural degrees of freedom open. (Those degrees of freedom would shrink still further if we became more minute about function -- moulting, mating, digestion, immunity, reproduction -- especially as we approached the level of cellular and subcellular function. So there is really a microfunctional continuum between D3 and D4; but let us leave that aside for now, and stick with D3 macrofunction, mostly in the form of performance capacities).

Is the Turing Test just the human equivalent of D3? Actually, the "pen-pal" version of the TT, as Turing (1950) originally formulated it, was even more macrofunctional than that -- it was the equivalent of D2, requiring the duck only to quack. But in the human case, "quacking" is a rather more powerful and general performance capacity, and some consider its full expressive power to be equivalent to, or at least to draw upon, our full cognitive capacity (Fodor 1975; Harnad 1996a).

So let us call the pen-pal version of the Turing Test T2. To pass T2, a reverse-engineered candidate must be Turing-indistinguishable from a real pen-pal. Searle's tenet (3) for computationalism is again a bit equivocal here, for it states that TT is the decisive test, but does that mean T2?

This is the point where reasonable men could begin to disagree. But let us take it to be T2 for now, partly because that is the version that Turing described, and partly because it is the one that computationalists have proved ready to defend. Note that T2 covers all cognitive capacities that can be tested by paper/pencil tests (reasoning, problem-solving, etc.); only sensorimotor (i.e., robotic) capacities (T3) are left out. And the pen-pal capacities are both life-size and life-long: the candidate must be able to deploy them with anyone, indefinitely, just as a real pen-pal could; we are not talking about one-night party-tricks (Harnad 1992) here but real, human-scale performance capacity, indistinguishable from our own (Harnad 2000a).

We now reformulate Searle's Chinese Room Argument in these new terms: Suppose that computationalism is true, that is, that mental states, such as understanding, are really just implementation-independent implementations of computational states, and hence that a T2-passing computer would (among other things) understand.

Note that there are many ways to reject this premise, but resorting to any of them is tantamount to accepting Searle's conclusion, which is that a T2-passing computer would not understand. (His conclusion is actually stronger than that -- too strong, in fact -- but we will return to that as another of the points on which reasonable men can disagree). So if one rejects the premise that a computer could ever pass T2, one plays into Searle's hands, as one does if one holds that T2 is not a strong enough test, or that implementational details do matter.

So let us accept the premise and see how Searle arrives at his conclusion. This, after all, is where most of the heat of the past twenty years has been generated. Searle goes straight for computationalism's soft underbelly: implementation-independence (tenet (2)). Because of (2), any and every implementation of that T2-passing program must have the mental states in question, if they are truly just computational states. In particular, each of them must understand. Fair enough. But now Searle brings out his intuition pump, adding that we are to imagine this computer as passing T2 in Chinese; and we are asked to believe (because it is true) that Searle himself does not understand Chinese. It remains only to note that if Searle himself were executing the computer program, he would still not be understanding Chinese. Hence (by (2)) neither would the computer, executing the very same program. Q.E.D. Computationalism is false.
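For those who prefer to see the skeleton of the argument laid bare, it can be put schematically as follows (the premise labels are mine, not Searle's):

    % Schematic reconstruction of the argument just stated (labels are mine):
    \begin{enumerate}
      \item[(P1)] Computationalism: understanding Chinese is just being an
                  implementation of the right (T2-passing) program $P$.  % tenet (1)
      \item[(P2)] Implementation-independence: every implementation of $P$ shares
                  all of $P$'s purely computational properties.          % tenet (2)
      \item[(P3)] Searle himself can implement $P$, by memorizing and executing its rules.
      \item[(P4)] Searle, while implementing $P$, does not understand Chinese.
      \item[(C)]  By (P2) and (P4), no implementation of $P$ understands Chinese merely
                  in virtue of implementing $P$; hence (P1), and with it computationalism,
                  is false.
    \end{enumerate}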

Now just as it is no refutation (but rather an affirmation) of the CRA to deny that T2 is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the "right" kind of implementation, whereas Searle's is the "wrong" kind. This just amounts to conceding that tenet (2) is false after all.

By the same token, it is no use trying to save computationalism by holding that Searle would be too slow or inept to implement the T2-passing program. That's not a problem in principle, so it's not an escape-clause for computationalism. Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental (Churchland & Churchland 1990). It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity").

On comp.ai (and even in the original 1980 commentary on Searle), some of these ad hoc counterarguments were faintly voiced, but by far the most dogged of the would-be rebuttals were variants on the Systems Reply, to the effect that it was unreasonable to suppose that Searle should be understanding under these conditions; he would be only a part of the implementing system, whereas it would be the system as a whole that would be doing the understanding.

Again, it is unfortunate that in the original formulation of the CRA Searle described implementing the T2-passing program in a room with the help of symbols and symbol-manipulation rules written all over the walls, for that opened the door to the Systems Reply. He did offer a pre-emptive rebuttal, in which he suggested to the Systematists that if they were really ready to believe that whereas he alone would not be understanding under those conditions, the "room" as a whole, consisting of him and the symbol-strewn walls, would be understanding, then they should just assume that he had memorized all the symbols on the walls; then Searle himself would be all there was to the system.

This decisive variant did not stop some Systematists from resorting to the even more ad hoc counterargument that even inside Searle there would be a system, consisting of a different configuration of parts of Searle, and that that system would indeed be understanding. This was tantamount to conjecturing that, as a result of memorizing and manipulating very many meaningless symbols, Chinese-understanding would be induced either consciously in Searle, or, multiple-personality-style, in another, conscious Chinese-understanding entity inside his head of which Searle was unaware.

I will not dwell on any of these heroics; suffice it to say that even Creationism could be saved by ad hoc speculations of this order. (They show only that the CRA is not a proof; yet it remains the only plausible prediction based on what we know). A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that unconscious understanding would be, in virtue of the computations.

This last is not an arbitrary speculation, but a revised notion of understanding. Searle really has no defense against it, because, as we shall see (although he does not explicitly admit it), the force of his CRA depends completely on understanding's being a conscious mental state, one whose presence or absence one can consciously (and hence truthfully) ascertain and attest to (Searle's Periscope). But Searle also needs no defense against this revised notion of understanding, for it only makes sense to speak of unconscious mental states (if it makes sense at all) in an otherwise conscious entity. (Searle was edging toward this position ten years later in 1990a).

Unconscious states in nonconscious entities (like toasters) are no kind of mental state at all. And even in conscious entities unconscious mental states had better be brief! We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means conscious understanding.

The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!): Normally, if someone claims that an entity -- any entity -- is in a mental state (has a mind), there is no way I can confirm or disconfirm it. This is the "other minds" problem. We "solve" it with one another and with animal species that are sufficiently like us through what has come to be called "mind-reading" (Heyes 1998) in the literature since it was first introduced in BBS two years before Searle's article (Premack & Woodruff 1978). But of course mind-reading is not really telepathy at all, but Turing-Testing -- biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences. But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states.

Can we ever experience another entity's mental states directly? Not unless we have a way of actually becoming that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather nonconscious) state -- nothing to do with the mind.

Computationalism was very reluctant to give up on either of these; the first would have amounted to converting from computationalism to "implementationalism" to save the mental -- and that would simply be to rejoin the material world of dynamical systems, from which computationalism had hoped to abstract away. The second would have amounted to giving up on the mental altogether.

But there is also a sense in which the Systems Reply is right, for although the CRA shows that cognition cannot be all just computational, it certainly does not show that it cannot be computational at all. Here Searle seems to have drawn stronger conclusions than the CRA warranted. (There was no need: showing that mental states cannot be just computational was strong enough!) But he thought he had shown more:

Searle thought that the CRA had invalidated the Turing Test as an indicator of mental states. But we always knew that the TT was fallible; like the CRA, it is not a proof. Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle be the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false.

Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function: There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism.

Has Searle's contribution been only negative? In showing that the purely computational road would not lead to London, did he leave us as uncertain as before about where the right road to London might be? I think not, for his critique has helped open up the vistas that are now called "embodied cognition" and "situated robotics," and they have certainly impelled me toward the hybrid road of grounding symbol systems in the sensorimotor (T3) world with neural nets.

And Granny has been given a much harder-headed reason to believe what she has known all along: That we are not (just) computers (Harnad 2000b, 2001).

References

Cangelosi, A. & Harnad, S. (2000) 'The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories', Evolution of Communication (Special Issue on Grounding).

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.language.html

Cangelosi, A., Greco, A. & Harnad, S. (2000) 'From Robotic Toil to Symbolic Theft: Grounding Transfer from Entry-level to Higher-level Categories', Connection Science, vol.12, pp.143-162.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/cangelosi-connsci2.ps

Churchland, P.M. & Churchland, P.S. (1990) 'Could a Machine Think?', Scientific American, vol.262, pp.32-37.

Dietrich, E. (1990) 'Computationalism', Social Epistemology, vol.4, pp.135-54.

Fodor, J.A. (1975) The Language of Thought, (New York: Thomas Y. Crowell).

Fodor, J.A. & Pylyshyn, Z.W. (1988) 'Connectionism and Cognitive Architecture: A Critical Appraisal', Cognition, vol.28, pp.3-71.

Harnad, S. (1982a) 'Neoconstructivism: A Unifying Theme for the Cognitive Sciences', in T.Simon & R.Scholes (eds.), Language, Mind and Brain, (Hillsdale NJ: Lawrence Erlbaum), pp.1-11.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.neoconst.html

(1982b) 'Consciousness: An Afterthought', Cognition and Brain Theory, vol.5, pp.29-47.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.consciousness.html

(ed.) (1987) Categorical Perception: The Groundwork of Cognition, (New York: Cambridge University Press).

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad87.categorization.html

(1989) 'Minds, Machines and Searle', Journal of Experimental and Theoretical Artificial Intelligence, vol.1, pp.5-25.

ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad89.searle.html

(1990a) 'The Symbol Grounding Problem', Physica D, vol.42, pp.335-346.

ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad90.sgproblem.html

(1990b) 'Against Computational Hermeneutics', (Invited commentary on Eric Dietrich's 'Computationalism'), Social Epistemology, vol.4, pp.167-172.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dietrich.crit.html

(1990c) 'Lost in the Hermeneutic Hall of Mirrors', (Invited Commentary on: Michael Dyer, 'Minds, Machines, Searle and Harnad'), Journal of Experimental and Theoretical Artificial Intelligence, vol.2, pp.321-327.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dyer.crit.html

(1990d) 'Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry', Psychological Science, vol.1, pp.342-3 (reprinted in Current Contents, vol.45, November 11th, 1991, pp.9-13).

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.skywriting.html

(1991a) 'Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem', Minds and Machines, vol.1, pp.43-54.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html

(1991b) 'Post-Gutenberg Galaxy: The Fourth Revolution in the Means of Production of Knowledge', Public-Access Computer Systems Review, vol.2, pp.39-53.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.postgutenberg.html

(1992) 'The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion', SIGART Bulletin, vol.3, pp.9-10.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

(1993) 'Artificial Life: Synthetic Versus Virtual', in Artificial Life III: Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad93.artlife.html

(1994a) 'Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life', Artificial Life, vol.1, pp.293-301 (reprinted in: C.G.Langton (ed.), Artificial Life: An Overview, (Cambridge, MA: MIT Press, 1995).

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.artlife2.html

(1994b) 'Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't', Minds and Machines, vol.4, pp.379-390.

ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad94.computation.cognition.html

(1995b) 'Why and How We Are Not Zombies', Journal of Consciousness Studies, vol.1, pp.164-167.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.zombies.html

(1996) 'The Origin of Words: A Psychophysical Hypothesis', in B.Velichkovsky & D.Rumbaugh (eds.), Communicating Meaning: Evolution and Development of Language, (Hillsdale, NJ: Lawrence Erlbaum), pp.27-44.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad96.word.origin.html

(2000a) 'Minds, Machines and Turing: The Indistinguishability of Indistinguishables', Journal of Logic, Language, and Information, vol.9, (special issue on "Alan Turing and Artificial Intelligence"), pp.425-45.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

(2000b) 'Correlation Vs. Causality: How/Why the Mind/Body Problem Is Hard' [Invited Commentary on Nick Humphrey's, 'How to Solve the Mind-Body Problem'], Journal of Consciousness Studies, vol.7, pp.54-61.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.mind.humphrey.html

(2001) 'Explaining the Mind: Problems, Problems', The Sciences, (New York Academy of Sciences), April.

http://www.cogsci.soton.ac.uk/~harnad/Tp/bookrev.htm

Harnad, S., Hanson, S.J. & Lubin, J. (1995) 'Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding', in: V.Honavar & L.Uhr (eds.), Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration, New York & London: Academic Press, pp. 191-206.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.cpnets.html

Harnad, S., Steklis, H.D. & Lancaster, J.B. (eds.), (1976) Origins and Evolution of Language and Speech, Annals of the New York Academy of Sciences, vol.280.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) 'Virtual Symposium on Virtual Mind', Minds and Machines, vol.2, pp.217-238.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.virtualmind.html

Heyes, C.M. (1998) 'Theory of Mind in Nonhuman Primates', Behavioral and Brain Sciences, vol.21, pp.101-134.

http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.heyes.html

Hexter, J.H. (1979) Reappraisals in History, (Chicago: University of Chicago Press).

Lucas, J.R. (1961) 'Minds, Machines and Gödel', Philosophy, vol.36, pp.112-117.

http://cogprints.soton.ac.uk/abs/phil/199807022

Lucas, M.M. & Hayes, P.J. (eds.), (1982) Proceedings of the Cognitive Curriculum Conference. University of Rochester.

Luria, A.R. (1972) The Man with a Shattered World, (New York: Basic Books).

Newell, A. (1980) 'Physical Symbol Systems', Cognitive Science, vol.4, pp.135-83.

Premack, D. & Woodruff, G. (1978) 'Does the Chimpanzee Have a Theory of Mind?', Behavioral and Brain Sciences, vol.1, pp.515-526.

Pylyshyn, Z.W. (1980) 'Computation and Cognition: Issues in the Foundations of Cognitive Science', Behavioral and Brain Sciences, vol.3, pp.111-169.

(1984) Computation and Cognition: Toward a Foundation for Cognitive Science, (Cambridge, MA: MIT Press).

(ed.), (1987) The Robot's Dilemma: The Frame Problem in Artificial Intelligence, (Norwood, NJ: Ablex).

Searle, J.R. (1980a) 'Minds, Brains, and Programs', Behavioral and Brain Sciences, vol.3, pp.417-57.

http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

(1980b) 'Intrinsic Intentionality', Behavioral and Brain Sciences, vol.3, pp.450-6.

(1982) 'The Chinese Room Revisited', Behavioral and Brain Sciences, vol.5, pp.345-8.

(1984) Minds, Brains, and Science, (Cambridge, MA: Harvard University Press).

(1987) 'Minds and Brains Without Programs', in C.Blakemore & S.Greenfield (eds), Mindwaves, Oxford: Basil Blackwell, pp.208-33.

(1990a) 'Consciousness, Explanatory Inversion and Cognitive Science', Behavioral and Brain Sciences, vol.13, pp.585-96.

(1990b) 'Is the Brain's Mind a Computer Program?', Scientific American, vol.262, pp.20-5.

Steklis, H.D. & Harnad, S. (1976) 'From Hand to Mouth: Some Critical Stages in the Evolution of Language', in Harnad et al. (eds.), 1976, pp.445-455.

Turing, A.M. (1950) 'Computing Machinery and Intelligence', Mind, vol.59, pp.433-460.

http://cogprints.soton.ac.uk/abs/comp/199807017

Wittgenstein, L. (1953) Philosophical Investigations, (New York: Macmillan).