The Convergence Argument in Mind-Modelling: Scaling Up from Toyland to the Total Turing Test

Harnad, Stevan (2000) The Convergence Argument in Mind-Modelling: Scaling Up from Toyland to the Total Turing Test. [Journal (On-line/Unpaginated)]

The Turing Test is just a methodological constraint forcing us to scale up to an organism's full functional capacity. This is still an epistemic matter, not an ontic one: even a candidate in which we have successfully reverse-engineered all human capacities is not guaranteed to have a mind. The right level of convergence, however, is total robotic capacity; symbolic capacity alone (the standard Turing Test) is underdetermined, whereas full neurosimilitude is overdetermined.

Item Type: Journal (On-line/Unpaginated)
Keywords: artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test
Subjects: Psychology > Cognitive Psychology
ID Code: 1649
Deposited By: Harnad, Stevan
Deposited On: 26 Jun 2001
Last Modified: 11 Mar 2011 08:54

References in Article


Chiappe, D.L. & Kukla, A. (2000) Artificial Intelligence and Scientific Understanding. PSYCOLOQUY 11(064)

Chiappe, D.L. & Kukla, A. (1993) Artificial Intelligence and Scientific Understanding. Cognoscenti 1: 7-9.

Dennett, D.C. (1993) Discussion (passim). In: Bock, G.R. & Marsh, J. (Eds.) Experimental and Theoretical Studies of Consciousness. CIBA Foundation Symposium 174. Chichester: Wiley.

Dennett, D.C. (1994) Cognitive Science as Reverse Engineering: Several Meanings of "Top Down" and "Bottom Up". In: Prawitz, D. & Westerstahl, D. (Eds.) International Congress of Logic, Methodology and Philosophy of Science (9th: 1991). Dordrecht: Kluwer.

Fodor, J.A. (1981) The Mind-Body Problem. Scientific American 244: 114-23.

Fodor, J.A. (1983) The Modularity of Mind. Cambridge MA: MIT Press.

Fodor, J.A. (1991) Replies. In: B. Loewer & G. Rey (Eds.) Meaning in Mind: Fodor and his Critics (pp. 255-319). Cambridge MA: Blackwell.

Green, C.D. (2000a) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061)

Green, C.D. (1993a) Is AI the Right Method For Cognitive Science? Cognoscenti 1: 1-5

Green, C.D. (2000b) Empirical Science and Conceptual Analysis Go Hand in Hand. PSYCOLOQUY 11(071)

Green, C.D. (1993b) Ontology Rules! (But not Absolutely). Cognoscenti 1: 21-28.

Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47.

Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-54.

Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke & R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4): 9-10.

Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think 2(1): 12-78. (Special issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, Eds.)

Harnad, S. (1993b) Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11

Harnad, S. (1993c) Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component. Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. NJ: Erlbaum.

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301. Reprinted in: C.G. Langton (Ed.) Artificial Life: An Overview. MIT Press, 1995.

Harnad, S. (1995a) Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. & Brooks, R. (Eds.) The Artificial Life Route to Artificial Intelligence: Building Embodied Situated Agents (pp. 277-286). New Haven: Lawrence Erlbaum.

Harnad, S. (1995b) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? In: H. Morowitz (Ed.) The Mind, the Brain, and Complex Adaptive Systems. Santa Fe Institute Studies in the Sciences of Complexity, Volume XXII. P.
Harnad, S. (2000) Turing Indistinguishability and the Blind Watchmaker. In: Mulhauser, G. (Ed.) Evolving Consciousness. Amsterdam: John Benjamins (in press).

Harnad, S. (2001) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information (JoLLI), special issue on "Alan Turing and Artificial Intelligence" (in press).

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2:
Plate, T. (2000) Caution: Philosophers at Work. PSYCOLOQUY 11(070)

Plate, T. (1993) Reply to Green. Cognoscenti 1: 13.

Searle, J. R. (1980) Minds, Brains and Programs. Behavioral and Brain Sciences 3: 417-424.

Zelazo, P.D. (2000) The nature (and artifice) of cognition. PSYCOLOQUY 11(076)

Zelazo, P.D. (1993) The Nature (and Artifice) of Cognition. Cognoscenti 1: 18-20
