We human beings may not be the most admirable species on the planet, or the most likely to survive for another millennium, but we are without any doubt at all the most intelligent. We are also the only species with language. What is the relation between these two obvious facts?
Intelligence, in its most general guise, is the capacity to extract information from the world and put it to good use. And since we live in time, this capacity can be put more pointedly as the capacity to mine the past for the sake of the future. Learning is simply improvement in this capacity, in whatever form it takes. The soliloquy that accompanies the errors committed by any intelligent agent might be: "Well, I mustn't do that again!" and one of the hardest lessons for any agent to learn, apparently, is how to learn from one's own mistakes. In order to learn from them, one has to be able to contemplate them, and this is no small matter. Life rushes on, and unless one has developed positive strategies for recording one's tracks, the task known in Artificial Intelligence as credit assignment (also known, of course, as blame assignment!) is insoluble. When one says to oneself, "I mustn't do that again!" what exactly does "that" refer to? I have just fallen over a cliff, perhaps, and my last act was to move my left leg forward. Surely the moral I should draw is not that I should never again move my left leg forward. Nor that I should never locomote again, nor that I should be careful never again to locomote before supper, or when there are daffodils in bloom. The task that confronts me is forming an appropriate representation of the crucial features of my past experience, so that I can use it as a guide to future action.
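The credit-assignment problem just described can be made concrete with a toy sketch. All the feature names and episodes below are invented for illustration (this is not a model anyone has proposed): generate one candidate moral per feature of the fatal episode, then prune every moral contradicted by an episode that ended well.

```python
# Toy illustration of credit assignment: which feature of the fatal
# episode should be blamed? Features and episodes are invented.

def candidate_morals(episode):
    """One 'never do X again' hypothesis per recorded feature."""
    return [frozenset([f]) for f in episode]

def prune(hypotheses, safe_episodes):
    """Drop any hypothesis contradicted by an episode that ended well."""
    return [h for h in hypotheses
            if not any(h <= ep for ep in safe_episodes)]

fatal = {"moved-left-leg", "locomoting", "before-supper", "near-cliff"}
safe = [{"moved-left-leg", "locomoting", "after-supper", "in-meadow"},
        {"locomoting", "before-supper", "in-meadow"}]

surviving = prune(candidate_morals(fatal), safe)
# Only features unique to the fatal episode survive: "near-cliff".
```

Even this crude scheme presupposes what the essay goes on to examine: a recorded, manipulable representation of the episode, categorized into discrete features.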
Learning from one's mistakes is not the only variety of learning, but in a certain sense it is fundamental, and there can be no doubt that we Homo sapiens are much better at this task than any other species, and that verbal language plays a major role in our pre-eminence. The question is: How? Just now I imagined an agent saying to himself "I mustn't do that again!", but might not the benefits of such reflection be available to an agent that couldn't say anything to itself? Might not an animal, lacking natural language, nevertheless be capable of categorizing--and re-categorizing--its own behaviors?
Before we consider the ways in which language might enhance our capacity to learn from experience, we should remind ourselves briefly of what is known about the capacity of animals without language to learn from experience. There is no doubt that they do learn, usually rather slowly, and usually only an ecologically marked subset of the lessons we might offer them. For instance, rats (like many other animals) learn to find their way through mazes, learn to avoid performing certain simple actions under certain simple conditions (conditions in which they come to expect to be punished for taking those actions), learn to optimize their performance on certain natural or artificially contrived food-obtaining tasks, and so forth. These laboratory examples of learning are commonly held--for reasons good and bad--to be suitable substitutes for the variety of ecologically more realistic cases of learning animals exhibit in the wild. There is this much to be said in favor of their equivalence: they involve incremental learning over longish amounts of time, identifiable in the lab as training time, and discernible in the wild simply as the need for substantial repetition and variation of experience to drive the message home. Animals in the wild do not often exhibit "one-shot" learning of anything very demanding.
There is a famous exception, the Garcia effect: the capacity of rats to exhibit a sudden and well-aimed revision of their eating policies in response to a single episode of nausea. What is so striking about the Garcia effect is that it is dramatically unlike the gradual--even sluggish--hill-climbing towards a better policy that rats exhibit in other domains of their lives. But in another regard, it is typical of non-human learning; it is rigidly tied to a specific sub-class of sensory cues that have a narrowly circumscribed behavioral meaning to the animals in question. When we observe the Garcia effect, we are not getting a glimpse of a single instance of a more general talent. It is a special case, incapable of wide variation. We might say that Garcia discovered that rats are idiots savants with expertise in the toxic-food-detection field.
Such exotic effects aside, animals show a definite capacity to be gradually trained by their experience in the world, and this gradual sort of learning has been studied and modeled by several different waves of researchers, from Associationist to Behaviorist to Connectionist (and all their subvarieties), and thanks to the cumulative effect of several generations of criticism and rebuttal, retraction and revision, we are getting clearer about the strengths and limits of this real but not all-encompassing variety of learning. I want to reserve a term for this variety of learning, wherever it occurs, and a word that comes close in its connotations is training.
Although training can yield remarkably subtle and powerful discriminatory competences, capable of teasing out the patterns lurking in voluminous arrays of data, these competences tend to be anchored in the specific tissues that are modified by training. Andy Clark and Annette Karmiloff-Smith, in a forthcoming paper, describe them as "embedded in special-purpose effective procedures" (Clark and Karmiloff-Smith, "The Cognizer's Innards," forthcoming in Mind and Language). They are "embedded" in the sense that they are incapable of being "transported" readily to other data domains or other individuals.
Today, there is an explosively growing interest in models of such learning, under the general banner of connectionism. The defining feature of a connectionist model is that it is composed of nodes of one sort or another, linked together into a network, in which the connection strengths between nodes can be adjusted by training regimes, and in which each increment in the training is accomplished by the slight adjustment of many nodes. The result is that the effects of learning are distributed throughout the fabric in which the learning is embodied. Clark and Karmiloff-Smith vividly characterize the knowledge embedded in such special-purpose pattern-recognition systems as "interwoven", and note that while there are clear benefits to a design policy that "intricately interweave[s] the various aspects of our knowledge about a domain in a single knowledge structure", there are costs as well: "the interweaving makes it practically impossible to operate on or otherwise exploit the various dimensions of our knowledge independently of one another."
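The picture just sketched can be illustrated with a minimal toy network: a single linear unit trained by the delta rule. The details are my own illustration, not Clark and Karmiloff-Smith's; the point is only that every training increment slightly adjusts all the connection strengths, so the resulting "knowledge" ends up distributed and interwoven rather than localized in any one place.

```python
import random

# Toy connectionist unit: knowledge lives in many connection strengths,
# and each training step nudges all of them slightly.

random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in range(4)]

def output(inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(inputs, target, rate=0.05):
    """Delta rule: every weight is adjusted a little on every example."""
    error = target - output(inputs)
    for i, x in enumerate(inputs):
        weights[i] += rate * error * x   # the adjustment is distributed

# Train toward a simple pattern: the target is the sum of the
# first two inputs, so the other two inputs are irrelevant.
for _ in range(200):
    inputs = [random.choice([0.0, 1.0]) for _ in range(4)]
    train_step(inputs, target=inputs[0] + inputs[1])

# After training, weights approximate [1, 1, 0, 0]; no single weight
# "is" the knowledge--it is interwoven across all the connections.
```

Notice that nothing in the trained weights can be lifted out and "transported" to another task; that is the embeddedness at issue.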
We must be careful not to exaggerate the embeddedness of interwoven knowledge. The trained pianist has subjected only his hands and fingers to laborious training in hitting the keys, but will probably find himself already more adept at playing the pedals of an organ with his feet than the untrained person. And as Karl Lashley famously proved years ago, what the rat learns in the maze is not merely a way of moving its legs till it gets to the food; it has learned something much more general: roughly, how to get to the food.
Such knowledge is often called procedural (as opposed to declarative) knowledge, or implicit (as opposed to explicit) knowledge, or (by Polanyi and me, with somewhat different emphases) tacit knowledge. Endnote 1 So opaquely is such knowledge hidden in the mesh of the connections that, as Clark and Karmiloff-Smith say, "it is knowledge in the system, but it is not yet knowledge to the system." Once we think of the contrast in these terms, it may remind us of many other "intelligent" animal behaviors that are not trained but innate. For example, the unsettling and precocious singlemindedness with which the newly hatched cuckoo chick shoulders the competing eggs out of the nest in which it finds itself provokes what may be a comforting judgment: the evolutionary rationale for this behavior is crystal clear, but it is nothing to the cuckoo. The "wisdom" of its behavior is in some sense embedded in the innate wiring that achieves this effect so robustly, but the cuckoo hasn't a clue about this rationale (see Dennett, 1987, 1991; Dretske, 1991; and Dennett, 1992). Why not? What would have to be added to the cuckoo's computational architecture for it to be able to appreciate, understand, and exploit the wisdom interwoven in its neural nets?
A popular answer to this question, in its many guises, is "symbols!" The answer is well nigh tautological, and hence is bound to be right, in some interpretation. How could it not be the case that implicit or tacit knowledge becomes explicit by being expressed or rendered in some medium of "explicit" representation? Symbols, unlike the nodes woven into connectionist networks, are "movable"; they can be "manipulated"; they can be composed into larger structures where their contribution to the meaning of the whole can be a definite and generatable function of the structure--the syntactic structure--of the parts.
There is surely something right about this; what we human beings have that far outstrips the cognitive capacities of both rat and cuckoo (and maybe even outstrips the cognitive capacities of all other primates) is the capacity for swift, insightful learning--learning that does not depend on laborious training but is simply--"simply"!--ours as soon as we contemplate a suitable symbolic representation of the knowledge. (We do have to understand the representation we contemplate, of course, and there is where mystery still lurks.)
"So," Clark and Karmiloff-Smith ask, "do we merely need to add to a connectionist network a mechanism that generates linguistic labels for the network's implicit knowledge?" How could we take a connectionist network and turn it into a symbolic system? Or perhaps we should ask instead: How could we take a connectionist network and attach it to a symbolic system? (And if so, what would the symbolic system be "made of" if not connectionist parts?) Or still better: How could we make a connectionist system grow a symbolic system on top of itself?
And finally, what is the role of natural language in making possible the higher levels of cognitive activity we have just begun to isolate? Are the representational elements out of which we compose these higher-level movable distillations of our embedded wisdom words? Are they entities tied to specific elements in natural languages, or are they perhaps elements in an innate representational system, terms in a "language of thought"?
I have argued in the past that natural language is an essential ingredient in the development of human consciousness, and more particularly, in developing the sort of higher-level self-categorizing cognition we are talking about. And so Clark and Karmiloff-Smith are correct to identify me as one who has held that "the human mind deploys essentially connectionist style representations but augments itself with the symbol structures of natural language in the public domain." They go on to claim:
Theories which make the distinctive cognitive characteristics of humans dependent on an ability with public language seem, in general, to get the cart before the horse. It is more plausible to see our abilities with language as one effect or product of a deeper underlying difference in the redescriptive architecture of our cognitive apparatus, a difference which may group us with some non-linguistic higher mammals but separate us from hamsters, sea-slugs and standard connectionist networks.

I am inclined to think that they are overstating their case, and missing some of the very complexity they rightly insist upon. In what follows I want to explore the idea that the capacity of a system to engage in "representational redescription," as they say, really does depend on that system's capacity--not yet fully developed, but in the process of development--to master and use a natural language.
Karmiloff-Smith 1979 is a pioneering expression of the now familiar idea that a main virtue of introducing higher-level representations is that one creates a new class of entities that can be operated upon, that can become "objects of cognitive manipulation, transportable to other tasks" (Clark and Karmiloff-Smith). But these very skills of cognitive manipulation have to be created along with the representations; that is, they have to develop out of something prior, some capacities that can be harnessed or exapted (to use Stephen Jay Gould's term) to the novel tasks of composing, saving, retrieving, revising, and comparing these new internal objects. Karmiloff-Smith's own research with children gives us some of the best glimpses we have of children gradually equipping themselves with these competences, and I want to suggest a few ways in which what is going on during this process might depend on natural language even when it doesn't directly or explicitly involve the child's using natural language.
I begin with a useful analogy with a more recent technological breakthrough. The advent of high-speed still photography was a revolutionary technological advance for science because it permitted human beings, for the first time ever, to examine complicated temporal phenomena not in real time, but in their own good time--in leisurely, methodical, backtracking analysis of the traces they had created of those complicated events. Here a technological advance carried in its wake a huge enhancement in cognitive power. Eadweard Muybridge's famous studies of the gaits of horses and human beings are particularly accessible and charming examples, but it should be remembered that there are thousands of important counterparts in other scientific fields. The advent of language, I want to suggest, was a parallel boon for human beings, a technology that created a whole new class of objects-to-contemplate, verbally embodied surrogates that could be reviewed in any order at any pace.
Before there were cameras and high speed film, there were plenty of observational and recording devices that permitted the scientist to extract data precisely from the world for subsequent analysis at his leisure. The exquisite diagrams and illustrations of several centuries of science are testimony to the power of these methods, but there is something special about a camera: it is "stupid." It does not have to understand its subject the way an artist or illustrator does in order to "capture" the data represented in its products. Just now I noted that the sort of learning we human beings can achieve just by contemplating symbolic representations of knowledge depends not on our merely, in some sense, perceiving them, but also understanding them, and my rather curious suggestion is that in order to arrive at this marvelous summit, we must climb steps in which we perceive but don't understand our own representations.
Contemplating one's past experience in such a way as to make it good material for general judgments requires recording it, somehow, but recording one's past experience in toto is probably impossible. We are not equipped--though some like to think we are--with a sort of multi-media recording of all our experience in the brain.
Many years ago, Wilder Penfield (1958) wrote a fascinating account of his experience with the direct electrical stimulation of the cerebral cortices of awake, locally anesthetized patients, and among the most tantalizing of his findings were the cases in which subjects reported that they were re-experiencing, in high detail, apparently long forgotten events in their lives. The idea that Penfield had stumbled upon the "Play" button of the brain's multi-modal VCR was hard to resist, and many have succumbed to the temptation, but in fact, the details Penfield reported, and the clinical experiences of subsequent researchers, are consistent with much less exciting (if biologically more plausible) hypotheses. Only the highly interpreted (and normally, oft-rehearsed or reviewed) abstracts of our past experiences are available to us, even under the best of conditions. But how are these abstracts made?
Recording "edited" versions of our past experience would be possible if we had an initially "stupid" way of doing both the editing and storing. (If we had to have a good understanding of what we were editing at the time we stored it, we would not need to take our time, later, to re-analyze and reconsider what we had done.) Endnote 2 Consider the examples I gave of the "wrong" morals to draw from falling over the cliff; each was neatly entered on the list with a simple linguistic phrase: "moving my left leg forward," "locomoting," "locomoting before supper," "locomoting while daffodils are in bloom." Each is a candidate categorization of my action, and each is of course pretty stupid in the circumstances. But you have to start somewhere, and this is the advantage we have over animals: we can start. A stupid hypothesis to test is better than none at all, especially if it is one's debut in a career of hypothesis-testing.
How might a habit of label-generation, hypothesis-formation and testing get started, and what is involved in the general practice of such "redescription"? Nobody knows yet--certainly I don't know--but I have some speculations to offer that might not be too far wide of the mark. Consider what happens early in the linguistic life of any child. "Hot!" says mother. "Don't touch the stove!" At this point the child doesn't know what "hot" or "touch" or "stove" mean--they are primarily just sounds--auditory event-types that have a certain redolence, a certain familiarity, a certain echoing memorability to the child. They come to conjure up a situation-type, however, and not just a situation in which a specific prohibition is typically encountered but also a situation in which a certain auditory rehearsal is encountered.
We may crudely overstate the case and suppose that the child acquires the habit of saying to itself (aloud--why not?) "Hot!" "Don't touch the stove!" without much of an idea what it means, but as an associated part of the drill that goes with approaching and then avoiding the stove, but also as a sort of mantra that might be uttered at any other time. After all, children are taken with the habit of rehearsing words they have just heard, in and out of context, building up recognition-links and association paths between the auditory properties and concurrent sensory properties, internal states, and so forth. That's a laughably crude sketch of the sort of process that must go on, but it could have the effect of initiating a habit of what we might call semi-understood self-commentary. The child, prompted initially by some insistent auditory associations provoked by its parents' admonitions, acquires the habit of adding a sound track to its activities, "commenting"; the actual utterances would consist at the outset of large measures of "scribble" (the nonsense-talk children engage in), real words mouthed with little or no appreciation of their meaning, and understood words. There would be mock exhortation, mock prohibition, mock praise, mock description, and all these eventually mature into real exhortation, prohibition, praise and description. But the habit of adding the "labels" would be driven into place before the labels had to be understood, even partially understood.
It is such initially "stupid" practices, the mere "mouthing" of labels in circumstances appropriate and inappropriate, I am suggesting, that could soon be turned into the habit of redescription. As the child lays down more associations between the auditory and articulatory processes, on the one hand, and other patterns of concurrent activity on the other, this would create "nodes" of saliency in memory; a word can become familiar even without being understood. And it is these anchors of familiarity that could give a label an independent identity within the system. Without such independence, labels are "invisible".
Labeling is a non-trivial cognitive tactic, and it is worth a moment's digression to consider the conditions under which it works. Why does anyone ever label anything, and what does it take to label something? Suppose you were searching through thousands of boxes of shoes, looking for a housekey that you had good reason to believe had been hidden in one of them. Unless you are an idiot, or so frantic in your quest that you cannot pause to consider the wisest course, you will devise some handy scheme for cutting down your task by preventing you from looking more than once in each box. One way would be to move the boxes from one stack (the unexamined stack) to another stack (the examined stack). Another way, potentially more energy efficient, is to put a check mark on each box as you examine it, and then adopt the rule never to bother looking in a box with a check mark on it. A check mark is a way of making the world simpler; it cuts down on your cognitive load by giving you a simple perceptual task in place of a more difficult--perhaps impossible--task. Notice that if the boxes are all lined up in a single row, and you don't have to worry about unnoticed re-orderings of the queue, you don't need to put check marks--you can just work your way from left to right, using the simple distinguisher nature has already provided you, the left/right distinction.
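The shoe-box strategy can be rendered as a small sketch (the box count and contents here are invented): the check mark off-loads the record-keeping into the world, replacing the hopeless memory task "have I already looked in this box?" with a cheap perceptual test.

```python
import random

# Toy version of the shoe-box search: marks in the world stand in
# for memory. Box contents and counts are invented for illustration.

random.seed(1)
boxes = {i: None for i in range(1000)}
boxes[random.randrange(1000)] = "housekey"

marked = set()   # the check marks live "in the world", not in memory

def search():
    checks = 0
    while True:
        box = random.randrange(1000)        # no systematic ordering
        if box in marked:                   # cheap perceptual test
            continue                        # rule: never re-open a marked box
        checks += 1
        marked.add(box)                     # label it as examined
        if boxes[box] == "housekey":
            return box, checks

found, checks = search()
# Marking guarantees no box is ever opened twice, so checks <= 1000,
# even though the searcher remembers nothing about where it has been.
```

The left-to-right alternative in the text corresponds to simply iterating over `range(1000)`: when the world already supplies an ordering, no marks are needed.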
But now let's concentrate on the check mark itself. Will anything do as a checkmark? Clearly not. "I put a faint smudge somewhere on each box as I examine it." "I bump the corner of each box as I examine it." Not good choices, since the likelihood is too high that something else may already have inadvertently put such a mark on a box. I need something distinctive, something that I can be confident is the result of my labeling act, not some extraneously produced blemish. It should also be memorable, of course, so I will not be beset by confusions about whether or not this is my label, and if so, what policy I meant to be following when I adopted it. Only under these conditions will a label fulfill its raison d'etre, which is to provide a cognitive crutch, off-loading a bit of cognitive work into the environment. This is perhaps the most primitive precursor of writing, the deliberate use of parts of the external world as "peripheral" information-storage systems.
An interesting--and largely unasked, let alone unanswered--question is whether non-human animals ever engage in deliberate labeling or marking of this sort. There are the scent trails of insects and other animals, of course, and one can easily recognize their capacity to make various otherwise difficult cognitive tasks extremely easy. Many animals stake out territory by marking the boundary with urine or other idiosyncratic productions, but these are at least primarily for the information of other animals, not aides-memoire for themselves. Clark's nutcrackers are superbly good at locating the caches of seeds they have left behind, and they may use the debris they leave behind when they empty a cache as a sign to themselves that they needn't re-explore it (just like the shoe box check mark), but even if this is a good case (and I am tempted to think it is) it is a case of opportunistic exploitation of a disturbance that would be made in any case for other reasons. That is nature's way, of course, but the question is whether any other creatures--other than ourselves--have discovered the practice of creating labels for things for the express purpose of making their cognitive tasks easier.
But now I want to return to the practice of internal labeling, and the moral I want to draw from the discussion of external labeling is that labels need to be independently and readily identifiable, which means in this context that they must be ready enhancers of sought-for associations that are already to some extent laid down in the system. Beyond that, they can be arbitrary, and their arbitrariness is actually part of what makes them distinctive--there is little risk of failing to notice the presence of the label--it doesn't just blend in to its surroundings like a dent in the corner of a shoebox. It wears the deliberateness of its creation on its sleeve.
The habit of semi-understood self-commentary could, I am suggesting, be the origin of the practice of deliberate labeling, in words (or scribble-words or other private neologisms), which in turn could lead to a still more efficient practice, dropping all or most of the auditory and articulatory associations and just relying on the rest of the associations (and association-possibilities) to do the anchoring. The child, I suggest, can abandon such vehicles as out-loud mouthings, and create private, unvoiced neologisms as labels for features of its own activities.
We can take a linguistic object as a found object (even if we have somehow blundered into making it ourselves, rather than hearing it from someone else), and store it away for further consideration, "off line". This depends on there being a detachable guise for the label, something that is independent of meaning. Once we have created labels, and the habit of "attaching" them to experienced circumstances, we have created a new class of objects that can themselves become the objects of all the pattern-recognition machinery, association-building machinery, and so forth. Like the scientists lingering retrospectively over an unhurried examination of the photographs they took in the heat of experimental battle, we can reflect, in recollection, on whatever patterns there are to be discerned in the various labeled exhibits we dredge out of memory.
As we improve, our labels become ever more refined, more perspicuous, ever better articulated, and the point is finally reached when we approximate (at least--and in fact at best) to the near magical prowess we began with: the mere contemplation of a representation is sufficient to call to mind all the appropriate lessons; we have become understanders of the objects we have created. We might call these artifactual nodes in our memories, these pale shadows of articulated and heard words, concepts. A concept, then, is an internal label which may or may not include among its many associations the auditory and articulatory features of a word (public or private). But words, I am suggesting, are the prototypes or forebears of concepts. The first concepts one can manipulate, I am suggesting, are "voiced" concepts, and only concepts that can be manipulated can become objects of scrutiny for us.
Do animals have concepts? Does a dog have a concept of cat? Or food, or master? Yes and no. No matter how close extensionally a dog's "concept" of cat is to yours, it differs radically in one way: the dog cannot consider its concept. It cannot ask itself if it knows what cats are; it cannot wonder whether cats are animals; it cannot attempt to distinguish the essence of cat (by its lights) from the mere accidents. Concepts are not things in the dog's world in the way cats are. Concepts are things in our world because we have language.
Language creates a new class of objects. This is trivial; words and sentences. But it is also not trivial, for these objects in turn permit another class of objects to become salient: concepts. No languageless mammal can have the concept of snow the way we can, because such a mammal--a polar bear, let's say--has no way of considering snow "in general" or "in itself", and not for the trivial reason that it doesn't have a (natural language) word for snow, but because without a natural language, it has no talent for wresting concepts from their interwoven connectionist nests.
It is a commonplace of philosophy that concepts are language-neutral; the concept of snow is the concept of Der Schnee is the concept of la neige is the concept of la neve. Even if, as persistent myth has it, the Eskimos have 17 (or 14 or 35 or . . .) different words for snow, and hence different concepts of snow, there is a concept of snow that is shared or sharable, easily if the languages in question are European, more indirectly under other circumstances.
But it is a mistake to suppose that that concept, or anything like that concept, is thinkable about by a polar bear. There are good reasons for attributing to polar bears a sort of concept of snow. For instance, polar bears have an elaborate set of competences for dealing with snow in its various manifestations that are lacking in lions. How should we capture the snow-related information-structures in the polar bear (but not in the lion) if not by attributing a concept of snow to the polar bear's mind? We can speak of the polar bear's implicit or procedural knowledge of snow, and we can even investigate, empirically, the extension of the polar bear's embedded snow-concept, but then bear in mind (no pun intended) that this is not a wieldable concept for the polar bear. Endnote 3
Coda: The Implications for "Cognitive Closure"
It has been plausibly maintained, by Nicholas Humphrey, David Premack Endnote 4 and others, that chimpanzees are natural psychologists--what I would call second-order intentional systems--but if they are, they nevertheless lack a crucial feature shared by all human natural psychologists, folk and professional varieties: they never get to compare notes. They never dispute over attributions, and ask for the grounds for each other's conclusions. No wonder their comprehension is so limited. Ours would be, too, if we had to generate it all on our own. Science is not just a matter of making mistakes, but of making mistakes in public--making mistakes for all to see, in the hopes of getting the others to help with the corrections.
I think it is very likely that every content that has so far passed through your mind and mine, as I have been presenting this talk, is strictly off limits to non-language-users, be they apes or dolphins, or even non-signing Deaf people. If this is true, it is a striking fact, so striking that it reverses the burden of proof in what otherwise would be a compelling argument: the claim, first advanced by the linguist Noam Chomsky, and more recently defended by the philosophers Jerry Fodor and Colin McGinn (1990), that our minds, like those of all other species, must suffer "cognitive closure" with regard to some topics of inquiry. Spiders can't contemplate the concept of fishing, and birds--some of whom are excellent at fishing--aren't up to thinking about democracy. What is inaccessible to the dog or the dolphin may be readily grasped by the chimp, but the chimp in turn will be cognitively closed to some domains we human beings have no difficulty thinking about. Chomsky and company ask a rhetorical question: What makes us think we are different? Aren't there bound to be strict limits on what Homo sapiens may conceive? This presents itself as a biological, naturalistic argument, reminding us of our kinship with the other beasts, and warning us not to fall into the ancient trap of thinking "how like an angel" we human "souls," with our "infinite" minds, are.
I think that on the contrary, it is a pseudo-biological argument, one that by ignoring the actual biological details, misdirects us away from the case that can be made for taking one species--our species--right off the scale of intelligence that ranks the pig above the lizard and the ant above the oyster. Comparing our brains with bird brains or dolphin brains is almost beside the point, because our brains are in effect joined together into a single cognitive system that dwarfs all others. They are joined by one of the innovations that has invaded our brains and no others: language. I am not making the foolish claim that all our brains are knit together by language into one gigantic mind, thinking its transnational thoughts, but rather that each individual human brain, thanks to its communicative links, is the beneficiary of the cognitive labors of the others in a way that gives it unprecedented powers. Naked animal brains are no match at all for the heavily armed and outfitted brains we carry in our heads.
1. The same distinction (roughly) is drawn by Kandel and Hawkins, 1992 (p. 80), who are primarily concerned with modeling the neuronal building blocks of neural tracts capable of such learning.
2. Clark and Karmiloff-Smith discuss Mozer and Smolensky's promising idea of "skeletonizing" networks, to extract the essential knowledge in them. They add the useful idea of skeletonizing copies of the networks, leaving the detailed, robust parent network intact for use in its original domain, and then, somehow, forming "new structured representations" tied to these skeletonized copies. I am suggesting that the sophistications necessary to develop such a process are exapted from language-processes.
3. The discussion by Clark and Karmiloff-Smith of the ideas of Gareth Evans (1982) is relevant here--but there is no time on this occasion for me to discuss it.
4. Premack's work on chimpanzees has some tantalizing suggestions relevant to my speculations here. Most notably, he showed that chimpanzees who had been given extensive training in (what he interprets to be) language are better than non-trained chimpanzees at certain cognitive puzzles that are independent of the terms they were trained to use. Two caveats: Premack draws our attention to the fact that in spite of their extensive training, his chimpanzees never gained anything remotely as powerful as a normal child's language ability; and he specifically points out that "we do not find in the ape anything remotely comparable to . . . the metacognitive tasks that Karmiloff-Smith (1979) has demonstrated in the child" (1986, p. 143). Nevertheless, if the claim is borne out that language(-like) training enhances more general ability at reasoning, this would strengthen my suggestion that, contrary to Clark and Karmiloff-Smith, it is language learning, but not specific language-using, that is a necessary condition for higher-level redescription. But I do not claim to have demonstrated here that I am right and they are wrong; I merely claim to have opened up the question for further investigation.
Clark, A., and Karmiloff-Smith, A., "The Cognizer's Innards," forthcoming in Mind and Language.
Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
Dennett, D. C., 1991, "Ways of Establishing Harmony," in B. McLaughlin, ed., Dretske and his Critics, Oxford: Blackwell.
Dennett, D. C., 1992, "La Compréhension Artisanale," in Daniel C. Dennett et les Stratégies Intentionnelles, Lekton, 2, Univ. du Québec à Montréal.
Dretske, F., 1991, "Replies," in B. McLaughlin, ed., Dretske and his Critics, Oxford: Blackwell.
Evans, G., 1982, The Varieties of Reference, Oxford: Oxford Univ. Press.
Kandel, E. R., and Hawkins, R. D., 1992, "The Biological Basis of Learning and Individuality," Scientific American, 267, September 1992, pp. 79-86.
Karmiloff-Smith, A., 1979, A Functional Approach to Child Language, Cambridge: Cambridge Univ. Press.
McGinn, C., 1990, The Problem of Consciousness, Oxford: Blackwell.
Penfield, W., 1958, The Excitable Cortex in Conscious Man, Liverpool: Liverpool Univ. Press.
Premack, D., 1986, Gavagai! or the Future History of the Animal Language Controversy, Cambridge, MA: MIT Press/A Bradford Book.