Review of:
The Feeling of What Happens by Antonio Damasio
A Universe of Consciousness by Gerald M. Edelman and Giulio Tononi
The Mysterious Flame by Colin McGinn
The Cultural Origins of Human Cognition by Michael Tomasello
The Mind Doesn't Work That Way by Jerry A. Fodor
To appear in The Sciences (New York Academy of Sciences), April 2001

Explaining the Mind: Problems, Problems

Stevan Harnad

Intelligence/Agents/Multimedia Research Group
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ UNITED KINGDOM
harnad@cogsci.soton.ac.uk
http://www.cogsci.soton.ac.uk/~harnad/



Three of the books under review are about consciousness, one is about meaning, and one is about language, but the topics are inter-related, as we shall see. The reader may find it surprising to learn that it has lately become fashionable in cognitive science to call the problem of consciousness the "hard problem," and the problems of meaning and language (and brain function and behavior) the "easy problems." Everything is relative. The "easy problems" may be easier than the "hard one," but that certainly does not make them any easier than most other scientific problems.

What is the hard problem, then? It's as old as the human mind, it's probably lurking behind our ideas about religion and the immateriality and immortality of the soul, and it has been pondered since the advent of philosophy, where it is usually called the "mind-body" problem. Unfortunately, the word "mind" is ambiguous here, and "body" a misnomer. Some philosophers think it's more useful to call it the "mental-physical" problem instead, but even that doesn't quite do the trick.

The problem itself arises when we try to relate one sort of "thing" (mental things) to another sort of thing (physical things). We know that physical things are not just "bodies": They are matter and energy, the stuff that physicists (and chemists and biologists and engineers) study and explain to us with their familiar functional, cause-and-effect explanations (e.g. transfer of momentum from one billiard ball to another, chemical reactions, liver function). And we know exactly what mental "things" are, too: They are what is going on in our heads when we are awake: thoughts, experiences, feelings.

The problem is: How do we put those two kinds of things together? Are they both the same kind of thing? Are thoughts/experiences/feelings just matter/energy, somehow? If so, how? (I pause to let the reader test whether, mirabile dictu, he can provide a satisfactory answer to this hard question where everyone else so far has failed. . . .)

Explaining How and Why We Are Not Zombies

If the mental and the physical are not the same kind of thing, what is the relation between them? We know they are exactly correlated, but correlation is not explanation. How does the mental fit into the physical world causally? Is it an extra "force," like gravitation? Those who resort to invoking the paranormal in the face of the hard problem reply "yes," and boldly proclaim the "telekinetic" power of the mind (rather like Uri Geller's spoon-bending, except that it's mind-over-matter even when we bend the spoon with our fingers -- if we are really moving our fingers because we feel like it).

The trouble with this easy solution to the hard problem is that it has some uneasy consequences: It is at odds with the conservation of mass and energy, causal laws of physics that have an awful lot of evidence supporting them, all over the universe. To regard the mental as a telekinetic force, we have to be ready to believe that some rather remarkable things are going on on our small planet: Things move because they are willed to move, not just because of the usual transfer of energy. And what is the source of this telekinetic force? That's anyone's guess, but it can't be just our brains, because our brains, like our hearts and our livers, are just that ordinary stuff, matter/energy, structure/function. (The telekineticists reason that if our brains were the cause of all our motions after all -- in other words, if telekinesis really didn't happen -- then it could never be true that we move because we feel like it; it would just feel like that was how and why we were moving.)

I will not pursue the telekinetic option any further (it is often called "dualism"), because, in exchange for "solving" the hard problem, it seems to raise even harder problems, pitting itself against all the rest of science. Suffice it to say that none of the authors of the books under review would endorse telekinetic dualism. They are all committed to explanations that stay within the natural bounds of matter and energy, structure and function--bounds set by current theory and evidence in physics, biology and engineering. Yet let us admit that telekinesis certainly feels like the right explanation for our minds, and what they do, and how. It's just that it's an explanation that unfortunately does not fit with the scientific explanation of everything else--and hence would itself stand in need of a good deal more hard scientific explanation in its own right.

Three of the books under review try to take on the hard problem directly. Antonio Damasio does it using brain anatomy and physiology; Gerald Edelman and Giulio Tononi use computational modeling of brain function. Colin McGinn, in contrast, does not try to solve the hard problem at all, though he has an excuse: he argues that, although the problem does have a solution, the human brain is incapable of finding it (and would not understand it if it did). The other two books, by Michael Tomasello and by Jerry Fodor, do not venture to take on the hard problem at all. Fodor thinks it would be futile but (unlike McGinn) does not say why (he spends his time instead trying to show why we may not even be able to solve some of the easy problems!). And Tomasello does not even mention consciousness. Damasio, however, like Edelman and Tononi, strives not to beg the question.

There are basically two ways to beg the question. One way is to change the subject, swap an easy problem for the hard one (but keep calling it the hard one anyway), and then solve that easy problem instead. The second way is simply to provide an easy solution, but interpret it as if it had solved the hard problem. Damasio does the first and Edelman and Tononi do the second.

At the outset, it looks as if Damasio will not beg the question--as if he is not merely going to explain intelligence or language or brain function or behavior. All of those could be explained in principle even if there were no hard problem at all: For if we had the very same intellectual and linguistic capacities we have, but we were not conscious (no mental states, no thoughts, experiences, feelings), then there would still be the "easy" problem of explaining those capacities in terms of brain function. But that would just amount to ordinary ("easy") science. Let us call that kind of explanation a structural-functional explanation (functional explanation for short). Functional explanations are perfectly compatible with the matter-energy explanations given by physics, biology and engineering.

What makes the hard problem hard is that giving functional explanations for bodily capacities is not all there is to it: We are not just zombies with certain intellectual and linguistic skills. We are conscious; that is, we do have mental states: thoughts, experiences, feelings. Let's call what it is that makes mental states mental "feelings," for short. If we were nonfeeling zombies, there would be no hard problem. What makes the hard problem hard is precisely the mysterious difficulty of explaining feelings functionally. So the "mind-body" problem is actually the "feeling-function" problem.

Why is it so difficult (if not impossible) to explain feelings in terms of function? Because a functional explanation is always a cause-effect explanation, showing how and why something works the way it does. A functional explanation is fine for ordinary, nonfeeling matter-energy: the stuff investigated by physics, biology and engineering. But every time we try to explain a feeling functionally, it turns out that the function alone can do the cause-effect job just fine (thank you very much!), and the feeling just falls by the wayside, unexplained.

For example, a functional explanation of pain might go something like this: Pain is a signal indicating that tissue has been injured. It is useful for an organism's survival and reproduction to minimize tissue injury, to learn and remember to avoid what has caused injury in the past, to avoid contact between a currently injured body part and other objects while the part is still damaged, and so forth. The sensorimotor and neural machinery for accomplishing all this, including the computational mechanism that would do the learning, the remembering, the selective attending and so forth, could all be described, tested, confirmed and fully understood. The only part that would remain unexplained is why pain feels like something: The functional explanation accounts for the functional facts, but the feeling is left out. And so it goes: every time you try to give a functional explanation of feeling, the feeling itself turns out to be functionally superfluous (unless you happen to be a telekinetic dualist!).
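To see just how complete the functional story can be without the feeling, consider a minimal sketch (in Python; the class and all its behaviors are purely illustrative stand-ins, not anyone's model of nociception):

    # A purely functional "pain" mechanism: damage is detected, the damaged
    # part is withdrawn, and the damaging stimulus is learned and avoided.
    # Every causal step is implemented; nothing anywhere in it is felt.

    class Organism:
        def __init__(self):
            self.avoidance = {}          # learned: stimulus -> avoidance strength
            self.injured_parts = set()   # currently damaged body parts

        def damage_signal(self, part, stimulus):
            # Tissue-injury transducer: register the damage, withdraw, learn.
            self.injured_parts.add(part)
            self.withdraw(part)
            self.avoidance[stimulus] = self.avoidance.get(stimulus, 0) + 1

        def withdraw(self, part):
            print("withdrawing", part)

        def approach(self, stimulus):
            # Known hazards are avoided; everything else is approached.
            if self.avoidance.get(stimulus, 0) > 0:
                print("avoiding", stimulus)
            else:
                print("approaching", stimulus)

    o = Organism()
    o.damage_signal("left paw", "thorn bush")   # -> withdrawing left paw
    o.approach("thorn bush")                    # -> avoiding thorn bush

The sketch does all the causal work the functional explanation assigns to pain (detection, withdrawal, learning, avoidance), yet the feeling figures nowhere in it, which is just the point.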

In short, we know that we are not feelingless zombies. The hard problem is explaining how and why we are not. And because how's and why's are purely functional matters, that would seem to leave only two possibilities: (1) ("epiphenomenalism") that feelings are not functional but merely "decorative," piggy-backing (for some inexplicable, because nonfunctional, reason) on certain functions; or (2) (dualism) that feelings are telekinetic. The hard problem is finding an explanation for feelings that is neither (1) nor (2). My own view is that this is simply impossible. How do our authors fare with this?

Damasio's Error: Motions, Emotions and Unfelt Feelings

Damasio's title, The Feeling of What Happens, sounds as if he will be confronting the hard problem of feeling head on. And his book does provide a great deal of new, insightful and illuminating data and theory about the brain areas and activities correlated with feeling, particularly the feeling of the "self," and about the remarkable ways in which those areas and activities can diminish or break down in sleep, coma, vegetative state, akinetic (motionless) mutism and epileptic automatism. We are all zombies when we are in deep, dreamless sleep; are we zombies in any of the more active states, too? These questions and answers are fascinating, but they do not include the hard one.

Maybe we are indeed feelingless zombies when we are in the grip of an epileptic automatism, maybe we are not. (It is hard to know for sure without being the epileptic undergoing the automatism; and even if you were, you wouldn't be able to speak at the time, and afterwards you wouldn't be able to recall! So, without telepathic powers, no neurologist could ever know for sure whether or not a patient was in a zombie state: That is called the "other-minds problem," the flip side of the mind-body problem.)

Damasio's functional anatomy of feeling states certainly tells us a good deal about what their brain and behavioral correlates are: When this part of the brain is active, you feel this and you can do that; when you lose this part of the brain, you can no longer feel this or do that. This is of great interest to the clinician trying to do diagnosis, prognosis and treatment. It is also useful to the victims of brain injury, the families of such patients and to everyone interested in how their own brain works. In some cases, for instance, in the brain anatomy of the so-called sense of self, Damasio's findings may help theorists design functional models that actually have the capacities that go with having a sense of self. But those are all "easy" problems. Do Damasio's findings shed any light on the hard problem of how and why we feel at all?

Alas, they do not, and I think I can pinpoint where the question gets begged: Damasio is intent on providing a bottom-up explanation of feelings, from the most primitive feeling-state of akinetic mutism to the highest-order feeling-states of a philosopher like Descartes when he is reflecting on the nature of mind. But explaining the variations along such a hierarchy is the easy part; the hard part is explaining how and why any of it is felt at all. The critical transition, in other words, is between nonfeeling and feeling, and that is the transition Damasio completely overlooks.

Instead, Damasio rests his hierarchy of feeling-states on a highly nonstandard (and, I think, in the end, incoherent) notion of emotion. On the face of it, "emotion" is just a synonym for a certain kind of feeling. (Other kinds of feelings include sensations, such as seeing something blue or hearing something loud; hybrid emotion-sensations, such as feeling pain; desire states, such as wanting something; psychomotor states, such as willing an action; and complex feeling-thinking states, such as believing, doubting or understanding.) But Damasio uses emotion in an equivocal way, in an attempt to bridge the unbridgeable gap between nonfeeling and feeling. His bottom-level emotions (readers should confirm this for themselves) are either just motions--in other words movement tendencies and their underlying brain activities, in which case they are no kind of feeling at all, and leave us as clueless as before about how to bridge the gap--or, worse, they are "unfelt feelings," which is a contradiction in terms. Either way, it is only by invoking this blurred notion of emotion that Damasio gives the (illusory) impression of having made some sort of successful transition from the unfelt to the felt.

Descartes (whom some people wrongly blame for the idea of dualism) was the subject of an earlier book of Damasio's, titled Descartes' Error. In that book Damasio argued that Descartes had made the mistake of trying to separate what in the brain is inseparable: the psychic (mind) and the somatic (body). In the workings of the brain, Damasio pointed out, there is no such duality of function. That is correct; but let us not forget that all of the brain, both structure and function, is "somatic," and that's precisely Damasio's error with motions and emotions in The Feeling of What Happens. For the functional part of emotion, the somatic part, is indeed, as Damasio maintains, just motion! But the felt (psychic) part is something else: something 100 percent correlated with brain structure and function, to be sure--but, again, correlation isn't explanation. Correlations need a causal explanation, and the only candidate explanation, namely telekinetic dualism, is a nonstarter. Hard luck.

Edelman and Tononi's Hermeneutics

So Damasio has unfortunately begged the hard question with his motion/emotions and his unfelt feelings. Do Edelman and Tononi manage to do any better? They too set out promising not to beg the question the way others have done before them. They want to make sure they explain the difference between real seeing and, say, the activity of an optical transducer such as a photo-cell. It will not do, as they correctly point out, simply to declare that one's favorite functional mechanism "feels," any more than it will do simply to declare that an optical transducer "sees." In both cases, the how and why of the feeling itself must first be explained.

But then Edelman and Tononi go ahead and beg the hard, "feeling" question anyway. They describe some very interesting functional networks -- "distributed, re-entrant" ones -- which, they hypothesize, have some powerful functional capacities (some already demonstrated experimentally, many of them not yet). They also describe how such networks are brain-like in many ways. This is all important and exciting, but it is still all just functional. The nagging question still remains: How and why do the feelings come in (other than as the usual mysterious, unexplicated correlations)? Without an answer to that question, Edelman and Tononi's discussion is just an exercise in hermeneutics: the functional mechanism that correlates with feeling is interpreted as actually being the feeling itself, and hence as functionally explaining the feeling; whereas in reality it merely explains the functions that are mysteriously correlated with the feeling, nothing more.

Edelman and Tononi's network model is largely a mechanism for learning categories. A mechanism with all the functional capacities the authors attribute to their model will be an important contribution to cognitive science if it can indeed be shown to have all those capacities. But that is not what Edelman and Tononi are out to show here. In A Universe of Consciousness they simply try to persuade the reader that the functions of their network are somehow an explanation of feeling. The locus of Edelman and Tononi's question-begging can again be pinpointed: An essential function of their model is discrimination, and their treatment of discrimination is equivocal in exactly the same way Damasio's treatment of motions/emotions is.

To discriminate is to be able to tell things apart. Psychophysicists speak about the JND, or just-noticeable difference--the smallest sensory difference that people can feel. Feel? But of course psychophysics, being an ordinary functional science like all the others, really only deals with the smallest sensory difference people can detect and respond to. That could just as well apply to an optical transducer. The fact that it also happens to feel like something when one detects those differences is another matter, and Edelman and Tononi's model comes no closer to explaining the how and why of that than an optical transducer does.
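The functional content of discrimination can in fact be written down in a few lines. Here is a toy sketch (Python; the threshold value is arbitrary), which a photocell circuit implements just as well as a brain does:

    # Functional discrimination: report whether two stimulus intensities
    # differ by at least one just-noticeable difference (JND). Nothing in
    # the function requires that the detecting be felt by anything.

    JND = 0.05  # smallest detectable difference (arbitrary units)

    def discriminate(a: float, b: float) -> bool:
        """True iff the difference between a and b is detectable."""
        return abs(a - b) >= JND

    print(discriminate(0.50, 0.56))  # True: at least one JND apart
    print(discriminate(0.50, 0.52))  # False: below threshold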

Two other points are worth making in passing about Edelman and Tononi: (1) They cast some of their argument in terms of another fashionable problem, the so-called binding problem: How does the brain manage to "bind" all the simultaneous sensations it receives while perceiving an object into one unitary percept of that object? But would there be a binding problem at all if there were nothing it felt like to perceive an object--if our brains just went about doing all their functional business of moving, categorizing and discriminating without feeling anything while doing it? Might the binding problem be just another variant of the (hard) question of how and why we are not zombies? (2) I personally did not glean much insight from Edelman and Tononi's paraphilosophical koan, "Being Precedes Describing."

McGinn: We Don't Have the Brains

Colin McGinn suggests that the reason our species must resort to question-begging or koans in the face of the hard problem is that we just don't have the brains to solve it. Now let us immediately concede that he could be right about that--but by the same token, the creationists could be right, too. There may be mysteries beyond the grasp of our intellects.

But why should the feeling-function problem be one of them? To turn McGinn's suggestion into anything more than an arbitrary conjecture, one would have to answer a how-and-why question every bit as hard as the hard problem itself, namely, how and why is the brain unable to solve the hard problem? To this reader, McGinn's answers unfortunately read like Just-So stories, leaving us no less mystified than we were before being informed that our mystification was innate. That's about as unhelpful as informing us that the brain does cause feelings somehow (but not explaining how or why).

For surely the latter is true: The brain does somehow cause feelings; no nondualist doubts that. The hard part is explaining how and why. Now McGinn's position is interesting in the sense that he is declaring, positively (but nondemonstratively) that there is an answer, but it just happens to be one that we are not equipped to grasp. By way of evidence, he cites other kinds of things that our brains are not equipped to grasp: We cannot know, for instance, what it feels like to be a bat (with its extra sonar sense), any more than someone born blind can know what it feels like to see. But that's cheating! It amounts to saying that a certain feeling is simply missing from the human repertoire, and that certain feeling is: what it feels like to know the solution to the feeling-function problem!

At the very least, to give this speculation some substance, McGinn would have to offer some hint of what the solution to the hard problem might look like, as well as how and why it could be the solution, even though it did not feel like the solution. For, on the face of it, all we are asking for here is a functional how-and-why explanation of something. Such explanations tend to be objective ones, which do not depend on how they "feel" to you, any more than the truth of a mathematical proof (as Descartes also famously noted) depends on whether or not it feels true to you. If there is indeed a functional explanation of feeling, it ought to be possible at least to state it (and test it, functionally), even if, because of our brain limitations, such a statement and test would not be sufficient to dispel from our minds the attendant mystery about the hard problem.

But perhaps McGinn means something even stronger than this: not just that we lack the sense to see that something is a solution to the hard problem even when it is staring us in the face, but that we even lack the means to state that solution. But that would be very odd, because it would be a limit not just on the nature of our brains, but on the expressive power of language and mathematics (both of which, though rooted in our brains, have universal, brain-independent powers, too): I may not be able to feel what it is like to be a bat, but surely I should be able to state all the functional facts about it (in fact, that's exactly how we understand the bat's sonar sense, and there is absolutely no mystery there, just a feeling that we know perfectly well that we humans happen to lack!).

No, I don't think McGinn's conjecture helps us with the hard question at all: If the question is, "how and why do we feel?", then his reply that we are not equipped to know simply raises another question, just as hard: How and why not?

Before leaving the hard problem and moving on to the two books that address easier problems, I will venture an answer: It is not because we have the wrong brains. It's because of the nature of functional explanation, the nature of feeling, and possibly also the nature of causality. The only alternative to telekinesis (in which feelings would have an independent causal power of their own) is that feelings do not have an independent causal power of their own (epiphenomenalism). They just are. (We know they exist; that's not in dispute.) Moreover, they pose no problem to the rest of science if they are simply side-effects of matter and energy, structure and function, not causes in their own right.

Make no mistake: We are no less mystified by my own conclusion that the "function" of feelings is merely decorative, but at least epiphenomenalism moots any further how-and-why questions. And it implies that the reason the hard problem is insoluble is that (1) telekinesis itself is false and (2) feeling is immune to (nontelekinetic) functional explanation (hence it is inexplicable). But we are still left with the sense of mystery about how and why this should be so--a mystery that could perhaps be dispelled only if we did have an extra sense, a telepathic sense, of the way matter-energy-structure-function causes and constitutes feeling. Such a hypothetical sense, however, would be just as self-contradictory, hence impossible, as a functional explanation of feeling, because of the essentially first-person nature of feeling: The only feelings you can feel are your own. ("I feel your pain" is just a metaphor.) So any "telepathic" sense I had of how nonfeeling causes or constitutes feeling could only be an illusion. I can feel only what I feel, not how I (or anyone else) feel(s).

Tomasello: Pantomime vs. Propositions

The question Tomasello is trying to answer is unapologetically one of the "easy" ones: How and why does our species, and no other, master language? In the past, other theorists have begged the question of consciousness by suggesting that having consciousness and having language are somehow one and the same thing, but Tomasello will have no part of that view. He recognizes that animals not only have feelings but are also very smart. So in many ways the question about language is: How and why do we differ from other animals in that respect? What is the functional specialization that makes us capable of language, and them incapable?

To answer that question Tomasello studies the behavioral, social, conceptual and communicative capacities of (1) apes and (2) children before and after the age at which they acquire language. His comparative studies implicate a few critical capacities: the capacity to imitate others; the capacity to "mind-read" (to sense what others are seeing, wanting or thinking); and the capacity to monitor and coordinate joint attention with others: to sense that both of you are looking at or thinking about the same thing, and to sense that the other one senses that too. (Damasio's mechanisms for the sense of self would come in very handy here). No nonhuman species has that set of capacities in full, and not even the human child does until the age when language usually begins. So Tomasello concludes that those are the capacities that make up the functional basis of language.

These findings are very important, and, as Tomasello shows, the capacities he has isolated form a basis for human culture. But do they explain language? (There still remains the separate question of grammatical capacity--another easy problem--but let us leave that aside, as a functionally autonomous module, until we get to Fodor's book.) Has Tomasello really pinpointed the functional basis for language (apart from grammar) here? I would like to suggest that he has not. For human language is, among other things, the capacity to express any proposition with a string of symbols--"The cat is on the mat," "Feeling cannot be explained functionally," "2 + 2 = 4"--plus the capacity to understand symbol strings as expressing propositions.

But if you look closely at the capacities Tomasello has singled out (and even if you design a functional model that implements those capacities), you will find that you have a mechanism that is capable of producing and sharing social pantomime. Such a mechanism could act out present and future scenes, draw people's attention to this or that, share all the kinds of data that can be shared through this kind of joint activity--but this does not provide a clue about how to get from pantomime to propositions. Even acting out the cat's being on the mat is simply that: a pantomime of the cat being on the mat, in much the way that the cat's actually being on the mat is a pantomime of itself.

In short, entities with Tomasello's functional capacities remain in the analog world of events, and copies and re-enactments of those events. Such capacities may well be necessary preconditions for language. But there is no language proper until we make the transition from this analog world of social imitation to the arbitrary, symbolic world of propositions. Perhaps Tomasello's functional resources need to be augmented with Edelman and Tononi's: If their category-learning network has the power they say it has, it should be able to learn to detect and identify cats and mats and "on-ness." So far that would just name them. But if it can also string those names into propositions that describe events and can be construed as either true or false, then we may indeed be closer to the functional substrate for language capacity.
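A toy sketch can make the missing transition concrete. Suppose (with Edelman and Tononi) that detectors for "cat," "mat" and "on-ness" have already been learned; the Python functions below are mere stand-ins for such detectors, not a model. Once the categories have arbitrary names, the names can be strung into a proposition that is true or false of a scene, which is something no pantomime can be:

    # From grounded names to a proposition: hypothetical learned detectors
    # name things and relations; composed, they yield a symbol structure
    # ("the cat is on the mat") that can be evaluated as true or false.

    def is_cat(x):   return x.get("kind") == "cat"   # stand-in for a learned detector
    def is_mat(x):   return x.get("kind") == "mat"
    def is_on(x, y): return x.get("position") == ("on", y["id"])

    def cat_is_on_mat(scene):
        """True iff some cat in the scene is on some mat in the scene."""
        return any(is_cat(x) and is_mat(y) and is_on(x, y)
                   for x in scene for y in scene)

    scene = [{"id": 1, "kind": "cat", "position": ("on", 2)},
             {"id": 2, "kind": "mat", "position": ("on", 0)}]  # 0 = the floor
    print(cat_is_on_mat(scene))  # True: the proposition holds of this scene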

Fodor's Skepticism about Explaining the Mind

Grounding language in category-learning, however, is an enterprise about which our last author, Jerry Fodor, is somewhat skeptical. To understand what Fodor is driving at, you have to know where he is coming from. Fodor, like McGinn, is a philosopher (in fact they are now both at Rutgers University in New Brunswick, New Jersey), but his work is in part inspired by the monumental work of the linguist Noam Chomsky of MIT on grammar. Chomsky showed that much of the human capacity for grammar, rather than being learned, arises from a complex inborn structure in the brain. Furthermore, that inborn "universal grammar," or UG, probably did not evolve the usual way, the way that fins or wings did: instead, UG is somehow an intrinsic part of the structure of matter, ever since the Big Bang, or possibly even a necessary part of the eternal Platonic world of logic and mathematics, constraining matter whenever it is configured into a mechanism capable of language.

Now this view of Chomsky's is highly controversial, but it has a great deal of evidence supporting it: It does look as if UG isn't and cannot be learned by the language-learning child (as Chomsky has long been pointing out, the trial-and-error possibilities are far too large, and the child's actual learning time and experience far too small); for similar reasons, it is hard to imagine how UG could have evolved in the usual way (but that conclusion is perhaps not as firmly based on evidence as the fact that UG cannot be learned in childhood).

Fodor, impressed by the innateness of one function of the mind, UG, generalizes to other functions that go far beyond the evidence for UG: Fodor thinks that most categories ("cat," "mat," "object," "number" and so forth) are innate and unlearned too, just as UG is. All we learn is what names to call them; their meanings are already innately in our heads, like place-holders merely waiting for labels. If this is true, it is bad news for Edelman and Tononi's category-learning networks, because it leaves them precious little to do: Most of the category structure of the world would somehow have to be built into them in advance.

But is there any reason to believe Fodor's assertion is true? Is there any evidence that there are not examples enough, and time, for children (and adults) to learn all the kinds of things there are in the concrete and abstract world by trial and error, guided by feedback indicating when they get it right and wrong? I think there is no such evidence. But then why does Fodor believe that what is true of grammar might be true of meaning too?

I think the answer is related to yet another ("easy") problem, the symbol-grounding problem: Symbols alone do not mean anything. Ignorant of Chinese, you would look in vain for the meaning of any Chinese word in a Chinese-Chinese dictionary: It's all in there, and yet it isn't! You look up a definition, and it's just more meaningless symbols, even though, for a Chinese speaker who does not know the meaning of that particular word but does know the meaning of the words used to define it, it's enough to convey the new meaning.

This example illustrates both the power and the limitation of language: In principle, you can find out anything and everything from strings of words expressing propositions, but you can't start from scratch: Some of the words have to be "grounded" in something other than just more (meaningless) words. How are those basic words to be grounded? Edelman and Tononi's networks, linked to the world through analog sensors and effectors, sound like a good start, although one would be well-advised to build in the functions Damasio describes for internal sensorimotor maps and the self, as well as the functions Tomasello describes for social communication.

Such a system would then ground some of its symbols directly, in the capacity to detect, discriminate, categorize and manipulate the things the symbols stand for in the outside world. Other symbols could then be grounded indirectly, through propositions that define them in terms of already grounded symbols.
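In sketch form (Python again; the detectors and definitions are hypothetical stand-ins, not a proposed mechanism), the hybrid scheme amounts to this: a symbol counts as grounded only if every chain of definitions leading from it bottoms out in a sensorimotor detector, which is exactly what the Chinese-Chinese dictionary, symbols all the way down, lacks.

    # Direct grounding: symbols connected to (stand-in) sensorimotor detectors.
    detectors = {
        "cat":     lambda x: x == "cat-shaped input",
        "stripes": lambda x: x == "striped input",
    }

    # Indirect grounding: symbols defined as strings of other symbols.
    definitions = {
        "tiger":   ["cat", "stripes"],      # "a tiger is a cat with stripes"
        "chimera": ["tiger", "unicorn"],    # "unicorn" bottoms out nowhere
    }

    def grounded(symbol, seen=()):
        """True iff every definitional path from symbol reaches a detector."""
        if symbol in detectors:
            return True
        if symbol in seen or symbol not in definitions:
            return False    # circular or undefined: meaningless symbols all the way
        return all(grounded(s, seen + (symbol,)) for s in definitions[symbol])

    print(grounded("tiger"))    # True: both defining symbols reach detectors
    print(grounded("chimera"))  # False: one definitional path never bottoms out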

Fodor thinks that kind of mechanism is a nonstarter, for pretty much the same reason that the "associationism" of seventeenth-century philosophers was a nonstarter: Thought and meaning arise not merely through the association of "ideas." Thought has structure over and above mere association in time and space. (So Fodor would not believe in the Edelman and Tononi network module of such a hybrid symbol-grounding system.) Symbols and computation can perhaps capture some of the structure of thought, but Fodor, although he is a functionalist and a computationalist (computation is his "language of thought"), doubts that computation can do the whole job. His doubts are based in part on worries about "holism" (the view that symbols are local things, but meanings are not) and in part on "abduction" (how can a symbol system find the best theory to explain any set of data unless the answers are all already built into it in advance?). So Fodor would not believe in the computational component of such a hybrid symbol-grounding system either.

(It should be added that Fodor seems to have little more faith in the explanatory usefulness of "modules"--despite the fact that he himself was responsible for popularizing the notion--than he does in nets or symbols [or brain function, for that matter, or evolution]. We can define modules, in a theory-neutral way, as functionally independent components of a system, components whose design can be understood and modeled on their own, in isolation from the rest of the system. Perhaps because Fodor's own notion of modularity was inspired by UG [which was originally considered by Chomsky to be a functionally independent component of our language mechanism], the definition of "module" has been saddled with so many additional arbitrary stipulations--they must be innate, they must not interact, they must not be influenced by what a person knows--that the word really has lost all its usefulness.)

Are there grounds for all this skepticism on Fodor's part about the only explanatory resources that cognitive science has at its disposal? It is certainly true that cognitive science has not even come close to solving any of its "easy" problems, such as explaining the functional basis of language or meaning or any other lifesize piece of human intellectual capacity or brain function. But it's also hard to know how fast cognitive science ought to be explaining the mind, based on scientific track records elsewhere. Edelman and Tononi have to be given the time to demonstrate whether or not their nets can do what associationism could not. Their nets are, after all, operating on sensorimotor and symbolic inputs, not "ideas" (whatever those are).

And if symbols have their limitations, they also have their powers. No one can say in advance what hybrid systems can or cannot accomplish if their symbols are grounded in the sensorimotor world via category-learning networks. Changing the definition of just one word in a dictionary already propagates "holistically" to every other definition in the dictionary in which that word figures. Change the sensorimotor grounding and the holistic effects could be even more dramatic.
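The dictionary point is easy to verify with a toy example (the entries are hypothetical): revise one word's definition and trace everything that depends on it, directly or through chains of definitions.

    # Holistic propagation: which words are affected when one word's
    # definition changes? Everything that depends on it, transitively.

    definitions = {
        "mat":    ["woven", "floor", "covering"],
        "rug":    ["thick", "mat"],
        "carpet": ["wall-to-wall", "rug"],
        "cat":    ["feline", "animal"],
    }

    def affected_by(word):
        """All words whose definitions depend, transitively, on `word`."""
        hit, frontier = set(), {word}
        while frontier:
            frontier = {w for w, defn in definitions.items()
                        if set(defn) & frontier} - hit
            hit |= frontier
        return hit

    print(affected_by("mat"))  # {'rug', 'carpet'}: "cat" is untouched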

So there's no a priori reason to doubt that the "easy" problems can be solved using cognitive science's current functional tools. If, however, what you want to know is how and why it feels like something to be a system that has and exercises all those remarkable functional capacities, then I am afraid you will be disappointed. That is one unsolved mystery we will all just have to learn to live with.

References

Cangelosi, A. & Harnad, S. (2000) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evolution of Communication (Special Issue on Grounding). http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.language.html

Chalmers, D. (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2(3): 200-219. http://www.u.arizona.edu/~chalmers/papers/facing.html

Chomsky, N. (1959) A Review of B. F. Skinner's Verbal Behavior. Language 35(1): 26-58. http://cogprints.soton.ac.uk/documents/disk0/00/00/11/48/

Damasio, A. R. (1994) Descartes' Error: Emotion, Reason, and the Human Brain. New York: Avon Books.

Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.

Fodor, J. A. (1985) Precis of The Modularity of Mind. Behavioral & Brain Sciences 8: 1-42.

Harnad, S. (1987) The induction and representation of categories. In: Harnad, S. (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad87.categorization.html

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. [Reprinted in Hungarian translation as "A Szimbolum-Lehorgonyzas Problemaja." Magyar Pszichologiai Szemle XLVIII-XLIX (32-33) 5-6: 365-383.] http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

Harnad, S. (1996) The Origin of Words: A Psychophysical Hypothesis. In: Velichkovsky, B. & Rumbaugh, D. (Eds.) Communicating Meaning: Evolution and Development of Language. NJ: Erlbaum, pp. 27-44. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad96.word.origin.html

Harnad, S. (2000) Correlation Versus Causality: How/Why the Mind/Body Problem Is Hard. Journal of Consciousness Studies 7(4): 54-61. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.mind.humphrey.html

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on "Alan Turing and Artificial Intelligence") http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Tomasello, M., Kruger, A. C. & Ratner, H. H. (1993) Cultural learning. Behavioral & Brain Sciences 16:495-552.