Get Real

Daniel C. Dennett

Reply to 14 essays, in Philosophical Topics, vol. 22, no. 1 & 2, Spring & Fall 1994, pp. 505-568.

Table of Contents

1. Scale Up in the Fox Islands Thorofare

Ivan Fox, "Our Knowledge of the Internal World"

2. Dretske's Blind Spot

Fred Dretske, "Differences That Make no Difference"

3. Truth-Makers, Cow-sharks and Lecterns

Brian McLaughlin & John O'Leary-Hawthorne, "Dennett's Logical Behaviorism"

Mark Richard, "What Isn't a Belief?"

Lynne Rudder Baker, "Content Meets Consciousness"

Stephen Webb, "Witnessed Behavior and Dennett's Intentional Stance"

4. Superficialism vs. Hysterical Realism

Georges Rey, "Dennett's Unrealistic Psychology"

5. Otto and the Zombies

Joseph Levine, "Out of the Closet: A Qualophile Confronts Qualophobia"

Robert Van Gulick, "Dennett, Drafts and Phenomenal Realism"

6. Higher Order Thoughts and Mental Blocks

David Rosenthal, "First-Person Operationalism and Mental Taxonomy"

Ned Block, "What is Dennett's Theory a Theory of?"

7. Qualia Refuse to go Quietly

Joseph Tolliver, "Interior Colors"

Stephen White, "Color and Notional Content"

Jeff McConnell, "In Defense of the Knowledge Argument"

Eric Lormand, "Qualia! (Now Showing at a Theater near You)"

8. Luck, Regret and Kinds of Persons

Michael Slote, "The Problem of Moral Luck"

Carol Rovane, "The Personal Stance"

There could be no more gratifying response to a philosopher's work than such a bounty of challenging, high-quality essays. I have learned a great deal from them, and hope that other readers will be as delighted as I have been by the insights gathered here. One thing I have learned is just how much hard work I had left for others to do, by underestimating the degree of explicit formulation of theses and arguments that is actually required to bring these issues into optimal focus. These essays cover my work from top to bottom. Just about every nook and cranny is probed and tested in ways I could never do for myself. The essays thus highlight the areas of weakest exposition of my views; they also show the weak points of the views themselves--and suggest repairs, which I am sometimes happy to accept, but not always, since there are a few cases in which one critic deftly disarms another, sight unseen. I will be fascinated to learn how the individual authors react to each other's essays, since they side with me on different points, and disagree about what is still in need of revision or repair.

To me the most interesting pattern to emerge is the frequency with which the criticisms hinge on mistaken assumptions about the empirical facts. Since I have long maintained that ignoring the relevant science is the kiss of death in philosophy of mind, no project could be dearer to my heart than showing how paying attention to such non-traditional details is the key to progress. So I will give pride of place to my responses to Ivan Fox and Fred Dretske, whose essays show most vividly the need for joining forces with cognitive science. Then I will turn to the others, following my usual order: considering content first, then turning to consciousness, and finally, to the ethical considerations of personhood. It hardly needs saying that this essay, long as it is, would be twice as long if I responded to all the points raised that deserve discussion. Endnote 1

1. Scale Up in the Fox Islands Thorofare Endnote 2

Ivan Fox's essay may very well be the most important essay in the collection, an original breakthrough in phenomenology that can really move us into a new understanding--or it may not be. I just can't tell. I have by now spent many hours struggling with it, an experience that puts me in mind of one of the delights of sailing on the coast of Maine: the exhilarating phenomenon known as scale up. You are sailing along in a dense pea-soup fog, the sails dripping, the foghorn moaning nearby but unlocatable in the white-out, visibility less than fifty yards; you move cautiously, checking and double-checking the compass, the depth sounder, the chart, looking out for dangers on all sides, working hard and feeling tense and uncertain, and then all of a sudden you sail out of the fogbank into glorious sunshine, with miles of visibility, blue sky, sparkling water, a fresh breeze. Yeehah! That's the way I felt reading his paper. There were long patches of fog that I struggled through, unsure I knew where I was or where I was headed, and then suddenly I'd find myself bathed in clear, insightful going, a novel course through recognizable landmarks. Yeehah! Then back into the fog, anxiously waiting for the next scale up. We Downeasters have learned to take a perverse pleasure in living through the foggy passages for the rewards of a good scale up. But in philosophy there ought to be a better way. It is not that Fox has overlooked an easy way of proceeding; anyone who has actually tried hard to say what happens in conscious perception will appreciate that he is not making up difficulties and fancy ways of dealing with them. The more straightforward ways of saying what happens are all seriously confused and deeply misleading, for the reasons he enunciates--a verdict my commentaries on some of the other essays will support in due course.

A better way--not an easier way--would get clearer about what the rules of such an enterprise are, what counts as being right or wrong, what sorts of implications and applications these ideas have. Here is my methodological proposal. If Fox is on an important new track, as I suspect, then it ought to be possible to recast all of it--all of it--in terms that have a direct and helpful bearing on a project I am working on these days: the Cog project in robotics, directed by Rodney Brooks and Lynn Andrea Stein at the AI Lab at MIT. Cog is a humanoid robot, situated in real (not virtual) space and time, with human-sized eyes, arms, hands, and torso that move like human body parts, innervated by sensors for "touch" and "pain" (scare-quotes for the squeamish), and designed to undergo a long period of "infancy," not growing larger, but learning hand-eye coordination and much, much more--e.g., folk physics and even folk psychology--the way we human beings do: by being experientially embedded in a concrete world of things that can harm, help and otherwise "interest" it. (Dennett, 1994) Cog will have to track individual objects, reidentify them, interact gracefully with them, protect its own bodily integrity and safety, and--in our fondest blue sky aspirations--come to talk about its life, its subjectivity, in this concrete world it shares with us.

Among the opportunities and problems that Cog will confront are instances very much like Fox's example of pulling the thorn from the finger. So will Cog's cognitive architecture have to incorporate his "surrogates"--representatives instead of "representations"? That sounds very much like the ideology for which Rodney Brooks is famous in AI circles. He is, after all, the author of "Intelligence without Representation" (1991), one of the most influential manifestos of the new anti-GOFAI (and hence anti-LOT) Endnote 3 school of AI, and like Fox, he has all along stressed the practical importance of not interposing intellectualist systems of sentential objects between input and output in robots that must cope in real time and space. When Fox speaks of our creating the Cartesian modes as a reflection that does not disturb the underlying Empedoclean modes of acquaintance, this is tantalizingly close to Brooks's subsumption architecture, in which new sophistications have to be piled on top of earlier systems of distributed perceptuo-locomotory prowess. One might well wonder if Fox has simply re-invented some of Brooks's wheels, in a daunting new vocabulary, so perhaps, after all, he has no new insights to offer to the Cog team, who long ago turned their backs on High Church Computationalism, in spite of their domicile at the East Pole (Dennett, 1987). Or perhaps he has seen, from his phenomenological and philosophical vantage point, some crucial sharpenings and advances that the Cog team must come to appreciate and somehow honor in their engineering if they are ever to get Cog to do what they want Cog to do.
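Since "subsumption architecture" may be just a name to some readers, a minimal sketch may help fix ideas. What follows is a toy rendition, in Python, of the layering Brooks advocates: each new competence is piled on top of the older ones and may suppress their outputs, with no central world-model and no sentential representations mediating between sensing and acting. The sensor fields and layer names are hypothetical illustrations of mine, not anything from Cog's actual control system.

    # A toy sketch of Brooks-style subsumption layering. The sensor
    # fields and layer names are hypothetical; none of this is Cog's code.

    from dataclasses import dataclass

    @dataclass
    class Sensors:
        obstacle_near: bool
        thorn_detected: bool

    def wander(s: Sensors):
        # Lowest layer: a default competence that always produces output.
        return "move-forward"

    def avoid(s: Sensors):
        # A later layer, piled on top: it subsumes wandering when needed.
        return "turn-away" if s.obstacle_near else None

    def remove_thorn(s: Sensors):
        # A still later layer: bodily integrity preempts everything else.
        return "withdraw-and-grasp" if s.thorn_detected else None

    LAYERS = [remove_thorn, avoid, wander]  # highest priority first

    def act(s: Sensors) -> str:
        # No central deliberation: the first layer with something to say
        # controls the behavior directly.
        for layer in LAYERS:
            command = layer(s)
            if command is not None:
                return command

    print(act(Sensors(obstacle_near=True, thorn_detected=False)))  # turn-away

The point of the sketch is only the shape of the control flow: nothing sentential is interposed between input and output, which is just the feature Fox and Brooks both insist on.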

My first ambition was to figure out for myself what the "take home message" of Fox's essay would be for the Cog group. Translating phil-speak points into their terms is a task I often undertake these days, and I find it is always a salutary exercise. The faculty and graduate students in the group are both open-minded and astonishingly quick studies--utterly unfazable by technicalities, both theoretical and empirical. But they are also deeply practical; they are embarked on an extraordinarily ambitious and difficult project, and any advice offered had better actually rule something out that they might have been tempted to try, or rule something in that they might otherwise have overlooked. They won't be impressed if you tell them that unless they ensure that Cog has x, Cog won't have, say, intrinsic intentionality--unless you can go on to demonstrate that without intrinsic intentionality (whatever you take that to be), Cog will blunder about, or fail to acquire the sought-for competences, or go all Hamlet-like in some combinatorial explosion of futile looping, or destroy itself. If Cog can locate the thorns, readily recognize their negativity, be moved somehow by this recognition to attempt to remove them, and succeed (in jig time), it will be hard to sell the Cog team on the complaint that Cog nevertheless lacks the je ne sais quoi that distinguishes our own acts of thorn-removal.

But I have been simply unable to state to myself with any confidence what Fox's message is, in the end. Cog's manifest image, he seems to be saying, must incorporate an ontology that is irreducibly naive--an ontology that resists going in either of the two directions sophisticated philosophical analysis demands. I think this is an extremely promising idea, but what does it mean in implementation terms? What shouldn't be there and what should be there? How does one get "gestalting" into Cog's processes, for instance, and how do you tell if you've succeeded? I cannot answer these questions yet. I encourage Fox to pose the problems for himself. I am not so swept up in this robotics project, or so doctrinaire, as to require that anything worth doing in philosophy of mind be translatable into valuable Cog-speak. There may well be many important projects in philosophy of mind or phenomenology that have scant bearing, or no bearing at all, on the particular problems of engineering and robot psychology confronting the Cog team. But for an enterprise that is in danger of losing its grip on reality--a common enough danger in all areas of philosophy--this is at least a way of virtually guaranteeing that whatever one asserts or denies tackles a real problem (however wrongheadedly).

There are plenty of passages in Fox's essay that encourage me to think that he does aspire to inform such engineering projects. Since he readily allows fish to have surrogates in thought [p27], I doubt if he would turn up his nose at robot cognition. He dismisses one model of how one picks up a cup by saying "Life is too short, and there is too much of routine in action to make this a feasible or worthwhile cognitive architecture." [p20]. He also speaks of the "Mac-wiring of the two-worlds system" as the feature that ensures that the external object is appropriately treated by the agent's fears, desires and plans. Here (and elsewhere) he seems to be giving "the specs"--but in philosophical, not engineering, terms--for the only sort of system that can achieve good, real, effective (or at any rate our kind of) cognition. As he says, "This experienced directness is the entitlement of unreflective naive realism accomplished through surrogate objects. It is not available to a mind that perceives by way of representations and acts solely on information." [p.28] At other times, it seems as if he chickens out, recanting this aspiration altogether. For we learn in the end that "if we cannot distinguish behaviorally" between a Cog that has surrogates and a Cog that uses representations, "this shows the limits of behaviorism." [p56]. He goes right on to say that his distinction is "as objectively determinable as any fact of cognitive architecture" but one wonders what practical importance (e.g., for the rush of controlling behavior in real time) this difference of cognitive architecture has, and how, from the third-person point of view, we can determine this objective difference. Endnote 4

I do not think this is a minor criticism. When Husserl made his famous distinction between the hyletic and noetic phases, he neatly saved Phenomenologists from dirtying their hands with the grubby physical details of the hyletic phase, but only at the very serious risk of trivializing Phenomenology for all time. The breath of fresh air (and sparkling sunshine) in Fox's phenomenology is his recognition, at many points and in many ways, that his enterprise is a species of extremely abstract engineering, but then he too often seems to me to shrug off the hard questions as somebody else's responsibility. Endnote 5 When Fox says "The body of the Other warps the structure of phenomenal space-time and draws mine to it along the geodesic of desire," [p51] that's a nifty description of the wonderful effect achieved by our brains. But we want to know: how is it done? His claims about surrogates strike me as a very useful proposal, somewhat along the lines of similar suggestions by Ruth Millikan (e.g., 1993), and these promise a route for getting away from the language of thought. Fine, but now either you pass the buck entirely to the engineers and just declare that the problem has a solution (no doubt it does), or you attempt to contribute to the solution. I'm not asking for wiring diagrams, but just for a closer rapprochement--something I could tell the Cog team that they could understand.

By insensible phylogenetic degrees the phenomenal world emerges at the turning point of the reflex arc. I do not doubt that there is something which it is like to be a bat, or a bee. . . . The dimwitted orgasm of an earthworm is as truly phenomenology as our own multimedia experience. What is marvelous about our phenomenology is not that it is phenomenology but that it is marvelous phenomenology--nature's three billion year solution to the problem of achieving in one state the surrogate of the perceived world; a world within a world conceived by nature in its own image. No doubt this engineering feat requires very special properties. You can't make a silk purse out of a sow's ear, and the phenomenal world is silk purse phenomenology. [p46]

And what does Fox have to say about how to accomplish this engineering feat? His opening gambit of explaining the phenomenal world via the metaphor of the Mac user-interface is cute, but I fear it backfires. Endnote 6 If "the phenomenal world is the end of the line," [p11] this implies that there are no further internal users or appreciators or perceivers, but then this is a major disanalogy with the user illusion of the word processor. He seems to be saying--and I very much agree--that the phenomenal world is the emergent product of all the corner-turning (Consciousness Explained [hereafter CE], 1991, pp108-111), not the preamble or final raw material before the corner of consciousness is turned. The Mac user-interface, however, isn't the end of the line; it is designed to present material to a user--that's the whole point of it. And its engineering is indeed tricky, but nowhere near as tricky as the engineering that seems to be required for Fox's phenomenal world, in which round-for-all-intents-and-purposes and its brethren must be implemented. If this is virtual roundness--like the virtual hardness of the virtual cast on Marcel Marceau's arm (CE p.211)--then an abyss opens up. How do you make it true that "surrogate objects and their properties track external objects and their properties"? [p10]

Zenon Pylyshyn has often warned cognitive scientists not to posit what he memorably calls "mental clay," a magical material out of which to fashion internal surrogates whose causal properties automatically track the physics of their external counterparts. Donald Knuth, a hero of computer programmers everywhere, made a lovely innovation in text-formatting technology when he invented virtual "glue," a virtually elastic and virtually sticky substitute for the rigid space called up by the typewriter space bar. Putting a varying virtual dab between each pair of words, depending on their relative lengths, his formatting program then virtually stretched the word-string by virtually pulling on its virtual ends till it fit perfectly between the left and right margins, apportioning just the right amount of extra white space for each gap between words. Fortunately, Knuth didn't also have to make his word-processing glue virtually shiny, tasty, and smelly, but Fox's surrogates, in contrast, seem to have a full complement of for-all-intents-and-purposes perceptual properties. "To serve as a surrogate for an F thing in this system, an internal object must have such causal properties as enable it to be the target of F relevant object attitudes and to govern the ensuing action in F relevant ways." [p21] Beyond allowing that so far as he is concerned, surrogates can literally have some of the relevant causal properties--they can be literally ellipsoid, or "an unindividuated color" for instance [p22]--Fox is silent on how to deal with this problem. I fear his silence is proxy for "And then a miracle happens." I hope I have misunderstood him.
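Knuth's trick is worth pausing over, because its very simplicity is what makes the contrast with Fox's surrogates so stark. Here is a toy rendition in Python of just the stretching idea, not Knuth's actual algorithm (real TeX glue also shrinks, and whole paragraphs are set by minimizing a "badness" score); the function name and the character-unit measure are inventions for illustration.

    # A toy rendition of virtual "glue": elastic inter-word space,
    # stretched just enough that the line exactly fits the margins.

    def justify(words: list[str], line_width: int) -> str:
        if len(words) == 1:
            return words[0].ljust(line_width)
        text_width = sum(len(w) for w in words)
        slack = line_width - text_width             # white space to apportion
        base, extra = divmod(slack, len(words) - 1)
        line = ""
        for i, word in enumerate(words[:-1]):
            # Each gap gets a dab of glue; the first `extra` gaps get one
            # more unit, spreading the stretch as evenly as possible.
            line += word + " " * (base + (1 if i < extra else 0))
        return line + words[-1]

    print(justify(["virtual", "glue", "stretches", "words"], 30))
    # -> "virtual  glue  stretches words" (exactly 30 characters)

Notice how little the glue has to do: it need only be virtually elastic. A surrogate with a "full complement" of perceptual properties has no such modest job description.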

If Fox's surrogates are not literally surrogates, made of mental clay, what are they? If they are intentional objects, Fox's phenomenology reduces to my heterophenomenology, an account of the believed-in entities, only on occasion and indirectly an account of the internal states and processes. That is fine with me, but then his claim to have pushed beyond heterophenomenology to a radically different ontology must evaporate. Too bad, since we all need to push further into the engineering, and not just revel in the specs. Endnote 7

2. Dretske's Blind Spot

Fred Dretske is wonderfully direct in his essay. No glancing blows here; one of us is flat wrong. And surprisingly, for a philosophical essay, our differences--if I understand him--have quite directly testable empirical consequences. In a passage Dretske quotes, I say that "the richness of the world outside, in all its ravishing detail . . . does not 'enter' our conscious minds, but is simply available," to which he forthrightly retorts: "This is false. . . . Our experience of this ravishing detail does cease when we close our eyes. So the ravishing detail is not only 'in' the world." I do not know why he thinks this last bit follows, unless he is mistakenly assuming that our experience of ravishing detail must itself be ravishingly detailed, but this is just what is called in question by recent experiments that dramatically support my version of the facts. Even more telling, a recent thought experiment of Dretske's (in "Conscious Experience," 1993) perfectly anticipates one of these real experiments and encourages us to imagine an outcome seriously at odds with the actual results. Endnote 8 But before we get to that dénouement, I must set the stage.

Dretske has been a firm believer in the importance of "non-epistemic seeing" ever since his 1969 book, Seeing and Knowing, which I did read and admire when it came out, though it did not persuade me on just this point. (Many of the difficulties I saw were picked up by Virgil Aldrich in his 1970 J.Phil. review of the book, by the way.) Dretske uses non-epistemic seeing to mark what he takes to be a theoretically important category: "entering conscious experience." His isolation of non-epistemic seeing struck me in 1969 as at best a harmless tidbit of ordinary language philosophy; now I think it is worse: a theorist's illusion, pure and simple, an artifact of taking ordinary language too seriously. There is no important difference--no difference that makes a difference--between things non-epistemically seen (e.g., the thimble in front of Betsy's eyes before she twigs) and things not seen at all (e.g., the child smirking behind Betsy's back). Endnote 9

Common usage does, as he says, endorse a third-person use of "see." As he puts it now: "Ask someone! Other people may be able to supply information which, together with what you already know, helps you discover what (or who) you saw." (fn15) For instance, you're standing deep in the waving crowd as Hillary Rodham Clinton's motorcade passes by--"I wonder if she saw me!" you exclaim, and your companion says "Sure--you're tall enough, her eyes were open, and she kept looking back and forth from one side of the street to the other. If you saw her, she saw you." Big deal. (While we're doing ordinary language philosophy, notice that your companion might just as naturally have said "Since you could see her, she could see you." This raises a difficult question for Dretske to answer: how does visible to A at time t differ in meaning from non-epistemically seen by A at time t? Does it follow from the fact that Ms. Clinton could see you that she did see you?)

The third-person-attributable usage Dretske draws to our attention is common enough, but it survives on ignorance--the everyday ignorance of ordinary folk about how their visual systems work. It assumes, roughly, that if your eyes are wide open and you're awake, then everything that is "right in front of your eyes" has a common marked status--the status Dretske marks as seen (in the non-epistemic sense). The idea is that this is all it takes, normally, to get a visible thing registered (or "exhibited" as he now says) in the sighted person's "consciousness". Not being able to see inside people's heads to confirm that the imagined normal registration has in fact taken place, we treat the outward signs as proof enough. We can honor this status in a sort of legalistic way if we desire, but the facts about human vision render this understanding Pickwickian in the extreme. (If your transpacific plane touches down for refueling in Tahiti and you sleep through the landing and takeoff, can you say you've been to Tahiti? Yes, you can. Big deal.) Ordinary folk do not realize that one's "visual field" is gappy, degrades shockingly in resolution in the parafoveal regions, and--most important of all--is not recorded for comparison from moment to moment.

Long before there was film, there were cameras. The camera obscura is literally a dark room with a pinhole opening (perhaps enhanced by a lens) in one wall, and on the opposite wall a full-color (upside-down) image of the outside world is exhibited, evanescently, for onlookers inside the camera to see and enjoy. The room doesn't see, of course, even though the information is there on the wall. Suppose I walk by a vacant camera obscura and make a face in the direction of its pinhole. This guarantees that my smirk, in high resolution color, was briefly present on the opposite wall--an inert historical fact. Big deal. A camera obscura does not in any sense see what is exhibited on its wall. Or consider a camcorder, turned on but not recording. Unlike the wall of the camera obscura, it has photosensitive elements that are evanescently changed by the photons raining down on them, but they change back immediately, leaving no trace. Even when the camcorder is recording, it still doesn't see, of course, but intuitively it takes a step in the right direction, since it records (some of) that information, for some later use, appreciation, analysis. A trace is made; the information sticks around. But presumably a camcorder doesn't do enough with the information to count as seeing, even when it makes a record of what happens.

It is a problem for Dretske to say what more is needed for non-epistemic seeing to occur. He makes an analogy: "seeing is like touching" [p.3]. We may ask: Is it like a rock touching the soil it is embedded in, or like a tree's roots touching that soil, or is it like a mole's paws touching it? Presumably the last of these, but why? Not, apparently, because moles have "conceptual" categories that can sort the information; seeing can occur "in the absence of conceptual uptake" [p.2], and "your experience can exhibit [Fs] even though you may not be able to judge that something is [F]." [p.14] Presumably, the mole's experience "exhibits" the soil, but the tree is just as much in contact with the soil, and responds, slowly, to that contact. What more would be needed for the tree to exhibit the soil in its experience?

This is a good place to see the stark contrast between Dretske's view and mine. "The difference," he says, "between a visual experience and a belief about what you experience seems reasonably clear pre-theoretically." [p.9] I agree--it seems to be, but this is one of those treacherous philosophical observations. He says it is impossible to give a plausible theory of consciousness as long as experience and belief are conflated; I say it is impossible to give a plausible theory of consciousness as long as experience is deemed to be entirely independent of belief--or something rather like belief. Belief is not quite the right term for the job, as I have noted.

When Dretske says that the micro-cognitions I substitute for beliefs do "precisely" what potential or suppressed beliefs did for Armstrong and Pitcher, he misses a major point: I was deliberately getting away from their mistaken personal-level treatment of the issue, so my micro-cognitions do an importantly (precisely) different job. The personal level treatment misconstrues the facts--in the ways and for the reasons that Dretske points out:

One has certainly not shown that seeing an object, being perceptually aware of a thimble, consists in a judgment that it is a thimble (or anything else) in anything like the ordinary sense of the word 'judgment'. [p10]

Exactly right. You have to go to a non-ordinary sense of the word 'judgment' to make this claim hold, and hold it must, since otherwise we are stuck unable to tell the camera obscura from the genuine seer. What a genuine seer must do is somehow take in and "categorize" or "recognize" or "discriminate" or "identify" or . . . . (each term stretched out of its ordinary field) . . . in some other way "judge" the presence of something (as a thimble or as something else). With such uptake there is seeing. Otherwise not. Dretske asks [p.4] "Are we really being told that it makes no sense to ask whether one can see, thus be aware of, thus be conscious of, objects before being told what they are?" Yes, in one sense, and no, in another. I am indeed challenging the claim that there is a coherent sense of "conscious" and "aware" and "see" linked in the manner of his question, but I quite agree that it "makes sense" to ask Dretske's question in the course of some ordinary affairs; it also makes sense to speak of the sun setting, and of breaking somebody's heart.

Notice that Dretske's sense of "see," ordinary and familiar though it is, is utterly powerless to deal with the following questions: (1) Does the blindsight subject see objects in the blind field? He can react to them in some ways and not others. (2) Does the blue-eyed scallop see? It has eyes. Endnote 10 (3) Does a sleepwalker see? He engages in visually guided locomotion. (4) Does the anesthetized person with open eyes see? (5) Does the hysterically blind person see? (6) Do you see objects that are parafoveal in your visual field? How far parafoveal? It is obvious that in order to answer any of these questions, we have to go beyond the ordinary grounds for attributing seeing--which draw a blank--and ask what is going on inside. To a first approximation, the question then becomes: is what is going on inside more like what happens in a vacant camera obscura or more like what happens in a camcorder when it is recording? Is there uptake, and if so of what?

And the answer is that to a surprising degree, the visual part of your brain is more like a camera obscura than you might have thought. On the last page (468) of CE, I described an experiment with eye-trackers that had not been done, and predicted the result. The experiment has since been done, by John Grimes (forthcoming) at the Beckman Institute in Urbana-Champaign, and the results were much more powerful than I had dared hope. I had inserted lots of safety nets (I was worried about luminance boundaries and the like--an entirely gratuitous worry as it turns out). Grimes showed subjects high-resolution color photographs on a computer screen, and told the subjects to study them carefully, since they would be tested on the details. (The subjects were hence highly motivated, like Betsy, to notice, detect, discriminate, or judge whatever it was they were seeing.) They were also told that there might be a change in the pictures while they were studying them (for ten seconds each). If they ever saw (yes, "saw," the ordinary word) a change, they were to press the button in front of them--even if they could not say (or judge, or discriminate) what the change was. So the subjects were even alerted to be on the lookout for sudden changes. Then when the experiment began, an eyetracker monitored their eye movements, and during a randomly chosen saccade some large and obvious feature of each picture was changed. (Some people think I must be saying that this feature was changed, and then changed back, during the saccade. No. The change is accomplished during the saccade, and the picture remains changed thereafter.) Did the subjects press the button, indicating they had seen a change? Usually not; it depended on how large the change was. Grimes, like me, had expected the effect to be rather weak, so he began with minor, discreet changes in the background. Nobody ever pressed the button, so he began getting more and more outrageous. For instance, in a picture of two cowboys sitting on a bench, Grimes exchanges their heads during the saccade and still, most subjects don't press the button! In an aerial photograph of a bright blue crater lake, the lake suddenly turns jet black--and half the subjects are oblivious to the change, in spite of the fact that this is a portrait of the lake. (What about the half that did notice the change? They had apparently done what Betsy did when she saw the thimble in the epistemic sense: noted, judged, identified, the lake as blue.)

What does this show? It shows that your brain doesn't bother keeping a record of what was flitting across your retinas (or your visual cortex), even for the fraction of a second that elapses from one saccade to the next. So little record is kept that if a major change is made during a saccade--during the changing of the guard, you might say--the difference between the scene thereafter and the scene a fraction of a second earlier, though immense, is typically not just unidentifiable; it is undetectable. The earlier information is just about as evanescent as the image on the wall in the camera obscura. Only details that were epistemically seen trigger the alarm when they are subsequently changed. If we follow Dretske's usage, however, we must nevertheless insist that, for whatever it is worth, the changes in the before and after scenes were not just visible to you; you saw them, though of course you yourself are utterly clueless about what the changes were, or even that there were changes.

Dretske says: "Part of what it means to say that Sarah sees all five fingers is that if you conceal one of the fingers, things will look different to Sarah. . . . There will not only be one less (visible) finger in the world, but one less finger in Sarah's experience of the world." [ms, p.12] Then I suppose it follows, trivially, that in Grimes' experiments, things "look different"--even hugely different--to his subjects after the saccadic switcheroo. This is, however, vacuous, given subjects' utter lack of uptake of the difference. In what sense do things look different to them? Things "look different" in the vacant camera obscura when I duck out of sight after my smirk, but they don't look different to anybody.

The difficulty with Dretske's view of non-epistemic seeing comes out even more strikingly in an experiment recently conducted by Rensink, O'Regan and Clark (forthcoming). Provoked by Grimes' result, and thinking it had nothing to do with saccades (but everything to do with "uptake" of some kind), they presented subjects with pictures that are interrupted every quarter of a second (250 msec) by a black screen which remains on for 150 msec. The resulting phenomenology is rather annoying: a stable picture briefly interrupted, again and again and again. But in fact, subjects are told, the picture changes during each interruption, going back and forth between two pictures, with a rather large and visible difference between them. For instance, the huge airplane that almost fills the picture grows an extra engine on its wing twice a second. Back and forth, back and forth go the two pictures of the plane, but you can't see any change at all! The two pictures appear to you to be exactly the same. You study them, focusing, scanning, inventorying, and then eventually, after perhaps twenty or fifty back-and-forths, you notice the change. Sometimes, in spite of thirty seconds of steady hunting, the subjects still fail to see (epistemically) the change. This reproduces the helpless and frustrating state of Betsy hunting for the thimble. She knows it's there, right in front of her nose, and she can't see it! But on Dretske's account, the difference is there, back and forth, being "exhibited" in consciousness, in non-epistemic seeing.
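Since the timing is doing the work here, it may help to spell the protocol out schematically. The sketch below, in Python, models only the timing structure just described (250 msec of picture, 150 msec of blank, two pictures alternating); the display machinery is stubbed out and the picture names are hypothetical.

    # A schematic rendering of the flicker protocol: two nearly identical
    # pictures alternate, each shown for 250 msec, with a 150 msec blank
    # interposed between every presentation. Only the timing is modeled.

    from itertools import cycle

    def flicker_schedule(picture_a: str, picture_b: str,
                         on_ms: int = 250, blank_ms: int = 150):
        """Yield (stimulus, duration) pairs: A, blank, B, blank, A, ..."""
        for picture in cycle([picture_a, picture_b]):
            yield picture, on_ms
            yield "BLANK", blank_ms

    # The first two seconds of a trial:
    schedule = flicker_schedule("plane-3-engines", "plane-4-engines")
    elapsed = 0
    while elapsed < 2000:
        stimulus, duration = next(schedule)
        print(f"{elapsed:5d} msec: {stimulus} shown for {duration} msec")
        elapsed += duration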

In fact, he gives an example (1993, p.273) of just such a pair of pictures, which he calls Alpha and Beta. The difference between them is that Spot (a good-sized round dot) is in Alpha but absent in Beta. Everyone who looks at Alpha, and then at Beta, Dretske says, is "thing-aware of Spot" even though many people may not be "fact-aware" of the difference.

In saying that the reader is conscious of Spot--and, hence, in this sense, the difference between Alpha and Beta--without being conscious of the fact that they differed, we commit ourselves to the possibility of differences in conscious experience that are not reflected in conscious belief. (p.275)

He imagines an objection:

The differences are out there in the objects, yes, but who can say whether these differences are registered [my emphasis] in here, in our experience of the objects? . . . This is a way of saying that conscious experiences, the sort of experiences you have when looking around the room, cannot differ unless one is consciously aware that they differ. . . . This objection smacks of verificationism, but calling it names does nothing to blunt its appeal. (p.277)

Right. Dretske recognizes that he needs something better than name calling to fend off this objection, so he offers a final example, drawn from Klüver's studies of size discrimination in monkeys. But he begs the question in his account of how one would have to describe the monkeys' capacity, so it doesn't in fact provide any further support for his way of looking at seeing. Endnote 11

There is no doubt that the periodic changes in the Rensink et al. experiment are "exhibited" on one's retinas, and hence one's primary visual cortex, to anybody who looks at these parts of the nervous system, with the right equipment. But if you (or your homuncular agents) do not in fact "look at" most of these exhibits with any equipment at all, the only sense in which these changes are "registered" is the sense in which the changes are also registered inside a camcorder that is turned on, but not recording. This is, in fact, the normal situation--powerfully revealed in this abnormal environment. If Dretske wants to say that this is all he meant by non-epistemic seeing, he is welcome to the concept, but it is not a persuasive model of "conscious experience."

I may have just slightly exaggerated the evanescence of the "registration" in primary visual cortex in comparing it to the temporary changes in a camcorder's photo-sensitive elements. Perhaps there is enough long term uptake in the brain so that, although you can't readily notice changes, if given a forced choice guess about whether or not there has been a change, you will do better than chance. Suppose we show subjects two kinds of picture pairs: pairs like the most difficult of those in the Rensink experiment, and pairs that are in fact identical. They will look just alike to subjects--they will detect no changes. But if required to guess which pictures do involve a change, they might well do better, even much better, than chance, in spite of their utter inability to say what these changes might be. (This experiment is currently under development in Rensink's lab.) If subjects can make good forced choice guesses, this would conclusively show that some information was preserved from moment to moment, that there was some non-ephemeral "registration" after all. This would not serve Dretske's purposes, however, by giving him a "difference which makes a difference" on which to hang non-epistemic seeing in conscious experience, since this performance on forced choice guessing is precisely the evidence standardly relied on to demonstrate unconscious information preservation in blindsight. I doubt that Dretske would want blindsight to count as non-epistemic seeing.

So, to revert to the confrontation with which we began, Dretske noted what he takes to be a clear mistake of fact in my theory of consciousness. I say the detail only seems to be "in there" and he disagrees. I agree that it is in the eye (focused on the retina), but that is surely not enough, for in that sense, the detail is also in the camera obscura. Most of this detail is not--and cannot be--picked up at all, but some of it is. The few details that are picked up are picked up by being identified or categorized in some fashion--if only as blobs worthy of further consideration, as Treisman's experiments show. Endnote 12 It does indeed seem as if all the details are "in here" in some stronger sense--a difference that makes a difference--but that is an illusion.

3. Truth-Makers, Cow-sharks and Lecterns

McLaughlin and O'Leary-Hawthorne have succeeded where others have tried and failed. They have obliged me to respond, in detail worthy of their challenge, to the question: why don't I take Swampman, Blockheads, and their friends seriously? They have obliged me by writing an exemplary essay, fair, patient and reasonable, setting out the problems with my view as they (and many other philosophers) see them. They provide a compelling exhibition of something philosophers should more often strive for: a consideration of ideas that transcends questions of who said what when. Which of the many variations of the ideas they consider is mine? Which did I mean? It doesn't much matter, since they canvass all the possibilities, and try to show which is the best--given my purposes--and why. If I didn't say or mean that, I should have. Or so they claim, with supporting reasons. First let me confirm a suspicion that they hint at occasionally: I have not thought that such fanatic attention to precise formulations was work worth doing; I still think that this is largely make-work, but there are many, apparently, who think I am wrong, and I owe them, in my response to this challenge, a proper reply.

It cannot have escaped philosophers' attention that our fellow academics in other fields--especially in the sciences--often have difficulty suppressing their incredulous amusement when such topics as Twin Earth, Swampman, and Blockheads are posed for apparently serious consideration. Are the scientists just being philistines, betraying their tin ears for the subtleties of philosophical investigation, or have the philosophers who indulge in these exercises lost their grip on reality?

These bizarre examples all attempt to prove one "conceptual" point or another by deliberately reducing something underappreciated to zero, so that What Really Counts can shine through. Blockheads hold peripheral behavior constant and reduce internal structural details (and--what comes to the same thing--intervening internal processes) close to zero, and provoke the intuition that then there would be no mind there; internal structure Really Counts. Manthra is more or less the mirror-image; it keeps internal processes constant and reduces control of peripheral behavior to zero, showing, presumably, that external behavior Really Doesn't Count. Swampman keeps both future peripheral dispositions and internal states constant and reduces "history" to zero. Twin Earth sets internal similarity to maximum, so that external context can be demonstrated to be responsible for whatever our intuitions tell us. Thus these thought experiments mimic empirical experiments in their design, attempting to isolate a crucial interaction between variables by holding other variables constant. In the past I have often noted that a problem with such experiments is that the dependent variable is "intuition"--they are intuition pumps--and the contribution of imagination in the generation of intuitions is harder to control than philosophers have usually acknowledged.

But there is also a deeper problem with them. It is child's play to dream up further such examples to "prove" further conceptual points. Suppose a cow gave birth to something that was atom-for-atom indiscernible from a shark. Would it be a shark? What is the truth-maker for sharkhood? If you posed that question to a biologist, the charitable reaction would be that you were making a labored attempt at a joke. Suppose an evil demon could make water turn solid at room temperature by smiling at it; would demon-water be ice? Too silly a hypothesis to deserve a response. All such intuition pumps depend on the distinction spelled out by McLaughlin and O'Leary-Hawthorne between "conceptual" and "reductive" answers to the big questions. What I hadn't sufficiently appreciated in my earlier forthright response to Jackson is that when one says that the truth-maker question requires a conceptual answer, one means an answer that holds not just in our world, or all nomologically possible worlds, but in all logically possible worlds. Endnote 13 Smiling demons, cow-sharks, Blockheads, and Swampmen are all, some philosophers think, logically possible, even if they are not nomologically possible, and these philosophers think this is important. I do not. Why should the truth-maker question cast its net this wide? Because, I gather, otherwise its answer doesn't tell us about the essence of the topic in question. But who believes in real essences of this sort nowadays? Not I.

Consider the fate of "logical behaviorism" with regard to magnets. Here are two candidate answers to the question of what the truth-maker is for magnets: (a) all magnets are things that attract iron, and (b) all magnets are things that have a certain internal structure (call it M-alignment). Was the old, behavioral criterion (a) eventually superseded by the new, internal structure criterion (b), or did the latter merely reductively explain the former? To find out, we must imagine posing scientists the following Swampman-style questions. Suppose you discovered a thing that attracted iron but was not M-aligned (like standard magnets). Would you call it a magnet? Or: Suppose you discovered a thing that was M-aligned but did not attract iron. Would you call it a magnet? The physicists would reply that if they were confronted with either of these imaginary objects, they would have much more important things to worry about than what to call them (Dennett, 1968, p234). Their whole scientific picture depends on there being a deep regularity between the alignment of atomic dipoles in magnetic domains and iron-attraction, and the "fact" that it is logically possible to break this regularity is of vanishing interest to them. If they are "logical behaviorists" about magnets, this is no doubt due to William Gilbert's early phenomenological work in the 17th century, which established the historical priority, if nothing else, for the classification of magnets by what they do, not what they have inside. (He built upon, and improved, the folk physics of magnets, in short.) What is of interest, however, is the real covariance of "structural" and "behavioral" factors--and if they find violations of the regularities, they adjust their science accordingly, letting the terms fall where they may. Nominal essences are all the essences that science needs, and some are better than others, because they capture more regularity in nature.

In "Do Animals have Beliefs?" (forthcoming), I say, commenting on a point of agreement between Fodor and me:

We both agree that a brain filled with sawdust or jello could not sustain beliefs. There has to be structure; there have to be elements of plasticity that can go into different states and thereby secure one revision or another of the contents of the agent's beliefs. (p.116)

Doesn't this passage concede everything McLaughlin and O'Leary-Hawthorne have been pressing on me? When I say "could not" and "have to," am I speaking of "conceptual" or "nomological" necessities? I am speaking of serious necessities. If I ever encounter a plausible believer-candidate that violates them, what to call it will be the least of my worries, since my whole theory of mind will be sunk.

So why do I lean towards "logical behaviorism" and away from the specifics of internal activity and structure that McLaughlin and O'Leary-Hawthorne go to such lengths to highlight? For the reasons that Lynne Rudder Baker explains so well in her essay. Like Gilbert, I start with folk theory, which is remarkably robust in the case of folk psychology. It is a discovered fact, already well confirmed, that "peripheral narrow behavior" of the sorts commonly observed by everyday folk is readily predicted and explained by folk psychology. Thus the order of explanation is from outer to inner, not vice versa. We want a theory of the innards that can account for all that regularity. It might have gone otherwise; it is logically possible, I suppose, that we could have found "belief-boxes" in people's heads that causally explained their behavior, and well-nigh identical "belief-boxes" in the cores of redwood trees that were entirely inert. We would then have put a premium on explaining that regularity of internal structure, and let the differences in behavioral consequences tag along behind. But we didn't find any such thing. It is not just logically possible but already demonstrated that there are in fact many internally different ways of skinning the behavioral cat, while it is at best logically possible, and Vanishingly (Darwin's Dangerous Idea, 1995, p.109) unlikely, that we will ever encounter Manthra, or anything else that is an internal twin lacking the behavioral prowess. Endnote 14

This all depends, of course, on how closely we look at the innards for signs of similarity. How different do internal ways have to be to count as different? McLaughlin and O'Leary-Hawthorne see a contradiction between my various positions on behaviorism, and I guess they are right. I should have explained why I thought that the difference between molecular and molar behaviorism didn't amount to anything important, rather than burking the distinction altogether. Of all the molecular differences that there might be, the only ones that would make a difference to psychology (as ordinarily understood) would be those that made a difference to the "peripheral narrow behavior" that is predicted and explained by folk psychology.

Consider: Tweedledum and Tweedledee both hear a joke and both laugh; both also would laugh at various other jokes, would find others unfunny, etc. Nevertheless their overall joke-getting machinery has some differences--differences that would never show up in any peripheral behavior. These differences are clearly (I would think) below the level of psychology. In particular, these differences would not license a different attribution of belief. Start with what is probably a safe limiting case. Tweedledum's brain makes somewhat different use of potassium in its regulation of axonal transmission than Tweedledee's brain does. Otherwise, their brains always "do the same thing"--they are not quite molecular behavioral twins, but pretty close. Though not molecular behavioral twins, they are nevertheless psychological twins, for the differences are just too fine-grained to show up in interesting psychological differences--such as different belief-attributions on anybody's story of belief attributions. Suppose next a much larger-scale difference: Tweedledum and Tweedledee have entirely different subcognitive systems of face-recognition; one relies on a sort of feature-detection checklist, and the other on some global, holistic constraint-satisfaction scheme (Brainstorms, 1978, pp23-28). Now, we may suppose, they do exhibit different psychological profiles (at least in relatively abnormal circumstances): under experimental conditions, one can readily recognize faces that the other cannot, and hence they will not share just the same beliefs. If, contrary to plausibility, their radically different "face-recognition modules" had exactly the same competence under all experimental conditions, we would see the difference as an interesting physiological difference, but too fine-grained to "count" as psychology. But where we draw the line is not a big deal, one way or the other.

Some philosophers may still think that in spite of all this, Blockheads illustrate an important principle, so before taking my leave of Blockheads I cannot resist pointing out that the "principle" relied upon by Block in his original thought experiment is mistaken in any case. One of the most intelligent things any thinking agent can do is plan ahead, engaging in what we might call temporally distal self-control. Anticipating that when push comes to shove at some later time, it may be difficult or impossible to Do the Right Thing--to figure out and execute the rationally optimal response to current circumstances--the wise agent arranges to tie his hands a little, and cede temporally local control to a policy figured out long ago, in cool, dilatory reflections "off line". Dieters, knowing their urges, arrange to locate themselves in places bereft of snacks, and when they later act in an environment that does not include "shall I have a snack?" as a live option, this is a feature of the environment for which they themselves are responsible, as a result of their own earlier intentional actions, not a mere external constraint on performance. The practical navigator, John Stuart Mill reminds us (in Utilitarianism), goes to sea with the hard problems of spherical trigonometry and celestial motion pre-computed, their answers neatly stored in a rather large (but portable) lookup table. It is thus no sign of mindlessness, but rather of foresight, if we encounter the navigator mechanically determining his position by looking up the answers, swiftly, in a book.

We think of Oscar Wilde as a great wit. It would no doubt diminish his reputation considerably if we learned that he lay awake most nights for hours, obsessively musing "Now what would I reply if somebody asked me. . . , and what might my pithy opinion be of . . . . ?" Suppose we learned that he patiently worked out, and polished, several dozen bon mots per night, ingeniously indexing them in his memory, all freshly canned and, if the occasion should arise, ready to unleash without skipping a beat--for brevity is indeed the soul of wit. They would still be his bon mots, and their funniness would depend, as the Polish comedian said, on timing. Timing is important for almost any intelligent act--which is why it is intelligent to anticipate and pre-solve problems wherever possible. Wilde's brute force witticism-production scheme might disappoint us, but in fact it draws attention to the fact that all intelligent response depends on costly "R and D", and it doesn't make much difference how the work is distributed in time so long as the effects are timely.
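The indifference of intelligence to when the work gets done can even be put in a few lines of code. The sketch below uses a stand-in problem (not Mill's actual spherical trigonometry; the function and table are invented for illustration): the same answers can be computed laboriously on demand or precomputed into a table and merely looked up, and nothing distinguishes the two but the timing of the labor.

    # The same "R and D" distributed differently in time: computed on
    # demand, or precomputed ashore into a lookup table and consulted
    # swiftly at sea. A stand-in problem, not Mill's actual case.

    import math

    def position_fix(angle_deg: int) -> float:
        # The slow, online computation, done when the question is asked.
        return round(60 * math.sin(math.radians(angle_deg)), 4)

    # The navigator's almanac: the hard work done long in advance.
    ALMANAC = {a: position_fix(a) for a in range(360)}

    def position_fix_from_table(angle_deg: int) -> float:
        # The swift, "mechanical" lookup. Same answers, earlier labor.
        return ALMANAC[angle_deg]

    # The two agents are behaviorally indistinguishable, answer for answer.
    assert all(position_fix(a) == position_fix_from_table(a)
               for a in range(360))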

So, contrary to Block's guiding intuition, discovering that some Turing Test contestant was (one way or another) looking up the responses in a giant lookup table should not at all rule out the hypothesis that this was the manifestation of an intelligent agent at work. Local inspection would perhaps often leave us in doubt about who the intelligent agent was (or who they were), but we should have no doubt at all that the witticisms on the transcripts were the product of intelligent design, responsive to the meaning of the inputs, and merely removed in time: the problems were solved in advance, in the hypothetical. Intelligent design is the only way witticisms can be made. Am I saying it is actually logically necessary that any such giant lookup table of clever responses would have such an etiological history? Heavens no. A cow-shark could give birth to one. But in our world, the only way anything will ever pass the Turing Test is by being an intelligent, thinking agent.

Mark Richard provides a close encounter that is long overdue, confronting my "pretty pernicious instrumentalism" with a relentless challenge from one of those who think that the way to make a proper theory of belief is to construct and defend formal definitions of its terms. I turned my back on the efforts of the Content Mafia (otherwise known as the Propositional Attitude Task Force) in 1982, after publishing "Beyond Belief," in which I gave my reasons for rejecting their methodology and enabling assumptions. The tradition has continued in force without my blessing, of course, and few participants have felt the need either to respond to my criticisms, or to show how their way of philosophizing shows what is wrong with mine, so it is high time to see how the scales balance today.

Richard offers a three-pronged attack on my account of believers as intentional systems: I cannot solve the lectern problem, he claims, and two avenues which might seem to offer escape hatches for me, Stich's attempt to distinguish beliefs from sub-doxastic states via a condition of "inferential promiscuity," and Evans' Generality Constraint, turn out to be flawed. If, as Richard notes, I can't adequately answer the question "What isn't a belief?" I can't answer the question "What is a belief?" either. Not a good verdict for a theory of belief.

I claim to solve the lectern problem, as Richard observes, by showing how, when predicting lectern "behavior," the intentional stance gives a predictor no purchase over using the physical stance. "That no single event is unpredictable from the Laplacean perspective does not imply that every behavioral pattern is perceptible from the perspective," Richard notes, but he then tries to shoehorn my position into the assertion that such behavioral patterns can be identified with the "instantiation of a ceteris paribus law of property instantiation by an individual" [p.12]. No, the impressive patterns cannot be reported in single generalizations of the sort Richard illustrates; the patterns that inspire adoption of the intentional stance consist in the success (with n% noise) of a myriad of such ceteris paribus generalizations. No one predictive success counts for much at all--witness the lectern's readily predicted null behavior.

The predictive power of the intentional stance does not derive from our having induced kazillions of psychological "laws" which we are reminded of whenever we see their antecedent conditions being satisfied. Where would all these "laws" come from? We surely aren't taught them by the score. Rather, we effortlessly generate our predictions from an appreciation of the underlying normative principle of intentional stance prediction--rationality. What we need the strategy to explain is our power to generate these predictions--describe these patterns--ad lib and ad hoc in any number we wish, and find the vast majority of them to be predictive way better than chance. That's why I spoke of intentional systems whose behavior is "voluminously" predictable, a theme Richard notes in passing, but underestimates.

Richard then goes on to interpret my claim about the ineliminability of the intentional characterization of people (and other true believers) as the claim that the presumptive intentional laws governing lecterns, unlike the intentional laws governing believers, have equally predictive--indeed coextensive--"physical equivalents," and hence are eliminable. This misconstrues my case. I am quite willing to grant that some unimaginably long but finite disjunction of physically characterized conditions exhausts by brute force the entire predictive power of any intentional stance prediction whatsoever. So what? Such claims are not interesting; the same move could be used to strike down every biological category (for instance), since the Heat Death of the Universe, if nothing else, guarantees that "there is" a huge but finite disjunction of predicates constructible by Boolean means from terms drawn strictly from sub-atomic physics that would be exactly as predictive as "is a herbivore" or "is hemoglobin" or "reproduces asexually." And yet the patterns referred to by these biological terms are perfectly real.

The guaranteed existence of such an unwieldy predicate doesn't diminish the actual value, for purposes of prediction, of an intentionalistic predication, and it doesn't explain what an intentionalistic claim explains. I have improved on my Martian predictor example in Darwin's Dangerous Idea, pp.412-19, in the riddle of the two black boxes. I show that there are short, readily tested causal generalizations whose almost exceptionless truth would be manifest to super-Laplaceans but utterly mysterious and inexplicable by them unless they adopted the intentional stance. The fact that the super-Laplaceans could predict each instance of the generalization--could generate, given enough time, every disjunct in the unimaginably long list--would not satisfy them, since they would see clear as day in the totality of their predictions a simple regularity that they could not explain.

Why does Richard go to such pains to translate my thesis into the alien language of "ceteris paribus laws" and then interpret its central claim in terms of the non-existence of equivalent sentences composed in non-intentional vocabulary? The reason, I think, is methodological: Richard simply cannot use the tools of his trade unless he can first turn the object of scrutiny into such a claim. This methodological imperative comes out more sharply when we turn to his painstakingly constructed arguments against Stich's distinction, and Evans' Generality Constraint. Here he helps himself on several occasions to the tempting surmise that there is a language of thought, even though he concedes that this hypothesis may not, in the end, make much sense when applied to human believers. Why does he do this? Because he cannot construct his arguments without it. He is not alone, but his forthrightness on this occasion helps to underscore a point I have made in the past: all the fine arguments spun from the intricate examples posed by the Propositional Attitude Task Force depend on isolating rather special cases of what I have called opinions--linguistically infected states quite distinct from beliefs--and showing how, if these presumably clearly identified propositional attitudes are held constant in imagination, troubles can be raised. Without the language of thought as a crutch to keep the "fatal" examples from toppling over under the weight of their often bizarrely top-heavy loads of specific content, there would be no research program here at all. Endnote 15

Even if our thought is not invariably realized in a linguistic medium, the existence of something whose thought was invariably so realized, and in whom we would identify the possession of concepts with lexical mastery, isn't impossible. [p.36-7]

Or, rather, such a being better not be impossible, since a cottage industry of philosophical research depends on it. Part of my evidence for this claim is the studied indifference of these philosophers, almost without exception, to the efforts by various researchers in Artificial Intelligence and cognitive psychology, to construct and defend models of belief that do utilize something like a language of thought. Is Douglas Lenat's CYC project the sort of entity Richard has in mind? It is a "belief box" containing millions of hand-coded propositions, and insofar as it has any concepts at all, it is in virtue of the "lexical mastery" provisioned by all those carefully wrought definitions, as interanimated by its attached inference engine. Is CYC a believer? The general run of opinion in cognitive science is that CYC is a brave attempt at an impossible project. At the very least, the burden of proof is on those who think that it is possible for something we would recognize as a believer/thinker to be composed as Richard assumes. (Of course it is possible to construct large boxes of interanimated sentences--CYC is an actual instance--but few would think that a theory about which sentences appeared where under what conditions in such boxes would be a theory of belief. That is, however, one way of reading the underlying assumption of a language of thought.)

Richard appeals at one point to the supposition "I think in English," and at another point tells us of Jane, who believes "Twain's here" expresses a truth, but doesn't realize that Mark Twain is Clemens. Later he alludes to Smith, the poorly integrated bi-lingual, and finally to Jan, the bi-lingual Dutchman. He needs these special cases because he has to be able to point to propositions crisply "identified" as only a specific sentence in a particular language can do (one can speak about sets of possible worlds, but the only practical way of saying which set you have in mind is to go piggyback on a specific sentence). For instance, Jan's belief that lions are in zoos has to be identified with a specific sentence in one of Jan's languages of thought, so it can be clung to, as one of Jan's beliefs, in spite of the evidence that Jan is really a bit dense about lions and zoos, so dense that he can't even "think the thought" in his other language of thought. Richard tells us the point of the exercise: "What Jan provides us with is an example of a believer for whom the Generality Constraint fails." [p.41]

The grandfather of all such cases is Kripke's (1979) Pierre, who believes--well, what does he believe about London? While hundreds of pages have been published about Pierre, I have not bothered adding to them, since the proper response seemed to me to be so obvious (Dennett, 1987, p.208n). Thanks to Kripke's clear setting out of the conditions under which Pierre fell into his curiously ill-informed state, we know exactly what his state of mind is. What is the problem, then? The problem is saying, formally and without fear of embarrassing contradiction elsewhere, exactly what Pierre believes. Which propositions, please, should be inscribed on Pierre's belief-list, and how are they to be individuated? Well, it can't be done. That's the point of the Pierre case; it neatly straddles the fence, showing how the normally quite well-behaved conditions on belief pull against each other in abnormal circumstances. What should one do in such a dire circumstance? Chuckle and shrug, and say, "Well, what did you expect? Perfection? Pierre is an imperfect believer, as we all are."

How can I say we know exactly what Pierre's state of mind is while cheerfully admitting that we cannot say, exactly, what his state of mind is in terms of propositional attitudes? Simple: propositional-attitude talk is a hugely idealized oversimplification of the messy realities of psychology. Whenever push comes to shove in borderline cases, its demands become unanswerable. That is my pretty pernicious instrumentalism showing, I guess. I don't call my view instrumentalism anymore, but whatever it should be called, my view is that propositional attitude claims are so idealized that it is often impossible to say which approximation, if any, to use. There is nothing unprecedented about this: biologists shrug when asked whether herring-gulls and lesser black-backed gulls are truly different species (Darwin's Dangerous Idea, p.45), and electrical engineers are unperturbed when you point out that it is quite possible to take a perfectly good FM tuner, and, by making a few minor revisions, turn it into something that is maybe a genuine but lousy FM tuner and maybe not an FM tuner at all. How close to the (ideal) "specs" does something have to be to count as a genuine FM tuner? What if it can receive only one station? What if it tends to receive two stations at a time? What if a cow-shark swallows it, and its stomach acid turns it into a television set?

The various predicaments that Richard treats as counterexamples to theories could better be considered to be shortcomings in the particular believers, fallings-short of the ideal of inferential promiscuity, or of Generality, for instance. (Pierre is a true believer, of course, but a decidedly sub-optimal one. Believers aren't supposed to get themselves into the sort of epistemic pickle Pierre has blundered into.) Since all believers fall short of the ideal, Stich's useful idea about how to tell beliefs from other "subdoxastic states" should be treated as a desideratum of beliefs, not a litmus test. Then we can see a gradation of cases, from truly embedded or encapsulated subdoxastic states to more and more "movable" and inferentially available states. The question of how, in the species and in the individual, this transition to more and more versatile cognitive states occurs is fast becoming a major theoretical issue in cognitive science (see, esp., Clark and Karmiloff-Smith, 1993). That inquiry wisely ignores the question of how to define belief formally. If, on the other hand, you insist on setting up a definition of belief as a set of necessary and sufficient conditions, in the fashion that Richard assumes obligatory, you merely guarantee that there are no true believers, not among ordinary mortals.

So Richard is mistaken in thinking from the outset that I take on the obligation to offer a "principled account of the distinction between (having) propositional attitudes, as against (having) psychological states which, though they produce and regulate behavior, and can be assigned informational content, are not propositional attitudes." [p.2-3] And hence he is mistaken about the role that either Stich's or Evans' claims might play for me, whatever role they have played in the work of others. I myself have always thought that the Generality Constraint nicely captured the ideal--the same ideal Fodor captures by speaking of belief fixation as "Quinian and isotropic." It is an ideal no believer meets but all--all worthy of the name--approximate. One of the best arguments against CYC-style models of belief could in fact be put thus: Since it is at least very hard (and maybe impossible) for them to meet Evans' Generality Constraint, even in approximation, there must be some other way of organizing the innards of a believer that accounts for the fact that believers are in general quite able to honor that constraint.

The trouble with the tools of the trade of the Propositional Attitude Task Force is that they cut too fine! Propositions are abstract objects, and (according to theory) just as distinct and well-behaved as, say, integers. If propositions measure psychological states the way numbers measure physical states (as Paul Churchland (1979) has noted), then the belief that p is not identical to the belief that q if p is not identical to q. But the principles of propositional identity are tied to sentence identity in a language. In theory, of course, proposition identity can be specified in terms of sets of possible worlds, but in practice, the way such a set is referred to is as the set in which a particular (English) sentence is true. In reality, propositions are, for this reason, more like dollars than numbers, and the precision aspired to is an illusory goal (The Intentional Stance, p.208).

Lynn Rudder Baker gives a wonderful account of the reasons why the patterns discernible from the intentional stance should not be assumed to be repeated, somehow, in the brain. In this regard I especially commend her discussion (fn58) of the bogus question about the location of the money-making. As she says, "Such questions are not serious spurs to inquiry." But other rather similar questions are. It is the conflation of the non-serious ones with the serious ones that causes a lot of the confusion. As Alan Turing noted, in one of his many prophetic asides,

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localize it. (1950, p.447)

The reality of consciousness does not require its localization in the brain, but it still depends on features of brain activity, and if we want to confirm or disconfirm hypotheses about specific conscious experiences, what we need to test is the truth of the claims that constitute somebody's heterophenomenology. "It occurred to me that winters in Vermont are long" is not about the weather, and I might be wrong in asserting it. Nothing of the kind may have occurred to me. (Baker's analysis of this case would be much helped by honoring David Rosenthal's distinction between expressing and reporting: in uttering this sentence, I would be reporting a lower-order thought--about the weather--and thereby expressing a higher-order thought about my mental life.)

Heterophenomenology exhausts the intentional stance theory of consciousness, but we want more (and so we should). Consider a parallel: there is undoubtedly a real pattern in the tales told (and believed) these days by self-styled victims of satanic ritual abuse, but are any of their beliefs true? We'd like to know. Similarly, there is certainly a real pattern in the tales told (and believed) by subjects about what occurs to them at various times, what they "do" in their minds at various times, and we'd like to know which of these beliefs of theirs are true. That is where "brain-mapping" comes in. Baker sees a deep tension between the intentional stance and this brain-mapping move, mainly because she misinterprets me as thinking the brain-mapping will be a "deeper" theory (in her terms), and thus non-intentional. Not at all. The theory of content I espouse for the whole person I espouse all the way in. The neurobiological theory of content is homuncular functionalism, to dress it in its most vivid metaphorical costume, and hence the very same principles of interpretation are used to endow sub-personal parts with contents as are used to endow whole persons. (David Rosenthal's interpretive "hypothesis" on this score is thus correct. Since he has to work to arrive at this position, and Baker misses it, I cannot have done a proper job of expounding it.) The way in which personal-level attributions of belief and other intentional properties get confirmed (in the crunch) by sub-personal attributions of (non-ordinary) intentional properties is roughly parallel to the way in which one might confirm one's attribution of culpable motives to, say, the British Empire, or the CIA, or IBM, by discovering a pattern of beliefs, desires, and intentions among the agents whose joint activity composes the actions, beliefs and intentions of the super-personal agent (see the discussion of Carol Rovane, at the end of this essay, for more on this).

So in the case of Eve, the story goes like this. Eve expresses the higher-order thought "I was suddenly conscious of the fact that I was not alone in the house," thereby reporting (truly or falsely) that she had a certain first-order thought to the effect that she was not alone in the house. Did she? We'll have to look at our record of what went on in her brain at the relevant time. Hmm, sure enough, here's a brain event that had the content (roughly): "Who's there!?!?!" That's close enough. We confirm her report, in this case. But we might not have confirmed it. We might have found circumstantial evidence to the effect that Eve has rationalized the whole event, and wasn't the least bit driven, at the relevant time, by contents concerning the presence of others. Some of this evidence might be our secret videotape of her externally visible behavior at the time. (We see her humming contentedly throughout the relevant period, right up to the time when she picked up the phone and began answering our questions about her recent phenomenology. There is no sign of apprehension, no abrupt change of trajectory, and the record of heart rate and skin conductance shows no alarm.) But we might actually get some even better (because closer to the gist of her self-report) evidence from our neurocryptography unit, which, applying the intentional stance to Eve's brain-parts, has tentatively identified a homunculus whose duties include signalling a general fire drill whenever it detects the presence of another--it's hooked up to the vertical symmetry detector in the vision system, for instance (CE, p.179). That's the sort of story I had in mind. And it uses the intentional-stance theory of content all the way in.

At one point [p.9], Baker wonders what I would conclude if "the neuroscientist could find no brain state or process with which to identify the putatively conscious belief." And she says either alternative would leave half of my project in the lurch. It's worse than that: paraphrasing the physicists who were confronted with the non-magnet (if that's what it was), I'd have a much worse problem than which half of my project to cling to. I'd be worried that my whole project was on the verge of collapse. But she is right that in this terrible eventuality, I'd cling to the intentional stance aspect (which I know works very well) and do my best to explain why so many people have so many false beliefs about what is going on in their minds. Why would I be so eager to show that people were mistaken in these beliefs? Because my whole worldview resists, as an utter last resort, any alternative explanation--e.g., a frankly dualistic explanation--of these undeniably real patterns. To pursue the parallel, suppose we can't confirm any of the people's stories about satanic cult abuse. No physical evidence of the events reported can be found. What would we do then? We would try to show that these folks, however sincere in their reports, were just wrong. But if the pattern defied that diagnosis, as it might, in principle, we'd have to start toying with the idea that the satanic ritual abuse happened in another dimension or some other equally extravagant departure from conservative physics and metaphysics. We're talking seriously spooky last resorts, folks!

Stephen Webb's dexterous navigation of the shoals of Plan One and the various Passes is largely for nought, since it misconstrues the intentional stance in a different way. It overlooks a key feature of the intentional stance: one is allowed to revise one's attributions in the light of falsified predictions. As soon as one hears the professor's surprising (in the circumstances) words, one revises the attribution: she must not have seen her keys, one easily concludes. So behavioral evidence has all along been front and center in the intentional stance. What is the heterophenomenological method, after all, but an application of this obvious principle? One patiently gathers behavioral evidence (largely but not entirely verbal behavioral evidence), hypothesizing interpretations and refining one's attributions until, in the limit, they account for ("predict," make sense of) all the behavior. The standard limit myth may be invoked: the beliefs and desires (and other intentional states) of an intentional system are those that would be attributed to it by an ideal observer with a God's-eye-point-of-view. Prediction and retrodiction are all one to the intentional stance, which explains as it attributes. It would not be a very useful stance if it couldn't be harnessed reliably for real-time prediction, but after-the-fact explanation of behavior is hardly off limits to it, and the behavior itself is obviously a main source of evidence.

Webb patiently tries to corral me at various other choice points, overlooking reasons I have already given for resisting his alternatives. Much of what I have said above in response to McLaughlin and O'Leary-Hawthorne, and Baker, applies here as well. To reiterate: The skin is not that important as a boundary, as Skinner famously conceded (with my concurrence), so "internal" behavior is in principle not off limits to the interpretive exercise, and it can, in particular cases, crucially supplement the stock of relevant data, but it gets interpreted by the same rules as behavior outside the skin: the carton of cigarettes listed on the scrap of paper has the same evidentiary status as the carton of cigarettes referred to by some cerebral shopping list, if and when science can discern it. I have pointed out that the line between the design stance and the intentional stance is not sharp in any case; sub-personal cognitive psychology is a design stance enterprise conducted with the aid of liberal intentional stance characterizations of homunculi.

4. Superficialism vs. Hysterical Realism

Σῴζειν τὰ φαινόμενα. [Save the phenomena.]

--Plato

Save the surface and you save all.

--Sherwin-Williams

Thus Quine opens his Pursuit of Truth (1990). Georges Rey is surely right that the key to the profound disagreements he and other Fodorians have with me is that I espouse a view he deftly labels "superficialism." Who would ever want to be called a superficialist? Well, Quine, for one, might not shrink from the label. One good term deserves another. In the past I've called Fodor an "industrial strength realist" but the connotations of that term are all too positive, and I think Rey's blithe self-description as a "commonsense" realist is amusingly belied by the convolutions of doctrine he presents to us, so I'll change the epithet. Rey is an advocate of hysterical realism. Now you can choose: which would you rather be, a hysterical realist or a superficialist like me?

Rey is far from being alone when he responds to the siren song of hysterical realism. The defense of what Richard Rorty mockingly calls "Our Realist Intuitions" is a fervent activity in many quarters, and while I am not hereby endorsing Rorty's brand of resistance, I do think he has touched one of the untouchable sore spots in contemporary philosophy. Hysterical realism Endnote 16 deserves its name, I will argue, because it is an overreaction, a rationally unmotivatable spasm brought on by peering into the abyss of certain indeterminacies that really should not trouble anybody so much.

For Rey the particular topic is consciousness, a contentious topic, but we can begin cautiously. I trust we can all agree on this much: we find people asking questions about consciousness, using ordinary language, of course, with its ordinary presuppositions enshrined. Then once we start examining the real, causal complexities of what happens in brains, we find, I claim, a strikingly poor match between the pre-theoretical presuppositions and the messy details. Here is another place where a disagreement over empirical facts looms large. Those who have not looked very closely are apt to think that I am exaggerating when I claim that this is a serious theoretical impasse. But since this isn't the place to review the empirical details, let's just suppose for the sake of argument that I am not exaggerating. Faced, then, with the striking non-alignment of everyday talk with scientific talk, what, as philosophers, should we do? My view, "superficialism," holds that we should relax our grip on the conviction that our everyday terms must find application--one application or another--in any acceptable solution. There may be, I say, no fact of the matter of just which events are--to use the ordinary term--conscious, and which are not (in the rather special cases raised by the Orwell vs. Stalin impasse). Fie! Scandalous irrealism! Verificationism!

Rey exposes my position with thorough scholarship. He relentlessly tracks down virtually all the points of confrontation, and he has organized the whole in proper marching order. Time and again as I read his essay, I found myself thinking "but I've dealt with that" only to turn the page and find the very passage I had in mind duly cited and discussed. There is one exception. To advert to what may be the only relevant passage in my work Rey doesn't discuss, the situation with regard to consciousness is like that confronted by the theorists who are asked what fatigues are (Brainstorms, p.xix-xx). The natives in this imaginary land, you may recall, have a curious doctrine of "fatigues"--too many fatigues spoil your aim, a fatigue in the legs is worth two in the arms, and Mommy, where do fatigues go when I sleep? We modern scientists arrive in their midst and they ask us to solve their traditional mysteries about fatigues. The hysterical realist takes on the thankless task of finding something real in the body to declare to be the fatigues so favored by the local folk. And I say "Gimme a break! We already know enough about fatigues to know that they are not a good category for precision science; you simply cannot motivate a realist theory of fatigues. There's nothing left to discover that could be relevant to what the right theory of fatigues would be." That is superficialism, by definition, but the question remains: Is this in fact a superficial response? Does it fall prey, as Rey suggests, to my own charge of mistaking a failure of imagination for an insight into necessity?

I say that we already know enough empirical facts about what consciousness isn't to know that the ordinary concept of consciousness, like the concept of fatigues, is too frail; it could never be turned into the sort of scientific concept that could wring answers to the currently unanswerable questions. On the contrary, says the hysterical realist, we can dimly imagine that science will someday uncover grounds, currently unsuspected or even unimaginable by us, for settling such questions as the Orwell vs. Stalin question, or the question of whether some subset of human beings might turn out to be zombies after all, etc. "Never say never," is the advice offered by Owen Flanagan (1992, p. 14-15).

I'm all for open-mindedness and scientific optimism, but surely Rey, Flanagan, and the others would agree that there are some occasions when the jig is up--when it is just silly to hold out hope for such a scientific revelation. Consider Einsteinian physics. Einstein noted that it is impossible to distinguish by local observation between a gravitational field and an accelerated frame of reference. This led him to postulate the equivalence that is at the heart of relativity theory. Now insert the "realist," who says "Oh, just because you can't distinguish the two doesn't mean they aren't different! There might be a difference that is indistinguishable by any current test! Never say never!" Yeah, there might be, but in the meantime, tremendous progress is made by concluding that there isn't (Cf. Field, 1974, 1975). I am proposing similar simplifications: since you can't distinguish between the Orwellian and Stalinesque models of meta-contrast, or between a zombie that acts just as if it's conscious and a conscious being, they are equivalent.

If Rey et al. agree with me that the superficialist response is sometimes appropriate, the question that divides us is whether the empirical facts show that consciousness is such an instance. I say yes, and what some philosophers don't appreciate is that when I say this, I'm not just putting forward a philosophical thesis meant to ride piggy-back on whatever the current scientific consensus might be. I'm making a fairly bold proposal about empirical theory in cognitive neuroscience, and basing it on a fairly detailed analysis of a wide variety of experimental and theoretical results. The Multiple Drafts Model is deliberately sketchy about many details, but it is a scientific model, not just a philosophical toy.

Rey sketches his alternative, the CCC theory of consciousness, and suggests that in many regards I could go along with his way of speaking--all my multiple drafts being carried along as sentences of LOT in all his registers. And he acknowledges that his theory will have to be--shall we say--innovative in the way it settles many of the issues left open in the traditional understanding. But he imagines that there might be good scientific reasons in favor of such innovations, though of course he can't now say what they might be. This is one place where I think he has clearly underestimated the grounds I have given for my view. It is in the details of the account of the problems about time that the incoherence of Cartesian materialism emerges, and when Rey says that the problems of so-called temporal anomaly can be "easily sorted out" by a three-fold distinction in his CCC theory, his optimism counts for nothing till we see just how this is to be accomplished when he turns his philosophical theory into a scientific theory with some detail. Endnote 17 David Rosenthal takes the first few steps of this project in his own attempt to describe a possible brain mechanism that could resolve the Orwell-Stalin issue, and arrives at the conclusion that his mechanism "blurs the contrast between Stalinesque and Orwellian. . . There is reason, therefore, to be suspicious of a hard and fast distinction between Stalinesque and Orwellian mechanisms." [p.21] (More about this later.) Rey admits he hasn't a clue whether Orwell or Stalin would prevail in his CCC theory, but sees no reason "to leap to the conclusion" that there is no fact of the matter. My claim was no leap; it was brought home by a careful examination of the possibilities. Endnote 18

(This is a good opportunity to point out that Flanagan's discussion similarly underestimates the difficulties raised by the Multiple Drafts Model. He cites (p.15-16) the Logothetis and Schall experiment with macaques, but it is systematically inconclusive: Suppose we find that the activity in the STS region in macaques (parallel to MT in the human cortex) is also found in human subjects who deny awareness of the shift to which the macaques have been trained to respond, while making some other behavioral indication (e.g., a saccade) that they have detected the shift? Would this be evidence showing that MT activity was not, after all, associated with consciousness? Flanagan overlooks my discussion of the issue:

There is a region in the cortex called MT, which responds to motion (and apparent motion). Suppose then that some activity in MT is the brain's concluding that there was . . . motion. There is no further question, on the Multiple Drafts model, of whether this is a pre-experiential or post-experiential conclusion. It would be a mistake to ask, in other words, whether the activity in MT was a "reaction to a conscious experience" . . . as opposed to a "decision to represent motion" [in conscious experience]. (CE, p.128n)

And it is interesting to note that Logothetis and Schall themselves acknowledge the gap, and do not attempt to close it:

The interpretation of the results is by no means conclusive. The differential modulation of these STS neurons in response to rivalrous stimuli was evident much earlier than subjects typically resolve the rivalrous perception. Thus further processing is clearly involved, and the data do not exclude the possibility that the perception-related modulation observed in these neurons may be a result of feedback from higher centers. (Logothetis and Schall, 1989)

What would closing the gap entail? It would entail singling out some privileged corner-turning as the one that the conscious subject presided over or witnessed, but there is no such inner sanctum for such events to happen in. Flanagan has jumped to conclusions about the proper heterophenomenology of the macaques and its relation to activity in STS, begging the question.

Flanagan also speculates that either 40-hertz synchrony or the Squire and Zola-Morgan theory of hippocampal involvement might provide the leverage to resolve the quandaries I say are unresolvable, but these two popular themes are already tidily tucked in among the factors that are powerless to settle the questions, as a careful examination shows. (See, for instance, Jeffrey Gray's forthcoming article in BBS, presenting his model of the role of the hippocampus in consciousness, and my commentary, "Overworking the hippocampus.") Until you work through the details, it is indeed hard to see that these ideas utterly fail to provide any further leverage on the Orwell-Stalin question. And if they can't help, what could? It is no use just saying "Well, I can't imagine anything now that could upset your claim, but still, something could!")

I think there are abundant empirical grounds for eschewing Rey's proposed way of speaking about "registers" in the brain containing various sentences in a language of thought, but I'll go along with the gag this far: if Rey could motivate a categorization of all these registers into c-registers and u-registers, I would grant that he could thereupon handily discover, by brute inspection, facts of the matter where I say there are none. Some of these might turn out to be mighty strange facts--it could turn out that left-handers aren't conscious after all, or that the order in which successive perceived events become conscious in subjects is seldom if ever the order in which they seem to subjects to occur, or that people are conscious of multiple interpretations of every sentence they hear, without realizing it--but who ever said science couldn't surprise us? But precisely because such bizarre "discoveries" are not just possible but already upon us, the demand for a motivation for the proposed identifications is going to be very, very high. We already know that the tasks that would have to be normally accomplished by the imagined c-registers are broken up and distributed around to many different structures, asynchronously making their particular contributions. Singling out any one variety of these, or even some salient team of these, and labeling them the c-registers is going to look awfully ad hoc. Superficialism is attractive whenever the task of motivating a particular hard line looks hopeless.

Hilary Putnam's classic paper, "Dreaming and 'Depth Grammar'" (1962), did a number on superficialism from which it has never, till now, recovered. Norman Malcolm and the other mavens of ordinary language philosophy had raised their brand of "conceptual analysis" to a deeply regressive, hyperconservative pitch. Science, it seemed, could never discover anything that the Folk didn't already know about their phenomena. If science discovered anything truly surprising, it would have to be about something else--"you've changed the concept," as the saying went. Putnam showed how lamentably thin this philosophy of language was (see also Dennett, 1968), but I think the pendulum has swung too far. Used in moderation, the superficialist response to mismatches between folk psychology and academic psychology (like the mismatch between folk physiology--"fatigues"--and scientific physiology) is just the ticket.

Ordinary folk think that dreams are experiences that occur during sleep. Suppose science were to discover that the content-fixing series of neural events that generate the stories people tell on waking actually occurs, unbeknownst to their subjects, during waking life. At this very minute, let's suppose, your brain is composing ("having"?) the dream you will report on waking tomorrow. Now in this imagined eventuality, "what would we say?" Would we say "It turns out that dreams aren't experiences after all" or would we say "It turns out that there aren't any dreams at all" or what? The superficialist about dreams says that this may be an interesting question, but it is a question not of discovery but of policy, just like the fatigues case--or the magnets case. The hysterical realist says that it all depends on what the scientists have learned. If there is a deep enough theory (we can't imagine it now, of course), there could be conclusive grounds for the claim that scientists had discovered that dreams happen while you're awake, and are not experiences at all. But, says Malcolm, rising from the grave, now you're just changing the concept. You have discovered a scientific truth, but not a truth about dreams. The concept of a dream may not be a good scientific concept, but it is our very own. He would say the same about the alternative claim--the scientists had discovered that there simply were no dreams after all. Eliminativism about dreams is as myopic and needlessly tendentious as proclaiming the weird identification. The main point, in any event, is that if dreams (as ordinarily understood) turned out not to have a good fit with scientific discoveries, then it would be a gratuitous exercise in special pleading to try to force one scientific identification of dreams or another; whatever we decided was best to say, our decision would not be a scientific discovery, but a more or less political decision about how best to avoid misunderstandings.

It must count against any novel scientific recategorization of an ordinary term that it creates huge dislocations of common understanding (dolphins turn out not to be fish, but when we are told that tomatoes are fruit, not vegetables, we blithely conclude that scientists have some other, technical concept of fruit in mind, not the ordinary one). That's just a point about language--a close kin to Quine on the constraints on radical translation, and also, as Rey notes, to Malcolm and Wittgenstein on "changing the concept." Endnote 19 Putnam is commonly thought to have put a stop to this line of thought, not just in his trouncing of Malcolm, but in his more recent insistence (1975) that, thanks to the division of linguistic labor, natural kinds can be what people are really talking about, even when, as individuals, they are incompetent to discriminate between alternative possibilities. The fundamental idea is that we can escape our local epistemological limitations by ceding our referential authority to Nature itself: What I mean (whether I know it or not) by my own word "water" is whatever natural kind the stuff I paradigmatically call "water" turns out to be. This is a fine idea when Nature cooperates--or rather, to put the responsibility where it belongs, when our linguistic community happens to have hit the nail on the head (or close enough). But whenever our everyday terms carve Nature less well (in spite of our Putnamian goal of naming natural kinds) there is no forcing the issue.

It is thus the over-extension of Putnam's doctrine of natural kinds that is a bulwark of hysterical realism, an attempt to turn the nominal essences of science into real essences (Dennett, 1995). This comes out clearly if we contrast a case in which Putnam's doctrine looks plausible with one that does not look compelling at all. Suppose Twin Earth is just like Earth except for having shmust where we have dust--behind the books on the bookcase, along country roads during dry spells, etc. But surely, you protest, the concept of dust isn't the concept of a natural kind--shmust is dust, in spite of what anybody says! Exactly. It's a superficial concept, a nominal essence of scant interest or power. We already know enough about dust to know that science couldn't discover that dust was really something else--or that there wasn't any dust. Science could not uncover the secret nature of dust, because dust qua dust couldn't have a secret nature. In contrast, we already know enough about water, and gold, to know that they are natural kinds.

The question then that divides Rey and me might seem to be: are the concepts of folk psychology like the concept of dust, or like the concept of water? Rey thinks they are like the concept of water, good candidates for natural kindhood. I think we already know enough about many of them to know that even though they may aspire to name natural kinds (unlike the concept of dust), they aren't good enough to succeed. That is a difference of opinion arising from different readings of the empirical facts, but there is also an underlying philosophical disagreement. Rey, as a hysterical realist, thinks there is always the further question to be answered: which natural kinds do our terms in fact name?

This question cannot be forced, since natural kinds can be nested, an undeniable fact (though Putnamians have ignored it for years) that leads us right back to superficialism. Consider Putnam's standard case of water. Presumably, H2O and XYZ are subkinds of some larger natural kind. Putnam disguises this implication by calling his alternative "XYZ" and not, say, "X2O," which we would be more inclined to consider a novel variety of water, like D2O, deuterium oxide or heavy water. Endnote 20 How could H2O and XYZ not be instances of some single natural kind, given that they are as interchangeable in the physical world as Putnam requires us to imagine? Suppose, then, that H2O and XYZ are both instances of some broader natural kind, K. Which natural kind did the folk mean by their pre-scientific word "water"? Since they lacked any scientific purchase on the difference between H2O and K, and could not, ex hypothesi, distinguish them, there cannot have been grounds in their usage or understanding for favoring one over the other. Might we invoke a "general principle" to the effect that whenever people use a term with the understanding that it names a natural kind, they mean the term to refer to the narrowest natural kind that fits their historical usage? We might, but the arbitrariness of the principle will haunt us in cases in which isolation forces bizarrely narrow answers. Endnote 21 For instance, if there is life anywhere in the universe that is not carbon-based, such life forms could not correctly be called "alive" by unscientific earthlings, since the only life forms that have ever been called alive on this planet are exclusively carbon-based, and carbon-based life is surely a natural kind. Adopting the proposed repair to Putnamian doctrine, we would have to say that anyone who called a Martian alive would be making the same mistake as the person who called a glass of XYZ water. Not very persuasive.

So if we abandon the minimalist principle as arbitrary and unmotivatable, we are left with two choices: either there is a fact of the matter, but one that is systematically indiscernible from every possible perspective (now that's truly hysterical realism!), or there is no fact of the matter--it's a policy question at best--and we are back at superficialism. Try it. You'll like it. I find it a lot easier to swallow than hysterical realism.

5. Otto and the Zombies

As Joseph Levine says, "it's terribly difficult to get clear about just what is being affirmed and denied" in the qualia debate. His response is to meet my case against qualia with a methodical, sympathetic and accurate rebuttal, very usefully shining light on the issues from the other side. It might seem at first as if his botanizing of species of bold and modest qualophilia, reductionism and eliminativism is an indictment of the whole enterprise, showing it to be one more instance of philosophers playing burden tennis instead of engaging in a serious investigation. His use of the common philosophical diction of "available strategies" belies the fact that for him, as for me, the point in the end isn't to win, but to uncover the truth. Let me try to reframe the issue slightly. We begin as people with opposing hunches--it's as simple as that--and neither side knows just what to say. One side feels pretty uncomfortable with the prospect of a materialistic account of subjectivity, and the other side is pretty sure all the real problems with such a theory can be worked out. So they put their heads together (face to face and opposed), and see what happens. Strong moves by one side (e.g., "bold qualophilia") are readily rebuffed--not refuted once and for all, but made to seem gratuitous or extravagant, worth putting on the back burner for a while if not abandoned utterly--while more modest forays are explored. Out of this actually quite constructive interplay of opposing hunches, genuine progress is made, or can be made.

It is important to recognize this interplay or dialectic, for otherwise it can seem as if people are always talking past each other--or worse: deliberately attempting to confound the opposition. Robert Van Gulick, for instance, describes my method in the following terms: "like any good debater, he tries to saddle the opposition with as much questionable philosophical baggage as possible." [ms p6] That would not be a constructive move on my part, and I sincerely hope I haven't done that. Let's leave debating tricks to the debaters. What I have tried to do is to show that the "questionable philosophical baggage" comes along for the ride under surprisingly innocent-seeming circumstances. I can see how my arguing that somebody's apparently innocent "realism" or "mild qualophilia" had embarrassing implications might look like a debater's trick to somebody on the receiving end, but if that were all it was, it should not be admired or even tolerated; it should be dismissed as unhelpful and unserious shenanigans. It has seemed to some as if I am shooting down strawmen with arguments against (only) bold qualophilia, when in fact I aim some arguments--appropriately--against the bold views and others--appropriately--against the modest views. This comes out clearly in Levine's deft peeling of the onion, as he follows me down the path to what he considers Otto's "trap," acknowledging--and explaining--how each move on his side is neatly countered by a move on my part.

I will return to Otto, and Levine's attempt to escape with an intact view, in a moment, but first I want to highlight a feature of the interplay that comes out intermittently in Levine's discussion, but to which he does not draw explicit attention. This is what we might call unrecognized allegiances. In this phenomenon, people on one side or the other explicitly disavow any allegiance to a strong view as soon as a good objection to it is pressed, thereby allowing themselves to concede without further examination that the other side's arguments would indeed demolish that strawman, but precisely because they never bother to defend the strong view, they fail to see just how much they are giving up--and giving up for good--as they move to the more modest and defensible versions. Then they later unwittingly revert to an appeal to some feature that belongs only to the strong view, and we go round and round in circles. I believe this problem of unrecognized allegiances is a common foible, and one of my countermeasures is to set up vivid reminders of what one is renouncing--never to return. Figment, for instance. It is an attractive notion to qualophiles until I find a suitably abusive way of characterizing it, and I am always gratified when some brave qualophile admits that, yes, something along the lines of figment is just what she was hankering for. Well, you can't have it. Figment doesn't properly come up, of course, in discussions with modest qualophiles, who have officially renounced such extravagances, but without the frontal attack on it, it would, I am sure, continue to fuel the motivation of some modest qualophiles behind the scenes.

On his journey to Otto, Levine acutely describes the challenge of heterophenomenology, and sees that the only escape for qualophiles is to maintain "that conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer." [p.15] Leopold Stubenberg (1995) has seen the same cliff-edge looming, and resisted in the same terms. Here is my response (and note that its force is somewhat acknowledged, at various points, in Levine's discussion): You defenders of the first-person point of view are not entitled to this complaint about the "primary data" of heterophenomenology, since by your own lights, you should prefer its treatment of the primary data to any other. Why? Because it manifestly does justice to both possible sources of non-overlap. On the one hand, if some of your conscious experiences occur unbeknownst to you (they are experiences about which you have no beliefs, and hence can make no "verbal judgments"), then they are just as inaccessible to your first-person point of view as they are to heterophenomenology. Ex hypothesi, you don't even suspect you have them--if you did, you could verbally express those suspicions. So heterophenomenology's list of primary data doesn't leave out any conscious experiences you know of, or have any first-person inklings about. On the other hand, unless you claim not just reliability or normal incorrigibility, but outright infallibility, you should admit that some--just some--of your beliefs (or verbal judgments) about your conscious experiences might be wrong; in all such cases, however rare they are, what has to be explained by theory is not the conscious experience, but your belief in it (or your sincere verbal judgment, etc). So heterophenomenology doesn't include any spurious "primary data" either, but plays it safe in a way you should approve. Endnote 22

Levine's response to this impasse, like Stubenberg's, takes us right to Otto, who anticipates it, in his plea for "real seeming". And Levine notes, correctly I daresay, that if I am right when I say "there is no difference between being of the heartfelt opinion that something seems pink to you, and something really seeming pink to you," then there is "nothing left about which to argue." So we are closing in. (For more on real seeming, see below.) And it is true, as Levine says, that in my immediate response to Otto, I don't really argue for this claim; I just assert it. Elsewhere in the book, however, I do give grounds for believing it--in my own account of what consciousness comes to, and in my arguments about what no empirically realistic model of consciousness can tolerate: a Cartesian Theater. But I surely didn't make it clear enough why those considerations guaranteed my assertion to Otto on this occasion. Thanks to Levine, I can now repair that gap, for the issue is exposed with unprecedented clarity in his attempt to characterize the modest qualophile's "inability to provide an account of the mechanisms of first-person epistemic access." [p.27]

So suppose B is a state of conscious experience. I want to understand how a cognitive state, A, carries the information that B. It seems that in order for me to understand that relation, I must first understand how B is realized in those very physical mechanisms by which the information that B is to be carried to A. But, by the qualophile's own hypothesis, this understanding is not currently available. That is, I don't understand how B is itself realized in physical mechanisms. So, it follows that I also don't understand how information concerning B flows to A [my emphasis]. Hence, I don't have an account of first-person epistemic access. [p.28-9]

Think of what lurks in this "flowing to A." There is a (functional) place, A, which either "has access to" the information that B, or doesn't. How on earth does the information get there? These are the terms in which Levine's qualophile frames the issue. But since this is to be an account of first-person epistemic access, the place in question must be none other than the place where I reside, the Cartesian Theater. There is no such place. Any theory which postulates such a place is still in the grip of Cartesian materialism. What (and where) is this I? It is not an organ, a subfaculty, a place in the brain, a medium--or Medium (Dennett, 1993)--into which information gets transduced. My attack on the Cartesian Theater is among other things an attack on the very practice--illustrated here in an otherwise remarkably surefooted performance--of positing an unanalyzed "I" or "we" or "self" or "subject" who "has access" to x or y, as if we could take this as a primitive of our theorizing. Any sane account of the mechanisms of consciousness must begin with the denial of Cartesian materialism; and that leads irresistibly to the view that the "me" has to be constructed out of the interactions, not vice versa. This is the point of what I sometimes think is the most important, and under-appreciated, passage in CE:

How do I get to know all about this? How come I can tell you all about what was going on in my head? The answer to the puzzle is simple: Because that is what I am. Because a knower and reporter of such things in such terms is what is me. My existence is explained by the fact that there are these capacities in this body. (p.410)

But then what about the zombie problem? Levine is excellent on the zombie problem. In particular, he shows exactly why, as I have urged, there is really nothing left of modest qualophilia unless you hang tough on the conceivability of zombies. So, hanging tough on zombies is just what he does, with resourcefulness and an acute appreciation of the pitfalls the qualophobe has prepared for him. I hope no qualophiles find fault with his treatment, since he seems to me to have captured the dialectic and strategy of both sides just about perfectly. Certainly he has done justice to my campaign against zombies--except for one delicate matter, alluded to in his polemical closing, but not directly addressed. Levine deplores the defensive position into which qualophiles have been thrust by my attack on their belief in zombies. No, he says, truly modest qualophilia is not "a philosophically infantile obsession" and modest qualophiles "practice their puzzlement in a spirit of profound respect for science." I gather, in other words, that he finds my ridiculing of the belief in zombies to be unfair, at best a cheap shot. I confess that try as I might, I cannot summon up conviction for any other verdict: zombies are ridiculous!

By my lights, it is an embarrassment to philosophy that what is widely regarded among philosophers as a major theoretical controversy should come down to whether or not zombies (philosophers' zombies) are possible/conceivable. I myself try hard to avoid the issue, and the term, in discussions of consciousness with scientists, since I invariably find that any attempt at serious discussion of the zombie problem meets with ill-suppressed hilarity. This does philosophy and philosophers no good, and I deplore it just as much as Levine does. Clearly a massive public relations job needs to be done, and just as clearly I am not the one to attempt it, since I myself don't yet see how a philosopher as acute and surefooted and wise as Levine can stomach the position he staunchly maintains about zombies. I have helped drive him there, thinking the campaign would cure him. If he chooses instead to outsmart Endnote 23 me, then perhaps he himself should take on the delicate task of explaining to a general audience, not to philosophers, why the belief in zombies is not a reductio ad absurdum. His paper in this volume is a fine foundation, but it is still manifestly written for a philosophical audience, and even in it, he fails (in my biased opinion) to secure much of a leg for a zombie to stand on. But on the strength of his showing here, he can do it if anybody can.

I responded above to Robert Van Gulick's suggestion that I was cleverly trying "to saddle the opposition with as much questionable philosophical baggage as possible," by denying that this was a debater's trick on my part. Now it is time to address the substantive issue: can one be a phenomenal realist (in Van Gulick's sense) without taking on the bad baggage? I have said no, but he is not convinced. His paper usefully clarifies the conditions under which one can be a phenomenal realist, in his sense, but in the process it seems to me that he ends up with a position that is scarcely distinguishable from mine after all. He thinks, for instance, that Marcel Kinsbourne's "integrated field" theory might be a good empirical fleshing out of his phenomenal realism, but Kinsbourne's theory and mine are one and the same; we worked it out in collaboration, and so far as I know we do not part company--except by inadvertence or forgetfulness--on any of the issues, differing only in which aspects of the shared theory to emphasize at various moments. What keeps Kinsbourne and me from being phenomenal realists too, then?

Phenomenal realists believe there are important structural and functional differences between mental states with phenomenal properties and those without. . . . Phenomenal states for example seem to play an especially privileged role in the initiation of intentional behavior . . . On the structural side phenomenal states typically involve highly integrated representations that incorporate multi-modal information and rich networks of connections among interrelated items in the represented scene or situation. [p.17]

Kinsbourne and I certainly agree about the importance to consciousness of these "structural and functional differences," which is why they each get special treatment in the Multiple Drafts Model (e.g., the discussions of blindsight, "hide the thimble," and prosthetic vision in CE). But what work are "phenomenal properties" doing over and above the role played by the integration and the rich network? Kinsbourne and I wonder what we can be supposed to be leaving out. We insist upon the functional and structural differences, and on their importance. I go on to say, wearing my philosopher's hat, that these features are the very features typically misdescribed by philosophers as somehow "phenomenal."

What difference are we disagreeing about? Perhaps this one (leaning on the usual crutch): Kinsbourne and I would see no reason in principle why a robot (like the proverbial zombie that lacks phenomenal consciousness) could not exhibit both sides of the distinctions observable in blindsight--being unable to initiate or guide intentional actions by benefiting from the information in its scotoma, while showing the normal responsiveness, etc., to the visual information gleaned from the rest of its visual field. If so, then the difference could not, ex hypothesi, be a difference in phenomenal consciousness, the robot having none under the best of conditions. Would phenomenal realists dig in their heels here, and if so, how? Would they insist that no robot could have highly integrated, action-initiating vision? A daring and implausible empirical claim. Would they insist that any robot that did exhibit normal visual competence would show ipso facto that robots have phenomenal consciousness after all? That would be a clarification or revision of the meaning of "phenomenal consciousness" that would put Kinsbourne and me squarely in the camp of the phenomenal realists (see my discussion of Cog, in the first section of this essay). A third possibility is that the phenomenal realist would declare that in the case of such a robot, we wouldn't know (from all we've been told so far) whether this robot had, or didn't have, phenomenal consciousness to go along with its (otherwise) normal vision. But then what has happened to the importance of phenomenal properties? What leg would they then stand on?

There is one window through which the presumed difference can be clearly seen. Van Gulick thinks he has shown that

. . . there can be facts of the matter about experience that cannot be empirically detected. There may be a briefly transient fact about how experience is for the subject, but if the duration of that experience is insufficient to fix a belief or generate a report it will systematically elude detection. [ms, p.16]

Kinsbourne and I opened our joint paper by quoting a sentence from Ariel Dorfman's novel, Mascara, that was supposed to exhibit the dubiousness of this assumption:

I'm really not sure if others fail to perceive me or if, one fraction of a second after my face interferes with their horizon, a millionth of a second after they have cast their gaze on me, they already begin to wash me from their memory: forgotten before arriving at the scant, sad archangel of a remembrance.

In my discussion of Rosenthal and Block, below, I will respond further to Van Gulick's claim.

6. Higher Order Thoughts and Mental Blocks

David Rosenthal and Ned Block are both unpersuaded by the radical implications of my Multiple Drafts Model of consciousness, and their essays deal with many of the same issues, but they take opposite approaches to the task confronting them. Rosenthal looks closely at the MDM and attempts to show how to sever its connections to its most disturbing feature, "first-person operationalism" (FPO), in the process very usefully highlighting the reasons I have found for uniting them. Block, in contrast, turns his back on the details of the MDM, thereby confirming the folk wisdom that if you don't look at something, you can't see it.

Rosenthal, like Dretske, tries to establish something like a medium of representation (of "sensory content") midway between stimulation and the sort of (mis-)taking that is constitutive of how it seems (in at least one sense). They are both trying to find a home for what I call real seeming, in short. And once again, Rosenthal directly confronts a problem that is being underestimated by others. In his discussion of "Hide the Thimble," he notes that there is a prima facie problem with any view that insists, contrary to Betsy's first-person disavowals, that a "sensation" of the thimble is somehow part of her consciousness. As he observes, "such a sensation could, it seems, be conscious only in some technical sense that lacks any implications about our intuitive conception of consciousness." [p.13] But he also notes that "Theories often expand our ability to discriminate among phenomena that we cannot discriminate by other means," [p.11] echoing the positive thinking of Owen Flanagan and Georges Rey. True, but as I have stressed in my discussion of them, we have to be able to motivate the extension of the theory. Rosenthal recognizes this burden, and claims that the apparent adhocness of any such theory extension is removed by reflection on the existence of "fleeting auditory and visual sensations that occupy the periphery of our attention." [p.14] Are we in fact conscious of any such fleeting auditory and visual sensations? It certainly seems so. We know that they are there, it seems, since although we can never quite catch them individually in the net of recollection, if they weren't there, we'd notice their absence. (Whatever we catch in the net of recollection is always, ipso facto, something picked out by attention, something the existence of which is known to us in the manner Dretske calls fact awareness.) But would we in fact notice their absence if they weren't "there"? The experiments by Grimes and Rensink et al. show that we don't notice huge differences "in them," and the existence of such counter-intuitive pathologies as Anton's Syndrome (people who have become totally blind but don't yet realize it!) shows that our everyday intuitions about these matters are not to be trusted. In what interesting sense does the occurrence of these putative "fleeting sensations" register on us at all? Rosenthal thinks he can slip between Scylla and Charybdis here: "All that's necessary for that to happen is a momentary event of being transitively conscious of the sensation, albeit too briefly to register as part of the subject's first-person point of view." [p.14] But why speak of transitive consciousness here at all, if it leaves the first-person point of view unaffected?Endnote 24 What is called for (by both Dretske and Rosenthal) is some form of ephemeral effect on some informational medium in the brain. That is easily found, in abundance: the brief irradiation of one's retinas by the thimble's image should do the trick, or the equally brief modification of V1, the "first" visual area of the cortex. We could say, then, that people have transitory transitive consciousness (thing awareness, in Dretske's terms) of any stimuli whose image irradiates the retinas and/or modifies V1. Why not? Because, once again, you can't motivate the claim that any such medium counts as the medium of consciousness (Mangan, 1993), or the claim that such transitory modification counts as seeing (Dretske), or that the units composing such a medium count as the c-registers (Rey).

Rosenthal's deliberate re-expression of my MDM with FPO left out forces him to encounter from a different angle the problems that led me to incorporate FPO into my account. At one point, describing the Orwellian option, he says: "In this sequence, the initial stimulus did reach consciousness . . . but that conscious sensation didn't last long enough to have any noticeable mental effects; it commanded no attention, and when it ceased all memory of it was expunged." [p.8] Making sense of such a claim requires one to have some theory or model of consciousness that permits the normal mental effects of consciousness to be gathered, as it were, into a family under some tolerant umbrella, so that getting under the umbrella counts, even if one goes on to achieve few, if any, of the normal effects.Endnote 25 But not all phenomena are amenable to such treatment. Fame is my favorite exception (Dennett, forthcoming a). Consider the parallel claim, made about somebody: "In his life, he did achieve fame, but that fame didn't last long enough to have any noticeable effects in the world; he commanded no attention, and when he died, all memory of him was expunged." What on earth could this mean? Unless it meant something "technical" and unmotivated, along the lines of "he was inducted into some Hall of Fame," the claim just contradicts itself.

Rosenthal attempts to rehabilitate real seeming by discovering several different levels of seeming. He shows how to drive a wedge between the "second level" seeming of the first-person point of view as constituted by heterophenomenology and a "first level" variety of seeming, which would be revealed by such phenomena as somebody swerving to avoid a truck, without that truck entering their heterophenomenological worlds. This distinction is real enough, a descendant of my distinction (Content and Consciousness, 1969) between awareness1 and awareness2, now abandoned because I saw that instead of there being sharp levels of seeming, there was something more blurry, something more like a continuum, as revealed most vividly in Marcel's experiment requiring subjects to make multiple "redundant" responses to the same stimulus (CE, p. 248). Normally, all the responses of a person (or animal) pull together in favor of one reading of how things seem "to" that unitary agent, but in pathological or just extreme circumstances, the "transcendental unity" of seeming can come apart. When it does, we are not entitled to assume that some still unidentified property of consciousness (a "player to be named later") belongs to some subset of the seemings. Rosenthal recognizes this, in part: he allows me the category of unconscious seemings (what, in the old days, I would have called cases of awareness2 without awareness1), but in spite of his several recognitions of the onset of blurriness in his own account as he develops the details, he persists in holding out for a sharp divide between the unconscious and the conscious, and he persists in trying to make the divide distinct from the brutally incisive rule of first-person operationalism: if the subject can't report it, it isn't part of the subject's consciousness.

Heterophenomenological reports give us our best evidence about how people's conscious mental lives appear to them. But things aren't always as they seem. So Dennett's methodological appeal to these reports is neutral about whether they describe the conscious [emphasis added] events that constitute a subject's first-person viewpoint, or simply express the subject's beliefs about those mental events, events which may be entirely notional. [p.35]

I don't see how Rosenthal has met the burden of establishing that these putatively conscious events do in any way "constitute the subject's first-person viewpoint"--indeed, in the passage I quoted earlier, he apparently stipulates that in "Orwellian" cases, these events don't at all constitute the first-person viewpoint--so I don't see that he has motivated his claim that these are conscious events.

Ned Block's essay is his fourth in a series (1992, 1993, 1995) criticizing my theory of consciousness, and they arrive again and again at the same verdict: he can't see anything radical about it. It's either trivial or obviously false on any interpretation he can muster. He has so far overlooked the reading I intended. We all have fixed points--assumptions so obvious to us that we don't even consider them up for debate--and I have long thought that Block's inability to encounter my theory must be because he just couldn't bring himself to take seriously the idea that I was challenging some of his fixed points. Now he has confirmed this diagnosis, not just avowing that he has not taken it seriously but flatly urging everyone else not to take it seriously either!

He says "I hope it is just obvious to virtually everyone that the fact that things look, sound and smell more or less the way they do to us is a basic biological feature of people, not a cultural construction that children have to learn as they grow up." I must dash his hopes; it is neither obvious, nor so much as true. He goes on to offer quite a list of ideas "we should not take seriously." It is not just, as I had suspected, that he was simply incapable of taking my hypotheses seriously. "My point," he says, "is that we should not take this question seriously. It is a poor question that will just mislead us." No wonder he has been so unmoved by my account! He has discarded it on general principles, without a hearing. That is a serious failure of communication, but we can now repair it.

Block agrees with me that consciousness is a "mongrel notion," and follows my strategy of titration--breaking down the ungodly mess into its components--but he underestimates the importance of the difference that language (and reportability) makes. I took a shot at it in 1969 with my distinction between the awareness1 that language-using creatures have of the contents that "enter" their "speech centers" and the awareness2 that marks appropriately discriminative uptake and is "enjoyed" equally by anteaters, ants, and electric-eye door-openers. As I have just acknowledged in my discussion of Rosenthal, that postulated speech center was all too Cartesian, and the role that language plays in consciousness is much more interesting and indirect than I saw in 1969, so I have had to make major adjustments to that doctrine. But the continuing importance of seeing a major distinction between the consciousness of language-users and the so-called consciousness of all other entities is made particularly clear by Block's work, which, by ignoring it, creates a powerful theoretical illusion. Block puts his major division between "access" and "phenomenal" consciousness, and, without further ado, declares that the "access" of awareness2 is all the access that matters. Block deliberately frames access consciousness so that language, and hence reportability, plays no role. "My intent in framing the notion is to make it applicable to lower animals in virtue of their ability to use perceptual contents in guiding their actions."Endnote 26

As we shall see, this enhances the illusion that there is an "obvious" sense of consciousness in which lower animals and infants are conscious, and to make matters worse, Block actually enjoins people not to pursue the questions that would expose this illusion. My own efforts to convince Block of this in the past have all been frustrated, but he and I have kept plugging away, and now I have hopes of straightening it all out. At least he should now be able to see, for the first time, what my position is and always has been. Again and again in this paper he asks what he takes to be crushing questions, questions to which he thinks I can have no answer. He will "surely" be surprised by my answers--and even more, I expect, when I point out that these have always been my answers to them.

Block's attitude in the current essay towards his own major division (between "access" and "phenomenal" consciousness) is curiously ambivalent: he wields it, acknowledges that I have rejected it, but excuses himself from mounting the defense I say it needs. "We needn't worry," he tells us, "about whether access-consciousness is really distinct from phenomenal consciousness, since the question at hand is whether either of them could be a cultural construction. I am dealing with these questions separately, but I am giving the same answer to both, so if I am wrong about their distinctness it won't matter to my argument." [pp.3-4] But it does matter, since it is the very move of supposing that he can make this cleavage between access consciousness and phenomenal consciousness that conceals from him the way in which consciousness could be a cultural construction. By looking at two mis-isolated components of the phenomenon, Block has convinced himself that since neither "separately" could be a cultural construction, consciousness cannot be a cultural construction. But these supposed sorts of consciousness don't make sense "separately"--they only seem to do so.

To put it bluntly--for a few more details, see "The Path Not Taken," (1995b) my commentary on Block's most recent sally in BBS--Block can't distinguish phenomenal consciousness from phenomenal unconsciousness without introducing some notion of access, a point he almost sees: "There is a 'me'-ness to phenomenal consciousness." Like Dretske, he needs there to be some sort of uptake to ensure that the "phenomenal" is to or for some subject--or could phenomenal itches and aromas just hang around being conscious without being conscious to anyone? Rosenthal enunciates as if it were a constitutive principle the intuitive demand that raises these problems for Dretske and Block: "Still if one is in no way at all transitively conscious of a particular mental state, then that state is not a conscious state." [p.12] This "transitive" consciousness must be a variety of "access" consciousness, for it relates "one" to what "one is conscious of". But once we let access come back in, we will have to ask what sort of access we are talking about (for "phenomenal" consciousness, mind you). Is the access to color boundary information enjoyed by the part of your brain that controls eye-movements sufficient? If it is, then the anesthetized subject (a monkey, most likely) whose eyes move in response to these "perceived" colors is enjoying phenomenal consciousness. And so forth. (In this area I think Ivan Fox's essay has valuable further lessons to offer.)

Block doesn't tell us anything about which features of access would suffice for phenomenal consciousness, but in any case, however Block would resolve this issue, I resolve it, as he correctly notes, via the concept of cerebral celebrity. This idea "seems more a theory of access-consciousness than any of the other elements of the mongrel" but it is also, I claim, a theory of phenomenal consciousness (after all, I deny the distinction). Can this really be so? Could the sort of access requisite for phenomenal consciousness really be "constructed" out of cerebral celebrity, and could this feature in turn be a cultural construction? Block is forthright in his incredulity. "I hope Dennett tells us how, according to him, cerebral celebrity could be a cultural construction." But I already have, at great length, over more than a decade. He just didn't notice.

He helpfully italicizes his main error for us: "But surely it is nothing other than a biological fact about people--not a cultural construction--that some brain representations persevere enough to affect memory, control behavior, etc." [p.5] Surely? No. Here Block completely overlooks all my patient efforts to explain precisely why cerebral celebrity is not a biologically guaranteed phenomenon. This is the point of all my discussion (going back to Elbow Room) of the evolution of consciousness: to open up as a serious biological possibility the idea that our brains are not organized at birth, thanks to our animal heritage, in ways that automatically guarantee the sorts of mutual influence of parts that are the hallmark of "our access" to conscious contents. My little thought experiment about talking to oneself (first in Elbow Room, pp. 38-43, and, elaborated, in CE, pp. 193ff) is central. It suggests a way--a dead simple way, just to get our imaginations moving in the right direction--in which a culturally "injected" factor, the use of language, could dramatically alter the functionally available informational pathways in a brain. Now does Block think that my story is inconceivable? Does he think it is inconceivable that human infants, prior to rudimentary mastery of a language, and the concomitant habits of self-stimulation, have brain organizations that do not yet support "access" consciousness beyond the sorts "lower" animals enjoy? Probably not. But tempting though it undoubtedly is, he may not now fall back on his undefended distinction between access and phenomenal consciousness. He is in no position to say: "Surely" these lower animals, even if they do lack human-style access consciousness, have phenomenal consciousness?! (cf. Lockwood, 1993; Nagel, 1991; Dennett, 1995c)

In an elegant paper, "Cued and detached representations in animal cognition," Peter Gärdenfors (forthcoming) points out "why a snake can't think of a mouse."

It seems that a snake does not have a central representation of a mouse but relies solely on transduced information. The snake exploits three different sensory systems in relation to prey, like a mouse. To strike the mouse, the snake uses its visual system (or thermal sensors). When struck, the mouse normally does not die immediately, but runs away for some distance. To locate the mouse, once the prey has been struck, the snake uses its sense of smell. The search behavior is exclusively wired to this modality. Even if the mouse happens to die right in front of the eyes of the snake, it will still follow the smell trace of the mouse in order to find it. This unimodality is particularly evident in snakes like boas and pythons, where the prey often is held fast in the coils of the snake's body, when it e.g. hangs from a branch. Despite the fact that the snake must have ample proprioceptory information about the location of the prey it holds, it searches stochastically for it, all around, only with the help of the olfactory sense organs. (Sjölander, 1993, p. 3)

Finally, after the mouse has been located, the snake must find its head in order to swallow it. This could obviously be done with the aid of smell or sight, but in snakes this process uses only tactile information. Thus the snake uses three separate modalities to catch and eat a mouse.

Can we talk about what the snake, itself, "has access" to, or just about what its various parts have access to? Is any of that obviously sufficient for "phenomenal" (or any other kind of) consciousness? What--if anything--is it like to be a (whole) snake? Postponing consideration of that question, does such an example render plausible--at least worth exploring--my hypothesis? My radical proposal is that the sorts of internal integrating systems the snake so dramatically lacks but we have are in fact crucial for consciousness, and they are not ours at birth, but something we gradually acquire, thanks in no small measure to what Block calls "cultural injection." I hope that, unlike Block, you think these are ideas that just might be worth taking seriously. Block says: "True, consciousness modulates cerebral celebrity, but it does not create it." It's almost the other way around: cerebral celebrity is consciousness, and it is, in part, a cultural creation. That, at any rate, is the phantom Dennettian claim that Block makes such a labor of searching for.

He's utterly right about the banality of the view that it takes culture to think of oneself as a federal self; the interesting view is that it takes culture to become a federal self. But he doesn't consider this view.

Whenever Block says "Surely," look for what we might call a mental block. Here is another: "Surely, in any culture that allows the material and psychological necessities of life, people genetically like us will have experiences much like ours; there will be something it is like for them to see and hear and smell things that is much like what it is like for us to do these things." [p.11] Block says "in any culture"--and I have never claimed that consciousness is a product of a very specific culture, since all sorts of human cultures for tens if not hundreds of thousands of years have had the prerequisites. So Block ignores here the appropriate case, given my claims. What about the (fortunately, imaginary) case of Robinson Crusoe human beings, each raised in total isolation, in an entirely depopulated, a-social, a-cultural world, with no mother to cuddle and feed them, no language to learn, no human interactions at all? Is it obvious that "there will be something it is like for them to see and hear and smell things that is much like what it is like for us to do these things"? I don't think so. But "surely," you retort, however appallingly different it would be, it would be like something! Well, here is where "what it is like" runs into trouble. Is it obvious that it is "like something" to be an 8-month fetus in the womb? Is it obvious that it is "like something" to be a python? The less the functional similarities between normal adult, socialized consciousness and the test case under consideration, the less obvious it is that we are entitled to speak of "what it is like". Block's confidence about phenomenal consciousness masks this growing tension by supposing, optimistically, that of course there is something we can hold constant, in spite of all these differences in "access" consciousness: phenomenal consciousness. With this I flatly disagree, and that is the primary source of our miscommunication up to now.Endnote 27

When we turn to Block's discussion of my comparison between consciousness and money, I must first correct a misrepresentation of my view. I don't say--let alone "repeatedly"--that you can't have consciousness unless you have the concept of consciousness, but that the phenomenon of consciousness depends on its subjects having a certain family of concepts (none of them necessarily any concept of consciousness). In CE, I speak of consciousness depending on "its associated concepts" (p.24). Block finds the one passage in my homage to Jaynes in which I deliberately overstated this point (while drawing attention to its "paradoxical" flavor). Let me try to undo the damage of that bit of bravado. Acquiring a concept is, on almost any view of concepts I have encountered, partly a matter of acquiring a new competence; before you had the concept of x, you couldn't really y, but now thanks to your mastery of the concept of x (and its family members and neighbors--don't try to pin some sort of atomism on me here), you can y, or more easily y, or more spontaneously y. Now if consciousness is "good for something"--if having it gives one competences one would lack without it--then there should be nothing surprising or metaphysically suspect about the claim that the way you make something conscious is by giving it (however this is done) some concepts that it doesn't already have. And so it is somewhat plausible--at least worthy of consideration, I would have thought--that acquiring concepts is partly a matter of, or contributes to, building new accessibility relations between disparate elements of a cognitive system. Concepts, you might say, are software links, not hardware links. Well then, here's an idea: maybe consciousness just is something that you gain by acquiring a certain sort of conceptual apparatus that you aren't born with! If you say, but "surely" that couldn't be true, since you have to be conscious to have concepts in the first place, I reply: that is a Big Mistake that Jaynes helped overthrow.
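To make the software-links image concrete, here is a minimal sketch, with every name in it invented for the purpose (this is an illustration of the idea, not anything specified in my texts): acquiring a concept is modeled as nothing more than adding an accessibility relation between two otherwise isolated subsystems--no new hardware, just a new functional pathway.

```python
# A toy model of "concepts as software links": all subsystem names are
# invented for illustration, not drawn from any published account.

subsystems = {"vision": set(), "speech": set(), "planning": set()}

def acquire_concept(a: str, b: str) -> None:
    # Mastering a concept adds an accessibility relation: contents of
    # each subsystem become available to the other.
    subsystems[a].add(b)
    subsystems[b].add(a)

def accessible(src: str, dst: str) -> bool:
    return dst in subsystems[src]

assert not accessible("vision", "speech")  # the organization at birth
acquire_concept("vision", "speech")        # a culturally "injected" link
assert accessible("vision", "speech")      # same hardware, new competence
```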

"It is hard to take seriously the idea that the human capacity to see and access [emphasis added] rich displays of colors and shapes is a cultural construction that requires its own concept." It is too hard for Block to take seriously, that's for sure. But if he were right, why don't the experimenters run the same color experiments on non-human mammals? Hint: because non-human mammals don't "have access" to all the richness of the colors and shapes their nervous systems nevertheless discriminate in one way or another (cf. footnote 10 above on Dretske on color vision in animals). Now perhaps you want to insist that the animals do "have access" to all this richness, but just can't harness it the way we can, to answer questions, etc., etc. That, however, is a surmise that is fast losing ground, and rightly so. The idea that we can isolate a notion of "access"--"you know, conscious access"--that is independent of all the myriad things that access thereby enables is just an artifact of imaginative inertia. It has no independent warrant whatever.

7. Qualia Refuse to go Quietly

What is color? Joseph Tolliver clearly describes the logical space and the motivations behind the various theories of color that have recently been proposed. By my lights, however, he has been insufficiently critical of the shared assumptions of the literature he considers; what should be seen as differences of emphasis have been pumped up into differences of doctrine, rendered spuriously at odds by being forced into the procrustean bed of essentialism, leading, as we have seen, to hysterical realism. In fact, thanks to Tolliver, hysterical realism can be seen in a particularly clear light. Consider his lovely example, alexandrite, the philosopher's stone indeed. In sunlight it looks blue-green and in incandescent light or candlelight, it looks red.Endnote 28 What color is it really? What makes anybody think this question must have an answer? Essentialism. They think color has a real essence, and hence they cannot tolerate a view that leaves the answer to such questions indeterminate. Thus Edward Averill, raising his problems of counterfactual colors, poses a litmus test for theories of color parallel to my stumper about magnets. What would we say: that gold had changed its color or that the true color of gold had been obscured? As Tolliver notes, when my evolutionary theory faces this situation, it fails to resolve it. I don't view that as a criticism, however, for I don't think that the question of what color gold really is (in "all possible worlds") deserves attention.

He sees that my evolutionary account gives you a "principled means" of identifying the normal conditions, relative to the functions, and hence the standards, by which we identify the class of observers. But it must be essentialism ("color is a transworld property") that leads him to think that these evolutionary considerations don't suffice, since they don't provide similarly "principled" ways of fixing the standard viewing conditions of colored things that played no role in our evolution, such as "lasers, dichromic filters, gem stones, stars, and Benham disks." [p.22] So what? All such colors should be considered mere byproducts of the perceptual machinery designed to respond to the colors that have had evolutionary significance for us or our ancestors. If the sky's being blue (to us) is just a byproduct of the evolutionary design processes that adjusted human color vision, then no functional account (which would assume that the sky "ought" to look some particular color under some canonical circumstances) is needed. If, however, some features of our responsivity to color (e.g., the pleasure we take in seeing blue) themselves derive, indirectly, from some later evolutionary response to this byproduct, then the sky's being blue is "right"--but now for a reason that is purely anthropocentric, and none the worse for that!

Tolliver also makes the minor error of elevating my evolutionary explanation of the grounding of color into some sort of constitutive claim on my part. Evolution answers the question for us, since evolution is the source of our functionality, but if the Creationists' story were true, then God the Artificer would have to hold the key. That's fine with me, as a fantasy. For I take it that we can readily imagine a race of robots endowed by their creators with a sort of "color" vision (scare-quotes to mollify the scaredy-cats), in which an entirely different set of patterns ruled, and ruled for equally "principled" reasons. In that world, thanks to the design decisions of the robots' creators, undesigned things (gem stones, stars, the sky) could fit into color-equivalence classes different from ours. On either this story or our non-fantastic evolutionary story, we anchor the standard conditions to the class of normal observers by functional considerations.

Tolliver's own functionalism is clearly superior to the alternatives he considers, but I think he misses a few crucial points.

Functional architecture is the formal structure that makes possible the construction of complex representations within the symbolic system. But the functional architecture is not another representation over and above the representations defined by means of it. [p.27]

True, but the functional architecture does contribute content--just not by "being a representation." There are many other ways of contributing content. Since this is an oft-ignored possibility, I wish I had hammered harder on this theme when I first raised it, in my example of the "thing about redheads" in "Beyond Belief" (1982, pp. 33-4; as reprinted in The Intentional Stance, pp. 148-9). The idea that content must all be packaged in symbols or syntactic properties of representations is a very bad idea. Tolliver shows how a color coding system can be implemented by ordered triples, since every perceivable color can be uniquely placed in a three-space, the color solid.Endnote 29 "Surely," one is inclined to argue, a system of color coding all by itself doesn't amount to subjective color experience; there is nothing exciting or pleasurable, for instance, about ordered triples! Adding a fourth variable to represent the appropriate "affect" would not be a step in the right direction, and "translating" the ordered triples back into "subjective colors" (or qualia) would be a step in the wrong direction--a step back into the Cartesian Theater. We take a step in the direction of genuine explanation by postulating that these ordered triples are ensconced in a functional architecture in such a way that they have the right sorts of high-powered functions--the sort of thing Hardin and (earlier) Meehl note. That--the excitement potential of colors, and their capacity to soothe and delight us--is part of the content of color properties, and it is--must be--embodied in the functional architecture of the color system. The person who cannot use color as an alarm, as a reminder, as an ease in tracking or aide-memoire, does not have our color system.
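A minimal sketch of the point, with all the particular names and thresholds invented purely for illustration: the ordered triple merely locates a point in the color solid; the "alarm" content is contributed by the functional architecture that reacts to it, not by the triple itself.

```python
# Illustrative only: a color is a bare point in a three-space
# (hue, saturation, brightness); the region boundary below is made up.

def is_reddish(color):
    hue, sat, _ = color
    return (hue < 0.08 or hue > 0.92) and sat > 0.5

def react(color):
    # The functional architecture, not the triple, contributes the
    # further content: color as alarm.
    return "alert" if is_reddish(color) else "proceed"

assert react((0.02, 0.9, 0.8)) == "alert"    # a saturated red
assert react((0.55, 0.4, 0.7)) == "proceed"  # a muted blue
```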

My view of colors is an instance of what Stephen White calls the holistic strategy towards the problem of saying what experienced colors are, but Ned Block has raised his "Inverted Earth" fantasy against any such view. I did not discuss Block's thought experiment in CE, thinking its intricacies would not repay the time and effort it would take to present and criticize them, especially since I thought I had provided all the tools necessary to scuttle his case for anyone who sought them. That was wrong. White's analysis of the difficulties facing Block's thought experiment as published, and its post-publication variations, goes far beyond anything I had laid the ground for. And since I have failed to convince large and important segments of the philosophical audience, I have been making at least a tactical error, one which White's work repairs.

White treats patiently what I rush by with a few gestures. For instance, his expansion of Block's 4-stage example to 5 stages permits him to spell out--in enough imaginative detail to persuade--the sorts of thoughts "from the inside" that would go on in you were you to be in Block's posited circumstances. This was what I was getting at in CE at pp. 393ff, especially the example of the shade of blue that reminds you of the car in which you once crashed. But White works it out so carefully, so crisply, that the point cannot be lost. See especially his nice observation on the inevitability of overcompensation, should your old hard-to-suppress inclination spontaneously disappear faster than you expected. Another excellent point: the subpersonal level could change in a gradual way while the personal level might stick for a while, until it flipped in a "gestalt switch."

White then takes on notional worlds, an idea that I left rather vague and impressionistic in "Beyond Belief," and sharpens it up with a variety of his own insights and innovations to meet a host of objections. For the reasons discussed in the section on cow-sharks, I have no stomach for discussions of amnesiacs in blinding snowstorms who think they are being attacked by a bear (and are under the impression that other snow-covered amnesiacs are currently in the same pickle!), but for those who think such counterexamples are telling, White has a detailed response, thus forcing the anti-relationalists to take these ideas seriously. As he concludes from his examination, "Thus if we think seriously about the full range of discriminatory skills that a relational account can allow, its inadequacy as an account of our experience is far less obvious." [p.38] Hear, hear.

White's analysis also sharpens some points in Block's thought experiment that then invite a short-cut objection that can be used to forestall whole families of similar enterprises. In one of Block's variations, you have an identical twin, who is sent off to Inverted Earth with contact lenses chronically installed. As White notes: "Here we have two subjects whose experiences have all the same qualitative properties, and hence the same qualitative content, but different intentional contents." [p.5]. Block's argument requires this assumption, but where does it come from? Must qualia "supervene on" physical constitution? Thomas Nagel once claimed otherwise, in conversation with me; he insisted that there was no way to tell of two identical twins whether they had identical qualia. Whether or not qualia do supervene on physical constitution, something else definitely does, and that is what we might call functional micro-implementation--e.g., Tolliver's ordered triples of something small in the brain. Thus in "Instead of Qualia" (1994b), I describe color-discriminating robots that use numbers in registers to code for the different "subjective" colors they discriminate. The particular number systems they use (functionally parallel to the "file keeping" system White describes) are physical micro-details that anchor functions, but the numbers (which are arbitrary) could all be inverted without any detectable functional change. These, presumably, are not qualia that many qualophiles could love; they are in fact what I propose instead of qualia. And I claim that they can do, without mystery, all the work qualia were traditionally supposed to do--including telling qualia-inversion fantasies!

We can retell Block's thought experiment with two identical robots, one of whom is sent off to Inverted Earth with contact lenses chronically installed. Then we will have two robots whose "experiences" have all the same details of functional micro-implementation, but different intentional contents. Since everything Block says of you and your twin would also be true of the robot and its twin on Inverted Earth, for exactly the same reasons, and since qualia are not enjoyed by the robots (ex hypothesi), Block's argument cannot be used to show why a functionalist needs to posit qualia. Functional micro-implementation schemes will do just as well.
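Here is a minimal sketch of such a scheme, with the color bands and the numbers invented as stand-ins for the discriminations described in "Instead of Qualia": the register codings are arbitrary, and inverting them leaves every input-output function of the robot--all of its "behavior"--unchanged.

```python
# Two robots with inverted register codings; bands and numbers are
# invented stand-ins, not details from the published example.

BANDS = ["red", "green", "blue"]

class ColorRobot:
    def __init__(self, code_for_band):
        # Any one-to-one assignment of numbers to bands will serve.
        self.code_for_band = code_for_band
        self.band_for_code = {c: b for b, c in code_for_band.items()}

    def perceive(self, band):
        # Discrimination: load a register with this robot's code.
        return self.code_for_band[band]

    def report(self, register_value):
        # Reports are keyed to the code, so they stay aligned with the
        # world no matter which numbers were assigned.
        return self.band_for_code[register_value]

robot = ColorRobot({"red": 1, "green": 2, "blue": 3})
twin = ColorRobot({"red": 3, "green": 2, "blue": 1})  # inverted coding

for band in BANDS:  # behaviorally indistinguishable
    assert robot.report(robot.perceive(band)) == band
    assert twin.report(twin.perceive(band)) == band
```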

Jeff McConnell takes equal pains in his examination of another fantasy, Frank Jackson's case of Mary the color scientist who is, in Diana Raffman's fine phrase, chromatically challenged. I gave Mary short shrift in CE, and McConnell gives her long shrift in the attempt to demonstrate that Jackson's Knowledge Argument "remains alive and well" in the wake of my criticisms. I think he has drastically underestimated their subversiveness. They challenge not just the details but the whole strategy of attempting to prove anything by Jackson's methods. I am claiming that it counts for nothing--nothing at all--that Jackson's (or McConnell's or anybody's) intuitions balk at my brusque alternative claim about Mary's powers. Their fixed points are not my fixed points, but precisely the target of my attack. The most that can be said for an intuition pump such as Jackson's, then, is that it dramatizes these tacit presumptions, without giving them any added support. Now of course I might be wrong, but one cannot defeat my counterargument by blandly describing as an "insight" something I have been at considerable pains to deny.

In any case, McConnell gradually concedes some ground, if only for the sake of argument, adding proviso after proviso to the original story. By the time he is through, Jackson's deceptively crisp scenario has given way to the utterly imponderable hypothesis that "it does not seem inconsistent to suppose" that there could be a neuro-omniscient but imaginationally challenged person who, in virtue of the latter and in spite of the former, lacked the ability to construct a special sort of knowledge to be called "imaginative knowledge" (defined in terms of the suspect category of phenomenal properties). McConnell may think that the Knowledge Argument is still "alive and well" after this exercise, but it sure looks like a shadow of its former self to me, barely able to hold our attention, let alone vivify our convictions.

At one juncture McConnell points to the gap in his own case: "My counterargument shows that unless there is a defect in the mechanics of the Knowledge Argument or a deep flaw in our common sense about what Mary knows, then the standard positions about the nature of the mind are untenable." But all along I have been claiming that there is just such a deep flaw in our common sense.Endnote 30 Our common sense is strongly if covertly committed to the Cartesian Theater, and since many philosophers have wondered who on earth I can be arguing against (since they certainly weren't committed to there being a Cartesian Theater!), it will be instructive to show how McConnell's own commitment to the Cartesian Theater arises, especially since it is nicely concealed in his quite standard exploitation of familiar philosophical assumptions. He builds his case by extending the received wisdom about external reference in ordinary language to internal reference:

The success of demonstrative reference depends upon the demonstratum's being picked out for demonstrator and audience by a mode or manner of presentation--by something that individuates the cognitive significance of referring expressions. [emphasis added] [p.24]

These assumptions are widely shared. It has seemed harmless to many philosophers of mind to couch their discussions of reference in perception, knowledge by acquaintance, inner ostension, and the like in the terms so well analyzed by philosophers of language dealing with reference, ostension, and similar phenomena in ordinary language. But as this passage nicely illustrates, these are poisoned fruits that quietly force the hand of the theorist: we have to have an inner audience, to whom things are presented, if we are to take these familiar extensions of linguistic categories literally (and if not literally, exactly what is left to be asserted?). Thus philosophers have debates about "modes of presentation" versus "definite descriptions in the language of thought" and the like, but these only make sense if we are presupposing an inner agent, capable of appreciating or perceiving presentations, or understanding the terms of the definite descriptions, but still in need of being informed about the matter in question, which is still somehow external to the agent. In other words, these discussions all presuppose a Cartesian Theater occupied by a Central Meaner who either has or has not yet been apprised of some fact that must somehow be borne to him on the vehicle of some show that must be presented, or some inner speech act that must be uttered, heard, and understood. But this is forlorn. As I argued at length in CE, this too-powerful inner agent has to be broken down, and all its work has to be distributed in both space and time in the brain. When that is done, the properties by which "agents" are "acquainted with" this and that have to be broken down as well. That is the point, once again, of my answer to the question of how I know these things: because a knower and reporter of such things is what is me. (CE, p. 410) But see how McConnell puts it:

We know our qualitative mental states by acquaintance, picking them out by direct reference as states "like this," so to speak, producing states of recognition or imagination for display [to whom, pray tell?] or ostending to ourselves [to our selves?] occurrent states. [p.25]

This isn't common sense; this is disaster, for as he himself shows, it leads quite inexorably to "irreducibly mental properties." Loar, on McConnell's reading, is thus headed in the right direction in trying to forestall this development. McConnell's objection to Loar--the imagined Marcy--is question-begging: "Imagine someone, for example, who can, without physical evidence, report and categorize many of her own brain states, even states that lack qualitative character [emphasis added]." [p.28] But what is "qualitative character" that might thus be absent? Who says that there are any states that even have "qualitative character"? It seems obvious to McConnell that there are "phenomenal properties," and so he never truly confronts the denial I am issuing. Perhaps the most telling instance--telling, because it strikes him as so tangential that he buries it in footnote 19--is the following:

The critic of the Knowledge Argument, however, must take the position that her neuroscientific expertise would not just enable her to do this but would constitute the grasping of phenomenal red, and this is implausible. For it seems easy to imagine a person in Mary's shoes, someone perhaps unlike Mary biologically, who doesn't have the powers of hallucination Flanagan supposes but about whom we would say the things Jackson says of Mary.

I have at least tried to cast doubt on any such appeals to what "seems easy to imagine" in these cases, claiming that after one undergoes a certain amount of factual enrichment about the nature of color perception and related topics, these things no longer seem so easy to imagine after all. That they seem so to McConnell is thus a biographical fact of no immediate use in an argument--at least not in an argument against me.

Eric Lormand brings out vividly how the Friends of the Cartesian Theater can cling to their fantasies. He shows how many different escape hatches there are for Theater-lovers, and points out that I can't block them all at once. No doubt. For instance, you can always "postulate a distinctive, nonprimitive but also nonrational means of access" [p.13], or some other variety of "access mechanism" if you want to, but why? Whose access to what? My point was to remove the motivation, but if you still want to posit qualia, I doubt that I can show that you will inevitably contradict yourself. I did not claim to prove a priori that there could not be a Cartesian Theater; I claimed to prove, empirically, that there was no Cartesian Theater, and that since there wasn't one, theories that presuppose otherwise must be wrong.Endnote 31 There is an empirical point and then there is an a priori point, and the two have not yet been clearly enough distinguished--by me or my readers.

Consider the Brobdingnagians, the giant people of Gulliver's Travels, and suppose we set out to do some anthropology there, and decided that the best way to do this was to make a giant humanoid puppet of sorts, controlled by Sam, a regular-sized human being in the control room in the giant head. (I guess that is at least as "logically possible" as the scenarios in other thought experiments that are taken seriously.) Sam succeeds in passing for Brobdingnagian in his giant person suit, but then one day he encounters Brobdingnagian Dennett sounding off on the unreality of the Cartesian Theater with its Central Meaner. Risky moment! Sam pushes the laugh button, and directs the giant speech center to compose the appropriate response (in translation): "Ha Ha! Who could ever take seriously the idea that there was a control room in the head, the destination of all the input, and the source of all the output! Such a fantasy!"--all the time hoping that his ruse would not be uncovered. Yes, this thought experiment shows that a Cartesian Theater is "possible," but we already know that there are no such places in our own brains--that's the empirical point. We also know--this is the a priori point--that sooner or later as we peel the layers off any agent, we have to bottom out in an agent that doesn't have a central puppeteer, and this agent will accomplish its aims by distributing the work in the space and time of whatever counts as its brain. Putting the two points together, we see that we have to live with these implications sooner, not later. We have to live with them now.

Lormand vividly supports my contention that qualia and the Cartesian Theater stand or fall together. The reason he is a Friend of the Theater is that he thinks he has to have qualia, and qualia without a Theater is no show at all. But then we must ask: What does the claim that there really are qualia get him? What does it explain? I'm not asking for a lot. I'd be content if his only answer was: "It explains my unshakable belief that I've got qualia!" But even this Lormand concedes to me. It would be quite possible, he says, to believe you had qualia when you didn't. Philosophically naive zimbos, for instance, would fervently believe they have qualia. As I said in my discussion of Levine, I view zimbos as a reductio. Others don't, but that's their problem, not mine.

The hydra-headed qualia live on, in Lormand's various options, shifting from one vision to another. That is enough to establish one of my main points for me: you simply cannot talk about qualia with the presumption that everyone knows what you're talking about. These avenues are too different. It is only equivocation that permits the various qualophiles to claim they agree about something, to wit: qualia.

8. Luck, Regret, and Kinds of Persons

Some enchanted evening, you may see a stranger across a crowded room--or you may not, and it may make all the difference, as the song suggests. For the stranger might have tempted you into moral dilemmas that you were not "ethically gifted" enough to resolve honorably, and then your life might end in ignominy, disgrace, and bitter regret. Or the stranger might have provoked you to embark in a direction that led you to acts of great courage and self-sacrifice, bestowing on you a hero's role that otherwise would have been inaccessible to you. In such a case, luck makes a huge difference, we can reasonably suppose, and has nothing to do with the prospect of negligence, or the capacity to estimate probabilities, important though those considerations often are.

I take myself to have been, so far, quite a good fellow; I have no terrible sins on my conscience. But I am also quite sure that there are temptations that, had they been placed before me, I would not have been able to resist. Lucky me; I have been spared them, and hence can still hold my head up high. It is not just luck, of course; policy has had something to do with it. I don't go looking for trouble, but I also don't go looking for opportunities to be a hero. Some people face life with a different attitude: they play for high stakes--hero or villain, with little likelihood of a bland outcome. And surely Michael Slote is right that some people are more ethically gifted than others by accident of birth--and other accidents. Perhaps in the best of all possible worlds, only the ethically gifted would be inspired to play for high-stakes lives, while we more cowardly and self-indulgent folks just tried to keep our noses clean.

I am very glad Slote didn't give up on me altogether. After Elbow Room, in which I put some of his good work to good use, he proposed we join forces on an article developing further our shared views about luck, modality and free will, to which I readily agreed. He sent me some notes and sketches, but for reasons unknown to me, I never picked up my end. The engine was running, but somehow I couldn't let out the clutch. His essay on this occasion reminds me of how fruitful I find his perspective, and makes me regret all the more my strange inactivity in response to his previous sally. On this occasion, the focus is on the curious role of luck in rendering our acts blameworthy or praiseworthy. When it comes to assigning blame and credit, Slote suggests, we are confronted by an irresoluble war of competing intuitions. Blame should not be a matter of luck at all, proclaims one intuition, but living by that standard would seem to force us to absolve everyone always, which goes equally against the grain. One variety of compromise would be what Slote calls moral criticism without blame. This would extend to adults at their most responsible the attitude we tend to endorse towards young children; since we want them to improve, we are firm in our condemnation of their bad behavior, but we don't condemn them. We hold them quasi-responsible, you might say, not thereby illuminating anything.

Isn't it the case that any policy, any ethical theory, must accept luck as part of the background? Given that luck is always going to play a large role, what is the sane, defensible policy with regard to luck? Set up a system that encourages individuals to take luck into consideration in a reasonable way by not permitting them to cite bad luck when it leads them astray. The culpability of the driver in Slote's example is settled as a matter of higher-order holding accountable: we have given you sufficient moral education so that from now on you are a person (in Carol Rovane's sense), deemed accountable, like it or not, not only for your acts but for your policies. If you are reckless and get away with it, you are just lucky, but if you are reckless and thereby bring about great harm, you will have no excuse. If you are not reckless but bring about great harm, your blame will be diminished. Slote expresses mild sympathy for such a policy (see fn 4), but thinks it won't do. The problem, I gather, is that since there would still be unsupportably counter-intuitive implications in any such policy (in Slote's eyes), it could be maintained only by slipping in one way or another into the systematic disingenuousness Bernard Williams (1985, p. 101) calls "Government House utilitarianism."

Peter Vallentyne has suggested to him that the situation is not so grim; tying praise and blame to probabilities, not outcomes, has some intuitive support in any case, so some of the jarring intuitions might be ignored. Slote finds this attractive, but thinks that "it is a mistake to say nothing more needs to be said." [p.16] Let me try to fill that gap a little. Slote lists two items of common sense that obtrude:

a) the difference of blameworthiness between cases where an accident occurs and cases where none occurs and b) our intuitive sense that the person whose negligence leads to an accident doesn't enjoy a low degree of blameworthiness (simply because of the extreme unlikelihood of an accident). [p.17]

I suggest that our intuitions are playing tricks on us here--at least to some extent. With regard to a), consider the case in which you learn that Jones has enticed your child to play Russian roulette with a loaded revolver. Fortunately, both survive unharmed, but your moral condemnation of Jones will be scarcely diminished compared to the case in which your child actually dies. Is he blameworthy? He most certainly is. We don't get to put him in prison for murder, thanks to his undeserved luck, but we might think it entirely appropriate to ensure that nobody ever forgot, for the rest of his days, what an evil thing he did. In other words, I think Jones is just as blameworthy in both cases, even though there is vastly more harm to regret, and therefore more justifiable anger, in the catastrophic case, and I think common sense is comfortable with this, after all. Now go to the other extreme, and imagine the following variation on the scenic drive. You are showing friends the mountain scenery, and see a scenic lookout turnoff up ahead. "Let's just stop, so I can show this magnificent view to you!" you say, but your friends demur. "Don't bother, we can see it well enough while moving along." But you persist, and as you turn off the highway into the lookout, sunlight glinting off your windshield momentarily blinds the schoolbus driver, and calamity ensues. In this case, you broke no laws, you weren't negligent in any way, you were a good, safe driver. But for the rest of your life you will surely be racked with regret, thinking "if only I hadn't persisted!" This regret is not self-reproach; you know in your heart that you did nothing wrong. But this regret about that awful free choice of yours will perhaps overwhelm your thoughts--and the thoughts of all the parents of those dead schoolchildren--for years. Now alter the circumstances ever so slightly: in order to enter the scenic turnoff, you had to brake rather more suddenly than cars typically do, and it was the distraction of the bus driver in response to your (arguably) negligent braking that caused the accident. A tiny bit of negligence now, and at least as much regret. How much self-reproach? How much moral blameworthiness? Can we isolate in our imaginations the regret that any bad-outcome act is likely to provoke, and distinguish it clearly and reliably from the moral (self-)condemnation--if any--that is provoked in unison? If not, then perhaps--this is just a hypothesis for further thought-experimental exploration--Slote's conviction that a) and b) are worthy items of common sense can be undermined. But there is still more to be said, of course.

Saving the best for last, I come to Carol Rovane's wonderfully constructive essay. She takes the main ideas in "Conditions of Personhood" and fixes them. They needed fixing. It is great to see ideas I like a lot protected from second-rate versions of them--my own. She wonders whether I will reject her revisions and elaborations or embrace them. I embrace them, with a few further amendments and virtually no reservations worth mentioning. Thus she is right that (1) my six conditions of personhood fall naturally into two groups of three; (2) I would be in a much better position if I retreated from Kant as she recommends, opening up ethical disagreement among persons; (3) persons are committed to all-things-considered judgments, even though we can't actually make them; (4) I can have my naturalism and gradualism, and still have a rather sharp watershed dividing the persons from the non-persons; (5) her alternative is a "more integrated, and explanatorily complete, conception of the person, in which the ethical and metaphysical dimensions of personhood are in perfect accord." [ms, p.18]

Indeed, I just made use of points (4) and (5) in my commentary on Slote. Although different human beings may not be equally "ethically gifted," those that have the capacity to treat others as persons are precisely those who are fit to be run through the mill of reason-giving. Those who are disqualified from personhood by not being up to the exercise are excused, but for those who are fit, there is indeed a choice, and if you are in this special category, you can stand convicted of having made a wrong (but informed, rational) choice. This watershed permits us to settle the inevitable penumbral cases of near-persons, persons-to-be, persons on the verge of incompetence, etc., in an ethically stable and satisfying way. (It doesn't settle all the morally troubling cases, of course--that would be too much to ask for--but it lays the ground for settling them as best we can.) As she says, she argues "from the ethical criterion of personhood to Dennett's list of conditions of metaphysical personhood, thereby preserving his uncompromisingly normative approach." [ms, p.18]

What about her discussion of rationality, evaluation, and higher order intentionality in animals? I have come to realize in recent years that human rationality is so much more powerful than that of any animal that, as she says, my "list of six conditions does not capture a spectrum of rational sophistication at all." [p.48] I have begun discussing alternative spectra (in Darwin's Dangerous Idea and "Learning and Labeling," 1993a), and I intend to develop these ideas further, in a little book to be called Kinds of Minds, which will soon be completed. Therein I will offer a somewhat different account from the one sketched by Rovane, but not different in any way that undercuts her points. I have been stumbling along towards this for years. Ashley's dog was just the first of many cases to consider. Reading, listening to, and even working with ethologists over the years has taught me a lot about the differences, as well as the similarities, between animal and human minds. Discussing Gricean communication, she notes that "it is the absence of a guarantee for the first sort of reliability that affords the possibility of sincerity and insincerity." [p.45] Yes, as Gibsonians would say, there are affordances here, affordances that simply do not exist for non-persons, such as vervet monkeys and other animal quasi-communicators. (I now think, by the way, that Sperber and Wilson's (1986) vision of communication is much more realistic than Grice's, and would save some minor errors of over-idealization in her account.)

What, finally, of her punch line about multiple and group persons? I have already granted MPD--with suitable caveats--as she notes. In my discussion of Lynne Rudder Baker above, I opened the door to group persons, not quite for the first time. There is my brief definition and discussion of FPD, "Fractional Personality Disorder," CE, pp.422-423. Since my theory of the self (or personhood) "predicts" FPD, I am now on the lookout for instances of its acknowledgment in print. My favorite to date is the comment by one of the actors in the Coen brothers' film, "Barton Fink," when asked what it was like to act in a film with two directors. The reply: "Oh, there was only one director; he just had two bodies."

References

Aldrich, Virgil, 1970, review of Dretske, Journal of Philosophy, 67, pp. 995-1006.

Block, Ned, 1992, "Begging the question against phenomenal consciousness," (commentary on Dennett & Kinsbourne), Behavioral and Brain Sciences, 15, pp. 205-6.

--1993, review of Dennett, CE (1991), Journal of Philosophy, 90, pp. 181-93.

--1995, "On a Confusion about a Function of Consciousness," Behavioral and Brain Sciences, 18, pp.??

 Brown, Roger, and Herrnstein, Richard J., 1975, Psychology, Boston: Little, Brown.

 Churchland, Paul, 1979, Scientific Realism and the Plasticity of Mind, Cambridge: Cambridge Univ. Press.

Clark, A. and Karmiloff-Smith, A., 1993, "The Cognizer's Innards," Mind and Language, 8, (4), pp. 487-519.

Dennett, Daniel, 1968, "Features of Intentional Actions," Philosophy and Phenomenological Research, 29, pp. 232-44.

 --1969, Content and Consciousness, Routledge & Kegan Paul, London, and Humanities Press, U.S.A. (International Library of Philosophy and Scientific Method)

 --1978a, Brainstorms: Philosophical Essays on Mind and Psychology, Bradford Books (Montgomery, Vt.), Harvester, Hassocks, Sussex.

 --1978b, The Philosophical Lexicon, 7th edition, (8th edition available from American Philosophical Association.)

--1982, "Beyond Belief," in A. Woodfield, ed., Thought and Object: Essays on Intentionality, Oxford Univ. Press.

--1984, Elbow Room: The Varieties of Free Will Worth Wanting, Bradford Books/MIT Press, and Oxford Univ. Press.

--1987a, The Intentional Stance, MIT Press/A Bradford Book.

--1987b, "The Logical Geography of Computational Approaches: a View from the East Pole," in Harnish and Brand, eds., Problems in the Representation of Knowledge, University of Arizona Press.

 --1991, Consciousness Explained, Boston: Little, Brown.

--1993a, "Learning and Labeling" (commentary on A. Clark and A. Karmiloff-Smith, "The Cognizer's Innards"), Mind and Language, 8, (4), pp. 540-547.

--1993b, "Caveat Emptor" (reply to Mangan, Toribio, Baars and McGovern), Consciousness and Cognition, 2, (1), pp. 48-57.

--1993c, "The Message is: There is no Medium" (reply to Jackson, Rosenthal, Shoemaker & Tye), Philosophy & Phenomenological Research, 53, (4), pp. 889-931.

--1994a, "The practical requirements for making a conscious robot," Phil. Trans. R. Soc. Lond. A 349, pp. 133-46.

--1994b, "Instead of Qualia," in Consciousness in Philosophy and Cognitive Neuroscience, A. Revonsuo & M. Kamppinen, eds., Hillsdale, NJ: Lawrence Erlbaum.

 --1995a, Darwin's Dangerous Idea: Evolution and the Meanings of Life, New York: Simon & Schuster.

--1995b, "The Path not Taken," commentary on Ned Block, "On a Confusion about a Function of Consciousness," in Behavioral and Brain Sciences.

--1995c, "Animal Consciousness: What Matters and Why," in Social Research, vol. 62, no. 3, Fall 1995, pp. 691-710.

 --1995d, "Overworking the Hippocampus," (commentary on Jeffrey Gray) in Behavioral and Brain Sciences, vol. 18, no. 4, 1995, pp. 677-78.

--forthcoming a, "Consciousness: More like Fame than Television," in Munich conference volume, ed. Ernst Pöppel.

--forthcoming b, "Do Animals have Beliefs?" in Herbert Roitblat, ed., Comparative Approaches to Cognitive Science, MIT Press.

 --forthcoming c, review of Hofstadter, Fluid Concepts and Creative Analogies, in Complexity.

Dennett, Daniel, and Kinsbourne, Marcel, 1992, "Time and the Observer: The Where and When of Consciousness in the Brain," Behavioral and Brain Sciences, 15, pp. 183-247.

Dretske, Fred, 1993, "Conscious Experience," Mind, 102, pp. 263-83.

Field, Hartry, 1974, "Quine and the Correspondence Theory," Philosophical Review, pp. 200-228.

Field, Hartry, 1975, "Conventionalism and Instrumentalism in Semantics," Noûs, pp. 375-405.

 Flanagan, Owen, 1992, Consciousness Reconsidered, Cambridge, MA: MIT Press.

 Fodor, J., 1975, The Language of Thought, Scranton, PA: Crowell.

 French, Robert, forthcoming, The Subtlety of Sameness, Cambridge, MA: MIT Press.

 Gärdenfors, Peter, forthcoming, "Cued and detached representations in animal cognition," in Behavioral Processes.

Gray, Jeffrey, forthcoming, "The Contents of Consciousness: A Neuropsychological Conjecture," Behavioral and Brain Sciences.

Grimes, John, "On the failure to detect changes in scenes across saccades," in Kathleen Akins, ed., Perception, Vancouver Studies in Cognitive Science, Vol. 5, Oxford Univ. Press.

 Haugeland, J., 1985, Artificial Intelligence: The Very Idea, MIT Press/A Bradford Book.

Hill, Christopher, 1995, "Riding the Whirlwind: The Story of My Encounter With Two Strands in Dennett's Theory of Intentionality," presented at Notre Dame, April 1, 1995.

 Hofstadter, Douglas, 1995, Fluid Concepts and Creative Analogies, New York: Basic Books.

Humphrey, Nicholas K., 1974, "Vision in a monkey without striate cortex: a case study," Perception, 3, 241.

 Humphrey, Nicholas K., 1984, Consciousness Regained, Oxford: Oxford Univ. Press.

Kripke, Saul, 1979, "A Puzzle about Belief," in A. Margalit, ed., Meaning and Use, Dordrecht: Reidel, pp. 239-83.

Lockwood, Michael, 1993, "Dennett's Mind," Inquiry, 36, pp. 59-72.

Logothetis, N. and Schall, J. D., 1989, "Neuronal Correlates of Subjective Visual Perception," Science, 245, pp. 761-63.

Mangan, Bruce, 1993, "Dennett, Consciousness and the Sorrows of Functionalism," Consciousness and Cognition, 2, pp. 1-17.

Millikan, Ruth, 1993, "On Mentalese Orthography," in Bo Dahlbom, ed., Dennett and his Critics: Demystifying Mind, Oxford: Blackwell, pp. 97-123.

 Mitchell, Melanie, 1993, Analogy-Making as Perception: A Computer Model, Cambridge, MA: MIT Press.

 Nagel, Thomas, 1991, "What we have in mind when we say we're thinking," review of Consciousness Explained, Wall Street Journal, 11/7/91.

Putnam, Hilary, 1962, "Dreaming and Depth Grammar," in R. J. Butler, ed., Analytical Philosophy, Oxford: Oxford Univ. Press.

--1975, "The Meaning of 'Meaning'," in Keith Gunderson, ed., Language, Mind and Knowledge: Minnesota Studies in the Philosophy of Science, vol. 7, Minneapolis: Univ. of Minn. Press.

Rensink, Ronald, O'Regan, J. Kevin, and Clark, James, 1995, "Image flicker is as good as saccades in making large scene changes invisible," presented at the European Conference on Visual Perception, summer, 1995.

Sjölander, S., 1993, "Some cognitive breakthroughs in the evolution of cognition and consciousness, and their impact on the biology of language," Evolution and Cognition, 3, pp. 1-10.

Sperber, Dan, and Wilson, Deirdre, 1986, Relevance: Communication and Cognition, Cambridge, MA: Harvard Univ. Press.

 Stromeyer, C. F., and Psotka, J., 1970, "The detailed texture of eidetic images," Nature, 225, pp. 346-9.

Stubenberg, Leopold, 1995, "Dennett on the Third-Person Perspective," presented at Notre Dame, April 1, 1995.

Turing, Alan M., 1950, "Computing Machinery and Intelligence," Mind, 59, pp. 433-460.

 Williams, Bernard, 1985, Ethics and the Limits of Philosophy, Cambridge, MA: Harvard Univ. Press.

 

Endnotes

1. I am grateful for constructive feedback from Nikola Grahek and Diana Raffman, at the Center for Cognitive Studies at Tufts, and Derek Browne and his colleagues and students at Canterbury University, Christchurch, New Zealand, where drafts of this essay were prepared and discussed.

2. The Fox Islands Thorofare is a beautiful but treacherous passage between the Scylla of North Haven and the Charybdis of Vinalhaven, in Penobscot Bay.

3. Good Old-Fashioned AI (Haugeland, 1985) and Language of Thought (Fodor, 1975).

4. Dedictomorphs are zombies, he tells us [p.59], and I wonder how one can tell whether a particular implementation of Cog is a dedictomorph. Not by behavior, since a dedictomorph "may conform to the outward behavior of persons with de re states." But then why should the Cog team worry about getting de re states into Cog?

5. By far the best model of a research program in phenomenology that uses the fruits of careful introspection to discern the features of engineering models is Douglas Hofstadter's Fluid Analogies Research Group. See Hofstadter, 1995 (and my review, forthcoming in Complexity), Mitchell (1993), and French (forthcoming).

6. I was surprised that Fox didn't use the standard term "user illusion." It fits his case rather well, since he claims that the phenomenal world is a benign, designed illusion of sorts (a philosophical illusion).

7. At just one point, I thought Fox's phenomenology fell into error. He claims [p.14] to be able to "remember melodies which (for me) have an intervallic structure but no pitch." I cannot do this, any more than I can remember or imagine a melody which reels off in no particular tempo. Melody seems entirely unlike imagined speech in this regard; imagined speech, for me and others I have queried, can have tempo and prosodic contour without any pitch. I raised the melody issue with Diana Raffman and Ray Jackendoff, both accomplished musicians; neither of them can do what Fox says he can do, so either he has a rare talent, or has given us a demonstration of how phenomenologists can be wrong about even their carefully considered claims.

8. In the same article Dretske also cites the amazing case of eidetic imagery reported by Stromeyer and Psotka in Nature, 1970, in support of his theory of "thing-awareness". But Stromeyer and Psotka's report turned out to be too good to be true. Their subject refused to cooperate with those who wanted to replicate the original experiment, and it is now generally presumed that the results were fraudulent, a practical joke played on the experimenters, most likely. This is not a trivial matter; Dretske needs something like this imaginary result to support his position, just as my theory needs support of the sort provided by Grimes' experiments, and more recently, those of Rensink, O'Regan and Clark, to be described shortly. (Dretske also cites, in fn. 13, the "well-known experimental demonstration" by C. W. Perky. This series of experiments--conducted in 1910!--is in fact seldom cited any more, and is perhaps best known for not being replicated by others. For a neutral account, see Brown and Herrnstein, 1975, pp. 435-6.)

9. The game of Hide the Thimble actually exploits something very close to Dretske's concept of non-epistemic seeing. The rules are clear: you must hide the thimble in plain sight. It must not be concealed behind anything, for instance, or too high on a shelf to fall within the visual fields of the searchers. Or one might say: the "hidden" thimble must be visible. Is something that is visible seen as soon as it can be seen by someone looking at it? That seems to be what Dretske's concept of non-epistemic seeing insists upon.

10. Dretske misses the point of my claims about the lack of clarity of animal consciousness--a fact that I would think would have become obvious to him when he noted, as he does, the passages in which I calmly grant sight--color vision--to birds and fish and honeybees. It must be, mustn't it, that I don't think seeing is a matter settled by experience (conscious experience--of the sort he finds obvious). He does see the way out: "being aware of colors does not require consciousness" [ms, p.6], but he can't see how this can be taken seriously. Why not? Because, I think, he is still committed to ordinary language philosophy. But vision, and color vision, can be, and routinely are, investigated in complete disregard of the ordinary senses of "aware" and "see" and "conscious". There is no doubt at all that honeybees have color vision; whether they are conscious in any interesting sense is quite another matter.

11. Blindsight in Nicholas Humphrey's monkey Helen is a particularly challenging case for Dretske (Humphrey, 1974, 1984). To put it with deliberate paradox, did Helen see--in Dretske's sense--in spite of her blindness? Humphrey and I once showed his film of Helen to a group of experts--psychologists and primatologists--at a meeting at Columbia University, and asked them if they could detect anything unusual about Helen, and if so what. For ten minutes they watched the film of Helen busily darting about in her space, picking up raisins and pieces of chocolate and eating them, avoiding obstacles, never making a false move or bumping into anything. Nobody suggested that there was anything wrong with her vision, but her entire primary visual cortex had been surgically removed. She was cortically blind. Would Dretske say that this was a case of epistemic seeing without non-epistemic seeing?

12. In fn. 19, Dretske mistakenly dismisses this as an avenue unworthy of my exploration--a measure of how much misunderstanding there has been between us.

 13. This was also brought home to me by Hill, 1995, and the ensuing discussion.

14. What about real cases of peripheral paralysis? First, the only real cases have to be people who have lived an unparalyzed life for years--all other imaginable cases are cow-sharks, only logically possible and rudely dismissable. Second, the persistent integrity of the internal structures on which their continuing mental lives putatively depend is not a foregone conclusion. To the extent that the paralysis is truly just peripheral (unaccompanied by atrophy of the internal structures), then, of course, such a sorry subject could go on living a mental life (as I imagined myself doing in the vat, in "Where am I?"). But all good things come to an end, and in the absence of normal amounts of "peripheral narrow behavior," mental life will surely soon fade away, leaving only historical traces of the vigorous aboutness its activities once exhibited. How long would it take? A gruesome empirical question, whose answer has no metaphysical significance.

15. If all these examples concern opinions, not beliefs, then why not just re-construe the theory of propositional attitudes as the theory of opinions? Because there could be no such theory--for the same reason there is no theory of things said: people say the darndest things. People can be got to say all manner of crazy things for all manner of weird reasons; the set of things they say, or would say under various provocations, is not a tidy set of phenomena for which one might reasonably aspire to provide a theory. The set of opinions is very much like--is scarcely distinct from--this set of things said.

16. Rorty has warned me that feminists will object to my use of the word "hysterical", but I am confident that few if any feminists would be so insensitive to irony as to overlook the recursion that would occur were they to object to my usage. It's a fine word, the only word we have for a real phenomenon, and it would be cretinous to denigrate it because of its ignoble etymology.

17. While he is at it, he might tell us how he would show that there is a fact of the matter about just when--i.e., to the day or week--the British Empire learned of the signing of the treaty ending the War of 1812. Is it determined by the dates and postmarks on the various documents, or by their time of arrival at various critical places, or by some combination of such factors? He had better not say that the question is meaningless, and hence has no proper answer--that would be raving superficialism about empires.

18. In spite of the gulf of disagreement, it is good to see that Rey joins me in giving the back of his hand to zombies and their ilk. The trouble I see with his way of doing it is that the qualophiles and zombists can complain, with some justice, that he is just changing the subject, redefining the problem out of existence.

19.

20. In heavy water, the heavy isotope of hydrogen, deuterium (²H, or D), replaces the ordinary hydrogen atom. Heavy water is found in about 1 part per 5000 in ordinary water; it has slightly higher freezing and boiling temperatures than ordinary water, seeds can't germinate in it, and tadpoles can't live in it. XYZ must be more like H2O than deuterium oxide is, and deuterium oxide is a kind of water.

 21. In "Beyond Belief," my example was the scientifically backward people who had a word for "gas" or perhaps "gaseous hydrocarbon"--surely a fine natural kind, but on this minimalist principle it would have to be translated "methane," since this is in fact the only gaseous hydrocarbon they have encountered.

22. Besides, it seems to me that if you renounce the neutrality of heterophenomenology, you make it systematically impossible to close the putative explanatory gap, because you give up ab initio on the goal of finding a rapprochement between the first-person and third-person points of view. What shape could a closing of "the explanatory gap" take? It seems to me it would have to be an explanation that permitted one to tell a third-person, scientific story about subjectivity. I don't see how anything else would count as a closing of the gap. So far as I know, nobody has defended another framework.

23. "Outsmart, v. To embrace the conclusion of one's opponent's reductio ad absurdum argument. "They thought they have me, but I outsmarted them. I agreed that it was sometimes just to hang an innocent man." The Philosophical Lexicon, (Dennett, 1978b)

 24. Rosenthal says at one point [p.17] that "it can happen that, even though one doesn't consciously see an object, one later recalls just where it was and what it looked like." I wonder what his evidence for this startling claim is. Wouldn't this be confounded with high-quality blindsight beyond anything yet reported in the literature? How would Rosenthal tell the two phenomena apart?

25. This is the illusion typically engendered by functionalistic "boxology" (CE, 270n, 358n). One defines a box in a flow chart in terms of the functional role anything entering it plays, and then forgets that if this is how "entrance" into that particular "box" is defined, it makes no sense to excuse an occupant of the box from any of the defining powers. The boxes are not automatically salient tissues, organs, or separate media in the systems described, such that entrance into them can be distinguished independently of fulfilling the defining functional roles.

26. Robert Van Gulick correctly notes the strong tie between consciousness and reportability I have always endorsed. Since inability to report is in fact our most heavily relied upon grounds for presuming non-consciousness--in blindsight, for instance--when you loosen the tie to reportability, as Van Gulick suggests, you face the problem of motivation in a particularly severe form.

27. I am partly to blame, since I have myself often introduced Nagel's famous formula into the discussion, without being sufficiently explicit in announcing my rejection of its presuppositions. It is, I think, a chief source of this illusion of constancy of meaning in our questions about consciousness.

28. Wanting to obtain a hunk of alexandrite (to see for myself), I consulted a geologist friend, who provided the appropriate literature, including color photographs of this marvelous mineral--but no samples, sad to say. Alexandrite is rare, and consequently commands a price commensurate with other gemstones.

29. See also Dennett, 1993b, 1994b, where these ideas are developed further.

 30. I am unmoved, then, by his advice to Churchland and me that we adopt a different strategy. I'm speaking for myself, and will not venture an opinion about Churchland's argument or McConnell's criticisms of it, since I don't rely on it.

31. At one point, Lormond says: "My retinal and other very early visual representations are as rich or richer in difficult-to-express information as the osprey experience, yet I can say exactly what it's like to have them: nothing!" [p.22] Why does he think this is true? Presumably because he thinks that while "very early visual representations" are unconscious, some "late visual representations" are conscious. But this is a terrible model of consciousness. It is true that "later" cerebral effects (not necessarily representations) are necessary for one to become conscious of the contents of one's early visual representations, but when those normal effects are there, no "later" visual representation has to occur. So normally it is like something for us to have them.