Hurley, S. L. (1997) Non-Conceptual Self-Consciousness and Agency: Perspective and Access. Communication and Cognition, Vol. 30, Nr. 3/4 (Part 1 of Special Issue: Approaching Consciousness), 207-248

NONCONCEPTUAL SELF-CONSCIOUSNESS AND AGENCY: PERSPECTIVE AND ACCESS

S. L. Hurley


1. Does consciousness require self-consciousness?
2. Perspectival self-consciousness.
3. Self-consciousness as access.
4. Noncognitive or intentional access to contents and explicit abilities.
5. When does consciousness require cognitive access to contents?
6. Self-evidence does not entail incorrigibility.
7. Perspective, access, and life.
8. Summary.



1. Does consciousness require self-consciousness?

A. Distinguish conceptual and nonconceptual content for conscious states. A subject/agent can only have conscious states with conceptual content if she possesses the relevant concepts and associated structured reasoning abilities. Without giving an account of what it is to have concepts, we can register the widely held view that if someone has states with conceptual content then she has certain general abilities. If the information that a given object has a certain property is conceptualized, it is not context-bound. It has a structure that enables the person in principle to generalize systematically, to decompose and recombine, to make other applications of the same concepts. She can entertain and act not just on this information in this context, or closely related contexts, but also in principle on the information that the same object has a different property or the information that a different object has the same property. And she can quantify and make inferences that depend on such decompositional structure and context-freedom. That is, her reasoning abilities display the generality of concepts and the systematic structure and normativity of conceptual content.1 Her conceptual abilities can be expressed in reasoning about states of affairs removed from her immediate environment and needs. Perhaps the distinction is not sharp but admits of differences of degree; that issue is not pursued here.

Self-consciousness is usually thought of as conceptual in character, as a form of thought that requires conceptual abilities. On this view, a self-conscious person has the concept of a self and of mental states. She may believe that she is only one self among others. She is conscious of her own mental states, but may also believe that other people have mental states too. She can reason using the concepts of herself and other selves, and of mental states in general.

Such conceptual self-consciousness is a remarkable thing for those who have it. It permeates their mental lives. But it isn't necessary for consciousness. Animals without conceptual abilities can have conscious states. What it's like to have mental states may be qualitatively different for creatures with conceptual self-consciousness, but the difference isn't the difference between presence and absence of consciousness.

But self-consciousness need not be conceptual. This is true in at least two senses of self-consciousness, which will be described below: perspectival self-consciousness and access to contents. Both admit of nonconceptual as well as conceptual instances. Consciousness may require self-consciousness in these senses, so long as we allow that the requirement can be met by creatures without conceptual abilities: that self-consciousness can be context-bound. It can still be true that the richer, conceptual versions of perspectival self-consciousness and access to content are remarkable and qualitatively different.

B. Does consciousness require self-consciousness? If we answer yes but also suppose that self-consciousness must be conceptual, then it will seem that creatures without conceptual self-consciousness cannot be in conscious states. This is implausible. We avoid attaching this high price to a claim that consciousness requires self-consciousness by allowing that it can be satisfied nonconceptually. Consciousness may require self-consciousness but not conceptual abilities. This strategy is adopted here.

We can at once fend off a certain objection to this strategy. Suppose that human infants and nonhuman animals lack the normative thinking abilities that express conceptual self-consciousness and are distinctive of mature human experience. Suppose also that human infants and nonhuman animals have nonconceptual self-consciousness. So far, it remains a question open to further argument whether mature persons share with infants and other animals a primitive layer of nonconceptual consciousness on which the conceptual abilities and conceptual self-consciousness of persons build (see McDowell 1994). So opposition to the view that there is such a common primitive layer does not provide an objection to this strategy.2

An alternative strategy would also avoid the implausible implication that creatures without conceptual abilities cannot be in conscious states. Instead of holding that consciousness requires self-consciousness but allowing that self-consciousness can be nonconceptual, we could instead require self-consciousness to be conceptual but deny that consciousness requires self-consciousness. Both strategies allow a kind of middle ground to exist between subpersonal (or subanimal) information states and full conceptual self-consciousness. But they characterize the middle ground differently.

What can be said in favor of the strategy adopted here? If consciousness of pain is possible without the concept of pain, or consciousness of the sun is possible without the concept of the sun, why not self-consciousness without the concepts of oneself, of conscious states, or of the objects or properties of which one is conscious? There seem to be at least two available senses in which creatures can have nonconceptual self-consciousness as well as nonconceptual consciousness. To a first approximation: First, they may be able to keep track of the relations between what they do and what they perceive, in the way that makes for perspectival self-consciousness. Second, they may have access to information they are conscious of, in that they have the ability to form intentions whose content is provided by this information. Creatures can in these ways be self-conscious no less than conscious, without having the richly normative conceptual and inferential abilities distinctive of persons. So we can recognize the force of the claim that consciousness requires self-consciousness, interpreted in these ways, without implying that creatures without concepts lack consciousness.

C. These two senses of nonconceptual self-consciousness both turn on agency and on the capacity to have intentions and act intentionally. Can a creature have intentions but lack the general concepts of itself, its states, the objects or properties of which it is conscious, and conceptually structured inferential abilities? It is here assumed that this is possible. A full argument for this assumption is not given here, but some justification can be sketched.

Contrast the normativity that intentional action involves with the richer normativity that conceptual abilities involve. A creature that acts intentionally, and so acts on intentions, can act for reasons and is subject to norms of rationality. That is, its actions depend holistically on relationships between what it perceives and intends, or between what it believes and desires. Relations between stimuli and responses are not invariant, but reflect the rational relations between perceptions and intentions and various possibilities of mistake or misrepresentation. Holism and normativity are what make it appropriate to speak of content at the personal level at all.3 To count as intentional actions, movements must express contentful states that meet minimal normative constraints, both formal and substantive. Violations of norms are possible, but make sense only against a background of satisfaction of norms. Decision theory shows how behavior can be structured and understood in terms of norms of consistency, which govern patterns among beliefs and preferences. For example, if, at a given time, a is preferred to b and b to c, then c should not be preferred to a. For such formal norms to be meaningful there must also be substantive constraints on the contents of beliefs and preferences: otherwise any behavior could be made to fit the formal constraints and nothing would count as inconsistency.4
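To fix ideas, the transitivity norm just cited can be put schematically, writing \(a \succ b\) for "a is strictly preferred to b at the given time" (the notation is introduced here only as a convenience):

\[ (a \succ b) \wedge (b \succ c) \;\rightarrow\; \neg\,(c \succ a) \]

The substantive constraints on content mentioned above are what keep such a formal condition from being trivially satisfiable: without them, the objects of preference could always be reinterpreted so that no pattern of behavior counted as violating it.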

So, intentional action requires behavior and norms rich enough to support distinctions between consistency and inconsistency. But conceptual abilities go well beyond this. They express the systematic articulation of content into concepts of properties and objects with generality of application. Conceptual content has decompositional structure that is not context-bound. Someone with conceptual abilities who can judge that a banana is green and that a sofa is soft can also in principle judge that a banana is soft, that a sofa is green, that it is not the case that all bananas are green, that if a banana is green then it is not soft, etc. Conceptually structured contents support quantification and a rich range of inferential and thinking abilities and are governed by a correspondingly rich range of normative constraints.

But notions of consistency and inconsistency can be applied to a creature's mental life without attributing to it conceptual abilities with decompositional/recombinant structure. Items of nonconceptualized information can be consistent or inconsistent with one another, so that it may make little sense for a creature to have experiences with two such mutually inconsistent nonconceptual contents. A commissurotomized monkey might express inconsistencies that suggest a split in consciousness. Creatures without conceptual abilities can have and express inconsistent preferences. Rats might reveal preferences for sex over food, food over pain avoidance, and pain avoidance over sex, without being able to use the concepts of these things to generalize and infer in a wide variety of contexts.5 Of course there may be alternative interpretations of their behavior that avoid inconsistency, but that is true for creatures with conceptual and linguistic abilities as well. Perhaps thought and conceptual content require linguistic behavior, but less richly structured behavior admits less richly normative interpretations. Intentional agency - what many animals have and plants lack - makes normative space between a mere stimulus-response system and linguistic and conceptual skills. Even in this middle ground, sometimes inconsistency makes the best sense of behavior, against a background of charitable assumptions about the content and consistency of perceptions and intentions. Although the behavior of nonhuman animals does not express the generality and systematic articulation of conceptual structure, it can be rich enough to make applicable elementary decision theory and norms of consistency that don't turn on decompositional/recombinant structure. Such animals have the capacity to act intentionally.

The motivation for admitting nonconceptual contents here is to register a lack of thinking abilities that display conceptual generality and decompositional/recombinant structure. It is not to provide epistemic intermediaries or groundings for conceptualized perceptions (cf. Peacocke 1992). Consider the relations of inconsistency between nonconceptualized experiences expressed by our hypothetical commissurotomized monkey. These normative relations play no role in an epistemological project; the monkey who can act intentionally and in that sense for reasons nevertheless is not in the business of justifying his beliefs. Yet we shouldn't on that account be forced to say that only the monkey's 'subpersonal' states, as opposed to his experiences, have content. Contents attributable to prelinguistic children may occupy a similar middle ground: nonconceptual but personal-level. But, again, it would not follow that there is a layer of nonconceptual content common to prelinguistic children and persons with conceptual abilities (see McDowell 1994). Conceptualization may transform and permeate content.

It will be argued below that intentional agency is necessary for consciousness in at least two ways, which involve perspective and access to contents. This rules out consciousness in uncontroversial nonagents, such as plants. But it will not here be claimed that intentional agency is sufficient for consciousness. Intentional agency is understood, in the way sketched above, to involve normatively constrained contents, even if not conceptual abilities. In virtue of the holism and normativity they display, such personal (or animal) level contents are properly attributed to states of an agent, rather than merely to subpersonal states.6 But they need not be conscious states. Moreover, the agents whose states they are need not be conscious: we can allow that sophisticated robots might be correctly interpreted as intentional agents in this sense, even if they are not conscious (see section 7 below). Since no claim that intentional agency is sufficient for consciousness is here made, unconscious intentions or agents do not provide counterexamples to the position taken here. While a full account of what makes for intentional agency is not here given, it is assumed that the normative and interpretational considerations that make for intentional agency do not obviously entail consciousness, but can be independently understood (compare Block's 1995 claim that access consciousness does not suffice for phenomenal consciousness). Perhaps an argument can also be given for the substantive claim that intentional agency suffices for consciousness, but this claim is not immediately validated simply by intuition. So the appeal to intentional agency does not simply help itself to consciousness. Moreover, if such an argument that intentional agency is sufficient for consciousness were given, it would not follow, and so would still be informative to claim, that intentional agency is necessary for consciousness.

2. Perspectival Self-Consciousness

A. Having a unified perspective is part of what it is to be conscious. The idea of perspectival self-consciousness reflects the interdependence of perception and action involved in having a unified perspective. Having a perspective means in part that what you experience and perceive depends systematically on what you do, as well as vice versa. Moreover, it involves keeping track, even if not in conceptual terms, of the ways in which what you experience and perceive depends on what you do. In this sense having a perspective involves self-consciousness.

Agency is essential to perspectival self-consciousness. But the kind of agency that is essential is intentional motor agency of an ordinary, empirical, worldly, embodied kind - as opposed to acts of classifying or conceptualizing or judging. For example, when I intentionally turn my head to the right, it is no surprise that the stationary object in front of me swings toward the left of my visual field. That is what I expect. If I intentionally turn my head and the object remains in the same place in my visual field, I perceive the object as moving.

We can consider the passive and intentional aspects of movement separately, to show how both contribute to perceptual content. If my head is turned passively, there may be ambiguity: whether I perceive the object as stationary or as moving will depend not just on how it moves through my visual field, but also on whether I am aware of the passive movement of my head. In essence this is the same self-world ambiguity that arises when you are in a train that begins to move: movement can be attributed either to your train, or to the train on the next track. If I try to turn my head to the right but for some reason it does not move - perhaps my muscles are paralyzed - there should in theory again be a self-world ambiguity: if I am aware of my attempt to move but unaware that it has failed, I might perceive the object in front of me as moving, even though it has not. This is rather like the phenomenon alleged to occur when eye muscles are paralyzed: an unsuccessful attempt to move the eyes plus constant sensory input to the eyes results in a perception that the world has jumped sideways (Gallistel 1981, p. 175). Together, the intentional and passive aspects of movement can eliminate such perceptual ambiguities.

Like the unity of consciousness, perspective has both personal and subpersonal aspects. At the personal level, the contents of intentions and of perceptions are interdependent in the way just illustrated, and support a distinction between self and world. But their interdependence can be understood as emerging from the co-dependence of perception and action on a subpersonal complex dynamic system. Moreover, feedback from motor outputs to sensory inputs plays a critical role within such a subpersonal dynamic system.7

So: having a unified perspective involves keeping track of the relationships of interdependence between what is perceived and what is done, and hence awareness of your own agency. In this sense, perspective already involves self-consciousness. But the sense of self-consciousness that makes good this thought is closely tied to ordinary motor agency and to spatial perceptions, and need not involve conceptually structured thought or inferences. We'll call it perspectival self-consciousness.8

Why regard perspective as involving self-consciousness? Even a primitive, context-bound, nonconceptual form of self-consciousness requires information that is about the self or its states or their contents, and that provides the contents of states of a person or animal, rather than merely subpersonal information.

The first requirement can be met by perspective. The perspectival interdependence of perception and action involves abilities to use information that is about the self, among other things. But why should the abilities to use information about the self that perspective involves be taken to yield self-involving contents at the personal or animal level (especially if they do not involve conceptual abilities)? That is, why read perspective in terms of states with content about the self, attributable to the person or animal? For the same reasons that states with any type of content are attributed to a person or animal: holism and normativity. When perspectival information about the self leads to no invariant response, but explains action only in the context set by intentions and the constraints of at least primitive forms of practical rationality (if not fully fledged conceptual abilities), then it counts as content at the personal or animal level.

But what makes perspective a form of self-consciousness? On the view taken here, consciousness requires perspective, but perspective does not require consciousness. Unconscious contents might feature perspectival interdependence. So perspective does not require self-consciousness either. It is a form of self-consciousness only where conscious contents are in play.

B. If consciousness requires self-consciousness, then conscious nonhuman animals are in some sense self-conscious. Perspectival self-consciousness provides one sense in which animals can be self-conscious, even if they lack conceptual self-consciousness. Perspectival self-consciousness can but need not be conceptual. A conscious animal, moving through its environment, has the ability to keep track of relationships between what it perceives and what it does. This ability enables it to use information about itself and its own states and activities as well as about its environment to meet its needs. But it doesn't follow that the animal has a general concept of itself or its conscious states, or the ability to reason systematically about aspects of itself, its states, others, and the environment in a variety of ways detached from its needs. Its perspectival uses of information about itself may be context-bound. Its abilities need not have the generality, richly normative character, and systematic recombinant structure of conceptual abilities.

Perception and action can be interdependent in the way that characterizes a perspective, even in the absence of conceptual abilities, hence when their contents are not conceptual. So the idea that animals can be in states with nonconceptual content does not revive the myth of the given. Whether they are conceptual or nonconceptual, the contents of perceptions and intentions are interdependent, and depend on, among other things, processes in the brain. Perceptual content without conceptual abilities is not pure input from world to mind.

3. Self-Consciousness as Access

A. Consider two distinctions, which give rise to four possible claims about self-consciousness. First, distinguish self-evidence from incorrigibility. Second, distinguish the contents of conscious states (in a broad sense that includes their qualitative characters) from other properties of conscious states.

A plausible claim is that a subject/agent with conscious states has access to their contents. This is one sense in which consciousness requires self-consciousness. Whether consciousness requires cognitive access to content is a further question. Cognitive access to contents is self-evidence for content. For example, content is self-evident if it is the case that if a subject/agent is conscious that p, then he believes he is conscious that p. We will return to how access to content might be other than cognitive.

A second claim about self-consciousness is that the subject/agent must have access to other properties of conscious states besides their contents; cognitive access to such properties would be self-evidence for other properties. Neither version of this claim is plausible. If a subject/agent is in a conscious state that is realized by certain neurophysiological properties, he does not have access, cognitive or otherwise, to these properties via self-consciousness. We put this claim aside.

A third claim is that self-consciousness is incorrigible about the contents of conscious states: if a subject/agent believes that he is conscious that p, then he must be conscious that p. Call this incorrigibility for content (cf. Williams 1978, p. 49). While this claim is often not distinguished from the claim that conscious contents are self-evident, it is less plausible and may be false. We'll return to it.

A fourth claim is that self-consciousness is incorrigible about other properties of conscious states: incorrigibility for other properties. This is false. If a subject/agent believes that he is in a conscious state that is the state of an immaterial substance, he may be wrong. At least, self-consciousness does not make him right. We also put this claim aside.
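These four claims can be summarized schematically, writing \(C(p)\) for "the subject/agent is conscious that p", \(B(q)\) for "the subject/agent believes that q", and \(F\) for some property of a conscious state other than its content (the schemas compress the "believes or can arrive at a belief" qualification spelled out in the following sections):

\[ \text{Self-evidence for content:}\quad C(p) \rightarrow B(C(p)) \]
\[ \text{Self-evidence for other properties:}\quad F(\text{state}) \rightarrow B(F(\text{state})) \]
\[ \text{Incorrigibility for content:}\quad B(C(p)) \rightarrow C(p) \]
\[ \text{Incorrigibility for other properties:}\quad B(F(\text{state})) \rightarrow F(\text{state}) \]

The first is the plausible claim; the second and fourth are put aside as false; the third, which reverses the direction of the conditional, is less plausible and is taken up again in section 6. The reversal of direction is why self-evidence does not by itself entail incorrigibility.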

B. Return first to access to contents. Does consciousness require access to contents? To evaluate this issue, we need to consider how access to contents could be nonconceptual and how it could be noncognitive, and what the relationship is between conceptual and cognitive access. Consciousness might require access to contents, though not necessarily conceptual or cognitive access. These distinctions create four logical spaces.

Access to content might be cognitive and conceptual: you could have conceptual abilities and have beliefs with conceptual content that are about the contents of your conscious states. This is the normal case for mature persons. The remaining three categories are more problematic. Can access to content be cognitive but not conceptual? Can it be conceptual but not cognitive? Can it be neither? If so, what does it consist in? How is it different from lack of access?

Consider first how access to contents might be neither conceptual nor cognitive. Noncognitive access to content is closely related to the ability to use explicitly information that provides the contents of conscious states: it requires the ability to form intentions whose content is also provided by that information. If, moreover, a creature without conceptual abilities can have intentions, as was argued above, then a creature can have noncognitive or intentional access to the contents of its conscious states even if it does not have general concepts of itself or of consciousness or of the objects or properties of which it is conscious, or the related conceptually structured inferential abilities.

However, if a creature does have conceptual abilities, its access to the contents of its conscious states will naturally extend from noncognitive or intentional access to cognitive access. Such a creature will be able not just to act intentionally on certain information but also to form beliefs about the information he is acting on. The context-free inferential abilities conceptual content presupposes will support this extension. Exclusively noncognitive access to content is going to be nonconceptual in character, not because intentions cannot have conceptual content but because in the presence of conceptual abilities cognitive access will also be implicated.

Whether cognitive access to content must be conceptual in character will depend on whether our understanding of belief is tied to conceptual content and abilities. We can set this issue aside for present purposes. We will not investigate how cognitive access to content might be nonconceptual. For present purposes, we will only try to make out a noncognitive sense of intentional access to content. If this is available to creatures without conceptual abilities, then we have found one way at least of holding that consciousness requires access to content without exclusionary tendencies. It is no objection if there are others as well.

4. Noncognitive or intentional access to contents and explicit abilities

A. So, return to the question: Does consciousness require self-consciousness in the sense of access to content? We should not assume access to content must be cognitive access. But what is the difference between lack of access to a state's content, and noncognitive access to its content? If the claim of access to content holds, the former defeats consciousness, but the latter does not. When a subject/agent denies he is conscious of something, could he nevertheless be conscious of it, having only noncognitive access? Results of work by Marcel (1993, 1994) suggest that the answer may not be obvious. In his experiments, people verbally deny that they see a light while simultaneously reporting with a button push that they do see a light.9 For present purposes what is needed is not an answer to this difficult question but merely some clue about how access to conscious content can be assessed without presupposing that such access must be cognitive.

Begin with cognitive access to contents, or self-evidence for contents. If someone has a conscious experience and has cognitive access to its content, then she believes or can arrive at a belief that she has an experience with that content. According to the claim of self-evidence for content, if she lacks cognitive access to that content, then she does not have a conscious experience with that content. If she neither has nor can arrive at a correct belief about that content, it is not conscious. (The belief that provides cognitive access need not itself be conscious.)

There are two elements to what is required for consciousness by the claim of self-evidence. First, there is a background condition of cognitive ability: the subject/agent must have the ability to form beliefs about the contents of her own conscious states. Second, there is a success condition: if she has a conscious state, then she has a correct belief, or the ability to form a correct belief, about its content.10 It does not follow that she must be incorrigible: that if she has a belief that she has a conscious state with a certain content, then she does indeed have such a state.

B. Now consider how access to content might be other than cognitive. The distinction between the ability to use information explicitly and the ability to use information implicitly provides the needed clue (these are sometimes referred to for short as "explicit abilities" and "implicit abilities" in what follows). Dissociations between implicit and explicit abilities have been observed across a wide range of cases. The most familiar are cases of blindsight, but similar dissociations have been found in normals and in cases of amnesia, prosopagnosia, dyslexia, aphasia, hemineglect, etc. (see Weiskrantz 1980; Schacter et al 1988; Marcel 1983a, 1983b; Young and de Haan 1993, etc.). In such cases, people deny consciousness of a certain stimulus yet perform actions that reveal the influence of information about that stimulus. What does it mean to say that various cases reveal implicit abilities without explicit abilities? This is a question not about the causes of such dissociations, nor about how they should be explained, but rather about exactly what it is that needs explaining. The description of various types of dissociation in terms of implicit without explicit abilities suggests that they have something in common at the level of explanandum, if not at the level of explanans. For present purposes, consider cued guessing and priming effects.

There is a sense in which blindsight patients lack access to information in their blind field, even though they can use it implicitly. If naive blindsight patients are asked to report on the presence of a stimulus or to perform a task that depends explicitly on the problematic information in their blind field, they may be unable to, or will have to be cued to guess. They do not voluntarily or spontaneously perform actions that depend explicitly on the unconscious information, such as pointing to lights in their blind field. If asked to do so, naive patients tend to protest that they can't, since they see nothing. If persuaded to "guess", on cue, they are typically surprised that their guesses tend to be highly accurate, since they deny consciousness of what they are guessing about. Of course, with experience patients get used to guessing, though they still need to be cued to guess.11

Related results have been obtained with monkeys. Monkeys who can reliably localize lights in their blind field on cue under forced choice conditions nevertheless classify lights in their blind field with blank trials rather than with lights in their good fields (Cowey and Stoerig 1997, Cowey 1997, Stoerig and Cowey 1995). It is natural to interpret this behavior as similar to an ability to guess when prompted, despite an inability to access the kind of information they respond to when guessing. And monkeys who can reliably localize lights in their blind field by eye movements on cue cannot do so when they are not cued (Moore et al 1995). They seem to be unable to use the kind of information they can respond to on cue in a spontaneously initiated response.

The influence of unconscious information may also be revealed through priming (facilitation or inhibition). Priming effects are shown when patients are not conscious of a stimulus and are unable to use information about it explicitly in doing some task, but are nevertheless influenced by information about the stimulus in performing some other task that it influences implicitly. For example, normals may be asked to identify color patches preceded by words; the color patches 'mask' the words and prevent awareness of them. Despite this lack of awareness, masked words naming the same color as the patch reduce reaction times to the colors, while words naming different colors increase them (Marcel 1983a).

C. It has seemed natural to describe such cases in terms of an inability to use information 'explicitly', despite the presence of implicit abilities. But what exactly is meant by this? What is it that needs explaining in both cued guessing and priming cases? (In asking this question we do not assume that the same subpersonal processes will explain both types of case.)

Certain intentions seem to be unavailable where ability to use information explicitly is lacking. But voluntary verbal report of information is not necessary for explicit use. A nonverbal creature could show explicit use of information by acting correctly on an intention whose content is provided by that information, such as an intention to push a lever just if the face is familiar.12 On the other hand, it is not enough for explicit use that information may influence the content of an intention, as in some priming effects, or even provide an essential step en route to the content of an intention (as in the case presented in Bisiach et al 1989).

A similar point applies to the distinction between reporting and guessing. The ability to guess accurately on cue is not the same thing as the ability to report voluntarily (or, in the case of the monkeys, the ability to touch the light whenever it appears, without a cue). Perhaps if these were the same, blindsighted and other patients with the former ability would not be severely disabled; but they are.13 The difference again involves the role and content of intention. When you guess on cue about a stimulus you are not conscious of, you guess intentionally. But information about the very stimulus in question does not feature in the content of your intentional guess in the same way it does when you intentionally report a stimulus you are conscious of, or when you act on it spontaneously. If information is conscious, you can report or act spontaneously on it: you can have the background intention to push the lever if, say, a light flashes, and you can then push it intentionally just because the light has flashed. More generally, if information is conscious then you can form an intention whose content is provided by that information and act on it just for the reason that information provides. Conscious information is available as an effective reason for acting. This is not the case when you can only guess on cue; the information in question does not activate your intention, or provide your reason for acting intentionally. You do not have intentional access to the information you can only respond to by guessing on cue. Your intention is to make a guess when prompted, even if you don't see anything (cf. Kelley and Jacoby 1993, p. 83).

Nevertheless, a 'passive' rather than intentional strategy toward the information in question may be highly successful. As Marcel writes:

In attempting to make deliberate judgments based on information of whose external source one is unaware, it would seem that one makes use of the relevant nonconscious information, if it is available, by relying passively on its effects (e.g. upon attention) rather than being able selectively to retrieve it or be sensitive to it such that it can be the basis of an intentional choice.14

Consider whether perceptual information is conscious or not, when the subject/agent does not have cognitive access to that information. We can still ask: can the subject/agent use the information explicitly? Or can the information be used only implicitly? What exactly is the link between explicit abilities and consciousness? Explicit abilities are not obviously sufficient for consciousness. For example, if, counterfactually, experienced blindsight patients came to be able to cue themselves to respond to stimuli, it would not follow on the present view that they were conscious of those stimuli (see Block on superblindsight, 1995). And it is not quite right to say explicit abilities are required for consciousness. Abilities to use information explicitly involve not just access to information in the sense required for consciousness, but also the ability to perform successfully.

D. Intentional access to content can also be understood in terms of two conditions. First, there is a background condition: if information is conscious, the subject/agent must have the normal ability to form intentions whose content is provided by the information in question (as opposed to different intentions whose execution may be implicitly influenced by that information). "Normal" means in part that it will not be necessary to force choices or to persuade such a subject/agent to guess or to cue her when to guess; the ability will be exercised spontaneously and voluntarily. For example, she must be able to form an intention such as: if the face is familiar, push the button. Second, there is a success condition. But we must be wary of requiring for consciousness that the subject/agent can successfully perform tasks based on the information in question. While abilities to use information explicitly are usually understood to involve the ability to perform correctly, disabilities irrelevant to consciousness might prevent correct performance. Intermittent paralysis might degrade button pushing abilities despite correct intentions to respond. The success condition required for consciousness should be put in terms of correct intentions or tryings rather than correct performance. If the information is conscious, the subject/agent must have correct intentions, or the ability to form correct intentions, to act just on the basis of that information (and not only when cued). The information whose consciousness is in question, as opposed to a cue, provides the reason for which the agent acts intentionally.15 Information that is conscious is available to play this kind of reason-giving role in intentional action. So if the perceptual information that a face is familiar is conscious, the agent has not just the ability to form background intentions such as: if the face is familiar, push the button. She must also be able to form correct intentions to act on the information that the face is familiar. Her intention to push the button must be correct in that the information that activates her intention to respond is indeed information that the face is familiar. Her intention to respond on this basis can be correct even if she has problems executing it. For persons with context-free inferential and conceptual abilities, such intentions go naturally with cognitive access. But a nonhuman animal might have a correct intention to act based on information that a face is familiar, even if it lacks beliefs about the content of its perception.

A distinction between tasks often drawn in the literature turns on whether the instructions for the task refer to the information in question or not. If so, the task is considered explicit or direct or intentional, as opposed to implicit or indirect or incidental (Reingold and Merikle 1993, p. 52; Young and de Haan 1993, p. 62; Kelley and Jacoby 1993, p. 75; various essays in Underwood 1996, etc.). By contrast, the distinction suggested here turns not on the content of the instructions given to the agent but on the content of the intentions on which the agent acts. This has two advantages. First, by not requiring instructions we do not require agents to have linguistic capacities, so that the distinction can in principle generalize to applications involving nonlinguistic agents. This facilitates efforts to use the explicit/implicit distinction to characterize access for creatures lacking conceptual abilities. Second, the instructions given to blindsight patients do refer to the information in question, as in: when you see (or hear) the cue, take a guess at where there might be a light. So reference to information in instructions does not provide one criterion of explicitness that covers both what is missing in priming effects and what is missing in cued guessing or forced choice. The suggestion here is that for this we need to consider what the intentions involved, rather than the instructions, have in common.16

So, abilities to use information explicitly require intentional access to that information in the sense just given, but intentional access does not require abilities to use information explicitly if these are understood to require abilities to perform correctly as well as correct intentions. The claim we are considering is that conscious content must be intentionally accessible in this sense, even if not cognitively. The claim that phenomenal consciousness requires access to content may be more plausible when access is understood to include intentional as well as cognitive access than it would be otherwise (cf. Nelkin 1995, Block 1995).17 If the intentions required for such access need not have a conceptual or linguistic character, then neither must self-consciousness in the sense of access to content. This criterion of access could in principle be used for young children or nonhuman animals as well as adult human beings.

E. Consider some examples of lack of access to information, where that information nevertheless has unconscious influence (or is 'covertly processed'). Suppose a person has brain damage such that she cannot recognize the faces of persons whom she knows very well: she is prosopagnosic. She does not have the normal ability to form an intention such as: if the face is familiar, say 'hello', or: if the face is familiar, push that button. And she is unable spontaneously to form correct intentions to act on the basis of information about whether the face is familiar. According to the claim of access to content, someone who lacks these abilities lacks consciousness of the information that a face is familiar. Another example would be a normal person whose awareness of a face is masked by an immediately following presentation of another face, so that he does not have access to whether the first face is familiar.

But even though unconscious, such information may nevertheless have influence. Usually this is shown in a way that presupposes linguistic and conceptual abilities. For example, a prosopagnosic's reaction times to words that name familiar persons may be shorter when the names are presented along with a picture of the familiar person (De Haan and Newcombe 1991). But in principle to get a similar effect it is sufficient for there to be two ways of presenting the same item. Two visual stimuli, rather than one visual and one verbal, could in principle be used to demonstrate unconscious influence. In the case of masking, if the person is asked whether the second face is familiar, he might respond more quickly when it is a different picture of the same face than when it is a picture of a different face. Or visual and auditory stimuli might be used. For example, the prosopagnosic might respond more quickly to familiar voices when they are presented along with the faces of those to whom they belong than when they are presented along with unfamiliar faces. We are not making an empirical claim that such patterns would in fact obtain in particular cases, but merely suggesting an experimental design. In principle, such patterns of response might be demonstrated in young children or nonhuman animals. So the difference between access and lack of access to content can be assessed without presupposing that access must be conceptual.18

The claim of access to content is not that such intentional access to content is sufficient for content to be conscious, but that it is necessary. Intentional access may take conceptual or linguistic form, but it need not. Conceptual access differs from nonconceptual access to conscious content in requiring certain richly structured, relatively context-free normative abilities. But both require more than merely the ability to use information implicitly. The ability to use information explicitly does not in itself amount to or entail the richly structured normative abilities to generalize and reason associated with conceptual generality. Information can be used explicitly to meet a creature's immediate needs, even though it is used in a context-bound way and the creature lacks the ability to use general concepts of itself, its states, or the objects or properties of which it is conscious in a variety of contexts detached from those needs. For example, suppose a creature is conscious that a particular face is familiar. It should then be able to use just that information in acting correctly on intentions such as: if the face is familiar, push that button. It doesn't follow that the creature can use the general concept of a face or of familiarity in reasoning about other contexts. The ability to use information explicitly may not be structured or decomposable along conceptual lines, in a way that would support generalization to other contexts.

F. The claim of access to content does not require the agent to have a presently conscious belief about the contents of his conscious states, only that he have access to that content. And while access to content does require the agent to have or be able to form certain intentions, these intentions themselves might be unconscious in some cases. That is, an agent might be conscious of certain information and use it explicitly even though the intentions that explicit use expresses are not themselves conscious. In this respect the claim of access to content is similar to certain higher-order thought accounts of consciousness, which deny that the higher-order thought that makes for consciousness must itself be conscious (Rosenthal 1986, pp. 336-337; cf. Rosenthal 1991, p. 32; Williams 1978, pp. 82-83). But the access claim does not require that access be cognitive or conceptual, and it does not try to account for consciousness in terms of this access, but merely claims that access to content is required by consciousness.

Some higher-order thought accounts of consciousness distinguish 'mere' consciousness from introspective awareness by distinguishing unconscious thoughts about conscious states from conscious thoughts about conscious states (Rosenthal 1986, pp. 336-7). Either should count as giving cognitive access and hence self-consciousness. It has been urged here that self-consciousness should be understood to extend beyond cognitive access to noncognitive intentional access. From this perspective it would be extremely restrictive to reserve "self-consciousness" for conscious thought about conscious states, that is, for introspective awareness. Even if it is not accepted that noncognitive access provides a sense of self-consciousness, surely cognitive access does. And self-consciousness in the sense of cognitive access does not require conscious thought about conscious states. We'll adopt the helpful term "introspective awareness" in order to distinguish the subspecies of cognitive access to content that involves conscious thought about conscious states.

Consider a case in which you can shift your attention to the content of an experience by attending to something previously unheeded. A shift in attention seems to change what we experience. If this difference is a change in the content of the experience, then access to a given content could not involve being able to shift your attention to it, because the shift would change it to a different content (cf. Nelkin 1995, p. 380). But this difference in feel with a shift in attention may not be a difference in the content of the experience, but rather the difference between having an unconscious belief and having a conscious belief about the content of the experience. That is, a shift in attention may bring to consciousness a belief about the content of an already conscious state: may bring it to introspective awareness. Moreover, not all shifts of attention express a subject/agent's access to the content of a conscious state. Some may bring information to consciousness that was not previously conscious, and this would also produce a difference in feel, an overall change in what we experience.

Access to contents requires more than merely the ability to use information implicitly, in the sense already explained. Lack of access to content tells against consciousness of it, even given the ability to use information implicitly. However, access to content does not require conscious beliefs about your conscious states, or introspective awareness (a fortiori, since not even cognitive access requires this). Moreover, the difference between abilities to use information explicitly and implicitly does not turn on the difference between conscious and unconscious beliefs about conscious contents. It would not be correct to describe agents with implicit but not explicit abilities in relation to certain information as being conscious of it but not having conscious beliefs about their conscious states. The contrast between introspective awareness and its absence already has a use, in more ordinary cases: I can be conscious of something, yet not attending to it in the way that makes for introspective awareness. Blindsight patients lack something more basic than introspective awareness of information.

However, the idea of intentional access does not provide a variant of a higher-order thought account. The intentions required for explicit use of information and hence for intentional access are not intentions about the content of one's mental states, but are intentions about worldly events or states of affairs, such as lights flashing or faces being familiar. Intentions about the contents of mental states would require for their activation beliefs about the contents of mental states, or cognitive access; if intentional access required such intentions, it would not be distinct from cognitive access. By contrast, we are attempting to characterize a more basic sense of intentional access, which does not require higher-order thought or intentions about mental states but which does capture what is intuitively missing from the intentions involved in priming effects and in cued guessing or forced choice. Another difference is that higher-order thought accounts typically make access sufficient as well as necessary for consciousness, whereas intentional access is here suggested to be necessary but not sufficient for consciousness. So 'superblindsight' (Block 1995) is not here ruled out. The considerations that make it appropriate to speak of intentions and hence of intentional access do not per se require conscious contents (see section 1 above, and compare Block's 1995 arguments for the possibility of access consciousness without phenomenal consciousness). The idea of intentional access draws together something that is lacking in both cued guessing and priming cases. But this is not simply lack of consciousness, if it is possible for there to be intentional access without consciousness. So the claim that intentional access is necessary for consciousness is informative.

In what sense is intentional access to contents a form of self-consciousness? In a generalization of the same sense that cognitive access to contents is a form of self-consciousness. Access to content reflects the normative roles of personal or animal level contents. The contents of my states are cognitively accessible to me when I can form certain beliefs, which make those contents available to my reasoning (even if the beliefs are not themselves conscious). These beliefs are about the contents of my states, so are higher-order. The contents of my states are intentionally accessible to me when I can form certain intentions, which make those contents available to me as effective reasons for action (even if the intentions are not themselves conscious). But these intentions are not themselves about the contents of my states. So intentional access is a generalization of, but distinct from, cognitive access.

However, since access to contents is not sufficient for consciousness, it is not sufficient for self-consciousness. Like perspective, it is only a form of self-consciousness when conscious contents are in play.

G. To recap this section and put it into context: Even if we don't think that all consciousness must be conceptual, we may assume that someone must have the concepts of himself or his own conscious states in order to be self-conscious. But then the view that consciousness requires self-consciousness will seem to deny consciousness to creatures, such as human infants and nonhuman animals, who may be held not to have these concepts. So defending the claim that consciousness requires self-consciousness in strictly conceptual terms attaches too high a price to this claim. For this reason we have been concerned to understand how self-consciousness can be nonconceptual. We saw earlier how perspectival self-consciousness can be nonconceptual, and we have now seen how noncognitive access to contents can be nonconceptual (leaving open whether cognitive access might also be nonconceptual). We can rebut two objections to the claim that consciousness requires access to contents. First, it does not mean that animals are not conscious, even if they do not have conceptual self-consciousness. Second, it leaves open the issue of whether mature persons have nonconceptual consciousness in common with animals. If they do not, the access claim might nevertheless hold in parallel both for creatures with and without conceptual abilities. For people with conceptual abilities, it is arguable that consciousness requires self-consciousness in the sense of self-evidence: cognitive access to the contents of conscious states. But we can now understand how creatures without conceptual abilities could have noncognitive, intentional access to contents. So consciousness may require self-consciousness in this weaker sense of access to contents, without exclusionary tendencies.

5. When does consciousness require cognitive access to contents?

A. So far we have been defending the weaker claim that consciousness requires self-consciousness in the sense of access to content. This is a weaker claim because it can be satisfied by noncognitive or merely intentional access. But now consider the stronger claim that consciousness requires self-evidence or cognitive access to contents. We can evaluate this claim by considering its denial.

To deny this is to hold that experience can have determinate conscious content that is independent of cognitive access. On this view, the fact that the subject/agent has no cognitive access to a certain content, or to certain distinctions in content, does not show that she is not conscious of them.

Suppose a subject/agent stares at a many-pointed object and then looks away and experiences an afterimage.19 If there is some definite number of points she is conscious of, must she have access to this number? Keep in mind that it is the number of points in the conscious experience of the afterimage, not the original object, that we are discussing. Suppose the subject/agent has the background ability to form beliefs about the content of her conscious state. She might form a belief that there are five points in her afterimage. But is cognitive success required for consciousness? If her afterimage has seven points, must she believe or be able to form a belief that it has seven points? It is obvious the subject/agent might miscount the number of points on the original object. But does it make sense to suppose she could miscount the points in her afterimage, and believe falsely it has five points although it really has seven?

If we allow that she might miscount, that cognitive success is not required for consciousness, we may begin to feel uncertain what it is in virtue of which the information about the number of points in the afterimage counts as conscious. Yes, her brain or retina may well retain traces of information about the number of points on the original object, but this information need not be conscious. There is a temptation to say that without cognitive access to the number of points, there is no definite number of points she is conscious of.20

The issue is not whether the brain can register unconscious sensory information that is not explicitly accessible to the subject/agent. This seems to be the case with blindsight, priming effects in masking cases, etc.21 The issue is rather over a supposed middle ground between consciousness with cognitive access and unconscious information, namely, consciousness without cognitive access. Can the conscious experience of the after-image, as opposed to unconscious information, be determinately 7-pointedish, even if the subject/agent doesn't have cognitive access to that content? The self-evidence claim is parsimonious. It allows that there are unconscious information states and that there are cognitively accessible conscious states, but not that there is a middle ground. On this view, if it's not self-evident content, it's not conscious content.

We should not defend the middle ground by projecting determinacy from subpersonal information states into the content of conscious experience. This would be a version of the vehicle/content confusion. In masking cases, there may be no answer to the question whether certain information ever reached consciousness, even momentarily (Dennett 1991). In that sense the distinction between subpersonal information and conscious content may not be sharp.22 But such fuzzy transitions and indeterminacy are not what is needed to give a role to a distinct middle ground between unconscious information and self-evident consciousness.

B. Our more general sense of access, intentional access, may be helpful in characterizing a limited middle ground. Suppose our agent is a monkey who does not have cognitive access to the number of points in her afterimage. Perhaps consciousness of the information could nevertheless require access in this more general sense. First, she may have the background ability to form intentions such as: if there are seven points in the afterimage, push that button. Second, she may also have the ability to form a correct intention to act just on the information that the afterimage has seven points, where correctness is judged relative to information traces not assumed to be conscious. If our agent can execute her correct intentions, then she has the ability to use information about the number of points in the afterimage explicitly. But we can also allow here that irrelevant disabilities may interfere with performance.

If she meets the background and success conditions for access to content in these two ways, does it matter that the nonhuman agent does not have cognitive access to content? No. Consciousness requires access to content, though not necessarily cognitive access. This may generate the intuition concerning the human agent that she need not have cognitive access. But in the case of a person, it is difficult to keep access and cognitive access apart. If a person with general conceptual and cognitive abilities can form the relevant background intentions and correct intentions based on the information in question, why can't she also form correct beliefs about the information her intentions are based on? In the presence of these general abilities, it's hard to understand how someone could have noncognitive access without also having cognitive access.

So, while consciousness strictly speaking requires access to content but not necessarily cognitive access or self-evidence, for persons with conceptual abilities this requirement naturally becomes one of self-evidence.

6. Self-evidence does not entail incorrigibility

A. There are good reasons to deny that self-conscious subject/agents are incorrigible about the contents of their conscious states. It may seem that the self-evidence claim is committed to incorrigibility. But it is not. Self-evidence says that if a subject/agent is conscious that p, he believes or can form a belief that he is conscious that p. Nevertheless, a subject/agent may believe that he is conscious that p yet not be: his beliefs about his conscious contents may be mistaken.
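The contrast can be put schematically (a minimal rendering introduced here only for illustration, writing C(p) for "the subject/agent is conscious that p", B for his belief operator, and the diamond for the mere ability or disposition to form the belief, in line with the dispositional reading in note 10):

\[
\text{Self-evidence:}\quad C(p)\;\Rightarrow\;\Diamond B\!\left(C(p)\right)
\qquad\qquad
\text{Incorrigibility:}\quad B\!\left(C(p)\right)\;\Rightarrow\;C(p)
\]

Self-evidence is the first conditional; incorrigibility would be the second. The first does not entail the second, which is why a mistaken belief about one's own conscious contents is compatible with self-evidence.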

It is uncontroversial that someone can have false beliefs about his past experiences, due to faulty inferences, or faulty memory. But incorrigibility can fail even for current experiences, and even though self-evidence holds. In the after-image experience, it may seem to the person that there is a determinate number of points. He believes, correctly, that it seems to him that there is a determinate number of points: the existential quantifier comes within the content of the seeming. This belief satisfies the claim of self-evidence. But he may infer, perhaps unconsciously, from this correct belief to the incorrect belief that there is some determinate number of points n such that it seems to him that there are n points (and such that he could miscount those seeming-points). Now his belief quantifies into the content of the seeming. But the inference involves a scope fallacy: from "it seems that there is an n such that..." it does not follow that "there is an n such that it seems that...n...." So the second belief is corrigible. It can seem that there is some determinate number even though there is no particular number it seems to be. The content of the experience can be existential: as of there being a determinate number, even though there is no determinate number it is of.
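The scope distinction can be displayed in the same schematic spirit (again illustrative notation only, writing S for "it seems to him that" and P(n) for "there are n points in the afterimage"):

\[
S\!\left(\exists n\, P(n)\right)\;\not\Rightarrow\;\exists n\, S\!\left(P(n)\right)
\]

The correct belief reports the left-hand content, with the existential quantifier inside the scope of the seeming; the fallacious inference moves to the right-hand content, which quantifies into the seeming.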

Another example of corrigibility is suggested by anosognosia. People who are blind may be unaware of their deficit; they may not believe that they are blind (see Marcel 1993 for references). They may therefore believe that they are not blind, that is, that they experience conscious visual information. But they are wrong; they do not have such experience. A belief that you have visual experience does not entail that you do have visual experience. Contrapositively, the lack of visual experience does not entail lack of belief that you have visual experience. But these corrigibility claims are compatible with self-evidence: if you have visual experience, you have cognitive access to it, and if you don't have cognitive access to it, you don't have it.

To see that corrigibility and self-evidence are compatible, we should keep in mind that not having an experience with certain properties is not the same thing as having an experience as of the absence of those properties. We can apply this point to our two examples. There being no determinate number of points such that your experience is as of that number is not the same thing as having an experience as of there being no determinate number of points. Similarly, lack of visual experience is not the same thing as experience of a lack. If we can lack visual experience without experiencing a lack, then we can lack visual experience without believing that we lack it, or while believing that we have it.
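Schematically, in the same illustrative notation, the point is that an outer negation of experiential content differs from an experience whose content is itself a negation:

\[
\neg\,\exists n\, S\!\left(P(n)\right)\;\neq\;S\!\left(\neg\,\exists n\, P(n)\right)
\]

The anosognosia case has the same shape: lacking visual experience is the outer negation, while experiencing a lack would be the inner one.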

B. Nevertheless, we normally think of self-evidence and incorrigibility as going together. It may seem odd to combine self-evidence with corrigibility. Why? Denying self-evidence straightforwardly explains corrigibility: If self-evidence can fail, then even though the afterimage has seven points, the person need not have cognitive access to this. So without internal inconsistency he could miscount and believe that it has five points; this belief is false. So incorrigibility also fails; the failure of self-evidence makes room at once for corrigibility. For the person to have this false belief while self-evidence held would be for him to believe, or be able to form a belief in, the truth about his current experience while believing something inconsistent about it. Such an inconsistency may suggest disunity of consciousness: these contents are either separately conscious, or not both conscious.

However, there are other ways for incorrigibility to fail. In our example, a person might (perhaps unconsciously) make an invalid inference (which fails to respect a scope distinction) about the content of his current experience. Such a mistake is compatible with the self-evidence of experience. Inferential failure is accepted as a source of false beliefs about past experience. But it can also be a source of false beliefs about current experiences, in a way that is compatible with their self-evidence.

Whether or not we accept self-evidence, incorrigibility is the normal case. When a belief about current experience is false, this needs some explanation, whether in terms of miscounting or inferential failure or whatever. The denial of self-evidence can explain corrigibility. If we accept self-evidence, we relocate the need for explanation. But in some cases it may be better to explain corrigibility in terms of cognitive mistakes, such as inferential failure, than to deny self-evidence.

7. Perspective, access, and life

A. One sense of the claim that consciousness requires self-consciousness appeals to the idea of perspectival self-consciousness and to the interdependence of perception and action this involves. The perspectival aspect of consciousness involves the ability to keep track of systematic dynamic relationships between what is perceived and what you are doing spatially, and in this sense awareness of your own agency. Another sense of the claim appeals to a general sense of access to content, which involves the availability of certain intentions. Such intentional access to contents is lacking in both cued guessing and priming examples of covert processing. The claims that consciousness requires self-consciousness in these two senses face issues about how infancy and development should be treated. While we don't resolve these issues here, we should recognize that there may be no bright line in the development of consciousness, and that consciousness may emerge in infants along with agency and the capacities for perspective, spatial intentions and movement, and other intentions.

There are several parallels between self-consciousness in these two senses. Both involve intentions and agency. Modulo the correct treatment of development, both seem necessary for consciousness, though neither alone seems sufficient. While in a conscious creature, perspective and access are forms of self-consciousness, neither by itself entails either consciousness or self-consciousness. So while lack of either perspective or access to content suggests lack of consciousness, the converse does not hold: either might in theory be found in a contentful system that lacks consciousness. In both senses, self-consciousness can be either conceptual or nonconceptual in character.

Is there any deeper connection between them? At one level they seem independent. Access to content obviously goes beyond spatial content. There may be no particular connection between perspective and access to nonspatial information, such as information about whether a face is familiar. Obviously, a prosopagnosic person has a perspective. But consider access to spatial information in particular: whether it is independent of perspective is not so obvious. It is not clear that a sleepwalker, for example, has either. Could someone have a perspective yet lack access to spatial information: be unable to form intentions whose content it provided, such as the intention to turn to the right? Could a being--a god, perhaps--have intentions whose content is provided by spatial information, yet lack a perspective? It is not clear that these possibilities do make sense.

B. Return to the point that even if both perspective and access to content are necessary for consciousness, neither alone seems sufficient for consciousness. Could it be sufficient for consciousness if a living thing has both perspective and access to content?23

When we worry about the presence or absence of conscious states, the worry often takes the form of supposing that a machine or a zombie could have the property someone has proposed as sufficient for consciousness. Functional and behavioral proposals often get this treatment. A mad scientist or an alien might create a robot that acts in just the way specified--but surely it need not be in conscious states!

Suppose for the sake of argument that the right kind of appeal to intentional action in an account of consciousness still leaves this type of worry in place. That is, suppose the right kind of appeal to intentional action includes an appeal both to the perspectival interdependence of perception and action and to access to content in the sense of the availability of intentions required to use information explicitly. And suppose that the zombie worry still survives. So a robot might produce such actions in such circumstances that we rightly attribute perspective and access to it, but still not be in conscious states. If more than this is needed for consciousness, what is it? More precisely, what extra ingredient could be added to perspective and access to contents to get a set of conditions that are jointly sufficient for consciousness?

Perspective and access involve normativity because intentional action involves normativity. But they need not involve the rich normativity and reasoning abilities associated with concepts. Are conceptual abilities and norms the extra ingredient that can keep zombie worries at bay? An agent with conceptual abilities has a more richly structured set of behaviors, and perhaps those behaviors must have causes with a certain related structure. But even granting all this, it is not clear why machine or zombie worries should be disarmed by conceptual abilities. If these worries are valid in the first place, why couldn't a machine or zombie have a conceptually structured set of behaviors and reasoning abilities with correspondingly structured causes, yet not be in conscious states? If the worries get a grip to begin with, their grip is not loosened by the enrichment of structure and of norms of rational behavior that goes with conceptual abilities. Even if we were to allow for the sake of argument that conceptual abilities are necessary for consciousness, nevertheless we can add conceptual abilities to perspective and access without yet getting a set of sufficient conditions.

Perhaps the extra ingredient needed in a set of sufficient conditions is not the richer normativity of concepts, but simply life. No account of life is given here: that is another substantive question (Boden 1996), though there is no particular problem about it for present purposes. The point relevant here is simply that agency and life may be independent. Not all living things can act: plants cannot. And we are supposing for the sake of argument that there might be intentional agents that are not alive, such as the worrisome robots. So life without perspective and access, as in plants, would not be sufficient for consciousness, and perspective and access without life, as in a robotic agent, need not be either. And life may not be necessary for consciousness. Nevertheless, life plus perspective and access may be sufficient for consciousness. If a thing has not just perspective and access to content but also life, it is very difficult to sustain zombie worries about it. That is why zombie worries just don't get a grip on many nonhuman animals, even though they do on linguistically sophisticated science fiction robots with conceptual abilities and silicon innards that implement syntactic structure. At least, the claim that there is a puzzling explanatory gap between living things with perspective and access, on the one hand, and consciousness, on the other, has to face the fact that few things are more natural or intelligible than the understanding of such creatures as conscious (cf. Levine 1993, 1995).

8. Summary

There are at least two senses in which consciousness requires self-consciousness: perspectival self-consciousness and access to contents. Self-consciousness in both of these senses can take nonconceptual as well as conceptual form. Access to contents can take cognitive form (self-evidence) as well as noncognitive or intentional form. Nonhuman animals and human infants can thus be self-conscious in these senses as well as conscious, even if they lack conceptual abilities or lack cognitive access to contents. However, exclusively noncognitive access to contents probably has to take nonconceptual form, since conceptual abilities will naturally extend noncognitive access to cognitive access. So for mature persons, consciousness may require cognitive access or self-evidence. Noncognitive intentional access to contents is closely related to the ability to use information explicitly. Self-evidence does not entail incorrigibility. Perspective and access to content both seem to be necessary for consciousness, though neither seems sufficient for consciousness. But could it be sufficient for consciousness for a living thing to have perspective and access to contents?

Notes

Thanks for criticisms of earlier drafts and discussion of these ideas to Jose Bermudez, Paul Boghossian, John Campbell, Ron Chrisley, Alan Cowey, Jerome Dokic, Naomi Eilan, Anthony Marcel, Michael Martin, Erik Myin, Derek Parfit and the members of various audiences on occasions when related material was presented. Thanks also to the British Academy and to the McDonnell-Pew Centre in Cognitive Neuroscience for support.

1. This does not mean that there are no normative constraints at all on nonconceptual content. Our concern here is with the implications of the absence of conceptual abilities in the sense explained in the text, not with a positive account of nonconceptual content. Notice that conceptual content is here identified in terms of its decompositional/recombinant structure. So it is assumed that lack of such structure makes for nonconceptual content, even though no further account of nonconceptual content is given here. This usage reflects one current sense of "conceptual structure", but if someone wishes to put a different or less restrictive gloss on the term "conceptual content", then "decompositional/recombinant content" can be substituted for it. Since the latter trips much less lightly off the tongue, this has not been done here.

2. This point can be applied to both senses of the claim that consciousness requires self-consciousness. There could be parallel conceptual and nonconceptual versions of the claim that consciousness requires access to contents and of the claim that consciousness requires perspectival self-consciousness. This would make the notions of consciousness and self-consciousness somewhat disjunctive: there would not be a 'primitive layer' of consciousness common to animals and persons, any more than animals share the normative abilities of persons (cf. Brewer 1992b, p. 18n). Conceptualization would transform consciousness. But conceptual and nonconceptual consciousness could still be seen as different versions of the same thing, rather than just as different. They may share many features, such as the requirements of unity, perspectival self-consciousness, and access to content.

3. Indeed, for terminological clarity, we can restrict "content" to information that reflects such holism and normativity, that is, to personal or animal level content, and speak of subpersonal information rather than of subpersonal content when holism and normativity are missing. Some degree of normativity is distinctive of the personal (or animal) level, and is part of what makes it appropriate to attribute contentful states to a person (or animal). If consciousness requires perspective and intentional access, as argued in the text, then conscious contents must be at the personal level. It does not follow that personal level content must have conceptual structure or must be conscious content; holism and normativity can be features of nonconceptual or of unconscious contents. Moreover, personal-level contents can be attributed to subsystems, for example, in cases of weakness of the will, self-deception, multiple personality, etc. (Hurley 1989; Consciousness in Action, forthcoming). Such partitioning is motivated by considerations of holism and normativity, so is a feature of the personal level.

4. For discussion, see Hurley 1989, ch. 4, 5; Bacharach and Hurley 1991, Introduction, sects. 2, 3; cf. Heyes and Dickinson 1993.

5. Cf. Battalio et al 1985 on how rats violate the Independence Axiom of expected utility theory; Tversky 1969.

6. Though nothing rules out the possibility that agency might be nested, or collective: that larger agents might include or be partly constituted by other agents. Perhaps social agency is like this; perhaps personal agency is like this. Normative coherence plays an important role in settling such issues. In general, the identity of the subject/agent, and the contrast between systems that are subject/agents and systems that are not, turn on both personal-level considerations (such as normative coherence) and subpersonal-level considerations (such as the singularity in causal flows that is constituted by a complex dynamic feedback system); see Hurley 1989, and Hurley, Consciousness in Action, forthcoming.

7. See Hurley, Consciousness in Action, forthcoming, for discussion of such a two-level interdependence view and of the possibility that the subpersonal dynamic aspect of perspective may provide the needed subpersonal complement to a normative, personal-level account of the unity of consciousness.

8. Perspectival self-consciousness includes but goes beyond ecological self-consciousness (Neisser 1988). According to the ecological view of perception, as put forward by J.J. Gibson (1986) and his followers, perception involves co-perceiving of the self and its environment. Information equally about the environment and about a creature's movement through the environment is present in ambient light and picked up through movement. But Gibson claimed that passive movement would do as well as active movement for purposes of understanding the interdependence of perception and action, and resisted appeals to internal processing of information in general and to efference copy or central motor feedback in particular. (See Gibson 1986, part 3, on the distinction between the effects of efferent feedback on vision and his own view of the role of action in perception; Turvey's commentary on Gyr et al 1979.)

Perspectival interdependence as intended here is not limited in these ways. The rivalry between ecological and motor theories is spurious. There is no incompatibility between the valuable ecological insights about the interdependence of perception and action and the further, complementary insights about the role of active movement and of internal motor feedback processes in perception. (See Shebilske 1987 for a careful ecumenical view that reconciles ecological view and motor theories of perception; see also Gallistel 1980, ch. 7; Gyr et al 1979; Sheerer 1984; Jeannerod 1997.)

9. Marcel gives several reasons for denying that button-pushing reflects nonconscious information. Spontaneous latencies are too long for reflexes, and differences persist when delays are imposed. Subjects insist their responses are reports; they make them voluntarily. Most importantly, when subjects are asked to guess instead of to report conscious content, they do better: they give more correct Yes responses and fewer incorrect Yes responses. "The accepted inference from this difference is that the Report condition is a genuine reflection of report of conscious contents." (1994, p. 86; 1993)

10. This formulation requires for self-evidence at least a disposition; a stronger claim would require an occurrent belief. As a necessary condition, the weaker dispositional claim seems less controversial, since occurrent properties of mental states may also involve dispositions (as Rosenthal points out, 1993, p. 208). Rosenthal's arguments against a dispositional version of a higher-order thought account turn on the status of such a view as an account of consciousness, which licenses sufficiency as well as necessity claims (e.g. his arguments on pp. 208, 210 cut against sufficiency claims rather than necessity claims). But here it is not claimed that access in the weaker dispositional sense is sufficient for consciousness, only that it is necessary.

11. Some patients eventually claim to become aware of a feeling about where the stimulus is, even though they deny the experience is visual (see Weiskrantz 1980; Zihl 1981, p. 168; Cowey and Stoerig 1991, p. 26).

12. This answer to the question is merely a placeholder, though one with certain commitments. To fill this answer out would require a full account of when information provides the content of an intention and the reason on which one acts rather than merely influencing it. Here we merely note that this is what is required for purposes of drawing the needed distinction between implicit and explicit uses of information. For some of the complications that arise, see Underwood 1996, pp. 86, 146, 170, 177, 236-239, etc.; cf. Allport 1988, who suggests that the direct use of information about X involves indicating its identity or responding to X itself, rather than using information about X in responding to something else.

13. Weiskrantz (1988) emphasizes the disabling consequences for patients. For example, "... the amnesiac patient is severely impaired, and requires continuous care, despite all of his or her primed retention." He also comments that none of the blindsight type patients can think or imagine in terms of the capacity of which they are unaware. Again, on the experimental basis for the distinction between guessing and reporting, see Marcel 1993, 1994.

14. Marcel 1983a, p. 210. Elsewhere he writes:

...people will not themselves initiate voluntary actions which involve some segment of the environment unless they are phenomenally aware of that segment of the environment. ... One [reason for this proposal] is that in such actions the critical segment of the environment is a logically necessary part of the intention. To the extent that the intention is conscious, it cannot be well-formed if a necessary part of it cannot be in the necessary state (i.e. conscious). ... If the problem of some of these [neurological] patients [who are letter by letter readers] is in attaining a conscious percept of the whole word, then it seems that they can base actions on their non-conscious representation but are normally unwilling to do so. If a word is visually presented to a normal person, but masked such that the subject is unaware of it, yet it provides semantic priming...., it makes little sense to ask the subject to 'read' the word ('What word?'). (Marcel 1988, p. 147)

If the suggestions made in the text about the difference between reporting and guessing, and between explicit and implicit use of information, are on the right track, then there is a link between intentions and conscious experience in cases like these. But now a question arises. In which direction does the dependence hold? Which inability is prior? Do certain intentions require certain conscious experiences (which seems to be what Marcel is suggesting; see also Reingold and Merikle 1993, p. 51, and Kelley and Jacoby 1993, p. 89), or do certain conscious experiences require certain intentions (which is the suggestion made in the text)? Does the inability consciously to experience certain stimuli account for the inability to form certain intentions? Or does the inability to form certain intentions account for the inability consciously to experience certain stimuli? Or is neither prior because the two are constitutively related? See Hurley, Consciousness in Action, forthcoming.

15. This does not mean that agents' conscious experience must accurately reflect the world, but that their intentions must accurately reflect their conscious information. This can be the case even though their intentions are not about the contents of their mental states, but rather about the world. In some cases guesses might reflect the world more accurately than conscious experience does.

16. Notice that the role of intentional access in characterizing explicit abilities does not compete with explanations of dissociations between implicit and explicit abilities. The notion of intentional access is used to clarify what needs explaining in both cases of cued guessing and priming, not to do the needed explanatory work. There may not be a single explanation of implicit/explicit dissociations in various cases. For example, consider the hypothesis that in some cued guessing cases, the patient retains abilities to orient and localize but loses abilities to identify or classify. (Though blindsight subjects may be able not just to localize in their blind field, but to distinguish, for example, X and O, under cued guessing conditions.) Now it is possible that localization information may be intact though identification/classification information is lost. But if this did explain implicit/explicit dissociations in some cued guessing cases, it would still not provide a unified characterization of what needs explaining when we speak of implicit/explicit dissociations. For example, in priming cases both the missing explicit abilities and the retained implicit abilities may relate to identification/classification information. So the localization/classification contrast does not capture the phenomenon to be explained here. I am grateful to Michael Martin for objections that prompted this clarification.

17. I do not find Nelkin's arguments for the dissociation of phenomenal experience and apperception persuasive. Compare his reading of blindsight for hue as involving phenomenal experience but not apperception (1995, pp. 377-378) with Stoerig and Cowey's reading of their blindsight results for monkeys in terms of phenomenal consciousness as opposed to conscious access (1995, p. 153). The fact that blindsight results concern hue in particular seems to me to give us no independent leverage on Nelkin's issue. The position suggested here is that phenomenal consciousness requires access to content, though not necessarily cognitive access, but that access to content is not sufficient for phenomenal consciousness. Blindsight patients express lack of access and hence of phenomenal consciousness; cf. Flanagan 1992, pp. 148-149.

The position taken here can also be compared to Block's (1995) distinction between phenomenal consciousness (P) and access consciousness (A). Consciousness here is understood as phenomenal and as requiring access, but access is not considered sufficient for consciousness. By contrast, Block argues that P and A are as a conceptual matter doubly dissociable. We can imagine (if not find) a superblindsighted subject, a kind of limited partial zombie, who can make self-prompted responses without phenomenal consciousness, or who has A without P. This conceptual possibility is conceded here. But the converse conceptual possibility argued for by Block, of P without A, is not conceded here. P without A seems to me far more problematic to make sense of than A without P. On this issue, compare the commentary by Atkinson and Davies, and Block's reply to commentators, with the commentaries by, e.g., Kobes and Levine, and with Block's concession that "Perhaps there is something about P-consciousness that greases the wheels of accessibility" (p. 242); see also Flanagan 1992, p. 147ff.

Arguments for P without A are by example. Typical examples of P without A involve coming to realize that for some time there has been a loud noise, or that you have had a headache. There are two generic difficulties with this strategy. One is that access can be weakened, for example, so that your capacity to come to realize itself makes for accessibility. The position taken here involves a weakened requirement of intentional access, which need not be cognitive or conceptual, by contrast with Block's richer conception of access in terms of inferential promiscuity and reasoning. This richness lends Block's double dissociability thesis more support than it would get from the kind of access that can plausibly be required of conscious animals and infants. The second generic difficulty is that, given some plausible weakening of access, we have no independent leverage on whether there is phenomenal consciousness without access. Why suppose there was phenomenal consciousness of the noise or the headache without accessibility, as opposed merely to information or a subpersonal state? This is not an issue on which introspection can rule: why suppose we can access what we have when we don't have access?

Despite these disagreements, by conceding the possibility of A without P I concede Block's distinction. I also agree with his rejection of reasoning that slides from an obvious function of A to a nonobvious function of P. Even if P does require A, if A is where the functional action is and A does not require P, then we cannot be satisfied with simply attributing A's function to P. We need to say something about why A's function is performed in the P-involving way.

If there is anything in the idea that life plus perspective and access are sufficient for consciousness (see section 7 below), then we might get a component-wise account of the evolution of consciousness by finding (separate though probably related) evolutionary explanations of life (!), perspective, and access. Cf. Hurley 1995.

18. If the subject in a masking experiment were an animal, it might be trained using two stimuli separated by an interval too long for masking to occur, and trained to respond only to the first stimulus--a face. Then the interval could be shortened until masking of the faces by the subsequent stimuli was achieved. To show priming and implicit abilities, faces could be used as the second, masking stimuli. I do not know whether prosopagnosic primates have been identified.

Again, these remarks suggest the design of experiments, not their outcome. The point is simply that the implicit/explicit distinction could in principle be applied to animals, not that it would in fact apply in particular cases.

On methodologies for assessing covert processing, see and cf. Marcel 1983a, 1983b, p. 206, etc., 1993, 1994, Allport 1988, Weiskrantz 1988a, 1990, Cowey and Stoerig 1991, 1997, Cowey 1997, etc.

19. I am indebted to Derek Parfit for discussion of this example.

20. On the other hand, this temptation may seem unduly verificationist. The claim that consciousness requires self-evidence or cognitive access to content might be criticized as follows. The realist about consciousness may object even when the 'observer' to whom a difference in conscious content must be manifested is oneself at the very same time. He may deny that differences in conscious experience need be accessible to the subject at all, quite apart from issues of manifestation to other subjects or to later selves. It may seem verificationist to require that the contents of consciousness be 'manifested' to self-consciousness even simultaneously, that is, to require access to contents. If this requirement is denied, then there can be differences in a subject's conscious experience that are not accessible to the subject. In general, the fact that we cannot verify the answer to a question does not show there is no answer. Verificationism is discredited. In this case, the question happens to be about the contents of consciousness, but verificationism is just as wrong here.

See Hurley, Consciousness in Action, forthcoming, for an argument that the claim of self-evidence for conscious contents does not depend on verificationism and is compatible with realism about consciousness.

21. It needn't always be determinate whether the masked events ever reached consciousness in order for it sometimes to be determinate that there is no consciousness of sensory information that the brain is nevertheless using. Cf. Rosenthal 1986, sect. III, who regards it as controversial to claim that sensory states need not be conscious.

22. Here I am indebted to discussion with Anthony Marcel.

23. A creature's "being conscious" or "having consciousness" is here used to correlate with its being in conscious states. Cf. Rosenthal 1993, p. 197.

References

ALLPORT, Alan (1988), "What Concept Of Consciousness?," in Consciousness in Contemporary Science, A. J. Marcel and E. Bisiach, eds. Oxford: Clarendon Press, 159-82.

BACHARACH, Michael and HURLEY, Susan eds. (1991), Foundations of Decision Theory: Issues and Advances. Oxford: Blackwell.

BATTALIO, R. C., KAGEL, J. H. and MacDONALD, D. N. (1985) "Animals' choices over uncertain outcomes: some initial experimental results," American Economic Review 75, 597-613.

BLOCK, Ned (1995), "On a confusion about a function of consciousness", Behavioral and Brain Sciences 18, 227-287.

BODEN, Margaret A., ed. (1996), The Philosophy of Artificial Life. Oxford: Oxford University Press.

BREWER, Bill (1992b) "Self-Location and Agency," Mind 101 (401), 17-33.

COWEY, Alan (1997), "Current Awareness: Spotlight on Consciousness", Developmental Medicine and Child Neurology 39, 54-62.

COWEY, Alan, and STOERIG, Petra (1991), "Reflections on Blindsight", in The Neuropsychology of Consciousness, D. Milner and M. Rugg, eds. London: Academic Press, 11-37.

COWEY, Alan, and STOERIG, Petra (1997), "Visual Detection in Monkeys with Blindsight", Neuropsychologia (in proof).

DE HAAN, E. and NEWCOMBE, F. (1991), "What makes faces familiar?," New Scientist, 49-52.

DENNETT, Daniel C. (1991a), Consciousness Explained. Boston: Little Brown and Company.

FLANAGAN, Owen (1992), Consciousness Reconsidered. Cambridge: MIT Press.

GALLISTEL, C. R. (1980), The Organization of Action: A New Synthesis. Hillsdale, New Jersey: Erlbaum.

GIBSON, James J. (1986), The Ecological Approach to Visual Perception. Hillsdale, New Jersey: Lawrence Erlbaum.

GYR, John, WILLEY, Richmond and HENRY, Adele (1979) "Motor-sensory feedback and geometry of visual space: an attempted replication," Behavioral and Brain Sciences 2, 59-94.

HEYES, C., and DICKINSON, A. (1993), "The Intentionality of Animal Action", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 105-120.

HURLEY, S. L. (1989), Natural Reasons. New York: Oxford University Press.

HURLEY, S. L. (forthcoming), Consciousness in Action. Cambridge, Mass.: Harvard University Press.

JEANNEROD, Marc (1997), The Cognitive Neuroscience of Action, Oxford: Blackwell.

KELLEY, C.M., and JACOBY, L.L. (1993), "The Construction of Subjective Experience: Memory Attributions", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 76-89.

LEVINE, Joseph (1993), "On Leaving Out What It's Like", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 121-136.

LEVINE, Joseph (1995), "Qualia: Intrinsic, Relational, or What?", in Conscious Experience, Thomas Metzinger, ed. Paderborn: Schoningh, 277-292.

MARCEL, Anthony J. (1983a) "Conscious and Unconscious Perception: An Approach to the Relations between Phenomenal Experience and Perceptual Processes," Cognitive Psychology 15, 238-300.

MARCEL, Anthony J. (1983b) "Conscious and Unconscious Perception: Experiments on Visual Masking and Word Recognition," Cognitive Psychology 15, 197-237.

MARCEL, Anthony J. (1988), "Phenomenal Experience and Functionalism," in Consciousness in Contemporary Science, A. J. Marcel and E. Bisiach, eds. Oxford: Clarendon Press, 121-58.

MARCEL, Anthony J. (1993), "Slippage in the Unity of Consciousness," in Experimental and Theoretical Studies of Consciousness, Gregory R. Bock and Joan Marsh, eds. Chichester: John Wiley & Sons, 168-79.

MARCEL, Anthony J. (1994), "What is Relevant to the Unity of Consciousness?," in Objectivity, Simulation and the Unity of Consciousness, Christopher Peacocke, ed. Oxford: Oxford University Press, 79-88.

McDOWELL, John H. (1994), Mind and World. Cambridge, Mass.: Harvard University Press.

MILNER, D., and RUGG, M., eds (1991), The Neuropsychology of Consciousness. London: Academic Press.

MOORE, T., RODMAN, H.R., REPP, A.B., GROSS, C.G. (1995), "Localization of Visual Stimuli after Striate Cortex Damage in Monkeys: Parallels with Human Blindsight", Proceedings of the National Academy of Sciences 92(18), 8215-8218.

NEISSER, Ulric (1988) "Five Kinds of Self-knowledge," Philosophical Psychology 1 (1), 35-59.

NELKIN, Norton (1995), "The Dissociation of Phenomenal States from Apperception", in Conscious Experience, Thomas Metzinger, ed. Paderborn: Schoningh, 373-383.

REINGOLD, E.M., and MERIKLE, P.M. (1993), "Theory and Measurement in the Study of Unconscious Processes", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 40-57.

ROSENTHAL, David M. (1986), "Two Concepts of Consciousness," Philosophical Studies 49, 329-359.

ROSENTHAL, David M. (1991), "The Independence of Consciousness and Sensory Quality," in Philosophical Issues, Vol. 1: Consciousness, Enrique Villanueva, ed. Atascadero, California: Ridgeview, 15-36.

ROSENTHAL, David M. (1993), "Thinking that One Thinks", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 197-223.

SCHACTER, Daniel L., McANDREWS, Mary Pat and MOSCOVITCH Morris (1988), "Access to Consciousness: Dissociations between Implicit and Explicit Knowledge in Neuropsychological Syndromes," in Thought Without Language, L. Weiskrantz, ed. Oxford: Clarendon Press, 242-78.

SHEBILSKE, W. L. (1987), "An Ecological Efference Mediation Theory of Natural Event Perception," in Perspectives on Perception and Action, H. Heuer and A. F. Sanders, eds. Hillsdale, New Jersey: Erlbaum, 195-213.

SHEERER, Eckart (1984), "Motor Theories of Cognitive Structure: A Historical Review," in Cognition and Motor Processes, W. Prinz and A.F. Sanders, eds. Berlin: Springer-Verlag.

STOERIG, Petra, and COWEY, Alan (1995), "Visual Perception and Phenomenal Consciousness", Behavioural Brain Research 71, 147-156.

TVERSKY, Amos (1969), "Intransitivity of Preference," Psychological Review 76, 31-48.

UNDERWOOD, Geoffrey (1996), Implicit Cognition. Oxford: Oxford University Press.

WEISKRANTZ, Lawrence (1980), "Varieties of Residual Experience," The Quarterly Journal of Experimental Psychology 32, 365-86.

WEISKRANTZ, Lawrence (1988), "Some Contributions of Neuropsychology of Vision and Memory to the Problem of Consciousness," in Consciousness in Contemporary Science, A. J. Marcel and E. Bisiach, eds. Oxford: Clarendon Press, 183-99.

WEISKRANTZ, Lawrence (1990) "Outlooks for blindsight: explicit methodologies for implicit processes," Proceedings of the Royal Society of London 239, 247-78.

WILLIAMS, Bernard (1978), Descartes: The Project of Pure Enquiry. Middlesex, England: Penguin Books.

YOUNG, A.W., and DE HAAN, E.H.F. (1993), "Impairments of Visual Awareness", in Consciousness, Martin Davies and Glyn Humphreys, eds. Oxford: Blackwell, 58-73.

ZIHL, J. (1981) "Recovery of visual functions in patients with cerebral blindness," Experimental Brain Research 44, 159-69.