Cognitive Science and Phenomenal Consciousness:
A Dilemma, and How To Avoid It

Gerard O'Brien and Jon Opie

Department of Philosophy
The University of Adelaide
South Australia 5005

Final Draft, April 1997


1 Introduction

2 Posing the Dilemma

2.1 Informational Access versus Access-Consciousness
2.2 From Informational Access to Information Processing Effects
3 Avoiding the Dilemma
3.1 Vehicle Theories of Phenomenal Consciousness
3.2 Classical and Connectionist Styles of Mental Representation
3.3 A Vehicle Theory of Consciousness: Why Classicism Can't, But Connectionism Can
3.4 Conclusion


When it comes to applying computational theory to the problem of phenomenal consciousness, cognitive scientists appear to face a dilemma. The only strategy that seems to be available is one that explains consciousness in terms of special kinds of computational processes. But such theories, while they dominate the field, have counter-intuitive consequences; in particular, they force one to accept that phenomenal experience is composed of information processing effects. For cognitive scientists, therefore, it seems to come down to a choice between a counter-intuitive theory or no theory at all. We offer a way out of this dilemma. We argue that the computational theory of mind doesn't force cognitive scientists to explain consciousness in terms of computational processes, as there is an alternative strategy available: one that focuses on the representational vehicles that encode information in the brain. This alternative approach to consciousness allows us to do justice to the standard intuitions about phenomenal experience, yet remain within the confines of cognitive science.

1 Introduction

It is the computational theory of mind that distinguishes cognitive science from those other disciplines that take human cognition as their explanatory target. What this means is that there is a consensus among practitioners within this discipline that human cognitive processes are to be understood, speaking generically, as disciplined operations defined over neurally realised representational states.[note1] Consequently, when cognitive scientists apply computational theory to the problem of phenomenal experience [note2], as many of them have been doing recently, it seems natural to seek to explain consciousness in terms of special kinds of computational processes to which some of the brain's representational vehicles are subject.

Such process theories of consciousness, as we shall call them, dominate the literature in this part of cognitive science. And far and away the most popular process accounts developed in the field are those that make consciousness depend on informational access, that is, on the information processing relations enjoyed by certain representational states. Typically, the claim is that the content of a representational vehicle is conscious when the vehicle has some privileged computational status, say, being available to an executive system, or being inferentially promiscuous (Stich, 1978). On this kind of story, then, consciousness is a result of the rich and widespread informational access relations possessed by a (relatively small) subset of the information bearing states of a cognitive system (see, e.g., Baars, 1988; Churchland, 1995; Dennett, 1991; Johnson-Laird, 1988; Kinsbourne, 1988, 1995; Mandler, 1985; Schacter, 1989; Shallice, 1988a, 1988b; and Umilta, 1988).

Prominent among those process theories that focus on such access relations is the account of consciousness developed by Daniel Dennett in Consciousness Explained (1991). Dennett argues that phenomenal consciousness is the result of a pandemonium process in which the brain's representational vehicles compete with one another for control over subsequent cognitive behaviour. Those contents are conscious, he claims, whose representational vehicles win this competition and hence persevere long enough to achieve a persistent influence over ongoing cognitive events. But Dennett's work throws into sharp relief a dilemma that we believe is faced by all process theories. To cleave to rich informational access in this way, he argues, is to hold that phenomenal consciousness is nothing over and above characteristic kinds of information processing effects; that conscious experiences are actually composed of cognitive reactions (see especially his 1993, p.927). To identify phenomenal experience with such information processing effects, however, is to fly in the face of conventional wisdom. For most of us regard conscious experiences as, first and foremost, special kinds of causes: states that are distinct from and are causally responsible for the very kinds of cognitive effects that Dennett highlights.

If Dennett is right about informational access, then cognitive scientists face a dilemma: either one must give up the idea that conscious states are causes, or one must give up on process theories of consciousness. Falling on the second horn of this dilemma is no option for most cognitive scientists, since abandoning process theories would amount to abandoning the attempt to explain consciousness in computational terms. Yet the first horn is equally unpalatable, since it is highly counter-intuitive.

Responses to this situation are quite varied. Some process theorists fail to recognise this dilemma, and hence continue to treat conscious states as causes. Others seem to be aware to some degree of these consequences, but seek to obscure or ameliorate them through a selective use of the language in which they formulate their positions. A third response, perhaps motivated by Jerry Fodor's dictum that "remotely plausible theories are better than no theories at all" (1975, p.27), is to self-consciously opt for the first horn of the dilemma. The challenge here is to develop new intuitions regarding the place of consciousness in the causal fabric. This last has been championed by Dennett himself, but, somewhat ironically, he has been roundly criticised for not taking consciousness seriously (see, e.g., Block, 1993, 1995; Shoemaker, 1993; and Tye, 1993).

Our response is different again, although it contains both good and bad news. We will argue that Dennett's account of phenomenal consciousness is, of all recent attempts, most consistent with the general entailments of process theories. Hence, the dilemma implied by his work must be addressed by all process theorists. That's the bad news. But the good news is this: there is an escape route open to those cognitive scientists who, on the one hand, want to hold on to their intuitions about consciousness, and, on the other, wish to explain phenomenal experience using computational resources. The escape route we offer bypasses the territory occupied by process theories in favour of a largely unexplored region of the theoretical landscape.

2 Posing the Dilemma

In this section we will show how the dilemma faced by process theorists arises. As a first step towards this goal, we need to draw a distinction between rich informational access understood as a way of explaining consciousness, and understood as a kind of consciousness.

2.1 Informational Access versus Access-Consciousness

As we noted in the introduction, process theories seek to explain phenomenal consciousness in terms of computational activities that privilege certain representational vehicles over others. Bernard Baars' "Global Workspace" model of consciousness (1988) is a representative example. Baars' approach begins with the premise that the brain contains a multitude of distributed, unconscious processors operating in parallel, each highly specialised, and all competing for access to a global workspace: a kind of central information exchange for the interaction, coordination, and control of the specialists. Such coordination and control is partly a result of restrictions on access to the global workspace. At any one time only a limited number of specialists can broadcast global messages (via the workspace), since competing messages are often contradictory. Those contents are conscious whose representational vehicles gain access to the global workspace (perhaps as a result of a number of specialists forming a coalition and ousting their rivals) and hence are broadcast throughout the brain (pp.73-118). Those contents are conscious, in other words, whose representational vehicles have rich information processing relations right across the cognitive system.

When talk of rich informational access is in the air, however, some read this as reference to a separate kind of consciousness, what Ned Block calls "access-consciousness" (1995). But that is not our target. Those process theories which exploit the fact that in any complex computational device some representational vehicles enjoy more widespread informational access than others are not describing a special kind or form of consciousness; they are seeking to explain how phenomenal experience emerges in a cognitive system. It is vital to be clear on this issue, so we'll devote a little space to enlarging on it.

Block believes that cognitive science trades in a number of different concepts of consciousness, the two most important of which he calls access-consciousness (A-consciousness) and phenomenal-consciousness (P-consciousness).[note3] The latter is just run-of-the-mill conscious experience, as exemplified in "the experiential properties of sensations, feelings and perceptions" (1995, p.230). It is the "what it is like" of experience. P-consciousness is what we refer to when we speak of "phenomenal experience" or "consciousness". Access-consciousness, on the other hand, is a process notion. Here's how Block characterises it:

A state is access-conscious (A-conscious) if, in virtue of one's having that state, a representation of its content is (1) inferentially promiscuous (Stich, 1978), i.e. poised to be used as a premise in reasoning, and (2) poised for rational control of action and (3) poised for rational control of speech. (1995, p.231)

These three conditions are jointly sufficient, but not all necessary for a state to be access-conscious. For example, a content may be unavailable to speech production centres, and yet be access-conscious in virtue of its ability to contribute to reasoning processes. One would thus be A-conscious of a belief, even in the absence of dispositions to voice it, if it were in a position to influence decision making.

A-consciousness is tailor-made for beliefs and other propositional attitudes, since its criteria are intuitively plausible conditions on beliefhood (Stich, 1978). Philosophers who attend to ordinary usage are often attracted to the idea that consciousness of beliefs is something fundamentally different from sensory consciousness. They perhaps take this attitude to be warranted by situations in which we say "he is conscious of x" simply because his having belief x explains his actions, or because he is disposed to assent to x. Since it lines up with "belief consciousness" in this way, the idea of A-consciousness strikes us as being a philosopher's notion. In saying this we don't mean to deny the importance of access relations among the vehicles of mental representation; we just want to indicate that A-consciousness falls outside what people ordinarily have in mind when they refer to consciousness. This is not merely a terminological issue, because theory construction obliges one to maintain a well-defined explanatory target. For us that target is phenomenal experience: P-consciousness.

Now we are in a position to properly distinguish between access-consciousness, on the one hand, and those process theories which exploit rich informational access, on the other. The former is a sort or kind of consciousness: "belief consciousness", if you like. A content is access-conscious if its representational vehicle has rich and widespread informational access in a cognitive system. The latter are theses about how phenomenal experience (the "what it is like") is to be explained. In particular, they suggest that a content is phenomenally conscious when its representational vehicle is inferentially promiscuous.

Block identifies a related, but distinct, tendency to conflate or confuse A-consciousness with P-consciousness. This leads to the "fallacy" of illicitly transferring functions of A-consciousness to P-consciousness. There are numerous theorists who make these mistakes, according to Block (for the list, see 1995, pp.236-43). Block may be right about this. But a more charitable, and obvious, interpretation of this literature is that the theorists he names are committed to explaining phenomenal experience in terms of the rich access relations attaching to some representational vehicles. They are committed, in short, to explaining P-consciousness in terms of A-consciousness.

2.2 From Informational Access to Information Processing Effects

There is, however, a certain amount of discord, among adherents of such process theories, as to what rich informational access actually consists in. When philosophers and cognitive scientists talk of informational access, they often treat it as a notion to be unpacked in terms of the capacity of representational vehicles to have characteristic cognitive effects. This approach is evident, for example, in Block's characterisation of access-consciousness. Recall his talk of representational vehicles being "poised" for use in reasoning, and the rational control of action and speech (1995, p.231). On this reading, what the process theorist asserts is that items of information are phenomenally conscious in virtue of the availability of their representational vehicles to guide reasoning and action.

When Dennett, on the other hand, comes to explain phenomenal consciousness in terms of informational access, he has a distinctly different notion in mind (1991, 1993). Consciousness, he tells us, is nothing more than "cerebral celebrity":

Those contents are conscious that persevere, that monopolize resources long enough to achieve certain typical and symptomatic effects on memory, on the control of behavior and so forth.... In just the way a Hall of Fame is a redundant formality (if you are already famous, election is superfluous, and if you are not, election probably won't make the difference), there is no induction or transduction into consciousness beyond the influence already secured by winning the competition and thereby planting lots of hooks into the ongoing operations of the brain. (1993, p.929)

Thus, for Dennett, phenomenal consciousness is reducible to certain characteristic information processing effects, on both memory and behaviour. And this involves a somewhat different notion of informational access, since the focus has moved from the capacity of certain representational vehicles to guide reasoning and action, to their achievements in actually guiding such reasoning and action. As a consequence, the flavour of Dennett's process theory of consciousness is different from most others found in the literature.

So process theorists have two quite different stories to tell about what constitutes rich and widespread informational access relations in the brain. One of these ascribes such access relations to those vehicles that are currently implicated in significantly more cognitive processes than their brethren. Here, whether a representational vehicle is inferentially promiscuous is determined by its current activity in the brain, by its current effectiveness in throwing its weight around. It is this account of informational access that Dennett has in mind when he attempts to explain phenomenal consciousness in terms of cerebral celebrity. The other account of informational access involves the supposition that at any moment in time, certain representational vehicles are apt to be implicated in significantly more cognitive episodes than others. This account is implicit in the work of process theorists who seek to explain consciousness in terms of the capacity to guide reasoning and action, and is endorsed quite explicitly by Chalmers, for example, when he contrasts his own process theory of consciousness with Dennett's:

The main difference between our account and Dennett's is that my account takes consciousness to go along with potential cerebral celebrity. It is not required that a content actually play a global role to be conscious, but it must be available to do so. (1996, p.229, emphasis in original)

But can both of these interpretations of informational access be sustained by process theorists? We think not, and here's why. To treat rich informational access as a capacity enjoyed by some representational vehicles, but not others, requires a distinction between those vehicles that are "poised" to have widespread information processing relations, at any one moment in time, and those that aren't. But such a distinction is difficult to maintain. A representational vehicle's information processing relations are, to state the obvious, relational properties of the vehicle, and hence don't supervene on the vehicle itself but depend for their obtaining on the satisfaction of certain systemic conditions. Most importantly, in the present context, whether any particular vehicle will be inferentially promiscuous depends on the future conditions that obtain in the system, as it responds to both new inputs and ongoing task demands. Consequently, there is a real sense in which, at a single moment, all the vehicles tokened in a complex cognitive system are "poised" to have such relations; they are all potential cerebral celebrities, because the future systemic conditions might be such that any one of them could be rapidly promoted to an influential position in guiding cognitive behaviour. Which of them will actually throw their weight about in the ensuing milliseconds will depend on the specific nature of the processing demands as they unfold.

On closer analysis, therefore, it makes little sense to talk of a particular representational vehicle enjoying rich and widespread information processing relations in a cognitive system unless it is actually having rich and widespread information processing effects in that system. Dennett, we believe, has seen this, and so avoids reading rich informational access in terms of the capacities of a select subset of representational vehicles. Instead, he concentrates on what these vehicles actually do in the brain: the impact they have on the brain's ongoing operations. A representational vehicle has rich informational access when it wins in the competition with its fellows, and in so doing persists long enough to have an enduring influence on cognition. As a result, phenomenal experience, according to Dennett, is like fame, a matter of having the right effects. With regard to pain, for example, he argues that our phenomenal experience is not identifiable with some internal state which is poised to cause typical pain reactions in the system; rather, "it is the reactions that compose the 'introspectable property' and it is through reacting that one 'identifies' or 'recognizes' the property" (1993, p.927). Consequently, in spite of the barrage of criticism that has been levelled at Dennett's account of phenomenal consciousness, his position is actually more consistent with the general entailments of process theories.

We think that many cognitive scientists and philosophers are caught in a bind between their theoretical commitments and their metaphysical intuitions. On the one hand, a commitment to the computational theory of mind drives these theorists to develop accounts of phenomenal consciousness in terms of processes defined over representational vehicles in general, and the rich computational relations enjoyed by some of these vehicles in particular. Yet, on the other, their intuitions won't allow them to fully embrace the consequences of such a position. From the first-person perspective, phenomenal experience appears to be a robust cause of one's subsequent thoughts and behaviour, not a composition of information processing effects. What results is a ready supply of process theories formulated in terms of the capacities of representational vehicles, since this formulation appears to license an alignment of consciousness with a privileged subset of vehicles deployed by the brain (entities that can be full-blooded causes of information processing effects) and this ameliorates the counter-intuitive consequences.

But this last, we have argued, is an unhappy compromise, based as it is on an illicit reading of rich informational access.[note4] These cognitive scientists and philosophers thus face a dilemma: they must either give up their process theories, or they must give up their intuitions about phenomenal consciousness. They cannot hold on to both. The former route appears to lead them away from cognitive science, and hence, for many, away from any real explanation of consciousness. The latter leaves them with an explanation, but one that is deeply unsatisfying. What to do?

Dennett exhorts these theorists to opt for the latter horn, claiming that we shouldn't view it as ominous that such process theories are at odds with common wisdom:

On the contrary, we shouldn't expect a good theory of consciousness to make for comfortable reading... If there were any such theory to be had, we would surely have hit upon it by now. The mysteries of the mind have been around for so long, and we have made so little progress on them, that the likelihood is high that some things we all tend to agree to be obvious are just not so. (1991, pp.37-8)

Another prominent philosopher who has, in his own way, seen and highlighted the dilemma facing cognitive science, is John Searle (in his 1992, for example). Famously, he opts to retain his intuitions about consciousness at the expense of cognitive science. Since computational theories of mind can't account for consciousness, so much the worse for these theories.

But we think there is a middle way: a way to hold on to our deep-seated intuitions about consciousness, and yet remain within the confines of cognitive science.

3 Avoiding the Dilemma

Thus far we have explored the following line of thought. When it comes to phenomenal experience, cognitive scientists tend to think that the computational theory of mind commits them to what we have been calling process theories: theories that take consciousness to emerge from the computational activities in which the brain's representational vehicles engage. But such process theories have counter-intuitive consequences; in particular, they force us to accept that phenomenal experience is composed of information processing effects. For cognitive scientists, therefore, it seems to come down to a choice between a counter-intuitive process theory or no theory at all. Hence, the dilemma.

It is our view, however, that this line of thought is wrong in one crucial respect: the computational theory of mind doesn't force cognitive scientists to embrace process accounts of consciousness, as there is an alternative strategy available. What's more, this alternative doesn't have the counter-intuitive consequences that plague process theories. In what follows we shall substantiate this claim.

3.1 Vehicle Theories of Phenomenal Consciousness

At the outset we noted that, given their commitment to the computational theory of mind (the view that human cognitive processes are to be understood as disciplined operations defined over neurally realised representational states), it is natural that cognitive scientists should seek to explain consciousness in terms of the kinds of computational processes in which representational vehicles are implicated. This is natural, but it is not obligatory. Given that computation is information processing, and given that information must be represented in order to be processed, cognitive scientists can also seek to identify consciousness with the representational vehicles employed by the brain to encode the information it processes.

Of course, it is sheer orthodoxy in cognitive science to hold that our brains represent far more information than we are capable of experiencing at any one moment in time. So this obviously rules out any story that simply aligns instantaneous phenomenal consciousness with all the information currently encoded by the brain. But it is now commonplace for theorists to distinguish between explicit and inexplicit forms of information coding. Representation is typically said to be explicit if each distinct item of information in a computational device is encoded by a physically discrete object. Information that is either stored dispositionally or embodied in a device's primitive computational operations, on the other hand, is said to be inexplicitly represented.[note5] It is reasonable to conjecture that the brain employs these different styles of representation. Hence the obvious alternative to a process theory, in cognitive science, is one that identifies phenomenal consciousness with the explicit coding of information in the brain.
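The explicit/inexplicit distinction can be made concrete with a toy illustration of our own devising (it is not drawn from the literature, and the names are invented for the example): the same item of information can be carried by a physically discrete stored object, or merely embodied in a device's primitive operations.

```python
# Two ways a simple device can "know" that 7 + 5 = 12.

# Explicit representation: the fact is encoded by a discrete,
# addressable object (here, a stored table entry).
sum_table = {(7, 5): 12}

# Inexplicit representation: the same information is embodied in the
# device's primitive computational operations; no single stored
# object encodes this particular fact.
def add(a, b):
    return a + b

# Both routes yield the same item of information.
assert sum_table[(7, 5)] == add(7, 5) == 12
```

On the vehicle theorist's conjecture, it is only information carried in the first way, by a discrete vehicle, that is a candidate for identification with phenomenal experience.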

We shall call any theory that takes this conjecture seriously a vehicle theory of consciousness. Such a theory holds that our conscious experience is identical to the vehicles of explicit representation in the brain. And on the face of it, such a theory appears quite attractive. The intuition here is that phenomenal experience typically involves consciousness "of something" (you are, for example, presumably conscious of the words and sentences on the paper in front of you, of the chair pressing against your body, and perhaps of some distant noises filtering in through the open window), and in being conscious of something we are privy to information, typically about our bodies or the environment. The conjecture that phenomenal experience is identical to the vehicles of explicit information coding deployed in the brain is one obvious way of fleshing out this intuition.

Because they focus on what representational vehicles are, rather than what they do, vehicle theories don't reduce consciousness to computational effects. What matters, so far as a vehicle theory of consciousness is concerned, is that the neural vehicles of explicit representation have certain intrinsic properties, not that they fill some special computational role, or enter into widespread information processing relations. Consequently, vehicle theorists are free to explain informational access in terms of phenomenal consciousness, rather than the converse. This order of explanation accords with the common wisdom that phenomenal experience stands behind and is causally responsible for our thoughts and behaviour.

In spite of their obvious virtues, vehicle theories of consciousness are exceedingly scarce in contemporary cognitive science. There are, we think, two principal reasons for this. First, experimental work employing such paradigms as dichotic listening, visual masking, and implicit learning, as well as the investigation of neurological disorders such as blindsight, suggests to many theorists that the explicit representation of information in the brain is dissociable from conscious experience, in the sense that the former can and often does occur in the absence of the latter. Second, it has simply been a working assumption of the classical computational theory of mind (the theory that takes human cognition to be a species of symbol manipulation) that there are a great many unconscious, explicit mental states. Consequently, there is currently almost unanimous support for the view that human cognition involves the unconscious manipulation of explicit representations. In this climate, it is no surprise that the growth of vehicle theories of consciousness has been inhibited, and that process theories have flourished in their stead.

But recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. On the one hand, a number of theorists have recently been highly critical of the experimental methodologies employed in the aforementioned studies; so critical, in fact, that it's no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated (see, e.g., Campion, Latto & Smith, 1983; Dulany, 1991; Holender, 1986; and Shanks & St. John, 1994). And on the other, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. As everyone knows, it now has a lively competitor in the form of connectionism [note6], and it's not obvious that the terrain will be unaltered when we take a fresh look at these issues from the connectionist perspective. Adding to these recent developments the dilemma we have been at pains to highlight, a reappraisal of vehicle theories of consciousness seems especially urgent.

In what remains of this paper, we will conduct such a reassessment. In doing so we won't critically re-evaluate the experimental work on dichotic listening, visual masking, implicit learning, and so forth, since this has already been done in some detail by the authors mentioned in the previous paragraph.[note7] Instead, we will concentrate on the prospects for developing a vehicle theory of consciousness in the light of the two competing computational conceptions of cognition. Specifically, we will argue that while classicism doesn't have the computational resources to defend such a theory (something that formalises what most theorists have simply taken for granted), the unique computational properties of connectionism make a vehicle theory possible (something that may come as a surprise to many theorists).

3.2 Classical and Connectionist Styles of Mental Representation

The classical computational theory of mind holds that human cognitive processes are digital computational processes. What this means is that classicism takes the generic computational theory of mind (the claim that cognitive processes are disciplined operations defined over neurally realized representational states), and adds to it a more precise account of both the representational states involved (they are complex symbol structures possessing a combinatorial syntax and semantics) and the nature of computational processes (they are syntactically-governed transformations of these symbol structures). All the rich diversity of human thought, from our most "mindless" everyday behaviour of walking, sitting and opening the fridge, to our most abstract conceptual ponderings, is the result, according to the classicist, of a colossal number of syntactically-driven operations defined over complex neural symbols.[note8]

In contrast, connectionism relies on the neurally inspired computational framework commonly known as parallel distributed processing (or just PDP).[note9] A PDP network consists in a collection of simple processing units, each of which has a numerical activation value that reflects its level of activity at any moment. These units are joined by connection lines, along which the activation value of a unit travels, thereby contributing to the input and subsequent activation level of other units. The connection lines incorporate modifiable, signed, numerical connection weights, which modulate the effect of the activation values travelling along them in either an excitatory or inhibitory fashion. Each unit sums the modulated inputs it receives, and then generates a new activation level that is some threshold function of its present activation level and that sum. A PDP network typically performs computational operations by "relaxing" into a stable pattern of activation in response to a stable array of inputs over its input lines. These operations are mediated by the connection weights, which determine (together with network connectivity) the way that activation is passed from unit to unit. The values of connection weights are set using a learning rule (the most common rule employed in "toy" PDP models is back propagation; see, e.g., the discussion in Rumelhart, Hinton & Williams, 1986). Through repeated applications of such a rule, an individual network can be taught to generate a range of stable target patterns in response to a range of inputs.
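The relaxation process just described can be sketched in a few lines of code. What follows is a minimal illustration of our own (the network size, weights, and inputs are all invented for the example, and the update rule is simplified to depend on the summed input alone): a small collection of units repeatedly sums its weighted inputs and passes the result through a threshold function until a stable activation pattern emerges.

```python
import numpy as np

def sigmoid(x):
    """Threshold function: maps summed input into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relax(weights, external_input, steps=50):
    """Update all unit activations in parallel until the network settles.

    weights[i, j] is the signed connection weight on the line from
    unit j to unit i; external_input is a stable array of inputs
    clamped onto the units.
    """
    activation = np.zeros(len(external_input))
    for _ in range(steps):
        # Each unit sums its modulated inputs...
        summed = weights @ activation + external_input
        # ...and generates a new activation level via the threshold function.
        activation = sigmoid(summed)
    return activation

# A toy three-unit network with excitatory (positive) and
# inhibitory (negative) connection weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3))
np.fill_diagonal(W, 0.0)  # no self-connections

inputs = np.array([1.0, -0.5, 0.2])
pattern = relax(W, inputs)  # the stable activation pattern
```

A learning rule such as back propagation would then adjust the entries of `W` so that chosen inputs relax to chosen target patterns; we omit that step here.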

The PDP computational framework does for connectionism what digital computational theory does for classicism. Human cognitive processes, according to connectionism, are the computational operations of a multitude of PDP networks implemented in the neural hardware in our heads. And the human mind is viewed as a coalition of interconnected, special-purpose, PDP devices whose combined activity is responsible for the rich diversity of our thought and behaviour. This is the connectionist computational theory of mind.[note10]

Because these two cognitive frameworks are supported by different conceptions of computation, they tell different stories about the manner in which information is coded in the brain. We'll highlight this fact by taking a look at the nature of representation in both classicism and connectionism.

First, and most straightforwardly, information can be represented in a computational system in an explicit fashion. Dennett provides a nice definition of this style of representation:

Let us say that information is represented explicitly in a system if and only if there actually exists in the functionally relevant place in the system a physically structured object, a formula or string or tokening of some members of a system (or 'language') of elements for which there is a semantics or interpretation, and a provision (a mechanism of some sort) for reading or parsing the formula. (1982, p.216)

In the classical context, explicit representation consists in the tokening of symbols in some neurally realized representational medium. This is a very robust form of mental representation, as each distinct item of information is encoded by a physically discrete, structurally complex object in the human brain. With connectionism, on the other hand, explicit representation consists in the generation of stable activation patterns across neurally realised PDP networks.[note11] These patterns are physically discrete, structurally complex objects, which, like the symbols in conventional computers, each possess a single semantic value: no activation pattern ever represents more than one distinct content. They are embedded in a system with the capacity to process them in structure-sensitive ways, and are "read" in virtue of having effects elsewhere in the system. Moreover, the quality of this effect is structure-sensitive (ceteris paribus), that is, it is dependent on the precise profile of the source activation pattern. While the semantics of a PDP network is not language-like, it typically involves some kind of systematic mapping between locations in activation space and the object domain.[note12]

Information can also be represented in a computational system inexplicitly: either in virtue of the fact that, while it is not currently explicit, the system is capable of rendering it explicit; or in virtue of being embodied in the primitive computational operations of the system.[note13] This style of representation is crucial to computational accounts of cognition, because it is utterly implausible to suppose that everything we know is encoded explicitly. Thus, both classicism and connectionism are committed to the existence of highly efficient generative mechanisms, whereby most of our knowledge can be readily derived. For classicists, such mechanisms rely on the physical symbols currently being tokened (i.e. stored symbols and those that are part of an active process) and the computational processes (data retrieval and data transformation) that enable novel symbols to be produced. Crucially, when accounting for the conversion of information that is merely inexplicit into symbolic form, classicists rely on the existence of a long-term store of explicit data, our "core" knowledge store (see, e.g., Dennett 1984; and Fodor 1987, Chp.1). Without such a store of symbolic knowledge there is no foundation on which to build explicit vehicles whose causal powers reflect the vital computational role of inexplicitly represented information.

Inexplicit representation is even more important in connectionism, however, since any explicitly coded information (in the form of a stable pattern of activation) is obliterated every time a PDP network is exposed to a new input. While activation patterns are a transient feature of PDP systems, a "trained" network encodes, in its connection matrix, the disposition to generate a whole range of activation patterns in response to cueing inputs. So a network, in virtue of the values of its connection weights, and its particular pattern of connectivity, can be said to store a set of contents corresponding to the range of explicit tokens it is disposed to generate. Connection weights are thus responsible for long-term memory in PDP systems. Central to the connectionist account of mental representation is the fact that such long-term information storage is superpositional in nature, since each connection weight contributes to the storage of every disposition. This means that the information stored in a PDP network is entirely inexplicit, because none of it is encoded in a physically discrete manner. Thus, on the connectionist computational theory of mind, according to which the human mind is a coalition of neurally-realised PDP devices, none of our long-term knowledge is stored explicitly. All the information that is represented explicitly in our brains is generated on-the-fly, as a transient response to current input, while anything more permanent is represented inexplicitly.
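The superpositional character of this storage can be illustrated with another toy example of our own (again a Hebbian sketch, not the authors' model): two patterns are stored in one and the same weight matrix, so that every individual weight carries a contribution from both, yet each pattern remains recoverable as a stable state. No single weight, and no discrete region of the matrix, encodes either pattern on its own.

```python
import numpy as np

# Two (orthogonal) activation patterns to be stored superpositionally.
patterns = [
    np.array([1, -1, 1, -1, 1, -1, 1, -1]),
    np.array([1, 1, -1, -1, 1, 1, -1, -1]),
]

# Superimpose the Hebbian contribution of every pattern in a single
# connection matrix: each weight helps store every disposition.
w = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(w, 0.0)

def recall(weights, cue, steps=10):
    # Relax into a stable activation pattern.
    state = cue.copy()
    for _ in range(steps):
        new_state = np.where(weights @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Each stored pattern is a stable point of the very same matrix,
# and a corrupted cue still relaxes to the nearest stored pattern.
for p in patterns:
    assert np.array_equal(recall(w, p), p)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
assert np.array_equal(recall(w, noisy), patterns[0])
```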

3.3 A Vehicle Theory of Consciousness: Why Classicism Can't, But Connectionism Can

Armed with this overview of classical and connectionist styles of mental representation, we can now raise the following question: Does either of these conceptions of cognition have the computational resources to support a vehicle theory of phenomenal consciousness? Such a vehicle theory would treat the distinction between explicit and inexplicit representation as the boundary between the conscious and the unconscious. It would hold that all phenomenal experience is the result of the tokening of explicit representations in the brain's representational media, and that whenever such a representation is tokened, its content is phenomenally experienced. And it would hold that whenever information is causally implicated in cognition, yet not consciously experienced, such information is encoded inexplicitly.

We think it is abundantly clear, however, that a classicist can't really contemplate this kind of vehicle theory of phenomenal experience. Whenever we act in the world, whenever we perform even very simple tasks, it is evident that our actions are guided by a wealth of knowledge concerning the domain in question.[note14] So in standard explanations of decision making, for example, the classicist makes constant reference to beliefs and goals that have a causal role in the decision procedure. It is also manifest that most of the information guiding this process is not phenomenally conscious. According to the classical vehicle theory under consideration, then, such beliefs must be inexplicit: either derivable from explicit beliefs via the inference rules of the system, or embodied somehow in the brain's primitive computational operations. The difficulty with this suggestion, however, is that many of the conscious steps in a decision process implicate a range of unconscious beliefs interacting according to unconscious rules of inference. That is, there is a complex economy of unconscious states that mediates the sequence of conscious episodes. While it is possible that all the rules of inference are inexplicit, the mediating train of unconscious beliefs must interact to produce their effects, else we don't have a causal explanation. But the only model of causal interaction available to a classicist involves explicit representations (Fodor is one classicist who has been at pains to point this out; see, e.g., his 1987, p.25). So, either the unconscious includes explicit states, or there are no plausible classical explanations of higher cognition.

There is a further difficulty for this version of classicism: it provides no account whatever of learning. While we can assume that some of our intelligent behaviour comes courtesy of endogenous factors, a large part of our intelligence is a result of a long period of learning. A classicist typically holds that learning (as opposed to development or maturation) consists in the fixation of beliefs via the generation and confirmation of hypotheses. This process must be largely unconscious, since much of our learning doesn't involve conscious hypothesis testing. As above, this picture of learning requires an interacting system of unconscious representations, and, for a classicist, this means explicit representations. If we reject this picture, and suppose the unconscious to be entirely inexplicit, then there is no cognitive explanation of learning, in that learning is always and everywhere merely a process which reconfigures the brain's functional architecture. But any classicist who claims that learning is non-cognitive is a classicist in no more than name.

The upshot of all of this is that any remotely plausible classical account of human cognition is committed to a vast amount of unconscious symbol manipulation. Consequently, classicists can accept that inexplicit information has a major causal role in human cognition, and they can accept that much of our acquired knowledge of the world and its workings is stored in an inexplicit fashion. But they cannot accept that the only explicitly represented information in the brain is that which is associated with our phenomenal experience: for every conscious state participating in a mental process, classicists must posit a whole bureaucracy of unconscious intermediaries, doing all the real work behind the scenes. Thus, for the classicist, the boundary between the conscious and the unconscious cannot be marked by a distinction between explicit and inexplicit representation. We conclude that classicism doesn't have the computational resources required to develop a plausible vehicle theory of phenomenal consciousness. Consequently, any classicist who seeks a computational theory of consciousness is forced to embrace a process theory, a conclusion, we think, that formalises what most classicists have simply taken for granted.

But what about connectionism? Is a vehicle theory of consciousness any more plausible in its connectionist incarnation than in the classical context? We think it is. While we were able to identify some common ground between classicism and connectionism, there is nonetheless an important representational asymmetry that divides them. Whereas much of the information represented in an inexplicit fashion is causally impotent in the classical framework (it must be rendered explicit before it can have any effects), the same is not true of connectionism. This makes all the difference. In particular, whereas classicism, using only its inexplicit representational resources, is unable to meet all the causal demands on the unconscious (and is thus committed to a good deal of unconscious symbol manipulation), connectionism holds out the possibility that it can (thus leaving stable activation patterns free to line up with the contents of consciousness).

Information is stored in PDP networks, you'll recall, in virtue of relatively long-term dispositions to generate a range of explicit representations (stable activation patterns) in response to cueing inputs. These dispositions are themselves determined by the particular connection weight values and connection pattern of each network. However, we also saw earlier that the configuration of a network's connection matrix is responsible for the manner in which it responds to input (by relaxing into a stable pattern of activation), and hence the manner in which it processes information. This means that the causal substrate driving the computational operations of a PDP network is identical to the supervenience base of the network's memory. So there is a strong sense in which it is the information constituting a network's memory that actually governs its computational operations.[note15]

This fact about PDP systems has major consequences for the manner in which connectionists conceptualise cognitive processes. Crucially, information that is merely represented in a dispositional form in connectionist systems need not be rendered explicit in order to be causally efficacious. There is a real sense in which all the information that is encoded in a network in a dispositional fashion is causally active whenever that network responds to an input. What is more, learning, on the connectionist story, involves the progressive modification of a network's connection matrix, in order to encode further information in an inexplicit fashion. Learning, in other words, is a process which actually reconfigures the inexplicit representational base, and hence adjusts the primitive computational operations of the system. In Pylyshyn's (1984) terms, one might say that learning is achieved in connectionism by modifying a system's functional architecture.
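Learning as the progressive modification of a connection matrix can also be sketched. The example below is ours and deliberately minimal: a single sigmoid unit trained by the delta rule (a simpler relative of the back propagation procedure mentioned earlier) acquires the disposition to respond like logical OR. The acquired information lives entirely in the adjusted weights; at no point during learning is it tokened explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # connection weights of a one-unit "network"
bias = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Target disposition: respond like logical OR to two input lines.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0.0, 1.0, 1.0, 1.0])

# Repeated applications of the delta rule nudge each weight in
# proportion to its input and the current output error.
for _ in range(5000):
    for x, t in zip(inputs, targets):
        error = t - sigmoid(w @ x + bias)
        w += 0.5 * error * x
        bias += 0.5 * error

outputs = sigmoid(inputs @ w + bias)
print(np.round(outputs))  # → [0. 1. 1. 1.]
```

The point of the sketch is that the change wrought by learning is a reconfiguration of `w` and `bias`, the very quantities that drive processing, rather than the deposit of a new explicit data structure.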

The bottom line in all of this is that the inexplicit representational resources of connectionist models of cognition are vast, at least in comparison with their classical counterparts. In particular, the encoding and, more importantly, the processing of acquired information are the preserve of causal mechanisms that don't implicate explicitly represented information (at least, not until the processing cycle is complete and stable activation is achieved). Consequently, most of the computational work that a classicist must assign to unconscious symbol manipulations can, in connectionism, be credited to operations implicating inexplicit representation. Thus a connectionist can feel encouraged about the possibility of aligning phenomenal experience with explicit representation in the brain.

3.4 Conclusion

When cognitive scientists try to hang on to the intuition that conscious experience is a cause of our behaviour, while developing process theories of consciousness, they are putting themselves in an inherently unstable position. We have argued that to adopt a process theory requires that we view consciousness as a composition of information processing effects. For cognitive scientists in thrall to classicism, this leaves a choice between a counter-intuitive process theory and no theory at all. However, when one moves to connectionism, the cognitive landscape undergoes a significant shift. Connectionism, unlike classicism, appears to have the right computational profile to hazard a vehicle theory of consciousness. What connectionism offers, therefore, is a stabilisation procedure for cognitive science. It offers an explanation of consciousness that not only remains within the confines of cognitive science, but, by identifying phenomenal experience with the brain's explicit representational vehicles, also treats it as a cause of one's thoughts and behaviour. Spelling out the full details of this connectionist theory is a task for another time.[note16] But its initial plausibility should excite those theorists seeking a computational theory of consciousness that accords with our first-person understanding of this most elusive and puzzling phenomenon.


References

Baars, B.J. (1988) A Cognitive Theory of Consciousness (Cambridge University Press).
Bechtel, W. & Abrahamsen, A. (1991) Connectionism and the Mind (Blackwell).
Block, N. (1993) Book review of Dennett's "Consciousness Explained", Journal of Philosophy, 90, pp. 181-93.
Block, N. (1995) On a confusion about a function of consciousness, Behavioral and Brain Sciences, 18, pp. 227-87.
Campion, J., Latto, R. & Smith, Y.M. (1983) Is blindsight an effect of scattered light, spared cortex, and near-threshold vision?, Behavioral and Brain Sciences, 6, pp. 423-86.
Chalmers, D. (1996) The Conscious Mind (Oxford University Press).
Chomsky, N. (1980) Rules and representations, Behavioral and Brain Sciences, 3, pp. 1-62.
Churchland, P.M. (1995) The Engine of Reason, the Seat of the Soul (MIT Press).
Clark, A. (1989) Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (MIT Press).
Clark, A. (1993) Associative Engines: Connectionism, Concepts and Representational Change (MIT Press).
Cummins, R. (1986) Inexplicit representation, in: M.Brand & R.Harnish (Eds) The Representation of Knowledge and Belief (University of Arizona Press).
Cummins, R. & Schwarz, G. (1991) Connectionism, computation, and cognition, in: T.Horgan & J.Tienson (Eds) Connectionism and the Philosophy of Mind (Kluwer).
Cussins, A. (1990) The connectionist construction of concepts, in: M.Boden (Ed) The Philosophy of Artificial Intelligence (Oxford University Press).
Dennett, D.C. (1982) Styles of mental representation, Proceedings of the Aristotelian Society, New Series, 83, pp. 213-26.
Dennett, D.C. (1984) Cognitive wheels: The frame problem of AI, in: C.Hookway (Ed) Minds, Machines and Evolution (Cambridge University Press).
Dennett, D.C. (1991) Consciousness Explained (Little, Brown).
Dennett, D.C. (1993) The message is: There is no medium, Philosophy and Phenomenological Research, 53, pp. 919-31.
Dietrich, E. (1989) Semantics and the computational paradigm in cognitive psychology, Synthese, 79, pp. 119-41.
Dulany, D.E. (1991) Conscious representation and thought systems, in: R.S.Wyer Jr. & T.K.Srull (Eds) Advances in Social Cognition IV (Lawrence Erlbaum).
Field, H. (1978) Mental representation, Erkenntnis, 13, pp. 9-61.
Fodor, J.A. (1975) The Language of Thought (MIT Press).
Fodor, J.A. (1981) Representations (MIT Press).
Fodor, J.A. (1987) Psychosemantics (MIT Press).
Fodor, J.A. & Pylyshyn, Z.W. (1988) Connectionism and cognitive architecture: A critical analysis, Cognition, 28, pp. 3-71.
Harman, G. (1973) Thought (Princeton University Press).
Haugeland, J. (1981) Semantic engines: An introduction to mind design, in: J.Haugeland (Ed) Mind Design: Philosophy, Psychology, and Artificial Intelligence (MIT Press).
Haugeland, J. (1985) Artificial Intelligence: The Very Idea (MIT Press).
Holender, D. (1986) Semantic activation without conscious awareness in dichotic listening, parafoveal vision, and visual masking: A survey and appraisal, Behavioral and Brain Sciences, 9, pp. 1-66.
Horgan, T. & Tienson, J. (1989) Representations without rules, Philosophical Topics, 27, pp. 147-74.
Johnson-Laird, P.N. (1988) The Computer and the Mind: An Introduction to Cognitive Science (Fontana Press).
Kinsbourne, M. (1988) Integrated field theory of consciousness, in: A.Marcel & E.Bisiach (Eds) Consciousness in Contemporary Science (Clarendon Press).
Kinsbourne, M. (1995) Models of consciousness: Serial or parallel in the brain?, in: M.Gazzaniga (Ed) The Cognitive Neurosciences (MIT Press).
Lloyd, D. (1995) Consciousness: A connectionist manifesto, Minds and Machines, 5, pp. 161-85.
Lloyd, D. (1996) Consciousness, connectionism, and cognitive neuroscience: A meeting of the minds, Philosophical Psychology, 9, pp. 61-79.
Mandler, G. (1985) Cognitive Psychology: An Essay in Cognitive Science (Lawrence Erlbaum).
McClelland, J.L. & Rumelhart, D.E. (Eds) (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol. 2: Psychological and Biological Models (MIT Press).
Newell, A. (1980) Physical symbol systems, Cognitive Science, 4, pp. 135-83.
O'Brien, G.J. & Opie, J.P. (forthcoming) A Connectionist Theory of Phenomenal Experience, Behavioral and Brain Sciences.
Pylyshyn, Z.W. (1980) Computation and cognition: Issues in the foundations of cognitive science, Behavioral and Brain Sciences, 3, pp. 111-69.
Pylyshyn, Z.W. (1984) Computation and Cognition (MIT Press).
Pylyshyn, Z.W. (1989) Computing in cognitive science, in: M.Posner (Ed) Foundations of Cognitive Science (MIT Press).
Ramsey, W., Stich, S. and Rumelhart, D.E. (Eds) (1991) Philosophy and Connectionist Theory (Lawrence Erlbaum).
Rumelhart, D.E. (1989) The architecture of mind: A connectionist approach, in: M.Posner (Ed) Foundations of Cognitive Science (MIT Press).
Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning internal representations by error propagation, in: D.Rumelhart and J.McClelland (Eds) Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol 1: Foundations (MIT Press).
Rumelhart, D.E. and McClelland, J.L. (Eds) (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol 1: Foundations (MIT Press).
Rumelhart, D.E., Smolensky, P., McClelland, J.L. and Hinton, G.E. (1986) Schemata and sequential thought processes in PDP models, in: J.McClelland and D.Rumelhart (Eds) Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol 2: Psychological and Biological Models (MIT Press).
Schacter, D. (1989) On the relation between memory and consciousness: Dissociable interactions and conscious experience, in: H.Roediger & F.Craik (Eds) Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving (Lawrence Erlbaum).
Searle, J.R. (1992) The Rediscovery of Mind (MIT Press).
Shallice, T. (1988a) From Neuropsychology to Mental Structure (Cambridge University Press).
Shallice, T. (1988b) Information-processing models of consciousness: Possibilities and problems, in: A.Marcel & E.Bisiach (Eds) Consciousness in Contemporary Science (Clarendon Press).
Shanks, D.R. & St. John, M.F. (1994) Characteristics of dissociable human learning systems, Behavioral and Brain Sciences, 17, pp. 367-447.
Shoemaker, S. (1993) Lovely and suspect ideas, Philosophy and Phenomenological Research, 53, pp. 905-910.
Smolensky, P. (1988) On the proper treatment of connectionism, Behavioral and Brain Sciences, 11, pp. 1-23.
Sterelny, K. (1990) The Representational Theory of Mind (Blackwell).
Stich, S. (1978) Beliefs and subdoxastic states, Philosophy of Science, 45, pp. 499-518.
Tye, M. (1993) Reflections on Dennett and consciousness, Philosophy and Phenomenological Research, 53, pp. 893-98.
Umilta, C. (1988) The control operations of consciousness, in: A.Marcel & E.Bisiach (Eds) Consciousness in Contemporary Science (Clarendon Press).
Van Gelder, T. (1990) Compositionality: A connectionist variation on a classical theme, Cognitive Science, 14, pp. 355-84.
Von Eckardt, B. (1993) What is Cognitive Science? (MIT Press).


Notes

[Note 1] That this is a generic definition is important. Some writers tend to construe the computational theory of mind as the claim that cognitive processes are the rule-governed manipulations of internal symbolic representations. However, we will take this narrower definition to describe one, admittedly very popular, species of computational theory, viz: the classical computational theory of mind. Our justification for this is the emerging consensus in the field of cognitive science that computation is a broader concept than symbol manipulation. See, e.g., Cummins & Schwarz, 1991, p.64; Dietrich, 1989; Fodor, 1975, p.27; and Von Eckardt, 1993, pp.97-116.

[Note 2] In speaking of 'phenomenal experience' our intended target is neither self-consciousness nor what has come to be called access-consciousness (see Block, 1993, 1995). It is, rather, phenomenal consciousness : the "what it is like" of experience. We will speak variously of 'phenomenal experience', 'phenomenal consciousness', 'conscious experience', or sometimes just plain 'consciousness', but in each case we refer to the same thing.

[Note 3] Block also mentions self-consciousness and monitoring-consciousness. The latter is very similar to what is standardly understood as attention (1995, pp.235-6).

[Note 4] This is precisely Dennett's point when he accuses many contemporary cognitive scientists and philosophers of being covert Cartesian materialists (see especially his 1993, p.920). While they are process theorists in name, their metaphysical commitments prevent them from wholly entering into the spirit of this approach to consciousness.

[Note 5] See, e.g., Dennett, 1982; Pylyshyn, 1984; and Cummins, 1986. We discuss the distinction between explicit and inexplicit representation more fully in Section 3.2.

[Note 6] We are assuming here that connectionism does constitute a computational account of human cognition (and is hence a competing paradigm within the discipline of cognitive science). Although some have questioned this assumption, we think it accords with the orthodox view (see, e.g., Cummins & Schwarz 1991; Fodor & Pylyshyn 1988; and Von Eckardt 1993, Chp.3).

[Note 7] For a nice summary of this material see Dulany, 1991. See also O'Brien and Opie, forthcoming.

[Note 8] The more prominent contemporary philosophers and cognitive scientists who advocate a classical conception of cognition include Chomsky (1980), Field (1978), Fodor (1975, 1981, 1987), Harman (1973), Newell (1980), Pylyshyn (1980, 1984, 1989), and Sterelny (1990). For those readers unfamiliar with classicism, a good entry point is provided by the work of Haugeland (1981, 1985, especially Chps.2 and 3).

[Note 9] The locus classicus of PDP is the two volume set by Rumelhart, McClelland, and the PDP Research Group (Rumelhart & McClelland, 1986; McClelland & Rumelhart, 1986). Useful introductions to PDP are Rumelhart and McClelland 1986, Chps.1-3; Rumelhart 1989; and Bechtel & Abrahamsen 1991, Chps.1-4.

[Note 10] Some of the more prominent contemporary philosophers and cognitive scientists who advocate a connectionist conception of cognition include Clark (1989, 1993), Cussins (1990), Horgan and Tienson (1989), Rumelhart and McClelland (Rumelhart & McClelland, 1986; McClelland & Rumelhart, 1986), Smolensky (1988), and the earlier Van Gelder (1990). For useful introductions to connectionism, see Bechtel & Abrahamsen, 1991; Clark, 1989, Chps.5-6; Rumelhart, 1989; and Tienson, 1987.

[Note 11] For good general introductions to the representational properties of PDP systems, see Bechtel & Abrahamsen, 1991, Chp.2; Churchland, 1995; Churchland & Sejnowski, 1992, Chp.4; Rumelhart & McClelland, 1986, Chps.1-3; and Rumelhart, 1989. More fine-grained discussions of the same can be found in Clark, 1993; and Ramsey, Stich & Rumelhart, 1991, Part II.

[Note 12] Here we are relying on what has become the standard way of distinguishing between the explicit representations of classicism and connectionism, whereby the former, but not the latter, are understood as possessing a (concatenative) combinatorial syntax and semantics. The precise nature of the internal structure of connectionist representations, however, is a matter of some debate; see, e.g., Fodor & Pylyshyn, 1988; Smolensky, 1988; and Van Gelder, 1990.

[Note 13] Dennett calls the former potentially explicit representation, and the latter tacit representation (1982, pp.216-18).

[Note 14] This fact about ourselves has been made abundantly clear by research in the field of artificial intelligence, where practitioners have discovered to their chagrin that getting computer-driven robots to perform even very simple tasks requires not only an enormous knowledge base (the robots must know a lot about the world) but also a capacity to very rapidly access, update and process that information. This becomes particularly acute for AI when it manifests itself as the frame problem. See Dennett (1984) for an illuminating discussion.

[Note 15] This is often expressed in terms of connectionism's break with the classical code/process divide (see, e.g., Clark, 1993).

[Note 16] Hints towards such a connectionist vehicle theory of consciousness can be found in Rumelhart, Smolensky, McClelland and Hinton, 1986; and Smolensky, 1988. For a much more detailed description and defence of this approach to phenomenal consciousness, see O'Brien and Opie, forthcoming; and Lloyd, 1995; 1996.