Where Did the Word "Cognitive" Come From Anyway?

Christopher D. Green
Department of Psychology
York University
North York, Ontario
M3J 1P3
CANADA

christo@yorku.ca

(1996) Canadian Psychology, 37, 31-39


Abstract

Cognitivism is the ascendant movement in psychology these days. It reaches from cognitive psychology into social psychology, personality, psychotherapy, development, and beyond. Few psychologists know the philosophical history of the term, "cognitive," and many use it as though it were completely synonymous with "psychological" or "mental." In this paper, I trace the origins of the term "cognitive" to the ethical theories of the early 20th century, and follow it through the logical positivistic philosophy of science of this century's middle part. In both of these settings, "cognitive" referred not primarily to the psychological but, rather, to the truth-evaluable (i.e., those propositions about which one can say that they are either true or false). I argue that, strictly speaking, cognitivism differs from traditional mentalism in being the study of only those aspects of the mental that can be subjected to truth conditional analysis (or sufficiently similar "conditions of satisfaction"). This excludes traditionally troublesome aspects of the mental such as consciousness, qualia, and (the subjective aspects of) emotion. Although cognitive science has since grown to include the study of some of these phenomena, it is important to recognize that one of the original aims of the cognitivist movement was to re-introduce belief and desire into psychology, while still protecting it from the kinds of criticism that behaviorists had used to bring down full-blown mentalism at the beginning of the century.

1. Introduction

Cognitivism is big. So big that it seems to be turning up in almost every corner of psychology these days. In addition to "standard" (viz., human adult experimental) cognitive psychology there is cognitive development, cognitive therapy, cognitive neuropsychology, social cognition, animal cognition, and so forth. What is more, there is cognitive science, which merges cognitive psychology with aspects of philosophy, artificial intelligence, linguistics, neuroscience, and cognitive anthropology to form a discipline (or interdiscipline, or multidiscipline) that has enjoyed tremendous growth in the universities of North America over the last decade. Where once the prefix "behavioral" ranged over everything from language to emotion, now "cognitive" theories seem to prevail.

But where did this term, "cognitive," come from? There are many different versions of the tale, but among the most popular is that which claims that psychologists like Jerome Bruner, George Miller, and Ulrich Neisser transformed a psychology dominated by behaviorism by restoring to a place of honor a set of traditional psychological topics that had been virtually banned outright during behaviorism's ascendancy. What justified this re-introduction (apart from the fact that such terms had been the traditional bread and butter of psychological thought before Watson had appeared on the scene) was the development of a new and rigorous way of studying them and their functions: the "information processing" approach.1

Some believe that the impetus for this change was original to psychology, and from there spread to philosophy, linguistics, etc. (e.g., Craik, 1991). Elsewhere (Green, 1994), I have argued that this view is historically incorrect; that cognition was a going concern in philosophy, artificial intelligence, and linguistics long before it caught on in experimental psychology. Even so, this does not explain how the term "cognitive" came to be used to cover pretty well what was once known as "mental," methodological quibbles aside. Obviously, "cognitive" is a cognate of Descartes' "cogito." In fact, some of those who opposed the so-called cognitive revolution (e.g., Skinner, 1989) argued that it is little more than an anachronistic resurgence of Cartesian dualism. On closer examination, however, such a view seems to be more tendentious, and less complete, than even the suspiciously Carlylian2 view that a few brave psychologists set out against the overwhelming behavioristic tide of the 1950s to save psychology from itself. Baars (1986, p. 158) has argued that the term, "cognitive" is inherently ambiguous; specifically, that an "old-fashioned" use meaning "conscious" competes with two more contemporary usages-one referring to the use of intervening variables in psychological theory, and the other referring to a particular subject area of psychology having to do with memory, reasoning, and the like.

In this paper I will argue that the term "cognitive" is not really ambiguous; that it has, in fact, a quite rigorous definition and that, probably surprisingly to psychologists, that definition is not inherently psychological, though it can be profitably applied to some problems that have dogged psychology since the beginning of the century. Put briefly, I will argue that the term, "cognitive," under this very strict and not particularly psychological definition, was derived from early 20th-century philosophical theories of ethics, and made its way, via the logical positivistic philosophy of science of the 1930s and 1940s, into the philosophical psychology of the 1950s and 1960s. At this point, the story becomes more complicated. Before there was a branch of psychology called "cognitive," the term was made popular among psychologists mainly by social psychologists such as Asch, Festinger, and Heider. As interest in thinking, language, and memory began to rise among experimental psychologists-primarily under the influence of the work of artificial intelligence researchers, linguists, and philosophers (see Green, 1994)-the technical meaning from outside psychology began to merge with the looser meaning employed by social psychologists. The strict meaning never took hold in psychology, however, and the conflation of the two has led many psychologists to effectively equate the cognitive with almost anything loosely regarded as "mental." This fate turns out to be a little ironic because the application of the term "cognitive" to problems of mind by philosophers was intended specifically to divide the mental into two categories-one to which the methods of logic and computer science could be successfully applied-the "cognitive"-and one to which they could not. Thus, "cognitive" was never intended by its philosophical advocates to be synonymous with "mental" and, consequently, much of what now goes by the name of "cognition" in psychology is not really "cognitive" in the strict sense at all.

Of course, the meanings of words evolve and change over time, and there is no a priori reason that contemporary cognitivists should abide by the usage developed by ethicists over half a century ago. It turns out, however, that it is precisely those parts of cognitive psychology that are not cognitive in the strict sense that have shown themselves to be least susceptible to rigorous scientific study. In fact, it was precisely in the attempt to disown scientifically troublesome aspects of the "mental" such as emotion and consciousness, while simultaneously retaining those such as thought, belief, and desire that the term "cognitive" was invoked by the philosophical psychologists of the 1950s and 1960s. That is to say, the distinction philosophers made with the application of the term "cognitive" to problems of mind is an important one that psychologists ignore at their peril.

2. Ethics and Cognitivism

Until the 20th century, ethicists of pretty well all stripes assumed that moral claims are truth-evaluable; i.e., that they express propositions that are either true or false. For example, it was assumed that a claim such as "Killing is bad" is, at minimum, either true or false. Which of these two it actually is might be a matter of dispute, but few doubted that it was either one or the other. The coming of the 20th century would bring great changes to ethics. G. E. Moore argued in Principia ethica (1903) that moral terms such as "good," though they are meaningful, describe no part of the "natural" world. Specifically, Moore argued that ethical properties supervene on natural ones. That is, there can be no change in the ethical properties of a thing unless there are changes in the underlying natural ones. Thus, for instance, if it is agreed that a given action A is good, then action A could not become bad unless there were some change in its natural (roughly, physico-spatio-temporal) properties. This is of interest to psychologists because the notion of supervenience has been transmitted from ethics to cognitive science as well. Briefly, philosophical psychologists of the 1970s, such as Davidson and Fodor, would apply Moore's principle of supervenience to mental properties; viz., there can be no change in mental state (e.g., the state of believing that P) without a change in the underlying physical state (e.g., a change in the state of the brain). Although an interesting aspect of the history of cognitivism, supervenience is not one that I will be discussing in this paper. The issue at hand-that of the origin of the term "cognitive"-is relatively independent of the issue of supervenience.

As fundamental a change as Moore's "non-naturalism" (as it was called) was to ethics, it did not undercut the general belief that moral claims are either true or false. By the 1930s, however, even the modest assumption that moral claims are true or false came under attack by a group of philosophers who came to be known as "noncognitivists." Noncognitive ethicists believed that moral claims are not about matters of fact at all; that, indeed, there is nothing-natural or otherwise-for them to be true or false of.

Although noncognitive ethics first arose in Scandinavia (Hägerström, 1911, 1917), its most celebrated proponents were British and American. Exactly what noncognitivists thought moral claims express, if not matters of fact, varied from one theorist to the next. Emotivists such as A. J. Ayer (1936, chapter 6) took moral claims to be nothing more than expressions of personal feelings, much like verbal ejaculations such as "Ouch!" For example, the claim "Killing is wrong" was thought to be nothing more than saying just "Killing...Boo!" More considered emotivists, such as Charles Stevenson (1937, 1938a, 1938b, 1944), argued that moral exhortations are not just expressions of emotion, but also direct attempts to influence the attitudes and behavior of others to be consonant with one's own inclinations. According to this view, although moral claims appear to have the form of indicative sentences (e.g., "Killing is wrong."), and thus appear to be cognitive, they actually function more like optatives (e.g., "O! Let there be no killing!"), enjoining others to behave in accordance with the expressed wish.

Other noncognitive ethical theories, apart from emotivism, include existentialism, whose advocates believed morality to be something entirely of human making, as well as the "linguistic" ethical theories of J. O. Urmson (1950) and R. M. Hare (1952, 1963). Urmson and Hare were both writing at the peak of Wittgenstein's influence on British philosophy and, consequently, focused their attention on the use to which moral language (e.g., "good," "just," etc.) is put, rather than its meaning, in the traditional sense (which they didn't believe moral language to have, in any case). The use to which Urmson thought moral language is generally put is the activity of grading; the sorting of things into better and worse qualities. As such, moral claims were thought to be neither true nor false, but simply tools of the grading activity. Hare, on the other hand, thought of moral language as an implicit form of imperative. For instance, saying, "Theft is bad," is simply a backhanded way of saying "Don't steal," or, "You ought not to steal." This is obviously related to Stevenson's view, but rather than believing that moral claims are overt attempts to change others' attitudes and behavior directly, Hare thought them only to serve the function of suggesting to others particular courses of action. Good short introductions to this material can be found in Abelson & Nielsen (1967), Frankena (1973), MacIntyre (1966), and Warnock (1978).

To summarize, what ties this array of theories together under the term "noncognitive" is that they are all premised on the belief that moral claims are neither true nor false. Thus, by contrast, the term "cognitive" denotes statements that are either true or false. It is important to notice here that "cognitive" is not in any way synonymous with "psychological" or "mental." In fact, it is not clear that the realm of the "cognitive," under this usage, is even contained within the realm of the "psychological" or "mental." It is more abstract, like "logical" or "rational" or "meaningful" without being necessarily mental. This will seem like a non sequitur to many a psychologist. What could be more psychological than cognition? The non-psychological usage is in line, however, with Gottlob Frege's use of the term "thought," and few thinkers have been more influential on 20th century Anglo-American philosophy than Frege. As leading Frege-scholar Michael Dummett (1978, p. 458) has put it, the goal of philosophy, for Frege, was the analysis of the structure of thought. The study of thought, however, was to be sharply distinguished from the study of the psychological process of thinking. The distinction is like that between Reason and reasoning; the latter only aspires to the former.

3. Philosophy of Science and Cognitivism

This usage of the term "cognitive"-designating that which is truth-evaluable-was passed along from ethics to philosophy of science, perhaps even by Ayer himself. Members of both the Vienna Circle (Rudolf Carnap and Herbert Feigl best known among them) and the Berlin Society for Empirical Philosophy (Carl Hempel and Hans Reichenbach best known among them) talked of meaningful claims as having "cognitive significance," whereas the "meaningless nonsense" (as they saw it) of metaphysics, theology, ethics, and "pseudoscience" did not have such significance. Hempel (1951) put the matter most succinctly:

It is a basic principle of contemporary empiricism that a sentence makes a cognitively significant assertion, and thus can be said to be either true or false [italics added], if and only if either (1) it is analytic [i.e., true by definition] or contradictory-in which case it is said to have purely logical meaning or significance-or else (2) it is capable, at least potentially, of test by experimental evidence-in which case it is said to have empirical meaning or significance. (p. 61)

Notice, again, that "cognitive" in this context carries with it no particularly psychological connotations; it is simply a way of folding both logical and empirical significance into a single term. Logical and scientific claims are "cognitive" only in the sense that they have determinate meaning, and can therefore be assigned truth values, whereas metaphysical claims are not "cognitive" because, according to the empiricist criterion of meaning, they have no determinate meaning. Hempel was by no means alone in these beliefs, or in this usage of the term "cognitive." It was also recognized by Ayer (as we saw above), Carnap (a student of Frege's, incidentally), Reichenbach, and most other major philosophers of science up to Quine (1953), who argued, roughly, that there isn't even analytic meaning; i.e., that sentences are simply either empirical or meaningless.
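
Rendered schematically, Hempel's criterion says that a sentence S is cognitively significant, and hence truth-evaluable, just in case it has either purely logical or empirical significance. The following is a paraphrase of the quoted passage, not Hempel's own notation; the predicate names are expository labels of mine:

```latex
% Schematic paraphrase of Hempel's (1951) empiricist criterion of cognitive
% significance; "CogSig", "Analytic", "Contradictory", and "Testable" are
% labels introduced here for exposition only.
\[
  \mathrm{CogSig}(S) \iff
  \underbrace{\bigl(\mathrm{Analytic}(S) \lor \mathrm{Contradictory}(S)\bigr)}_{\mbox{purely logical significance}}
  \;\lor\;
  \underbrace{\mathrm{Testable}(S)}_{\mbox{empirical significance}}
\]
```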

In this stew of logical positivism and its descendant philosophies of science were steeped the philosophical psychologists of the 1950s and 1960s who would play crucial roles in the development of contemporary cognitive science: U. T. Place (1956), R. M. Chisholm (1963), Wilfrid Sellars (& Chisholm, 1958), D. M. Armstrong (1968), Hilary Putnam (1960/1975), Jerry Fodor (1968, 1975, 1980), etc. When they first began to talk of the "cognitive" there can be little doubt that they meant to separate those aspects of the "mental" that are truth-evaluable-viz., the "propositional attitudes" (e.g., beliefs, desires, etc.)-from those that are not (e.g., emotion, consciousness, qualia,3 imagery).

To understand why this distinction is important, consider the following. Beliefs have a "mind-to-world direction of fit," as Searle (1983, p. 8) likes to say; i.e., a belief is true only to the degree it "corresponds" to the conditions actually obtaining in the world. This property makes beliefs susceptible to analysis by way of logical calculus, a mode of analysis that was central to the logical positivist project in the philosophy of science. Of course, not all propositional attitudes have truth values. Desires, for instance, are neither true nor false, only satisfied or not satisfied. This is because, by contrast with beliefs, they have a "world-to-mind direction of fit"; i.e., the world must conform to what the mind wants in order for a desire to be satisfied. Thus, their conditions of satisfaction-"felicity conditions," as J. L. Austin used to say-are not matters of truth, per se. Nevertheless, they bear a type of "correspondence" with the world that is similar enough to truth conditions that they are susceptible to the same sort of logical analysis as bona fide truth conditions, and that is all that really matters for something to be considered "cognitive" in the sense I am recounting here.

To extend this sort of analysis a little further, it has been argued that other sorts of propositional attitudes can be fruitfully interpreted as combinations of beliefs and desires. Fearing that P (where P is a proposition describing some state of affairs), for instance, is a combination of believing that P will occur in the future and desiring that P not occur. Regretting that P, on the other hand, is a combination of believing that P occurred in the past and desiring that P did not occur. Hoping that P is a combination of believing that P has not yet occurred and desiring that P occur. By adding modals (i.e., possibility and necessity) and other relevant logical operators, this account can be made to capture a wide range of rather subtle distinctions. (See Searle, 1983, pp. 29-36, however, for a critique of this sort of account.)
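
To make the compositional structure of this picture concrete, here is a minimal sketch in present-day programming terms. Everything in it (the class names Belief and Desire, the tense phrases, the helper functions fearing, regretting, and hoping) is an expository invention of mine, assumed for illustration only; it is not drawn from any of the cited accounts. The point is simply that truth conditions attach to the belief component and conditions of satisfaction to the desire component, so composite attitudes inherit a logically analysable structure.

```python
# Illustrative sketch of the belief/desire decomposition of propositional
# attitudes discussed above. All names here are expository inventions,
# not anyone's official formalism.

from dataclasses import dataclass

@dataclass
class Belief:
    # A belief has a mind-to-world direction of fit: it is true or false
    # depending on whether the world is as the proposition says.
    proposition: str

@dataclass
class Desire:
    # A desire has a world-to-mind direction of fit: it is satisfied or
    # unsatisfied, not true or false.
    proposition: str

def fearing(P: str):
    # Fearing that P = believing that P will occur + desiring that P not occur.
    return (Belief(f"{P} will occur"), Desire(f"{P} does not occur"))

def regretting(P: str):
    # Regretting that P = believing that P occurred + desiring that it had not.
    return (Belief(f"{P} occurred"), Desire(f"{P} did not occur"))

def hoping(P: str):
    # Hoping that P = believing that P has not yet occurred + desiring that it occur.
    return (Belief(f"{P} has not yet occurred"), Desire(f"{P} occurs"))

if __name__ == "__main__":
    print(fearing("the experiment fails"))
    print(regretting("the experiment failed"))
    print(hoping("the experiment succeeds"))
```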

By contrast, aspects of mental life that are not propositional attitudes (e.g., the experience of sweetness on biting into a ripe apple, the subjective feeling of sadness at the death of a loved one, the brilliance of colors in a daydream, etc.) are not subject to this sort of logical analysis. They were, however, the key elements in the mentalist psychologies of the late 19th and early 20th centuries (viz., structuralism and some forms of phenomenology). These approaches to psychology, it will be recalled, had difficulties replicating phenomena in different labs-the hallmark of natural science-and became bogged down in seemingly irresolvable disputes about things like whether or not the sensation of green could be mentally analysed into the individual sensations of blue and yellow. It was precisely this sort of ineffability that led, or at least enabled, the behaviorists (of both philosophical and experimental varieties) to reject the mental, tout court, in the 1920s and 1930s in favour of the seemingly more tractable problems of expressed behavior.

In order to avoid a repeat of this debacle, when the term "cognitive" began to be (re-)introduced into the vocabulary of philosophical psychology in the 1960s, only those parts of the mental that could be subjected to rigorous (logical, scientific) analysis-the "cognitive" parts-were welcomed back, while those aspects of the mental out of which the behaviorists could once again make a lot of hay were quietly left behind. So began the contemporary philosophical psychology of beliefs and desires; an important branch of cognitive science.

Of course, consciousness and qualia have been on the agenda of philosophical psychology almost continuously since the 1950s, and this would seem to constitute counter-evidence to my claim that cognitive science, at least inasmuch as it is in part an extension of traditional philosophical psychology, is properly about beliefs and desires, and not about other aspects of the mental that are not susceptible to truth valuation. I would argue, however, that the study of topics such as consciousness and qualia stems from a much older tradition than the contemporary study of propositional attitudes, dating back to well before the rise of behaviorism. Put another way, there is a sense in which the continuing philosophical study of consciousness and qualia is not part of cognitive science proper at all but, rather, a continuation of traditional philosophy of mind. In support of this claim, consider, for instance, how little influence the computational approach to cognition-the hallmark of contemporary cognitive science-has had on these studies. What is more, where computationally-oriented cognitive scientists have attempted to address questions of consciousness or qualia (e.g., Dennett, 1991) or emotions (De Sousa, 1987; Oatley, 1992), they have been more likely to try to explain them away, or to deal primarily with their "functional," "rational," or "informational" character, rather than to grapple directly with their subjective mental qualities. The matter has been put most directly, I think, by Fodor (1992, p. 5): "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness." It is well worth noting, in this regard, that Fodor's views on the prospects for a science of consciousness have not led him to reject psychology as a whole, or to reduce it to a behavioristic shell. As might be expected in light of the above discussion, his primary research interests have lain in the study of the mental instantiation of propositional attitudes, the aspect of the mental that is most patently susceptible to logical analysis. Thus, we see the explicit separation, in the mind of at least one prominent philosophical psychologist, between what is to count as a topic of cognitive science (viz., propositional attitudes), and what, for all its historical importance, is to count as a topic in the philosophy of mind that is outside of cognitive science proper (viz., consciousness).

4. Psychology and Cognitivism

Psychologists have been aware of none of this, for the most part. Although there was some lip service paid to logical positivism in the days of behaviorism, there never really seemed to be a deep understanding of its motives or aims (see, e.g., Green, 1992; Smith, 1986). Cognition simply was not much of a going concern in psychology before the 1950s. The use of the term often simply signaled an affiliation with either Tolman's learning theory or Piaget's theory of intellectual development, though the latter had not yet reached deeply into North American psychology.

The Cumulative Subject Index of Psychological Abstracts for the years 1927-1960 lists 199 works under the subject heading "cognition"; about 1½ columns of citations. By contrast, "behavior" merited 8 columns, "culture" and "cultures" combined for 12, "sex" 19 (plus another 6 for "sex differences"), and "learning" 30. The 1½ columns of citations under "cognition" put it on about the same footing as "sports" for the years 1927-1960. Of the 199 "cognition" citations, only 42 are from the period 1927-1949. The subheadings indicate that the topics of these articles ranged from animal learning to color perception to intelligence testing to musical ability to Thomistic theory to psychosis; quite a motley collection. The other 157 are from 1950-1960. A note under the "cognition" heading says "see also logic; reasoning; thinking." "Logic" and "reasoning" received about a column each of citations. "Thinking," on the other hand, headed 7 full columns of citations. Clearly, then, "thinking" was the preferred term during the first half of the 20th century. Consider, just for instance, the titles of Bartlett's (1958), Wertheimer's (1945), and Bruner's (Bruner, Goodnow, & Austin, 1956) influential books.

In addition to the 199 "cognition" citations there are 8 citations under "cognitive balance" and 3 under "cognitive dissonance," all from the period 1950-1960. It is these last 11 that give the best indication of where the word was to be most influential in psychology: not in the study of thinking and memory, as might now be expected, but in social psychology. Cognition was becoming an important topic in Festinger's (1957) theory of cognitive dissonance and Heider's (1958) theory of cognitive balance, as well as in Asch's (1952) social psychology text. The exact meaning of cognitive varied somewhat among these theorists.

Asch (1952) never explicitly defined what he meant by "cognitive," but he wrote of the "cognitive foundation" and "cognitive basis" of attitudes (pp. 563-565), and even of emotions (pp. 109-113), by which he seems to have meant the belief systems that support them. This seems, at least superficially, pretty much in line with philosophical usage, but he also extends "cognitive functions" to include perception (pp. 128-129). Naturally enough, Asch never really provides an analysis of the mechanisms of cognition per se, because all mention of cognition on his part is in the service of explaining the relationship between the individual and society (e.g., how does society affect one's cognitions? how do one's cognitions affect one's interactions with society?), not the ground-level workings of cognition itself (e.g., just what is a mental representation, of society or of anything else? what is the actual mechanism, as opposed to just the external determinants, by which such a representation is transformed?). The issue of truth-evaluability, although perhaps implicit in Asch's thought about cognition, was never thought by him to be a strict criterion of the cognitive.

Festinger (1957) was only slightly more rigorous in his formulation of the cognitive than Asch, but still far from completely clear. After noting that "'dissonance' and 'consonance' refer to relations which exist between pairs of 'elements,'" he went on to write, somewhat enigmatically, that "these elements refer [italics added] to what has been called cognition, that is, the things a person knows4 about himself, about his behavior, and about his surroundings. These elements, then, are 'knowledges,' if I may coin the plural form of the word" (p. 9). It would seem, then, that he did not mean, in fact, that the elements "refer" to cognition but are, rather, elements of cognition itself, which consist of "knowledges," by which we can assume he means individual items of knowledge. It is important to note that he leaves open the question of whether or not all cognition is knowledge; knowledge is but one, perhaps among other kinds, of cognition. Is his definition of knowledge a propositional, and therefore truth-evaluable, one? Exactly what is to be included under the rubric of knowledge is made no clearer than exactly what is to be included under that of cognition. If knowledge were thought to be propositional, then it stands to reason that dissonance would be primarily a matter of logical inconsistency. Festinger wrote, however, that "two elements are dissonant if, for one reason or another, they do not fit together [italics added]. They may be inconsistent or contradictory, culture or group standards may dictate that they do not fit, and so on" (pp. 12-13). Whereas Festinger was merely agnostic on the question of whether all cognition is knowledge, he seems to have been deeply ambivalent on the question of whether knowledge respects only logical constraints. At first he leaves the way open to failures of "fit" other than logical inconsistency when he says "for one reason or another," and when he includes "culture or group standards" as a possible reason for dissonance. But then he reveals his formal definition of dissonance: "two elements are in a dissonant relation if, considering these two alone, the obverse [italics added5] of one element would follow from the other" (p. 13). Of course, "obverse" is not a logical term, but it bears a strong metaphorical relation to "negation"; an interpretation that would make the relation of dissonance a purely logical one, his prior apparent resistance to such a strict reading notwithstanding. He then abandons the flexibility for which he seems to have so carefully paved the way by stating baldly, "x and y are dissonant if not-x follows from y" (p. 13). To make matters fuzzier still, however, the examples he gives-viz., of a person feeling fear when only friends are nearby, and of buying a car when one is in debt-are not examples of strict logical inconsistency. In the end, it seems that the issue of the truth-evaluability of cognitions was simply not salient to Festinger, though it may have lurked in the shadows of his thought.
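
To see what a strictly logical reading of Festinger's formal definition would look like, consider the following sketch. It treats cognitive "elements" as propositional formulas and dissonance as the entailment of one element's negation by the other; the function names and the brute-force truth-table check are illustrative assumptions of mine, not Festinger's own formalism. On this strict reading, his own car-and-debt example comes out as consistent, which is exactly the gap between his formal definition and his examples noted above.

```python
# An illustrative, strictly logical reading of Festinger's formal definition:
# elements x and y are dissonant if not-x follows from y. Here cognitive
# "elements" are modelled as propositional formulas over named atoms, and
# "follows from" is classical entailment checked by brute-force truth tables.
# This is an expository sketch, not Festinger's formalism.

from itertools import product

def entails(premise, conclusion, atoms):
    """True if every assignment satisfying `premise` also satisfies `conclusion`."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if premise(env) and not conclusion(env):
            return False
    return True

def dissonant(x, y, atoms):
    # Strict reading: x and y are dissonant if not-x follows from y.
    return entails(y, lambda env: not x(env), atoms)

# Festinger-style example: "I am in debt" (d) and "I just bought a car" (c).
def in_debt(env): return env["d"]
def bought_car(env): return env["c"]
def not_in_debt(env): return not env["d"]

atoms = ["d", "c"]
# Not dissonant on the strict reading: not-d does not logically follow from c.
print(dissonant(in_debt, bought_car, atoms))   # False
# Dissonant on the strict reading: not-d follows (trivially) from not-d.
print(dissonant(in_debt, not_in_debt, atoms))  # True
```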

Heider (1958) poses even more of a puzzle. The word "cognition" does not even appear in the index of his book. In several of the places that it appears in the main text, however, it is synonymous with "consciousness" or "awareness": "cognition of can" (as in "being able," p. 86), "cognition of trying" (p. 114), and "cognition of desire and pleasure" (p. 135). By contrast with this "old-fashioned" (Baars, 1986, p. 158) usage, however, the chapter on "sentiments" focuses on "cognitive organization," by which he meant the semantic relations of beliefs to each other. Sets of beliefs were said to be "in balance" if they were consistent with each other. What is more, relatively strict logical consistency seemed the ideal as well. In fact, the notation he used for representing cognitive organization was taken directly from the formal logical notations of the time (compare it, for instance, with the notation used by Hintikka, 1962, for his famous epistemic logic). Unfortunately, Heider never got much beyond the level of logical notation. He failed to follow through on the idea that beliefs, their relations, and even thought processes themselves might be powerfully represented in the form of a complex logical system.

The attitudes toward cognition represented by Asch, Festinger, and Heider continued on in social psychology. Work on "cognitive styles" in the 1970s (e.g., Messick, 1976) and on "social cognition" in the 1980s might be regarded as almost direct descendants. This tradition had little impact on the development of what is now known as cognitive science, as the latter developed and grew in the 1970s and 1980s. If anything, it confused psychologists who expected from the burgeoning cognitive science something that resembled social psychology's idea of cognition. What they got was far different.

At about the same time, another research tradition was forming in psychology that would coalesce around the term "cognitive." This was the information processing approach advanced by people like George Miller (1956a, 1956b; Miller, Galanter, & Pribram, 1960) in the late 1950s and early 1960s. There was a strong link between this work and that being done in linguistics by (the philosophically-informed) Chomsky (1957, 1965), and in computer science by people like (the logically-informed) Newell and Simon (1956, 1963). One might have expected Miller to adopt the technical usage employed by these people, but the full philosophical force of the distillation of the truly cognitive from the broadly mental does not seem to have caught on even among psychologists as technically sophisticated as Miller. Rather, it seems, "cognitive" was adopted primarily as a trendy new way of saying "mental." Miller himself recalls, "I don't think anyone was intentionally excluding 'volition' or 'conation' or 'emotion.' I think they were just reaching back for common sense. In using the word 'cognition' we were setting ourselves off from behaviorism. We wanted something mental-but 'mental psychology' seemed terribly redundant.... We chose 'cognitive'" (Baars, 1986, p. 210).

Thus, Miller's (1962) psychology text, subtitled "the science of mental life," seems to have signaled more a desire to return to the more common-sense understanding of psychology characteristic of William James than an understanding of the cognitive revolution rising in philosophical psychology, computer science, and linguistics. The term "cognitive" receives only two citations in Miller's index: one to a glossary entry, where it is defined as "pertaining to the various psychological processes involved in knowing" (p. 346); the other to a single paragraph on "cognitive transitions," by which, he says, "are meant changes in the child's manner of knowing-knowing about himself and the world in which he lives" (p. 298). The exact nature of "knowledge" is not elaborated upon. Thus, as with the social psychologists before him, Miller adopts a general view of cognition as pertaining to knowledge, without recognizing either the crucial distinction between knowledge and belief (see n. 4, above), or the theoretically important criterion of truth-evaluability.

The very loose understanding of the term, "cognitive" spread to the first generation of cognitive psychology textbooks as well. Neisser's (1967) definition of cognition, for instance, included "sensation, perception, imagery, retention, recall, problem-solving, and thinking.... cognition is involved in everything a human being might possibly do" (p. 4). Such a broad concept, however, permits of few of the conceptual constraints that make for good theory-building. True to his word, Neisser's book included chapters on perception, pattern recognition, attention, and imagery, in addition to ones on more strictly cognitive topics such as memory, reasoning, and various verbal processes. This served as the model for future cognitive psychology texts. To the present day, they include this broad array of topics, plus others. The guiding motto seems to be "more is better."

Whereas the strict use of "cognitive" was intended to keep the behaviorist criticisms of early-century mentalism at bay, its broad use in psychology has returned us to precisely those criticisms. Perhaps not surprisingly, those who resisted the reappearance of any aspect of the mental (under any name) in psychology resisted the introduction of "cognitive" as well. Many of them heard something suspiciously akin to Descartes' "cogito" and argued that "cognitive science" and Cartesian dualism were one and the same (see Skinner, 1977, 1987, 1989). This is not to say that Skinner and his ilk were in favor of the strict usage. As with most other psychologists, the history of the term was, and remains, largely opaque to them as well.

All this is not to claim that psychologists, and other cognitive scientists, should not study aspects of the mental that do not fall under the traditional philosophical definition of cognitive. Psychologists, and everyone else, are free to study whatever aspect of the mental they want. It is interesting to note, however, that it is precisely those aspects of the mental that are not cognitive in the strict sense that have proven most recalcitrant to successful scientific investigation. Thus, there is an important sense in which the problems of consciousness, emotion, imagery, etc. are not, strictly speaking, problems of cognitivism, per se. They are problems of a mentalism that extends beyond the bounds of a strict cognitivism.

Put in a nutshell, contrary to the supposition of many who have opposed it, the rise of cognitivism has not been, nor was it ever intended to be, a wholesale return to the mentalism of the past. Such a return would have revived the very problems that led to the behavioristic revolution in the first place, just as the opponents of cognitivism have claimed. Instead, cognitivism was an answer to the problem: how can we introduce (at least part of) the mental back into scientific psychology while not falling prey to the criticisms that brought down the mentalism of old and led us to behaviorism? In other words, how can we have our mental cake and eat it too? The answer that cognitivism has provided, borrowed as it was from philosophy of science, and from ethics before that, is that as long as the aspects of the mental that are revived are restricted to those that are susceptible to truth-evaluation-which brings with it an extensive logical-analytic apparatus-then the behaviorists' criticisms of mentalism will be stayed.

Of course, not everyone will agree that cognitivism has lived up to its promise. Skinner, for instance, to the degree that he understood the promise, rejected it. Others will argue that even if this is the original aim of cognitivism, it has since moved beyond its origins. Nevertheless, it is important to understand what the motivations for the cognitive revolution, as it developed outside the bounds of university psychology departments, were-exactly which problems it was thought to solve and which problems it was thought to simply leave behind-especially since we seem, once again, to be running into difficulties on the very issues that led us to abandon mentalism in favor of behaviorism, now almost a century ago.

References

Abelson, R. & Nielsen, K. (1967). Ethics, history of. In P. Edwards (Ed.), The encyclopedia of philosophy (Vol. 3, pp. 81-117). New York: Macmillan & Free Press.

Armstrong, D. M. (1968). A materialist theory of mind. London: Routledge & Kegan Paul.

Asch, S. E. (1952). Social psychology. Englewood Cliffs, NJ: Prentice-Hall.

Ayer, A. J. (1936). Language, truth and logic. Harmondsworth, Eng.: Penguin.

Baars, B. J. (1986). The cognitive revolution in psychology. New York: Guilford.

Bartlett, F. (1958). Thinking: An experimental and social study. London: Allen & Unwin.

Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York: John Wiley & Sons.

Chisholm, R. M. (1963). Notes on the logic of believing. Philosophy and Phenomenological Research, 24, 195-201.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.

Chomsky, N. (1959). Review of Skinner's Verbal behavior. Language, 35, 26-58.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Craik, F. I. M. (1991). Will cognitivism bury experimental psychology? Canadian Psychology, 32, 440-443.

Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown.

De Sousa, R. (1987). The rationality of emotion. Cambridge, MA: MIT Press.

Dummett, M. (1978). Truth and other enigmas. London: Duckworth.

Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.

Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. New York: Random House.

Fodor, J. A. (1975). The Language of Thought. Cambridge, MA: MIT Press.

Fodor, J. A. (1980). Representations: Essays on the foundations of cognitive science. Cambridge, MA: MIT Press.

Fodor, J. A. (1992, July 3). The big idea: Can there be a science of mind? Times Literary Supplement, pp. 5-7.

Frankena, W. K. (1973). Ethics (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Green, C. D. (1992). Of immortal mythological beasts: Operationism in psychology. Theory and Psychology, 2, 291-320.

Green, C. D. (1994). Cognitivism: Whose party is it anyway? Canadian Psychology, 35, 112-123.

Hägerström, A. (1911). Om moraliska föreställningars sanning [On the truth of moral propositions]. Uppsala: Uppsala University.

Hägerström, A. (1917). Til Frågan om den Gällande Rättens Begrepp [On the question of the concept of valid law]. Uppsala: Uppsala University.

Hare, R. M. (1952). The language of morals. Oxford: Oxford University Press.

Hare, R. M. (1963). Freedom and reason. Oxford: Oxford University Press.

Heider, F. (1958). The psychology of interpersonal relations. New York: John Wiley & Sons.

Hempel, C. G. (1951). The concept of cognitive significance: A reconsideration. Dædalus: Proceedings of the American Academy of Arts and Sciences, 80, 61-77.

Hintikka, J. (1962). Knowledge and belief: An introduction to the logic of the two notions. Ithaca, NY: Cornell University Press.

Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32, 127-136.

MacIntyre, A. (1966). A short history of ethics: A history of moral philosophy from the Homeric age to the twentieth century. New York: Macmillan.

Messick, S. (1976). Personality difference in cognition and creativity. In S. Messick (Ed.), Individuality in learning. London: Jossey-Bass.

Miller, G. A. (1956a). Human memory and the storage of information. IRE Transactions on Information Theory, IT-2(3), 128-137.

Miller, G. A. (1956b). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Miller, G. A. (1962). Psychology: The science of mental life. Harmondsworth, Eng.: Penguin.

Moore, G. E. (1903). Principia ethica. Cambridge: Cambridge University Press.

Neisser, U. (1967). Cognitive psychology. Englewood Cliffs, NJ: Prentice-Hall.

Newell, A. & Simon, H. A. (1956). The Logic Theory Machine: A complex information processing system. IRE Transactions on Information Theory, IT-2(3), 61-79.

Newell, A. & Simon H. A. (1963). General Problem Solver, a program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279-293). New York: McGraw-Hill.

Oatley, K. (1992). Best laid schemes: The psychology of emotions. Cambridge: Cambridge University Press.

Place, U. T. (1956). Is consciousness a brain process? British Journal of Psychology, 47, 44-50.

Putnam, H. (1975). Minds and machines. In Mind, language, and reality (pp. 362-385). Cambridge, Eng.: Cambridge University Press. (Original work published 1960)

Quine, W. V. (1953). Two dogmas of empiricism. In From a logical point of view (pp. 20-46). Cambridge, MA: Harvard University Press.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.

Sellars, W. & Chisholm, R. M. (1958). Intentionality and the mental. In H. Feigl, M. Scriven, & G. Maxwell (Eds.), Minnesota studies in the philosophy of science: Concepts, theories, and the mind-body problem (pp. 507-539). Minneapolis: University of Minnesota Press.

Skinner, B. F. (1977). Why I am not a cognitive psychologist. Behaviorism, 5, 1-11.

Skinner, B. F. (1987). Whatever happened to psychology as the science of behavior? American Psychologist, 42, 780-786.

Skinner, B. F. (1989). The origins of cognitive thought. American Psychologist, 44, 13-18.

Smith, L. D. (1986). Behaviorism and logical positivism: A reassessment of the alliance. Stanford, CA: Stanford University Press.

Stevenson, C. L. (1937). The emotive meaning of ethical terms. Mind, 46, 14-31.

Stevenson, C. L. (1938a). Ethical judgments and avoidability. Mind, 47, 45-57.

Stevenson, C. L. (1938b). Persuasive definitions. Mind, 47, 331-350.

Stevenson, C. L. (1944). Ethics and language. New Haven, CT: Yale University Press.

Tolman, E. C. (1932). Purposive behavior in animals and men. New York: Century.

Urmson, J. O. (1950). On grading. Mind, 59, 145-169.

Warnock, M. (1978). Ethics since 1900 (3rd ed.). Oxford: Oxford University Press.

Wertheimer, M. (1945). Productive thinking. New York: Harper & Brothers.

Wittgenstein, L. (1958). Philosophical investigations. Oxford: Basil Blackwell. (Original work published 1953)

Author Note

This research was conducted with the assistance of a York University Faculty of Arts Research Grant, a York University President's Natural Sciences and Engineering Research Council Research Grant, and a Natural Sciences and Engineering Research Council Individual Research Grant. Reprints of this article can be obtained from the author at the Department of Psychology, York University, North York, Ontario, M3J 1P3, CANADA.

Footnotes

1An anonymous reviewer of an earlier draft of this paper argued that Neisser was dubious of information processing; that the justification for studying cognitive processes was simply that they are there. Superficially this might be true, but what information processing did was give us a rigorous way of studying such processes. By contrast, information processing did not give us a rigorous way of studying consciousness and, thus, though it is admitted by almost everyone to "be there," its study has not been very successful and consciousness research is still looked upon with suspicion by many.

2Thomas Carlyle (1795-1881) believed that history is driven forward by strong, heroic leaders, such as Julius Caesar and Napoleon, rather than being primarily the product of social forces over which individuals have little control. Carlyle's view of history is currently decidedly out of vogue among most professional historians.

3Qualia are the subjective experiences of, for instance, pain, or of seeing color, or even of being a human being (as opposed to some other sort of animal). They are sometimes called the "raw feels" of experience. In a classic paper, Jackson (1982) elucidates the idea of qualia by having the reader imagine someone (named "Mary") who is never shown anything red, but taught all of the behavioural information associated with red (e.g., associations with heat, with activity, etc.). After this training, he asks, is there still a gap in her knowledge of red? Most people answer "yes" because she has never had the direct experience of red; she does not know the qualia that, at least in part, constitute the basic experience of red.

4The failure to distinguish between knowledge and belief is common in the psychological literature. Since psychologists are rarely interested in explicating the relation that must hold between a belief and the world in order to make the belief an item of knowledge (at least according to "correspondence theories" of knowledge), this oversight is usually harmless enough. In the context of examining the relationship between psychologists' use of the word "cognitive" and the stricter philosophical usage, however, this failure serves to highlight the degree of conceptual confusion under which psychologists can often be found to labour. This is not an opinion by any means idiosyncratic to me. Philosophers as different in their general outlooks as Wittgenstein (1953/1958, p. 232) and Fodor (1968, p. vii) have noted the problem.

5Festinger actually italicizes this entire sentence. I have rendered it in normal typeface in order to emphasize "obverse."