Simulation and Self-knowledge
1. Introduction
In this chapter I shall be attempting to curb the pretensions of simulationism. I shall argue that it is, at best, an epistemological doctrine of limited scope. It may explain how we go about attributing beliefs and desires to others, and perhaps to ourselves, in some cases. But simulation cannot provide the fundamental basis of our conception of, or knowledge of, minded agency.
Let me begin by pinning my colours to the mast: I am a theory-theorist. I believe that our understanding of mentalistic notions — of belief, desire, perception, intention, and the rest — is largely given by the positions those notions occupy within a folk-psychological theory of the structure and functioning of the mind. To understand one of these notions is to know — at least implicitly — sufficiently much of the corpus of folk-psychology, and to know the role within that theory of the notion in question. I also maintain that children's developing competence with these mentalistic notions involves them in moving through a series of progressively more sophisticated theories — for example, moving from desire-perception theory, through a copy-theory of belief, to full-blown, intentionalistic, belief-desire theory (see Wellman, 1990).
This theory-theory approach is definitely to be preferred, in my view, both to various forms of Cartesianism and neo-Cartesianism on the one hand, and to behaviourist and quasi-behaviourist accounts of our conception of the mental on the other — preserving for us the realism of the former without the first-person primacy, and something of the essential potential publicity of the latter without its associated anti-realism. As we shall see later, any radical form of simulationism is in danger of slipping into Cartesianism in the one direction, or into some form of quasi-behaviourism in the other.
I also believe — to pin my colours to another mast — that at least the core of this folk-psychological theory is given innately, rather than acquired through a process of theorising, or learning of any sort. It makes its appearance in the individual through a process of ontogenetic development (though perhaps also requiring triggering experiences of particular sorts). And the different mentalistic theories that young children entertain should be thought of as different stages in the maturation of their theory-of-mind faculty, perhaps corresponding to, and replicating, the different stages in the history of its evolution in the human species (see Segal, this volume).
I favour such a nativistic theory-theory because if, firstly, young children are pictured as little scientists, constructing a mentalistic theory as the best explanation of the data (which data? — even action-descriptions presuppose folk-psychology!), then it beggars belief that they should all hit upon the same theory, and at the same tender age too (at about the age of four, in fact). But if, secondly, the theory is supposed to be learned by the child from adult practitioners, then it is puzzling how this can take place without any explicit teaching or training; and also how the theory itself, as a cultural construct, could remain invariant across cultures and historical eras. In contrast, the suggestion that folk-psychology might be innate is not at all implausible, given the crucial role that it plays in facilitating communication and social co-operation in highly social creatures such as ourselves. The nativist hypothesis also coheres well with what we know about the development of social competence in our nearest cousins, the apes (see Byrne and Whiten, 1988); and with what we know about the absence of mentalistic abilities in the case of people with autism (see Baron-Cohen, 1989a; Leslie, 1991; Carruthers, this volume).
This nativistic version of theory-theory is not one that I need to depend upon here, however, except insofar as it may be necessary to remove one support for simulationism. For the picture of two- and three-year-old children as little scientists constructing their theories through a process of data collection, hypothesis formation, and testing, is otherwise apt to seem extravagant, even if they are partly guided in this process by the implicit theory-deployments of the adults around them. (See Goldman, 1989, pp.167-8, and Gordon, 1986, p.170, where these arguments are made much of in support of simulation-theory; see also Gopnik and Wellman, 1992, pp.167-8, for a non-nativist reply.) There is some reason to think, indeed, that the hypothesis of child-as-scientist is not only extravagant, but close to incoherent. For recall that it is agreed on all hands by developmental psychologists that children do not acquire the concept of false belief until sometime in their fourth year. Then how could the child-scientist possibly realise that their previous mentalistic theory was false or inadequate, and hence replace or modify it, prior to acquiring such a concept? It is difficult to understand how anyone — whether child or adult — could operate as a scientist, who did not yet possess the concept of false belief.
2. Simulation within a theory
Now how, on the theory-theoretic account, does one set about attributing beliefs, desires, and intentions to others? Partly, and most fundamentally, through deploying one's theoretical knowledge. It is in virtue of knowing such things as: the relationship between line of vision, attention, and perception; between perception, background knowledge, and belief; between belief, desire, and intention; and between perception, intention, and action; that one is able to predict and explain the actions of others, on this account. One may also deploy theoretical knowledge of the distinctive manner in which propositional attitudes will interact with one another in processes of reasoning in virtue of their form, deploying such principles as: that someone who wants it to be the case that Q, and believes that if P then Q, and believes that P is within their power, will, other things being equal, form an intention to cause it to be the case that P; that someone who has formed an intention to bring it about that P when R, and who believes that R, will then act so as to bring it about that P; that someone who believes that all Fs are Gs, and who comes to believe that F-of-a, will also believe that G-of-a; and so on. But it is also very plausible that this theoretical knowledge may be supplemented, on occasion, by a process of simulation.
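The three formal principles just listed can be displayed schematically. The rendering below is merely illustrative shorthand, not part of folk-psychology itself: 'Des', 'Bel', and 'Int' abbreviate 'desires that', 'believes that', and 'intends that', and the ceteris paribus qualifications are left implicit throughout.

```latex
% Schematic rendering of the three reasoning principles above;
% ceteris paribus clauses are left implicit.
\begin{align*}
&\mathrm{Des}(Q) \;\wedge\; \mathrm{Bel}(P \rightarrow Q)
   \;\wedge\; \mathrm{Bel}(\text{$P$ is within one's power})
   \;\Longrightarrow\; \mathrm{Int}(P)\\[2pt]
&\mathrm{Int}(P\ \text{when}\ R) \;\wedge\; \mathrm{Bel}(R)
   \;\Longrightarrow\; \text{acting so as to bring it about that}\ P\\[2pt]
&\mathrm{Bel}(\text{all $F$s are $G$s}) \;\wedge\; \mathrm{Bel}(Fa)
   \;\Longrightarrow\; \mathrm{Bel}(Ga)
\end{align*}
```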
General theoretical knowledge — that is, the sort of non-content-specific knowledge that might very plausibly be held to be innately given — is all very well as a framework, but plainly needs to be supplemented in some way if one is to be able to provide fine-grained intentionalistic predictions and explanations. There appear to be only two options here: either to supplement one's initial folk-psychological theory with a whole lot of further more specific theoretical knowledge, concerning what people with particular beliefs and desires may be inclined to do or think; or to simulate, using the inferential connections amongst one's own contentful states to derive a prediction for, or explanation of, the other. It may be that both of these options are realised in us, to some degree. But there are arguments to suggest that we do — perhaps must — use simulation sometimes, at least.
One's grasp of the immediate inferential connections entered into by someone's beliefs and desires will sometimes be crucial in the attempt to provide predictions and explanations of either their mental states or their behaviour. Thus, for example, if I attribute to someone the belief that a particular brick is cubic, then I should predict that they will be surprised if it should turn out to look oblong when viewed from another angle. Now, suppose that something along the lines of the theory of concepts proposed by Peacocke (1986, 1991) is correct in this regard — that is, suppose that the conditions for possessing any given concept may include a set of canonical grounds for, and/or a set of canonical commitments of, thoughts containing it. So a condition for possessing the concept cubic will be, for example, that if one is prepared to judge, of a particular perceptually-presented object, 'That is cubic', then one will be primitively disposed to accept that the object will continue to appear as such when viewed from any other angle in normal conditions. (See Peacocke, 1986, p.15ff.) It will be, then, precisely this feature of the possession-conditions of cubic that underlies the prediction made earlier, concerning the conditions under which the subject will show surprise. And so on for many other concepts.
The upshot is that in order to predict what someone who entertains a thought containing a concept such as cubic will do or think, I shall have to predict the inferential role of that concept. I could do this by deploying a portion of what would be an extensive theory of concepts, whose clauses would severally specify the possession-conditions for the full range of concepts available. But it is immensely implausible that I should ever have had the opportunity to learn such a theory, and even more implausible that it should be innate (remember, many of these concepts, about whose possession-conditions I would be supposed to have a theory, would definitely not be innate, including such concepts as cow, horse, and car). There is, moreover, an easy alternative — I can simulate the role of the concept in the mental life of the other by relying on my grasp of that same concept, inserting thoughts containing it into my reasoning systems, in order to see what I should then be disposed to do or think as a result.
The sort of limited role for simulation sketched above is something that a theory-theorist should have no principled objection to, in my view. (As I understand it, Heal defends a limited form of simulationism of this sort — see her 1986, 1994, and this volume; as does Wellman, 1990.) Such a proposal leaves in place the fundamental, and defining, framework of theory beloved of theory-theorists, while allowing simulation a role in generating fine-grained predictions and explanations of the thoughts, feelings, and actions of other people. On such a view it will be, at least partly, an empirical matter just which sorts of mental-state attributions employ simulation, and which operate purely theoretically — indeed, a good many of the empirical debates in the literature can be seen as directed at this question. My quarrel with simulationism only begins when the latter attempts to usurp the role accorded to theory in the account sketched above. My particular focus will be on the treatments that the various accounts can give of self-knowledge — of the knowledge that one has of one's own mental states. I shall begin by outlining what I take to be a plausible theory-theoretic account of the matter, before criticising the two main simulationist alternatives.
3. Self-knowledge as theory-laden recognition
What account is the theory-theorist to provide of our knowledge of our own mental states? In particular, must such a theorist be committed to the implausible view that we know of our own mental states just as we know of the mental states of other people — by means of an inference to the best explanation of the (behavioural) data, operated within the framework of a folk-psychological theory? Most theory-theorists have not thought it necessary to travel this route, maintaining, rather, that self-knowledge should be thought of as analogous to the theory-laden perception of theoretical entities in science. Just as a physicist can (in context, and given a background of theoretical knowledge) sometimes see that electrons are being emitted by the substance under study; and just as a diagnostician can see a particular sort of tumour in the blur of an X-ray photograph; so, too, can we each of us sometimes see (that is, know intuitively and non-inferentially) that we are in a state accorded such-and-such a role by folk-psychological theory.
In saying that our beliefs about our own mental states are arrived at non-inferentially, of course I only mean to exclude the intervention of conscious, person-level, inferences. For it is highly plausible to claim that all perception is inferential, at some level. Indeed, it may be that the above analogy proves to be somewhat less than perfect when one considers the kinds of sub-personal inferences that might be involved. In particular, in the scientific case the perception of an electron, or of a tumour, will presumably be mediated by sub-personal inferences that somehow access and deploy the conscious theoretical knowledge of the person in question. But this may not always be so in the case of knowledge of our own mental states, since much of folk-psychology may be only implicitly, not consciously, known.
Thus, it may be part of the normal functioning of the mind that a mental state, M, if conscious, will automatically give rise to the belief that I have M, without all the principles of folk-psychology that play a part in generating that belief being accessible to me. But still, what I will recognise when I recognise that I have M is a state having a particular folk-psychological characterisation. Although the process of acquiring self-knowledge may involve theories, or aspects of theories, that are only implicitly known by the subject, still the upshot of that process — the knowledge that I am in M — will nevertheless be theory-involving. On the theory-theory account, what I recognise my own mental states as, are states having a particular folk-psychological role, even if I am unable to provide, consciously, a complete characterisation of that role.
I have explained how a theory-theorist can claim that we have knowledge of our own mental states 'immediately', without conscious inference (and in the normal case, surely, without any sort of inference from observations of our own behaviour). But my view is that such an account should only be endorsed for self-knowledge of occurrent mental states, such as pains, perceptions, acts of wondering whether, forming an intention to, and judging that. In particular, it does not apply to standing states such as beliefs, desires, and long-term intentions. Rather, our knowledge of such standing states is normally achieved by activating them into corresponding occurrent events, to which one does then have immediate access. So what enables me to have knowledge that I believe that P is not, in the first instance, that my state is reliably disposed to give rise to the belief that I believe that P. Rather, it is that my belief is apt to emerge in an occurrent judgement that P, where such a judgement is an event of which I will have immediate non-inferential knowledge.
The suggestion above is supported by the intuitive epistemology of self-attributions of belief, desire, and intention, noted by a number of writers (see, for example, Evans, 1982, p.225; Gordon, 1994). Thus, what I do when I attempt to determine whether or not I believe that world deforestation will be disastrous is ask myself, first, the first-order question, 'Is it the case that world deforestation will be disastrous?'; if I find myself inclined to answer, 'Yes', then I make a semantic ascent, and am prepared on that basis to issue the second-order statement, 'I believe that world deforestation will be disastrous'. Thus the primary way in which I have knowledge of my own standing-state beliefs is through the manner in which those beliefs are apt to emerge in occurrent judgements with the same content, and it is only the latter of which I should be said to have quasi-perceptual knowledge.
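The ascent routine just described can be set out as a simple two-step schema — again purely as an illustration of the intuitive epistemology in question, not as a hypothesis about underlying processing.

```latex
% The intuitive ascent routine for self-attribution of belief.
\begin{align*}
\text{Step 1 (first-order):}\quad
  & \text{ask oneself, `Is it the case that $P$?',}\\
  & \text{and find oneself inclined to answer `Yes'}\\[2pt]
\text{Step 2 (semantic ascent):}\quad
  & \text{on that basis, assert `I believe that $P$'}
\end{align*}
```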
The above point is important for a number of reasons, not least the way in which it enables us to handle self-mis-attributions of belief, desire, or intention. It is well-known that in a variety of situations subjects will confabulate false explanations of their own behaviour. (See Nisbett and Ross, 1980.) For example, if asked to choose from what is, in fact, an identical array of objects, subjects will consistently choose from the right hand side; but if asked to explain their choice they will claim that the object in question was more attractive, or a brighter colour, or something of the sort. I hypothesise that in all such cases the reasoning that leads to action is not conscious — that is, in the present context, it consists in occurrent events that are not apt to emerge in self-knowledge of their own existence. So what one does in such cases, knowing that there must be some explanation for one's action, is to construct an explanation post hoc, either by deploying relevant theoretical knowledge, or by using a simulation strategy. This enables the explanation to be wildly at variance with the facts, just as it can be in our attempted explanations of the behaviour of others. It will only be in those cases where the reasoning that leads to action consists in occurrent conscious thoughts, that one would expect a much higher degree of reliability in self-attribution, according to the account sketched here.
There is one important problem with the account still remaining, however. For I have conceded that one may have to use simulation to discover the causal roles of some particular beliefs and desires (and also, by implication, the occurrent counterparts thereof). So when I have self-knowledge of the occurrence of these events, what am I aware of them as? Since, by hypothesis, the background folk-psychological theory is not sufficient to individuate the particular events in question (at least as objects of knowledge), quite what, then, does individuate them? Here I suggest that it is the linguistic form of those events. Our conscious occurrent judgements may mostly consist in deployments of imaged sentences, generally the very same sentences that one would use to express those judgements aloud. So self-knowledge of the judgement is mediated by self-knowledge of the occurrence of the vehicle of the judgement, namely an imaged sentence. (For more detailed development and defence of this view, see my 1996.) Then when the thought occurs to me that world deforestation will be disastrous, I shall immediately know what I have just thought because I can recall — and hence reliably report and reproduce — the very form of words that my thought employed; namely, 'World deforestation will be disastrous'.
I have sketched my preferred theory-theoretic account of self-knowledge of mental states, which models the latter on the case of theory-dependent perceptual knowledge. This account is, I think, an attractive one, having both intrinsic plausibility and substantial explanatory power. I shall now contrast that account with two different forms of simulation-theory — each of which is more ambitious than the kind of limited simulationism discussed above — due to Goldman and Harris on the one hand, and Gordon on the other. Each of these runs into insuperable trouble, I shall argue, in its treatment of self-knowledge in particular.
4. Simulationism and the priority of introspection — Goldman and Harris
The versions of simulationism developed by Goldman (1989, 1992, 1993) and Harris (1989, 1992) are ambitious — and distinctively anti-theoretical — in the sense that they purport to provide the very basis for the child's ability to ascribe mental states to other people; but they take self-knowledge of mental states for granted. On this view, when I explain or predict the behaviour of another person, I pretend to adopt some beliefs or desires which issue, as a result of the normal operation of my practical reasoning processes, in a pretend intention to perform the action to be explained or predicted. But for this to work I have to be able to recognise, in my own case, the beliefs, desires, and intentions in question (or at least the pretend versions of them). Having access to my own mental states, I use simulation to ascribe mental states to others.
What kind of access am I supposed to have to my own mental states, on this account? In particular, when I recognise in myself a given mental state, M, what do I recognise it as? Not, presumably, as a state normally occupying a particular causal role, or satisfying a certain theoretical description, since this would then be just another version of theory-theory. As what, then? It is hard to see how there can be any alternative but to say: as a particular distinctive feeling, or quale. On such a view, the basic mentalistic concepts, instances of which serve as inputs to the process of simulation, are purely recognitional ones. I begin by distinguishing between one type of mental state and another purely on the basis of their intrinsic, subjectively accessible, qualities. I may then later, through observing regularities in my subjective life, and through success in simulating the mental lives of others, come to have a body of theoretical knowledge about these states. But such knowledge is derivative, not fundamental.
The above statement of this form of simulation-theory fits Goldman's (1993) characterisation of his position very well, and is at least consistent with Harris's somewhat less explicit presentation of the nature of self-knowledge (see his 1989, pp.54-7). However, Goldman once attempted to avoid the conclusion that mentalistic concepts must be grounded in capacities to recognise subjective qualities. He wrote as follows:
'If the simulation-theory is right, however, it looks as if the main elements of the grasp of mental concepts must be located in the first-person sphere. Is this objectionable? We should recall, first, how problematic purely third-person accounts of the mental have turned out to be. Second, we should note that the simulation approach does not confine its attention to purely "private", "internal" events. It also invokes relations between mental states, on the one hand, and both perceptual situations and overt actions. Thus, there may well be enough ingredients of the right sort to make sense of a first-person-based grasp of mental concepts.' (1989, p.183.)
This was wholly unconvincing. First, that purely third-person accounts of mental concepts are problematic provides no support whatever for accounts that are purely first-personal. For there remains the intermediate theory-theoretic account sketched in section 3 above, which allows for the existence of first-person recognition of mental states, but maintains that this is recognition of them as states with particular theoretical characterisations. Second, if knowledge of the relations between mental states, and with perceptual situations and overt actions, is supposed to be constitutive of grasp of mental concepts, then what we have here is just another version of theory-theory. If, on the other hand, Goldman meant that these relations are to be learned subsequent to our grasp of mentalistic concepts, then it really is the case, after all, that such concepts must consist, at bottom, in pure recognitional abilities for distinctive feel.
What would be wrong with that? For most contemporary philosophers it is sufficient to refute the Goldman-Harris suggestion to point out that it commits them to a form of neo-Cartesianism. But lest we reject the position too hastily, let us consider whether or not it is really vulnerable to the standard objections to Cartesian accounts of the mental. Firstly, it need not be committed to the ontological aspects of Cartesianism, of course, since the thesis only concerns the nature of our most basic mentalistic concepts, not the nature of mental phenomena. So the position is fully consistent with the sort of physicalism which is obligatory, nowadays, for all right-thinking men and women.
Now secondly, what of the objection that Cartesian conceptions of the mental must inevitably render our knowledge of the mental states of other people problematic? This was a popular line of objection to Cartesianism in the 50s and 60s, when it was often claimed that by starting from acquaintance with my own mental states, and then having to argue by analogy to the mental states of others, I should be making what is, in effect, a weak induction from just one instance (see, for example, Malcolm, 1958). This objection, too, is easily answered, as Goldman himself points out (1989, pp.181-2), provided that we are prepared to accept a reliabilist conception of knowledge — provided, that is, we accept that knowledge is reliably acquired true belief rather than being, as tradition would have it, justified true belief. For, given reliabilism, my beliefs about the mental states of others will count as known provided that the process by which I arrive at them is in fact a reliable one. And it may be that simulation is just such a process.
Finally, what of the point that not every different type of mental state really does have a distinctive feel to it? While recognition of feel may be plausible for experiential states such as pains, tickles, and sensations of red, it is, surely, hugely implausible for beliefs, desires, and intentions. So my concepts of the latter cannot consist in any bare recognitional capacity. Here Goldman and Harris can reply that it is only occurrent mental events that have qualia — our concept of belief then being the concept of a standing-state that is apt to emerge in an event (an occurrent judgement) with a particular distinctive feel.
Thus if Goldman and Harris can make out the case that every occurrent mental state — in particular, every act of judging, wondering whether, wishing, and hoping — has a distinctive feel to it, appropriate to be an object of bare recognition, then it appears that they may be home and dry. But this now looks like a more promising avenue of criticism. For there are a potential infinity of such states, in virtue of the unlimited creativity of thought. Are we to suppose that each of us possesses, miraculously, an unlimited set of corresponding recognitional capacities? And anyway, what does it feel like to judge that today is Tuesday, as opposed to judging that today is Wednesday? Are there really any distinctive subjective feelings here to be had?
The only way forward for this form of simulationism that I can see, is to borrow the claim defended briefly above, that conscious propositional episodes of judging, wondering whether, and so on, consist in deployments of imaged sentences; and to couple this with the claim that we can immediately recognise such images in virtue of the way they feel to us. This enables the account to harness the creative powers of language to explain our capacity to recognise in ourselves an unlimited number of propositional episodes, and makes it seem plausible that there will, indeed, be a feeling distinctive of judging that today is Tuesday — namely the distinctive feel of imaging the sentence, 'Today is Tuesday'. Thus, I can, on this account, recognise in myself the new act of wondering whether there is a dragon on the roof (never before encountered), because this action consists in the formation of an image of the sentence, 'Is there a dragon on the roof?' (which is a state a bit like hearing that sentence), and because I can recognise this image in myself in virtue of being capable of recognising the distinctive feels of its component parts.
While such a view can avoid the standard objections to Cartesianism, there remain a great many difficulties with it. Notice, to begin with, that I should have to do a good deal of inductive learning from my own case before I could be capable of simulation, on this account. I should have to learn, in particular, that whenever I am aware of the distinctive feel of an intention, where the feel is similar to that of hearing an utterance of the form of words 'P', that I thereafter generally find myself performing actions describable as 'P'. For only so will I have any way of generating a predicted action for another person from the pretend-intention with which my simulation of them concludes. And since these feelings do not wear their causal efficacy on their sleeves, I should also have to reason to the best explanation, having discovered reliable correlations between feelings of various types, to arrive at a theory of the causal sequences involved.
(Notice that simulationism now inherits all the difficulties of the child-as-scientist theory-theory account. For the child has to be pictured as building up a body of theoretical knowledge of the causal relations amongst states which it can introspectively recognise immediately on the basis of their feels. It remains remarkable that all normal children should end up with the same body of knowledge at about the same time. And it remains mysterious how anyone is to engage in a practice of inferring to the best explanation who does not yet possess the concept of false belief.)
Notice, too, just how opaque an explanation of action would seem at this early stage. It would have the normal form: 'This feel and that feel caused that feel. [This belief and that desire caused that intention.] And that latter feel caused me to do P. [That intention caused my action.]' The suggestion that one could get from here to anything recognisable as belief-desire psychology is about as plausible (that is, immensely implausible) as the claim that we can get from descriptions of sequences of sense-data to full-blown descriptions of physical reality.
Philosophers and psychologists alike have long since given up believing that children learn to construct the world of three-dimensional physical objects, and then arrive at something resembling common-sense physics, by establishing inductive correlations amongst sense-data and reasoning to the best explanation thereof. The idea that children have to construct folk-psychology from their first-person acquaintance with their own feelings, supplemented by simulation of the feelings of others, should seem equally indefensible. For in both domains, note, the classifications made by the folk have to reflect, and respect, a rich causal structure. Even if we agree that all mental states have introspectively accessible feels, fit to be subjects of immediate recognition, it still remains the case that such feelings are useless for purposes of explanation until supplemented by much additional causal knowledge. And the question of how we acquire such knowledge is no more plausibly answered by simulationism than by child-as-scientist versions of theory-theory.
I have argued that the version of simulationism due to Goldman and Harris must face severe difficulties. Let me now conclude this section with two rather more precise sources of worry for their account. The first is that there are cases where we can have, and know that we have, distinct propositional episodes, where it is nevertheless implausible that there would be any difference between them in terms of introspectible feel. For example, consider the difference between intending and predicting that if the party should turn out to be a bore then I shall go to sleep. Each state will consist, on the above account, in an image of the very same sentence — the sentence, namely, 'If the party is a bore I shall go to sleep'. So the claim must be that imaging this sentence in the mode of intention is subjectively, introspectibly, different from imaging it in the mode of prediction. This certainly does not fit with my phenomenology. Granted, I will immediately know that I have formed an intention, if I have; but not on the basis of the distinctive way the event felt.
The second difficulty is the converse one — that there are cases where propositional episodes would be distinct, on the above account, which are, in reality, the same. Thus two token actions of judging that the dog bit the postman might consist, in the one case, of an image of the sentence, 'The dog bit the postman', and in the other case of an image of the distinct sentence, 'The postman was bitten by the dog'. Since the sentences are different, so are the images, and so too, on the account above, must be the mental states in question. But they are not. These are tokens of the very same type of thought.
Note that these examples present no difficulty for the sort of theory-theoretic account of introspective knowledge considered earlier. For our cognitive systems might very well be able to tell the difference between intending that P and predicting that P (which will, for the theory-theorist, be a difference in distinctive causal role) on the basis of differences that are not available to consciousness, or at any rate differences that are not phenomenological. Similarly, the imaged sentences about the postman will both be counted as constitutive of the very same thought, in virtue of my background theoretical knowledge that active-passive transformations have no significant effect upon causal role.
5. Simulationism without introspection — Gordon
Gordon has an even more ambitious story to tell about how it is possible to represent the beliefs and desires of another person. He claims, in particular, that this can occur without introspective access to one's own mental states, without yet having any mentalistic concepts, and without engaging in any sort of analogical inference from oneself to the other. (See his 1986, 1992, 1994, and this volume.) The story is, first, that I put my own practical reasoning system into suppositional mode by pretending to be the other person, A. Then within the scope of such a pretence, my uses of the first-person pronoun refer to A, and my expressions of pretended belief in the form 'P' or 'I believe that P' therefore represent the beliefs of A. To this is added the claim that basic competence in the use of utterances of the form, 'I believe that P', 'I want that P', or 'I intend that P' requires, not introspective access to the states of belief, desire, or intention in question, but only an ascent routine whereby one expresses one's beliefs, desires, and intentions in this new linguistic form. Here we have the materials for an account of how simulation can enable a child to boot-strap its way into acquiring mentalistic concepts without introspective access, and of how it can attribute the corresponding states to others without relying on an analogical inference.
There is one respect in which Gordon's account is in need of supplementation, I believe. For representing A as believing that P is not the same thing as ascribing to A the belief that P, or as asserting, or judging, that A believes that P. (Similar remarks apply, mutatis mutandis, to representing versus ascribing a desire or an intention.) Pretend-asserting 'P' or 'I believe that P' while simulating A is surely one thing; making, assertorically, the attribution, 'A believes that P', is quite another. For the first assertion occurs within the scope of a pretence, and is therefore not properly an assertion at all. How, then, is a simulator to get from the former to the latter? Will this re-introduce, after all, an inference from me to you? I think not, or not necessarily. Gordon should claim that what is distinctive of simulation, as opposed to other forms of imaginative identification, is that I am primitively disposed to complete the process by transforming pronouns, or by substituting a name for the first-person pronoun. So when I conclude my simulation of A with the pretend-assertion, 'I believe that P', I am then disposed to assert, outside of the scope of a simulation, 'He believes that P' or, 'A believes that P'.
Is this reply really sufficient to save Gordon from trouble? For does it not look as if there must be at least a tacit inference from me to you? For the transforming of pronouns is only going to be valid on the (tacit) assumption that you are relevantly similar to myself. I think Gordon should concede this point, since it does not damage his main case. The only sort of inference from me to you that Gordon is committed to rejecting, in my view, is one which would require us to have introspective access to our own mental states, and/or one which would require us to possess the concepts of belief and desire in advance. The tacit inference from me to you which is involved in the transformation of pronouns at the end of a process of simulation seems to require neither.
Now, I applaud Gordon's rejection of the introspectionist account of knowledge of our own beliefs and desires. Beliefs and desires are standing states; they are not experiences, nor even occurrent events. We therefore cannot have knowledge of them by virtue of introspecting their distinctive qualities. I also applaud what Gordon calls the answer-check procedure as an account of the way in which we give reports of our own beliefs and desires. But this is not because I agree that we only begin to acquire the concept of belief by being trained to preface our assertions with 'I believe that'. Rather, it is because I think that the primary way in which beliefs and desires, as standing states, become occurrent, and contribute to the causation of behaviour, is by emerging in acts of thinking with the same content. What is distinctive of my conscious standing-state belief that February contains twenty-eight days except in a leap year, is that I am, in appropriate circumstances, disposed to judge (think to myself), 'February has twenty-eight days, except in a leap year'. And what is distinctive of my conscious standing-state desire to have a holiday in France, is that I am disposed, in suitable circumstances, to think to myself, 'If only I were on holiday in France!', or, 'I want to have a holiday in France'. Whether or not I express these thoughts is irrelevant, in my view.
However, I do think that our occurrent thinkings are introspectible. By this I mean at least that we are each of us aware, immediately and non-inferentially, of what we have just judged, wondered whether, or made up our minds to do. (Remember, by a non-inferential process I mean only one that involves no conscious inferences.) We are also aware of the sequences of our thoughts, and will know, at least shortly afterwards, what led us to think one thing after another, and what sequence of reasoning led up to our decisions. (If saying this puts me in the opposite camp from Wittgenstein, Ryle, and Malcolm — see Gordon, 1994, note 2 — well then, so be it. My counter-charge is that Gordon is committed to something resembling the unacceptable behaviourism of these writers.) Acknowledging these facts does not have to make one into a Cartesian. They can equally well be accommodated by a functionalist theory of the mental, supplemented by a theory-theory account of our understanding of mentalistic concepts, as we saw in section 3 above. As a functionalist I can claim that it is distinctive of conscious thinkings that they are apt to give rise to the knowledge that those thinkings have just taken place, where the concepts deployed in such items of second-order knowledge are embedded in a common-sense theory of the workings of the mind.
Can Gordon find a place for the introspective phenomena mentioned above? I can't see how. For how could he possibly make an account of introspective knowledge ride on the back of this sort of radical — introspectionless — story about simulation? Indeed, I can't see how, from the process of simulation as Gordon describes it, I could even so much as get the idea that processes of reasoning often lead up to decisions, let alone get to know of the details of those processes in my own case. Let me elaborate.
According to Gordon, the child begins by being disposed to make assertions about the world, and by being disposed to express (not describe) its own desires and intentions. On this basis some new locutions can easily be introduced — the child can be trained to preface its assertions with, 'I believe that', its expressions of desire with, 'I want', and its expressions of intention with, 'I intend'. Having got so far, it can then begin to use increasingly sophisticated forms of simulation to attribute beliefs, desires, and intentions to other people, and can come to realise that an assertion of the form, 'A believes that P' can be appropriate when an outright assertion of the form, 'P', is not, and vice versa. (People can have false beliefs, and can be ignorant.) The child can also, on this basis, form a descriptive conception of the belief that P as: that state which is apt to issue in (cause) the utterances 'P' and/or 'I believe that P'. It can also form a descriptive conception of the difference between standing-state and occurrent beliefs, characterising the latter as a belief-state which is currently engaged in the causation of behaviour. But none of this would begin to give the child introspective access to its own occurrent beliefs (judgements), except by inference (presumably employing simulation) from its own recent behaviour; nor would it yet have any idea that it often engages in trains of thinking.
Can one imagine the process of self-simulation becoming so smooth and swift as to give us almost instantaneous knowledge of our own occurrent thoughts, independently of any disposition that we might have to verbalise those thoughts aloud? If so, then simulation might be able to boot-strap us into something at least resembling introspection. But the answer to the question is clearly negative. For simulation requires data to operate upon. In the case of attributions of occurrent thought, there must, in particular, be some overt behaviour to explain. Any process of simulation which concludes with a thought-attribution of the form, 'A has just judged that P', must begin with a representation of an action-to-be-explained, the process of simulation then consisting in trying out various pretend-judgements until one hits upon one that issues in a pretend-intention to perform the action in question.
Thus one can, by using simulation, only come to know of an occurrent thought after the behaviour which it causes. But since many thoughts occur some time prior to the actions that they rationalise, simulation will never be able to issue in thought-attributions to oneself that are, in general, anything like simultaneous with the thoughts ascribed. Moreover, since many occurrent thoughts never issue in action at all, they must forever lie beyond the reach of a simulationist strategy, no matter how swift and smooth it may become.
6. Three sets of empirical commitments
I have argued, on the basis of a variety of armchair (or rather typing-stool) considerations, that theory-theory is preferable to either form of simulation-theory in terms, at least, of its treatment of first-person knowledge. It is worth noting that some of the empirical data, too, pull in the same direction, specifically relating to the developmental sequence of self- and other-attributions of propositional attitudes. This is a good testing-ground for me, since each of the three theories I have considered makes distinct predictions about the normal order of development.
The theory-theory predicts that there should be no difference in the development of self- and other-attributions. As more sophisticated mentalistic theories and concepts become available, either through learning or maturation, so they can feed into more sophisticated attributions either to oneself or to other people. So we should expect a pattern of development in which children make essentially the same sorts of characteristic errors in self- and other-attributions at the same developmental stages.
The Goldman-Harris theory, on the other hand, predicts that competence in self-attribution should be achieved before competence in other-attribution. For simulation, in this version of the story, consists in projections of mentalistic attributions from oneself to a simulated other. So the predicted pattern of development will be a movement from common errors in both self- and other-attribution, through a stage at which there are characteristic errors still occurring in other-attribution which have disappeared in the case of self-attribution, to a stage of overall competence.
Finally, Gordon's form of simulationism predicts (counter-intuitively) that competence in self-attribution should only be achieved after competence in other-attribution. For on such an account (as Gordon himself notes, 1994), attributing mental states to oneself with full understanding (not just using the answer-check procedure followed by semantic ascent) requires a dual (and hence more difficult) simulation — in fact I must simulate another person (or myself at a later time) simulating myself.
Pleasingly, the available empirical data count in favour of theory-theory on this matter. At the stage at which children are still making errors in allowing for the possibility of false belief, or in distinguishing between different sources of belief, or in describing the appearance of an illusory object (such as the "rock-sponge"), they are just as likely to make these errors in relation to their own states as to the states of other people, and vice versa. (See Gopnik and Wellman, 1992, pp.160-6, and Gopnik, 1993, pp.3-8 & 90-3.) And when these errors disappear, they disappear across both self- and other-attributions together.
Granted, simulationism may have some valuable things to tell us about the way in which we go about predicting and explaining the mental states and actions of other people, and of our own past selves, in some circumstances. But it has, I claim, nothing of value to tell us about the manner in which those states are conceptualised or introspectively known. If I am right, then the only defensible form of the doctrine will be: simulationism circumscribed by theory.
To oversimplify the history just a little: first there was Cartesianism, then there was behaviourism, and then there was theory-theory. This sequence was generally perceived to be progressive. Now we have simulationism, which is claimed to advance our understanding still further. But it is, in reality, a step back — either to Cartesianism, in the Goldman-Harris version of it, or to quasi-behaviourism, in the Gordon version. So theory-theory still rules OK!
I am grateful to the following for their comments on an earlier draft: George Botterill, Jack Copeland, Paul Harris, and Peter Smith.
Unfortunately these have been snipped out to figure in the consolidated bibliography to the volume.