Hauser, L. (Spring/Summer 1994). Behavior and Philosophy, Vol. 22, No. 1, pp. 22-28.


Acting, Intending, and Artificial Intelligence

Larry Hauser


What remains if I subtract the fact that my arm went up from the fact that I raised my arm? (Wittgenstein 1958, §621)

Abstract

Hauser considers John Searle's attempt to distinguish acts from movements. On Searle's account, the difference between my raising my arm and my arm's just going up (e.g., if you forcibly raise it) is the causal involvement of my intention to raise my arm in the former, but not the latter, case. Yet, we distinguish a similar difference between a robot's raising its arm and its robot arm just going up (e.g., if you manually raise it). Either robots are rightly credited with intentions or it's not intention that distinguishes action from mere movement. In either case acts are attributable to robots. Since the truth of such attributions depends not on the speaker's "intentional stance" but on "intrinsic" features of the things, they are not merely figurative "as if" attributions.

Gunderson allows that internally propelled programmed devices (Hauser Robots) do act but denies that they have the mental properties such acts seem to indicate. Rather, given our intuitive conviction that these machines lack consciousness, such performances evidence the dementalizability of acts. Hauser replies that the performances in question provide prima facie warrant for attributions of mental properties that considerations of consciousness are insufficient to override.

Action minus Movement: Wittgenstein's Question

On John Searle's account, the difference between my arm going up and my raising it is the causal involvement of my intention in the latter case (Searle 1979a). This analysis of the difference, together with Searle's well-known denial that machines such as computers have intentions, seems to entail that nothing remains if I subtract the fact that the robot arm goes up from the fact that the robot raises its arm (Searle 1980, 1990). Yet we distinguish much the same difference in the robot case as in our own. I raise the robot's arm to check its alignment as the doctor might raise mine. A short circuit might make the robot's arm go up as an epileptic seizure might make my arm go up. So, either Searle's analysis of action as movement plus intent, or his denial that computers have any intentions, must be mistaken.

Clearly Searle's characterization of action as movement with intent is overly simple. Something remains if I take away the fact that the tree was uprooted from the fact that the wind uprooted it, but it is not the wind's intent. The distinction between action and movement is of wider application than the concept of intention or aim. I say that we ascribe actions to some machines in much the same robust sense in which we predicate acts of people and animals; and we do so as much on the basis of these machines' internal states and operations as on people's and animals' (when we ascribe acts to them).{1}

Action and Intention

Against attempts of "standard behavioristic analyses of mental states blandly [to] use the concept of intentional behavior as if it were somehow less mentalistic than the other mental notions" (Searle 1979b, p.196), Searle observes:

... to say that a man is walking to the store or eating a meal is to attribute to him mental states which are no less mental than to say that he wants to get to the store or he believes that the stuff on his plate is food. We have the illusion that behavior is not a mental notion because we can observe bodily movements, but the bodily movements only constitute human actions given the assumption of the appropriate intentions and beliefs. (Searle 1979b, p.196)

This requires Searle to stake out a radical position in order to defend a blanket denial of mental states to computers. If behavior is "a mental notion" and computers have no share in the mental, computers must be said to lack not only the highest and most human cognitive faculties (e.g., natural language comprehension) but also such lower capacities as perception and action. Not only doesn't Schank and Abelson's program, SAM, understand the stories it processes, it doesn't even process them! Neither does WordPerfect detect my keypresses nor display the text I type. Neither do adding machines add nor calculators calculate. Deep Thought doesn't really play chess. In all these cases, according to Searle, we take an "intentional stance" toward the machines in question only to predict their behavior. We speak figuratively as if computers perceive or act or cognize; but we don't, in so speaking, literally attribute "intrinsic" action, perception, or cognition to them. All such talk, Searle claims, is "at best metaphorical" (Searle 1982c, p.345).

Yet, Searle admits in another connection, "an ordinary application of Occam's razor places the onus of proof on those who wish to claim that these sentences are ambiguous" (Searle 1975). Responsibility thus lies with Searle to show that the ambiguity he invokes exists and applies to everything that computers seem to do.

"As If" Intentionality

When modern folk describe the doings of some inanimate things in robustly mental terms -- as when they say water "tries to get to the bottom of the hill by ingeniously seeking the line of least resistance" (Searle 1989, p.198) -- they think that they speak figuratively. On the other hand, we speak literally enough when we attribute action in a weak sense to inanimate things -- as when we say "the wind uprooted the tree." What remains when we subtract the tree's being uprooted from the wind's uprooting it is, roughly, the causal process by which the uprooting came about. Where we distinguish acts from their constituent movements, characterization of the events as acts includes information about the movement's manner of production that is not included in the bare description of the movement itself. This fact seems to distinguish action from movement independently of whether the manner of production is intentional or not.

It may appear that no philosophical ice is cut, then, by our distinction between the robot raising its arm and the robot arm going up: perhaps the causal remainder (if we take away the movement) in these machine actions is no more intentional than what remains in the case of the wind. Yet note the difference between attributing an action to an agent and merely describing the movements or effects in which the act is accomplished. The former has implications concerning causation that the latter description doesn't. Since this difference between action and movement is a matter of the presence or absence of processes or properties internal to the agent, at least this difference would seem -- in Searle's terms -- "intrinsic." "To say they are intrinsic is just to say that the states and events really exist in the ... agents" (1984, p.4).

Still, we make a further distinction between actions that are voluntary and actions that are not. Perhaps it is only the former full-blooded deeds that Searle would deny to robots and computers. What's not yet clear is whether the robot's raising its arm belongs to the class of voluntary actions and whether the voluntary/involuntary distinction depends on the presence of internal states or processes in the agent.

Searle's Slippery Slope

Searle proposes that if we do not rule out ascribing acts to inanimate things across the board, almost any intentional state will be literally ascribable to anything at all. Thus, if we allow that computers act full-bloodedly, then we shall have to admit that water and wind do too. Since "relative to some purpose or other anything can be treated as if it were mental," Searle warns, we must distinguish literal ascriptions of full-blooded actions to humans and animals from metaphorical attributions to computers; or else "everything is mental" (Searle 1989, p.198).

The trouble with this argument is that it abolishes distinctions that we make. Our attributions of action to computers don't all seem equally metaphorical. Rather, we think we discern a difference between saying, "DOS recognized the print command and initialized the printer," and saying, "DOS maliciously erased my files because it hates me." Only in the latter case do we recognize that we are speaking figuratively. Searle must either dismiss our intuitions about this and say that the former attributions are as figurative as the latter; or acknowledge that some usages are more figurative than others.

Perhaps differences in degrees of figurativeness might be cashed out in terms of degrees of behavioral similarity: DOS acts very much as if it recognizes certain commands, and only a little as if it hates me. But however we do it, distinguishing degrees of figurativeness of mental attribution avoids the panpsychism Searle fears. It does so because differences in how much things act as if they have specific mental properties cut across the boundaries between the animate and the inanimate. The behavioral differences we already use to attribute, and to refrain from attributing, specific mental states to people and animals suffice to prevent panpsychic proliferation without Searle's blanket distinction.

What Remains Reconsidered: "Aspectuality"

If you raise my arm or a technician raises the robot's, there is no genuine act of mine or the robot's to consider. There's no problem: it's not me or the robot that does the raising. It's you or the technician. If my arm goes up in an epileptic seizure, or the robot's arm due to a short circuit, the origin of the movement is internal to me or the robot. Nonetheless, the movement is not an act of mine or the robot's in the full-blooded sense we are now trying to make out. Finally, there's the case of unintended consequences. If the arm goes up to flip a switch to turn on a light, and this happens to alert a prowler, alerting the prowler (unlike raising the arm, flipping the switch, and turning on the light) is an unintended side effect rather than a full-blooded act (Davidson 1967). But it's a side effect in either case, mine or the robot's. In both cases, the difference is that full-blooded acts come under explanations that subsume them as aims of the agent: others (due to external compulsion, being mere twitches, or merely being side effects) do not. I raise my arm to flip the switch and flip the switch to turn on the light; but I don't turn on the light to alert the prowler. So, alerting the prowler is not something I do in the full-blooded sense but an unintended consequence of what I do. It's unintended by the robot, also; but alas, on Searle's account, raising its arm, flipping the switch, and turning on the light are all equally unintended by the robot. Yet we do seem to be able to distinguish the robot's full-blooded acts from what it "does" inadvertently, where Searle's analysis seems to deny there's any difference to be distinguished.

From "s advertently does e" (the turning on of the light), and "e = f" (the alerting of the prowler), "s advertently does f" does not follow. Apropos of this sort of referential opacity Davidson speaks of acts being done "under descriptions" (Davidson 1964): advertence, so to speak, is de dicto and not de re. In Searle's terms acts are done "under aspects": aspects are understood to be like dicta or descriptions except for being (possibly) nonverbal.{2} Searle rightly notes in giving reasons for preferring this "aspect" terminology to Davidson's talk of "descriptions," we think that animals without language sometimes act in robust aspectual ways also. Rover fetches the ball under the aspect fetch the ball and not under the aspect tear up the flower bed, though these happen (ex hypothesi) to be the same event (Searle 1983, p. 101). Similarly, I intend what I do under aspects expressed by the descriptions "turning on the light" and "raising my arm" but not under the aspect answering the description "alerting the prowler." Yet, here again, much the same can be said of the robot. It's not the robot's aim (it's not designed and programmed) to alert the prowler, but only to raise its arm to flip the switch to turn on the light. The distinguishability of advertent deeds of robots from their inadvertent consequences shows that "aspectuality" (as Searle calls it) does not signal a crucial difference such as would warrant dismissing robot doings as merely "as-if" purposeful "as if" actions. Since computers sometimes act under aspects, aspectuality does not distinguish a sense of "action" in which humans and certain animals act and computers don't.

Conclusion

In addition to the distinction between weak action (action simpliciter) and purposeful or voluntary (full-blooded) action we have been trying to make out, we make a further distinction for legal and moral purposes between deliberate, premeditated acts (planned acts of mentally competent adults) and merely purposeful or voluntary (nondeliberate, unpremeditated) acts, such as children and animals are also capable of (cf. Aristotle, Nich. Eth., Bk. 3:1-3). I suppose that in this strongest sense of "action," the sense that underwrites attributions of full legal and moral responsibility, humans act but computers don't. But neither do we think that animals, children, mental incompetents, or competent adults "in the heat of passion" act in this sense; and we do not, for all this, deny children, animals, and the rest some share in the mental in virtue of their voluntary (albeit nondeliberate) acts. To have some share in the mental would seem only to require acting voluntarily, not deliberately, and in this intermediately strong sense, I argue, robots too are rightly said to act.

The picture is this: among acts, we distinguish a proper subset as voluntary; among voluntary acts, we distinguish a proper subset as deliberate. We have seen that whether anything simply acts in the weak sense in which the wind uproots the tree (the sense that goes with attributions of causal responsibility) depends on internal properties of the putative agent -- whether an appropriate causal chain goes through it. This is an intrinsic difference, as Searle styles such differences. Furthermore, the difference between acting in the merely causal sense and full-blooded purposeful action -- what needs to be added to action simpliciter to make it full-blooded -- seems no less "intrinsic" or a matter of internal properties of the would-be agent in the computer's case than in ours.{3} For consider: my MS-DOS computer sometimes initializes my printer when I issue the print command; or tries to (I often forget to turn the printer on). My computer is on my desk. My printer stands to the east. Initializing the printer at my house is sending electric signals east. Here we have action under an aspect. DOS acts, it seems, under the aspect initialize the printer and not under the aspect send electric signals east; yet it happens at my house that initializing the printer is sending electric signals east. The difference between the computer's advertent initialization of the printer and its sending electricity east inadvertently is that the computer is programmed to initialize the printer, not to send current east. Notice that whether a computer is running programs, and which it's running, depends (to put it crudely) on whether it's turned on and which diskette is inserted: on intrinsic states of the computer (if anything's intrinsic).

Much as different purposes of mine support different counterfactual expectations of me, different programs in computers support different counterfactual expectations of them: my computer will still initialize the printer if I move the printer west of it. Just as Searle holds that an argument for the view that "actions [are movements] caused by intentions ... would be that intentions enable us to justify counterfactuals in a way that is typical of causal phenomena" (Searle 1979a, p.253), so also an argument for the view that movements caused by programs are full-blooded acts would be that programs "enable us to justify counterfactuals" concerning behavior under aspects.
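The counterfactual point can be made concrete with a minimal, hypothetical sketch (not from Hauser's text; the class and function names are illustrative assumptions). The program below specifies its behavior only under the aspect initialize the printer; that the signals travel east is an unmentioned accident of the room's layout, so the same program still initializes the printer if the printer is moved west.

```python
# Hypothetical sketch (illustrative only): a toy model of the printer example.
# The controller's program specifies behavior under the aspect "initialize the
# printer"; the compass direction the signals travel is a contingent fact about
# the room, nowhere mentioned in the program.

class Printer:
    def __init__(self, direction_from_computer: str):
        self.direction = direction_from_computer  # accident of layout, not of program
        self.initialized = False

    def receive_init_signal(self) -> None:
        self.initialized = True


def print_command(printer: Printer) -> str:
    """The programmed aim: initialize the printer (the aspect the act is done under)."""
    printer.receive_init_signal()
    # Inadvertent side effect, unspecified by the program: the signals happened
    # to travel whichever way the printer lies.
    return f"initialized printer; signals incidentally went {printer.direction}"


# Counterfactual support: relocate the printer and the same program still
# initializes it, because "initialize the printer" -- not "send signals east" --
# is what the program prescribes.
print(print_command(Printer("east")))
print(print_command(Printer("west")))
```

On this toy model, as in the example above, the counterfactual "it would still initialize the printer were the printer moved west" is underwritten by the program, much as counterfactuals about my behavior are underwritten by my purposes.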

Gunderson's Reply: "Movements, Actions, the Internal, and Hauser Robots"

References

Aristotle. Nicomachean ethics. Trans. Terence Irwin. Hackett Publishing Company, Inc., Indianapolis, IN (1985).

Davidson, D. (1964). Actions, reasons, and causes. Essays on actions and events, 3-20. Oxford University Press, New York (1980).

Davidson, D. (1967). The logical form of action sentences. Essays on actions and events. Oxford University Press, New York (1980).

Frege, G. (1892). On sense and reference. Trans. Max Black. Translations from the philosophical writings of Gottlob Frege, ed. Peter Geach and Max Black. Oxford: Basil Blackwell.

Putnam, H. (1975). The meaning of "meaning". Mind, language and reality. Cambridge University Press: Cambridge.

Searle, J. R. (1975). Indirect speech acts. Expression and meaning. Cambridge University Press: Cambridge (1979).

Searle, J. R. (1979a). The intentionality of intention and action. Inquiry, 22, 253-280.

Searle, J. R. (1979b). Intentionality and the use of language. Expression and meaning. Cambridge University Press: Cambridge (1979).

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

Searle, J. R. (1983). Intentionality: an essay in the philosophy of mind. Cambridge University Press: New York.

Searle, J. R. (1984). Intentionality and its place in nature. Synthese, 61, 3-16.

Searle, J. R. (1989). Consciousness, unconsciousness and intentionality. Philosophical Topics, xxxvii, 10, 193-209.

Searle, J. R. (1990). Is the brain's mind a computer program? Scientific American, 262, 126-131.

Wittgenstein, L. (1958). Philosophical investigations. Trans. G. E. M. Anscombe. Macmillan Publishing Co., New York, NY.

Notes

1. Or as little -- pace Putnam (1975).

2. These aspects are like Fregean senses or "modes of presentation" (Frege 1892). Searle's "aspect" terminology stems from Wittgenstein (1958, II:xi).

3. And again, pace Putnam (1975), no more.