Dennett, Daniel C. (1993). Evolution, Teleology, Intentionality. Behavioral and Brain Sciences, 16(2), 289-391.
Copyright Cambridge University Press.

Evolution, Teleology, Intentionality

Reply to Ringen and Bennett (continuing commentary on "Précis of The Intentional Stance"), Behavioral and Brain Sciences, 16(2), 289-391, 1993.

Daniel C. Dennett

No response that was not as long and intricate as the two commentaries combined could do justice to their details, so what follows will satisfy nobody, myself included. I will concentrate on one issue discussed by both commentators: the relationship between evolution and teleological (or intentional) explanation. My response, in its brevity, may have just one virtue: it will confirm some of the hunches (or should I say suspicions) that these and other writers have entertained about my views. For more closely argued defenses of my points, see Dennett 1990a,b,c; 1991a,b.
As Ringen notes, I have claimed that mentalistic or intentional explanations are not just similar to adaptationist explanations of evolution but continuous with them; there is just one sort of explanation here, operating according to one set of principles. Ringen thinks this is mistaken, and presents me with a dilemma: I must side either with the neo-Darwinians (who offer to reduce or even eliminate teleology via a mechanistic model of natural selection) or with Bennett (who according to Ringen champions a non-reductive, Aristotelian concept of real teleology). The position Bennett presents is more nuanced than Ringen suggests, but he supports a version of Ringen's challenge: he deplores what he sees as my fence-sitting "stance" talk, and urges me to get real: where I have said "nothing without a great deal of structural and processing complexity could conceivably realize an intentional system of any interest," Bennett "would replace the last phrase by 'a genuinely intentional system', leaving 'interest' out of it." He sketches several of his proposals for settling the determinable questions of intentional attribution in ways, he claims, that are independent of my appeals to evolution.

There is a symmetry to their disagreements with me. Ringen maintains that contrary to what I have said, the concept of selection for, and hence a basis for adaptationist theorizing in biology, can be secured independently of any intentionalizing of the design process--one needn't appeal to "what Mother Nature had in mind". Bennett maintains that contrary to what I have said, assertions about intentional attributions--about what an organism "had in mind"--can be secured independently of any assumptions about the provenance in evolution of the organism in question. If they were both right, we could have a non-intentional evolutionary theory and a non-evolutionary theory of intentionality.

I continue to think they are both wrong. The apparent differences between adaptationist theorizing in biology and intentionalist theorizing in psychology are due, in my view, to the huge differences in time scale and--more evident in the discussions of both Ringen and Bennett--to a downplaying of the implications of the ubiquitous idealizing assumptions in both enterprises. When we grasp the nettle and confront the ineliminable "practical difficulties" (Ringen) that beset the evolutionary theorist intent on distinguishing actual cases of selection for, and the parallel practical inability of the intentionalist psychologist to cash out the idealizing assumptions that permit talk (in Bennett's example) about a "class of environments . . . unified with help from the concept of food-getting," we see that both enterprises continue to avail themselves--quite appropriately and defensibly--of what Quine called the "dramatic idiom": the sense-making interpretation-talk of the intentional stance. I claim that since there is just one sort of explanation going on in both quarters, the choice Ringen offers me must be rejected: teleology is neither as illusory as his neo-Darwinians claim nor as real and irreducible as his Aristotelian Bennett claims.

Ringen renders usefully explicit the vision of real teleology that haunts current thinking both in evolutionary theory and in philosophy of mind--where I have in mind particularly Dretske's (1986, 1988) quest for a causal role for meanings. Suppose there were such a thing as a genuinely teleological system, or, equivalently, a real (as opposed to approximate or "as-if") intentional system: "Teleological principles provide a basis for predicting what response to new circumstances a system which conforms to them will produce" (Ringen, ms, pp. 3-4; see also Bennett, ms, p. 17, but note also that he recognizes that this is too idealized, because of the omnipresent possibility of error). Such a system would not just happen to track appropriateness; it would do so in a principled way. It would be caused, in Dretske's view, to track meanings in an appropriate way. But there are no such systems, human or otherwise. There are only better and worse approximations of this ideal--which is rather like the ideal of a frictionless bearing, or a perfectly failsafe alarm system. As Ringen points out, the process of natural selection doesn't quite measure up as a teleological system. Selection itself can only filter, at best supporting the conditional: if the appropriate sort of variation is generated, it will be selected. The generation process that provides the candidates for sorting is itself deemed by orthodoxy to be unresponsive to appropriateness. So there can be no guarantee, or anything even close to a guarantee, of genuine "teleological" or meaning-tracking behavior in evolution. I agree, then, with the passage Ringen quotes from Lewontin (1979): "The dynamics of natural selection does not include foresight, and there is no theoretical principle that assures optimization as a consequence of selection."

Both Ringen and Bennett would like to accept the invited contrast of this orthodox view of evolution with a design process controlled by an Intelligent Artificer (or just an intelligent artificer--an everyday, foresightful intentional system such as an engineer). When we look closely at the contrast, however, do we discover anything but differences in degree? Some engineers are doltish and habit-bound; if a particular design solution happens to occur to them, they'll adopt it, but there is no guarantee that they will generate the move that we can see in hindsight is the appropriate move in the circumstances. Some engineers are much cleverer, and some have positively brilliant insights into the reasons for and against particular design proposals. How adroit, how flexible, how sensitive to these reasons must a system be for it to be a real intentional system? Bennett's "unity condition" is supposed to answer this question: if "the class of environments is unified with help from the concept of behaving in a manner appropriate" to this or that feature, then we are entitled to attribute that concept to that system, not as a façon de parler but literally. But one theorist's unifying concept is another theorist's inflationary shorthand for a mere disjunction of tropisms (cf. Dretske, and Dennett 1987, chapter 8). Bennett in effect concedes this, for he casts his question in terms of when we may hypothesize that there are going to be more disjuncts than we have observed: "What can make it all right for us to trust an intentional or teleological generalization to lead us from some S-M linkages to predict others?" (ms, p. 17). Bennett suggests that hypotheses about evolution or learning, or both, could underlie our confidence that one way or another there were mechanisms in an organism (or artifact) that would tend to yield further appropriate linkages. I agree (see Dennett 1990a, 1990b, 1991c), and I don't see (yet) why Bennett claims that his view in this regard is "quite different from" what I have been saying.

I think Ringen's optimism about the independent application of optimality principles in evolutionary theory is similarly undercut. In discussing the case of the sexually reproducing snails' response to the castrating trematode parasite, for instance, he says: "optimality principles predict that such optimal traits will emerge. . ." I think not. Optimality principles predict that either such optimal traits will emerge or they won't; in the latter case, either the parasites will secure their own extinction by the extinction of their non-adapting hosts, or some semi-stable exploitation cycle will persist indefinitely. There are no guarantees, only the rationales of hindsight. But don't knock hindsight; one way or another, it's the only sort of sight we can ever count on having. At our best, our adaptive mechanisms lag slightly behind reality, tracking it ever more doggedly, but never giving us a "principle" by which we might predict genuinely teleological activity.

Finally, I will comment all too briefly on some of Bennett's other constructive criticisms and objections. Bennett corrects my interpretation of his views on Quine's indeterminacy thesis. The view he and Blackburn hold had not occurred to me, and I have no opinion, yet, on whether, as he claims, determinacy of language is consistent with indeterminacy of thoughts.

Bennett describes my view of intentional attribution as "free-ranging, somewhat haphazard," since it is governed by only "two extremely mild constraints": a rationality assumption and a prohibition of inflationary attributions. He claims that, on the contrary, "a good deal of discipline" can be brought to bear on the project. I have no quarrel with the details of his sketched example (the animal's food-seeking behavior); I just think the considerations he correctly raises are subsumed under my constraints, which are not mild at all. There is plenty of structure to the reasoning processes that govern the postulation and support of intentional attributions, and it is generated, indirectly, by my minimal constraints.

I think Bennett misunderstands the strategy of my vending machine example. He is not alone, so it is my fault. He is right that the vending machine is even worse than the thermostat as an example of an intentional system--that was deliberate on my part. I wanted to choose an example of a dead-simple quasi-perceptual mechanism (the counterfeit-coin detector), so that there would be no controversy about "what we would say"; of course there is no deep fact of the matter in this instance about which cases of coin-rejection count as "errors." Any grounds for calling some cases errors and others proper functioning will have to depend on the embedding of the device in a larger context of purposes: the purposes of its users. The challenge is then for the believers in deeper facts about content in fancier cases (Twin Earth cases in particular) to show what features of those fancier cases permit us to invoke other principles. I claim they cannot, and I do not see that Bennett's discussion provides any such grounds. Bennett says we do not have to solve the problem of error for the vending machine. We do not have to solve the problem of error for anything; we can always ("in principle") eschew intentional discourse and settle for the brute mechanism of the physical stance. But if we find it illuminating to adopt the intentional stance (and even in the case of the vending machine, the error-talk is illuminating--just think of the design-improvement process, the invocation of Gresham's Law, etc.), we will find ourselves invoking the minimal but none-too-mild constraints of the intentional stance.


Bibliography

Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press/A Bradford Book.

Dennett, D. C., 1990a, "Ways of Establishing Harmony," in B. McLaughlin, ed., Dretske and His Critics, Oxford: Blackwell, pp. 118-130.

Dennett, D. C., 1990b, "The Interpretation of Texts, People, and Other Artifacts," Philosophy and Phenomenological Research, 50, pp. 177-94.

Dennett, D. C., 1990c, "Dr. Pangloss Knows Best" (reply to Amundson), Behavioral and Brain Sciences, 13, pp. 581-2.

Dennett, D. C., 1991a, "Real Patterns," Journal of Philosophy, 88, pp. 27-51.

Dennett, D. C., 1991b, "Do-it-Yourself Understanding," Center for Cognitive Studies Preprint CSS-90-4, Tufts University, Medford, MA.

Dretske, F., 1986, "Misrepresentation," in R. Bogdan, ed., Belief, Oxford: Oxford University Press.

Dretske, F., 1988, Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT Press/A Bradford Book.