PTN Webbed Reprint Collection
William H. Calvin
University of Washington
Box 351800
Seattle WA 98195-1800 USA

William H. Calvin

"The unitary hypothesis: A common neural circuitry for novel manipulations, language, plan-ahead, and throwing?"

as it appeared in
Tools, Language, and Cognition in Human Evolution, edited by Kathleen R. Gibson and Tim Ingold. Cambridge University Press, pp. 230-250 (1993).

copyright ©1993 by William H. Calvin and Cambridge University Press.


Plan-ahead becomes necessary for those movements which are over-and-done in less time than it takes for the feedback loop to operate. Natural selection for one of the ballistic movements (hammering, clubbing, and throwing) could evolve a plan-ahead serial buffer for hand-arm commands that would benefit the other ballistic movements as well. This same circuitry may also sequence other muscles (children learning handwriting often screw up their faces and tongues) and so novel oral-facial sequences may also benefit (as might kicking and dancing). An elaborated version of the sequencer may constitute a Darwin Machine that spins scenarios, evolves sentences, and facilitates insight by offline simulation. An example is given of an evolutionary scenario from an apelike ancestor, demonstrating the transition behaviors and growth curve considerations that any such theory needs to have; this particular scenario (involving throwing improvements) also suggests an explanation for the puzzling design of the Acheulean "handaxe."

In considering transitions of organs, it is so important to bear in mind the probability of conversion from one function to another....
Charles Darwin, On the Origin of Species, 1859

That evolution, over all-but-infinite time, could change one physical organ into another, a leg into a wing, a swim bladder into a lung, even a nerve net into a brain with billions of neurons, seems remarkable, indeed, but natural enough. That evolution, over a period of a few million years, should have turned physical matter into what has seemed to many, in the most literal sense of the term, to be some kind of metaphysical entity is altogether another matter.

Derek Bickerton, Language and Species, 1990


It is traditional to talk about language origins in terms of adaptations for verbal communication, to talk about toolmaking in terms of natural selection shaping up the hand and the motor cortex, to talk about the evolution of intelligence in terms of how useful versatile problem-solving and look-ahead would have been to hominids. But might natural selection for one of these improvements serve to haul along the others as well? Indeed, might some fourth function be the seed from which the others grew? What are the chances of coevolving talk, technique, and thought?

Getting "something for nothing" is, I know, profoundly anti-Calvinist. And the Puritan ethic seems to require us to look for a function's antecedents in their usefulness to that very function. But, as the epigraph from Darwin shows, new uses for old things is not anti-Darwinian. Conversions of function can be profound, providing an enormous boost toward a new ability. Capabilities occasionally arrive unheralded by gradual predecessors. In the familiar case of bird flight origins, one observes that it takes a lot of wing feathers to fly even a little. Presumably natural selection for thermal insulation shaped forelimb feathers up to the threshold for flight. Natural selection for a better airfoil shaped feathers thereafter, but the switch-over from the insulation function to the flying function was presumably a surprise, leaving the protobirds to explore their newfound gliding abilities rather as we might try to figure out a holiday gift that arrived without an instruction manual.

Inventions are the novelties in evolution, though from reading most of the literature you'd think that shaping up streamlining was what it was all about. But adaptations are only improvements on an existing function; what we're talking about here is the invention itself, before streamlining, and that's often a matter of a Darwinian conversion of function. Nature does take leaps, and such conversions of function are even faster than the anatomical leaps envisaged by proponents of punctuated equilibria and hopeful monsters.

Why might we expect one of these hominid improvements (tools, language, intelligence -- and, I might add, accurate throwing and making music) to serve as a preadaptation for the others? First the general reason, then some specific ones.

In general, the brain is better at "new uses for old things" than any other organ of the body. Sometimes two digestive enzymes, each evolved separately for a different food, can act in combination to digest a third foodstuff. But a brain can easily combine sensory schemas and movement programs in new ways, since it tends to use a common currency that allows disparate things to be compared. From whatever source, an excitatory or inhibitory input is first converted into positive or negative millivolts; nerve cells then add and subtract in this substitute value system. For one input to substitute for another, it is only necessary that they produce similar voltage changes in the relevant nerve cells. One can add apples and oranges to get so many pieces of fruit.
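This "common currency" can be sketched as a toy cell that merely sums signed millivolt inputs and compares the total to a firing threshold. The threshold and input values below are illustrative choices, not physiological measurements:

```python
# Toy illustration of the brain's "common currency": an input from any
# source (vision, touch, hearing) is first converted to a millivolt
# change, and the cell simply adds and subtracts in that substitute
# value system. All numbers are made up for illustration.

def membrane_response(inputs_mv, threshold_mv=10.0):
    """Sum excitatory (+) and inhibitory (-) inputs, in millivolts,
    and report the total and whether the cell reaches threshold."""
    total = sum(inputs_mv)
    return total, total >= threshold_mv

# One input can substitute for another so long as it produces a
# similar voltage change -- apples and oranges add up as fruit.
visual_only = membrane_response([+6.0, +5.0])       # two excitatory synapses
mixed = membrane_response([+6.0, +5.0, -2.0])       # plus one inhibitory input
```

With these made-up values, the purely excitatory pair reaches threshold while the inhibitory input holds the mixed case just below it.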

More specifically, the aforementioned functions are all examples of stringing things together, that specialty of the left brain.

But why might all these functions overlap in the brain? Why might some neural machinery be used interchangeably for one, then the other?

"Borrowing" Neural Machinery

Thanks to our map-making tendencies, we usually assume that the neural machinery for moving the hand should have little to do with that for moving the mouth: at least on the motor strip, they're in "different compartments" (though, I might note, neighboring ones). But it is premotor and prefrontal cortex that has the reputation for stringing things together, getting them ready to finally execute, not motor cortex. It isn't even clear that such sequences feed exclusively through motor strip, since premotor cortex has as many direct connections to spinal cord as does motor strip itself.

Furthermore, both inputs and outputs tend to be broadly wired in cerebral cortex: the functional maps may suggest segregation, but the anatomy is one big smear (another example of Alfred Korzybski's "The map is not the territory"). For example, in the somatosensory path from a fingertip to the cortex's sensory strip, a given thalamocortical neuron responding only to that small patch of skin will nonetheless send axon branches to nearly all of the cortical map for the hand, not just the map of that particular finger (presumably the synaptic strengths are stronger there than elsewhere). And a cell near the thumb-face boundary line of sensory strip seems to receive inputs from both; some weeks the face connections are stronger, other weeks the hand connections dominate that cell's functioning (references in Calvin, 1989).

A cell in motor cortex, seemingly specializing in a hand movement, will nonetheless send axon branches to numerous other segments of the spinal cord; perhaps overcoming the clumsiness of youth involves tuning up the synaptic strengths to emphasize one destination over the others. So to suggest that the parts of cerebral cortex which optimize hand movements might also aid mouth movements is not such a radical proposal. Indeed, Charles Darwin made such neurological observations in his 1872 book, The Expression of the Emotions in Man and Animals:

[Persons] cutting anything with a pair of scissors may be seen to move their jaws simultaneously with the blades of the scissors. Children learning to write often twist about their tongues as their fingers move, in a ridiculous fashion.

It looks as if that broad smear of wiring may sometimes cause simultaneous activation of rather different muscle groups. The tendency of right-handed gestures to accompany left-brain-generated speech (Poizner et al., 1987) is only the most commonly noticed manifestation of this higher-order control of sequence.

Cortical Specializations for Sequence per se

A hundred years after Darwin, the neuropsychologists also began to emphasize sequencing. A. R. Luria noted (a modern review is Stuss & Benson, 1986, pp. 225-226) how patients with prefrontal damage had difficulty in planning a sequence of movements. For example, a patient is in bed with his arms under the covers. He is asked to raise his arm. He doesn't seem able to do so. But if you ask him to remove his arm from under the covers, he can do that. If you then ask him to raise it up and down in the air, he does it all correctly and smoothly. There's no paralysis (which would suggest damage to motor strip or below), and no difficulty with executing a fluent sequence (which would suggest premotor cortex damage) -- just a difficulty in planning the sequence, getting stuck on the complication of working around the obstacle of the confining bedcovers.

Further posteriorly, an overlap was demonstrated between language disorders and sequential hand movements. Doreen Kimura (1979) and her coworkers showed that aphasic stroke patients without any paralysis had trouble stringing together a sequence of hand movements, even though they could carry out each individual motion with ease. To use an everyday example (rather than the novel manual-brachial sequences they tested), such a patient might be able to turn a key in a lock, turn the doorknob, and push open the door -- but not all in one facile sequence. It was as if the stroke had damaged some sequencing machinery that was used for language but also for planning novel hand-arm sequences.

And then the neurosurgeon George Ojemann began repeating Wilder Penfield's classic brain stimulation studies in awake epileptic patients, using much more sophisticated tests of language and related sensory- and movement-sequencing functions (Ojemann, 1983; Ojemann & Creutzfeldt, 1987; Calvin & Ojemann, 1980). "Language cortex" includes such left frontal lobe regions as premotor and prefrontal, and extends posteriorly to include the parietal and temporal lobe regions surrounding the Sylvian fissure. While highly variable from patient to patient, the peri-Sylvian "core" of the cortex from which function could be disrupted was always related to sequencing. Tests of nonlanguage auditory sequences were disrupted from the same regions as tests of nonlanguage oral-facial movement sequences, suggesting that such cortex subserves both receptive and expressive sequencing. This "sequencing core" is typically surrounded by a peripheral region where stimulation disrupts short-term verbal memory or some other aspect of language.

So it would appear that the brain has some regions which are particularly specialized for generating and analyzing sequences, and that they may be used by multiple sensory and motor systems.

Why a Versatile Sequencer?

The left brain has some specializations for sequencing that go back at least as far as the monkeys -- we don't know how it evolved, but the left hemisphere seems to be better at deciphering a novel string of sounds than is the right hemisphere (reviewed by Falk, 1987). But there isn't much evidence that animal calls are sequenced for an additional meaning, except for the escalating loudness of monkey cries possibly indicating predator proximity.

On the motor side of things, it is easier to see why some neural sequencing machinery might be needed. Normally, it is not essential to make a detailed movement plan before starting to move. That is because goal-and-feedback works pretty well, even for most novel movements, as the brain gets "progress reports" along the way. But there are some movements which are too rapid for feedback to guide them: they are over-and-done-with by the time that the first feedback arrives. Compared to most electronic and hydraulic systems, neural feedback is quite slow: human reaction times require hundreds of milliseconds. One of the fastest loops is from arm sensors to spinal cord and back out to arm muscles: it takes 110 milliseconds for feedback corrections to be made to an arm movement (Cordo, 1987).

But dart throwing doesn't take much longer. So that feedback from joints and muscles is wasted -- you might use it to help you plan for the next time, but your arm is an unguided missile shortly after the throw has begun. You must plan perfectly as you "get set" to throw, create a chain of muscle commands, all ready to be executed in exactly the right order. The same is true for hammering, which both baboons and chimpanzees use effectively to crack open shells. You need something like a serial buffer memory in which to load up all the muscle commands in the right order and with the right timing relative to one another -- and then you pump them out blindly, without waiting for any feedback.
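The timing argument can be made concrete with a little arithmetic. The 110 ms feedback figure is from the text (Cordo, 1987); the throw duration below is an assumed, illustrative number, not a measurement:

```python
# Why ballistic movements cannot be feedback-guided: compare the
# duration of the launch phase of a throw with the feedback loop delay.
# FEEDBACK_DELAY_MS comes from the text (Cordo, 1987); the throw
# duration is an assumed figure for illustration only.

FEEDBACK_DELAY_MS = 110    # arm sensors -> spinal cord -> arm muscles
THROW_DURATION_MS = 120    # assumed duration of the arm's launch phase

correctable_ms = max(0, THROW_DURATION_MS - FEEDBACK_DELAY_MS)
# Only the last sliver of the movement could even in principle be
# corrected mid-flight, so the whole chain of muscle commands must be
# planned -- loaded into the serial buffer -- before movement begins.
```

Under these numbers, feedback could influence at most the final 10 ms of a 120 ms throw; everything else must be preplanned.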

Recall player pianos, where you punch an instruction roll that remembers which keys should be hit, and when. It may not look like a sheet of music (it is 88 channels high, one for each key) but it functions like one. The number of muscles that you need to control for throwing or hammering is similar, once you start counting up all those finger flexors and shoulder muscles.

To reprogram a player piano roll is a nuisance -- to make corrections requires tape and a punch at a minimum -- and to slow down an arpeggio in the middle of a long piece makes one wish for a recording medium that could be locally stretched a little. Since the brain is always doing little variations on a theme in improving its hammering or throwing repertoire, it has probably found a less awkward way of "recording motor tapes." Especially for throwing, where the distance to the target and the desired speed tend to vary widely with different targets -- and as a hunter, one is most often in situations never encountered before.

More than One Planning Track for Sequences?

As you get set to throw (at least in a situation less standardized than a basketball free-throw), the brain has to discard a lot of possibilities, narrow down the search to a few good candidates, then select one to be let loose. So at least two serial buffers are required, one to hold the current candidate train while it is graded by memories of past successes and failures, and another buffer to hold onto the best-so-far.

But the more planning tracks, the sooner a better candidate sequence might emerge: think of dozens or even hundreds of planning tracks in simultaneous operation. It would be like a railroad marshalling yard, many candidate trains vying for a chance to be let loose on the main track. You might keep the muscles inhibited (much as in dreaming sleep) until you have found a good-enough choice, then you could "play that tape", acting out that movement scenario (I suspect, however, that some reproduction steps intervene). Until then, you presumably shuffle the trains-in-formation after each round, occasionally substituting something linked in memory.

One obvious way for this to operate would be for the higher-rated trains to "reproduce themselves" (with, of course, copying errors on occasion) in the tracks of the trains rated poorly by the memorized environment. In this manner, the "best sequence" would also be the most common sequence (and attaining a certain percentage of the total tracks might represent the threshold for being the sequence being "let loose" on the main track of action).

In Fig. x.1, I show words being sequenced, to use a serial-order example that is more easily communicated than muscle command sequences. I show only four of the many planning tracks, each randomly assembling a train of words, grading it against memories of how similar sequences have performed in the past, keeping the best one or two sequences, and trying again. Such rounds of shaping up, as Dawkins (1986; ch. 3) showed for shaping up random letter sequences to a Shakespearian model, can quickly converge on a good-enough sequence.
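The rounds of shaping-up can be sketched as a Dawkins-style selection loop: many planning tracks each hold a candidate sequence, the better-rated candidates reproduce (with copying errors) into the tracks of the worse-rated ones, and the population converges on a good-enough sequence. The target string, track count, and mutation rate below are arbitrary choices for illustration, not claims about neural parameters:

```python
import random

# A minimal sketch of sequence shaping in the spirit of Dawkins (1986):
# candidate "trains" of letters are graded against a remembered target,
# and the higher-rated half reproduces (with copying errors) into the
# tracks of the lower-rated half. All parameters are illustrative.

TARGET = "THROW AT THAT RABBIT"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(seq):
    """Grade a candidate against the memorized 'environment'."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    """Copy a sequence with occasional copying errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def darwin_machine(tracks=50, rng_seed=42):
    random.seed(rng_seed)
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(tracks)]
    for generation in range(1000):
        population.sort(key=score, reverse=True)
        if population[0] == TARGET:
            return generation, population[0]
        # Higher-rated trains reproduce into the lower-rated tracks.
        half = tracks // 2
        population[half:] = [mutate(p) for p in population[:half]]
    return None, max(population, key=score)

generations, best = darwin_machine()
```

Because the top-rated candidates are kept unmutated each round, the best score never decreases, and the population climbs quickly toward the target -- far faster than blind random search through 27^20 possible strings.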

The Brain as a Darwin Machine

This should be beginning to sound familiar. What other process has lots of serially-coded individuals, the sequence judged by the environment, and some individuals correspondingly more successful at reproducing themselves (albeit with a little shuffling of the code)? Actually, there are now two good examples: 1) DNA strings called genes being shaped up by an environment of prey-predators-pathogens, and 2) amino acid strings called antibodies, shaped up during the immune response by the differential reproduction associated with finding a foreign protein to destroy. Shuffling the code a little seems to be far more important than inducing errors, e.g., the permutations of crossing-over during meiosis are the common source of new variants, not mutations per se.

As I have recently argued (Calvin, 1987, 1989), having many neural sequencers to select among gives the brain a very interesting property: brain processes can, in effect, simulate the process used by Darwinian evolution. They can shape up new serially-coded throws/thoughts/plans/sentences rather than new species or antibodies, using innocuous remembered environments rather than real-time noxious ones, operating in seconds rather than the days of the immune response or the millennia of species evolution. With this "Darwin Machine," the brain selects sequences of schemas. Some possible examples follow.

Language Implications of a Neural Sequencer

One can imagine a manual-brachial sequencer being adapted to simple kinds of language. A verb is usually a stand-in for a movement. And the targets of a ballistic movement are examples of nouns: "Throw at that rabbit!" or "Hit that nut." This seems just like the predicate of a sentence: the verb and its object. Spontaneous sign-language constructions in apes are often of this type, as are the earliest utterances of human children.

All animals exhibit associative learning, but such sequencing machinery might greatly augment associative abilities, quite without making any initial use of the exact ordering. All humans go through a stage of protolanguage (Bickerton, 1990) before the age of two, where the mere association of words serves to carry a simple message (and many a traveller reverts to protolanguage when the foreign-language phrase book fails). Were chimpanzees able to use combinations of their 36 standard vocalizations to convey nonstandard messages, they could acquire some of the power of pidgin languages of today.

Using a noun as the subject of a sentence-like construction seems a bit more novel, with less precedent in movement planning (where the subject is most often implied: oneself!). There is nothing particularly verbal about such sentences: "You go there" can be communicated with gestures, even eye movements. Having two nouns in the same sentence may create some ambiguity as to who is the actor and who is the acted-upon. If the nouns are of very different classes (a person and a place), there may be no ambiguity. Yet a construction such as "John called Julie," where both nouns are capable of playing either role, starts requiring syntax, such as the English convention that the word order in a simple declarative sentence is actor-action-object. "Julie called John" means something different from "John called Julie." The mere addition of word order conventions would have enormously expanded the number of possible messages that could be constructed.

Bickerton (1990) cautions that much of modern language requires more of a structural analysis than these examples of sequencing provide. Beyond the second year of life, most normal humans acquire an ability to construct sentences with lots of grammatical words (below, before, because...) and embedded phrases (He ate it with a spoon). Embedded phrases tend to be nestled like Russian dolls rather than sequenced like a string of beads, requiring what linguists call a phrase structure analysis. And argument structure serves to identify the thematic roles associated with a verb, as when "give" tends to require a "Who", "What", and "to Whom." The verb "sleep" merely sends one in search of a "Who" in the sentence.

To accomplish such modeling of the possible message, in a manner far more complex than mere word-order allows, suggests considerably more neural machinery for sequencing than two tracks. Alternative interpretations of ambiguous sentences would need to be maintained simultaneously for comparison against memories, and better models evolved. It is this level of language which needs a Darwin Machine in the brain, not simple association or word ordering.

And so sequencing machinery could play a triple role: 1) even a single sequencer augments the association of various schemas, 2) it can also allow some conventions about word order to be used to greatly expand the possible messages that can be constructed, and 3) combinations of sequencers can permit an evolution of mental models for "Who did what to whom?" that use episodic memories of similar situations and sequences. Phrase structure can be mimicked in a railroad-yard-like branching of sequencer tracks; argument structure (and the other six theorems of modern grammar) may simply be part of the judgements that shape the evolving model. Any remaining ambiguity in the message (as in Bickerton's example of The cow with the crumpled horn that Farmer Giles likes) can often be resolved from memories (perhaps Farmer Giles is known to collect horns, suggesting that it is the horn that he likes, rather than the cow).

Besides the issues associated with innovative secondary uses, we have the evolutionary issue of which function came first. Or at least, which was fastest: Fast tracks in evolution tend to preempt slow tracks. Even if the advantages of plan-ahead intelligence or embedded phrases might suffice to evolve the more elaborate neural sequencing machinery in the long run, might nut-cracking hammering have done it faster?

While we might agree that music ought to have so little feedback on natural selection as to be near the bottom of the list, the top of the list will surely be an interesting debate. Linguists will be able to envisage the advantages of improvements in representations and communications, and will suggest that language improvements ought to be a fast track. Behaviorists will probably favor problem-solving versatility as a fast evolutionary path. Since I'm a neurophysiologist originally concentrating on the neurons that make decisions about movements, my contribution to the debate will be an evolutionary scenario for why accurate throwing might be the fastest track of all.

Accurate Throwing as a Growth Industry

Many evolutionary improvements are "one-shot deals" that cannot be repeated for further improvement; in a sense, they are like college courses that cannot be repeated for additional credit. Yet some, like the Music Department's performance courses, can be taken again for credit, and so generate a significant growth curve. For example: To an aquatic mammal with sufficient fat, less hair seems to be better. And even less, better still. And so on until the growth curve reaches a plateau (one can only become so naked).

What is the hominid growth curve for hammering nuts? Given how skillful the chimpanzees studied by Boesch & Boesch (1981) seem to be (at least, at positioning the targets; the hammering sequence may not be as carefully varied), that curve might have been close to a plateau even in our common ancestor. Redoubling the profitability of nut-cracking probably wasn't done very many times during hominid evolution, if our common ancestor was chimplike in this aspect of behavior.

This is in contrast to throwing's growth curve. Chimpanzees mostly engage in threat throwing, trying to intimidate leopards, snakes, and fellow chimps; it seems to be an adjunct to flailing about with a branch, rocks being secondarily used. The great growth potential comes with converting the threat throw into a predatory throw (see Plooij, 1978, for one possible scenario). Action at a distance is an important invention for hunters, as it reduces the chances of injury from hooves, horns, and teeth -- but this is a good example of an improvement that cannot be repeated for additional advantage.

What can be improved is the distance of the throw: twice as far is always better, no matter how many times the distance redoubles. And because throwing twice as far usually means throwing twice as fast, the kinetic energy (or "stopping power" in military terms) quadruples as distance doubles.
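The quadrupling follows directly from the kinetic energy formula KE = ½mv², under the text's rule of thumb that doubling the distance means doubling the release speed. A quick check with an arbitrary projectile mass:

```python
# Kinetic energy of a thrown rock: KE = 0.5 * m * v**2.
# The mass and speeds below are arbitrary illustrative values.

def kinetic_energy(mass_kg, speed_m_per_s):
    return 0.5 * mass_kg * speed_m_per_s ** 2

m = 0.5                              # a half-kilogram rock (assumed)
ke_near = kinetic_energy(m, 10.0)    # baseline throw
ke_far = kinetic_energy(m, 20.0)     # twice the release speed

ratio = ke_far / ke_near             # "stopping power" quadruples
```

Doubling the speed from 10 to 20 m/s raises the energy from 25 J to 100 J -- a fourfold gain for a twofold improvement in the arm.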

Accuracy can also be improved time and again for additional advantage, as one comes to be able to hit smaller and smaller targets at the same distance. The supply of small birds and mammals is much more reliable than that of large mammals. Attaining sufficient accuracy to strike the head is an important improvement when dealing with medium-sized mammals.

Adding some technology to the rock throw further increases the productivity of predatory throwing. The line of improvement from spear to bow-and-arrow to crossbow is familiar to us, though archaeology suggests that these were the products of the last ice age and the present interglaciation; the line of improvement from the common ancestor through Homo erectus to early Homo sapiens has to be based largely on theory at this point.

A Throwing-based Evolutionary Scenario

In this exercise (which I hope that fans of tool-, language-, or intelligence-based hominid evolution will consider emulating, so as to facilitate comparisons), I will start with the skills that I will ascribe to our common ancestor with the chimps and bonobos. Thus I will assume that the basic ballistic skills were present (and in the case of hammering, even well-developed). For throwing, we don't know how accurate chimps really are; the anecdotes only record their occasional (perhaps haphazard) hits. I think we can assume that if chimps were accurate enough to bring down game with any regularity, we would have heard about it (given how much they enjoy fresh kills, they'd probably be the terror of Africa).

In evolution, changes in behavior tend to precede adaptations in anatomy, so what kind of behavioral improvement could a chimpanzee-like ape make that would be rewarded by a big improvement in diet? Or in defense? If we assume that they initially lack accuracy in throwing, this implies a big target or some special circumstance. And the waterhole provides both.

Transition from Chimplike Behaviors: I suggest that hominids practiced the familiar carnivore strategy: lying in wait at the waterhole for herds to come to drink. Since chimpanzees like to flail large tree branches around and then fling them, assume one threw such a branch into the midst of a herd that was busy drinking. Even before the branch landed, the herd would startle from the sudden movement. By the time that the branch landed somewhere in their midst, the herd would be wheeling around and beginning to stampede.

Even if the branch didn't knock an animal down, some animal might well become entangled with the branch and trip. Ordinarily, such an animal would quickly get back on its feet and successfully run away from the hunters. But in the midst of a stampede, it would likely be knocked down again by other herd animals in their panic. Some such animals are likely to be seriously injured; even if the hominid had merely intended to chase away competitors for scarce water, there might soon have been meat to scavenge.

Once the hominids got the idea of flinging branches to gain meat, the yield would have increased: the animal need not be fatally injured by the branch or stampeding herd. It need only be delayed enough in regaining its feet so that the hunters could apprehend it. Certainly chimpanzees have no trouble dispatching small pigs or monkeys, once they manage to lay hands upon them; one imagines the hominids starting with some herd animal smaller than zebra (Thomson's gazelles, for example) and tackling the larger ones after their finishing-them-off technique improved.

I suggest that the waterhole fling technique survives to the present day in the form of throwing sticks like the knobkerrie and boomerang, used to trip up a running animal; its virtue as a candidate for the earliest technique lies in its minimal requirements for accuracy and its continuity with known abilities and preferences of the chimpanzees. Flinging branches into herds at a waterhole would have made a good transition onto the bottom of the ladder of predatory throwing; the herd not only makes a large target (it doesn't matter which one is hit) but the herd "amplifies" the effect of the projectile by delay or injury of a downed animal. And the technique would be well-rewarded, what with a new-found ability to eat meat every day; one can imagine a "new niche" level of boom times resulting for the hominids that invented this technique.

Transition to Rock Throwing: Lacking a handy branch (and the technique might well exhaust the supply near a waterhole), chimps tend to throw a big rock. Thrown into a herd, it might well land between animals (and so be less effective than a large branch). But herds that are feeling threatened by predators tend to cluster together tightly, and this is especially notable upon visits to the waterhole. Even if you miss, it's easier next time!

A large rock landing on the back or side of a small grazing animal might well injure it sufficiently; however, all the rock really has to do is knock the animal off its feet and the stampeding herd will serve to delay its escape. For larger herd animals, however, one can imagine a rock as being frequently ineffective, except on the rare occasions when an animal is hit on the head. What might improve the technique?

Enter the Handaxe: A flat rock (a chunk of slate, for example) may spin in flight like a Frisbee. If thrown in the plane of spin, air resistance is minimized and distance increased. If it hits an animal this way, it may also do more damage, as all of the impact is concentrated in a narrow edge; the sharper the edge, the more damage to the skin.

By itself, this does not seem very promising. I mention it because it seems to describe the enigmatic Acheulean handaxe in several respects: nice spin symmetry, and sharpened edges all around. Those sharpened edges are the prime reason that "handaxe" is surely a misnomer: pounding with one would cause it to bite the hand that held it. While damaged versions of the classic shape might well serve a chopping function, surely the symmetry, the edging, and the point are indicative of some function that we have failed to comprehend. I suggest that classic handaxes may be better appreciated in the context of the waterhole ambush.

Following the suggestion of Jeffreys (1965) that handaxes might have been thrown with spin, e.g., into a flock of birds, O'Brien (1981, 1984) undertook to test the aerodynamic properties of a fiberglass replica of one of the very large Acheulean handaxes from Olorgesailie. Even when thrown by experienced athletes, the handaxe did exactly what Frisbees do when thrown by beginners: it turned from horizontal to vertical orientation in mid-flight and then landed on edge. I recently repeated her experiments with five heavily-weathered handaxes from the Sahara, with much the same results. Unlike Frisbees that roll along the ground for some distance, a spinning handaxe will soon impale its point in the ground and come to a sudden halt.

While of no significance when throwing into a flock of birds, I can see distinct advantages of such properties when lobbing a handaxe into a herd visiting a waterhole. An ordinary rock, lobbed into a tightly-packed herd, is most often going to hit an animal on its back or side. Generally the rock will then bounce off -- but if it were to somehow impale itself, it would transfer all of its momentum to the animal, rather than only a fraction.

Consider throwing a handaxe into the herd. It lands on edge, doing considerable damage to some animal's skin (whether or not the edge actually penetrates it). The handaxe spins forward and the point tends to snag the skin (catching a roll of skin pushed forward by the impact, even if it doesn't penetrate). Then the handaxe drops to the ground, its forward momentum exhausted since it was effectively transferred to the animal's back or side. This is far more likely to knock the animal off its feet than is the impact of an ordinary rock that bounces away.

Neurological Knock-down: While the foregoing might suffice, I have discovered (Calvin, 1990) some additional neurological reasons why even a small handaxe (and most will fit comfortably into the hand) might bring down a large animal: the skin damage tends to countermand the reflex commands that ordinarily would prevent toppling after impact.

As background, note that all mammals have righting reflexes; a blow pushing the animal to its right will tend to generate extension commands to the right legs. If the response is quick enough, the animal's slow rightward movement from the momentum transfer will be countered and the animal will manage to keep its center of gravity to the left of its right legs. Once the center of gravity passes the supports, the animal will collapse (the prime virtue of higher velocities and bigger rocks is that the rightward drift becomes so rapid that instability is reached before the reflex loop can operate).

But there are other reflexes which may overwhelm righting reflexes, e.g., the withdrawal reflexes which tend to create movement away from a site of injury, as when one lifts the leg after stepping on something sharp. Withdrawal reflexes depend on the site of injury; for a four-legged animal, injury to its back (which would most often come from an overhanging tree branch or rock outcrop) causes both hindlegs to flex, dropping the back down to disengage it from the overhang. Pain is the effective stimulus for a withdrawal reflex, and the handaxe's sharpened edge would surely cause much more pain than the featureless rock. Still, there are many examples (see Calvin, 1983a) of people not noticing skin cuts if busy with something else.

It is manipulation of the edges of an incision that causes the most requests from patients for a booster of local anesthetic. A handaxe that injured the skin, and then yanked hard on the damaged area (as would occur when the handaxe's point snagged the skin), seems likely to cause a massive flexion reflex of the hindlimbs. This alone might cause the animal to collapse, even if the sideways momentum transfer was minimal because the handaxe was small.

So the point need not penetrate, nor need the edges incise, the skin for such a use to be effective. Nor need the handaxe be heavy, or have a lot of horizontal velocity, as pushing the animal sideways is no longer the only way to make it collapse. (Needless to say, the hunter need not understand neurological knock-down mechanisms in order to have noticed that even a juvenile's small handaxe was effective, so long as it had a sharp edge and point). Still, this knock-down might merely result in a limping animal running away from the pursuing hunters -- were it not for the herd stampeding and delaying the downed animal. The context of the herd is still essential (which also means that the technique would be ineffective against fellow hominids -- unless they clustered together for protection), at least until the side-of-the-barn accuracy improves.

I suggest that the handaxe's characteristic features (flattened spin symmetry, all edges sharpened, and something of a point) make it an excellent projectile for waterhole predation, one that might have greatly increased the yield. That handaxes (Howell, 1961; Isaac, 1977; Kleindienst & Keller, 1976) are regularly found along watercourses and lake margins (as if lost in the mud like a golf ball; often they are standing on edge in situ) supports this explanation. While handaxes also make good fleshers (and broken ones may suffice as choppers), none of the other uses proposed for handaxes can explain all of the aforementioned characteristic features or the archaeological peculiarities. I would suggest that the Acheulean cleavers, and the handaxes without all-around edging, may be damaged versions of the classic design; if they were thought to be less effective in downing herd animals, they might well have been "retired" and converted to less demanding secondary uses such as chopping or fleshing. The handaxes found in lake beds are probably the ones that the hunter lost in the mud (Indian arrowheads in North America, as well as golf balls, similarly accumulate at waterholes). Thus the theoretical prediction: waterhole handaxes ought to be more completely edged than the ones found elsewhere.

Improvements Beyond the Handaxe Lob: The waterhole lob ought to work well on the savannah, with its herds and its obligate waterholes that must be visited during the dry season, predators or not. While flinging branches into such herds suggests a natural transition from chimpanzee-like behaviors, and while the handaxe might have been the "high technology" of this waterhole ambush technique, we must ask where the growth curve goes from there. There are some important inventions, such as persuading animals to injure themselves by stampeding over a cliff, but their growth curves are minor.

Longer and longer lobs would be useful as herds became wary, but the real growth curve lies with accuracy: being able to hit even a small herd. Or being able to prey on animals that do not cluster together so conveniently, and so do not compound the injury done to the downed animal. Or being able to prey on the smaller mammals and birds, which are far more plentiful (and which, when their numbers are depleted by heavy predation, can regain population numbers more quickly because of their shorter gestation times and juvenile periods).

The only problem is that accuracy is expensive, whether one is trying to hit a smaller target from the same distance or the same target from twice the distance. The reason lies in one of the fundamental differences between the electrical operation of our excitable cells (both nerve and muscle) and that of modern electronics: because of the way that ion channels through the cell membrane function, the cells that control our movements cannot time events with any great accuracy. They are jittery, irreducibly noisy.

And throwing has a crucial timing step that is not present in hammering or clubbing or kicking: the projectile must be released at just the right instant (Fig. x.2). Too early, and the launch angle will be too high, the projectile overshooting the target. Too late, and the projectile hits the ground in front of the target. We can speak of the "launch window" as that span of release times within which the projectile will hit the target somewhere between its top and bottom. For a throw at a rabbit-sized target from a distance of four meters (about the length of an automobile), this launch window is 11 ms wide. So at the end of a throw that started several hundred ms earlier, one must time the relaxation of the grip to stay within that 11 ms. The typical spinal motoneuron has intrinsic timing jitter of at least that much (Calvin & Stevens, 1968; Calvin, 1983b, 1986) when the cell is generating action potentials at 200 ms intervals.
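To make the mismatch concrete, here is a small Monte Carlo sketch (my own illustration, not from the chapter; the 11 ms figures are simply the ones quoted above). Release-time errors are drawn from a Gaussian with a single motoneuron's roughly 11 ms standard deviation and scored against an 11 ms launch window:

```python
import random

def hit_fraction(jitter_sd_ms, window_ms, trials=100_000, seed=42):
    """Fraction of simulated throws whose release error lands inside the
    launch window.  Release-time error is modeled as Gaussian with the
    given standard deviation; a throw "hits" if the error falls within
    +/- window/2 of the ideal release instant."""
    rng = random.Random(seed)
    half = window_ms / 2.0
    hits = sum(1 for _ in range(trials)
               if abs(rng.gauss(0.0, jitter_sd_ms)) <= half)
    return hits / trials

# A single timer with ~11 ms jitter, scored against the 11 ms window:
print(f"{hit_fraction(11.0, 11.0):.2f}")   # about 0.38 -- well short of reliable
```

A lone timer this jittery lands inside the window well under half the time, which is the problem that the averaging argument of the Law of Large Numbers section is meant to solve.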

But most people can, with practice, hit a rabbit-sized target at such a comfortable distance on perhaps half of the tries, so let us assume that the timing mechanisms are sufficient. Now move the target out to twice the distance, eight meters, and practice enough to again achieve hits on half the tries. The reason this is so hard is that the launch window shrinks by a factor of eight, to only 1.4 ms. The solid angle subtended by the target falls by a factor of four; furthermore, throwing twice as far with a reasonably flat trajectory means throwing twice as fast, so the time scale halves. An electronic instrument would have no trouble keeping its timing jitter within a 1.4 ms window over several hundred milliseconds. But neurons would probably have to be redesigned from scratch.
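The eightfold shrinkage follows a simple rule: each doubling of distance cuts the window by a factor of four from geometry and another factor of two from the doubled projectile speed, so the window scales as the inverse cube of the distance. A short sketch (my own illustration, anchored to the chapter's 11 ms figure for a 4 m throw):

```python
def launch_window_ms(distance_m, ref_distance_m=4.0, ref_window_ms=11.0):
    """Launch window implied by the inverse-cube scaling argument:
    a factor of four from the smaller angle the target subtends, times
    a factor of two because a flat trajectory at twice the range needs
    twice the speed.  Anchored at 11 ms for a 4 m throw."""
    return ref_window_ms * (ref_distance_m / distance_m) ** 3

for d in (4.0, 8.0, 12.0):
    print(f"{d:4.0f} m -> {launch_window_ms(d):.1f} ms")
```

Doubling the distance to 8 m yields the 1.4 ms window quoted above; tripling it to 12 m leaves well under half a millisecond.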

The Law of Large Numbers: Nature seldom redesigns, but it does one thing very well: it duplicates cells endlessly. And the way around cellular jitter seems to be simply to assign a great many neurons to the same task, timing the launch, and then to average their recommendations. Need to halve the jitter? Just use four times as many cells as originally sufficed. To double the throwing distance and maintain the hit rate, one needs an eight-fold reduction in jitter: that merely requires 64 times as many timing cells as originally sufficed. Triple the distance? Only 729 times as many cells.
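The arithmetic behind those numbers can be sketched directly (my own illustration of the chapter's scaling argument): averaging N independent jittery timers cuts the jitter by the square root of N, so an r-fold jitter reduction costs r² cells, and since the window shrinks as the cube of the distance, the cell count scales as the sixth power of the distance.

```python
def timing_cells(distance_factor, base_cells=1):
    """Timing cells needed to keep the same hit rate when throwing
    distance_factor times farther.  The launch window shrinks as
    distance**3, so jitter must shrink by r = distance_factor**3;
    averaging N timers cuts jitter by sqrt(N), so N = r**2."""
    jitter_reduction = distance_factor ** 3
    return base_cells * jitter_reduction ** 2

print(timing_cells(2))  # 64  -- double the distance
print(timing_cells(3))  # 729 -- triple the distance
```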

Now getting good enough to re-attain your hit rate at twice the distance is surely not a matter of growing a lot of new cells for the movement-control circuits. It seems more likely that cells are borrowed as one gets set to throw, the experts (probably the premotor cortex and the cerebellum) recruiting some temporary help from other brain regions, much as the choir in the Hallelujah Chorus temporarily borrows the audience.

Might, however, our four-fold increase in brain size have something to do with this Law of Large Numbers principle, our developmental genes growing more nerve cells as a result of throwing accuracy coming under natural selection for feeding families during hominid evolution? Alas, a four-fold increase in neuron numbers will only "buy" a 26 percent increment in throwing distance -- and the first 10 percent increase in neuron numbers (about what the natural variation in the human population might presently be) will only buy a 2 percent farther throw. And so I doubt that it was a simple matter of bigger-brains-throw-better; I suspect that recruiting the helpers is easier with a more juvenilized brain (Calvin, 1990), and that the bigger brain is simply a side-effect of neoteny, that there was not selection for size per se.
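Inverting the relation gives those deflating figures: if the cell count scales as the sixth power of distance, then distance scales as the sixth root of the cell count. A quick check (my own illustration):

```python
def distance_gain_percent(neuron_factor):
    """Percent increase in throwing distance bought by multiplying the
    neuron count, assuming cells scale as the sixth power of distance
    (so distance scales as the sixth root of the neuron count)."""
    return (neuron_factor ** (1.0 / 6.0) - 1.0) * 100.0

print(f"{distance_gain_percent(4.0):.0f}%")   # 26% for a four-fold increase
print(f"{distance_gain_percent(1.1):.0f}%")   # ~2% for a 10 percent increase
```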

Concluding Remarks

Many things have contributed to hominid evolution, and humans wouldn't be what we are today without effects from many of them. The inventions that would seem to make the most demands on brain reorganization, however, are accurate throwing and structured language.

Predatory throwing seems possible from such transition stages as my waterhole lobbing proposal; accurate throwing is not a single invention but a long climb, fortunately with a growth curve that seems unlimited. Eating meat regularly would have greatly expanded the hominid niche, allowing hominids to expand well outside the savannah, even into the temperate zone (where the major constraint is that the gathering is so poor in the winter months that either hoarding or meat-eating seems essential).

Throwing seems to be a fast track for hominid evolution; whether it is the fastest track to a versatile sequencer and its Darwin Machine secondary uses can be better judged when a similar analysis is available for language, intelligence, toolmaking, and other candidates. What are their transitions from apelike behaviors, their growth curves, their niche-expanding properties, their secondary uses?


I thank Derek Bickerton, Kathleen Gibson, and Katherine Graubard for comments on the manuscript; revision of the manuscript was greatly aided by conversations with Christopher Boehm, Christophe Boesch, David Brin, Iain Davidson, Daniel C. Dennett, Dean Falk, Susan Goldin-Meadow, Doug Hofstadter, Nick Humphrey, Ray Jackendoff, Adam Kendon, Marcel Kinsbourne, Philip Lieberman, Andy Lock, Peter MacNeilage, William McGrew, George Ojemann, Martin Pickford, Duane Rumbaugh, Vince Sarich, Sue Savage-Rumbaugh, Mark Sullivan, Nick Toth, Peter Warshall, Pete Wheeler, and Thomas Wynn.

Fig. x.1.

The basic design and operation of a Darwin Machine: