Eliasmith, C. (1996). The third contender: A critical examination of the dynamicist theory of cognition. Philosophical Psychology, 9(4), 441-463.

[Note: Page breaks in the publication are marked in this document
as italicized bold e.g. [** 446**] ]

[** 441**]

The third contender: a critical examination of the dynamicist theory of cognition*

CHRIS ELIASMITH

*Philosophy-Neuroscience-Psychology Program,
Department of Philosophy, Washington University in St. Louis,
Campus Box 1073, One Brookings Drive, St. Louis, MO 63130-4899,
chris@twinearth.wustl.edu*

In a recent series of publications, dynamicist researchers have proposed a new conception of cognitive functioning. This conception is intended to replace the currently dominant theories of connectionism and symbolicism. The dynamicist approach to cognitive modeling employs concepts developed in the mathematical field of dynamical systems theory. Dynamicists claim that cognitive models should be embedded, low-dimensional, complex, described by coupled differential equations, and non-representational. In this paper I begin with a short description of the dynamicist project and its role as a cognitive theory. Subsequently, I determine the theoretical commitments of dynamicists, critically examine those commitments, and discuss current examples of dynamicist models. In conclusion, I determine dynamicism's relation to symbolicism and connectionism and find that the dynamicist goal of establishing a new paradigm has yet to be realized.

Since the emergence of connectionism in the 1980s, connectionism
and symbolicism have been the two main paradigms of cognitive
science (Bechtel & Abrahamsen 1991). However, in recent years,
a new approach to the study of cognition has issued a challenge
to their dominance; that new approach is called *dynamicism*.
There have been a series of papers and books (Globus 1992; Robertson,
et al. 1993; Thelen and Smith 1994; van Gelder 1995; van Gelder
and Port 1995) that have advanced the claim that cognition is
not best understood as symbolic manipulation or connectionist
processing, but rather as complex, dynamical interactions of a
cognizer with its environment. Dynamicists have criticized both
symbolicism and connectionism, dismissed these theories of cognition,
and proposed instead a "radical departure from current cognitive
theory," one in which "there *are* no structures" and "there
*are* no rules" (Thelen and Smith 1994, p. xix, italics added).

Dynamicism arose because many powerful criticisms which the
symbolicist and connectionist paradigms leveled at one another
remained unanswered (Bechtel & [** 442**] Abrahamsen
1991; Fodor and McLaughlin 1990; Fodor and Pylyshyn 1988; Smolensky
1988); it seems there must be a better approach to understanding
cognition. But, more than this, there are a number of issues which
dynamicists feel are inadequately addressed by either alternative
approach.

Dissatisfaction with the symbolicist approach

Symbolicism is most often the approach against which dynamicists
rebel (van Gelder and Port 1995). Dynamicists have offered a number
of clear, concise reasons for rejecting the symbolicist view of
cognition. The symbolicist stance is well exemplified by the work
of Newell, Chomsky, Minsky and Anderson (van Gelder & Port
1995, p. 1). However, Newell and Simon (1976) are cited by van
Gelder as having best identified the *computationalist hypothesis*
with the following *Physical Symbol System Hypothesis* (Newell
1990, pp. 75-77; van Gelder & Port 1995, p. 4):

Natural cognitive systems are intelligent in virtue of being physical symbol systems of the right kind.

Similarly, though for more obscure reasons, dynamicists wish
to reject the connectionist view of cognition. Churchland and
Sejnowski espouse a commitment to the connectionist view with
the hypothesis that "emergent properties are high-level effects
that depend on lower-level phenomena in some systematic way"
(Churchland and Sejnowski 1992, p. 2). As a result, they are committed
to a low-level neural-network type of architecture to achieve
complex cognitive effects (Churchland and Sejnowski 1992, p. 4).
These same commitments are echoed by another connectionist, Smolensky,
in his version of the *connectionist hypothesis* (1988, p.
7)**1**:

The intuitive processor is a subconceptual connectionist dynamical system that does not admit a complete, formal, and precise conceptual-level description.

Or, to rephrase:

Natural cognitive systems are dynamic neural systems best understood as subconceptual networks.

However, dynamicists wish to reject both of these hypotheses
in favor of an explicit commitment to understanding cognition
as a dynamical system. Taken at its most literal, the class of
*dynamical systems* includes any systems which change through
time. Clearly, such a definition is inadequate, since both connectionist
networks and symbolicist algorithms are dynamic in this sense
(Giunti (1991) as cited in van Gelder 1993). Thus, dynamicists
wish to delineate a specific *type* of dynamical system that
is appropriate to describing cognition. This is exactly van Gelder's
contention with his version of the *Dynamicist Hypothesis*
(1995, p. 4): [** 443**]

Natural cognitive systems are certain kinds of dynamical systems,
and are best understood from the perspective of dynamics.

The hypothesis suggests the heavy reliance of dynamicism on
an area of mathematics referred to as *dynamical systems theory*.
The concepts of dynamical systems theory are applied by dynamicists
to a description of cognition. Mathematical ideas such as *state
space*, *attractor*, *trajectory*, and *deterministic
chaos* are used to explain the internal processing which underlies
an agent's interactions with the environment. These ideas imply
that the dynamicist should employ systems of differential equations
to represent an agent's cognitive trajectory through a state space.
In other words, cognition is explained as a multi-dimensional
space of all possible thoughts and behaviors that is traversed
by a path of thinking followed by an agent under certain environmental
and internal pressures, all of which is captured by sets of differential
equations (van Gelder and Port 1995). Dynamicists believe that
they have identified what *should* be the reigning paradigm
in cognitive science, and have a mandate to prove that the dynamicist
conception of cognition is the correct one to the exclusion of
symbolicism and connectionism.

Through their discussion of the *dynamicist hypothesis*,
dynamicists identify those "certain kinds" of dynamical
systems which are suitable to describing cognition. Specifically,
they are: "state-determined systems whose behavior is governed
by differential equations... Dynamical systems in this strict
sense always have variables that are evolving continuously and
simultaneously and which at any point in time are mutually determining
each other's evolution" (van Gelder and Port 1995, p. 5)
-- in other words, systems governed by coupled nonlinear differential
equations. Thus the *dynamicist hypothesis* has determined
that a dynamicist model must exhibit a number of component behaviors:
it must be deterministic; generally complex; described with
respect to the independent variable of time; of low dimensionality;
and intimately linked (van Gelder 1995; van Gelder and Port 1995).
Before discussing what each of these component behaviors means
for the dynamicist view of cognition, we need to briefly examine
the motivation behind the dynamicist project -- dynamical systems
theory.

The branch of mathematics called *dynamical systems theory*
describes the natural world with essentially geometrical concepts.
Concepts commonly employed by dynamicists include: *state space*,
*path* or *trajectory*, *topology*, and *attractor*.
The *state space* of a system is simply the space defined
by the set of all possible states that the system could ever pass
through. A *trajectory* plots a particular succession of
states through the state space and is commonly equated with the
*behavior* of the system. The *topology* of the state
space describes the "attractive" properties of all points
of the state space. Finally, an *attractor* is a point or
path in the state space towards [** 444**] which the
trajectory will tend when in the neighborhood of that attractor.
Employing these concepts, dynamicists can attempt to predict the
behavior of a cognitive system if they are given the set of governing
equations (which will define the state space, topology and attractors)
and a state on the trajectory. The fact that dynamical systems
theory employs a novel set of metaphors for thinking about cognition
is paramount. Black's emphatic contention that science must start
with metaphor underlines the importance of addressing new metaphors
like those used by dynamicists (Black 1962). These metaphors may
provide us with a perspective on cognition that is instrumental
in understanding some of the problems of cognitive science.
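These geometrical concepts can be made concrete with a toy system. The sketch below (plain Python; the damped-oscillator equations, damping constant, and step size are illustrative choices of mine, not drawn from the dynamicist literature) traces a trajectory through a two-dimensional state space whose origin is a point attractor.

```python
# A toy state space: a damped oscillator in the (position, velocity) plane.
#   dx/dt = v
#   dv/dt = -x - c*v     (c is an illustrative damping constant)
# The origin is a point attractor: trajectories in its neighborhood tend toward it.

def trajectory(x0, v0, c=0.5, dt=0.01, steps=5000):
    """Euler-integrate the system, returning the succession of states (the trajectory)."""
    states = [(x0, v0)]
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - c * v)
        states.append((x, v))
    return states

path = trajectory(1.0, 0.0)
x_final, v_final = path[-1]
print(abs(x_final) < 0.01 and abs(v_final) < 0.01)  # -> True: the path has settled onto the attractor
```

Given the governing equations and one state on the trajectory, the rest of the path is determined, which is precisely the predictive setup described above.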

The practical and theoretical advantages of dynamical systems theory descriptions of cognition are numerous. The most obvious advantage is that dynamical systems theory is a proven empirical theory. Thus, the differential equations used in formulating a description of a cognitive system can be analyzed and (often) solved using known techniques. One result of having chosen this mathematical basis for a description of cognition is that dynamicists are bound to a deterministic view of cognition (see section 2.0; Bogartz 1994, pp. 303-4).

As well, the disposition of dynamical descriptions to exhibit complex and chaotic behavior is generally considered by dynamicists as an advantage. Dynamicists convincingly argue that human behavior, the target of their dynamical description, is quite complex and in some instances chaotic (van Gelder 1995; Thelen and Smith 1994).

Dynamical systems theory was designed to describe continuous
temporal behaviors; thus the dynamicist commitment to this theory
provides a natural account of behavioral continuity. Though
the question of whether intelligent behavior is continuous
or discrete is a matter of great debate among psychologists (Miller
1988; Molenaar 1990), dynamical systems models possess the ability
to describe both. So, relying on the assumption that behavior
is "pervaded by *both* continuities and discrete transitions"
(van Gelder and Port 1995, p. 14) as seems reasonable (Churchland
and Sejnowski 1992; Egeth and Dagenbach 1991; Luck and Hillyard
1990; Schweicker and Boggs 1984), dynamicism is in a very strong
position to provide good cognitive models based on its theoretical
commitments.

Fundamentally, dynamicists believe that the other approaches
to cognition "leave time out of the picture" (van Gelder
and Port 1995, p. 2). They view the brain as continually changing
as it intersects with information from its environment. There
are no representations, rather there are "state-space evolution[s]
in certain kinds of non-computational dynamical systems"
(van Gelder and Port 1995, p. 1). The temporal nature of cognition
does not rely on "clock ticks" or on the completion
of a particular task, rather it is captured by a *continual*
evolution of interacting system [** 445**] parts which
are always reacting to, and interacting with, the environment and
each other. These temporal properties can be captured with relatively
simple sets of differential equations.

In order to avoid the difficult analyses of high-dimensional
dynamical systems, dynamicists have claimed that accurate descriptions
of cognition are achievable with low-dimensional descriptions.
The aim of dynamicists is to "provide a *low-dimensional*
model that provides a scientifically tractable description of
the same qualitative dynamics as is exhibited by the high-dimensional
system (the brain)" (van Gelder and Port 1995, p. 28).

The dimension of a dynamical systems model is simply equal to the number of parameters in the system of equations describing the model's behavior. Thus, a low-dimensional model has few parameters and a high-dimensional model has many. Equivalently, the dimensionality of a system is the size of its state space: each axis of the state space corresponds to the set of values a particular parameter can take.

The low dimensionality of dynamicist systems is a feature which
contrasts the dynamicist approach with that of the connectionists.
By noting that certain dynamical systems can capture very complex
behavior with low dimensional descriptions, dynamicists have insisted
that complex *cognitive* behavior should be modeled via this
property. Thus, dynamicists avoid the difficult analyses of high
dimensional systems, necessary for understanding connectionist
systems. However, it also makes the choice of equations and variables
very difficult (see section 3.3).

The linked, or *coupled*, nature of a system of equations
implies that changes to one component (most often reflected by
changes in a system variable) have an immediate effect on other
parts of the system. Thus, there is no representation passing
between components of such a system, rather the system is linked
via the inclusion of the same parameter in multiple equations.
The ability of such systems of equations to model "cognitive"
behaviors has prompted theorists, like van Gelder, to insist that
the systems being modeled similarly have no need of representation
(van Gelder and Port 1995; van Gelder 1995). In a way, "coupling"
thus replaces the idea of "representation passing" for
dynamicists.
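The role of shared variables can be sketched in a few lines (the two equations and the coupling strength `a` are hypothetical, chosen only to make the point):

```python
# Two coupled equations: y appears in x's equation and x in y's, so a change
# in either variable immediately alters the other's rate of change. Nothing
# is "passed" between the components; they simply share variables.
# (The equations and the coupling strength a are hypothetical illustrations.)

def derivatives(x, y, a=0.3):
    dx = -x + a * y   # x's evolution depends directly on y
    dy = -y + a * x   # y's evolution depends directly on x
    return dx, dy

dx_before, _ = derivatives(x=1.0, y=0.0)
dx_after, _ = derivatives(x=1.0, y=2.0)   # perturb y only
print(dx_before, dx_after)  # -> -1.0 -0.4: x's rate of change shifts instantly
```

Perturbing `y` changes `dx` on the very same evaluation; no token or representation travels from one equation to the other.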

Dynamicist systems also have a special relation with their
environment in that they are not easily distinguishable from their
surroundings: "In this vision, the cognitive system is not
just the encapsulated brain; rather, since the nervous system,
body, and environment are all constantly changing and simultaneously
influencing each other, the true cognitive system is a single
unified system embracing all three" (van [** 446**]
Gelder 1995, p. 373). Since the environment is also a dynamical
system, and since it is affecting the cognitive system and the
cognitive system is affecting it, the environment and cognitive
system are strongly coupled.

The power of dynamical systems theory to provide useful descriptions of natural phenomena has been demonstrated through its application to many non-cognitive phenomena on various scales, ranging from microscopic fluid turbulence and cell behavior, to macroscopic weather patterns and ecosystems. Still, the question remains: Why should we apply these tools to a cognitive system? Why should we accept the claim that "cognitive phenomena, like so many other kinds of phenomena in the natural world, are the evolution over time of a self-contained system governed by differential equations" (van Gelder and Port 1995, p. 6)?

A dynamicist advances this claim because of the embeddedness
and obvious temporal nature of cognitive systems (van Gelder and
Port 1995, p. 9). The omnipresence of embedded, temporal cognitive
systems leads van Gelder and Port to conclude that dynamical descriptions
of cognition are not only necessary, but also sufficient for an
understanding of mind: "...whenever confronted with the problem
of explaining how a natural cognitive system might interact with
another system which is essentially temporal, one finds that the
relevant aspect of the cognitive system itself *must* be
given a dynamical account" (van Gelder and Port 1995, p.
24, italics added). This strong commitment to a particular form
of modeling has resulted in the dynamicists claiming to posit
a new "paradigm for the study of cognition" (van Gelder
and Port 1995, p. 29) -- not, notably, an extension of either
connectionism or symbolicism, but a *new paradigm*.
Thus, the dynamicists are insisting that there is an inherent
value in understanding cognition as dynamical *instead of*
connectionist or symbolicist (van Gelder and Port 1995).

One of the greatest strengths of the mathematics of dynamical
systems theory is its inherent ability to effectively model complex
temporal behavior. It is a unanimous judgement among the paradigms
that the temporal features of natural cognizers must be adequately
accounted for in a good cognitive model (Newell 1990; Churchland
and Sejnowski 1992; van Gelder and Port 1995). Not only do dynamicists
address the temporal aspect of cognition, they make this aspect
*the most important*. The reasons for espousing this theoretical
commitment are obvious: we humans exist in time; we act in time;
and we cognize in time -- *real* time. [** 447**]
Therefore, dynamical systems theory, which has been applied successfully
in other fields to predict complex temporal behaviors, should
be applied to the complex temporal behavior of cognitive agents.
Whether or not we choose to subscribe to the dynamicist commitment
to a particular type of dynamical model, they convincingly argue
that we cannot remove temporal considerations from our models
of cognition -- natural cognition is indeed inherently temporal
in nature.

Dynamicists have often pointed to their temporal commitment
as the most important (van Gelder and Port 1995, p. 14). Unfortunately,
it is not clear that dynamicists have a monopoly on good temporal
cognitive models. In particular, connectionists have provided
numerous convincing models of sensorimotor coordination, sensorimotor
integration and rhythmic behaviors (such as swimming) in which
they "embrace *time*" (Churchland and Sejnowski
1992, p. 337). If dynamicists do *not* have this monopoly,
it will be difficult to argue convincingly that dynamicism should
properly be considered a new paradigm.

The intuitive appeal of a dynamical systems theory description
of many systems' behaviors is quite difficult to resist. It simply
makes sense to think of the behavior of cognitive systems in terms
of an "attraction" to a certain state (*e.g.* some
people seem to be disposed to being happy). However, can such
metaphorical descriptions of complex systems actually provide
us with new insights, integrate previously unrelated facts, or
in some other way lead to a deeper understanding of these systems?
In other words, can dynamical descriptions be more than metaphorical
in nature?

In order to answer this question in the affirmative, we must be able to show the potential for new predictions and explanations. The dynamicist analogy between cognition and dynamical systems theory (see section 2.0) is compelling, but is it predictive and explanatory? We cannot allow ourselves to accept new concepts and theories which do not deepen our understanding of the system being modeled: "[even though] dynamical concepts and theory are seductive, we may mistake translation for explanation" (Robertson, et al. 1993, p. 119).

Philosopher of science Mary Hesse has noted that theoretical models often rely on this sort of analogy to the already familiar (1988, p. 356):

[Theoretical models] provide explanation in terms of something already familiar and intelligible. This is true of all attempts to reduce relatively obscure phenomena to more familiar mechanisms or to picturable non mechanical systems...Basically, the theoretical model exploits some other system (such as a mechanism or a familiar mathematical or empirical theory from another domain) that is already well known and understood in order to explain the less well-established system under investigation.

Clearly, this tack is the one that dynamicists have taken.
They are attempting to address the obscure and poorly understood
phenomena of cognition in terms of the [** 448**] more
familiar mathematical theory of dynamical systems, which has been
successfully applied to complex mechanical and general mathematical
systems.

However, simply providing an analogy is not enough (Robertson, et al. 1993, p. 119):

There is a danger, however, in this new fascination with dynamic theory. The danger lies in the temptation to naively adopt a new terminology or set of metaphors, to merely redescribe the phenomena we have been studying for so long, and then conclude that we have explained them.

In science it is necessary to provide a *model*. A model
is not simply a resemblance, but rather a precise description
of the properties of the system being modeled; the more of the
source's important properties the model exactly captures, the
better the model. So, to differentiate between model
and analogy in science, one can determine if the mapping of these
important properties is explicit, leaving no room for interpretation;
if so, one is dealing with a model. In this sense, a model can
be thought of as "a kind of controlled metaphor" (Beardsley
1972, p. 287)**2**.
Thus, where an analogy may consist of the statement: "The
atom is like the solar system" -- leaving room for the listener
to fill in the details, and possibly to infer wrongly that the
orbits of electrons and planets are similar -- a model would consist
of a picture, physical prototype, or mathematical description
in which each element of the source would be explicitly represented
by some particular aspect of the model. In other words, a model
presents a precisely constrained analogy.

An excellent example of the mistake of considering analogical
application of dynamical systems theory and concepts to be a valid
model can be found in Abraham, et al. (1994). Specifically, their
dynamical descriptions of behavior apply the concepts of dynamical
systems theory to a Jungian analysis of human behavior. However,
applying these concepts in such a metaphorical manner simply seems
to relate the phenomena in a new way. There is no rigor added
to their model simply because the chosen metaphor is mathematical.
They have provided a metaphor, not a model. Barton duly notes
that in the paper describing one such dynamical model of a Jungian
hypothesis, Abraham *et al.* "imply a level of measurement
precision we don't have in clinical psychology" (Barton 1994,
p. 12).

Often, clinical psychologists applying dynamics to their field ignore the differences between their field and the rigorous ones from which dynamical systems theory arose: "One way that the distinction between fields is set aside is when authors use rigorous terminology from nonlinear dynamics to refer to psychological variables that are multidimensional and difficult to quantify" (Barton 1994, p. 12). For example, some psychologists have equated the dynamical concept of chaos with overwhelming anxiety, others with creativity, and still others with destructiveness (Barton 1994). These diverse applications of the concept of chaos are clearly more metaphorical than rigorous, and bear little resemblance to the definitions used in precise dynamical systems theory models.
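In dynamical systems theory, by contrast, chaos names a precise, testable property such as sensitive dependence on initial conditions. A minimal sketch makes the property checkable (the logistic map at its standard chaotic parameter value is a textbook illustration, not an example from the clinical literature discussed above):

```python
# "Chaos" in the strict sense: sensitive dependence on initial conditions.
# The logistic map x_{n+1} = r*x*(1-x) at r = 4.0 is a textbook chaotic system.

def orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-10)   # start a hair's breadth away
gap = max(abs(p - q) for p, q in zip(a[40:], b[40:]))
print(gap > 0.1)  # -> True: a 1e-10 discrepancy has grown to macroscopic size
```

Nothing resembling this quantitative behavior is picked out when "chaos" is equated with anxiety or creativity.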

For this reason, there is no real *explanation* provided
by such psychological applications of dynamical systems theory
to the phenomenology or intentionality of cognition. These supposed
"models" are simply metaphorical *descriptions*;
they [** 449**] advance no new insights in clinical psychology.
They do not reveal any details about what is being described.

From the standpoint of cognitive models, there is not a lot of value in such descriptions. We cannot generate a rigorous explanatory model, nor produce computational simulations from metaphor, so we are not able to discover if the models are predictive. This is a serious failure for any scientific model (Cartwright 1991; Hesse 1972; Koertge 1992; Le Poidevin 1991). Of course, it is possible to haphazardly generate a model which produces data that seems appropriate, but since we have no explicit map between the concepts of clinical psychology and those of dynamical systems theory, the data is meaningful only in its mathematical context, not in a cognitive one.

Even in the most rigorous of dynamical models, such as the Skarda and Freeman (1987) model of the rabbit olfactory bulb, extending dynamical systems theory concepts beyond the metaphorical still proves difficult: "Given this broad picture of the dynamics of this neural system we can sketch a metaphorical picture of its multiple stable states in terms of a phase portrait" (Skarda and Freeman 1987, p. 166). Despite the application of nonlinear differential equations in their model, when it comes time to show how the model relates to cognition, a metaphorical description is employed.

The concepts of dynamical systems theory provide an interesting
method of thinking about cognitive systems, but they have not
yet been shown to be successfully transferable to rigorous definitions
of human behavior or cognition. The "haziness" of clinical
psychology does not allow for quantification of mechanisms in
dynamical systems theory terms. Furthermore, even some physiological
processes do not seem to lend themselves to precise quantitative
dynamicist descriptions that are able to provide the predictive
or explanatory powers expected of good models (*cf.* van
Geert, 1996).

In providing any dynamical systems theoretic model, one must provide a set of differential equations. These equations consist of constants and parameters (or variables). In a simple equation describing the motion of a pendulum, for example, the current arm angle is a parameter, since it changes as the pendulum swings, whereas the gravitational field is a constant, since it remains the same no matter the position of the pendulum.
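The distinction can be sketched directly (a minimal explicit-Euler integration in Python; the gravitational value, arm length, and release angle are ordinary textbook choices, not taken from the paper):

```python
import math

# Undamped pendulum. The state variables (the "parameters" above) theta and
# omega change on every step; G and L never do.

G = 9.81   # gravitational acceleration (m/s^2) -- a constant
L = 1.0    # arm length (m) -- a constant

def step(theta, omega, dt=0.001):
    """One explicit-Euler step of d(theta)/dt = omega, d(omega)/dt = -(G/L)*sin(theta)."""
    return theta + dt * omega, omega + dt * (-(G / L) * math.sin(theta))

theta, omega = math.pi / 6, 0.0   # released from 30 degrees, at rest
for _ in range(1000):             # simulate one second of swinging
    theta, omega = step(theta, omega)
print(round(theta, 3))            # arm angle after one second (radians)
```

Here `theta` and `omega` are updated a thousand times while `G` and `L` appear unchanged in every evaluation.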

One reason that it has been so difficult for dynamicists to
provide good cognitive models is that they have been unable to
meet the challenge of identifying and quantifying the parameters
sufficiently for a dynamical model. It is extremely difficult,
if not impossible, to simply examine a complex cognitive system
and select which behaviors are appropriately mapped to parameters
to be used in a dynamical model (Robertson, et al. 1993, p. 142):
[** 450**]

The central dilemma faced by any experimentalist hoping to apply dynamic systems theory is ignorance, in particular, ignorance of the state variables...

This is a common problem in investigating complex natural nonlinear
systems: "Not only are investigators rarely able to completely
characterize all the variables that affect a complex system, but
they must isolate a system well enough to cut through what Morrison
(1991) called a 'sea of noise'" (Barton 1994, p. 10). Dynamicists
must realize that the natural systems they wish to model (*i.e.*
cognitive systems) are among the most complex systems known.

When it is difficult to define or even distinguish individual
cognitive behaviors (and thus parameters), and similarly challenging
to find the signal of interest in ambient noise, it is common
practice for dynamicists to define *collective parameters* (also
referred to as order parameters) (Thelen and Smith 1994). A collective
parameter is one which accounts for macroscopic behaviors of the
system. In other words, many behaviors which *could* be identified
with unique system parameters are "collected" into a
group, and the overall behavior of that group is represented by
a single system parameter: the collective parameter. Consequently,
assigning a meaning or particular interpretation to a collective
variable becomes very difficult to justify. Barton (1994) attributes
this practice to a confusion of techniques between levels of analysis.
Clearly, if the meaning of a parameter cannot be determined, it
becomes next to impossible to test a model, or to verify hypotheses
derived from observing the behavior of the model. In other words,
having identified a parameter which controls a macroscopic behavior
of an equation describing a system does not mean that the parameter
can be interpreted in the context of the system being described**3**. Thus, it is difficult
to determine if such a parameter provides an explanation of the
mechanisms at work in the system or what, precisely, the relation
between the parameter and the original system is.

However, dynamicists have a mandate to "provide a *low-dimensional*
model that provides a scientifically tractable description of
the same qualitative dynamics as is exhibited by the high-dimensional
system (the brain)" (van Gelder and Port, 1995, p. 28). The
only feasible way to generate low-dimensional models of admittedly
high-dimensional systems is to use collective parameters. Thus,
dynamicists must reconsider their criterion of accepting only low-dimensional
models as valid models of cognition.

By adopting a purely dynamicist approach and thus necessitating
the use of collective parameters, it becomes impossible to identify
the underlying mechanisms that affect behavior. In contrast, connectionism
provides a reasonably simple unit (the neuron or node) to which
behavior can ultimately be referred. Similarly, symbolicism provides
fundamental symbols to which we can appeal. In both of these instances,
understanding global behavior is achieved through small steps,
modeling progressively more complex behavior and allowing a "backtrace"
when necessary to explain a behavior. With dynamical equations,
on the other hand, no such progression can be made. The model
is so general that it loses the ability to explain
where the behaviors it produces come from. [** 451**]

Perhaps the best solution to the difficulties involved in using collective parameters is simply not to use them. Unfortunately, this solution does not help to avoid an important new problem, one which is a consideration for dynamicist models that use collective parameters as well: the problem of system boundaries.

Dynamicists claim that through their critique of the current state of cognitive science, they are challenging a conceptual framework which has been applied to the problem of cognition since the time of Descartes. Rather than a Cartesian distinction between the cognizer and its environment, dynamicists see "the human agent as essentially embedded in, and skillfully coping with, a changing world" (van Gelder and Port 1995, p. 39) (see section 2.0). Thus, dynamicists feel that it is unnatural to distinguish a cognitive system from its environment.

To begin, let us assume that we are attempting to construct
a dynamicist model with a number of parameters, let us say *n*
of them. Thus, we will need an *n*-tuple that we can use
(we hope) to completely characterize the behavior of the system
we are modeling. Of course, these *n* parameters are contained
in coupled, nonlinear, differential equations. As an example let
us assume we are to model a human cognizer; let us think for a
moment about the complexity of the model we are attempting to
construct.

The human brain contains approximately one trillion connections
(Pinel 1993). Furthermore, the number of parameters *affecting*
this system seems almost infinite. Remember, we must account for
not only the cognitive system itself, but all environmental factors
as there are no discernible system boundaries on the cognitive
system. The environment, which must be coupled to our human cognizer,
consists not only of other provably chaotic systems like weather,
ocean currents, and species populations but billions of other
brains (let alone the artificial systems we interact with every
day, or the planets, moons, stars, *etc*.). Having put ourselves
in the place of the dynamicist, it seems we have the impossible
task of characterizing a nearly limitless system. Thus we will,
for argument's sake, assume the number of parameters is large, but
also finite. This same result could be arrived at by using collective
parameters (see section 3.3).

So, we now have a large, finite *n*-tuple of parameters
for the equations describing our complex system. Is such a coupled
nonlinear differential equation description of human behavior
of any value? Using even the most advanced numerical methods
and the most powerful computers, such a problem would probably
be unsolvable. So let us assume infinite computing power, thereby
guaranteeing that we can solve our system of equations.
However, before we can solve our system, we must ask: What can
we use for initial conditions?

Is it feasible for us to measure *n* starting
conditions for our model with any kind of precision? Unlikely,
but let us assume once more that we have *n* initial conditions
of sufficient accuracy. What kind of answer can we expect? Of
course, we will get an *n* dimensional trajectory through
an *n* dimensional state space. How can we possibly interpret
such output? At this point, it seems that an interpretation of
such a trajectory becomes, if not impossible, meaningless. There
is [** 452**] absolutely no way to either 'un-collect'
parameters, or to find out exactly what it means for the system
to move through the trillion dimensional (to be conservative)
state space.
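The predicament can be made concrete with a toy calculation. The following sketch (in Python, purely for illustration) numerically integrates a small coupled nonlinear system -- here the standard Lorenz equations, with three state variables standing in for three collective parameters; nothing cognitive is being modeled. Even at *n* = 3, the solution is nothing but a bare trajectory through state space, and nothing in the output says what a point in that space means:

```python
# An illustrative stand-in for the dynamicist's n-tuple: the Lorenz
# system, a standard coupled nonlinear system of differential equations
# with n = 3 state variables. The parameter values are the conventional
# textbook ones, not anything cognitively meaningful.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the coupled equations by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(initial, steps):
    """Numerically 'solve' the system: the answer is just a path of
    n-dimensional points through the state space."""
    path = [initial]
    for _ in range(steps):
        path.append(lorenz_step(path[-1]))
    return path

path = trajectory((1.0, 1.0, 1.0), 5000)
print(path[-1])  # an uninterpreted point in a 3-dimensional state space
```

With a trillion dimensions in place of three, and collective parameters in place of x, y and z, the interpretive problem described above only worsens.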

As our attempt to construct a dynamicist model progresses, it becomes more and more difficult to continue justifying further assumptions. However, such assumptions are necessary in light of the dynamicist hypothesis and its commitments to a "certain type" of dynamical model which is both low-dimensional and completely embedded in its environment. Perhaps these commitments should be reexamined.

An important distinction between dynamicism and either symbolicism
or connectionism is the dynamicists' unique view of representation:
to be a truly dynamicist model, there should be *no representation*.
In contrast, symbolicist models are fundamentally dependent on
symbolic representations, so by dynamicist lights they are clearly
inadequate. Similarly, connectionists represent concepts (via either
distributed or local symbolic representation) in their simplified networks.
But dynamicists decry the use of representation in cognitive models
(Globus 1992; Thelen and Smith 1994; van Gelder 1993, 1995).

In a criticism of connectionism, Globus concludes: "It
is the processing of representations that qualifies simplified
nets as computational (*i.e.* symbolic). In realistic nets,
however, it is not the representations that are changed; it is
the self-organizing *process* that changes via chemical modulation.
Indeed, it no longer makes sense to talk of 'representations'"
(Globus 1992, p. 302). Similarly, van Gelder insists: "it
is the concept of *representation* which is insufficiently
sophisticated" (van Gelder, 1993, p. 6) for understanding
cognition. Again, Thelen and Smith pronounce: "We are not
building representations at all!" (Thelen and Smith 1994,
p. 338). However, it is never mentioned what it *would* "make
sense" to talk of, or what *would* be "sophisticated"
enough, or what dynamicists *are* "building". Notably,
the dynamicist assertion that representation is *not* necessary
to adequately explain cognition is strongly reminiscent of the
unsuccessful behaviorist project.

In the late 1950s there was extensive debate over the behaviorist
contention that representation had no place in understanding cognition.
One of the best known refutations of this position was given by
Chomsky in his 1959 review of B. F. Skinner's book *Verbal Behavior*.
Subsequently, behaviorism fell out of favor as it was further
shown that the behaviorist approach was inadequate for explicating
even basic animal learning (Thagard 1992, p. 231). The reason
for behaviorism's failure was its fundamental rejection of representation
in natural cognizers.

Dynamicists have advanced a similar rejection of representation
as important to cognition. Consequently, they fall prey to the
same criticism that was forwarded over three decades ago. Furthermore,
the early work of researchers like Johnson-Laird, Miller, Simon
and Newell firmly established a general commitment to representa-[** 453**]tion
in cognitive science inquiries (Thagard 1992, p. 233). There have
been no alternatives offered by dynamicists which would fundamentally
disturb this commitment.

Thus, it is not easy to convincingly deny that representation plays an important role in cognition. It seems obvious that human cognizers use representation in their dealings with the world around them. For example, people seem to have the ability to rotate and examine objects in their heads; it seems they are manipulating a representation (Kosslyn 1980, 1994). More striking, perhaps, is the abundant use of auditory and visual symbols by human cognizers every day to communicate with one another. Exactly where these ever-present communicative representations arise in the dynamicist approach is uncertain. It will evidently be a significant challenge, if not an impossibility, for dynamicists to give a full account of human cognition without naturally accounting for the representational aspects of thinking. Though dynamicists can remind us of the impressive behaviors exhibited by Brooks' (1991) dynamical robots, it is improbable that the insect-like reactions of these sorts of systems will scale to the complex interactions of mammalian cognition.

To better understand the outcome of the theoretical difficulties discussed in the previous sections, we will now examine three examples that dynamicists have cited as being good dynamicist models. These models are not only considered to be applications of the dynamicist hypothesis, but are held up by dynamicists as exemplars of their project.

Though a number of dynamicist models have been proposed by clinical psychologists, many have not been cited as paradigmatic. Because of the difficulties involved in developing convincing, non-metaphorical models of psychological phenomena, even dynamicist proponents tend to shy away from praising these abundant models.

Physiological psychologists, in contrast, have developed far
more precise models. Robertson *et al.* (1990) outlined
a model for CM (cyclicity in spontaneous motor activity in the
human neonate) using a dynamical approach. It seems that such
quantifiable physiological behavior should lend itself more readily
to a non-metaphorical dynamical description than perhaps clinical
psychology would, allowing the psychophysiologist to avoid the
poor conceptual mappings of clinical psychologists.

Indeed, Robertson *et al.* gathered reams of data on the
cyclic motor activity apparent in human children. Because of the
availability of this empirical data, this dynamicist CM model
is one of the few able to begin to breach the metaphor/model boundary
(Thelen and Smith 1994, p. 72) which proves impenetrable to many
(see section 3.2). However, it is another matter to be able to
understand and interpret the data in a manner which sheds some
light on the mechanisms behind this behavior. [** 454**]

Robertson *et al.*, after "filtering" the observed state
space, obtained a dynamicist model with desirably few degrees
of freedom which seemed able to model the stochastic process
of CM. However, upon further investigation, the only conclusion
that could be drawn was: "We clearly know very little about
the biological substrate of CM" (Robertson, et al. 1993,
p. 147). In the end, there is no completed dynamicist model presented,
though various versions of the model which do *not* work
are discounted. So, Robertson *et al.* have employed dynamicist
models to constrain the solution, but not to provide new insights.
In their closing remarks, they note (Robertson, et al. 1993, p.
47):

We are therefore a long way from the goal of building a dynamical model of CM in which the state variables and parameters have a clear correspondence with psychobiological and environmental factors.

In other words, a truly dynamicist model is still a future consideration.

The olfactory bulb model by Skarda and Freeman is one of the few well-developed models that dynamicists claim as their own. Many authors, including van Gelder, Globus, Barton, and Newman, have cited this work as strong evidence for the value of dynamical systems modeling of cognition. Upon closer examination, however, it becomes clear that this model is subject to important theoretical difficulties. Furthermore, it is not even evident that this dynamicist exemplar is a truly dynamicist model.

In Skarda and Freeman's (1987) article *How brains make chaos
in order to make sense of the world*, a dynamical model for
the olfactory bulb in rabbits was outlined and tested to some
degree. They advanced a detailed model of the neural processing
underlying the ability to smell. This model relies on a complex
dynamical system description which may alternate between chaotic
activity and more orderly trajectories, corresponding to a learning
phase or a specific scent, respectively. They hypothesized that
chaotic neural activity serves as an essential ground state for
the neural perceptual apparatus. They concluded that there is
evidence of the existence of important sensory information in
the spatial dimension of electroencephalogram (EEG) activity and
thus there is a need for new physiological metaphors and techniques
of analysis.

Skarda and Freeman observed that their model generated output
that was statistically indistinguishable from the background EEGs
of resting animals. This output was achieved by setting a number
of feedback gains and distributed delays "in accordance with
our understanding of the anatomy and physiology of the larger
system" (Skarda and Freeman 1987, p. 166) in the set of differential
equations that had been chosen to model the olfactory bulb. Notably,
the behavior of the system can be greatly affected by the choice
of certain parameters, especially if the system is potentially
chaotic (Abraham and Shaw 1992). It is thus uncertain whether
the given model is providing an accurate picture of the behavior,
or whether it has been molded by a clever choice of system parameters
into behaving similarly to the system being modeled. [** 455**]

Even assuming that the model is not subject to this objection,
a further criticism can be directed at its predictive or correlative
properties. Although the model accounts quite well for a number
of observed properties, "it does not correspond with the
actual EEG patterns in the olfactory lobe" (Barton 1994,
p. 10). The consequences of this inaccuracy seem quite severe.
For, if both the model, and what is *being* modeled are indeed
chaotic systems (*i.e.* very sensitive to initial conditions),
but they are not the *same* chaotic system, and if there
are any inaccuracies in their initial conditions**4**, then the divergence of the state
spaces of the model and the real system will be enormous within
a short time frame. Consequently, the model will not be robust
and will be difficult to use in a predictive role.
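The force of this objection is easy to exhibit with even the simplest chaotic system. In the following sketch (Python, for illustration only; the logistic map at r = 4 is a textbook chaotic system, not anything drawn from Skarda and Freeman's model), two copies of the same map are started from initial conditions differing by one part in a billion:

```python
# Sensitive dependence on initial conditions: two trajectories of the
# logistic map at r = 4 (a standard chaotic system), begun a distance of
# 1e-9 apart, become macroscopically different within a few dozen steps.

def logistic(x, r=4.0):
    """One iteration of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9  # the 'model' and the 'real system'
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the trajectories have long since lost all contact
```

If the model and its target are not even the *same* chaotic system, this loss of predictive contact occurs no matter how carefully the initial conditions are measured.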

Finally, the authors themselves see their paper and model as showing that "the brain may indeed use computational mechanisms like those found in connectionist models" (Skarda and Freeman 1987, p. 161). Furthermore, they recognized that "Our model supports the line of research pursued by proponents of connectionist or parallel distributed processing (PDP) models in cognitive science" (Skarda and Freeman 1987, p. 170). Dynamicists, however, wish to rest their cognitive paradigm on the shoulders of this model. Ironically, the model is simply not a dynamicist model; the architecture is very much like a connectionist network, only with the slightly less typical addition of inhibition and far more complex transfer functions at each node. These facts make it rather curious that it is touted as a paradigmatically important dynamical systems model. The model's similarities with connectionism make it quite difficult to accept the assertion that this type of dynamical model is the seed of a new paradigm in cognitive modeling.

The model which has been touted by van Gelder (1995) as an exemplar of the dynamicist hypothesis is the Motivational Oscillatory Theory (MOT) modeling framework by James Townsend (1992; see also Busemeyer and Townsend 1993). In this case, unlike the Skarda and Freeman model, MOT does indeed provide dynamicist models, though simplified versions. However, it is also evident that the model provided falls victim to the theoretical criticisms already advanced (see sections 3.2-5).

The most evident difficulty in the MOT model relates to the correct choice of system parameters (see sections 3.3 and 3.4). Admittedly, for dynamicist models, "changing a parameter of [the] dynamical system changes its total dynamics" (van Gelder 1995, p. 357). Thus, it is extremely important to be able to correctly select these parameters. However, the MOT model does not seem to have any reliable way of doing so (Townsend 1992, pp. 221-2):

A closely allied difficulty [i.e. allied to the difficulty
of setting initial conditions] -- in fact, one that interacts
with setting the initial conditions -- is that of selecting appropriate
parameter values. Unlike physics, where initial conditions and
parameter values are usually prescribed by the situation, usually
in psychology, the form of the functions is hypothesized in a
"reasonable way". However, we often have little idea
as to the "best" numbers to assign, especially for parameters.
[** 456**]

The devastating result of this difficulty is that the model needs new descriptions for each task. In other words, it becomes impossible to apply the model more than once without having to rethink its system parameters. It seems that this points to the likelihood of the model being molded by a clever choice of parameters, not to the ability of the model to predict the trajectory of a class of behaviors.

Furthermore, this model is an admittedly simple one (Townsend 1992, p. 219), which makes it rather disconcerting that it is necessary to fix the system manually when it is not behaving correctly (Townsend 1992, p. 223). This presents a great limitation because the expected complexity and dimension of a truly dynamicist model is immense (see section 3.4). Such manual fixing and redescription of each task would surely be impossible in a full-scale model.

The admission that "[MOT] appears very simple indeed but is nevertheless nonlinear, and at this time, we do not have a complete handle on its behavior" (Townsend 1992, p. 220) does not bode well for the dynamicist project. If it is not possible to have a handle on the behavior of the simplest of models, and the dynamicist hypothesis calls for massively complex models, what chance do dynamicists have of ever achieving the goal of a truly dynamicist model?

Dynamicists tend to be quite succinct in presenting their opinion
of what the relation between dynamicism and the other two approaches
should be, and are not shy about their project to replace current
cognitive approaches: "we propose here a radical departure
from current cognitive theory" (Thelen and Smith 1994, p.
xix). The dynamicist project to supersede both connectionism and
symbolicism has given them reason to assess critically the theoretical
commitments of both paradigms. Dynamicists have effectively distinguished
themselves from the symbolicist approach and, in doing so, have
provided various persuasive critical arguments (Globus 1992; Thelen
and Smith 1994; van Gelder 1995; van Gelder and Port 1995). However,
dynamicists are not nearly as successful in their attempts to
differentiate themselves from connectionists. When they manage
to do so, they encounter their greatest theoretical challenges;
*e.g.* providing a non-representational account of cognition.
For this reason, deciding the place of dynamicism in the space
of cognitive theories, reduces to deciding its relation to connectionism.
If dynamicism does not include connectionism in its class of acceptable
cognitive models, or if it is not a distinct cognitive approach,
there is no basis for accepting the dynamicist hypothesis as defining
a new paradigm.

Critics may claim that a dynamical systems approach to cognition
is simply not new -- as early as 1970, Simon and Newell were discussing
the dynamical aspects of cognition (Simon and Newell 1970, p.
273). In 1991, Giunti showed that the symbolicist Turing Machine
*is* a dynamical system (van Gelder, 1993), so it could be
concluded that there is nothing to gain from introducing a separate
dynamicist paradigm for studying cognition. However, Turing Machines
and connectionist networks have *also* been shown to be computationally
equivalent yet these approaches are vastly disparate in their
methods, strengths, and philosophical commit-[** 457**]ments
(Fodor and Pylyshyn 1988, p. 10). Similarly, though Turing Machines
are dynamical in the strictest mathematical sense, they are nonetheless
serial and discrete. Hence, symbolicist models do not behave in
the same ideally coupled, dynamical and continuous manner as dynamicist
systems are expected to. Dynamicist systems can behave either
continuously or discretely, whereas Turing Machines are necessarily
discrete. Furthermore, they are not linked in the same way to
their environment, and the types of processing and behavior exhibited
are qualitatively different. For these reasons, dynamicists believe
their approach will give rise to fundamentally superior models
of cognition. Biological evidence and the symbolicists' practical
difficulties lend support to many of the dynamicists' criticisms
(Newell 1990; Churchland and Sejnowski 1992; van Gelder and Port
1995).

However, Smolensky's (1988) claim that connectionism presents
a dynamical systems approach to modeling cognition cannot be
similarly dismissed. Connectionist nets *are* inherently
coupled, nonlinear, parallel dynamical systems. These systems
are self-organizing and evolve based on continuously varying input
from their environment. Still, dynamicists claim that connectionist
networks are limited in ways that a *truly* dynamical description
is not.

However, differentiating between connectionist networks and
dynamical systems models is no easy task; connectionists often
assert that a connectionist network "is a dynamical system"
(Bechtel and Abrahamsen 1991; *cf*. Churchland 1992). Frequently,
dynamicists themselves admit that connectionist networks are indeed
"continuous nonlinear dynamical systems" (van Gelder
and Port 1995, p. 26). Smolensky outlined the many ways in which
a connectionist network *is* a dynamical system -- he encapsulated
the essence of dynamical systems in their relation to cognition
and connectionism (Smolensky 1988). Churchland and Sejnowski have
gone further, discussing limit cycles, state spaces, and many
other dynamical properties of nervous systems and have included
purely dynamical analyses in their connectionist discussions of
natural cognitive systems (Churchland and Sejnowski 1992, p. 396).

The relationship between connectionism and dynamicism is undeniably more intimate than that between either of these approaches and symbolicism. Nevertheless, dynamicists wish to subordinate connectionism to their cognitive approach (van Gelder and Port 1995, p. 27). Dynamicists fundamentally reject the connectionist commitment to computationalism, representationalism, and high-dimensional dynamical descriptions.

Critiques of connectionism from dynamicists do not seem to
present any sort of united front. Some dynamicists note the lack
of realism in some networks (Globus 1992). Others reject connectionism
not because of a "failure in principle" but because
of "a failure of spirit" (Thelen and Smith 1994, p.
41). Still others reject connectionism as being high-dimensional
and too committed to symbolicist ideas: ideas like representation
(see section 3.5).

The lack of realism in networks is often due to the limitations
of current computational power. Networks as complex as those found
in the human brain are infeasible to simulate on present-day computers.
The complexity of real networks does not represent a qualitatively
distinct kind of functioning, but rather the end-goal of [** 458**]
current connectionist models. Thus, claims such as: "simplified
silicon nets can be thought of as computing but biologically realistic
nets are non computational" (Globus 1992, p. 300) are severely
misleading. The chemical modulation of neurotransmitter synthesis,
release, transport,
The claim that connectionism is simply a "failure in spirit" does nothing to advance the dynamicist cause; it simply reminds us where (perhaps) connectionist modeling should be headed. The final two criticisms of connectionism, as being high-dimensional and representational, have been addressed in sections 3.3-5. It is not clear from this discussion that either of these properties is a hindrance to connectionism; rather, denying them creates great theoretical difficulty for dynamicism. What is clear, however, is that dynamicism does not include connectionism in its class of acceptable cognitive models. Maybe, then, dynamicism and connectionism are completely distinct cognitive approaches.

Van Gelder urges us to accept that connectionist networks are too limited a way to think about dynamical systems. He claims that "many if not most connectionists do not customarily conceptualize cognitive activity as state-space evolution in dynamical systems, and few apply dynamical systems concepts and techniques in studying their networks" (van Gelder, 1993, p. 21). However, there are a great number of influential connectionists, including the Churchlands, Pollack, Meade, Traub, Hopfield, Smolensky and many others who have addressed connectionist networks in exactly this manner.

There does not seem to be any lack of examples of the application
of dynamical systems descriptions to networks (Churchland and
Sejnowski 1992; Pollack 1990; Smolensky 1988). In one instance,
Kauffman (1993) discusses massively parallel Boolean networks
in terms of order, complexity, chaos, attractors, *etc*.
In fact, it seems the only viable way to discuss such large (*i.e.*
100 000 unit) networks is by appealing to the overall dynamics
of the system and thoroughly applying dynamical systems concepts,
descriptions and analysis (Kauffman 1993, p. 210).
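Kauffman's mode of analysis can be sketched in a few lines (this is an illustrative toy, with arbitrary size, wiring and Boolean functions, not Kauffman's actual simulations). Because a Boolean network's state space is finite and its update rule deterministic, every trajectory must eventually fall onto a repeating cycle -- an attractor -- and properties such as the attractor's period can be read directly off the collective dynamics:

```python
import random

# A toy random Boolean network in the spirit of Kauffman (1993): n nodes,
# each reading k = 2 randomly chosen nodes through a random Boolean
# function. All numbers here are arbitrary illustrative choices.

def random_network(n, k=2, seed=1):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]
        new.append(tables[node][idx])
    return tuple(new)

def attractor_length(n=12, seed=1):
    """Iterate until a state repeats; the gap between visits is the
    period of the attractor the trajectory has fallen onto."""
    inputs, tables = random_network(n, seed=seed)
    state = tuple(random.Random(seed + 1).randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

print(attractor_length())  # period of the attractor for this toy network
```

Note that the attractor is a property of the whole network's trajectory, not of any single node's equation, which is why such systems invite description at the level of their overall dynamics.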

Van Gelder insists that dynamical descriptions of connectionist
networks are where connectionists should be headed; many connectionists
would no doubt concur. However, he goes on to conclude that connectionism
"is little more than an ill-fated attempt to find a half-way
house between the two worldviews [*i.e.* dynamicism and symbolicism]"
(van Gelder and Port 1995, p. 27). Rather, it seems connectionism
may be the only viable solution to a unified cognitive theory,
since cognition seems to be neither solely representational/symbolic
nor nonrepresentational/dynamical. Connectionism is able to naturally
incorporate both dynamical and representational commitments into
one theory. In any case, all that van Gelder has [** 459**]
really accomplished is to cast a dynamical systems theory description
of cognition into the role of a normative goal for connectionism
-- he has not provided a basis for claiming to have identified
a new paradigm.

The fundamental disagreement between connectionists and dynamicists
seems to be whether or not connectionist networks are satisfactory
for describing the class of dynamical systems which describes
human cognition. By claiming that connectionist networks are "too
narrow" in scope, van Gelder wishes to increase the generality
of the dynamicist hypothesis, excluding high-dimensional, neuron-based
connectionist networks. However, connectionist networks naturally
exhibit *both* high-level and low-level dynamical behaviors,
providing room for van Gelder's desired generality while not sacrificing
a unit to which behavior can be referred. In other words, the
mechanism of cognition remains comprehensible in connectionist
networks and does not fall prey to the difficulties involved with
collective parameters (see section 3.3). The fact that connectionist
networks are amenable to high-level dynamical descriptions makes
it hardly surprising that differentiating between connectionist
networks and dynamical systems is no easy task. Frequently, dynamicists
realize: "indeed, neural networks, which are themselves typically
continuous nonlinear dynamical systems, constitute an excellent
medium for dynamical modeling" (van Gelder and Port 1995,
p. 26). Furthermore, in Smolensky's paper *On the Proper Treatment
of Connectionism*, he has outlined some of the many ways in
which a connectionist network *is* a dynamical system (1988,
p. 6):

The state of the intuitive processor at any moment is precisely defined by a vector of numerical values (one for each unit). The dynamics of the intuitive processor are governed by a differential equation. The numerical parameters in this equation constitute the processor's program or knowledge. In learning systems, these parameters change according to another differential equation.

As Smolensky explicitly noted, a connectionist network represents
the state of the system at any particular time by the activity
of all units in the network. These units are naturally interpretable
as axes of a state space. Their behaviors can be effectively described
at a general level in dynamical systems theory terms. Such systems
*are* nonlinear, differentially describable, self-organizing
and dynamical as they trace a path through their high-order state
space. The behavior of these networks is *exactly* describable
by the state space and the system's trajectory; as in any typical
dynamical system. In other words, the tools provided by dynamical
systems theory are directly applicable to the description of the
behavior of connectionist networks. Examples of strange attractors,
chaos, catastrophe, *etc*. are all found in connectionist
networks, and such concepts have been used to analyze these networks.
These qualities lend such systems all the desirable traits of
dynamicism (*e.g.* natural temporal behavior, amenability
to general descriptions) but they remain connectionist and thus
representational, computational, and high-dimensional.
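Smolensky's characterization can be made concrete in a few lines. In the following sketch (Python; the three-unit network, its weights and its biases are arbitrary illustrative numbers, not drawn from any published model), the state is a vector of unit activities, the dynamics are governed by the differential equation dx/dt = -x + W tanh(x) + b, and the numerical parameters W and b play the role of the processor's "program or knowledge":

```python
import math

# A three-unit continuous network treated explicitly as a dynamical
# system. W (connection weights) and B (biases/inputs) are arbitrary
# illustrative values; with weights this small the dynamics contract
# onto a fixed-point attractor in the 3-dimensional state space.

W = [[0.0, 0.5, -0.3],
     [0.4, 0.0, 0.2],
     [-0.1, 0.3, 0.0]]
B = [0.5, -0.2, 0.3]

def step(x, dt=0.01):
    """One Euler step of dx/dt = -x + W.tanh(x) + B."""
    drive = [B[i] + sum(W[i][j] * math.tanh(x[j]) for j in range(3))
             for i in range(3)]
    return [x[i] + dt * (drive[i] - x[i]) for i in range(3)]

x = [1.0, -0.5, 0.25]    # the state vector: a point in state space
for _ in range(5000):    # the resulting trajectory is the behavior
    x = step(x)
print(x)  # the fixed point the trajectory settles onto
```

Every dynamical notion invoked above -- state space, trajectory, attractor -- applies to this network directly, yet the system remains a connectionist network, which is precisely the point.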

So, dynamicism does not include connectionism in its class
of models, as some of their theoretical commitments are incompatible.
Neither, however, is dynamicism a distinct cognitive approach,
as the important (and least controversial) aspects of the [** 460**]
dynamicist hypothesis are naturally addressed by connectionist
networks. Thus, the dynamicist hypothesis has not provided a foundation
on which to build a new paradigm. What it has provided, however,
are reasons to intensify a particular type of connectionist modeling;
one which uses the tools of dynamical systems theory to understand
the functioning of connectionist networks.

It is undeniable that brains are dynamical systems. Cognizers are situated agents, exhibiting complex temporal behaviors. The dynamicist description emphasizes our ongoing, real-time interaction with the world. For these reasons, it seems that dynamical systems theory has far greater appeal for describing some aspects of cognition than classical computationalism.

However, by restricting dynamicist descriptions to low dimensional systems of differential equations which must rely on collective parameters, the dynamicist has created serious problems in attempting to apply these models to cognition. In general it seems that dynamicists have a difficult time keeping arbitrariness from permeating their models. There are no clear ways of justifying parameter settings, choosing equations, interpreting data, or creating system boundaries. The dynamicist emphasis on collective parameter models makes interpretation of a system's behavior confounding; there is no evident method for determining the 'meaning' of a particular parameter in a model.

Similarly, though dynamicists present interesting instances
when it seems representation may be inappropriate (*e.g.*
motor control, habitual behavior, *etc*.) it is difficult
to understand how dynamicists intend to explain the ubiquitous
use of representation by human cognizers while maintaining a complete
rejection of representation. This project failed with the behaviorists,
and it is not clear why it should succeed now.

It is difficult to accept that dynamical models can effectively
stand as their own class of cognitive models. The difficulties
which arise at the proposed level of generality seem insurmountable,
no matter the resources available. Dynamical models do seem to offer
exciting new ways of understanding cognitive systems and of thinking
intuitively about human behavior. However, as a rigorous description
of either, the purely dynamical approach falls disappointingly
short. At most, dynamicists offer new metaphors and interesting
discussion, but shaky models. At the very least, however, they
offer a compelling normative direction for cognitive science (*cf*.
Aslin, 1993).

Despite the power and intuitive appeal of dynamical systems
theory, the dynamicist interpretation of how this field of mathematics
should be applied to cognitive modeling is neither trivial nor
obviously preferable to connectionism and symbolicism, as dynamicists
would have us believe. However, dynamical systems theory can contribute
invaluably to the description, discussion and analysis of cognitive
models. Possibly more cognitive scientists should realize "Our
brains are dynamical, not incidentally or in passing, but essentially,
inevitably, and to their very core" (Churchland and Sejnowski
1992, p. 187). [** 461**]

* Special thanks to Cameron Shelley, Paul Thagard and Jim Van Evra for helpful comments on earlier drafts of this paper.

- Smolensky actually refers to this hypothesis as the subsymbolic hypothesis to address the distinction between local and distributed connectionist commitments. However, Churchland and Sejnowski reject local connectionist networks as biologically unrealistic (Churchland and Sejnowski 1992, pp. 179-182) and so this form of the hypothesis is suitable for our purposes.
- In contrast to an explicitly controlled metaphor, Beardsley describes the use of a normal metaphor: "But of course the (normal) metaphorical description, as its implications are pursued, can be checked at each step, and we need not feel committed to all of its implications merely because it has a general appropriateness. So the metaphorical description may least misleadingly, perhaps, be considered as an aid to thought rather than a special mode of thinking" (Beardsley 1972, p. 287).
- It has been noted by an anonymous reviewer that a number of dynamics-oriented researchers explicitly remark on the importance of providing component referents for collective variables, and others derive these variables from low-level behaviors. For example, Thelen and Smith (1994) note that Edelman "provide[s] a neural account of the more macroscropic [sic] dynamic principles of behavioral development" (p. 143). As well, Kauffman derives collective variables from complex binary networks (see section 4.0). However, in van Gelder's characterization of dynamicism, the derivation or grounding of collective variables is not mentioned as a criterion for "good" collective variables. Furthermore, in practice, many collective variables are neither derived nor are their neural correlates mentioned (see Clark, et al., 1993; Robertson, et al., 1993; Abraham, et al., 1994).
- Which there are theoretically guaranteed to be, given that the systems are chaotic (Gleick, 1987).

ABRAHAM, F., ABRAHAM, R. & SHAW, C.D. (1994). Dynamical
Systems for Psychology, in: R. VALLACHER & A. NOWAK (Eds)
*Dynamical systems in social psychology*, (San Diego, Academic
Press).

ASLIN, R. N. (1993). Commentary: The strange attractiveness
of dynamic systems to development, in: L.B. SMITH & E. THELEN
(Eds) *A dynamic systems approach to development: Applications*
(Cambridge, MIT Press) pp. 385-400.

BARTON, S. (1994) Chaos, self-organization, and psychology,
*American Psychologist* **49**(1): 5-14.

BEARDSLEY, M. (1972) Metaphor, *The encyclopedia of philosophy*
(New York, MacMillan Publishing Co. & The Free Press) pp.
284-289.

BECHTEL, W. & ABRAHAMSEN, A. (1991) *Connectionism
and the mind: an introduction to parallel processing in networks*
(Cambridge, MA, Basil Blackwell).

BOGARTZ, R.S. (1994) The future of dynamic systems models
in developmental psychology in the light of the past, *Journal
of Experimental Child Psychology* **58**: 289-319.

BROOKS, R. (1991) Intelligence without representation, *Artificial
Intelligence* **47**: 139-159.

BUSEMEYER, J.R. & TOWNSEND, J.T. (1993) Decision field
theory: a dynamic-cognitive approach to decision making in an
uncertain environment, *Psychological Review* **100**(3):
432-459.

CARTWRIGHT, N. (1991) Fables and models I, *Proceedings of
the Aristotelian Society Supplement* **65**: 55-68.

CHOMSKY, N. (1959) A review of B.F. Skinner's *Verbal Behavior*,
*Language* **35**: 26-58.

CHURCHLAND, P.S. & SEJNOWSKI, T. (1992) *The computational
brain* (Cambridge, MA, MIT Press).

CLARK, J.E., TRULY, T.L. & PHILLIPS, S.J. (1993) On the
development of walking as a limit-cycle system, in: L.B. SMITH &
E. THELEN (Eds) *A dynamic systems approach to development:
Applications* (Cambridge, MIT Press) pp. 71-94.

EGETH, H.E. & DAGENBACH, D. (1991) Parallel versus serial
processing in visual search: further evidence from subadditive
effects of visual quality, *Journal of Experimental Psychology:
Human Perception and Performance* **17**: 551-560.

FODOR, J. & PYLYSHYN, Z. (1988) Connectionism and cognitive
architecture: a critical analysis, *Cognition* **28**:
3-71.

FODOR, J. & MCLAUGHLIN, B. (1990) Connectionism and the problem
of systematicity: why Smolensky's solution doesn't work, *Cognition*
**35**: 183-204.

GLEICK, J. (1987) *Chaos: making a new science* (New York,
Viking).

GLOBUS, G. G. (1992) Toward a noncomputational cognitive neuroscience,
*Journal of Cognitive Neuroscience* **4**(4): 299-310.

HESSE, M. (1972) Models and analogies in science, *The encyclopedia
of philosophy* (New York, MacMillan Publishing Co. & The
Free Press) pp. 354-359.

HESSE, M. (1988) Theories, family resemblances and analogy,
in: *Analogical reasoning* (Kluwer Academic Publishers) pp. 317-340.

KAUFFMAN, S. A. (1993) *The origins of order: self-organization
and selection in evolution* (Oxford, Oxford University Press).

KOERTGE, N. (1992) Explanation and its problems, *British
Journal of the Philosophy of Science* **43**: 85-98.

KOSSLYN, S.M. (1980) *Image and mind* (Cambridge, MA, Harvard
University Press).

KOSSLYN, S.M. (1994) *Image and brain: the resolution
of the imagery debate* (Cambridge, MA, MIT Press).

LE POIDEVIN, R. (1991) Fables and Models II, *Proceedings
of the Aristotelian Society Supplement* **65**: 69-82.

LUCK, S.J. & HILLYARD, S.A. (1990) Electrophysiological
evidence for parallel and serial processing during visual search,
*Perception and Psychophysics* **48**: 603-617.

MEADE, A.J. & FERNANDEZ, A.A. (1994) Solution of nonlinear
ordinary differential equations by feedforward neural networks,
*Mathematical and Computer Modelling*, to appear.

MILLER, J.O. (1988) Discrete and continuous models of human
information processing: theoretical distinctions and empirical
results, *Acta Psychologica* **67**: 191-257.

MOLENAAR, P.C.M. (1990) Neural network simulation of a discrete
model of continuous effects of irrelevant stimuli, *Acta Psychologica*
**74**: 237-258.

MORRISON, F. (1991) *The art of modeling dynamic systems*
(New York, Wiley).

NEWELL, A. (1990) *Unified theories of cognition* (Cambridge,
MA, Harvard University Press).

NEWELL, A. & SIMON, H.A. (1976) Computer science
as empirical enquiry: symbols and search, *Communications of
the Association for Computing Machinery* **19**: 113-126.

PINEL, J. P. (1993) *Biopsychology* (Allyn & Bacon
Inc.).

POLLACK, J. (1990) Recursive distributed representations, *Artificial
Intelligence* **46**: 77-105.

ROBERTSON, S.S., COHEN, A.H. & MAYER-KRESS, G. (1993) Behavioral
chaos: beyond the metaphor, in: L.B. SMITH & E. THELEN (Eds)
*A dynamic systems approach to development: Applications*
(Cambridge, MIT Press) pp. 120-150.

SCHWEICKERT, R. & BOGGS, G.J. (1984) Models of central capacity
and concurrency, *Journal of Mathematical Psychology* **28**:
223-281.

SIMON, H.A. & NEWELL, A. (1970) Information-processing in
computer and man, in: *Perspectives on the computer revolution*
(Englewood Cliffs, Prentice-Hall, Inc.).

SKARDA, C.A. & FREEMAN, W.J. (1987) How brains make chaos
in order to make sense of the world, *Behavioral and Brain Sciences*
**10**: 161-195.

SMOLENSKY, P. (1988) On the proper treatment of connectionism,
*Behavioral and Brain Sciences* **11**(1): 1-23.

THAGARD, P. (1992) *Conceptual revolutions* (Princeton,
Princeton University Press).

THELEN, E. & SMITH, L.B. (1994) *A dynamic systems approach
to the development of cognition and action* (Cambridge, MIT
Press).

TOWNSEND, J. T. (1992) Don't be fazed by PHASER: Beginning
exploration of a cyclical motivational system, *Behavior Research
Methods, Instruments, & Computers* **24**(2): 219-227.

VAN GEERT, P. (1996) The dynamics of Father Brown: essay review
of *A dynamic systems approach to the development of cognition
and action*, *Human Development* **39**(1): 57-66.

VAN GELDER, T. (1993) What might cognition be if not computation?,
*Cognitive Sciences Indiana University Research Report 75*.

VAN GELDER, T. (1995) What might cognition be if not computation?,
*Journal of Philosophy* **92**: 345-381.

VAN GELDER, T. & PORT, R. (1995) It's about time: an overview
of the dynamical approach to cognition, in: *Mind as motion: Explorations
in the dynamics of cognition* (Cambridge, MA, MIT Press).