Memory Evolutive Systems

by Andrée C. Ehresmann and Jean-Paul Vanbremeersch

Faculté de Mathématique et Informatique

33 rue Saint-Leu 80039 Amiens France

ehres@u-picardie.fr

 Preprint October 1999

Abstract. Natural autonomous systems, such as biological, neural, social or cultural systems, are open and self-organized, with a more or less extensive hierarchy of interacting complexity levels; they are able to memorize their experiences and to adapt to various conditions through a change of behavior. Over the last fifteen years, the authors have developed a mathematical model for these systems, based on Category Theory. The aim of this paper is to give an overview of this model, called MES.

1. Introduction

Biological, neural, social and cultural systems are complex natural systems. As such, they share some common characteristics: they are open (they exchange with their environment) and self-organized, with a more or less extensive hierarchy of interacting complexity levels; and they may memorize their experiences to adapt to various conditions through a change of response. These properties make it difficult to model them by the usual methods inspired by the Newtonian paradigm.

To study such systems, over the last fifteen years the authors have gradually developed the notion of Memory Evolutive System (MES), a mathematical model based on Category Theory.

Here we first give their main characteristics in an intuitive way; then we formally recall the necessary concepts of Category Theory, and we show how they make it possible to better understand the properties and the functioning of these systems, with an application to neural systems. For more details, we refer to our Internet site [21], which contains a list of our main publications on MES since 1987, an index, a glossary of the key concepts, and a long, not yet published paper on consciousness.

2. Main characteristics of Memory Evolutive Systems

a) Evolutive and Hierarchical Systems

1. Following Bertalanffy [6], a system is a "set of interacting elements". But the elements of a natural complex system, and their organization, vary in time because of exchanges with the environment (open system) and the internal formation and/or suppression of components, for instance when learning new skills. Thus they cannot be studied using observables defined on a fixed phase space and following uniform laws. To take this change into account, we will specify:

- on the one hand, the successive configurations of the system, formed by its components and their interactions at a given time t (the state-category at t),

- and on the other hand, the process of change between these configurations (the transition functors).

2. The components of the system are organized in a hierarchical structure, with several levels of complexity; a complex component is itself obtained by binding together ("glue" [37]) a sub-system, or pattern, which determines its internal organization; this pattern consists of a family of more elementary components with distinguished links between them. Each level obeys distinct laws, but the interfaces between levels play an essential part [23]. For instance in a cell [8, 9], we distinguish its components of increasing levels (atoms, molecules, macromolecules, organelles), as well as their intra-level and inter-level chemical and topological relations.

3. The transition between successive configurations comes from the archetypal operations pointed out by Thom [41]: "birth, death, scission, collusion". It is modeled by the process of complexification with respect to a strategy. This process describes the change resulting from the following operations:

- addition of new elements (example: endocytosis for a cell),

- destruction of components or their rejection into the environment,

- binding of patterns into more complex components (synthesis of a protein),

- decomposition of higher order components.

A sequence of complexifications can lead to the formation of components with strictly increasing orders of complexity (hyperstructures in the sense of Baas [2]).

b) Local regulations

1. The autonomy of the system comes from the fact that its dynamics is generated by a net of internal local regulations which are coordinated and possibly conflicting. To model this situation, the architecture of a MES is a compromise between a parallel process with a modular organization (as in multi-agent systems) [28, 36] and an associative hierarchical net (e.g., [1, 25, 40]). Indeed, we suppose that each local regulation is directed by a functional module, modeled as a sub-system called a coregulator (CR). A CR consists of a pattern formed by a small number of components of the system, its agents, which have the same complexity level and perform collective actions arising from their interactions along the distinguished links of the pattern.

The lower level CRs represent specialized modules, possibly interacting with the environment (examples: a ribosome, or a signal transduction system in a cell). At higher levels, associative CRs coordinate the activity of some lower level CRs, either directly, or indirectly through the constraints they impose.

2. Each CR develops a stepwise process, with a specific timescale; it acts successively as an internal observation, regulation and evaluation organ, to perform some kinds of actions (which we call its strategies) and to record their results, thus realizing an epistemo-praxeological loop in the sense of Vallée [42]. A step of the CR extends between two successive dates of its timescale and is divided into several more or less overlapping phases:

- formation of an internal representation of the system, called the landscape of the CR,

- analysis of the possible responses and selection of an adequate strategy,

- commands to effectors to realize this strategy,

- evaluation of the result with error-detection, and its memorization.

3. The actual landscape of the CR gathers the information on the system that can be obtained by the agents during their actual present (partial information, more or less distorted, and lasting long enough to be analyzed). It is not a sub-system of the system, but an internal description which acts as a filter and masks the information the agents cannot perceive; the distortion it introduces with respect to the system is not seen at the level of the agents. It also plays the part of a working memory, preserving its information throughout the step.

A strategy is chosen on this landscape, taking into account the results of the preceding step, the constraints and the strategies formerly used in similar situations. The strategy can be chosen by the agents, or 'externally' imposed on them (for example by a higher CR).

The corresponding commands are then sent to effectors which relay the strategy to the system.

4. The landscape is a more or less faithful representation of the system, and the various CRs may have conflicting strategies. Thus the fixed objectives are not necessarily attained, and the step can be interrupted by a fracture.

At the next step, the CR can evaluate on its new landscape if the anticipated result has been obtained or not. The dynamics on the landscape during one step (from the choice of the strategy up to its realization) can be modeled as a 'classical' physical system (e.g., by systems of differential equations satisfied by appropriate observables, or by a dynamical system [41]).

c) Global dynamics

The CRs act more or less autonomously, but the information and instructions they send to the system must be integrated.

1. We have seen that the strategies relayed by the various CRs are not always compatible, and they are in competition. Indeed, all the CRs share the same common resources, and there are direct and indirect interactions between them.

Thus an equilibration process is necessary between their strategies, called the interplay among the strategies of the CRs. It is not a centrally directed process, but a dynamic modulation between the relayed strategies of the different CRs. It depends on the respective 'weights' of the strategies and, in an essential way, on specific structural temporal constraints which connect the period of a CR (the mean length of its steps) to the propagation delays of the information and to the stability spans of the components appearing in its landscape.

The interplay eliminates the strategy of a CR whose constraints cannot be satisfied, so that a fracture is caused in its landscape (and its repair imposes a change of strategy). In the description of a step by a 'simple' system, a fracture may introduce a singularity (a bifurcation or chaotic behavior, a catastrophe in the sense of Thom [41, 44]), or require a complete change in the representation.

2. It follows that a dialectics via functional loops arises between CRs which are heterogeneous with respect to their complexity and their temporality. This dialectics modulates the evolution of the system and makes its long term evolution unpredictable. It may lead to the emergence of higher order objects and of complex adaptive processes, such as the modification of the period of some CRs.

For instance, in [19] we have proposed a theory of aging for an organism, based on a cascade of de/resynchronizations: the periods of higher and higher CRs are successively increased, to neutralize a lengthening of the propagation delays at lower levels, caused by an accumulation of external random events whose consequences cannot be repaired soon enough; the process stops when the instability becomes too large (since the period must remain shorter than the stability span), and then errors accumulate until the death of the organism.

d) Memory

1. All the preceding operations rely on an internal central memory, which is modified and developed in time to record new situations, used strategies and their results, so that they might be recalled later on.

Each CR has a differential access to the memory. It performs a trial and error learning process: at each step it searches its landscape for the possible strategies, and it participates in the storing of new information and activities.

The lower CRs may have only one possible strategy, in which case they develop a cyclic process. But higher CRs (e.g., in a neural system), may have a large choice of strategies, so that they develop and memorize more and more complex actions, by coordination of already known actions.

2. An object in the memory, called a record, takes its own identity, independent of the manner in which it has been constructed; thus it may later be applied to various particular cases just by adapting the parameters to the context.

This flexibility is still greater in the neural systems of higher animals, which develop a semantics allowing them to classify the records and to recall an invariance class through any one of its instances. Then the interplay among the strategies benefits from a double plasticity: the choice of a particular instance, and, within it, of the best adapted parameters.

And the system may develop conscious CRs, which we characterize by the capacity of extending their actual landscape after a fracture (through increased attention), of operating an abduction process in this extended landscape to find the possible causes of the fracture, and of constructing 'virtual' anticipated landscapes for long term planning.

 

3. Recalls on categories

Category Theory is a sub-domain of Mathematics which unifies a large part of Mathematics by developing a general theory of relations and structures. It was introduced by Eilenberg and Mac Lane [22] to transform difficult problems of Topology into more accessible problems of Algebra. Later on, it was developed both for itself [33] and for its applications in several domains of Mathematics. For instance, already in the fifties, C. Ehresmann took it as a basis for Differential Geometry [15]; and A.C. Ehresmann has used it in Functional Analysis [3] and in optimization problems [4]. The theory of toposes [31] has applications in Logic and, more recently, in Physics; and a fruitful collaboration has developed between category theorists and computer scientists [43]. Rosen introduced the language of categories in Biology as early as 1958 [38], and several authors have followed his lead (e.g. [29, 32]).

MES use Category Theory not only as a language, but also as a powerful method to uncover the main processes underlying complexity. For that, we must adapt and generalize several fine results, in particular theorems of the theory of sketches, developed by C. Ehresmann, A. Ehresmann and their research students in the seventies [5], and now widely used in Computer Science.

Our basic idea is that category theory is a reflection of the fundamental laws of brain functioning as they have been shaped by natural selection during Evolution: the formation of relations between objects of various types allowing for the transfer and the analysis of information, the formation and recognition of patterns of coordinated objects, and optimization processes. We could paraphrase for this theory what Chapline [11] says about quantum mechanics:

"quantum mechanics can be regarded as a fundamental theory of distributed parallel information processing and pattern recognition... we are led to suggest that the fundamental link between mathematics and theoretical physics is the pattern recognition capabilities of the human brain".

a) Category, Evolutive System

In a MES, the configuration of the system at a given time will be modeled by a category.

1. A graph (also called an oriented graph, or diagram scheme, or polygraph) is formed by a set of objects, called its vertices, with links between them, represented by arrows from a vertex N to a vertex N'; we say that N is the source of the link, and N' its target. There can be several 'parallel' arrows with the same source and the same target, as well as 'closed' arrows whose source and target are identical. Two arrows f, g are successive if the target N' of the first one is also the source of the second; they then form a path of length 2 from the source N of f to the target N" of g. More generally, a path of length n is a sequence of n successive arrows.

A category is a graph on which there is defined a (partial) composition law which associates to each path (f,g) of length 2 from N to N" an arrow of the graph from N to N", called the composite of the path, denoted by fg. This composition verifies the following conditions:

- Associativity. If (f,g,h) is a path of length 3, the two composites f(gh) and (fg)h are equal. It follows that any path of length n also has a unique composite.

- Identities. For each vertex N there exists a closed arrow from N to N, called the identity of N, whose composite with any arrow with source or target N is identical to this arrow.

The vertices of the graph are also called the objects of the category and the arrows its morphisms (or simply its links).
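As an informal illustration (ours, not part of the categorical formalism), a finite category given by explicit data can be checked mechanically against the two axioms; composition is written in the diagrammatic order fg used above. The data layout and names are assumptions made for the sketch.

```python
# Minimal sketch (our illustration): a finite category presented by explicit data.
# Objects, arrows with source/target, and a composition table for successive arrows;
# composition is written in diagrammatic order, fg = "first f, then g", as in the text.

class FiniteCategory:
    def __init__(self, objects, arrows, src, tgt, comp, identity):
        self.objects = objects          # set of vertex names
        self.arrows = arrows            # set of arrow names (includes identities)
        self.src, self.tgt = src, tgt   # dicts: arrow -> object
        self.comp = comp                # dict: (f, g) -> fg, defined when tgt[f] == src[g]
        self.identity = identity        # dict: object -> identity arrow

    def check_axioms(self):
        # Identities: id_N composed with any arrow leaving or reaching N gives that arrow.
        for f in self.arrows:
            assert self.comp[(self.identity[self.src[f]], f)] == f
            assert self.comp[(f, self.identity[self.tgt[f]])] == f
        # Associativity: for every path (f, g, h) of length 3, f(gh) == (fg)h.
        for f in self.arrows:
            for g in self.arrows:
                if self.tgt[f] != self.src[g]:
                    continue
                for h in self.arrows:
                    if self.tgt[g] != self.src[h]:
                        continue
                    assert self.comp[(f, self.comp[(g, h)])] == self.comp[(self.comp[(f, g)], h)]
        return True

# Example: the category freely generated by a single arrow x: A -> B.
K = FiniteCategory(
    objects={"A", "B"},
    arrows={"id_A", "id_B", "x"},
    src={"id_A": "A", "id_B": "B", "x": "A"},
    tgt={"id_A": "A", "id_B": "B", "x": "B"},
    comp={("id_A", "id_A"): "id_A", ("id_B", "id_B"): "id_B",
          ("id_A", "x"): "x", ("x", "id_B"): "x"},
    identity={"A": "id_A", "B": "id_B"},
)
assert K.check_axioms()
```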

2. Examples of categories. A group is a category with a unique object in which all the morphisms are invertible. A congruence or an order relation on a set E defines the category whose objects are the elements of E, with exactly one arrow between two objects which are related. To a graph G is associated the category of its paths, whose objects are the vertices of the graph and whose morphisms are its paths, the composition law being concatenation.

On the other hand, we have the 'large' category of sets and maps between them; the category of topological spaces and continuous maps; the category of groups and their homomorphisms,… And also the category whose objects are the 'small' categories and whose morphisms are the functors between them, that is, the maps which preserve the graph structures and the compositions.

In the categories modeling natural systems (for example, categories of neurons), the energetic or temporal constraints are often expressed by associating to each morphism a weight (a real number or a vector) measuring its strength or its propagation delay, the weight of a composite being a given function of the weights of its factors. In this case, the category can be constructed as the quotient of the category of paths of an appropriate graph (of generators) by the relation which identifies two paths having the same weight.

3. To take into account the temporal change of the system, a MES is not represented by a unique category, but by what we call an Evolutive System [16], which is defined by:

- a (finite or infinite) part of the reals R, called its timescale,

- for each of its dates t, a category Kt (the state-category at t),

- for each date t' > t, a functor k(t,t') (the transition from t to t') from a sub-category of Kt to Kt', these transitions satisfying the transitivity condition:

if k(t,t')(Nt) = Nt' is defined and if t' < t", then k(t',t")(Nt') is defined if and only if k(t,t")(Nt) is defined, and then both are equal.
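To fix ideas, the data of an evolutive system can be transcribed into a small sketch; the following Python fragment (our illustration, with invented state-categories reduced to their sets of objects, the transitions being given on objects only) checks the transitivity condition just stated.

```python
# Minimal sketch (assumed data layout, not from the paper) of an Evolutive System:
# a timescale, a state-category per date, and partial transition maps on objects.
# Transitions on morphisms are omitted; only the transitivity condition is checked.

timescale = [0.0, 1.0, 2.0]

# State-categories reduced here to their sets of objects.
states = {
    0.0: {"a", "b", "c"},
    1.0: {"a1", "b1"},        # "c" has been suppressed between t=0 and t=1
    2.0: {"a2"},              # "b" has been suppressed between t=1 and t=2
}

# Partial transition functors, given on objects: k[(t, t')] maps some objects of K_t to K_t'.
k = {
    (0.0, 1.0): {"a": "a1", "b": "b1"},
    (1.0, 2.0): {"a1": "a2"},
    (0.0, 2.0): {"a": "a2"},
}

def check_transitivity(timescale, k):
    """k(t,t'')(N) is defined iff k(t',t'')(k(t,t')(N)) is, and then both agree."""
    for i, t in enumerate(timescale):
        for t1 in timescale[i + 1:]:
            for t2 in timescale:
                if not (t < t1 < t2):
                    continue
                for N, N1 in k[(t, t1)].items():
                    left = k[(t1, t2)].get(N1)       # k(t',t'') applied to N_{t'}
                    right = k[(t, t2)].get(N)        # k(t,t'') applied to N_t
                    if (left is None) != (right is None) or left != right:
                        return False
    return True

assert check_transitivity(timescale, k)
```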

b) Colimits; simple and complex links

Many mathematical constructions (the union of sets, the upper bound in a lattice,…) can be interpreted as a colimit (or inductive limit), a notion introduced by Kan in 1958 [30]. In a MES, colimits make it possible to distinguish the hierarchy of components, a complex component being represented as the colimit of a pattern of interacting more elementary components which corresponds to its internal organization.

1. A pattern in a category K is the data of a graph I and a homomorphism P from I to K; it maps a vertex i of I to an object Ni of K, called an object of the pattern, and an arrow x from i to j in I to a morphism P(x) from Ni to Nj, called a distinguished link of the pattern. A collective link from P towards an object N' is a family of morphisms (fi) from each Ni to N', correlated by the distinguished links of the pattern, that is, for each x from i to j, we have fi = P(x)fj. Such a collective link defines a cone with vertex N' and the pattern as its basis.

The pattern P admits an object N as its colimit if the following two conditions are satisfied:

(i) there exists a canonical collective link (ci) from P to N, and

(ii) each collective link (fi) from the pattern to any object N' binds into a unique morphism f from N to N' (i.e., fi = cif for each i).

The colimit does not always exist, but when it exists it is unique (up to isomorphism). In this case, the pattern is called a decomposition of N. But while the pattern determines its colimit, the converse is not true: the same object may have several distinct decompositions.

The coherence and the constraints introduced by the distinguished links of the pattern are measured by comparing the colimit N of the pattern P to the sum (or coproduct) of its objects, which is the colimit of the pattern formed by the family of objects (Ni) without any links between them; there is a canonical comparison morphism from this coproduct to N.
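As an illustration (ours), colimits can be computed explicitly in the category of finite sets and maps: the colimit of a pattern is the disjoint union of its objects, glued along the distinguished links, and the canonical collective link sends each element to its gluing class. The following Python sketch computes this gluing with a union-find structure; with an empty list of arrows it returns the coproduct, which makes the comparison of the preceding paragraph concrete.

```python
# Sketch (our illustration): colimit of a pattern of finite sets.
# A pattern is a graph I together with a set P[i] for each vertex i and a map P[x]
# for each arrow x: i -> j. Its colimit is the disjoint union of the P[i], where an
# element e of P[i] is glued to its image P[x](e) in P[j]; the canonical collective
# link c_i sends each element to its gluing class.

def colimit_of_sets(objects, arrows):
    """objects: dict vertex -> finite set; arrows: list of (i, j, mapping dict)."""
    # Elements of the disjoint union are tagged with their vertex of origin.
    parent = {(i, e): (i, e) for i, S in objects.items() for e in S}

    def find(u):                       # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    def union(u, v):
        parent[find(u)] = find(v)

    # Glue each element to its image under every distinguished link.
    for (i, j, f) in arrows:
        for e, e2 in f.items():
            union((i, e), (j, e2))

    classes = {u: find(u) for u in parent}           # canonical collective link
    colimit = set(classes.values())                  # one object per gluing class
    return colimit, classes

# Example: two sets glued along a common image (a pushout-like pattern).
objects = {"i": {1, 2}, "j": {"a", "b"}, "k": {"x"}}
arrows = [("k", "i", {"x": 1}), ("k", "j", {"x": "a"})]   # 1 and "a" get identified
colim, c = colimit_of_sets(objects, arrows)
assert len(colim) == 3                                     # {1 ~ "a"}, {2}, {"b"}
assert c[("i", 1)] == c[("j", "a")]
```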

2. In a natural system, the interactions between components play an important part. One of the problems that category theory can handle is to determine which (simple and complex) links are formed between emerging complex objects.

The simple links from P to P' come from 'direct' interactions between the objects of these two patterns, and they are modeled by the notion of a cluster: Given two patterns P and P' in a category, a cluster G from P to P' is a maximal set of morphisms between their objects, called links of the cluster, satisfying the following conditions:

(i) For each object Ni of P, there exists at least one link of the cluster from Ni to an object of P', and if there exist several such links, they are correlated by a zigzag of distinguished links of P'.

(ii) The morphisms obtained by composing a link of the cluster with a distinguished link of P on the left, or with a distinguished link of P' on the right, are in the cluster.

If the patterns P and P' admit colimits N and N' respectively in the category, a cluster from P to P' binds into a unique morphism from N to N', called a (P,P')-simple link. The composite of two simple links binding adjacent clusters is also a simple link: if f is (P,P')-simple and if f' is (P',P")-simple, then their composite is (P,P")-simple.

3. In a category, an object N can be the colimit of several patterns. But a (P,P')-simple link is not always a (Q,Q')-simple link for other decompositions Q of N and Q' of N'. Two decompositions P and Q of N are said to be equivalent if there exists a cluster from P to Q binding into the identity of N (i.e., if this identity is a (P,Q)-simple link). In particular, a sub-pattern R of P can be equivalent to P, so that P and R have the same colimit (if it exists); then R is called a representative sub-pattern of P.

An object N' is multifold if it admits at least two non-equivalent decompositions P' and Q'; then the switch from P' to Q' is called a complex switch. These objects are at the root of the existence of 'complex' links. Indeed, if f is a (P,P')-simple link from N to N' and if g is a (Q',P")-simple link from N' to N", these two morphisms must compose in the category, but their composite fg may not bind a cluster from P to P"; it is then called a (P,P")-complex link from N to N". More generally, a complex link is the composite of a path of simple links binding non-adjacent clusters, separated by complex switches.

4. Iterated colimits and ramifications. If N is the colimit of a pattern P of linked objects Ni and if each Ni is the colimit of a pattern Pi , we say that N is the 2-iterated colimit of (P,(Pi)), or that (P,(Pi)) is a ramification of N of length 2.

By induction, we define:

A k-iterated colimit A is the colimit of a pattern each object of which is itself a (k-1)-iterated colimit. A k-ramification of A is the data of a decomposition of A and of a (k-1)-ramification of each component of this decomposition.
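As a small illustration (ours), a ramification can be represented as a recursive structure, and its top-down unfolding (used later for the recall of a record, cf. Section 4, d) written in a few lines of Python; the distinguished links of the patterns are left implicit in this sketch, and the names are invented.

```python
# Sketch (our illustration): a ramification of a component as a recursive structure.
# A component either is taken as elementary (level 0) or carries one chosen
# decomposition: a list of lower-level components (the distinguished links of the
# pattern are left implicit here).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    level: int
    decomposition: List["Component"] = field(default_factory=list)  # empty if elementary

def unfold(component, depth):
    """Top-down unfolding of a ramification, as in the recall of a record:
    first the chosen decomposition, then a decomposition of each of its objects, ..."""
    if depth == 0 or not component.decomposition:
        return [component.name]
    names = []
    for part in component.decomposition:
        names.extend(unfold(part, depth - 1))
    return names

# A 2-iterated colimit: N is bound from (P1, P2), each bound from level-0 parts.
N = Component("N", 2, [
    Component("P1", 1, [Component("a", 0), Component("b", 0)]),
    Component("P2", 1, [Component("b", 0), Component("c", 0)]),
])
assert unfold(N, 1) == ["P1", "P2"]            # decomposition of length 1
assert unfold(N, 2) == ["a", "b", "b", "c"]    # 2-ramification down to level 0
```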

c) Hierarchical Systems

To model the hierarchical structure of a system, we define a Hierarchical Evolutive System [16] as an evolutive system in which the state-categories are hierarchical in the following sense, and the transition functors respect the level of an object:

1. A category is hierarchical if its objects are partitioned into a sequence of complexity levels 0, 1,…, m, so that an object N of level n+1 is the colimit of at least one pattern formed by linked objects Ni of level n.

In this case N also has a 2-ramification down to level n-1. Indeed, each object Ni of P is itself the colimit of a pattern Pi of level n-1, so that N is a 2-iterated colimit of (P,(Pi)). Going down, N also admits ramifications reaching lower and lower levels k; they are defined by induction, taking into account at each level the 'vertical' links from the objects of a pattern to its colimit at the upper level, and the 'horizontal' links such as the distinguished links of the pattern.

The reductionism problem consists in determining whether N can also be constructed directly, in one step, as the simple colimit of a 'large' pattern of level k, avoiding the intermediate levels. An analysis of the nature of the morphisms gives access to this problem.

2. A morphism between two objects of level n+1 is called an n-simple link if it binds a cluster between two patterns of level less than or equal to n. It is an n-complex link if it is the composite of n-simple links binding non-adjacent clusters, without being itself n-simple. The existence of n-complex links requires that the category satisfy the Multiplicity Principle [20]:

(i) For each n, there exist objects of level n+1 which admit non-equivalent decompositions of level n; they are said to be n-multifold.

(ii) An object of level n can participate in several patterns having different colimits at level n+1.

3. The level of an object is not a faithful indicator of its 'functional' complexity, which is measured by its order. The order of complexity of an object N is the smallest k such that there exists a pattern of linked objects of level k with N as its colimit. And N is p-reducible for each p greater than or equal to its order.

By definition, each object of level n+1 is n-reducible. But when is it p-reducible for some p < n? An important theorem for understanding the structure of a complex system is the following:

An object N of level n+1 which admits a decomposition P of level n in which all the distinguished links are (n-1)-simple is (n-1)-reducible. But it will not be (n-1)-reducible (nor, a fortiori, reducible to lower levels) if each of its decompositions of level n has some distinguished links which are not (n-1)-simple.

d) Complexification

In a MES, the transitions come from a complexification process with respect to a strategy.

1. A strategy S on a category K consists of the following data: a set A of external elements 'to be absorbed'; a set O of objects of the category 'to be suppressed'; a set of patterns P without a colimit 'to be bound' (so that they acquire a colimit); a set of cones 'to be transformed into colimit-cones' (so that their vertex becomes a colimit of their basis); and a set Q of patterns with a colimit 'to be decomposed' (so that the pattern loses its colimit).

The complexification of K with respect to the strategy S [16] is the 'universal solution' of the problem of constructing a category K' in which the (concrete) objectives of S are realized in the most 'economical' manner.

2. To model this process, we adapt results from sketch theory: the data of S determine a special structure of sketch (cf. Appendix), and the complexification corresponds to the prototype associated with this sketch (as constructed by A. and C. Ehresmann in [5]). From the explicit construction we deduce in particular:

- The objects of the complexification K' are: all the objects of K which are not required to be suppressed, the elements of A, and, for each pattern P to be bound, a new object CP which becomes its colimit in K' (it corresponds to a higher order object that emerges by the integration of the pattern into a higher unit).

- Among the morphisms, there are the simple links binding clusters between patterns to be bound, but also complex links obtained by composing paths of simple links binding non-adjacent clusters.
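At the level of objects, this construction is easy to transcribe; the following Python sketch (ours; the morphisms, where the real categorical work of binding clusters takes place, are not constructed) only illustrates which objects appear in the complexification and how the process can be iterated.

```python
# Sketch (our illustration): the objects of the complexification K' of a category K
# with respect to a strategy S. The morphisms of K' (simple links binding the
# clusters, and complex links composing them) are not constructed here.

def complexify_objects(objects, strategy):
    """objects: set of object names of K.
    strategy: dict with keys
      'absorb'   -> set A of external elements to be added,
      'suppress' -> set O of objects to be suppressed,
      'bind'     -> dict {new colimit name CP: list of objects of the pattern P}."""
    kept = objects - strategy["suppress"]
    new_colimits = set(strategy["bind"])          # one emerging object CP per pattern P
    return kept | strategy["absorb"] | new_colimits

K0 = {"a", "b", "c", "d"}
S = {"absorb": {"e"}, "suppress": {"d"}, "bind": {"C_P": ["a", "b", "c"]}}
K1 = complexify_objects(K0, S)
assert K1 == {"a", "b", "c", "e", "C_P"}

# Iterating: a second strategy can bind a pattern that already contains C_P,
# producing an object of a higher complexity order (cf. the theorem in 3 below).
S2 = {"absorb": set(), "suppress": set(), "bind": {"C_Q": ["C_P", "e"]}}
K2 = complexify_objects(K1, S2)
assert "C_Q" in K2
```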

3. The complexification process can be iterated. An important result for understanding how the evolution of a complex natural system differs from that of a 'simple' physical system is the following theorem [20]:

If the Multiplicity Principle is satisfied, a sequence of complexifications cannot be reduced to a unique complexification with respect to a strategy integrating the successive strategies, and it leads to the emergence of objects with strictly increasing complexity orders as soon as the patterns to bind contain complex links.

 

4. Memory Evolutive Systems (MES)

The preceding categorical concepts will be used to define and study the MES more thoroughly.

a) Definition of a MES

1. First a MES is an Evolutive System on a continuous timescale. The state-category Kt at t represents the components of the system which exist at this date and their interactions. Thus each of its objects plays a double part:

- It acts as an emitter through its morphisms (or links) to the other objects, which represent its actions, or the messages it sends.

- It becomes a receptor via the morphisms which arrive at it and correspond to the information it receives or the constraints which are imposed on it.

The transition k(t,t') from t to t' indicates what the components existing at t have become at t' if they still exist, just as we would point out the same particular member in two successive snapshots of a social group. This functor is only 'partial', to take into account the possible suppression of components (the death of a cell).

This contrasts with the usual models where, if we speak, say, of a cell in an organism, we consider that it remains 'the same' at the different dates. Here a component such as this cell will not be represented by a unique object, but by the sequence of its successive states; more precisely, a component N of the system is a maximal sequence of objects (Nt) in the state-categories such that, for t' > t, the object Nt' is the image of Nt by the transition from t to t'. And the same holds for the links between components.

2. If the evolutive system is hierarchical, this definition of a component makes it possible to understand how a component N of level n+1 can keep its identity in time even though its internal organization of level n changes gradually; for instance, a cell remains itself in spite of the progressive renewal of its components. Indeed, if the state Nt of N is the colimit of a pattern Pt of level n, it is possible that its state Nt' at t' > t is no longer the colimit of the transformed pattern Pt', since objects of Pt can have been lost or replaced.

To measure the rate of change, we define the stability span of N at t: it is the greatest real dt such that there exists a pattern Q of components of level n whose state Qt' admits the state Nt' of N as its colimit in the state-category at t', for each t' between t and t+dt. During stability periods, this span is long, while it shortens during development or decline.

In a MES a transition often comes from a complexification process with respect to a strategy S. If N has emerged at t to bind a pattern Q, the evolutions of N and of Q remain correlated up to t+dt, but after that they may diverge. We say that N takes its own complex identity, independent of Q.

3. A Memory Evolutive System (or MES) [17] is a hierarchical evolutive system satisfying the following conditions:

- The state-categories are weighted: to each morphism is associated a positive real number, called its propagation delay; the delay of a composite fg is the sum of the delays of its factors f and g. They satisfy the Multiplicity Principle (cf. Section 3, c).

- There is a hierarchical evolutive sub-system on the same timescale, called the Memory, which develops in time with the emergence of new objects.

- The dynamics of the system is modulated by a net of evolutive sub-systems with discrete timescales, called coregulators (or CRs). The components (or agents) of a CR have the same complexity level, which depends on the CR. The propagation delay of the links between agents of a CR is smaller than the length of its steps (each of which extends between two successive dates of its timescale).

Several CRs may have the same level, possibly with different temporalities. For instance, in a large company, the CRs could be the management of the various branches, but also of the different departments within each of them. In a neural system, the lower CRs receive messages from the environment, and there are 'color' CRs, 'shape' CRs,…; the cortex contains more and more complex associative CRs.

b) Landscape of a CR

1. As we have said, the step of a CR extends between two successive dates t and t+d of its timescale, and is divided into several overlapping phases; the first phase, or actual present, leads to the formation of the landscape. The agents have no direct access to the system, of which a component B is observable only through a perspective, which is a cluster from the pattern reduced to B to the pattern formed by the CR (in the category coproduct of the state-categories during the actual present); such a perspective is entirely determined by any one of its links b.

The actual landscape L of the CR at t is the category whose objects are the perspectives pb corresponding to links b with a mean propagation delay less than the actual present, coming from components B of a level adjacent to that of the CR and with a stability span greater than the length of the step; the morphisms from pb to a perspective pc coming from C are defined by the links from B to C in the system whose composites with c belong to pb. There is a distortion functor from the landscape to the system, mapping the perspective pb to the component B.
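As an illustration (ours, with invented numerical data), the formation of the landscape amounts to filtering the available perspectives by their propagation delays and the stability spans of their source components, as in the following Python sketch.

```python
# Sketch (our illustration; the numeric data are invented) of the formation of the
# actual landscape of a CR: a perspective of a component B is kept only if its mean
# propagation delay fits within the actual present and the stability span of B
# exceeds the length of the step.

def form_landscape(perspectives, actual_present, step_length):
    """perspectives: list of dicts with keys
    'component', 'mean_delay', 'stability_span', 'level_adjacent' (bool)."""
    return [
        p["component"]
        for p in perspectives
        if p["level_adjacent"]
        and p["mean_delay"] < actual_present
        and p["stability_span"] > step_length
    ]

perspectives = [
    {"component": "B1", "mean_delay": 0.2, "stability_span": 12.0, "level_adjacent": True},
    {"component": "B2", "mean_delay": 3.0, "stability_span": 12.0, "level_adjacent": True},   # too slow
    {"component": "B3", "mean_delay": 0.1, "stability_span": 0.5,  "level_adjacent": True},   # too unstable
    {"component": "B4", "mean_delay": 0.1, "stability_span": 12.0, "level_adjacent": False},  # wrong level
]
assert form_landscape(perspectives, actual_present=1.0, step_length=5.0) == ["B1"]
```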

2. A strategy is chosen on this landscape L (using the perspectives coming from the part of the memory accessible in L), and its commands are transmitted to the effectors. The anticipated landscape for the end of the step is modeled by the complexification of L with respect to this strategy. But the objectives are not always realized.

The result is evaluated at the level of the CR by a comparison functor from the anticipated landscape to the landscape effectively constructed at the next step; it detects the errors and their possible corrections. The memorization of the strategy and its result will be an objective of the CR at this next step.

3. Some temporal constraints must be respected for the step to be completed without problem. Let d(t) denote the period of the CR at t, which is the mean length of the steps of the CR preceding t (the mean being computed over all the steps covering the preceding step of a higher CR whose period is of a larger order of magnitude), u(t) the mean propagation delay of the links occurring in the actual landscape of the CR, and v(t) the mean stability span of the components used in L and in the strategy. Then the structural temporal constraints of the CR are:

For almost all t (i.e., except on a set of measure 0), the order of magnitude of the period is larger than that of u(t) and smaller than that of v(t):

u(t) << d(t) << v(t).

If these constraints cannot be respected during several steps, there is a dyschrony for the CR. In this case, it might be necessary to change its period later on so that the constraints can again be respected (we then speak of a de/resynchronization).
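As a toy illustration (ours; the factor chosen for 'an order of magnitude' and the number of failing steps tolerated before declaring a dyschrony are arbitrary assumptions), these constraints can be monitored step by step:

```python
# Toy sketch (our assumptions): monitoring the structural temporal constraints
# u(t) << d(t) << v(t) of a CR, and flagging a dyschrony when they fail on
# several consecutive steps. "An order of magnitude" is taken here as a factor 10.

def constraints_ok(u, d, v, factor=10.0):
    """u: mean propagation delay, d: period, v: mean stability span at a given step."""
    return u * factor <= d and d * factor <= v

def detect_dyschrony(steps, max_failures=3):
    """steps: list of (u, d, v) per step of the CR; dyschrony if the constraints
    fail on max_failures consecutive steps (the threshold is our assumption)."""
    consecutive = 0
    for (u, d, v) in steps:
        consecutive = 0 if constraints_ok(u, d, v) else consecutive + 1
        if consecutive >= max_failures:
            return True
    return False

# A lengthening of the propagation delays u eventually violates u << d:
steps = [(0.1, 5.0, 100.0), (0.3, 5.0, 100.0), (0.8, 5.0, 100.0),
         (1.0, 5.0, 100.0), (1.2, 5.0, 100.0)]
assert detect_dyschrony(steps)   # a de/resynchronization (increase of d) is then needed
```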

c) Dialectics between CRs

1. At each date, the strategies selected on the actual landscapes of the various CRs are relayed to the system itself, via the distortion functors of these landscapes. If all the relayed strategies are realizable and compatible, the strategy S which will be realized on the system will be their union. If not, there will be a competition between these strategies, and S will result from the interplay among the strategies which will retain as many as possible of the objectives of the CRs and only eliminate those which are not compatible (whence a fracture in the corresponding landscapes). The CRs participate in this process with unequal weights depending on their level and their period.

2. For instance, let us consider two heterogeneous CRs, say a 'micro CR' of a lower level with a short period, and a 'macro CR' of a higher level with a much longer period. Both form, in their respective landscapes, very different representations of the system and, in particular, of each other. The microlevel is characterized by short steps with fast change; the steps at the macrolevel are much longer, and the microchanges are not transmitted to the macrolandscape in 'real time' but are only known by the accumulation of their effects during the macrostep, because of the propagation delays. Thus the representation that the macroagents have of the micro CR becomes more and more inadequate, and the risk of fracture increases in the macrolandscape. To repair such a fracture and maintain its homeostasis, the macro CR will have to change its strategy, and possibly modify its period so that its temporal constraints are respected anew. This might result in the imposition of a new strategy on the micro CR, and later on have consequences for the whole system, e.g., causing a cascade of fractures for higher and higher CRs.

In practice, the changes in higher CRs are slower to take place, but their consequences are more important, because they affect the other levels more deeply. Their fractures can also have a creative role if they force a complete reappraisal of the situation and the recourse to original strategies.

d) Operation of the Memory

The memory plays a main role in the choice of strategies by the CRs and in their interplay: the recall of the records stored in it makes it possible to recognize objects or situations already met, and to select strategies already applied, taking their results into account. And it is developed and modified to adjust to new conditions. One of the objectives of the global strategy obtained after the interplay among the strategies will be the formation of new records which store the preceding situation, the strategy and its result.

1. For instance, let us describe how a new signal C issued from the environment (e.g., an unknown object) or from the system itself (a command of effectors, a sensation,…) will be memorized. C is internally represented by the formation of a pattern P of linked components (we say it is 'activated' by C), which remains long enough in the state-categories; its objects can be direct receptors of the signal, or components related to them, such as already formed records of parts of C.

A particular CR, say E, can only detect some attributes of a signal (e.g., color, shape,… in a neural system). Thus C is observable in the actual landscape L of E only through a pattern pE of perspectives issued from a (possibly empty) sub-pattern PE of P. This pattern pE will be distinguished by E because it remains synchronously activated during the actual present of E. As one of the objectives of a CR is to memorize new patterns, the following strategy of E will require that the pattern pE be bound. This is relayed to the system by the command to bind the pattern PE, the image of pE by the distortion functor.

We say that C has an E-record if there is a component ME in the memory which is a colimit of the pattern PE and which has a perspective mE, a colimit of the pattern pE of perspectives (cf. Figure 1).

At the same time, the other CRs which observe perspectives of P form records with respect to the attributes they detect. The global strategy of the system after the interplay among the strategies may bind all the corresponding ME into a colimit of P itself, called the record of C. It will emerge as a new object in the memory, and then it takes its own complex identity (cf. a, 2).

A later presentation of C will re-activate the pattern P, hence also the sub-pattern PE and its colimit ME, which will become observable in the actual landscape of E via its perspective, leading to the recognition of the attributes of C detected by E; and the same for the other CRs, so that the interplay among the strategies will bind the various ME, thus recalling the record M of C. This recall of the signal is only possible if the CRs can coordinate their actions, while respecting their respective structural temporal constraints.

Once it has been formed, the record M can participate in the memorization of a more complex signal having C as one of its parts.

2. The same process applies to the development and the memorizing of a new strategy representing a certain action, which can then be taken as one of the elements of a more complex activity. Generally, the learning of a new skill or action A will rest on its decomposition into already known simpler actions; under the effect of the interplay among the strategies of the different CRs, these will be coordinated into patterns which, being reinforced by repetition, acquire a colimit memorizing them. These new records can themselves be assembled into more complex patterns, which are memorized in the form of iterated colimits. And so on, until the formation of an iterated colimit A memorizing the action.

Now, A will take its own identity, independent of the ramification used to construct it initially, so that it can then be activated through various parameters for better adapting to the context. Indeed, the recall of A will be done by the unfolding of any one of its ramifications, say (R,(Ri)) for a ramification of length 2. For that, different CRs will simultaneously activate (through their current landscapes) the objects of a pattern R having A as its colimit; then, at a following stage, each of these objects will in its turn activate, via lower CRs, one of the patterns Ri of which it is the colimit. The choice of the ramification is determined gradually, from the top downwards: first the choice of the pattern R, then, for each of its objects, the choice of a pattern Ri... and so on if the ramification goes 'deeper'. Which ramification is finally unfolded will depend on the constraints of any nature imposed by the context, and its choice will be specified or changed at each level during the realization of the action, under the effect of the interplay among the strategies of the CRs, taking into account their own structural temporal constraints. If these cannot be respected, the action will fail.

For example, before knowing how to walk, a child learns how to coordinate various more or less innate simple movements to stand up and advance a leg when he is held, which leads to the memorizing of the corresponding strategies in the motor areas. Then these strategies are themselves coordinated into patterns which acquire their own colimits, making it possible to take a step without losing balance. And so on, until the formation of a complete strategy of walking, usable under the most varied conditions [28]. But if the child tries to advance too quickly, the motor coordination is lost, and he is likely to fall.

3. The development of the memory by the formation of records of increasing orders, memorizing the signals, the situations or the strategies, results from a succession of complexifications which also introduce the compatible (simple or complex) links between the records. In particular, the record M of a signal C can be connected in the memory by a (simple or complex) link f to a strategy A in response to this signal. A later presentation of C will recall M and, through the link f, the strategy A. This will be done through the intermediary of the different CRs and the interplay among their strategies, as indicated above, with a risk of fracture if the structural temporal constraints of some CRs cannot be respected.

For example, the sight of a prey C will have no effect on a satiated animal. But if the animal is hungry, it starts a strategy to catch C; its realization requires a coordination between the visual CRs which determine the location and the size of the prey, and the motor CRs which control the movements to catch it. But if the prey moves very quickly, or in an unforeseen way, the information coming from the visual CRs on its position will arrive too late at the motor CRs for them to adjust the movement, and the prey will escape.

e) Emergentist reductionism

1. We have just seen that the development of the memory proceeds by iteration of the process of complexification. Since the MES satisfies the Multiplicity Principle, a succession of complexifications is not reducible to a single complexification, and it leads to the emergence of components of increasing orders, connected by simple or complex links (cf. Section 3, d-3). This explains why any representation of the system by the traditional models based on differential equations (or dynamical systems) can be valid only "locally and temporarily" (as suggested by Rosen [39]; cf. also Matsuno [35]). Indeed, such a model can describe only one particular process of complexification, entirely determined by the initial state (the category that one complexifies) and the strategy (which fixes the parameters), and not a non-reducible succession of complexifications in which objects of increasing orders emerge at each stage.

2. There remains the problem of knowing how the properties of a higher level component are connected to those of the lower level elements used to construct it, and how new properties can emerge for such a component compared to the lower levels.

- Let us first consider the passage from level n to level n+1. A component of level n+1 is constructed as the colimit of a pattern of linked objects of level n; therefore its properties are determined by these objects, for they correspond to the collective links of this pattern. The same holds for an n-simple link, which binds a cluster of the preceding level. The situation is different for an n-complex link. Such a link from N to N" is the composite of a (P,P')-simple link g from N to N' and of a (Q',P")-simple link g' from N' to N", with a complex switch between P' and Q'. The properties of this link derive from the local properties of the two clusters of level n that g and g' bind, but also from the fact that the patterns P' and Q' have the same colimit. This last condition means that both patterns impose the same constraints on each object of level less than or equal to n, which takes into account the entire level n. Thus the constraints imposed on N and N" by the complex link gg' cannot be reduced to local constraints imposed on their components in P and P", but emerge from the total structure of level n.

- This has consequences for the passage from level n to level n+2. An object A of level n+2 is obtained in two stages starting from level n, as the 2-iterated colimit of a 2-ramification (R,(Ri)). So if some distinguished links in R are complex, it follows from what precedes that they impose on A properties emerging from the total structure of level n, and not coming only from the 'local' properties of the components of A at this level n (as the 'hard' reductionist program would assert).

- And the situation repeats for the passage to higher levels. Thus, the lower levels make it possible to construct a higher level, but on the condition of using, for each element, the holistic structure of each intermediate level and not only the 'local' structure of its own components of the lower levels. We can speak of an emergentist reductionism (specifying the concept suggested by Mario Bunge [7]).

 

5. Application to a neural system

a) Assemblies of neurons

The response of a neuronal system to a simple signal is the activation of a specialized neuron; for example, in the visual areas there exist neurons activated by a segment with a certain direction ('simple' cells), or by an angle ('complex' cells),... [27]. But more complex signals, apart from a few exceptions (e.g., a neuron representing a hand holding a banana in the monkey [24]), do not have their own "grandmother neuron". The development of neuronal imagery shows that complex signals, or motor programs, are represented by the short synchronization of a specific assembly of neurons. And learning would consist in the formation of such synchronous assemblies, under the effect of the reinforcement of the synapses between their neurons, following the rule already suggested by Hebb in 1949 [26]: a synapse between two neurons is strengthened if the two neurons are active at the same time, and its strength decreases if one is active while the other is not.
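As an illustration (ours; the learning rate and the binary activity criterion are assumptions not specified in the rule itself), the Hebb rule quoted above can be written as a simple weight update:

```python
# Minimal sketch (our assumptions: learning rate, binary activity) of the Hebb rule
# as quoted above: a synapse is strengthened when both neurons are active together,
# and weakened when one is active while the other is not.

def hebb_update(weight, pre_active, post_active, rate=0.1):
    if pre_active and post_active:
        return weight + rate          # both active: reinforcement
    if pre_active != post_active:
        return weight - rate          # one active, the other not: weakening
    return weight                     # both silent: unchanged

w = 0.5
for pre, post in [(True, True), (True, True), (True, False), (False, False)]:
    w = hebb_update(w, pre, post)
assert abs(w - 0.6) < 1e-9            # 0.5 + 0.1 + 0.1 - 0.1 + 0.0
```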

These processes can be described in a MES modeling a neuronal system [21], and this model makes it possible to understand how they form the basis for the development of increasingly complex mental objects, enjoying a great flexibility.

b) MES of neurons

In such a MES, the state-categories will be obtained by successive complexifications of a category of neurons defined as follows:

1. We consider the graph whose vertices are the neurons existing at the time t, and whose arrows f from N to N' are the synapses having N and N' respectively as their presynaptic and post-synaptic neurons. The category of paths of this graph is weighted, the weight of a synaptic path being related to the probability that the activity of N (determined by its instantaneous frequency of impulses) is propagated to N', and to the propagation delay of this activation. The category of neurons at t is obtained from this category by identifying two synaptic paths from N to N' with the same weight.
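As an illustration (ours; taking, as a plausible convention not fixed by the text, the product of the strengths and the sum of the delays as the weight of a path), the identification of two synaptic paths with the same weight can be sketched as follows.

```python
# Sketch (our assumptions: strengths composed by product, delays by sum) of the
# weighted category of neurons: the weight of a synaptic path, and the
# identification of two parallel paths carrying the same weight.

# Synapses of a toy graph: name -> (presynaptic, postsynaptic, strength, delay_ms)
synapses = {
    "s1": ("N1", "N2", 0.8, 2.0),
    "s2": ("N2", "N3", 0.5, 3.0),
    "s3": ("N1", "N3", 0.4, 5.0),     # a direct synapse parallel to the path (s1, s2)
}

def path_weight(path):
    """Weight of a synaptic path: strength = product, delay = sum (our convention)."""
    strength = 1.0
    delay = 0.0
    for s in path:
        _, _, p, d = synapses[s]
        strength *= p
        delay += d
    return (round(strength, 9), round(delay, 9))

def same_morphism(path_a, path_b):
    """Two synaptic paths with the same source, target and weight are identified
    in the category of neurons (quotient of the category of paths)."""
    src_a, tgt_a = synapses[path_a[0]][0], synapses[path_a[-1]][1]
    src_b, tgt_b = synapses[path_b[0]][0], synapses[path_b[-1]][1]
    return (src_a, tgt_a) == (src_b, tgt_b) and path_weight(path_a) == path_weight(path_b)

# The direct synapse s3 and the composite path (s1, s2) have the same weight
# (0.8 * 0.5 = 0.4 and 2 + 3 = 5), so they define the same morphism N1 -> N3.
assert same_morphism(("s1", "s2"), ("s3",))
```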

2. An assembly of neurons is represented by a pattern P in such a category of neurons. Its synchronization is then modeled by the emergence of a colimit of P in a complexification of the category; this colimit operates as a single 'neuron of a higher order' integrating the assembly and taking its own identity; it is called a category-neuron (or cat-neuron). The construction of the complexification determines which are 'the good' links between cat-neurons (namely the simple and complex links), and therefore between synchronous assemblies of neurons, which solves a problem raised in Neuroscience [34]. And, by iteration of the process of complexification, one can define cat-neurons of order 2 representing assemblies of assemblies of neurons, then of order 3, and so on, modeling increasingly complex mental objects, or cognitive processes of a higher order. This makes it possible to describe explicitly how an "algebra of mental objects" (in the sense of Changeux [10]) develops. In particular, the extension of the memory, under the effect of the different CRs, leads to the emergence of cat-neurons of increasing orders. Among them, the cat-neurons which 'memorize' the strategies carried out and their results form the procedural memory.

c) Semantics

The neuronal system of a higher animal will be able to classify the items it recognizes by the formation of classes of invariance.

1. For a lower CR, say E, this classification will be only 'pragmatic': two items 'are acted' as equivalent when their traces in the landscape activate the same pattern of agents; for example, it is the same pattern of agents of a 'color' CR which is activated by all the blue objects. (This is modeled [18] using the concept of shape as defined in Borsuk's shape theory [12].)

2. But this classification will take on a 'meaning' only at the level of a higher CR, with a longer period, which can determine what the different items that E has classified together have in common, and memorize such a class in the form of an object called an E-concept. This object will be modeled by the limit of the pattern of agents of E activated by all the items of the class, and its various instances form the invariance class of the concept (for example, for the blue color-concept, all the representations of blue objects).

3. The CR-concepts relating to the different CRs will form a Semantic Memory, which is developed under the effect of successive complexifications, by the addition of concepts classifying several attributes (such as a 'blue triangle'), then of more abstract concepts obtained as limits of patterns of such 'concrete' concepts linked by complex links. A concept can be seen as a prototype for a class of items having a 'family resemblance' (in the sense of Wittgenstein), and it does not presuppose the existence of a language. The activation of a concept rests on a double indetermination: first the choice of a particular instance of the concept, then the choice of a particular decomposition of this instance. It follows that the introduction of a semantics makes the interplay among the strategies of the different CRs still more flexible; indeed, the procedural memory becomes semantic, so that, in higher CRs, the choice of a strategy can be made in the form of a concept, without specifying a particular object of the invariance class of the concept. This gives a new degree of freedom in the formation of the effective strategy on the system, since it makes it possible to activate the instance best adapted, taking into account the strategies relayed by the other CRs. For example, the command for lifting an object will be modulated according to the shape and the size of the object to be lifted.
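As a toy illustration (ours, with invented instances and decompositions), this double indetermination can be sketched as a two-stage choice constrained by the context:

```python
# Toy sketch (our illustration) of the double indetermination in the activation of a
# concept: first choose an instance of its invariance class compatible with the
# context, then choose a decomposition (ramification) of that instance.

concept_lift = {
    # instance of the class -> available decompositions into motor sub-strategies
    "lift_small_object": [["pinch_grip", "raise_arm"], ["two_finger_grip", "raise_arm"]],
    "lift_large_object": [["two_hand_grip", "raise_arms"], ["shoulder_carry"]],
}

def activate(concept, instance_ok, decomposition_ok):
    """Return (instance, decomposition) satisfying the context constraints, or None."""
    for instance, decompositions in concept.items():
        if not instance_ok(instance):
            continue
        for decomposition in decompositions:
            if decomposition_ok(decomposition):
                return instance, decomposition
    return None

# Context: the object is large, and one hand is already occupied.
choice = activate(
    concept_lift,
    instance_ok=lambda i: "large" in i,
    decomposition_ok=lambda d: "two_hand_grip" not in d,
)
assert choice == ("lift_large_object", ["shoulder_carry"])
```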

4. During the evolution of a MES, regrouping of emerging components and links can lead to the formation of new CRs of increasingly high levels, developing particular aptitudes.

In particular, a MES equipped with a semantic memory can develop 'conscious' CRs, able to have an internal view of the semantics and of the concept of time. We propose to characterize such a CR by the following capacities (cf. the article on consciousness in [21]):

- to retrospectively extend its current landscape to lower levels of the recent past by an increase in attention, in particular after a fracture;

- to operate a process of abduction in this extended landscape to find possible causes of the fracture;

- to plan a choice of strategies covering several future steps by the formation of 'virtual' landscapes in which the strategies (selected in the form of concepts) can be tested without energy cost.

From the neurological point of view, these properties rest on the existence of functional loops between various areas of the brain, which form what Edelman [14] calls the "loop of consciousness".

d) Mind-body problem

The representation of a mental state by a cat-neuron of a higher order leads to a new approach to the problem of the identity between mental states and physical states of the brain.

Indeed, a physical state, as seen by medical imagery, corresponds to the activation of a simple assembly of neurons (modeled by a cat-neuron of order 1). But a cat-neuron of a higher order is not directly reducible to such an assembly: its activation requires several stages, passing through the intermediate levels of one of its ramifications down to the level of the physical states; and, at each stage, it can be propagated by one or the other of the non-equivalent decompositions of multifold objects, with complex switches between them whose origin can be random (noise), quantum (according to Eccles [13]) or controlled. Although this process represents a well determined physical 'event', it is not identified with a physical 'state': mental states emerge in a dynamic way (through the gradual unfolding of a ramification) from the physical states of the brain, without being identical to them. This would define an emergentist monism in the sense of Bunge [7].

Acknowledgments. The drafting of this article profited from stimulating exchanges with Jerry Chandler, George Farre and Brian Josephson.

 

Appendix

To a strategy S on a category K, one associates the following sketch:

- it contains K as a subcategory,

- the other objects are: the elements of A and an object CP for each pattern P to bind. Moreover, for each object Ni of such a pattern P, we add an arrow di from Ni to CP, so that the family (di) defines a cone cP with basis P and vertex CP.

- the distinguished cones of the sketch are: the 'added' cones cP and the cones to be transformed into colimit-cones; and one imposes that the objects of O and the colimits of the patterns Q be transformed into initial objects.

The complexification is the full subcategory of the prototype of this sketch (in the sense of [5]) having for objects the non-initial objects.

 

References

1. Auger, P. (1989), Dynamics and thermodynamics in hierarchically organized systems, Pergamon Press.

2. Baas, N. A. (1992), Hyperstructures - a framework for emergence, hierarchies and complexity, in "Proceedings du Congrès sur l'émergence dans les modèles de la cognition", Télécom. Paris, 67-93.

3. Bastiani(-Ehresmann), A. (1964), Applications différentiables..., Jour. d'Analyse Math. Jérusalem XIII, 1-113.

4. Bastiani(-Ehresmann), A. (1967), Sur le problème général d'optimisation, in "Identification, Optimalisation et stabilité des systèmes automatiques", Dunod, 125-135.

5. Bastiani(-Ehresmann), A. & Ehresmann, C. (1972), Categories of sketched structures, Cahiers Top. et Géom. Diff. XIII-2; reprinted in "Charles Ehresmann, Oeuvres complètes et commentées", Partie IV-1, Amiens (1984).

6. Bertalanffy, L. von (1973), General System Theory, Harmondsworth, Penguin.

7. Bunge, M. (1979), Treatise on Basic Philosophy, Vol. 4; Reidel, Dordrecht.

8. Chandler, J.L.R. (1997), Semiotics of complex systems: a hierarchical notation for the mathematical structure of a single cell, in "Proceedings Conference Euro'97".

9. Chandler, J., Ehresmann, A.C. and Vanbremeersch, J.-P. (1995), Contrasting two representations of emergence of cellular dynamics, in "Proceedings Symposium on Emergence, Inter'Symp'95", The International Institute for advanced studies in Systems research and Cybernetics, Windsor.

10. Changeux, J.-P. (1983), L'homme neuronal, Fayard, Paris.

11. Chapline, G. (1999), Is theoretical physics the same thing as mathematics, Physics Report 315, 95-105.

12. Cordier, J.-M & Porter, T. (1989), Shape Theory, Wiley.

13. Eccles, J.C. (1986), Do mental events cause neural events? Proc. R. Soc. Lond. B227, 411-428.

14. Edelman, G.M. (1989), The remembered Present, Basic Books, New York.

15. Charles Ehresmann, Oeuvres complètes et commentées, Partie I, Amiens, 1983.

16. Ehresmann, A.C. and Vanbremeersch J.-P. (1987), Hierarchical evolutive systems, Bull. Math. Biol. 49, 13-50.

17. Ehresmann, A.C. and Vanbremeersch J.-P. (1991), Un modèle pour des systèmes évolutifs avec mémoire..., Revue Intern. de Systémique 5 (1), 5-25.

18. Ehresmann, A.C. and Vanbremeersch J.-P. (1992), Outils mathématiques pour modéliser les systèmes complexes, Cahiers Top. et Géo. Diff. Cat. XXXIII, 225-236.

19. Ehresmann, A.C. and Vanbremeersch J.-P. (1993), Memory Evolutive systems: An application to an aging theory, in "Cybernetics and Systems" (Ghosal & Murthy, Ed.), Tata McGraw-Hill Pub. Co., New Delhi, 90-92.

20. Ehresmann, A.C. and Vanbremeersch J.-P. (1996), Multiplicity Principle and emergence in MES, SAMS 26, 81-117.

21. Ehresmann, A.C. and Vanbremeersch J.-P. (1999), Site Internet: http://perso.wanadoo.fr/vbm-ehr

22. Eilenberg, S. & Mac Lane, S. (1945), General theory of natural equivalences, Trans. Am. Math. Soc. 58, 231-294.

23. Farre, G.L. (1994), Reflections on the question of emergence, in "Advances in Synergetics", Volume I, The International Institute for advanced studies in Systems research and Cybernetics, Windsor.

24. Gazzaniga, M.S. (1985), The social brain, Basic Books, New York.

25. Goguen, J.A. (1970), Mathematical representation of hierarchically organized systems, in "Global Systems Dynamics", Ed. Attinger, Basel, 65-85.

26. Hebb, D. O. (1949), The organization of behaviour; Wiley, New York.

27. Hubel, D.H. and Wiesel, T.N. (1962), Receptive fields..., J. Physiol. 160 (1).

28. Josephson, B. (1998), Extendibility of activities and the design of the nervous system, in "Proceedings third international conference on emergence ECHO III" (ed. Farre), Helsinki.

29. Kainen, P.C. (1990), Functorial cybernetics of attention, in "Neurocomputers and Attention II" (ed. Holden and Kryukov), Manchester University Press, Chap. 57.

30. Kan, D. M. (1958), Adjoint Functors, Trans. Am. Math. Soc. 89, 294-329.

31. Lawvere, F.W. (1972), Introduction: Toposes, Algebraic Geometry and Logic, Lecture Notes in Math. 274, Springer, 1-12.

32. Louie, A.H. (1983), Categorical System Theory, Bull. math. Biol. 45, 1029-1072.

33. Mac Lane, S. (1991), Categories for the working mathematician, Springer.

34. von der Malsburg, C. & Bienenstock, E. (1986), Statistical coding and short-term synaptic plasticity, in "Disordered systems and biological organization", NATO ASI Series 20, Springer.

35. Matsuno, K. (1989), Protobiology: Physical basis of Biology, CRC Press, Boca Raton.

36. Minsky, M. (1986), The society of mind, Simon & Schuster, New York.

37. Paton, R.C (1997), Glue, verb and text metaphors in Biology, Acta Biotheoretica 45, 1-15.

38. Rosen, R. (1958), The representation of biological systems from the standpoint of the Theory of Categories, Bull. math. Biophys. 20, 245-260.

39. Rosen, R. (1986), Theoretical Biology and complexity, Academic Press.

40. Salthe, S.N. (1985), Evolving hierarchical systems: their structure and representation, Columbia University Press.

41. Thom, R. (1988), Esquisse d'une Sémiophysique, InterEditions, Paris.

42. Vallée, R. (1995), Cognition et Système, L'Interdisciplinaire, Limonest.

43. Walters, R.E.C. (1991), Categories and computer science, Cambridge University Press.

44. Zeeman, E.C., Catastrophe Theory, selected papers, Addison-Wesley (1977).