Physica D 204 (2005) 70–82
Ordered asynchronous processes in multi-agent systems
David Cornforth a,∗ , David G. Green b , David Newth a,c
a
School of Environmental and Information Sciences, Charles Sturt University, P.O. Box 789, Albury, NSW 2640, Australia
b School of Computer Science and Software Engineering, Monash University, Clayton, Vic. 3800, Australia
c CSIRO Centre for Complex Systems Science, GPO Box 284, Canberra, ACT, Australia
Received 7 April 2004; received in revised form 1 February 2005; accepted 6 April 2005
Communicated by S. Kai
Abstract
Models of multi-agent systems usually update the states of all agents synchronously, but in many real life systems, agents behave
asynchronously. Relatively little is yet known about the dynamic characteristics of asynchronous systems. Here we compare
synchronous, random asynchronous, and ordered asynchronous updating schemes. Using one-dimensional (1D) cellular automata
as a case study, we show that the type of update scheme strongly affects the dynamic characteristics of the system. We also
show that global synchronisation can arise from local temporal coupling. Furthermore, it is possible to switch between chaotic,
cyclic and modular behaviour by varying a single parameter, which suggests a possible mechanism by which environmental
parameters influence emergent structure. We conclude that ordered asynchronous processes with local temporal coupling play
a role in self-organisation within many multi-agent systems.
© 2005 Elsevier B.V. All rights reserved.
PACS: 05.45.Xt; 05.65.+b; 07.05.Tp; 89.75.Fb
Keywords: Asynchronous; Multi-agent; Cellular automata; Models
1. Introduction
From crystal lattices to human societies, many natural and artificial phenomena can be represented as
multi-agent systems. In general, multi-agent systems
∗ Corresponding author. Tel.: +61 2 6051 9652;
fax: +61 2 6051 9897.
E-mail addresses: [email protected]
(D. Cornforth), [email protected] (D.G. Green),
[email protected] (D. Newth).
0167-2789/$ – see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.physd.2005.04.005
consist of simple processing elements, or agents (such
as atoms, cells or people), and the links between
the agents that form a network [1]. Common representations include cellular automata (CA), random
Boolean networks (RBNs) and artificial neural networks (ANNs). Studies have shown that these systems
display a rich variety of behaviour, including stable
points, cycles, complexity and chaos.
Models of multi-agent systems have traditionally
treated time as discrete and state updates as occurring
synchronously and in parallel. Implicitly, they assume
that components are updated in a single pass, and before any of the new states influence other nodes. So
predominant is synchronous updating in models that
asynchronous behaviours are usually regarded as artefacts or programming errors.
Models of multi-agent systems are essentially modelling many processes that occur in parallel, but parallel
does not necessarily mean synchronous. Implementations using synchronous updating divide updating into
two phases. The first phase, interaction, calculates the
new state of each element based on the neighbourhood
and the update function. These state values are held in
a temporary store. In the second phase, update, the new
state values are copied back to the original elements.
In contrast, asynchronous updating does not separate
these two phases: changes in state are implemented
immediately. We can summarise this difference as follows:
Synchronous:
(interaction): ∀i ∈ N : τi^(t+1) = f(σk^(t) ∈ Ki)
(update): σ̂^(t+1) = τ̂^(t+1)
Asynchronous:
∀i ∈ N : σi^(t+1) = f(σk^(t) ∈ Ki)
where σ̂ (t) is the vector of states of the elements at
time t, τ̂ (t) a temporary copy used in the updating, i the
index to an individual element, N the total number of
elements in the model, and f() a function that calculates
the new state of an element based on the current state
of the elements in set Ki , where |Ki | ≤ N.
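The two-phase/one-phase distinction above can be sketched in code (Python is used here purely for illustration; the xor_rule local function is a hypothetical example rule, not one studied in the paper):

```python
def step_synchronous(states, f):
    # Interaction phase: all new states are computed from the *old* vector
    # and held in a temporary store tau (the "offline" copy).
    tau = [f(states, i) for i in range(len(states))]
    # Update phase: the temporary store replaces the old state vector.
    return tau

def step_asynchronous(states, f, order):
    # No temporary store: each update is applied immediately, so later
    # updates in the same sweep already see the new values.
    for i in order:
        states[i] = f(states, i)
    return states

def xor_rule(states, i):
    # Illustrative local rule (our assumption, not from the paper):
    # XOR of the two neighbours, with periodic boundaries.
    n = len(states)
    return states[(i - 1) % n] ^ states[(i + 1) % n]
```

Running both steps from the same initial vector shows how immediately applied changes propagate within a single sweep under asynchronous updating, whereas the synchronous sweep reads only old states.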
Models of synchronous updating use a temporary
store to hold new values of state variables “offline”
until calculation of all new states is complete. They
also require a global clock, ensuring all updates occur
simultaneously. However, the requirement for a clock
signal to be propagated to all agents in a system instantaneously is unrealistic in many cases, such as communication networks, or information flow within a factory
shop floor.
Several authors (e.g., [2–4]) have argued that asynchronous models are viable alternatives to synchronous
models. We take their argument further and suggest that
asynchronous models should be preferred where there
is no evidence of a global clock.
In this paper, we examine the nature of state updating
in real life systems, and in particular the role of asynchronous processes. We argue that asynchronous updating is ubiquitous in both natural and artificial networks. We describe two classes of asynchronous behaviour: random asynchronous (RAS), and
ordered asynchronous (OAS) updating. As a practical
case study, we demonstrate the effects of these update
schemes on simple CA models.
Recently, some studies have begun to investigate
the behaviour of systems in which the updating of the
states of component processes is asynchronous. Most
of this work has focused on RAS updating, where the
node to be updated at any stage of the model is selected
randomly. However, many processes observed in real
life are better described as OAS processes, that is, updates do not occur completely at random. As we shall
show, OAS updating is implicit in many models, but
has usually been obscured by the manner in which the
models are implemented. The widespread occurrence
of OAS has not been recognised. We argue, therefore,
that for many systems, OAS models provide better representations of processes than synchronous models. In
addition, studying the behaviour of this class of models will provide a deeper understanding of behaviour in
many systems. Finally, we show that in many systems
OAS updating plays a role in self-organisation.
2. Asynchronous systems
Asynchronous updating is very common in natural
and artificial systems. Large-scale integrated circuit design provides a good example. In the design of large-scale high-speed electronic logic circuits, a central oscillator provides a clock signal that is distributed to all
areas of the chip. All logic gates across the chip are
configured to change their state in synchrony with the
global clock. In this way, indeterminate states that are
caused during the clock transition are not propagated
to other parts of the chip. However, problems arise with
the growing trend in electronic chip design towards bigger and more complex devices fabricated on a single
chip [5]. Large chips and fast clock speeds mean that
the time interval between clock ticks is shorter than
the time taken to propagate the clock signal across the
whole chip. Large chips can no longer be made with
the expectation that a clock signal will propagate to
all parts of the chip with insignificant delay. An asynchronous solution is to divide the chip into modules. In
this scenario, each chip component has its own internal
clock, which can satisfy the demands of synchronous
logic within the module. The modules then communicate with each other asynchronously.
Having removed the synchronisation, the updating
of agent states can follow many different patterns [6].
Random asynchronous updating includes any process
in which at any given time individuals to be updated are
selected at random according to some probability distribution. Conversely, ordered asynchronous updating
includes any process in which the updating of individual states follows a systematic pattern. Several variations on these two schemes have been identified, and
we will show that the updating scheme chosen has a
significant effect upon the dynamic behaviour of the
system. Here we will consider a total of six specific
update patterns, including two RAS schemes and three
OAS schemes (see Fig. 1):
• The synchronous scheme, in which all individuals
are updated in parallel at each “time step”.
• The random independent scheme is an example of
a RAS process. At each time step, a node is chosen
at random with replacement. Social networks (see
below) often follow this scheme. An implementation of this scheme by Kanada [3] provided evidence that randomised automata models are more
likely to generate edge of chaos patterns than models using a fixed update order. This finding is highly
significant, as systems poised on the edge of chaos
produce the richest, most interesting behaviour [7].
Another implementation [8] concluded that asynchronous RBNs might be applicable to biological
systems where there are no special reasons to believe that some synchronising mechanism exists.
• The random order scheme is an example of a RAS
process. At each time step, all nodes are updated,
but in random order. At each time step, each node
is updated exactly once. This has previously been
implemented by Harvey and Bossomaier [8].
• The cyclic scheme is an example of an OAS process.
At each time step a node is chosen according to a
fixed update order, which was decided at random
during initialisation of the model [3]. Distribution
networks and token ring networks (see below) follow
this scheme.
• The clocked scheme is an example of an OAS process, and assigns a timer to each cell, so that updating is autonomous and proceeds at different rates for
different cells [2]. Low and Lapsley [9] used a similar scheme for modelling link utility and bandwidth
cost across a communications network, where the
goal is to calculate the bandwidth for each node that
maximises the sum of the utilities. Such a network
is necessarily asynchronous because the nodes communicate at different times and with different frequencies. This scheme represents the examples provided below, in collision sense computer networks,
ant colonies, neural tissue, forest ecosystems, and
fire spread. This scheme could be useful when investigating the independent timing that apparently
occurs in such systems. However, this does not reproduce the self-synchronising behaviour observed
in some of these examples.
• The self-sync scheme is similar to the clocked
scheme, but incorporates local synchrony. More recently, a number of schemes have appeared in which
the order of updating depends on local interactions.
Nehaniv [10] has demonstrated an asynchronous CA
model that can behave as a synchronous CA, due to
the addition of extra constraints on the order of updating. These constraints effectively provide a type
of local synchronisation. Another study by Clapham
[11] has shown that global synchrony can be observed in asynchronous models that employ local
synchrony. In this work we chose to implement local
Fig. 1. Simplified diagrams of the six updating schemes used in the experiments. The horizontal axis shows time, and marks indicate when the
cell is updated.
synchrony by using a coupled oscillator approach, as
this has not previously appeared in the CA literature.
The period of each timer is adjusted after an update
so as to more closely match the period of other cells
in its neighbourhood.
The following examples serve to clarify the above
definitions, as well as drawing attention to the wide
range of asynchronous processes found in natural and
artificial systems.
2.1. RAS updating in social networks
Social networks are composed of people interacting
via social contacts. The states of people include their
opinions and beliefs [12]. Interactions and change of
states may take place synchronously (e.g., change of
opinion due to mass media) or asynchronously. We
identify two types of RAS updating in social networks.
The first type occurs, for example, when people meet
by chance and subsequent conversations cause them to
re-evaluate their opinions. This is independent random
sampling, that is, an individual is chosen by random
with replacement. The probability of a state update is
independent of the number of previous state updates.
The second type occurs, for example, during attendance
at a polling booth. In this case, once a person has voted,
they are not allowed to vote again until the next election. In this scenario, the order of update is random, but
each individual is updated once and once only during
each round. At each stage, the individual to be updated
is chosen by random without replacement. Another example of the second type is vaccination, where the order
of people being vaccinated is determined at random,
but each person is only vaccinated once for a particular
disease.
2.2. OAS updating in distribution networks
A power distribution network consists of a number
of power stations, transmission lines, and local substations. The effect of any fault will propagate through
the system in a fixed order. This is different to the example of RAS updating considered above, as here the
order of updates is not random, but is fixed by the spatial characteristics of the system. This is a type of OAS
updating, and is typical of distribution networks. Other
examples include the propagation of signals through
the nervous system, the progress of work through an
assembly line, and traffic flow through road junctions.
In the latter case, a road section can assume several
states from “empty” to “jam”, and the effects of road
congestion spread from one section to another in a fixed
order, determined by the major routes through the city.
2.3. OAS communication in computer networks
Asynchronous updating in communications networks is in widespread use in spite of well-known problems such as packet collision [13]. Centralised control is undesirable, since state information cannot be
passed immediately to all nodes due to propagation delays [14]. Token ring networks employ a cyclic update
pattern similar to the distribution networks considered
above. In networks employing collision-sensing methods, each workstation can be considered as having its
own clock, which determines its rate of update.
2.4. OAS updating in forest ecosystems
The competition of different species within a forest ecosystem, coupled with catastrophic events such
as forest fires, leads to a complex system of interactions. Succession between different community classes
(such as a transition from rainforest to open sclerophyll
woodland) requires vastly different times to complete
[15]. A fire-induced transition from woodland to grassland may be virtually instantaneous, whereas a transition to mature rainforest might take hundreds of years
to complete. Recognition of the asynchronous nature
of forest succession led to the adoption of the semi-Markov model in this context [16,17]. Although OAS
updating is implicit in these models, forest succession
has not been widely recognised as belonging to this
category of processes.
Another topic studied in forest ecosystems is the
dynamics of bushfires, which propagate changes of state
(unburnt, burning, burnt) between plants. When a plant
ignites, its neighbours ignite asynchronously, with the
order determined by heat accumulation. Asynchronous
ignition patterns have been incorporated in cellular automata models of fire spread in several ways, for example, using a list of cells due to ignite [18], or by adding
extra states [19].
2.5. OAS updating in neural tissue
The behaviour of interconnected neurons leads to
global patterns of stimulation across the whole brain.
This activity does not exhibit stationary patterns, but
periodic, quasi-periodic and chaotic patterns [20].
There is no known mechanism such as a global clock
in the brain, but neurons are able to self-synchronise
by adjusting internal parameters.
2.6. OAS updating in ant colonies
Ants do not work constantly, but spend between 55
and 72% of their time resting, depending on species
[21]. Individual ants separated from the colony display
active and resting periods with an aperiodic or chaotic
pattern. However, the whole colony displays a synchronised periodic pattern of active and resting behaviour
[22,23]. The updating of states is asynchronous and is
ordered rather than being purely random. The use of
self-synchronisation suggests that some level of global
synchrony could be achieved in asynchronous models
by using modifications based on coupled oscillator theory [24].
3. Properties of asynchronous updating
The previous examples show that asynchronous processes are common in real life systems. The remainder
of this paper investigates some of the properties of asynchronous systems. Several studies have examined the
implications of alternatives to synchronous updating.
Most of these studies have considered RAS updating,
where at each time step, a single node is chosen at random from a uniform distribution, and updated.
Not surprisingly, RAS updating changes the characteristics of a system. For example, Harvey and Bossomaier [8] pointed out that stochastic updating in RBNs
results in the expression of point attractors only: there
is no repeatable cyclic behaviour, although they introduced the concept of loose cyclic attractors. However, the examples above show that cyclic behaviour is
known to occur in ant colonies and neural networks, and
is a requirement in logic circuits, which must be provided even in the absence of global synchrony. These
examples raise the question of how synchronised, cyclic
behaviour can be achieved in systems governed by
asynchronous processes.
Kanada [3] has shown that some one-dimensional
(1D) CA models that generate non-chaotic patterns
when updated synchronously generate edge of chaos
patterns when randomised. This raises the question
of whether asynchronous processes can duplicate the
same functions as synchronous systems. Some researchers have claimed that RAS models can exhibit
all the behaviour normally associated with synchronous
models. For example, Orponen [25] has demonstrated
that any synchronously updated network of threshold
logic units can be simulated by a network that has no
constraints on the order of updates. However, this work
depends on a carefully crafted pattern of network connectivity that is unlikely to be observed in natural systems.
Sipper et al. [26] investigated the evolution of nonuniform CAs that perform specific computing tasks.
These models relax the normal requirement of all nodes
having the same update rule. In their models, nodes
were organised into blocks. Nodes within a block were
updated synchronously, but blocks were updated asynchronously. They experimented with three schemes: (1)
at each time step, a block is chosen at random with replacement; (2) at each time step, a block is chosen at
random without replacement; and (3) at each time step,
a block is chosen according to a fixed update order. The
configuration of models was arrived at using an evolutionary algorithm. They conclude that synchronous
and asynchronous models can be evolved with equivalent computational properties, but models of the asynchronous type may require a larger number of nodes.
4. Experimental comparisons of different
updating schemes
To investigate the differences in behaviour that
result from the updating schemes described above,
we implemented all six schemes in a simple, one-dimensional CA model. We then performed a series of
experiments to illustrate the difference in performance
that result and to systematically test the effect of each
update scheme on all one-dimensional CAs.
4.1. Methods
All experiments were performed with a one-dimensional CA having 250 cells and two states, with
each cell connected to its two neighbours and itself.
This allows 256 possible local rules. Periodic boundary conditions were used. The states of all cells were
randomly initialised before each run, and all models
were evolved for 5000 time steps for each run. The
meaning of a time step is slightly different for different schemes. For the synchronous and cyclic schemes,
one time step is completed when each cell has been
updated once. For the other schemes, one time step is
completed when there have been 250 cell updates.
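The 256 possible local rules follow the usual Wolfram numbering: bit k of the rule number gives the new state for the neighbourhood whose (left, centre, right) states encode k in binary. A minimal sketch of decoding a rule and applying it with periodic boundaries (function names are ours):

```python
def make_rule(rule_number):
    # Decode a Wolfram rule number (0-255) into a lookup table mapping
    # each (left, centre, right) neighbourhood to a new cell state.
    table = {}
    for code in range(8):
        left, centre, right = (code >> 2) & 1, (code >> 1) & 1, code & 1
        table[(left, centre, right)] = (rule_number >> code) & 1
    return table

def local_update(states, i, table):
    # New state of cell i from its two neighbours and itself,
    # with periodic boundary conditions.
    n = len(states)
    return table[(states[(i - 1) % n], states[i], states[(i + 1) % n])]
```

For example, rule 90 decodes to the XOR of the two neighbours, which is one standard sanity check for the decoder.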
Implementation of some of the update schemes requires explanation of certain details. In the two RAS
schemes, for instance, we used a uniform probability
distribution to select individual agents to be updated.
In our implementation of the clocked scheme, the
period and phase of each timer was set at random.
The model is evolved by incrementing the values of
all timers at each time step, then checking the values.
Those timers that have exceeded the value of their period variable are updated in order. After a cell has been
updated, its timer is set to zero. It seems unlikely that
any living system would be able to sustain a repeatable
clock with infinite accuracy in its period. Therefore a
variant of this scheme, which has not appeared before
in the literature, is used here. At each update of a node,
random noise is added to the period for each node, to
simulate natural inter-cycle variability.
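Our reading of the clocked scheme just described can be sketched as follows. The base period range, the noise amplitude, and the lower bound placed on the period are all our assumptions, since the paper does not state the values it used; `table` is a neighbourhood lookup table as for any elementary CA rule:

```python
import random

def run_clocked(states, table, steps, rng, base_period=4.0, noise=0.5):
    # Per-cell timers: period and phase initialised at random.
    # base_period and noise are hypothetical values, not from the paper.
    n = len(states)
    period = [1.0 + base_period * rng.random() for _ in range(n)]
    timer = [rng.uniform(0.0, period[i]) for i in range(n)]  # random phase
    for _ in range(steps):
        # Increment the values of all timers, then check them in order.
        timer = [t + 1.0 for t in timer]
        for i in range(n):
            if timer[i] >= period[i]:
                # Timer expired: update the cell from its neighbourhood
                # (two neighbours and itself, periodic boundaries).
                states[i] = table[(states[(i - 1) % n], states[i],
                                   states[(i + 1) % n])]
                timer[i] = 0.0  # reset the timer after the update
                # Inter-cycle variability: jitter the period at each update.
                period[i] = max(1.0, period[i] + rng.uniform(-noise, noise))
    return states
```

The jitter on `period[i]` implements the observation above that no living system can sustain a clock of infinite accuracy.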
Our implementation of the self-sync scheme used a mechanism similar to the Kuramoto model of self-synchronising oscillators [24]:

θ̇i = ωi + Σj=1..N Γij(θj − θi)

where θi is the phase and ωi the natural frequency of oscillator i, and Γij(θj − θi) is a function of the phase difference between the two oscillators. This model is used when there is total coupling between all oscillators. For our model we modified this, using:

Γij(θj − θi) = β(θj − θi)

with β a simple gain term, so that

ωi^(t+1) = ωi^(t) + β(θi+1^(t) − θi^(t)) + β(θi−1^(t) − θi^(t))
When β = 0, the self-sync scheme is identical to the
clocked scheme.
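In implementation terms, the nearest-neighbour coupling can be sketched as an adjustment of each cell's timer period after it updates. This is our interpretation: the equation above is written for the frequency of each oscillator, and we restate it here in terms of the timer periods used by the clocked scheme (function name and signature are ours):

```python
def adjust_period(period, i, beta):
    # Kuramoto-style nearest-neighbour coupling: pull cell i's period
    # toward those of its two neighbours (periodic boundaries).
    # With beta = 0 this reduces to the plain clocked scheme.
    n = len(period)
    left, right = period[(i - 1) % n], period[(i + 1) % n]
    return period[i] + beta * (left - period[i]) + beta * (right - period[i])
```

A cell whose two neighbours pull symmetrically in opposite directions keeps its period unchanged, which is the expected fixed-point behaviour of the coupling.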
4.2. The experiments
In order to evaluate the different models quantitatively, we calculated left and right Lyapunov exponents, which measure the sensitivity of the model to
initial conditions. Calculating separate left and right
Lyapunov exponents provides more information than a
single Lyapunov exponent, as cyclic behaviour can also
be detected. We aimed to evaluate statistics on all update schemes in a manner similar to those published by
Wolfram [27], but extending this work to asynchronous
update schemes. However, as this work [27] did not include a formal definition of Lyapunov exponents, we
have calculated this as follows. We define a one-dimensional CA of size N and S possible cell states, with the state of a single cell given by xi ∈ S, i ∈ N. At initialisation, we make a copy yi of the state of each cell, but introduce a perturbation in one cell yc, c ∈ N, such that:

∀i : yi = xi if i ≠ c, and yi = (xi + 1) mod S if i = c.
We evolve the copy y in parallel with x, using the same
rules and update scheme. We measure the divergence
of the pattern y from x after T time steps, by measuring the position dL ∈ N of the left-most difference such that xi = yi for i = 0 to (dL − 1), but xdL ≠ ydL, and measuring the position dR ∈ N of the right-most difference such that xi = yi for i = (dR + 1) to N, but xdR ≠ ydR.
Then we define the left and right Lyapunov exponents
as:
λL = (c − dL)/T and λR = (dR − c)/T
In all experiments, N = 250, S = 2, c = N/2 and
T = 5000. The difference between the states of the two
CAs is initially a single cell, but over time the difference may spread to the left or to the right, or both.
In the first experiment, time space diagrams were
obtained for the six schemes, using a selection of rules
believed to show a wide variety of behaviour. From observation of synchronous CA models [27], rules 2 and
38 are associated with evolution to a cyclic attractor,
rule 4 is associated with evolution to a point attractor,
while rules 18, 22 and 146 are associated with chaotic
behaviour. For the self-sync scheme, the value of the
gain parameter β was set at 0.1 for all runs.
In the second experiment, left and right Lyapunov
exponents were calculated for each of the six updating
schemes applied to every one of the 256 possible sets of
rules for a 1D CA. To avoid bias from particular starting
configurations, we averaged the values for each scheme
and each model over 20 iterations, starting with a random configuration of states. For the self-sync scheme,
the value of the gain parameter β was set at 0.1 for all
runs.
The third experiment was a sensitivity analysis of
the effect that variations in the gain parameter β have on
the outcome of the self-sync scheme. The gain parameter was increased to 0.2, and Lyapunov values were
obtained for 20 runs of the model, for all 256 rules.
5. Results
Results of the first experiment, shown in Fig. 2,
are time space diagrams for the behaviour of all
schemes on selected rules that typify each of Wolfram’s
classes.
Results of the second experiment are shown by the
graphs of λL against λR in Fig. 3, and the average Lyapunov exponents from 20 runs in Table 1. Fig. 3 includes results from all 256 rules, while Table 1 includes
only results from the minimum rule set [27] to avoid
repetition. Cyclic behaviour is indicated by λL + λR = 0,
or points in Fig. 3 close to the diagonal running from
top left to bottom right. For example, rule 2 in the
synchronous scheme is well known as converging to
a cyclic attractor. The larger the magnitude of λL or
λR , the quicker the pattern reaches an edge and wraps
around, therefore the smaller the period of the attrac-
Fig. 2. Time space diagrams for 1D cellular automata models using different update schemes. The rules shown represent different classes of
behaviour. Rules 2 and 38: cyclic attractor, rule 4: point attractor, rules 18, 22 and 146: chaotic behaviour.
tor. The period of the attractor is given by N/|λL |. For
rule 2 in the synchronous scheme, this implies a cyclic
attractor of period 250.
Convergence to a point attractor is characterised by
λL = λR = 0, or points in Fig. 3 near the origin. Chaotic
behaviour is characterised by λL > 0 and λR > 0, or
points in Fig. 3 close to the diagonal running from bottom left to top right.
The synchronous scheme reveals the familiar behaviour as reported in many publications (for example [27]). Fig. 2 shows typical behaviour for this scheme, including evolution to a cyclic attractor (rules 2 and 38), to a fixed-point attractor (rule 4), and longer, possibly chaotic transients (rules 18, 22 and 146). The Lyapunov exponents show that it displays both cyclic and chaotic behaviour, but Fig. 3 shows that the behaviour of this scheme falls into several well-defined clusters. This suggests that this scheme is capable of only a small repertoire.

Fig. 3. Graphs of λL against λR for six different updating schemes in the CA models. Each graph consists of 5120 points, created from 20 runs for each of the 256 rules.

Table 1
Lyapunov exponents calculated from CA models using six different update schemes. [The table lists λL and λR under the Sync, Cyclic, Random ind, Random ord, Clock and Self-sync schemes for each rule of the minimum rule set; the individual values could not be recovered from this transcription.]

The two random schemes have similar behaviour, with low values for the Lyapunov exponents, indicating that perturbations do not spread easily using
RAS updating, in contrast to OAS updating. These
schemes show little evidence of cyclic behaviour. This
is to be expected from their inherent non-repeatability.
This result supports the work of, for example, Harvey
and Bossomaier [8], providing empirical evidence that
the random scheme is incapable of strictly cyclic behaviour. However, Fig. 3 shows more points near the
cyclic diagonal than on the chaotic diagonal, indicating
quasi-cyclic behaviour. There are many points near the
origin in Fig. 3, indicating point attractors. The points
do not show the strong clustering of the synchronous
or cyclic schemes, indicating a wider repertoire.
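The contrast between the synchronous and random asynchronous (RAS) schemes discussed above can be sketched with an elementary CA. The following Python sketch is our own illustration (function names and details are not from the paper): it builds a Wolfram rule table and implements two update modes, a synchronous step in which every cell reads the previous configuration, and a random ordered asynchronous sweep in which cells are visited once each, in a fresh random order, with changes taking effect immediately.

```python
import random

def rule_table(rule_number):
    """Map each 3-cell neighbourhood (encoded 4*L + 2*C + R) to the next
    state under the given Wolfram rule number (0-255)."""
    return {n: (rule_number >> n) & 1 for n in range(8)}

def step_synchronous(cells, table):
    """Synchronous update: every cell reads the previous configuration."""
    n = len(cells)
    return [table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def step_random_ordered(cells, table, rng):
    """Random ordered asynchronous (RAS) update: each cell is visited
    exactly once per sweep, in a fresh random order, and each change is
    applied immediately (so later cells in the sweep see it)."""
    n = len(cells)
    order = list(range(n))
    rng.shuffle(order)
    for i in order:
        cells[i] = table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
    return cells

# Example: rule 22 evolved synchronously from a single seeded cell.
table = rule_table(22)
row = [0] * 31
row[15] = 1
history = [row]
for _ in range(15):
    history.append(step_synchronous(history[-1], table))
```

Under synchronous updating the run is deterministic and repeatable; substituting `step_random_ordered` makes every run different, which is why, as noted above, the random schemes cannot sustain strictly cyclic behaviour.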
D. Cornforth et al. / Physica D 204 (2005) 70–82
The cyclic scheme scores high on cyclic behaviour,
as would be expected, and low on chaotic behaviour. Fig. 2 shows behaviour typical of a cyclic attractor for all rules except rule 4, which shows the behaviour of a point attractor. Fig. 3 suggests that this
scheme has a repertoire similar to the synchronous
scheme, although it is even more strongly biased towards cyclic behaviour and more variable. For this reason, the cyclic scheme may be a poor
choice for modelling systems in which chaotic
behaviour is expected.
The clocked scheme appears quite different from any
other scheme, in both Figs. 2 and 3. The latter shows
a strong bias towards chaotic behaviour (λL > 0 and
λR > 0), which implies that it may be a poor choice for
representing cyclic behaviour. However, there is much
less clustering than in the synchronous or cyclic schemes,
suggesting a wider repertoire. Fig. 2 shows rapid evolution towards a point attractor for rules 2, 4, 18, and
146. For rule 22, longer transients can be seen, but
these eventually die out as the model evolves towards
a point attractor. Rule 38, however, suggests chaotic
behaviour.
The self-sync scheme appears similar to the clocked
scheme for rules 2, 4, 18 and 22 (Fig. 2). However,
rule 38 shows the formation of groups of cells that become synchronised, producing modules of cells in synchrony. This could provide a mechanism to explain the
formation of modules as a type of self-organisation in
real life systems. Rule 146 shows complex behaviour
a little reminiscent of the random schemes for this
rule. In Fig. 3, this scheme shows a similar pattern
to the clocked scheme, indicating that the coupling
is offset by random noise in the cell period and has
only a marginal effect on the dynamic properties of this
scheme.
However, when the gain parameter β is increased to
0.2, the pattern shown in Fig. 3 changes to resemble
the cyclic scheme more closely (Fig. 4). As coupling
becomes stronger, the synchronisation between cells
dominates the disruption due to random noise in the
period of the clocks, and full synchronisation is rapidly
achieved. This suggests that self-synchronisation may
provide a mechanism for real world systems to switch
between cyclic and chaotic behaviour, and to control
system performance using an external parameter. In
real life systems, this behaviour may be controlled
by external parameters such as temperature, light, and
availability of nutrients.
Fig. 4. Graph of λL against λR for the self-sync scheme, when the
gain parameter β is increased to 0.2.
From Fig. 3 it appears that
the self-synchronous scheme shows the largest range
of behaviour, as well as being able to represent the
self-synchronising behaviour observed in the natural
examples considered earlier.
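The gain-controlled transition described above — coupling swamped by clock noise at small β, rapid full synchronisation at β = 0.2 — can be illustrated with a minimal phase model. This sketch is our own construction, not the paper's exact update rule: each cell on a ring is a noisy clock that nudges its phase towards the mean phase of its two neighbours with gain β.

```python
import random

def self_sync_step(phases, periods, beta, rng, noise=0.05):
    """Advance a ring of noisy clocks by one tick. Each cell drifts by
    1/period plus Gaussian noise, then moves a fraction beta of the way
    towards the mean phase of its two neighbours (the temporal coupling)."""
    n = len(phases)
    new_phases = []
    for i in range(n):
        drift = 1.0 / periods[i] + rng.gauss(0.0, noise)
        neighbour_mean = 0.5 * (phases[(i - 1) % n] + phases[(i + 1) % n])
        new_phases.append(phases[i] + drift + beta * (neighbour_mean - phases[i]))
    return new_phases

def spread(phases):
    """Gap between the fastest and slowest clock: 0 means full sync."""
    return max(phases) - min(phases)

# Heterogeneous periods: without coupling the phases drift apart.
periods = [8 + (i % 5) for i in range(10)]
uncoupled, coupled = [0.0] * 10, [0.0] * 10
rng_a, rng_b = random.Random(0), random.Random(0)
for _ in range(200):
    uncoupled = self_sync_step(uncoupled, periods, 0.0, rng_a)
    coupled = self_sync_step(coupled, periods, 0.2, rng_b)
```

With β = 0 the spread grows roughly linearly with the period mismatch; with β = 0.2 the coupling dominates the noise and the spread stays bounded, mirroring the change in behaviour between Figs. 3 and 4.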
6. Discussion
In the real world, synchronous updating appears to
be rare in multi-agent systems, and asynchronous updating of states is more common. Real-world systems
lie between the two current modelling approaches that
use either synchronous or random asynchronous updates. There is a third class of processes, not well
recognised, that we define as ordered asynchronous
processes. The examples considered above suggest that
the “clocked” scheme is the most commonly encountered in biological systems, while the “cyclic” scheme
is fairly common in artificial (engineered) systems. We
suggest that not only is asynchrony important, but the
type of asynchrony has significant effects. As our results show, the exact manner of updating can have a profound effect on overall system behaviour and structure.
We have shown that in simple models, asynchronous
updating gives rise to different dynamic characteristics
than synchronous updating. One implication of this is
that the majority of real life systems are able to function
without external synchronisation. Another is that when
building models of natural systems, it is important to
consider the updating scheme used. Reports of experiments using models of multi-agent systems should
state which updating scheme has been used, otherwise
meaningful comparison between different studies may
not be possible.
We have seen that the type of order in OAS systems
takes the form of internal timers, local synchronisation
(e.g., ants, brain), internal local environmental factors
(neurotransmitters), non-uniform local inherent properties (e.g., bush fires, social nets), and non-uniform
external demands upon the system (e.g., computer networks).
We have presented the first systematic quantitative
comparison of the behaviour of a simple 1D CA under six different update schemes. Examination of Lyapunov exponents for the CA model with synchronous updating reveals that these values are confined to a small number of
clusters, while the corresponding figures for models using asynchronous updating are much less constrained.
This implies that systems using asynchronous processes are less restricted in the types of behaviour
they can achieve. The Lyapunov figures also suggest
that systems implementing OAS schemes allow local
patterns to propagate more quickly than in systems implementing RAS schemes. Thus from these results it
appears that OAS processes confer a degree of flexibility upon the system.
We have also observed that some natural systems
make use of local temporal connections, which can produce global (system-wide) synchronisation. A model of
this process shows that modules can form due to clustering of elements that are nearly in synchronisation.
This may form a mechanism for emergent modularity,
a type of self-organisation often observed in real life
systems, such as colonies of eusocial animals. Also,
the strength of temporal connections can control the
number of elements that can be synchronised. The implications of this are that the size of systems, such as
insect colonies, could depend upon the strength of local
temporal connections.
Furthermore, we have seen that it is possible to
control the type of dynamic behaviour of a system by changing the strength of temporal coupling.
This may provide a way to explain how systems can
make the transition between cyclic or more complex behaviour simply by changing an environmental
variable.
Note that ordered asynchrony provides a crucial link
between different processes. Traditional models adopt
a top-down approach, and focus on the limiting effects
of biochemical gradients on the rate of growth. In contrast,
more recent L-system models adopt a bottom-up approach, representing the detailed nature of local interactions. However, each model by itself is only part of
the true picture. They combine quite naturally because
the biochemical gradient limits the supply of essential nutrients, thus setting a time scale, which varies
from place to place. We suspect that similar interactions between top-down and bottom-up mechanisms
may be found in many systems. If so, then ordered
asynchronous models provide the key to developing a
better understanding of many processes.
7. Conclusion
In this work, we have shown the importance of
asynchronous updating in multi-agent systems, and
have distinguished random asynchronous from ordered
asynchronous updating. We have published, for the first
time, a systematic numerical analysis of sensitivity to
initial conditions of 1D CAs with a neighbourhood
of three, for the full rule set, using six different update schemes. The results add to those published by
Wolfram [27] for the synchronous scheme. Here we
have extended this work to include five asynchronous
schemes, including the self-synchronous case. We also
introduce a new form of diagram, where left and right
Lyapunov exponents provide a graphical illustration of
the dynamic behaviour of the CA. This work shows the
importance of the update scheme in building accurate
models, and suggests mechanisms that could explain
emergent effects such as modularity.
Acknowledgements
The software program used in the simulations
was based on CA models written by David Eck
and available from http://math.hws.edu/eck/. The authors' modified version, which includes the different updating schemes, is available from http://athene.
riv.csu.edu.au/∼dcornfor/masys.html. We are grateful
to Nathan Clapham and David Watson for useful comments and discussions. David Newth’s contribution
was supported by a postgraduate research award.
References
[1] J.H. Holland, Hidden Order: How Adaptation Builds Complexity, Addison–Wesley, New York, 1996.
[2] R. Thomas (Ed.), Kinetic Logic: A Boolean Approach to the
Analysis of Complex Regulatory Systems. Lecture Notes in
Biomathematics, 29, Springer-Verlag, Berlin, 1979.
[3] Y. Kanada, The Effects of Randomness in Asynchronous 1D
Cellular Automata, Artificial Life IV, Poster, 1994.
[4] E.A. Di Paolo, Searching for Rhythms in Asynchronous Random Boolean Networks, in: Bedau, McCaskill, Packard, Rasmussen (Eds.), Proceedings of the Seventh Conference on Artificial Life, 2000.
[5] I.E. Sutherland, J. Ebergen, Computers Without Clocks, Scientific American, August 2002.
[6] D. Cornforth, D. Green, D. Newth, M. Kirley, Ordered
asynchronous processes in natural and artificial systems, in:
Whigham, Richards, McKay, Gen, Tujimura, Namatame (Eds.),
Proceedings of the Fifth Australasia–Japan Joint Workshop on
Intelligent and Evolutionary Systems, The University of Otago,
2001, pp. 105–112.
[7] C. Langton, Computation at the edge of chaos: phase transitions
and emergent computation, Physica D 42 (1990) 12–37.
[8] I. Harvey, T.R.J. Bossomaier, Time out of joint: attractors in
asynchronous Boolean networks, in: Husbands, Harvey (Eds.),
Proceedings of the Fourth European Conference on Artificial
Life, MIT Press, 1997, pp. 67–75.
[9] S.H. Low, D.E. Lapsley, Optimization Flow Control, Part I:
Basic Algorithm and Convergence, IEEE/ACM Transactions
on Networking, 1999.
[10] C. Nehaniv, Evolution in asynchronous cellular automata, in:
Standish, Abbass, Bedau (Eds.), Proceedings of the Eighth Conference on Artificial Life, MIT Press, 2002, pp. 65–74.
[11] Clapham, Emergent synchrony: simple asynchronous update rules can produce synchronous behaviour, in: Sarker,
McKay, Gen, Namatame (Eds.), Proceedings of the Sixth
Australia–Japan Joint Workshop on Intelligent and Evolutionary Systems, Australian National University, pp. 41–46.
[12] R. Stocker, D.G. Green, D. Newth, Connectivity, cohesion and
communication in simulated social networks, in: A. Namatame,
et al. (Eds.), Proceeding of the Fourth Japan–Australia Joint
Workshop on Intelligent and Evolutionary Systems, 2000, pp.
177–184.
[13] A.S. Tanenbaum, Computer Networks, Prentice Hall,
1989.
[14] F.J. Vázquez-Abad, C.G. Cassandras, V. Julka, Centralized and
decentralized asynchronous optimization of stochastic discrete
event systems, IEEE Trans. Automatic Control 43 (5) (1998)
631–655.
[15] I.R. Noble, R.O. Slatyer, The use of vital attributes to predict
successional changes in plant communities subject to recurrent
disturbances, Vegetatio 43 (1980) 5–21.
[16] B.L. Lord, Markov and semi-Markov modelling of spatial effects of fire, in: J. Ross (Ed.), The Burning Question: Fire Management in NSW, University of New England, Armidale, 1993.
[17] R.A. Howard, Dynamic Probabilistic Systems: Semi-Markov
and Decision Processes, Wiley, NY, 1971.
[18] P. Kourtz, W.G. O’Regan, A model for a small forest fire. . . to
simulate burned and burning areas for use in a detection model,
Forest Sci. 17 (2) (1971) 163–169.
[19] D.G. Green, Simulated fire spread in discrete fuels, Ecol. Modelling 20 (1983) 21–32.
[20] W.J. Freeman, Tutorial on neurobiology: from single neurons
to brain chaos, Int. J. Bif. Chaos 2 (3) (1992) 451–482.
[21] J. Delgado, R.V. Solé, Self-synchronization and task fulfillment in ant colonies, J. Theor. Biol. 205 (2000) 433–441.
[22] B.J. Cole, Short-term activity cycles in ants: generation of periodicity by worker interaction, Am. Nat. 137 (1991) 244–259.
[23] N.R. Franks, S. Bryant, R. Griffiths, L. Hemerik, Synchronization of the behavior within nests of the ant Leptothorax acervorum (Fabricius) – Part I: Discovering the phenomenon and its
relation to the level of starvation, Bull. Math. Biol. 52 (5) (1990)
597–612.
[24] S.H. Strogatz, From Kuramoto to Crawford: exploring the onset
of synchronization in populations of coupled oscillators, Physica D 143 (2000) 1–20.
[25] P. Orponen, Computing with truly asynchronous threshold
logic networks, Theor. Comput. Sci. 174 (1–2) (1997) 123–
136.
[26] M. Sipper, M. Tomassini, M.S. Capcarrere, Evolving asynchronous and scalable non-uniform cellular automata, in: Proceedings of International Conference on Artificial Neural
Networks and Genetic Algorithms (ICANNGA97), SpringerVerlag, 1997.
[27] S. Wolfram, Cellular automata as models of complexity, Nature
311 (1984) 419–424.