Self-Organization and
Artificial Life
Abstract
Self-organization can be broadly defined as the ability of a
system to display ordered spatiotemporal patterns solely as the result
of the interactions among the system components. Processes of
this kind characterize both living and artificial systems, making
self-organization a concept that is at the basis of several disciplines,
from physics to biology and engineering. Placed at the frontiers
between disciplines, artificial life (ALife) has heavily borrowed
concepts and tools from the study of self-organization, providing
mechanistic interpretations of lifelike phenomena as well as useful
constructivist approaches to artificial system design. Despite its broad
usage within ALife, the concept of self-organization has often been
excessively stretched or misinterpreted, calling for a clarification
that could help with tracing the borders between what can and cannot
be considered self-organization. In this review, we discuss the
fundamental aspects of self-organization and list the main usages
within three primary ALife domains, namely “soft” (mathematical/
computational modeling), “hard” (physical robots), and “wet”
(chemical/biological systems) ALife. We also provide a classification
to locate this research. Finally, we discuss the usefulness of
self-organization and related concepts within ALife studies, point
to perspectives and challenges for future research, and list open
questions. We hope that this work will motivate discussions related to
self-organization in ALife and related fields.
Carlos Gershenson*
Universidad Nacional Autónoma
de México
Instituto de Investigaciones en
Matemáticas Aplicadas y en Sistemas
Centro de Ciencias de la Complejidad
cgg@unam.mx
ITMO University
Vito Trianni
Italian National Research Council
Institute of Cognitive Sciences
and Technologies
vito.trianni@istc.cnr.it
Justin Werfel
Harvard University
Wyss Institute for Biologically
Inspired Engineering
justin.werfel@wyss.harvard.edu
Hiroki Sayama
Binghamton University
Center for Collective Dynamics of
Complex Systems
sayama@binghamton.edu
Waseda University
Waseda Innovation Laboratory
Keywords
Self-organization, review, classification,
soft ALife, hard ALife, wet ALife
1 What Is Self-Organization?
The idea of self-organization can be traced to antiquity, including Greek and Buddhist philosophies
[68, 108]. The term “self-organization” was used sparingly in the 19th century, mainly applied to
social systems. Similar concepts had been proposed earlier by Kant [98], and in the 1930s, it was
introduced into embryology [184].
* Corresponding author.
© 2020 Massachusetts Institute of Technology Artificial Life 26: 391–408 (2020) https://doi.org/10.1162/artl_a_00324
The modern term “self-organizing system” was coined by Ashby [8] to describe phenomena
where local interactions between independent elements lead to global behaviors or patterns. The
phrase is used when an external observer perceives a pattern in a system with many components,
and this pattern is not imposed by a central authority among or external to those components, but
rather arises from the collective behavior of the elements themselves. Natural examples are found in
areas such as collective motion [198], as when birds or fish move in flocks or schools exhibiting
complex group behavior; morphogenesis [120], in which cells in a living body divide and specialize
to develop into a complex body plan; and pattern formation [36] in a variety of physical, chemical,
and biological systems [29, 49], such as convection and crystal growth as well as the formation of
patterns like stripes and spots on animal coats.
A formal definition of the term runs into difficulties in agreeing on what is a system, what is
organization, and what is self [72], none of which is perfectly straightforward. However, a pragmatic
approach focuses on when it is useful to describe a system as self-organizing [64]. This utility typically
comes when an observer identifies a pattern at a higher scale but is also interested in phenomena at a
lower scale; there then arise questions of how the lower scale produces the observables at the higher
scale, as well as how the higher scale constrains and promotes observables at the lower scale. For
example, bird behavior leads to flock formation, and descriptors at the level of the flock can also be
used to understand regulation of individual bird behavior [105].
Self-organization has been an important concept within a number of disciplines [179], including
statistical mechanics [37, 210], supramolecular chemistry [123], and computer science [110, 129].
Artificial life (ALife) frequently draws heavily on self-organizing systems in different contexts [5],
starting in the early days of the field with studies of systems like snowflake formation [140] and
agent flocking [161], and continuing to the present day. However, this concept is often subject to confusion and misinterpretation, possibly owing to the lack of a recent systematic review of the literature.
In this work, we intend to:
1. Review research at the intersection of self-organization and ALife.
2. Provide a classification to locate this research.
3. Guide newcomers to the field with this classification.
4. Synthesize relevant concepts, challenges, and open questions.
5. Open discussions on this topic within ALife and related fields.
We first articulate some fundamental aspects of self-organization, outline ways the term has been
used by researchers in the field, and then summarize work based on self-organization within soft
(simulated), hard (robotic), and wet (chemical and biochemical) domains of ALife. We then present
a classification for categorizing different types of self-organization. We also provide perspectives for
further research. A list of open questions closes this article.
2 Usage
Ashby coined the term “self-organizing system” to show that a machine could be strictly determin-
istic and yet exhibit a self-induced change of organization [8]. This notion was further developed
within cybernetics [9, 200]. In many contexts, a thermodynamical perspective has been taken [81, 82,
96], where “organization” is viewed as a reduction of entropy in an (open) system [137]. Since there
is an equivalence between Boltzmann-Gibbs entropy and Shannon information, this notion has also
been applied in contexts related to information theory [50, 146, 147, 152]. In this view, a self-
organizing system is one whose dynamics lead it to decrease its information content, hence becoming
more predictable. Based on information theory, the recent subfield of guided self-organization explores
mechanisms by which self-organization can be regulated for specific purposes—that is, how to find or
design dynamics for a system such that it will have particular attractors or outcomes [10, 148, 150,
151, 153]. For example, the self-organization of random Boolean networks [100, 101] can be guided
to specific dynamical regimes [65].
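To make this information-theoretic usage concrete, the following minimal sketch (our illustration, not taken from the cited works) tracks the Shannon entropy of local patterns in a toy majority-rule cellular automaton; under the reading above, a downward trend in this quantity is what licenses calling the dynamics self-organizing. The rule, the lattice size, and the choice of block entropy as the observable are illustrative assumptions; a different choice of observables could change the verdict [72].

```python
import numpy as np

rng = np.random.default_rng(0)

def block_entropy(x, k=3):
    """Shannon entropy (bits) of the empirical distribution of k-cell blocks."""
    blocks = np.stack([np.roll(x, -i) for i in range(k)], axis=1)
    codes = blocks @ (2 ** np.arange(k))          # encode each block as an integer
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def majority_step(x):
    """Each cell adopts the majority state of itself and its two nearest neighbors."""
    return ((np.roll(x, 1) + x + np.roll(x, -1)) >= 2).astype(int)

x = rng.integers(0, 2, size=400)                  # disordered initial configuration
for t in range(6):
    print(f"t={t}  3-cell block entropy = {block_entropy(x):.3f} bits")
    x = majority_step(x)
```

The entropy here is computed over blocks rather than single cells because majority dynamics roughly preserve the overall density of states while making local patterns redundant, which is exactly the kind of increased predictability the definitions above point to.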
There are several other definitions of self-organization as well. One proposal [176] defines self-organization as
an increase in statistical complexity, which in turn is defined as the amount of information required
to minimally specify the state of the systemʼs causal architecture. As an alternative to entropy, the
use of the mean value of random variables has also been proposed [92].
The concept of self-organization is also heavily used in organization science, with relevance to
early artificial society models [46, 74], which have evolved into what is known today as computational
social science [121].
Self-organization is commonly used in a broad sense that encompasses self-assembly and other
processes, but the term has at times been used in a more restrictive sense for far-from-equilibrium
processes [132].
While there may be no single agreed-on definition of self-organization, this lack need not be an
insurmountable obstacle for its study, any more than a lack of a unanimous formal definition of
“life” has been an obstacle for progress in the fields of biology or ALife. In what follows, we provide
a concise review of how the idea of self-organization has contributed to the progress of ALife.
3 Domains
One way to classify ALife research is to divide it into soft, hard, and wet domains, roughly referring to
computer simulations, physical robots, and chemical/biological research (including living technology
as the application of ALife [21]), respectively. Self-organization has played a central role in work in all
three domains.
3.1 Soft ALife
Soft ALife, or mathematical and computational modeling and simulation of lifelike behaviors, has
been linked to self-organization in many subdomains. Cellular automata (CAs) [94], one of the most
popular modeling frameworks used in earlier forms of soft ALife, are well-explored, illustrative ex-
amples of self-organizing systems. A CA consists of many units (cells), each of which can be in any
of a number of discrete states, and each of which repeatedly determines its next state in a fully
distributed manner, based on its current state and those of its neighbors. With no central controller
involved, CAs can organize their state configurations to demonstrate various forms of self-
organization: dynamical critical states such as in sand-pile models [15] and in the Game of Life
[14], spontaneous formation of spatial patterns [47, 211, 216] (Figure 1(a)), self-replication¹ [116,
117, 158, 178], and evolution by variation and natural selection [138, 139, 164, 166, 167, 185]. Sim-
ilarly, partial differential equations (PDEs), a continuous counterpart of CAs, have an even longer
history of demonstrating self-organizing dynamics [52, 76, 142, 190] (Figure 1(b)).
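As a concrete instance of the fully distributed update just described, the sketch below advances Conwayʼs Game of Life [14] on a toroidal grid: every cell computes its next state from its own state and those of its eight neighbors, with no central controller. The grid size and initial density are arbitrary choices made for illustration.

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update on a toroidal (wrap-around) grid."""
    # Count the eight neighbors of every cell via shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

rng = np.random.default_rng(1)
grid = (rng.random((64, 64)) < 0.25).astype(int)   # 25% live cells, arbitrary
for _ in range(100):
    grid = life_step(grid)
print("live cells after 100 steps:", int(grid.sum()))
```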
Another representative class of soft ALife that shows self-organization comprises models of col-
lective behavior of self-propelled agents [198]. Reynoldsʼ “Boids” model [161] is probably the best
known in this category. In this work, self-propelled agents move in a continuous space according to
three kinetic rules: cohesion (to maintain positional proximity), alignment (to maintain directional
similarity), and separation (to avoid overcrowding and collision). A variety of related models have
since been proposed and studied, including simplified, statistical-physics-oriented ones [6, 125, 135,
197] and more detailed, behavioral-ecology-oriented ones [35, 90, 115]. These models produce
natural-looking flocking/schooling/swarming collective behaviors out of simple decentralized behavioral rules, and they also exhibit phase transitions between distinct macroscopic states. They have also been used as inspiration for a variety of optimization algorithms [40, 103, 114, 145, 214].
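The three kinetic rules above can be rendered in a few dozen lines; the sketch below is deliberately simplified, with illustrative weights, radii, and speed cap rather than Reynoldsʼ original parameters [161], and it omits refinements such as noise, limited fields of view, and boundary handling.

```python
import numpy as np

rng = np.random.default_rng(1)
N, RADIUS, SEP_RADIUS = 50, 10.0, 2.0
W_COH, W_ALI, W_SEP, MAX_SPEED = 0.01, 0.05, 0.10, 1.0   # illustrative weights

pos = rng.uniform(0, 100, size=(N, 2))        # positions in an open 2-D plane
vel = rng.uniform(-1, 1, size=(N, 2))         # initial velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < RADIUS)
        if near.any():
            # Cohesion: steer toward the centroid of nearby flockmates.
            new_vel[i] += W_COH * (pos[near].mean(axis=0) - pos[i])
            # Alignment: match the average velocity of nearby flockmates.
            new_vel[i] += W_ALI * (vel[near].mean(axis=0) - vel[i])
        crowd = (dist > 0) & (dist < SEP_RADIUS)
        if crowd.any():
            # Separation: move away from flockmates that are too close.
            new_vel[i] -= W_SEP * offsets[crowd].mean(axis=0)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                  # cap the speed
            new_vel[i] *= MAX_SPEED / speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("polarization (1 = fully aligned):",
      round(float(np.linalg.norm(headings.mean(axis=0))), 2))
```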
¹ Note that earlier literature on self-reproducing cellular automata [34, 201] is not included here, because those models typically had a
clear separation between a central universal controller and a structure that is procedurally constructed by the controller; thus they may
not constitute a good example of self-organization as discussed in this article.
Figure 1. Turing pattern formation [190] as an illustrative example of self-organization in computational models. (a)
Simulation in CA using Youngʼs discrete model [216]. (b) Simulation in PDE using Turingʼs original formulation. Figures
from [171].
Such collective behavior models have been brought into artificial chemistry studies as well [17, 39],
such as swarm chemistry, its variants, and similar models [48, 112, 137, 168, 169, 170, 175], in which
kinetically and chemically distinct species of idealized agents interact to form nontrivial spatiotem-
poral dynamic patterns. More recently, these collective behavior models have also been actively uti-
lized in morphogenetic engineering [43, 44], in which researchers attempt to achieve a successful merger
of self-organization and programmable architectural design, by discovering or designing agent rules
that result in specific desired high-level patterns.
Other examples of self-organization in soft ALife are found in simulation models of artificial
societies. Their roots can be traced back to the famous segregation models developed by Sakoda and Schelling in the early 1970s [87, 163, 174], in which simple, independent decision making
by individual agents would eventually cause a spatially segregated state of society at a macroscopic
level. Agent-based simulation of artificial societies has been one of the core topics discussed in the
ALife community [46, 118], and has elucidated the self-organization of social order in phenomena such
as geographical resource management [23, 119], cooperative strategies [4, 25, 93, 126], and common
languages [107, 127, 181, 183]. The literature on adaptive social network models may also be
included in this category [28, 38, 62, 80, 172, 193], as those “artificial society” models describe self-
organization of society into a nontrivial configuration through coevolution of autonomous dynamic
state changes of social constituents and topological changes of social ties.
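A toy model in the spirit of these segregation dynamics (not a faithful reproduction of Sakodaʼs or Schellingʼs formulations [87, 163, 174]) is sketched below: each agent relocates to a random empty cell whenever fewer than a given fraction of its neighbors share its type, and macroscopic segregation emerges although no agent seeks it. Threshold, density, and grid size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
SIZE, EMPTY_FRAC, THRESHOLD = 30, 0.1, 0.5        # illustrative parameters

# 0 = empty cell; 1 and 2 = two agent types in equal proportion.
cells = rng.choice([0, 1, 2], size=(SIZE, SIZE),
                   p=[EMPTY_FRAC, (1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2])

def like_fraction(cells, r, c):
    """Fraction of occupied Moore neighbors sharing the type of the agent at (r, c)."""
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            v = cells[(r + dr) % SIZE, (c + dc) % SIZE]
            if v != 0:
                total += 1
                same += int(v == cells[r, c])
    return same / total if total else 1.0

def sweep(cells):
    """Every agent that is 'unhappy' relocates to a randomly chosen empty cell."""
    for r, c in zip(*np.nonzero(cells)):
        if cells[r, c] == 0:                      # cell vacated earlier in this sweep
            continue
        if like_fraction(cells, r, c) < THRESHOLD:
            empties = np.argwhere(cells == 0)
            er, ec = empties[rng.integers(len(empties))]
            cells[er, ec], cells[r, c] = cells[r, c], 0
    return cells

for _ in range(20):
    cells = sweep(cells)

occupied = np.argwhere(cells != 0)
print("mean like-neighbor fraction after 20 sweeps:",
      round(float(np.mean([like_fraction(cells, r, c) for r, c in occupied])), 2))
```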
As adaptive networks at the individual organism level, brains and nervous systems have also been
described for decades as self-organizing systems [88, 108], in that neurons interact to produce behav-
ioral and cognitive patterns. Self-organization of such neural systems has been particularly useful in
computer science, and in the study of artificial neural networks [63]; as a particularly conspicuous
example, Kohonen networks [110] are also called self-organizing maps. Since a large part of soft
and hard ALife research deals with agents, animats, or robots (virtual or physical) being controlled
by artificial neural networks, it can be said that self-organization is present not only at the behavioral
level, but also at the controller level in many cases.
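The core of such a map can be stated compactly: each input is matched to its closest unit, and that unit together with its grid neighbors is pulled toward the input, so that a topographic ordering emerges from purely local updates. The following bare-bones one-dimensional sketch uses fixed, illustrative learning-rate and neighborhood parameters rather than Kohonenʼs full annealed algorithm [110].

```python
import numpy as np

rng = np.random.default_rng(4)
N_UNITS, N_STEPS = 20, 2000
LEARNING_RATE, NEIGHBORHOOD = 0.1, 2.0             # illustrative settings

# A 1-D chain of units, each with a weight vector in 2-D input space.
weights = rng.uniform(0, 1, size=(N_UNITS, 2))
grid_index = np.arange(N_UNITS)

for step in range(N_STEPS):
    x = rng.uniform(0, 1, size=2)                   # random input sample
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    # The BMU and its neighbors on the chain are pulled toward the input.
    influence = np.exp(-((grid_index - bmu) ** 2) / (2 * NEIGHBORHOOD ** 2))
    weights += LEARNING_RATE * influence[:, None] * (x - weights)

# After training, neighboring units should map to nearby inputs (topographic order).
print("mean distance between weight vectors of adjacent units:",
      round(float(np.linalg.norm(np.diff(weights, axis=0), axis=1).mean()), 3))
```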
Similar approaches have also been used in search and optimization techniques [45]. For example,
Watson and colleagues have proposed using Hebbian learning [86] to self-organize components of a
complex system to resolve conflicts [205, 206]. This mechanism has probably also been exploited
beyond neural systems, as computational anthropology studies suggest [56, 57].
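A toy rendering of this idea (ours, not the published models [205, 206]) is sketched below: a network of binary components with random symmetric constraints repeatedly settles into a locally consistent configuration, and a small Hebbian update after each episode reinforces the correlations just visited; in Watson and colleaguesʼ work, such updates bias future settlements toward configurations with fewer residual conflicts. Network size, learning rate, and relaxation schedule are arbitrary here.

```python
import numpy as np

rng = np.random.default_rng(3)
N, EPS = 40, 0.002                        # number of components and Hebbian rate (arbitrary)

# Symmetric random couplings encode pairwise constraints between components.
W0 = rng.normal(size=(N, N))
W0 = (W0 + W0.T) / 2
np.fill_diagonal(W0, 0.0)
W = W0.copy()                             # the couplings that will be slowly modified

def relax(W, updates=2000):
    """Asynchronous local hill climbing on states in {-1, +1}."""
    s = rng.choice([-1, 1], size=N)
    for _ in range(updates):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s > 0 else -1
    return s

def conflict(s):
    """Residual conflict of a configuration, measured on the ORIGINAL couplings."""
    return float(-s @ W0 @ s)

for episode in range(201):
    s = relax(W)
    if episode % 50 == 0:
        print(f"episode {episode:3d}  conflict on original constraints = {conflict(s):8.2f}")
    # Hebbian step: strengthen couplings consistent with the configuration just visited.
    W = W + EPS * np.outer(s, s)
    np.fill_diagonal(W, 0.0)
```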
3.2 Hard ALife
Robots can be considered to be lifelike artefacts in their ability to sense their physical environment
and take action in response. Physical agents, even very simple ones, can evoke in the observer a
particularly strong sense of being animate. From W. Grey Walterʼs tortoises [203, 204] to simple
machines based on the principles of Braitenbergʼs vehicles [24], from behavior-based reactive robots
[26] to recent biomimetic and bioinspired designs [106, 165, 213], ALife built into machines stems
from the rich dynamics underlying the interaction between the embodied agent and its environment,
so that even simple mechanisms and behavioral rules can confer sophisticated lifelike attributes on
limited machines [177]. Complex ALife forms can be attained either by increasing the sophistication
of a single robot, or by increasing the number of robots in a system that, through the resulting
interaction and self-organization, can then display more sophisticated abilities collectively, from
adaptive responses to group decision making.
Hardware has the strong advantage that the physical characteristics of the system (dynamics,
sensor performance, actuator noise profiles, etc.) are by definition realistic, whereas simulations are
necessarily simplified and typically fail to capture phenomena that only become evident through ma-
terial experimentation [27, 97, 162]. Conversely, while simulation can readily handle very large num-
bers of agents, hardware considerations (cost, space, scalability of operation, etc.) have traditionally
limited hard ALife studies to using a small number of robots. In some scenarios, self-organizing phe-
nomena of interest do not necessarily require many robots. When the mechanism for coordination is
based on stigmergy (persistent information left in a shared environment), the important element is a
large number of interactions between robots and environment, and even a single robot could suffice
to generate complex patterns [18, 208]. More recently, hardware advances have made it possible to
conduct physical experiments with robots in numbers exceeding a thousand [162].
Physical experiments have been used to explore self-organizing phenomena in a variety of areas.
Aggregation of objects has been studied from a physics perspective [75], in ways inspired by behavior observed in living systems such as cockroaches or bees [61, 83, 104], and using controllers designed
through automatic methods such as artificial evolution [41, 53]. Another topic is collective navigation,
in which groups of robots coordinate their overall direction of motion and collectively avoid obstacles
[16, 188, 189]. The coordination of flying robots has also been explored using self-organization [196,
199]. In other studies, collective decision-making processes are determined by positive feedback from
recruitment and negative feedback from cross-inhibition [53, 59, 60, 104, 159, 173, 191, 192]. Self-
assembly [209] is another form of self-organization largely studied in hard ALife with self-assembling
or self-reconfiguring robots [7, 42, 78, 134, 162, 180, 215, 217].
3.3 Wet ALife
Wet ALife, or physico-chemical synthesis of lifelike behaviors, extensively utilizes self-organization
as its core principle. A classic example is spatial pattern formation in experimentally realized reaction-
diffusion systems, such as the Belousov-Zhabotinsky reaction [3, 192] and Gray-Scott-like self-
replicating spots [55, 122], where dynamic patterns self-organize entirely from spatially localized
chemical reactions. Similar approaches can also be taken by using microscopic biological organisms
(e.g., slime molds) as the media of self-organization [2, 3, 58, 91, 130].
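To give a flavor of such dynamics, the sketch below iterates a Gray-Scott reaction-diffusion model of the kind just mentioned on a periodic grid. It is a numerical illustration with common, spot-forming parameter values, not a model of the cited experimental systems.

```python
import numpy as np

SIZE, STEPS = 100, 5000
DU, DV, F, K = 0.16, 0.08, 0.035, 0.065      # diffusion rates, feed and kill (illustrative)

U = np.ones((SIZE, SIZE))
V = np.zeros((SIZE, SIZE))
U[45:55, 45:55], V[45:55, 45:55] = 0.50, 0.25   # a small perturbation to seed patterns

def laplacian(Z):
    """Discrete Laplacian on a periodic grid."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(STEPS):
    uvv = U * V * V
    U += DU * laplacian(U) - uvv + F * (1 - U)
    V += DV * laplacian(V) + uvv - (F + K) * V

# Count cells where the activator V is appreciably present: a crude proxy
# for how far the spot patterns have spread from the initial seed.
print("active cells:", int((V > 0.1).sum()))
```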
In research on the origins of life, molecular self-assembly plays the essential role in producing protocell
structures and their metabolic dynamics [84, 155, 156, 157]. Chemical autopoiesis such as dynamic for-
mation and maintenance of micelles and vesicles [12, 13, 128, 202] may also be included in this context.
More recently, dynamic behaviors of macroscopically visible chemical droplets, also known as liquid robots
[31], have become a focus of active study in ALife. In this line of research, interactions among
chemical reactions, physical microfluid dynamics, and possibly other not yet fully understood
microscopic mechanisms cause self-organization of spontaneous movements [33, 85] and complex
morphology [32] of those droplets. Moreover, droplet-based systems have also been used to dem-
onstrate artificial evolution in experimental chemical systems [141].
Recently, there have been a few studies on the collective behavior of protocells (e.g., [154]) and
droplets [31]. The potential chemical interaction space is vast, so it is difficult to explore with tradi-
tional techniques. Still, the automation of this exploration offers a promising approach [79].
Wet ALife has developed more recently than the soft and hard domains, but it has great potential both for better understanding living processes and for exploiting and regulating them with engineering principles and purposes.
4 A Classification
There are different potential classifications that could be considered to characterize self-organization in
the context of ALife studies. One fundamental aspect concerns the level at which self-organization takes
place with respect to the lifelike process under consideration. In this respect, we can distinguish between
internal and external self-organization. Internal self-organization would occur within an individual or agent,
and could be functional for the production of lifelike properties (e.g., morphogenesis) as well as useful
for determining physical characteristics or behavioral responses that shape the way in which the
individual agent interacts with its environment (e.g., pattern formation, neural plasticity). External self-
organization is that occurring among individuals or agents. Such forms of self-organization pertain to the social aspects of lifelike processes, which are often fundamental to support reproduction and survival.
These include collective behavior, social coordination, and ecological organization. Note that in some
cases the same process could be considered internal or external, depending on the observation level. For
example, morphogenesis would be external at the cell level, but internal at the organism level. Behavior
can be external at the individual level, but internal at the social level.
An orthogonal direction that characterizes a self-organizing system concerns the nature of the
interactions among the system components that bring about the lifelike spatiotemporal patterns.
In this respect, it is customary to distinguish between direct and indirect (e.g., stigmergic) forms of
interaction. When elements, individuals, or agents interact directly, their coordination can be fast.
However, they need to be synchronized in time and space, and this sometimes can be challenging.
Additionally, mechanisms must be concurrently provided for interactions to be encoded into a com-
munication act and then suitably decoded. Indirect, stigmergic interactions take place by means of
traces left in the environment, usually as a result of a unit of work performed by some agent that is
recognized by a fellow agent [187]. Initially used to describe the organization of work in social
insects (e.g., nest construction in termites or pheromone communication in ants), the concept of
indirect interactions has been expanded to include any external medium that can store information
and thus allow for coordination without the need of synchronous, direct communication. Indeed,
the persistence of indirect interactions within the environment facilitates asynchronous coordination
and the stratification of information, which can lead to complex patterns that extend in space and
time. Note that internal self-organization is usually direct, because the environment is in most cases considered external to the agents.
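A minimal illustration of such indirect coordination (with hypothetical parameters) is given below: agents deposit a pheromone-like trace on a shared grid, the environment stores and slowly evaporates it, and each agent biases its next move toward stronger traces, so that common trails and clusters emerge without any direct message passing.

```python
import numpy as np

rng = np.random.default_rng(6)
SIZE, N_AGENTS, STEPS = 50, 30, 500
DEPOSIT, EVAPORATION = 1.0, 0.02                  # hypothetical parameters

pheromone = np.zeros((SIZE, SIZE))
agents = rng.integers(0, SIZE, size=(N_AGENTS, 2))
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

def concentration(field, top_frac=0.05):
    """Share of all pheromone held by the most-marked cells (a crude order measure)."""
    flat = np.sort(field.ravel())[::-1]
    k = max(1, int(top_frac * flat.size))
    return float(flat[:k].sum() / flat.sum()) if flat.sum() > 0 else 0.0

for t in range(STEPS):
    for a in range(N_AGENTS):
        # Each agent looks at the four neighboring cells and moves preferentially
        # toward stronger traces left earlier (by itself or by others).
        options = (agents[a] + moves) % SIZE
        weights = pheromone[options[:, 0], options[:, 1]] + 0.1   # 0.1 = exploration floor
        agents[a] = options[rng.choice(4, p=weights / weights.sum())]
        pheromone[agents[a][0], agents[a][1]] += DEPOSIT          # leave a trace behind
    pheromone *= (1.0 - EVAPORATION)                              # the environment "forgets"

print("pheromone concentration in top 5% of cells:",
      round(concentration(pheromone), 2))
```

The deposit-and-evaporate loop is the essential ingredient: the environment, not the agents, carries the coordinating information.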
Examples of different types of self-organization belonging to different domains are given in Table 1.
5 Perspectives
As mentioned above, we can understand a self-organizing system as one in which organization increases
in time, without an external agency imposing this change. However, it can be shown that, depending on
how the variables of a system are chosen, the same system can be said to be either organizing or disor-
ganizing [72]. Moreover, in several examples of self-organization, it is not straightforward to identify the
self of the system, as oftentimes all elements composing the system can be ascribed equal agency. Finally,
in cybernetics and systems theory, the dependence of the boundaries of a system on the observer has
been thoroughly discussed [70]: One wants to have an objective description of phenomena, but descriptions are necessarily made by observers, making them partially subjective.

Table 1. Examples of ALife systems classified according to different types of self-organization and domains. Depending on the observerʼs purposes, the same ALife system could be considered as exhibiting different types of self-organization. Therefore, the types are non-exclusive, and the boundaries between them are not sharp.

Domain   Internal                                  External, direct interactions        External, indirect interactions
Soft     Pattern formation, cellular automata,     Boids [161], swarm chemistry [168]   Ant colony optimization [40]
         artificial neural networks
Hard     Self-modeling robots [22],                Alice [61], Jasmine [104],           TERMES [208]
         swarm-bots [42]                           Kilobots [162]
Wet      Protocells [155], active droplets [31]    Xenobots [113], Predator             Slime mold machines [1], collective
                                                   protocells [154]                     behavior of droplets [31]
It becomes clear, then, that discussing self-organization requires the identification of what is self
and what is other, and what are the elements that are increasing in their organization. Similar issues have
been tackled in [131] in the definition of living systems as autopoietic systems. According to this
tradition, a living system is inherently self-organizing because the self is continuously produced or
renewed by processes brought forth by the systemʼs internal components. In other words, an
autopoietic system can be recognized as a unity with boundaries that encompass a number of
simpler elementary components that are at the basis of the organization of the system, as they
are responsible for the definition of the system boundaries and for the (re)production of the very
same components [195]. This is a peculiar characteristic of living systems. If life is deeply rooted in
self-organization, so can ALife be, and the several acceptations of ALife discussed above demonstrate
the richness of the links it holds with self-organization. Nevertheless, the tradition of autopoiesis
did not originally consider evolution (history), an essential aspect of biology.
Whether evolution itself is an example of self-organization warrants discussion, too. Evolution is often
depicted as synonymous with adaptation, a convergent process toward optimal types driven by
external mechanisms (selection criteria or fitness landscapes). This has often been discussed as opposed
or complementary to self-organization, most notably by Gould [77] and Kauffman [101]. Meanwhile, there is also an effort
toward re-describing biological evolution as a kind of self-organization [207], as all the mechanisms of
evolution, such as variation, reproduction, and selection, are ultimately grounded upon local, uncontrolled
physical/chemical processes. Also, if one uses a very large spatial/temporal-scale perspective to observe
evolution, it can be regarded as a self-organizing process of the population of evolving organisms in that
they may spontaneously generate more diverse species, more complex interspecific interactions, and even
higher-order evolving entities, as diverse scales of space, time, and complexity are relevant [124].
Looking at the perspectives of ALife, it can be useful to think of self-organization as a common
language that unifies the soft, hard, and wet domains. The term is broadly used across many areas,
pointing to the existence of common features that can tie together otherwise disparate studies. By
recognizing and exploiting these commonalities, a better understanding of self-organization should
help the advancement of ALife. The ALife community can progress owing to shared concepts and
definitions, and despite the mentioned difficulties, self-organization stands as a common ground on
which to build consensus. Most importantly, we believe that the identification and classifications of the
mechanisms that underpin self-organization can be extremely useful to synthesize novel forms of ALife
and gain a better understanding of life itself.
These mechanisms should be identified at the level of the system components and characterized by
the effects they have on the system organization. Mechanisms pertain to the modalities of interaction
among system components (e.g., collisions, perceptions, direct communication, stigmergy), to behavioral
patterns pertaining to individual components (e.g., exploration versus exploitation), and to information
enhancement or suppression (e.g., recruitment or inhibitory processes). The effects of the mechanisms
should be visible in the creation of feedback loops—positive or negative—at the system level, which
determine the complex dynamics underlying self-organization. We believe that, by identifying and char-
acterizing the mechanisms that support self-organization, the synthesis of artefacts with lifelike properties
would be much simplified. In this perspective, mechanisms underlying self-organization could potentially
be thought of as design patterns to generate ALife systems [11, 52, 160]. By exploiting and composing them,
different forms of ALife could be designed with a principled approach, owing to the understanding of the
relationship between mechanisms and system organization.
The possibility of exploiting self-organization for design purposes is especially relevant to the
development of living technologies, that is, technologies presenting features of living systems [21], such
as robustness, adaptability, and self-organization, which can include self-reconfiguration, self-healing,
self-management, self-assembly, and so on, often named together as “self-*” in the context of
autonomic computing [149].
Self-organization has been used directly in living technologies within a variety of domains [20],
from protocells [155] to cities [67]. Recent work programming [1] or designing multicellular organ-
isms [99, 112] also falls within this category. Also, several methodologies that use self-organization
have been proposed in engineering [54]. A major leap forward can be expected when principled
design methodologies are laid down, and a better understanding of self-organization for ALife
can be at the forefront of the development of such methods.
It is also worth considering when self-organization is not useful in the context of ALife. Tracing a
clear line across the domain is of course impossible, but our reasoning above provides some sugges-
tions. Indeed, self-organization does not account for every lifelike process, for instance when there is no clear increase in organization. As an example, hard ALife has strongly developed the concept of
embodied cognition and morphological computation [143, 144], where the dynamics of mind-
body-environment interaction are fundamental aspects. These dynamics, albeit very complex, are
not easily described within the framework of self-organization. Self-organization is useful when we
are interested in observing phenomena at more than one scale, as it allows us to describe how elements
interact to produce systemic properties. Still, if we are only interested in observing phenomena at a
single scale, then perhaps self-organization would not offer any descriptive advantage. Examples
include embodied cognition (when we are focusing on a single cognitive agent and its interaction with
its environment) and most of the traditional types of evolutionary algorithms (when there are no
interactions between individuals of a population).
Depending on the desired function of a system and the properties of its environment, several
balances have to be considered, for example, between order and chaos, between robustness and
adaptability, between production and destruction, and between exploration and exploitation. Self-
organization can be useful for letting systems find by themselves the appropriate balances for their
current context, as the optimal balance can change [71].
6 Open Questions
There are several open questions that make for promising lines of research in the near future within
ALife:
1. How can self-organization be programmed? Self-organization relies on interactions (direct or
indirect). Thus, it makes sense to focus on designing interactions to regulate and guide
self-organization. Mediators [89, 132] can promote or constrain individual behaviors,
precisely to achieve the proper interactions that will lead to the desired self-organization
[64]. Information-theoretical approaches can also be used to program self-organization
[111, 151]. Still, proposed approaches have been either too general or too specific. This
makes it difficult to replicate successful self-organizing solutions beyond the original
problems and remains an open challenge.
2. Can the macroscopic outcomes of self-organization be predicted? Interactions in complex systems
generate novel information that is not present in initial or boundary conditions, limiting
predictability. This is referred to as “computational irreducibility” [66, 212]: There is no
shortcut to the future; a system has to go through all intermediate steps. Thus, a priori
claims are limited, and we often work with a posteriori approaches. In some cases, coarse-
grained descriptions can be found to predict self-organization and other properties (e.g.,
[95]). Still, this has not been generalized. In the ALife community, we rely on the synthetic
method [182]: We build artificial systems to contrast theories, but of course this is a
posteriori. Even when prediction is limited by the complex nature of phenomena studied
within ALife, forecasting could be useful. Just as with the weather, precise prediction is
not possible (e.g., when, where, and how much will it rain?), but within a certain range,
forecasts can be made with a high probability (e.g., 80% chance of rain).
3. What is the role of self-organization in the open problems of ALife? Bedau et al. [19] listed fourteen challenges grouped into three broad subjects: the transition to life; the evolutionary potential of life; and
the relation between life, mind, and culture. It can be argued that self-organization is
present in all of these, and thus relevant. Certainly, solving problems related to self-
organization will not solve all ALife problems, but it can provide useful advances. For
example, research related to open-ended evolution [186] goes beyond self-organization.
Still, better understanding self-organizing mechanisms could assist in the development and
characterization of systems that exhibit open-ended evolution.
4. What are the theoretical and practical limits of self-organization? Even when it has demonstrated its
usefulness, self-organization is no panacea. Self-organization is most appropriate when
there is multiscale causality and high complexity [69], but centralized or distributed
approaches can be more appropriate for other contexts (when there is only bottom-up or
top-down causality, or when complexity is low or medium). Still, further work is needed to
be able to identify qualitative and quantitative limits of self-organization.
5. How can understanding of self-organization in ALife benefit other disciplines? These include biology,
medicine, engineering, philosophy, sociology, economics, and more. Independent of
whether ALife is credited or not, the question is whether ALife research will be able to
contribute to the solution of problems that otherwise would not be solvable. There are
promising examples and successful case studies (e.g., [30, 109]), but broader adoption and
dissemination are required to make a difference.
These and more questions highlight the strong role that self-organization has within ALife. Searching
for their answers will be challenging, but the insights provided will permeate beyond ALife.
Acknowledgments
This article benefited from comments by Luis Rocha and reviewers from the ALIFE 2018 conference
on an earlier version of this work [73].
References
1. Adamatzky, A. (2010). Physarum machines. Singapore: World Scientific.
2. Adamatzky, A. (2015). A would-be nervous system made from a slime mold. Artificial Life, 21, 73–91.
3. Adamatzky, A., de Lacy Costello, B., & Shirakawa, T. (2008). Universal computation with limited
resources: Belousov–Zhabotinsky and Physarum computers. International Journal of Bifurcation and Chaos, 18,
2373–2389.
4. Adami, C., Schossau, J., & Hintze, A. (2016). Evolutionary game theory using agent-based methods.
Physics of Life Reviews, 19, 1–26.
5. Aguilar, W., Santamaría Bonfil, G., Froese, T., & Gershenson, C. (2014). The past, present, and future of
artificial life. Frontiers in Robotics and AI, 1.
6. Aldana, M., Dossetti, V., Huepe, C., Kenkre, V. M., & Larralde, H. (2007). Phase transitions in systems of
self-propelled agents and related network models. Physical Review Letters, 98, 095702.
7. Ampatzis, C., Tuci, E., Trianni, V., Christensen, A., & Dorigo, M. (2009). Evolving self-assembly in
autonomous homogeneous robots: Experiments with two physical robots. Artificial Life, 15(4), 465–484.
8. Ashby, W. R. (1947). Principles of the self-organizing dynamic system. Journal of General Psychology, 37,
125–128.
9. Ashby, W. R. (1962). Principles of the self-organizing system. In H. V. Foerster, & G. W. Zopf, Jr. (Eds.),
Principles of self-organization (pp. 255–278). Oxford: Pergamon.
10. Ay, N., Der, R., & Prokopenko, M. (2012). Guided self-organization: Perception–action loops of embodied
systems. Theory in Biosciences, 131, 125–127.
11. Babaoglu, O., et al. (2006). Design patterns from biology for distributed computing. ACM Transactions on
Autonomous Adaptive Systems, 1, 26–66.
12. Bachmann, P. A., Luisi, P. L., & Lang, J. (1992). Autocatalytic self-replicating micelles as models for
prebiotic structures. Nature, 357, 57.
13. Bachmann, P. A., Walde, P., Luisi, P. L., & Lang, J. (1990). Self-replicating reverse micelles and chemical
autopoiesis. Journal of the American Chemical Society, 112, 8200–8201.
14. Bak, P., Chen, K., & Kreutz, M. (1989). Self-organized criticality in the “game of life.” Nature, 342, 780–782.
15. Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A, 38, 364.
16. Baldassarre, G., Trianni, V., Bonani, M., Mondada, F., Dorigo, M., & Nolfi, S. (2007). Self-organized
coordinated motion in groups of physically connected robots. IEEE Transactions on Systems Man and
Cybernetics, Part B (Cybernetics), 37, 224–239.
17. Banzhaf, W., & Yamamoto, L. (2015). Artificial chemistries. Cambridge, MA: MIT Press.
18. Beckers, R., Holland, O. E., & Deneubourg, J.-L. (1994). From local actions to global tasks: Stigmergy
and collective robotics. In Proceedings of ALife IV. Cambridge, MA: MIT Press.
19. Bedau, M., McCaskill, J., Packard, N., Rasmussen, S., Green, D., Ikegami, T., Kaneko, K., & Ray, T. (2000). Open problems in artificial life. Artificial Life, 6, 363–376.
20. Bedau, M. A., McCaskill, J. S., Packard, N. H., Parke, E. C., & Rasmussen, S. R. (2013). Introduction to
recent developments in living technology. Artificial Life, 19, 291–298.
21. Bedau, M. A., McCaskill, J. S., Packard, N. H., & Rasmussen, S. (2009). Living technology: Exploiting
lifeʼs principles in technology. Artificial Life, 16, 89–97.
22. Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient machines through continuous self-modeling. Science,
314, 1118–1121.
23. Bousquet, F., & Page, C. L. (2004). Multi-agent simulations and ecosystem management: A review.
Ecological Modelling, 176, 313–332.
24. Braitenberg, V. (1986). Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT Press.
25. Brede, M. (2011). The evolution of cooperation on correlated payoff landscapes. Artificial Life, 17, 365–373.
26. Brooks, R. A. (1989). A robot that walks; emergent behaviors from a carefully evolved network. Neural
Computation, 1(2), 253–262.
27. Brooks, R. A., & Matarić, M. J. (1993). Real robots, real learning problems. In J. H. Connell &
S. Mahadevan (Eds.), Robot learning (pp. 193–213). Dordrecht: Kluwer Academic Press.
28. Bryden, J., Funk, S., Geard, N., Bullock, S., & Jansen, V. A. A. (2010). Stability in flux: Community
structure in dynamic networks. Journal of the Royal Society Interface, 8, 1031–1040.
29. Camazine, S., Deneubourg, J.-L., Franks, N. R., Sneyd, J., Theraulaz, G., & Bonabeau, E. (2003).
Self-organization in biological systems. Princeton, NJ: Princeton University Press.
30. Carreón, G., Gershenson, C., & Pineda, L. A. (2017). Improving public transportation systems with self-
organization: A headway-based model and regulation of passenger alighting and boarding. PLoS ONE,
12, 1–20.
31. Čejková, J., Banno, T., Hanczyc, M. M., & Štěpánek, F. (2017). Droplets as liquid robots. Artificial Life, 23,
528–549.
32. Čejková, J., Hanczyc, M. M., & Štěpánek, F. (2018). Multi-armed droplets as shape-changing protocells.
Artificial Life, 24, 71–79.
33. Čejková, J., Novak, M., Štěpánek, F., & Hanczyc, M. M. (2014). Dynamics of chemotactic droplets in salt
concentration gradients. Langmuir, 30, 11937–11944.
34. Codd, E. F. (1968). A self-reproducing universal computer-constructor. In Cellular automata (pp. 81–105).
Cambridge, MA: Academic Press.
35. Couzin, I. D., Krause, J., James, R., Ruxton, G. D., & Franks, N. R. (2002). Collective memory and spatial
sorting in animal groups. Journal of Theoretical Biology, 218, 1–11.
36. Cross, M. C., & Hohenberg, P. C. (1993). Pattern formation outside of equilibrium. Reviews of Modern
Physics, 65, 851–1112.
37. Crutchfield, J. P. (2011). Between order and chaos. Nature Physics, 8, 17–24.
38. Davies, A. P., Watson, R. A., Mills, R., Buckley, C. L., & Noble, J. (2011). “If you canʼt be with the one
you love, love the one youʼre with”: How individual habituation of agent interactions improves global
utility. Artificial Life, 17, 167–181.
39. Dittrich, P., Ziegler, J., & Banzhaf, W. (2001). Artificial chemistries—a review. Artificial Life, 7, 225–275.
40. Dorigo, M., & Stützle, T. (2004). Ant colony optimization. Cambridge, MA: MIT Press.
41. Dorigo, M., Trianni, V., Sahin, E., Groß, R., Labella, T. H., Baldassarre, G., Nolfi, S., Deneubourg, J.-L.,
Mondada, F., Floreano, D., & Gambardella, L. M. (2004). Evolving self-organizing behaviors for a swarm-bot.
Autonomous Robots, 17, 223–245.
42. Dorigo, M., Tuci, E., Trianni, V., Groß, R., Nouyan, S., Ampatzis, C., Labella, T. H., OʼGrady, R., Bonani,
M., & Mondada, F. (2006). SWARM-BOT: Design and implementation of colonies of self-assembling
robots. In Computational intelligence: Principles and practice. New York: IEEE Computational Intelligence
Society.
43. Doursat, R. (2011). The myriads of ALife: Importing complex systems and self-organization into
engineering. In 2011 IEEE Symposium on Artificial Life (pp. 1–8). New York: IEEE.
44. Doursat, R., Sayama, H., & Michel, O. (Eds.) (2012). Morphogenetic engineering: Toward programmable complex
systems. New York: Springer-Verlag.
45. Downing, K. L. (2015). Intelligence emerging: Adaptivity and search in evolving neural systems. Cambridge, MA:
MIT Press.
46. Epstein, J. M., & Axtell, R. L. (1996). Growing artificial societies: Social science from the bottom up. Washington,
DC: Brookings Institution Press, & Cambridge, MA: MIT Press.
47. Ermentrout, G. B., & Edelstein-Keshet, L. (1993). Cellular automata approaches to biological modeling.
Journal of Theoretical Biology, 160, 97–133.
48. Erskine, A., & Herrmann, J. M. (2015). Cell-division behavior in a heterogeneous swarm environment.
Artificial Life, 21(4), 481–500.
49. Feltz, B., Crommelinck, M., & Goujon, P. (Eds.) (2006). Self-organization and emergence in life sciences. New York:
Springer.
50. Fernández, N., Maldonado, C., & Gershenson, C. (2014). Information measures of complexity, emergence,
self-organization, homeostasis, and autopoiesis. In M. Prokopenko (Ed.), Guided self-organization: Inception
(pp. 19–51), New York: Springer.
51. Fernandez-Marquez, J. L., Di Marzo Serugendo, G., Montagna, S., Viroli, M., & Arcos, J. L. (2013).
Description and composition of bio-inspired design patterns: A complete overview. Natural Computing,
12, 43–67.
52. Field, R. J., & Noyes, R. M. (1974). Oscillations in chemical systems. IV. Limit cycle behavior in a model
of a real chemical reaction. The Journal of Chemical Physics, 60, 3349.
53. Francesca, G., Brambilla, M., Brutschy, A., Trianni, V., & Birattari, M. (2014). Automode: A novel
approach to the automatic design of control software for robot swarms. Swarm Intelligence, 8, 89–112.
54. Frei, R., & Di Marzo Serugendo, G. (2011). Advances in complexity engineering. International Journal of
Bio-Inspired Computation, 3, 199–212.
55. Froese, T., Virgo, N., & Ikegami, T. (2014). Motility at the origin of life: Its characterization and a model.
Artificial Life, 20, 55–76.
56. Froese, T., Gershenson, C., & Manzanilla, L. R. (2014). Can government be self-organized? A
mathematical model of the collective social organization of ancient Teotihuacan, central Mexico. PLoS
ONE, 9, e109966.
57. Froese, T., & Manzanilla, L. R. (2018). Modeling collective rule at ancient Teotihuacan as a complex
adaptive system: Communal ritual makes social hierarchy more effective. Cognitive Systems Research, 52,
862–874.
58. Garfinkel, A. (1987). The slime mold Dictyostelium as a model of self-organization in social systems. In
Self-organizing systems (pp. 181–213). New York: Springer.
59. Garnier, S., Combe, M., Jost, C., & Theraulaz, G. (2013). Do ants need to estimate the geometrical
properties of trail bifurcations to find an efficient route? A swarm robotics test bed. PLoS Computational
Biology, 9, e1002903.
60. Garnier, S., Gautrais, J., Asadpour, M., Jost, C., & Theraulaz, G. (2009). Self-organized aggregation
triggers collective decision making in a group of cockroach-like robots. Adaptive Behavior, 17, 109–133.
61. Garnier, S., Jost, C., Gautrais, J., Asadpour, M., Caprari, G., Jeanson, R., Grimal, A., & Theraulaz, G.
(2008). The embodiment of cockroach aggregation behavior in a group of micro-robots. Artificial Life, 14,
387–408.
62. Geard, N., & Bullock, S. (2010). Competition and the dynamics of group affiliation. Advances in Complex
Systems, 13, 501.
63. Gershenson, C. (2003). Artificial neural networks for beginners, teaching package. arXiv preprint cs/0308031.
64. Gershenson, C. (2007). Design and control of self-organizing systems. CopIt Arxives, http://tinyurl.com/DCSOS2007.
65. Gershenson, C. (2012). Guiding the self-organization of random Boolean networks. Theory in Biosciences,
131, 181–191.
66. Gershenson, C. (2013). The implications of interactions for science and philosophy. Foundations of Science,
18, 781–790.
67. Gershenson, C. (2013). Living in living cities. Artificial Life, 19, 401–420.
68. Gershenson, C. (2018). Information in science and Buddhist philosophy: Towards a non-materialistic
worldview, preprint. https://www.preprints.org/manuscript/201812.0042.
69. Gershenson, C. (2020). Guiding the self-organization of cyber-physical systems. Frontiers in Robotics and AI,
7, 41.
70. Gershenson, C., Csermely, P., Erdi, P., Knyazeva, H., & Laszlo, A. (2014). The past, present and future of
cybernetics and systems research. Systema: Connecting matter, life, culture and technology, 1, 4–13.
71. Gershenson, C., & Helbing, D. (2015). When slower is faster. Complexity, 21, 9–15.
72. Gershenson, C., & Heylighen, F. (2003). When can we call a system self-organizing? In W. Banzhaf, T.
Christaller, P. Dittrich, J. T. Kim, & J. Ziegler (Eds.), Advances in Artificial Life, 7th European Conference,
ECAL 2003, LNAI 2801 (pp. 606–614). New York: Springer.
73. Gershenson, C., Trianni, V., Werfel, J., & Sayama, H. (2018). Self-organization and artificial life: A review.
In T. Ikegami, N. Virgo, O. Witkowski, M. Oka, R. Suzuki, & H. Iizuka (Eds.), The 2018 Conference on
Artificial Life: A hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the
Synthesis and Simulation of Living Systems (ALife) (pp. 510–517). Cambridge, MA: MIT Press.
74. Gilbert, N., & Conte, R. (Eds.) (1995). Artificial Societies: The computer simulation of social life. Milton Park, UK:
Taylor & Francis.
75. Giomi, L., Hawley-Weld, N., & Mahadevan, L. (2013). Swarming, swirling and stasis in sequestered
bristle-bots. Proceedings of the Royal Society A, 469, 20120637.
76. Glansdorff, P., & Prigogine, I. (1971). Thermodynamic theory of structure, stability and fluctuations. Hoboken, NJ:
Wiley-Interscience.
77. Gould, S. J. (1990). Wonderful life: The Burgess shale and the nature of history. New York: W.W. Norton.
78. Griffith, S., Goldwater, D., & Jacobson, J. (2005). Self-replication from random parts. Nature, 437, 636.
79. Gromski, P. S., Granda, J. M., & Cronin, L. (2020). Universal chemical synthesis and discovery with ‘the
chemputer.’ Trends in Chemistry, 2, 4–12.
80. Gross, T., & Sayama, H. (Eds.) (2009). Adaptive networks: Theory, models and applications (understanding complex
systems). New York: Springer.
81. Haken, H. (1981). Synergetics and the problem of selforganization. In G. Roth & H. Schwegler (Eds.),
Self-organizing systems: An interdisciplinary approach (pp. 9–13). New York: Campus Verlag.
82. Haken, H. (1988). Information and self-organization: A macroscopic approach to complex systems. New York:
Springer-Verlag.
83. Halloy, J., Sempo, G., Caprari, G., Rivault, C., Asadpour, M., Tâche, F., Saïd, I., Durier, V., Canonge, S.,
Amé, J. M., Detrain, C., Correll, N., Martinoli, A., Mondada, F., Siegwart, R., & Deneubourg, J. L. (2007).
Social integration of robots into groups of cockroaches to control self-organized choices. Science, 318,
1155–1158.
84. Hanczyc, M. M., Fujikawa, S. M., & Szostak, J. W. (2003). Experimental models of primitive cellular
compartments: Encapsulation, growth, and division. Science, 302, 618–622.
85. Hanczyc, M. M., Toyota, T., Ikegami, T., Packard, N., & Sugawara, T. (2007). Fatty acid chemistry at the
oil-water interface: Self-propelled oil droplets. Journal of the American Chemical Society, 129, 9386–9391.
86. Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
87. Hegselmann, R. (2017). Thomas C. Schelling and James M. Sakoda: The intellectual, technical, and social
history of a model. Journal of Artificial Societies and Social Simulation, 20, 15.
88. Hesse, J., & Gross, T. (2014). Self-organized criticality as a fundamental property of neural systems.
Frontiers in Systems Neuroscience, 8, 166.
89. Heylighen, F. (2006). Mediator evolution: A general scenario for the origin of dynamical hierarchies. In D.
Aerts, B. DʼHooghe, & N. Note (Eds.), Worldviews, science and us (pp. 45–48). Singapore: World Scientific.
90. Hildenbrandt, H., Carere, C., & Hemelrijk, C. (2010). Self-organized aerial displays of thousands of
starlings: A model. Behavioral Ecology, 21, 1349–1359.
91. Höfer, T., Sherratt, J. A., & Maini, P. K. (1995). Dictyostelium discoideum: Cellular self-organization in an
excitable biological medium. Proceedings of the Royal Society of London. Series B: Biological Sciences, 259, 249–257.
92. Holzer, R., & De Meer, H. (2011). Methods for approximations of quantitative measures in self-organizing
systems. In C. Bettstetter & C. Gershenson (Eds.), Self-organizing systems (pp. 1–15). New York: Springer.
93. Ichinose, G., & Sayama, H. (2017). Invasion of cooperation in scale-free networks: Accumulated versus
average payoffs. Artificial Life, 23, 25–33.
94. Ilachinski, A. (2001). Cellular automata: A discrete universe. Singapore: World Scientific.
95. Israeli, N., & Goldenfeld, N. (2004). Computational irreducibility and the predictability of complex physical
systems. Physical Review Letters, 92, 074105.
96. Jaffe, K. (2017). The scientific roots of synergy: And how to make cooperation successful. Seattle, WA: Amazon Books.
97. Jakobi, N. (1997). Evolutionary robotics and the radical envelope of noise hypothesis. Adaptive Behavior, 6,
325–368.
98. Juarrero-Roqué, A. (1985). Self-organization: Kantʼs concept of teleology and modern chemistry. The
Review of Metaphysics, 39, 107–135.
99. Kamm, R. D., et al. (2018). Perspective: The promise of multi-cellular engineered living systems. APL
Bioengineering, 2, 040901.
100. Kauffman, S. A. (1969). Metabolic stability and epigenesis in randomly constructed genetic nets. Journal of
Theoretical Biology, 22, 437–467.
101. Kauffman, S. A. (1993). The origins of order. Oxford, UK: Oxford University Press.
102. Kelso, J. S. (1997). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.
103. Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE International
Conference on Neural Networks (pp. 1942–1948). New York: IEEE Press.
104. Kernbach, S., Thenius, R., Kernbach, O., & Schmickl, T. (2009). Re-embodiment of honeybee
aggregation behavior in an artificial micro-robotic system. Adaptive Behavior, 17, 237–259.
105. Keys, G. C., & Dugatkin, L. A. (1990). Flock size and position effects on vigilance, aggression, and prey
capture in the European starling. The Condor, 92, 151–159.
106. Kim, S., & Wensing, P. M. (2017). Design of dynamic legged robots. Foundations and Trends in Robotics, 5(2),
117–190.
107. Kirby, S. (2002). Natural language from artificial life. Artifical Life, 8, 185–215.
108. Kirk, G. S. (1951). Natural change in Heraclitus. Mind, 60, 35–42.
109. Knight, C. J. K., Penn, A. S., & Hoyle, R. B. (2014). Comparing the effects of mutualism and competition
on industrial districts. Physica A: Statistical Mechanics and its Applications, 416, 541–557.
110. Kohonen, T. (2000). Self-organizing maps (3rd ed.). New York: Springer.
111. Krakauer, D., Bertschinger, N., Olbrich, E., Flack, J. C., & Ay, N. (2020). The information theory of
individuality. Theory in Biosciences, 139, 209–223.
112. Kreyssig, P., & Dittrich, P. (2011). Reaction flow artificial chemistries. In ECAL 2011 (pp. 431–437).
Cambridge, MA: MIT Press.
113. Kriegman, S., Blackiston, D., Levin, M., & Bongard, J. (2020). A scalable pipeline for designing
reconfigurable organisms. Proceedings of the National Academy of Sciences of the USA, 117, 1853–1859.
114. Krishnanand, K. N., & Ghose, D. (2009). Glowworm swarm optimization for simultaneous capture of
multiple local optima of multimodal functions. Swarm Intelligence, 3, 87–124.
115. Kunz, H., & Hemelrijk, C. K. (2003). Artificial fish schools: Collective effects of school size, body size,
and body form. Artificial Life, 9, 237–253.
116. Langton, C. G. (1984). Self-reproduction in cellular automata. Physica D: Nonlinear Phenomena, 10, 135–144.
117. Langton, C. G. (1986). Studying artificial life with cellular automata. Physica D: Nonlinear Phenomena, 22,
129–149.
118. Lansing, J. S. (2002). “Artificial Societies” and the social sciences. Artificial Life, 8, 279–292.
119. Lansing, J. S., & Kremer, J. N. (1993). Emergent properties of Balinese water temple networks:
Coadaptation on a rugged fitness landscape. American Anthropologist, 95, 97–114.
120. Lawrence, P. A. (1992). The making of a fly: The genetics of animal design. Hoboken, NJ: Blackwell Scientific
Publications.
121. Lazer, D., et al. (2009). Life in the network: The coming age of computational social science. Science, 323, 721.
122. Lee, K.-J., McCormick, W. D., Pearson, J. E., & Swinney, H. L. (1994). Experimental observation of
self-replicating spots in a reaction–diffusion system. Nature, 369, 215.
123. Lehn, J.-M. (2017). Supramolecular chemistry: Where from? Where to? Chemical Society Reviews, 46,
2378–2379.
124. Levin, S. A. (2005). Self-organization and the emergence of complexity in ecological systems. AIBS
Bulletin, 55, 1075–1079.
125. Levine, H., Rappel, W.-J., & Cohen, I. (2000). Self-organization in systems of self-propelled particles.
Physical Review E, 63, 017101.
126. Lindgren, K., & Nordahl, M. G. (1993). Cooperation and community structure in artificial ecosystems.
Artificial Life, 1, 15–37.
127. Lipowska, D., & Lipowski, A. (2012). Naming game on adaptive weighted networks. Artificial Life, 18,
311–323.
128. Luisi, P. L., & Varela, F. J. (1989). Self-replicating micelles—a chemical version of a minimal autopoietic
system. Origins of Life and Evolution of the Biosphere, 19, 633–643.
129. Mamei, M., Menezes, R., Tolksdorf, R., & Zambonelli, F. (2006). Case studies for self-organization in
computer science. Journal of Systems Architecture, 52, 443–460.
130. Marée, A. F., & Hogeweg, P. (2001). How amoeboids self-organize into a fruiting body: Multicellular
coordination in Dictyostelium discoideum. Proceedings of the National Academy of Sciences of the USA, 98,
3879–3883.
131. Maturana, H., & Varela, F. (1980). Autopoiesis and cognition: The realization of the living (2nd ed.). Dordrecht:
D. Reidel.
132. Michod, R. E. (2003). Cooperation and conflict mediation during the origin of multicellularity. In P.
Hammerstein (Ed.), Genetic and cultural evolution of cooperation (pp. 261–307). Cambridge, MA: MIT Press.
133. Moreno, A., & Ruiz-Mirazo, K. (2009). The problem of the emergence of functional diversity in prebiotic
evolution. Biology & Philosophy, 24, 585–605.
134. Murata, S., Kurokawa, H., & Kokaji, S. (1994). Self-assembling machine. In Proceedings of the 1994 IEEE
International Conference on Robotics and Automation. New York: IEEE.
135. Newman, J. P., & Sayama, H. (2008). Effect of sensory blind zones on milling behavior in a dynamic self-
propelled particle model. Physical Review E, 78, 011913.
136. Nicolis, G., & Prigogine, I. (1977). Self-organization in non-equilibrium systems: From dissipative structures to order
through fluctuations. Hoboken, NJ: Wiley.
137. Nishikawa, N., Suzuki, R., & Arita, T. (2018). Exploration of swarm dynamics emerging from asymmetry.
Applied Sciences, 8(5), 729.
138. Oros, N., & Nehaniv, C. L. (2007). Sexyloop: Self-reproduction, evolution and sex in cellular automata. In
2007 IEEE Symposium on Artificial Life. New York: IEEE.
139. Oros, N., & Nehaniv, C. L. (2009). Dude, where is my sex gene?—persistence of sex over evolutionary
time in cellular automata. In 2009 IEEE Symposium on Artificial Life. New York: IEEE.
140. Packard, N. (1986). Lattice models for solidification and aggregation. In S. Wolfram (Ed.), Theory and
application of cellular automata (pp. 305–310). Singapore: World Scientific.
141. Parrilla-Gutierrez, J. M., Tsuda, S., Grizou, J., Taylor, J., Henson, A., & Cronin, L. (2017). Adaptive
artificial evolution of droplet protocells in a 3D-printed fluidic chemorobotic platform with configurable
environments. Nature Communications, 8, 1144.
142. Pearson, J. E. (1993). Complex patterns in a simple system. Science, 261, 189–192.
143. Pfeifer, R., Lungarella, M., & Iida, F. (2007). Self-organization, embodiment, and biologically inspired
robotics. Science, 318, 1088–1093.
144. Pfeifer, R., & Gómez, G. (2009). Morphological computation—connecting brain, body, and environment.
In B. Sendhoff, E. Körner, O. Sporns, H. Ritter, & K. Doya (Eds.), Creating brain-like intelligence (pp. 66–83).
Berlin, Heidelberg: Springer.
145. Pham, D., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., & Zaidi, M. (2006). The bees algorithm—a
novel tool for complex optimisation problems. In Intelligent Production Machines and Systems: 2nd I*PROMS
virtual conference (p. 454). Amsterdam: Elsevier Science.
146. Polani, D. (2003). Measuring self-organization via observers. In W. Banzhaf, J. Ziegler, T. Christaller, P.
Dittrich, & J. T. Kim (Eds.), Advances in artificial life (pp. 667–675). Berlin, Heidelberg: Springer.
147. Polani, D. (2008). Foundations and formalizations of self-organization. In M. Prokopenko (Ed.), Advances
in applied self-organizing systems (pp. 19–37). London: Springer.
148. Polani, D., Prokopenko, M., & Yaeger, L. S. (2013). Information and self-organization of behavior. Advances
in Complex Systems, 16, 1303001.
149. Poslad, S. (2009). Autonomous systems and artificial life. In Ubiquitous computing: Smart devices, environments and interactions (pp. 317–341). Hoboken, NJ: Wiley-Blackwell.
150. Prokopenko, M. (2009). Guided self-organization. HFSP Journal, 3, 287–289.
151. Prokopenko, M. (Ed.) (2014). Guided self-organization: Inception. New York: Springer.
152. Prokopenko, M., Boschetti, F., & Ryan, A. (2009). An information-theoretic primer on complexity, self-
organisation and emergence. Complexity, 15, 11–28.
153. Prokopenko, M., & Gershenson, C. (2014). Entropy methods in guided self-organisation. Entropy, 16, 5232–5241.
154. Qiao, Y., Li, M., Booth, R., & Mann, S. (2017). Predatory behaviour in synthetic protocell communities.
Nature Chemistry, 9, 110–119.
155. Rasmussen, S., Bedau, M. A., Chen, L., Deamer, D., Krakauer, D. C., Packard, N. H., & Stadler, P. F.
(Eds.) (2008). Protocells: Bridging nonliving and living matter. Cambridge, MA: MIT Press.
156. Rasmussen, S., Chen, L., Deamer, D., Krakauer, D. C., Packard, N. H., Stadler, P. F., & Bedau, M. A.
(2004). Transitions from nonliving to living matter. Science, 303, 963–965.
157. Rasmussen, S., Chen, L., Nilsson, M., & Abe, S. (2003). Bridging nonliving and living matter. Artificial Life,
9, 269–316.
158. Reggia, J. A., Armentrout, S. L., Chou, H.-H., & Peng, Y. (1993). Simple systems that exhibit self-directed
replication. Science, 259, 1282–1287.
159. Reina, A., Bose, T., Trianni, V., & Marshall, J. A. R. (2018). Effects of spatiality on value-sensitive decisions
made by robot swarms. In R. Groß, A. Kolling, S. Berman, E. Frazzoli, A. Martinoli, F. Matsuno, & M. Gauci
(Eds.), Distributed autonomous robotic systems (pp. 461–473). New York: Springer International.
160. Reina, A., Valentini, G., Fernández-Oto, C., Dorigo, M., & Trianni, V. (2015). A design pattern for
decentralised decision making. PLoS ONE, 10, e0140950–18.
161. Reynolds, C. W. (1987). Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21,
25–34.
162. Rubenstein, M., Cornejo, A., & Nagpal, R. (2014). Programmable self-assembly in a thousand-robot
swarm. Science, 345, 795–799.
163. Sakoda, J. M. (1971). The checkerboard model of social interaction. The Journal of Mathematical Sociology, 1,
119–132.
164. Salzberg, C., & Sayama, H. (2004). Complex genetic evolution of artificial self-replicators in cellular
automata. Complexity, 10, 33–39.
165. Saranli, U., Buehler, M., & Koditschek, D. E. (2001). RHex: A simple and highly mobile hexapod robot. International
Journal of Robotics Research, 20, 616–631.
166. Sayama, H. (1999). A new structurally dissolvable self-reproducing loop evolving in a simple cellular
automata space. Artificial Life, 5, 343–365.
167. Sayama, H. (2004). Self-protection and diversity in self-replicating cellular automata. Artificial Life, 10, 83–98.
168. Sayama, H. (2008). Swarm chemistry. Artificial Life, 15, 105–114.
169. Sayama, H. (2011). Seeking open-ended evolution in swarm chemistry. In 2011 IEEE Symposium on
Artificial Life (pp. 186–193). New York: IEEE.
170. Sayama, H. (2012). Morphologies of self-organizing swarms in 3D swarm chemistry. In Proceedings of the
14th Annual Conference on Genetic and Evolutionary Computation (pp. 577–584). New York: ACM.
171. Sayama, H. (2015). Introduction to the modeling and analysis of complex systems. Open SUNY Textbooks. New York:
State University of New York.
172. Sayama, H., & Yamanoi, J. (2020). Beyond social fragmentation: Coexistence of cultural diversity and
structural connectivity is possible with social constituent diversity. In International Conference on Network
Science (pp. 171–181). New York: Springer.
173. Scheidler, A., Brutschy, A., Ferrante, E., & Dorigo, M. (2016). The k-unanimity rule for self-organized
decision-making in swarms of robots. IEEE Transactions on Cybernetics, 46, 1175–1188.
174. Schelling, T. C. (1971). Dynamic models of segregation. The Journal of Mathematical Sociology, 1, 143–186.
175. Schmickl, T., Stefanec, M., & Crailsheim, K. (2016). How a life-like system emerges from a simple particle
motion law. Scientific Reports, 6, 37969.
176. Shalizi, C. R. (2001). Causal architecture, complexity and self-organization in time series and cellular automata. PhD
thesis, University of Wisconsin at Madison.
177. Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
178. Sipper, M. (1998). Fifty years of research on self-replication: An overview. Artificial Life, 4, 237–257.
179. Skår, J., & Coveney, P. V. (Eds.) (2003). Self-organization: The quest for the origin and evolution of
structure. Philosophical Transactions of the Royal Society of London. Series A, 361(1807).
180. Slavkov, I., Carrillo-Zapata, D., Carranza, N., Diego, X., Jansson, F., Kaandorp, J., Hauert, S., & Sharpe, J.
(2018). Morphogenesis in robot swarms. Science Robotics, 3.
181. Smith, K., Kirby, S., & Brighton, H. (2003). Iterated learning: A framework for the emergence of
language. Artificial Life, 9, 371–386.
182. Steels, L. (1993). Building agents out of autonomous behavior systems. In L. Steels & R. A. Brooks (Eds.),
The artificial life route to artificial intelligence: Building embodied situated agents (pp. 102–137). Mahwah, NJ:
Lawrence Erlbaum.
183. Steels, L. (1995). A self-organizing spatial vocabulary. Artificial Life, 2, 319–332.
184. Stengers, I. (1985). Généalogies de lʼauto-organisation. Cahiers du CREA, 8.
185. Suzuki, K., & Ikegami, T. (2006). Spatial-pattern-induced evolution of a self-replicating loop network.
Artificial Life, 12, 461–485.
186. Taylor, T., et al. (2016). Open-ended evolution: Perspectives from the OEE workshop in York. Artificial
Life, 22, 408–423.
187. Theraulaz, G., & Bonabeau, E. (1999). A brief history of stigmergy. Artificial Life, 5, 97–116.
188. Trianni, V., & Dorigo, M. (2006). Self-organisation and communication in groups of simulated and
physical robots. Biological Cybernetics, 95, 213–231.
189. Turgut, A. E., Çelikkanat, H., Gökçe, F., & Şahin, E. (2008). Self-organized flocking in mobile robot
swarms. Swarm Intelligence, 2, 97–120.
190. Turing, A. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of
London. Series B, Biological Sciences, 237, 37–72.
191. Valentini, G., Ferrante, E., & Dorigo, M. (2017). The best-of-n problem in robot swarms: Formalization,
state of the art, and novel perspectives. Frontiers in Robotics and AI, 4.
192. Valentini, G., Ferrante, E., Hamann, H., & Dorigo, M. (2015). Collective decision with 100 kilobots: Speed
versus accuracy in binary discrimination problems. Autonomous Agents and Multi-Agent Systems, 30, 553–580.
193. Van Segbroeck, S., Santos, F. C., Lenaerts, T., & Pacheco, J. M. (2009). Emergence of cooperation in
adaptive social networks with behavioral diversity. In European Conference on Artificial Life (pp. 434–441).
New York: Springer.
194. Vanag, V. K., & Epstein, I. R. (2001). Pattern formation in a tunable medium: The Belousov-Zhabotinsky
reaction in an aerosol OT microemulsion. Physical Review Letters, 87, 228301.
195. Varela, F. J., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The organization of living systems, its
characterization and a model. Biosystems, 5, 187–196.
196. Vásárhelyi, G., Virágh, C., Somorjai, G., Nepusz, T., Eiben, A. E., & Vicsek, T. (2018). Optimized
flocking of autonomous drones in confined environments. Science Robotics, 3(20).
197. Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I., & Shochet, O. (1995). Novel type of phase transition in a
system of self-driven particles. Physical Review Letters, 75, 1226.
198. Vicsek, T., & Zafeiris, A. (2012). Collective motion. Physics Reports, 517, 71–140.
199. Virágh, C., Nagy, M., Gershenson, C., & Vásárhelyi, G. (2016). Self-organized UAV traffic in realistic
environments. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1645–1652).
New York: IEEE.
200. von Foerster, H. (1960). On self-organizing systems and their environments. In M. C. Yovitts & S. Cameron
(Eds.), Self-organizing systems (pp. 31–50). New York: Pergamon.
201. von Neumann, J. (1966). The theory of self-reproducing automata. Champaign, IL: University of Illinois Press.
202. Walde, P., Wick, R., Fresta, M., Mangone, A., & Luisi, P. L. (1994). Autopoietic self-reproduction of fatty
acid vesicles. Journal of the American Chemical Society, 116, 11649–11654.
203. Walter, W. G. (1950). An imitation of life. Scientific American, 182, 42–45.
204. Walter, W. G. (1951). A machine that learns. Scientific American, 185, 60–63.
205. Watson, R. A., Buckley, C. L., & Mills, R. (2011). Optimisation in “self-modelling” complex adaptive
systems. Complexity, 16(5), 17–26.
206. Watson, R. A., Mills, R., & Buckley, C. L. (2011). Global adaptation in networks of selfish components:
Emergent associative memory at the system scale. Artificial Life, 17, 147–166.
207. Weber, B. H., & Depew, D. J. (1996). Natural selection and self-organization. Biology & Philosophy, 11, 33–65.
208. Werfel, J., Petersen, K., & Nagpal, R. (2014). Designing collective behavior in a termite-inspired robot
construction team. Science, 343, 754–758.
209. Whitesides, G. M., & Grzybowski, B. (2002). Self-assembly at all scales. Science, 295, 2418–2421.
210. Wolfram, S. (1983). Statistical mechanics of cellular automata. Reviews of Modern Physics, 55, 601–644.
211. Wolfram, S. (1984). Cellular automata as models of complexity. Nature, 311, 419–424.
212. Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.
213. Wood, R., Nagpal, R., & Wei, G.-Y. (2013). Flight of the RoboBees. Scientific American, 308(3), 60–65.
214. Yang, X.-S. (2009). Firefly algorithms for multimodal optimization. In O. Watanabe & T. Zeugmann
(Eds.), Saga (pp. 169–178). New York: Springer.
215. Yim, M., Shen, W.-M., Salemi, B., Rus, D., Moll, M., Lipson, H., Klavins, E., & Chirikjian, G. S. (2007).
Modular self-reconfigurable robot systems: Challenges and opportunities for the future. IEEE Robotics and
Automation Magazine, 14, 43–52.
216. Young, D. A. (1984). A local activator-inhibitor model of vertebrate skin patterns. Mathematical Biosciences,
72, 51–58.
217. Zykov, V., Mytilinaios, E., Adams, B., & Lipson, H. (2005). Robotics: Self-reproducing machines. Nature,
435, 163–164.