Artificial Life (A-Life) studies the properties of biological and simulated life forms through the techniques of mathematical and computational modeling. It is closely related to the field of Artificial Intelligence both in its philosophy and its methodology. The kinds of questions posed by A-Life range from the simplest chemical processes of organic development in individual cells and organisms, to complex dynamics of evolutionary change in populations of organisms, to the philosophical foundations of the concept of life itself.
The general methodology of A-Life is to come to a better understanding of some specific or general biological phenomenon by modeling it computationally, and then studying the behavior of that model. The "artificial" component of A-Life is its concern with constructing, or synthesizing, its own life-forms to study. The advantage of this synthetic approach over traditional analytic biological methods is that computer models allow researchers to investigate the dynamics of complex systems, such as whole ecologies, which would be too difficult to study directly in nature. Also, because all life forms on Earth are descended from a common ancestor, A-Life is able to study the possibility of alternative forms of life and hypothetical ecosystems. By not being limited to the study of life merely as it happens to occur in nature, A-Life can investigate the potential for new forms of life to arise and, as a result, hopes to achieve a more universal understanding of what life is.
A-Life typically models complex non-linear systems, and poses a number of theoretical questions which could not be asked using analytic techniques. Analytic techniques try to understand a phenomenon by analyzing the component mechanisms which are causally responsible for it. In biology, this means dissecting a biological unit into smaller units: organisms into systems, systems into organs, organs into cells, and so on. Most of the biological phenomena with which A-Life is concerned are "emergent." An emergent phenomenon is one which is not determined by a specific mechanism, such as a single gene or the physiological structure of a single organism, but instead emerges out of the interaction of multiple mechanisms, organisms, and features of the environment. In this sense, A-Life is more concerned with explaining how the complex systems seen in biology arise at all than with explaining the details of how those systems work.
For example, a group behavior such as "flocking" in birds is not a feature of any single bird, but of the whole group. A-Life can study this phenomenon by simulating many artificial birds which behave according to a rather simple shared behavioral model, under which individual birds attempt to fly toward the perceived "center of mass" of the group and to avoid obstacles (Reynolds 1987). The success of the simulation is determined by the degree to which the behavior of the flock of artificial birds appears "natural" and "life-like" under various conditions, such as splitting into two groups to fly around an obstacle and merging again on the other side. This kind of simulation is synthetic because the behaviors are specified for each individual bird, while the behavior of the flock is an emergent phenomenon.
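The kind of local rule involved can be sketched in a few lines of code. The following is a minimal illustration in the spirit of Reynolds's "boids", not his actual implementation: the three steering rules (cohesion toward the center of mass, separation from crowded neighbors, alignment with neighbors' headings) are standard, but the weights, perception radius, and speed cap are illustrative assumptions.

```python
import math
import random

def step_flock(positions, velocities, radius=5.0, cohesion_w=0.01,
               separation_w=0.05, alignment_w=0.05, max_speed=1.0):
    """Advance every bird one time step using three local rules:
    steer toward the perceived center of nearby birds (cohesion),
    away from birds that are too close (separation), and toward
    the average heading of neighbors (alignment)."""
    new_positions, new_velocities = [], []
    for i, (px, py) in enumerate(positions):
        vx, vy = velocities[i]
        # Each bird only perceives neighbors within its perception radius.
        neighbors = [j for j in range(len(positions)) if j != i
                     and math.dist(positions[i], positions[j]) < radius]
        if neighbors:
            cx = sum(positions[j][0] for j in neighbors) / len(neighbors)
            cy = sum(positions[j][1] for j in neighbors) / len(neighbors)
            vx += cohesion_w * (cx - px)            # toward center of mass
            vy += cohesion_w * (cy - py)
            avx = sum(velocities[j][0] for j in neighbors) / len(neighbors)
            avy = sum(velocities[j][1] for j in neighbors) / len(neighbors)
            vx += alignment_w * (avx - vx)          # match neighbors' heading
            vy += alignment_w * (avy - vy)
            for j in neighbors:                     # push away from crowding
                d = math.dist(positions[i], positions[j])
                if 0 < d < radius / 2:
                    vx += separation_w * (px - positions[j][0]) / d
                    vy += separation_w * (py - positions[j][1]) / d
        speed = math.hypot(vx, vy)
        if speed > max_speed:                       # cap the speed
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        new_positions.append((px + vx, py + vy))
        new_velocities.append((vx, vy))
    return new_positions, new_velocities

random.seed(0)
pos = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    pos, vel = step_flock(pos, vel)
```

Note that nothing in the code mentions the flock as such: grouping, splitting, and rejoining arise entirely from the interaction of the per-bird rules.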
The biggest problem facing A-Life is to provide a good definition of what it means to "be alive." Most A-Life researchers believe that the artificial life forms they create in their models are not really, or at least not fully, alive. This depends as much on the definition of "life" which they use as it does on the actual features of their creations. Many believe that there is a continuum of life in which some beings are more or less alive than others, and computer programs are less alive than physical organisms. Others believe that a thing is either alive or not-alive, and there is no middle ground between these.
Apart from the question of whether or not there is a continuum of life, there are many different proposals for the best definition of life, and no clear criteria for picking one among the various approaches to specifying the necessary and sufficient conditions for life. A physiological definition of life sets up processes such as digestion, locomotion, growth, and reproduction as hallmarks of life. Of course, it often happens that recognizable life-forms are found which do not perform all of these physiological processes. Similarly, a metabolic definition looks specifically at the flow of energy and materials between an organism and its environment. Biochemical definitions focus on the biochemical processes found in organic life, such as protein synthesis and the replication of the nucleic acids that carry genes. According to a biochemical definition, carbon-based life-forms are the only kind of life-forms possible. Most A-Life researchers feel this is too narrow a definition, biased toward the way terrestrial life happened to evolve, and that it misses the point that life could have evolved differently (Langton 1989).
Genetic definitions of life are more general than biochemical definitions because they do not care about the physical structure of the genetic material itself. Under such views, genes are merely the information which structures an organism's development and behavior. Unlike instructions or programs, however, genes can be passed on and recombined into new organisms. A genetic definition of life requires that an organism be able to pass on heritable traits to its offspring and thus evolve over generations.
Thermodynamic definitions of life are based on theories of energy dynamics taken from physics. Entropy is a measure of the breakdown of "orderliness" in a system, based on the differentiation of energy within the system. For example, a room may begin as cool at one end and hot at the other, but entropy will tend to bring the room toward a homogeneous temperature equilibrium over time. A system is highly "ordered" if it is able to maintain a differentiated structure over time. Systems which become more organized (more differentiated) over time are called self-organizing systems (von Foerster 1960). Another interesting feature of self-organizing systems is that they tend to organize themselves hierarchically: once a set of systems becomes stable, a new level of organization can emerge which uses those stable systems as building blocks (Simon 1996).
One conception of life derived from self-organizing systems is that of autopoiesis. An autopoietic system is an autonomous system which continually produces itself. The idea of autonomy is that a system is in some sense independent of its environment and of any external control (e.g. from a designer or programmer). The idea that the system must continually self-produce stems from the recognition that the struggle against entropy in self-organization is not a one-time battle, but a ceaseless war. Thus, the struggle for life is an unending fight to secure the material and energy for life and to maintain and restore the organism's own health. When a system ceases to produce itself, it dies (Maturana and Varela 1980).
Some definitions of life require a material embodiment of the life form, while others do not. Those definitions that do make this requirement maintain that computer models will never be truly alive, and that a life form will require some physical body, with all the needs, urges, and vulnerabilities a body has, in order to be fully alive. While some argue that only a body made of organic compounds will do, many feel that a robotic body will suffice. A number of A-Life projects revolve around animats, which are artificial animals (Cliff, Harvey & Husbands 1993). Whereas most robots are built and programmed to perform a certain specific task, animats are built and programmed to interact with their environment and learn to get along in it. And whereas a computer simulation of an organism and its environment can only consider a limited number of environmental factors, an animat has to deal with a potentially infinite range of environmental influences. The sub-field of situated robotics is committed to developing just these sorts of robots (Meyer & Wilson 1991).
Closely related to A-Life is a field of computer science called genetic algorithms (GAs), or more broadly evolutionary programming, which is based on using genetic theory to find optimal solutions to problems (Goldberg 1989). GAs are a technique for optimizing non-linear systems which exploits a metaphor borrowed from Darwinian evolutionary theory. According to the theory of natural selection, organisms vary at every generation, while the forces and demands of nature "select" traits among the variations through differential survival. That is, some sets of traits are successful because they allow the organisms which have them to succeed, while the bearers of unfit traits tend to die out and fail to reproduce. Thus, over a series of generations those traits that are still around tend to be the ones which enhance the chances of survival for those organisms which have them, and those organisms are said to be adapted to their environment.
GAs encode the partial solutions of problems as "genes" according to some coding scheme. The algorithm then compares various combinations of those genes according to some evaluation function, and rates the combinations. Those that do better according to this evaluation are given a higher probability of passing their genes on to the next generation. The genes are then recombined based on these probabilities to form a new generation, with operators such as mutation, cross-over, and inversion, applied to the old genes to derive new combinations. This process is then repeated for a large number of generations, usually hundreds or thousands, until an optimal or sufficiently good problem solution is found.
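The generational loop described above can be sketched as follows. This is a minimal illustrative GA, not any particular published implementation: the population size, crossover and mutation rates, and the toy "OneMax" fitness function (count the 1-bits in a genome) are assumptions chosen for demonstration, and the inversion operator is omitted for brevity.

```python
import random

def run_ga(fitness, n_bits=16, pop_size=50, generations=100,
           crossover_rate=0.7, mutation_rate=0.01, seed=0):
    """Evolve bitstring 'genes' toward higher fitness using
    fitness-proportionate selection, one-point crossover, and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(g) for g in pop]
        total = sum(scores)

        def pick():
            # Selection: higher-scoring genomes are proportionally more
            # likely to pass their genes on to the next generation.
            if total == 0:
                return rng.choice(pop)
            return pop[rng.choices(range(pop_size), weights=scores)[0]]

        next_pop = []
        while len(next_pop) < pop_size:
            a, b = pick()[:], pick()[:]
            if rng.random() < crossover_rate:       # one-point crossover
                point = rng.randrange(1, n_bits)
                a, b = a[:point] + b[point:], b[:point] + a[point:]
            for child in (a, b):
                for i in range(n_bits):             # occasional bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                next_pop.append(child)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Toy problem ("OneMax"): the fittest genome is all ones.
best = run_ga(fitness=sum)
```

With the all-ones optimum, the loop reliably drives the population toward genomes that are mostly ones within a hundred generations; a real application would replace the fitness function with an evaluation of the encoded partial solution.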
Historically, the field of A-Life has its roots in cybernetics and automata theory, and specifically in the work of the mathematicians Alan Turing and John von Neumann, and less directly in the work of the psychiatrist W. Ross Ashby and the mathematician Nils Aall Barricelli. Turing not only gave a formal definition of automata, but also foresaw the ways in which the programs which specify an automaton might be randomly combined to form new machines and programs. Later in his life, Turing (1952) also developed a mathematical theory of morphogenesis: how simple chemical differentials could result in complicated recurrent patterns in plant and animal structures, such as the arrangement of petals on a flower or spots on a leopard. The theory of morphogenesis was further developed by the mathematician René Thom (1972).
After developing the modern computer, von Neumann became concerned with the principles governing the self-replication of machines, that is, machines capable of building copies of themselves. To develop his theory, von Neumann created a mathematical formalism called cellular automata (CAs), which is computationally equivalent to other formalisms for automata but represents an automaton as squares on a grid, with rules which determine the state of each square based on the states of its adjacent neighbors (Burks 1966). Using this formalism, von Neumann defined a universal constructor capable of building any cellular automaton for which it is given a description. He then defined a universal constructor which also copied a non-functional element containing its own description, and thus could construct a copy of itself from that description (including a new copy of its own description). Von Neumann's theory of self-replicating automata is considered visionary not only because of its mathematical elegance, but because it predicted the formal structure of DNA and its mechanisms of self-replication several years before that structure was identified by biochemists.
Though not often cited by the leaders of contemporary A-Life research, Ashby and Barricelli also made important early contributions to the field. Ashby's early work on machine learning and information flow, as well as his work on the quantification and transmission of variety in automata (Ashby 1956), was crucial to the rise of second-order cybernetics and self-organizing systems research in the 1960s. The self-replication and transmission of automata envisioned and formalized by Ashby and von Neumann were more thoroughly explored by Arthur Burks (1966, 1970) and Stephen Wolfram (1983, 1986), who later developed the Mathematica software application. John Conway popularized cellular automata with a game called "Life" (Gardner 1970). And Stuart Kauffman (1969, 1971, 1992) further generalized the formal techniques for the study of self-organizing systems and their role in evolutionary development.
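Conway's "Life" makes the grid-and-neighbors formalism concrete in a few lines of code. The sketch below uses the standard Life rule (a dead square with exactly three live neighbors is born; a live square with two or three live neighbors survives) on an unbounded grid represented as a set of live coordinates.

```python
from collections import Counter

def life_step(live_cells):
    """One step of Conway's Life on an unbounded grid. live_cells is a set
    of (x, y) coordinates of live squares; each square's next state depends
    only on its eight adjacent neighbors."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider": after 4 steps the same five-cell shape reappears,
# shifted one square diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

The glider is a classic example of emergence: nothing in the rule mentions motion, yet the five-cell pattern propagates itself across the grid.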
Barricelli extended an early twentieth-century biological theory called "symbiogenesis" to include symbiotic self-reproducing structures of all kinds. The structures he studied were "genes" coded as numerical sequences in one of the first high-speed digital computers, built by von Neumann at the Institute for Advanced Study (Barricelli 1957). The genes did not represent any "traits"; instead, Barricelli studied the properties of patterns and the dynamics of mutations attempting to "take over" or form symbiotic relationships with other genes. John Holland (1975), a student of Burks, developed the first genetic algorithms along very similar lines, but assigned significance to the "genes" within a problem and measured the various combinations by an evaluation function.
The field of "Artificial Life" was given its name by Christopher Langton in 1986, and the new discipline first made itself known at a conference held at Los Alamos in 1987. Since then it has developed two strong journals, Artificial Life and Adaptive Behavior, and has been a key area of research for the Santa Fe Institute. Today there are many conferences which examine the various aspects of A-Life.
by Peter M. Asaro
For Further Research
Adami, C. Introduction to Artificial Life. New York, NY: Springer-Verlag, 1998.
Dyson, G. B. Darwin Among the Machines: The Evolution of Global Intelligence. New York, NY: Addison-Wesley, 1997.
Emmeche, C. The Garden in the Machine: The Emerging Science of Artificial Life. Princeton, NJ: Princeton University Press, 1994.
Levy, S. Artificial Life: The Quest for a New Creation. New York, NY: Pantheon, 1992.
Ray, T. S. "An Evolutionary Approach to Synthetic Biology: Zen and the Art of Creating Life." Artificial Life 1: 179-210, 1994.
Ashby, W. R. An Introduction to Cybernetics. New York, NY: Wiley, 1956.
Barricelli, N. A. "Symbiogenetic Evolution Processes Realized by Artificial Methods." Methodos 9, nos. 35-36: 152, 1957.
Boden, M. A., Ed. The Philosophy of Artificial Life. Oxford, UK: Oxford University Press, 1996.
Burks, A. W., Ed. Theory of Self-Reproducing Automata (by J. von Neumann). Urbana, IL: University of Illinois Press, 1966.
Burks, A. W. Essays on Cellular Automata. Urbana, IL: University of Illinois Press, 1970.
Cliff, D., I. Harvey, and P. Husbands. "Explorations in Evolutionary Robotics." Adaptive Behavior 2: 71-108, 1993.
Gardner, M. "The Fantastic Combinations of John Conway's New Solitaire Game 'Life.'" Scientific American 223(4): 120-123, 1970.
Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
Holland, J. H. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press, 1975.
Kauffman, S. A. "Metabolic Stability and Epigenesis in Randomly Connected Nets." Journal of Theoretical Biology 22: 437-467, 1969.
Kauffman, S. A. "Cellular Homeostasis, Epigenesis, and Replication in Randomly Aggregated Macro-Molecular Systems." Journal of Cybernetics 1: 71-96, 1971.
Kauffman, S. A. The Origins of Order: Self-Organization and Selection in Evolution. Oxford, UK: Oxford University Press, 1992.
Langton, C. G. "Self-Reproduction in Cellular Automata." Physica D 10: 135-144, 1984.
Langton, C. G. "Studying Artificial Life with Cellular Automata." Physica D 22: 120-149, 1986.
Langton, C. G. "Artificial Life." In C. G. Langton, Ed., Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems (held September 1987). Redwood City, CA: Addison-Wesley, pp. 1-47, 1989. (Reprinted, with revisions, in M. A. Boden, Ed., The Philosophy of Artificial Life. Oxford, UK: Oxford University Press, pp. 39-94, 1996.)
Maturana, H. R., and F. J. Varela. Autopoiesis and Cognition: The Realization of the Living. London: Reidel, 1980.
Meyer, J.-A., and S. W. Wilson, Eds. From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior. Cambridge, MA: MIT Press, 1991.
Reynolds, C. W. "Flocks, Herds, and Schools: A Distributed Behavioral Model." Computer Graphics 21(4): 25-34, 1987.
Simon, H. A. The Sciences of the Artificial. 3rd Edition. Cambridge, MA: MIT Press, 1996.
Thom, R. Stabilité structurelle et morphogénèse: Essai d'une théorie générale des modèles. W. A. Benjamin, Inc., 1972. Translated by D. H. Fowler as Structural Stability and Morphogenesis: An Outline of a General Theory of Models. New York, NY: Addison-Wesley, 1994.
Turing, A. M. "The Chemical Basis of Morphogenesis." Philosophical Transactions of the Royal Society B 237: 37-72, 1952.
von Foerster, H. "On Self-Organizing Systems and their Environments." In Self-Organizing Systems, M. Yovits and S. Cameron, Eds. New York, NY: Pergamon Press, pp. 31-50, 1960. (Reprinted in H. von Foerster, Observing Systems. Seaside, CA: Intersystems Publications, pp. 1-23, 1984.)
Wolfram, S. "Statistical Mechanics of Cellular Automata." Reviews of Modern Physics 55: 601-644, 1983.
Wolfram, S. Theory and Applications of Cellular Automata. Singapore: World Scientific, 1986.