Boeing contracts personnel: please note that this section has been added as a rough guide for further
negotiation on contract details:
Two fixed price contracts will be made between Boeing as the primary contractor and
• UWE, as a sub-contractor, and
• Dr. Robert E. Smith, as a sub-contractor.
UWE will provide support and technical assistance to Boeing and Dr. Smith, including consulting time
with UWE ICSC personnel, as well as physical and computational facilities for Dr. Smith’s work.
Dr. Smith will perform the primary research and development work on the framework, interact with
Boeing personnel on the demonstration of the framework on a command and control application, and
interact with UWE personnel on implementation issues. This sub-contract is primarily to cover meetings
between Dr. Smith and Boeing personnel. Primary support for Dr. Smith’s time will be provided through
the UWE sub-contract.
Contract with The University of the West of England: $70,000
Contract with Dr. Robert E. Smith: $15,000
A Framework for Evolutionary Component Capabilities in Agent-
Based Systems: A Proposal
Robert E. Smith, Ph.D.
Department of Aerospace Engineering and Mechanics
The University of Alabama
(currently on sabbatical as)
Senior Research Fellow
The Intelligent Computing Systems Centre
Department of Computer Science and Mathematics
The University of The West of England
For agent-based systems to reach their full potential, an important capability for
individual agents is adaptation. An adaptive technique that is particularly well suited to
the agent-based paradigm is provided by evolutionary computation (EC). EC machine
learning systems have been shown to develop complex systems of coevolved structures.
Moreover, the EC techniques employed are naturally distributable to an agent-based
system. However, no standardized agent-based framework that includes EC capabilities
is available. This proposal suggests the development of such a framework. On
completion, the framework will provide a foundation for giving general agents EC
capabilities. These capabilities will be tested in a basic GA optimization application, a
basic GBML application, and a suitable command and control application provided by
Boeing. Moreover, given the standardized, agent-based nature of the framework,
collaborative work between distributed users, without a priori knowledge of the
collaborations, will be possible. This capability will also be tested as a part of the
proposed effort. A variety of EC-motivated scientific experiments will also be possible
within the framework, and these will be outlined as a part of the proposed effort.
The following sections outline genetics-based machine learning, provide an example of
GBML capabilities (from the authors' past efforts), describe the potential advantages of
EC in agent-based systems, and outline the basic structure of the proposed framework.
Details of team capabilities to perform the proposed research, deliverables, and the
proposed budget details are also provided.
Genetics-based learning systems.
In his seminal 1975 work (Holland, 1975), Holland outlines a framework for generalized
adaptive systems. Holland’s work led to the growing body of research on genetic
algorithms (GAs). These algorithms (and others suggested by natural evolution) have
become lumped under the term evolutionary computation (EC). The advantage of EC
systems is their robust adaptability. Without a priori knowledge, these systems use
implicitly parallel computational leverage to adapt effectively in a broad range of
problem settings.
The majority of GA-based systems have been applied in optimization (Goldberg, 1989).
In such systems, a population of individuals is constructed, such that each individual is
an encoded solution to the problem at hand. Each individual is evaluated with regard to
its utility in the given optimization problem. Based on this evaluation, each individual is
assigned a figure of merit called fitness. Individuals are used to create a new population
through the application of genetic operators. These operators have three primary effects:
• selection: which biases new populations to contain features of highly fit individuals
from the old population
• recombination: which juxtaposes features of two (or more) old individuals in each
new individual
• mutation: which preserves diversity in the population by (very occasionally)
introducing random changes to a new individual.
These operators usually have computationally simple forms, but despite this simplicity,
mathematical arguments show that they have substantial computational leverage as a
generalized search mechanism.
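The GA cycle described above can be sketched in a few dozen lines. The following is a minimal, illustrative Java program, not code from the proposed framework: it uses tournament selection, one-point recombination, and occasional mutation over bit-string individuals, with the toy "count the 1-bits" problem standing in for a real evaluation. All names and parameter values are assumptions for the sketch.

```java
import java.util.Arrays;
import java.util.Random;

// A minimal generational GA over bit-string individuals.
public class SimpleGA {
    static final Random RNG = new Random(42);

    // Utility of one individual: here, simply the number of 1-bits.
    static int fitness(boolean[] ind) {
        int f = 0;
        for (boolean b : ind) if (b) f++;
        return f;
    }

    // Selection: bias toward features of highly fit individuals.
    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(pop.length)];
        boolean[] b = pop[RNG.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Recombination: juxtapose features of two parents in one child.
    static boolean[] recombine(boolean[] p1, boolean[] p2) {
        int cut = RNG.nextInt(p1.length);
        boolean[] child = Arrays.copyOf(p1, p1.length);
        for (int i = cut; i < p2.length; i++) child[i] = p2[i];
        return child;
    }

    // Mutation: very occasional random changes preserve diversity.
    static void mutate(boolean[] ind, double rate) {
        for (int i = 0; i < ind.length; i++)
            if (RNG.nextDouble() < rate) ind[i] = !ind[i];
    }

    // Run the full cycle and return the best fitness in the final population.
    public static int run(int popSize, int genes, int generations) {
        boolean[][] pop = new boolean[popSize][genes];
        for (boolean[] ind : pop)
            for (int i = 0; i < genes; i++) ind[i] = RNG.nextBoolean();
        for (int g = 0; g < generations; g++) {
            boolean[][] next = new boolean[popSize][];
            for (int i = 0; i < popSize; i++) {
                boolean[] child = recombine(tournament(pop), tournament(pop));
                mutate(child, 1.0 / genes);
                next[i] = child;
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] ind : pop) best = Math.max(best, fitness(ind));
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best fitness: " + run(30, 20, 40));
    }
}
```

Despite the simplicity of each operator, the combined cycle reliably drives the population toward high-fitness individuals without any problem-specific knowledge.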
Although the majority of past GA applications are in optimization, one of the most
promising areas of current GA research is in the broader area of genetics-based machine
learning (GBML). In GBML systems, each individual represents a partial solution to the
problem at hand. Through slightly modified forms of the GA process described above,
the population develops a diverse set of individuals that cooperatively perform a desired
function. The evolution of distinct, cooperative individuals in these systems is called
coevolution. Coevolution extends the genetic analogy, by introducing the adaptive,
cooperative effects found in natural ecologies. Research in GBML systems has been
expanded by recent advances in reinforcement learning and neural network systems
(Cribbs & Smith, 1996; Smith & Cribbs, 1994; Smith & Cribbs, in press; Wilson, 1994;
Wilson, 1995), since the techniques in these areas are particularly applicable to coevolved
networks of individuals.
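The GBML notion of individuals as partial solutions can be made concrete with a small sketch. In classic learning classifier systems, each individual is an IF-THEN rule whose condition is a string over a ternary alphabet ('0', '1', and '#' for don't-care), matched against an environmental message. The rules and action names below are purely illustrative, not drawn from the proposed framework.

```java
// Matching a classifier-style ternary condition against a binary message.
public class RuleMatch {
    // A condition matches when every non-'#' position agrees with the message.
    static boolean matches(String condition, String message) {
        for (int i = 0; i < condition.length(); i++) {
            char c = condition.charAt(i);
            if (c != '#' && c != message.charAt(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Two coevolved partial solutions, each covering different situations.
        String[][] rules = { {"1#0", "turn-left"}, {"0##", "climb"} };
        String message = "100";
        for (String[] rule : rules)
            if (matches(rule[0], message))
                System.out.println("fires: " + rule[1]);
    }
}
```

Because '#' positions let distinct rules generalize over different regions of the input space, a population of such rules can cooperatively cover the problem, which is exactly the coevolutionary effect described above.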
Past Success with Coevolutionary Systems
An illustrative example of the authors' success with GBML systems is our past work on
learning novel combat maneuvers for fighter planes using simulation and machine
learning (Smith & Dike, 1995). These results come from a joint project conducted by
Boeing (formerly McDonnell Douglas), The University of Alabama, and NASA to
investigate the acquisition of rules for novel combat maneuvers for highly agile fighter
aircraft via genetics-based machine learning in dogfight simulations.
This project pursued a solution strategy for X-31 tactics optimization using GBML, in the
form of a learning classifier system (LCS). Each GA population member in this
coevolutionary system was an IF-THEN rule, which was used to respond to engagement
conditions and suggest a command action. One of these rules was selected for firing
every 1/10th of a second in a 30 second simulation in the Advanced Air-To-Air System
Performance Evaluation Model (AASPEM). The opponent aircraft was a modern fighter
controlled by the standard close-in combat tactics logic. An air combat engagement was
simulated, and the results were used as fitness values by the GA to produce the next
generation of rules. The process was repeated until the engagement scores exhibited no
further improvements for the X-31. The tactics from the best case in the run sequence
were extracted and processed for further analysis.
In this study, we have seen the successful emergence of coevolved sets of rules that
cooperatively perform complex, successful maneuvers. Many of these maneuvers
successfully employed the post-stall, highly agile capabilities of the X-31 in out-of-plane
maneuvers. One such maneuver is shown in Figure 1. In this figure the light gray plane
(primarily on the right) is the GBML-controlled X-31, and the darker plane (primarily on
the left) is an F-18 using standard close-in combat logic.
Figure 1: An example result from previous GBML work on discovering novel fighter combat
maneuvers. Note that the complex maneuver of the plane on the right involves a complex, coevolved
set of GBML-discovered rules.
It is important to note that in this maneuver, and in many of the maneuvers discovered
through GBML, the action of a single rule may, in and of itself, lead to the plane turning away
from its opponent. However, this rule may lead to a net advantage through the firing of
rules that directly cause effective targeting. Such complex, cooperative relationships are
the best illustration of the effectiveness of GBML systems.
Impact on agent-based systems
The key advantages of agent-based software paradigms are that agents can be:
• autonomous
• reactive and proactive, and
• socially interacting
Moreover, agents have the possibility of mobility in complex network environments,
putting software functions near the computational resources they require. Agents can also
explicitly exploit the availability of distributed, parallel computation facilities (Franklin
& Graesser, 1997; Wooldridge & Jennings, 1996).
However, many of these capabilities ultimately depend on the potential for agent
adaptation. True autonomy in agents requires their ability to adapt their reactive and
proactive capabilities to unforeseen conditions. Moreover, social interactions between
agents need to adapt and emerge as problem conditions change.
Motivation for melding agents and EC
As is illustrated in the fighter plane example, and in a number of other GBML systems,
complex, multi-component adaptive systems can emerge from EC processes. Moreover,
these EC processes implicitly exploit parallelism, while remaining trivial to explicitly
parallelize. Therefore, EC methods are one of the most natural machine learning
techniques to transfer general-purpose adaptive capabilities to agent-based systems.
Although there is a large body of extant work on the application of parallel and
distributed EC algorithms (Kapsalis, Smith & Rayward-Smith, 1994), these studies differ
substantially from the agent-based work suggested in this proposal. Specifically, past efforts typically:
• consist of a number of centralized GAs running on separate computational nodes
• are restricted to optimization, rather than coevolutionary GBML, and
• are often designed for particular parallel computer configurations.
There is no currently available, generalized, agent-based EC system. The work proposed
here seeks to develop and test such a system.
Design of the suggested framework
To design an agent-based EC system, one must turn the typical GA software design on its
head. In common GA software, a centralized GA program stores the GA individuals as
data structures, and imposes the genetic operations that generate successive populations
(see Figure 2).
Figure 2. Structure of a typical, centralized GA.
In natural genetics, the equivalent of the centralized GA does not exist. Instead, the
evolutionary effects emerge from individual agents. This makes designing an agent-based
EC system a matter of straightforward analogy (see Figure 3).
Figure 3. Structure of the proposed, EC-enabled agent system, with emergent GA effects.
However, in designing such a system, one must make sure to allow for extensibility and
broad utility. It is not the intention of the work suggested here to design a system that is
to be used in isolation from other agent-based components. Rather, the framework is to
be developed as a tool that can be used within broader agent-based, centralized, and
hybrid systems. By including EC agents in broader frameworks, the advantages of
genetic adaptation can be coupled with controlled agent behavior.
Structure of the suggested framework.
This section outlines a framework for EC-enabled agents. The proposed framework is
built in Java, for its machine independence, and on IBM's Aglets framework
(http://www.trl.ibm.co.jp/aglets/), for basic agent capabilities (e.g., agent messaging, agent
storage and recall, etc.). The Aglet framework also allows for agent mobility. Using
these systems' standards will allow for consistency with other software systems built on
the same standards.
The framework is intended to be an extension of the Aglet mobile agent framework that
allows Aglets to initialize themselves with objects that can be evolved via evolutionary
computation methods. The goal is to do this in a totally asynchronous manner, such that
no “GA” per se runs anywhere, and the GA-like effects are totally emergent.
The framework design also allows for diverse agents that can use a variety of genetic
representations and operators, while interacting in the same distributed computational
environment. Thus, collaborative applications are possible by using the standards of the
framework, without a priori organization of the collaborations.
Many of the data object names are suggested by natural analogy. This is not intended as a
case of “wishful mnemonics," but is intended to help organize the data structures.
The following is an outline of the basic data classes in the suggested framework:
GAglet (extends Aglet)
The GAglet is the genetically enabled version of the Aglet. The intention in this class is
to simply outline how an Aglet can be initialized with genetics-like initialization objects.
Since the framework envisions agents being able to perform more general functions, the
actual methods that implement and cause the genetic evolution are included in subclasses,
not directly in this class.
OnCreation Methods: These methods are a standard part of Aglets that activate after a
new Aglet is created. They can take a general object argument. The particular forms of
initialization objects used here are key to the GAglet extension. One form takes a GAglet
as an argument, using its structure for the agent's general form, and genetic data that it
constructs for its “child” for specific initialization. Another form takes a “raw” data
structure (phylum) as an argument. This phylum is a data object that specifies the general
form of the GAglet (i.e., its purpose, genetic information types, mating compatibility
with other GAglets, etc.). This is primarily meant to create “first generation” GAglets for a
given application.
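The two initialization forms above can be sketched as overloaded onCreation methods. The following is a self-contained illustration, not the framework itself: a plain-Java stand-in replaces the Aglet base class, and the Phylum and genotype types are reduced to minimal placeholders.

```java
// A standalone sketch of GAglet initialization via phylum or parent.
public class GAgletSketch {
    // Placeholder phylum: general form, gene types, mating compatibility.
    static class Phylum {
        final String name;
        Phylum(String name) { this.name = name; }
        String randomGenotype() { return "101"; }   // first-generation genetic data
        boolean compatibleWith(Phylum other) { return name.equals(other.name); }
    }

    // Stand-in for "extends Aglet"; real code would use the Aglets base class.
    static class GAglet {
        Phylum phylum;
        String genotype;

        // One form: a "raw" phylum argument, for first-generation GAglets.
        void onCreation(Phylum p) {
            phylum = p;
            genotype = p.randomGenotype();
        }

        // The other form: a parent GAglet supplies the general form, plus the
        // child genotype the parent has constructed.
        void onCreation(GAglet parent, String childGenotype) {
            phylum = parent.phylum;
            genotype = childGenotype;
        }
    }

    public static void main(String[] args) {
        Phylum p = new Phylum("demo");
        GAglet first = new GAglet();
        first.onCreation(p);                 // first generation
        GAglet child = new GAglet();
        child.onCreation(first, "111");      // child generation
        System.out.println(child.phylum.name + " " + child.genotype);
    }
}
```

The design choice mirrors the text: initialization objects carry the genetic structure, while the evolutionary behavior itself is deferred to subclasses.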
Egglet (extends GAglet)
An Egglet is a GAglet with all the methods added that find mates and (indirectly)
construct the child genotype. This implements the evolutionary functions, but does not
cause them to happen. These functions are not included in the GAglet class, since it will
not be desirable for all genetically initialized agents to perform GA functions. Therefore,
not all agents will want the overhead of carrying the added mating methods. An Egglet
sends and receives two types of messages in the evolutionary process:
CourtingRequest - These messages contain Plumage objects (see below). The Egglet
sends and receives these messages (largely at random) to build up a fitness-sorted list of
courters. Fitness is based on a comparison of the relative value of Plumage objects,
defined in the Egglet’s phylum. When the Courters list reaches a CourterThreshold size,
subsequent requests are refused, and the Egglet may begin sending MatingRequests.
When an Egglet accepts a courter request, it replies with its own Plumage object, which
the sending Egglet can add to its Courter list.
MatingRequest - These messages contain a Sperm object. The Egglet sends these
messages to Courters in fitness preferred order. If a MatingRequest message is received
(from a compatible Egglet), it is simply added to the Egglet’s list of MateSperm. When
an Egglet accepts a mate request, it replies with its own Sperm object, which the sending
Egglet can add to its MateSperm list.
An Egglet initiates this message sending and receiving process, and, when a sufficient
number of mates are found, creates an Egg object, “fertilizes” it with a MateSperm, and
uses the results to set this Egglet’s ChildGenotype object. If an insufficient number of
courters or mates is found, this will generate an exception that can be used to cause
Egglet death, or perhaps a change in a continued mating strategy.
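The courting half of this protocol can be sketched as follows. This is an illustrative reduction, with message passing replaced by direct method calls: Plumage objects carrying self-declared fitness are inserted into a fitness-sorted Courters list, and once the CourterThreshold is reached, further CourtingRequests are refused and MatingRequests would proceed in fitness-preferred order. The class names follow the proposal; the implementation details are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of the Egglet courting protocol with a fitness-sorted Courters list.
public class EggletSketch {
    static class Plumage {
        final String sender;
        final double selfDeclaredFitness;   // trusted in the default phylum
        Plumage(String sender, double f) {
            this.sender = sender;
            this.selfDeclaredFitness = f;
        }
    }

    final int courterThreshold;
    final List<Plumage> courters = new ArrayList<>();

    EggletSketch(int courterThreshold) { this.courterThreshold = courterThreshold; }

    // Handle a CourtingRequest: accept (and sort in) or refuse when full.
    boolean courtingRequest(Plumage p) {
        if (courters.size() >= courterThreshold) return false;  // list is full
        int i = 0;                       // keep the list sorted by declared fitness
        while (i < courters.size()
               && courters.get(i).selfDeclaredFitness > p.selfDeclaredFitness) i++;
        courters.add(i, p);
        return true;
    }

    // MatingRequests would then be sent to courters in this order.
    List<String> matingOrder() {
        List<String> order = new ArrayList<>();
        for (Plumage p : courters) order.add(p.sender);
        return order;
    }

    public static void main(String[] args) {
        EggletSketch egglet = new EggletSketch(2);
        egglet.courtingRequest(new Plumage("a", 0.3));
        egglet.courtingRequest(new Plumage("b", 0.9));
        boolean refused = !egglet.courtingRequest(new Plumage("c", 0.5));
        System.out.println(egglet.matingOrder() + " refused c: " + refused);
    }
}
```

In the full framework the same logic would run over asynchronous Aglet messages, which is what yields the emergent, GA-like selection effect without any central GA.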
ActiveEgglet (extends Egglet)
This object causes the evolutionary action to occur, in a fashion that will yield the
emergent effects of a traditional GA (using tournament selection), albeit asynchronously.
Simply stated, an ActiveEgglet comes into existence, and through the actions of its
“run” method, attempts to create a pre-specified number of children, and (whether
successful or not) dies.
Genotype
Storage class of organizations (e.g., strings) of Genes. This class defines the methods for
reproduction, but only in the sense that it creates Egg objects, where the real overhead of
the recombination methods is added. Since one or more Genotype objects are carried by
every GAglet, this prevents the instantiation of the recombination methods until they are
needed to create a child agent.
Gene
The atomic genetic unit. This object (and its subclasses) defines a mutation method for
the particular type of gene.
BooleanGene (extends Gene)
The typical binary valued gene used in most GAs. Defines constructors for creating
random genes, and the standard mutation function for binary genes.
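The Gene/BooleanGene relationship can be sketched as a small class pair: the abstract class fixes the mutation interface, and the binary subclass supplies a random-gene constructor plus the standard bit-flip mutation. The method names here are illustrative, not the framework's actual signatures.

```java
import java.util.Random;

// A sketch of the abstract Gene and its binary BooleanGene subclass.
public abstract class GeneSketch {
    // Each gene type defines its own mutation; rate is the per-gene probability.
    abstract GeneSketch mutated(double rate, Random rng);

    static class BooleanGene extends GeneSketch {
        final boolean value;
        BooleanGene(boolean value) { this.value = value; }

        // Constructor for creating random genes.
        static BooleanGene random(Random rng) {
            return new BooleanGene(rng.nextBoolean());
        }

        // Standard mutation for binary genes: an occasional bit flip.
        @Override
        GeneSketch mutated(double rate, Random rng) {
            return rng.nextDouble() < rate ? new BooleanGene(!value) : this;
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        BooleanGene g = BooleanGene.random(rng);
        BooleanGene flipped = (BooleanGene) g.mutated(1.0, rng);  // rate 1.0 always flips
        System.out.println(g.value + " -> " + flipped.value);
    }
}
```

Subclassing Gene in this style is all that is needed to introduce, say, real-valued or symbolic genes with their own mutation semantics.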
GAgletPhylum
An initializing template for GAglets. A user forms a new type of GAglet by subclassing
this class. The phylum defines the type of Genes, Genotype, Gamete, and Plumage for
the GAglet. It also defines mating compatibility functions. By default, a GAglet can only
mate with other GAglets that use the same GAgletPhylum.
A phylum also defines the manner in which a Plumage object is created for the GAglet,
and how other GAglets' Plumage objects should be compared (see below).
Gamete
A copy of the data in a Genotype structure, but with none of the methods defined, for
compact transmission between agents.
Sperm
A wrapper for the Gamete object to be sent as a message.
Egg
Another wrapper for the Gamete object, but one that contains all the methods for
combining Sperm and Egg into a new Genotype object.
Plumage
An advertisement for mating. It contains various type and identity information for the
GAglet, as well as a SelfDeclaredFitness field. In the default phylum, GAglets trust one
another's self-declared fitness values as a way of evaluating plumages. In more complex
applications, agents should evaluate fitness locally, based on the Plumage objects of potential
mates.
Altering the Suggested Framework:
The suggested framework is designed for easy extensibility. For instance:
• To implement a new agent type with genetic initialization capabilities, one simply
subclasses GAgletPhylum, and creates new GAglets in that Phylum.
• To change gene representations or mutation operators, one subclasses Gene.
• To change genotype representations or recombination operators, one subclasses a
combination of Genotype, Gamete, Egg, and Sperm, depending on the nature of
the change.
• To change methods of evaluating other individuals, one subclasses Plumage and
overrides the Plumage-comparison methods defined in the phylum.
• To change mating compatibility between various Phyla of GAglets, one changes
the compatibility functions defined in the GAgletPhylum subclasses.
• To create a different (non-genetic) function, one subclasses GAglet.
The change of genetic functions in GAglet subclasses is particularly important. By
creating GAglets with basic rule functions, one can easily construct an asynchronous,
agent-based GBML system within this framework.
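The subclassing pattern behind these extension points can be illustrated briefly. The sketch below, using minimal stand-ins for GAgletPhylum, shows how a user-defined phylum overrides the default mating-compatibility test; the subclass name and its relaxed policy are hypothetical examples, not part of the framework.

```java
// A sketch of extending the phylum to change mating compatibility.
public class PhylumSketch {
    static class GAgletPhylum {
        // Default: GAglets mate only within the same phylum class.
        boolean compatibleWith(GAgletPhylum other) {
            return getClass().equals(other.getClass());
        }
    }

    // A user-defined phylum that accepts mates from any phylum.
    static class PromiscuousPhylum extends GAgletPhylum {
        @Override
        boolean compatibleWith(GAgletPhylum other) {
            return true;    // relaxed, user-defined compatibility policy
        }
    }

    public static void main(String[] args) {
        GAgletPhylum base = new GAgletPhylum();
        GAgletPhylum open = new PromiscuousPhylum();
        System.out.println(base.compatibleWith(open));  // default test: classes differ
        System.out.println(open.compatibleWith(base));  // overridden test: accepts
    }
}
```

Note that compatibility is evaluated by each agent's own phylum, so asymmetric mating policies, as in this example, are possible by design.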
Further EC experimentation in the proposed framework
In addition to applications of the proposed framework, a variety of scientifically
interesting EC experiments are possible. These include experimentation with emergent
speciation, self-adapting reproduction strategies, etc.
The proposed work is a cooperative effort between Boeing, Dr. Robert E. Smith, and The
Intelligent Computing Systems Centre at the University of the West of England. Boeing will provide
support for a test of the proposed EC-agents framework on a suitable command and
control application. Dr. Smith will perform the fundamental research and development of
the proposed framework. Dr. Smith is currently on sabbatical, at The Intelligent
Computing Systems Centre (ICSC) of The University of The West of England. If the
proposed research is funded, Dr. Smith will extend his stay at the ICSC, to perform the
research with the support of the unique research group and facilities available there.
Robert E. Smith, Ph.D., is an Associate Professor of Aerospace Engineering and
Mechanics at the University of Alabama. He received his B.S. in Mechanical
Engineering in 1985, his M.S. in Engineering Mechanics in 1988, and his Ph.D. in
Engineering Mechanics in 1991. Dr. Smith is an active researcher in the application of
adaptive and intelligent systems to engineering problems. His chief interests are genetic
algorithms for machine learning.
The Intelligent Computing Systems Centre (http://www.ics.uwe.ac.uk/) is an
established research facility that is uniquely suited to the proposed work. The ICSC has
significant thrusts in EC and agent-based systems. Pertinent publications and research
projects from current ICSC personnel include:
• TRENDS - A European Community funded project demonstrating the use of
distributed objects and the Internet to deliver real time road traffic information
for the city of Gothenburg to users' web browsers.
• Artemis - A project to develop intelligent databases for travel reservation systems,
which is supported by the DTI and is a collaboration with Thomson Tour
Operations Ltd, Novus Systems Technologies Ltd and Parsys Ltd.
• FOLLOWME - A European Community funded project on personal mobile agent
• OpenLabs - (Telematics in Clinical Laboratories), a project to develop an open
architecture for clinical laboratory information systems, which also applies neural
network techniques to medical decision making. This is supported by the CEC
Telematics Advanced Informatics in Medicine programme and is a collaboration
with 28 European partners.
• MOTOS - (Management of Traffic using Open Systems), a project to develop a real-
time distributed database for traffic information systems, which is supported by
SERC/DTI and in collaboration with Data Sciences Ltd, Architecture Projects
Management, Steer Davies Gleave, University College London and Kent County
Council.
• IDIOMS - (Intelligent Decision making In On-line Management Systems), is
supported by a grant from SERC/DTI, and carried out in collaboration with TSB
plc, Strand Software Technologies Ltd and the University of Sheffield.
Bull L (1997), "Evolutionary Computing in Multi-agent Environments: Partners", in T
Baeck (ed) Proceedings of the Seventh International Conference on Genetic Algorithms,
Morgan Kaufmann, pp. 370-377.
Bull L & Fogarty T C (1994), "Parallel Evolution of Communicating Classifier
Systems", in Proceedings of the 1994 IEEE Conference on Evolutionary Computing,
Bull L & Fogarty T C (1996), "Evolutionary Computing in Cooperative Multi-agent
Systems", in Proceedings of the 1996 AAAI Symposium on Adaptation, Coevolution and
Learning in Multi-agent Systems, AAAI.
Fogarty T C & Bull L (1995), "Optimising Individual Control Rules and Multiple
Communicating Rule-based Control Systems with Parallel Distributed Genetic
Algorithms", in IEE Journal of Control Theory and Applications, Vol 142, No 3:
Gammack, J., Fogarty, T., Battle, S. and Ireson, N. "Human-Centred Decision Support:
The IDIOMS System", AI & Society, 6(1):345-366, 1992.
Smith, J.E. & Fogarty, T.C. (1996a) "Self Adaptation of Mutation Rates in a Steady
State Genetic Algorithm". pp. 318-323. Proceedings of IEEE International Conference on
Evolutionary Computing 1996. IEEE Press.
Smith, J.E. & Fogarty, T.C. (1996b) "Recombination Strategy Adaptation via Evolution of
Gene Linkage". pp. 826-831. Proceedings of IEEE International Conference on
Evolutionary Computing 1996.
Smith, J.E. & Fogarty, T.C. (1996c) "Adaptively Parameterised Evolutionary Systems:
Self Adaptive Recombination and Mutation in a Steady State Genetic Algorithm". pp.
441-450. "Parallel Problem Solving from Nature IV", eds. Voigt, Ebeling, Rechenberg and
Schwefel, Springer Verlag.
At its conclusion, the project will provide the following deliverables:
• Source code for the complete EC agents framework, with complete documentation
for its use and extension.
• Fully documented results from a preliminary test of the framework operating as an
asynchronous GA for simple optimization, as a white paper.
• Fully documented results from a preliminary test of the framework operating as an
asynchronous GBML system, as a white paper.
• Fully documented results from a preliminary test of the framework with collaborative
interactions between distributed users.
• Fully documented results from an application to a command and control problem
specified by Boeing, as a white paper.
Aglets Workbench. http://www.trl.ibm.co.jp/aglets/
Cribbs, H. B. and Smith, R. E. (1996). Classifier system renaissance: New analogies,
new directions. In Koza, J., Goldberg, D., Fogel, D., and Riolo, R. (eds.) Proceedings of
the First Genetic Programming Conference. 547-551. MIT Press
Franklin and Graesser (1997). Is it an agent, or just a program? : A taxonomy for
autonomous agents. In Proceedings of the Third International Workshop on Agent
Theories, Architectures, and Languages. Springer-Verlag. pp 21-35.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine
learning. Addison-Wesley. Reading, MA
Holland, J. H. (1975). Adaptation in natural and artificial systems. The University of
Michigan Press. Ann Arbor, MI.
Kapsalis, A., Smith, G. D. and Rayward-Smith, V. J. (1994). A unified paradigm for
parallel genetic algorithms. In T. Fogarty (ed.) Evolutionary Computing AISB Workshop.
Smith, R. E. and Cribbs, H. B. (1994). Is a classifier system a type of neural network?
Evolutionary Computation, 2(1), 19-36.
Smith R. E. and Dike, B. A. (1995). Learning novel fighter combat maneuver rules via
genetic algorithms. International Journal of Expert Systems. 8 (3). 247-276.
Smith, R. E. and Cribbs, H. B. (in press). Combined biological paradigms: a neural,
genetics-based autonomous systems strategy. The Journal of Robotics and Autonomous
Systems.
Wilson, S. W. (1994). ZCS: A zeroth level classifier system. Evolutionary Computation,
Wilson, S. W. (1995). Classifier fitness based on accuracy. Evolutionary Computation,
Wooldridge and Jennings (1996). Software agents. IEE Review. January 1996. 17-20.