A Survey of the Artificial Intelligence and Autonomous Capabilities of Space Probes
       With an Emphasis on Autonomous Planner and Scheduler Functions

                                 Filip Jagodzinski

                          Advisor: Dr. Frank Klassner

                              Completed: Fall, 2002




      This Independent Study final report is submitted in partial fulfillment
                     of the requirements for the degree of


                            M.S. in Computer Science


                        Department of Computing Sciences
                              Villanova University
                            Villanova, Pennsylvania
                                     U.S.A.
Abstract

Artificial intelligence (AI) technologies are increasingly being integrated into space
exploration missions as part of a drive to produce autonomous agents. These
technologies span a variety of topic areas, including machine learning algorithms,
knowledge discovery tools, and data analysis systems. The various AI programs and
systems used on board a spacecraft can help reduce the overall cost of a mission and
increase the amount of useful, high-quality data that is gathered. An autonomous
planner/scheduler, for example, circumvents the need for continued, and often lengthy,
uplink and downlink communications between mission personnel on Earth and the
spacecraft. An autonomous planner does not wait for command instructions from Earth;
instead, it generates and integrates effective command sequences into the mission
timeline in response to malfunctions or unforeseen science opportunities. Without the
overhead of communicating with Earth, a spacecraft that uses an autonomous planner
can quickly recover from malfunctions and can record short-lived anomalous events.
The use of a variety of autonomous systems on board a spacecraft is
not only useful, but in the future may be necessary. If a spacecraft is expected to travel
far enough away from Earth where communication is not feasible, or if the spacecraft is
expected to lose contact with Earth for some period of time, then autonomous systems
can monitor and reactively respond to any changes. If an autonomous agent approaches
an unstable object—for example a comet—then the autonomous planner and image
analysis tools can quickly respond to dangerous events and prevent damage to the
spacecraft.

    This survey report explores the issues and potential solutions for supporting increased
autonomous behavior in space probes. The report begins with an overview that includes a
discussion on the integration of different subsystems and a discussion about the main
features and components of an autonomous agent. The second section of this report
explains several planned missions with autonomous features and also explains a few
programs that use autonomous technologies. Included is information on an image
analysis tool, the Multi-Rover Integrated Science Understanding System, the Distant
Autonomous Recognizer of Events (DARE) System, the Autonomous Small Planet in
Situ Reaction to Events (ASPIRE) mission, the Techsat-21 Autonomous Science Craft
Constellation Demonstration mission, and the Three Corner Sat Mission (3CS). The
second section concludes with a discussion about the challenges and design opportunities
of autonomous agents. Machine learning concepts are explained next, as are neural
networks, a technique that underlies many machine learning algorithms. The fourth part of this report takes a
closer look at one component of autonomous agents, the planner. Batch planners,
dynamic planners, planner logic, planner languages and hierarchical task network
planners are all explained. The Automated Scheduling and Planning Environment
(ASPEN) as well as the Simple Hierarchical Ordered Planner (SHOP) are discussed in
detail. In the final section of this report, Future Work, possibilities for the continuation of
this survey are explored and possible thesis projects are proposed.




Table Of Contents
      Abstract.....................................................................................................................................................2
      Table Of Contents.....................................................................................................................................3
      Acknowledgements...................................................................................................................................4
1. Introduction ..................................................................................................................................................5
2. Autonomous Mission Designs and Different Autonomous Technologies...................................................8
   2.1 Texture Analysis for Mars Rover Images...............................................................................................8
   2.2 The Multi-Rover Integrated Science Understanding System (MISUS).................................................9
   2.3 The Distant Autonomous Recognizer of Events (DARE) System.........................................................9
   2.4 The Autonomous Small Planet In Situ Reaction to Events (ASPIRE) Project....................................11
   2.5 The Techsat-21 Autonomous Science Craft Constellation Demonstration..........................................13
   2.6 The Three Corner Sat Mission..............................................................................................................15
   2.7 Challenges in the Design of Autonomous Agents and Mission Planners............................................16
3. Support Technology for Autonomous Agents............................................................................................18
   3.1 Machine Learning.................................................................................................................................18
   3.2 Neural Networks...................................................................................................................................20
4. Autonomous Agent Planners......................................................................................................................21
   4.1 Coordination of Subsystems.................................................................................................................21
   4.2 System Languages and Planner Logic..................................................................................................22
   4.3 Batch Planners......................................................................................................................................23
   4.4 Dynamic Planners.................................................................................................................................24
   4.5 Hierarchical Task Network Planners....................................................................................................25
   4.6 The Automated Scheduling and Planning Environment (ASPEN)......................................................26
   4.7 The Simple Hierarchical Ordered Planner (SHOP)..............................................................................29
5. Future Work................................................................................................................................................32
6. References...................................................................................................................................................33




Acknowledgements

I wish to thank Dr. Frank Klassner for his support and continued suggestions throughout
the course of this independent study project. I also wish to thank Dr. Don Goelman and
Dr. Lillian Cassel for directing me to invaluable reference sources on data mining and
information technology.




1. Introduction

On July 4, 1997, the Mars Pathfinder's Sojourner Rover rolled down the ramp of the
Lander spacecraft and began a 90-day survey mission of the Martian surface. The trip
from Earth to Mars had lasted approximately seven months, during which time data was
periodically uplinked to and downlinked from the Pathfinder probe via NASA’s Deep
Space Network’s 34-meter antenna at Goldstone, California [23]. A UHF antenna on
board the Sojourner Rover was used to communicate with mission control on Earth; an
Earth-based operator controlled the 6-wheeled Sojourner Rover. The roundtrip time for
the relay of communications between Earth and Mars is approximately 20 minutes, but
control commands for Sojourner were sent approximately every 12 hours because the
process of generating a new command sequence was lengthy. As downlink data was
received, engineers determined the state of the rover, noting the state of various system
components. The rover data was then given to scientists who produced a high-level
science plan, which then was given to engineers who generated a low-level command
sequence that was returned to the Pathfinder Rover for execution. Although Sojourner
did have a few autonomous capabilities that were tested a few times, Sojourner never
encountered events that it had to respond to immediately and it did not have the ability to
generate a plan for all of its subsystems. Sojourner was able to drive around objects that
were in its way and could plan a path between two points, but it was not able to operate
continually in an autonomous mode. Future missions to Mars, other
planets, and to regions beyond Pluto will require that spacecraft be able to function
without human intervention for long periods of time and perform several autonomous
functions—this is one motivation for the integration of autonomous technologies into
space missions.

    If many components of a space mission could be automated, then mission costs
could be decreased and the amount of necessary communication time between Earth
and a spacecraft could be greatly reduced. If a spacecraft can generate valid command
sequences for itself in response to sensor inputs, can manage its own onboard resources,
can monitor the condition of different spacecraft components, and can intelligently gather
and analyze data, then more time could be spent on exploration and less time on waiting
for new command sequences from Earth.

    An autonomous spacecraft is one that performs its own science planning and
scheduling, translates schedules into executable sequences, verifies that the produced
executable sequences will not damage the spacecraft and executes the derived sequences
without human intervention. Because the goals of each mission are unique, there does
not exist a “perfect” recipe for constructing an autonomous agent. Different missions
require various levels of autonomy, and depending on the sensor types and overall
architecture of a spacecraft, a successful autonomous agent architecture and set of
algorithms for one mission might not be effective when applied to another mission with
different objectives and a different developmental history. Several common topics that
reappear in different autonomous agent designs can, however, provide a general roadmap
for the design of a successful autonomous agent.




Regardless of whether an autonomous agent is required to exhibit autonomous
behavior for 5 seconds or for 100 years, several issues relevant to successful continued
functioning of the spacecraft must be addressed. Resource constraints and hard deadlines
require that resources be managed effectively. Non-renewable solid energy fuels must be
used sparingly, and alternative, renewable energy sources and methods to acquire that
energy should be used. For example, solar power can be used as a source of energy, and
a calculated close flyby of a planet can produce a “slingshot” effect in which the
spacecraft is accelerated by the transfer of orbital momentum from the planet to the
spacecraft. Resources must also be managed with respect to the number of sensors used;
although additional sensors can increase the validity and precision of data, each new
sensor requires more energy. Effective management of a spacecraft's resources also
includes the effective maintenance of concurrent activities. A planner/scheduler must be
able to schedule concurrent activities in different parts of the spacecraft while guarding
against the possibility of communication overload or an energy shortage.
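
    To make the resource-management discussion concrete, the short sketch below checks
whether a candidate schedule ever demands more concurrent power than the spacecraft bus
can supply. It is only an illustration of the kind of feasibility test a planner/scheduler
might run; the activity names, the numbers, and the Python representation are hypothetical
and are not drawn from any mission discussed in this report.

    from dataclasses import dataclass

    @dataclass
    class Activity:
        name: str
        start: int         # start time, in seconds
        end: int           # end time, in seconds
        power_draw: float  # watts consumed while the activity is running

    def peak_power(activities, horizon):
        """Worst-case concurrent power draw at any second of the horizon."""
        return max(
            sum(a.power_draw for a in activities if a.start <= t < a.end)
            for t in range(horizon))

    BUS_CAPACITY_W = 300.0  # hypothetical available bus power

    schedule = [
        Activity("camera_imaging",    start=10, end=40, power_draw=120.0),
        Activity("radio_downlink",    start=30, end=60, power_draw=150.0),
        Activity("spectrometer_scan", start=35, end=50, power_draw=80.0),
    ]

    if peak_power(schedule, horizon=100) > BUS_CAPACITY_W:
        print("schedule infeasible: concurrent activities exceed bus power")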

    The architecture of an autonomous agent must facilitate effective communication,
scheduling and schedule execution functions. An effective data bus and/or
communication system must allow easy access to mission goals and the mission
databases, including both hard-coded data and data that has been acquired during the
mission. The planner and scheduler require a host of communication capabilities to
transmit goals and spacecraft status parameters; the derived schedule may be broken into
various segments depending on the design of the planner. Schedule segments are sent
from the planner to the executive module that executes the various schedule components.
Messages are repeatedly passed between the various components of a spacecraft
operating system, and the hardware and software architecture of the spacecraft must be
designed so that spacecraft communications systems can function at an optimal level.

    The function of the planner/scheduler of an autonomous agent is to generate a set of
high-level commands that will help achieve mission goals. Planning and scheduling
aspects must be tightly integrated, and any planned activities must be coordinated with
the availability of global resources. Different subsystems of a spacecraft can also have
their own subroutines, so a planner must be able to integrate different spacecraft
functions and be aware of the different energy requirements and working parameters
of different spacecraft components.

    The hybrid executive performs process and dependency synchronization and
maintains hardware configurations in an effort to manage system resources. Because an
autonomous agent can potentially encounter problems, whether due to incorrect
functioning of the spacecraft or to its introduction into an environment for which the
agent is not suited, the agent must be able to enter a stable, “safe” state. Different agents use
various forms and styles of a host of available abstract languages as a means of
communication between different system components. Regardless of what language is
used, the hybrid executive must be able to process different spacecraft modes using the
available abstract language. In the case when an agent enters a “safe” mode in response
to an error, the executive must be able to process the directives that are provided by the
planner. The ability of the executive to understand abstract language terms and a host of
plan initiatives allows for the continued execution of a plan under a variety of execution
environments.

    An autonomous agent should also be able to identify the current operating state of
each component of the spacecraft—this allows for an executive to reason about the state
of the spacecraft in terms of component modes rather than in terms of low-level sensor
values. This approach makes possible mode confirmation, anomaly detection, fault
isolation and diagnosis, and token tracking. Mode confirmation provides a confirmation that a
command has been completed successfully, while anomaly detection is simply the
identification of inconsistent subsystems. In response to anomaly detection, the
executive should also be able to isolate and diagnose faulty components and monitor the
execution of the commands that have been produced by the planner. In an effort to
perform all the above functions, the agent uses algorithms that describe and monitor the
current state of the spacecraft. The mode of each component part of the spacecraft can be
modeled using propositional logic; the resulting component representations can be used by
the planner and by the hybrid executive modules.
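
    As a toy illustration of the propositional approach just described, the sketch below
represents commanded and inferred component modes as simple assertions and derives mode
confirmation and anomaly detection from them. The components, modes, and consistency
rules are invented for illustration and do not come from any of the cited systems.

    component_mode = {      # modes inferred elsewhere from low-level sensor values
        "camera": "imaging",
        "radio": "idle",
        "thruster": "firing",
    }

    commanded_mode = {      # modes the planner most recently commanded
        "camera": "imaging",
        "radio": "idle",
        "thruster": "idle",
    }

    # Consistency rules, read as: if the first component is in the first mode,
    # then the second component must be in the second mode.
    rules = [
        (("thruster", "firing"), ("camera", "standby")),  # no imaging during burns
    ]

    def mode_confirmation():
        """Confirm that each commanded mode was actually reached."""
        return {c: component_mode[c] == m for c, m in commanded_mode.items()}

    def anomaly_detection():
        """Flag component pairs whose modes violate a consistency rule."""
        return [(c1, c2) for (c1, m1), (c2, m2) in rules
                if component_mode[c1] == m1 and component_mode[c2] != m2]

    print(mode_confirmation())  # thruster: False, so the command did not complete
    print(anomaly_detection())  # [('thruster', 'camera')]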

    The way that knowledge is represented can have a great effect on the proper
continued working status of an autonomous agent. As an example, in the case of
heterogeneous knowledge representation, which is defined as “a non-uniform method
used in loosely-coupled architectures for storing specialty knowledge not for general use
[2]”, coverage and mode identification is efficient; at any one time there may be several
representations of the same spacecraft component. This heterogeneous knowledge
representation provides multiple views of the condition of various components of a
spacecraft and allows various component representations to be combined to form a
thorough, complete picture of the state of a system. As a result, task and
component specialization is possible, but likewise model representations can diverge and
leave an autonomous system with “too many” views of the state of the autonomous agent.
The knowledge representation language must therefore be concise, yet simple enough so
that quick calculations and necessary modifications to different spacecraft systems can be
made.

    Various components and technologies can be combined to make an “autonomous
agent”, including machine learning algorithms, artificial neural networks and expert
intelligence systems. The effective coordinated functioning of the different autonomous
components of an autonomous agent is dependent on many factors, including, among
others, the management of resources, an overall system architecture, efficient scheduling
and planning techniques, and the ability to handle errors. Additionally, any one mission
that requires the use of autonomous agents may need to employ a host of data mining,
knowledge discovery and pattern matching capabilities, as well as be able to classify,
cluster, and effectively organize sensor data. Different missions require the use of
different autonomous technologies, so each autonomous component of a spacecraft must
not only perform its job but fit well within an overall autonomous architecture.




2. Autonomous Mission Designs and Different Autonomous Technologies

Included in this section are several short reviews of a variety of space missions that
employ autonomous agent technologies. In some cases, only certain aspects of an entire
system exhibit autonomous behavior, while other spacecraft are designed to be fully
autonomous. Included in this section are examples of an autonomous image recognition
algorithm, autonomous agent architecture designs, event recognition and classification
agents, autonomous systems that employ reactive planners, and multi-rover/multi-
spacecraft missions.

    The variety of goals, design features and methodologies inherent in each of these
missions is a good indicator of how “autonomous” technology can manifest itself in
many forms depending on mission objectives. Several concepts are consistent across
many different autonomous projects. There is no one “perfect” recipe for constructing an
autonomous agent, but instead different autonomous agent technologies are combined so
that mission goals can be achieved.

2.1 Texture Analysis for Mars Rover Images

Texture analysis through the use of image pixel clustering and image classification is one
example of an autonomous functionality—made possible by machine learning—that can
be part of a larger autonomous system. The goal of image analysis is to enable an agent
to intelligently select data targets; this is an important goal when considering that the
amount of data that is acquired by a system tends to far exceed the storage capacity or the
transmission capacity of an agent while on an exploration mission [7]. Different
programs use different algorithms for the purpose of image analysis. Discussed here is
the technique of analyzing textures as a means of image and object classification.

     One way to extract texture information from an image is through the application of
filters and through the use of different mathematical functions; image data, for example,
can be translated into the frequency domain through the use of the Fourier Transform.
Several filters have been popular in this endeavor, including the two-dimensional Gabor
filters first introduced to vision research by J. G. Daugman [8]. Different cells in the
visual cortex can be modeled through the use of two-dimensional Gabor functions, which
allows different filters to discriminate between various visual inputs in the same way
that different cells in the
visual cortex are responsive to different stimuli. A bank of filters can be stored in a
database so that a host of anticipated textures can be analyzed and classified. Mission
resources can dictate which filters should be supplied, and different filters can be rejected
or accepted depending on the amount of computational power that is required to pass a
filter across an image. Additionally, various filters can be applied in tandem or in series
so that a complete mathematical representation of an image can be calculated. Not only
can different filters be applied, but also different filters have different parameters, many
of which can be adjusted, ignored or otherwise altered in an attempt to better suit a filter
to the specific data domain.

    Once the set of filters is run across an image, clustering tools can be used to classify
the pixels of the image into one of several categories. Statistical analysis can then be
performed to ensure that classification schemes have been properly developed and that
the differences between separate classifications are significant enough to ensure proper
separation of the pixel data into unique categories [8].
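
    The following sketch outlines the filter-bank-and-clustering pipeline described
above: a small bank of two-dimensional Gabor kernels is applied to an image, and a plain
k-means procedure groups the per-pixel filter responses into texture classes. It is a
minimal illustration written against NumPy alone; the kernel parameters, the stand-in
image, and the number of clusters are arbitrary choices rather than values from the
cited work.

    import numpy as np

    def gabor_kernel(freq, theta, size=15, sigma=3.0):
        """A 2-D Gabor kernel: a sinusoid at the given frequency and
        orientation, modulated by a Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * freq * xr)

    def filter_responses(image, kernels):
        """Convolve the image with each kernel (via FFT) and stack the
        magnitudes so that every pixel gets one feature vector."""
        responses = []
        for k in kernels:
            pad = np.zeros_like(image)
            pad[:k.shape[0], :k.shape[1]] = k
            responses.append(np.abs(np.fft.ifft2(np.fft.fft2(image) *
                                                 np.fft.fft2(pad))))
        return np.stack(responses, axis=-1)

    def kmeans(features, k=3, iters=20, seed=0):
        """Plain k-means over the per-pixel feature vectors."""
        rng = np.random.default_rng(seed)
        pts = features.reshape(-1, features.shape[-1])
        centers = pts[rng.choice(len(pts), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.stack([pts[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels.reshape(features.shape[:2])

    image = np.random.rand(64, 64)  # stand-in for a rover image
    bank = [gabor_kernel(f, t) for f in (0.1, 0.2)
            for t in (0, np.pi / 4, np.pi / 2)]
    segments = kmeans(filter_responses(image, bank))  # per-pixel texture class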

2.2 The Multi-Rover Integrated Science Understanding System (MISUS)

The Jet Propulsion Laboratory (JPL) Machine Learning Group is developing the
MISUS system. The system is ideally suited for Mars exploration through the use of
multiple autonomous agents working together to achieve science learning goals [1].
The MISUS system employs autonomous generation of science goals, uses learning
clustering methods, and attempts to use algorithms capable of hypothesis formulation,
testing and refinement.

    The autonomous aspect of the MISUS system selects and maps out at appropriate
scales those areas of the surface most likely to return information with maximum science
value. The MISUS system is based on a framework capable of autonomously generating
and achieving planetary science goals. Machine learning and planning techniques are
used, including the use of a simulation environment where different terrains and
scenarios are modeled. The system learns by clustering data that it attains through
spectral mineralogical analysis of the environment [1]. The clustering method attempts
to find similarities among various samples and builds similarity classes. A model of the
rock type distribution is created and continually updated as more robots attain more and
more sensor data of the different terrain specimens. To attain both a large number of
samples and a uniform collection pattern of specimens, the planning and scheduling
component of the MISUS system determines and monitors the activities of the various
robots.

    Within the framework of the MISUS system there exists an overall goal of “science
data acquisition”. In particular, the MISUS system developers aim to “test the AI
algorithms on the resulting mineral distribution and reconstruct sub-surface mineral
stratigraphy in candidate areas to determine possible extent (temporally and spatially) of
hydrothermal activity, while considering the implications for Mars biology and the
usefulness of mineral deposits [1]”. Requirements to meet the science acquisition goal
are continually updated in response to the activities of the individual robots. An iterative
repair algorithm continually refines goals and activity schedules, while a statistical
modeling algorithm that employs stochastic parameterized grammar is used to formulate,
test, and then either refine or discard a hypothesis.
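
    The skeleton below conveys the flavor of iterative repair scheduling: rather than
replanning from scratch, the scheduler repeatedly selects one conflict in the current
schedule and applies a local repair until no conflicts remain or an iteration limit is
reached. The conflict type (two rovers at the same site at overlapping times) and the
repair move (delaying one activity) are invented for illustration and are not the MISUS
algorithms themselves.

    import random

    def find_conflicts(schedule):
        """Return pairs of activities that overlap in time at the same site."""
        conflicts = []
        for i, a in enumerate(schedule):
            for b in schedule[i + 1:]:
                if (a["site"] == b["site"] and
                        a["start"] < b["end"] and b["start"] < a["end"]):
                    conflicts.append((a, b))
        return conflicts

    def repair(schedule, max_iters=100, shift=10):
        for _ in range(max_iters):
            conflicts = find_conflicts(schedule)
            if not conflicts:
                return schedule            # conflict-free schedule found
            a, b = random.choice(conflicts)
            mover = random.choice((a, b))  # local repair: delay one activity
            mover["start"] += shift
            mover["end"] += shift
        return schedule                    # best effort after max_iters

    schedule = [
        {"rover": "r1", "site": "outcrop_3", "start": 0,  "end": 30},
        {"rover": "r2", "site": "outcrop_3", "start": 20, "end": 50},
    ]
    print(repair(schedule))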

2.3 The Distant Autonomous Recognizer of Events (DARE) System

The goal of the DARE system is to detect a new, potentially dangerous, event sufficiently
before encounter so that there is enough time to autonomously perform mission re-
planning. Autonomous agents that come within close proximity of comets or otherwise
maneuver in dangerous landscapes are thought to benefit most from this type of
technology. The DARE system performs autonomous mission redirection calculations in
response to specific sensor events. Image segmentation, clustering methods and a host of
other techniques are employed within a 6-component architecture [22].


The motivation to design a re-planning autonomous agent stems from the spacecraft
Galileo’s 1993 discovery of Dactyl, a satellite of the asteroid Ida. At the time it was
believed that such an asteroid-satellite pair was an anomaly, but in 1998 another such
pair was encountered. Because the spacecraft Galileo did not possess the autonomous
probe capabilities that are widely in use today or that are currently in development
and deployment, mission re-planning was not possible.

    In an effort to locate and identify anomalous events, image segmentation and
clustering techniques are used. The tracking of regions through a sequence of images
allows for noise to be effectively removed. The tracking of regions takes into account a
variety of scenarios, including tracking from a stationary or moving platform, tracking a
moving or stationary body, tracking “noisy” or “clean” events, tracking against stars or
tracking from images alone [9]. The agent platform can be assumed to be stationary if it
is moving slowly relative to the image sampling rate, but in the cases where the sampling
rate is low or the movement of the agent spacecraft in reference to an event is sporadic,
the projection in the image plane can be approximated by a straight line with near
constant velocity. If the sampled object moves significantly from frame to frame and the
sampling rate is low, then tracking of the object must be done in three dimensions. To
track an object in universal coordinates, lines of sight are drawn from the imaging
device through the detected object in the image plane; when lines through regions in
various images intersect at approximately the same point, the intersection regions
represent an object.
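
    The line-of-sight intersection can be sketched as a small least-squares problem:
each image contributes a three-dimensional ray (imaging-device position plus viewing
direction), and the point that minimizes the total squared distance to all of the rays
approximates the object position. The geometry below is standard; the positions and the
noise level are illustrative values only.

    import numpy as np

    def nearest_point_to_rays(origins, directions):
        """Least-squares point closest to a set of rays. Each ray contributes
        the normal equation (I - d d^T) p = (I - d d^T) o, because the matrix
        (I - d d^T) projects out motion along the ray direction d."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Two observation points viewing the same body along slightly noisy directions.
    origins = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
    target = np.array([50.0, 80.0, 10.0])
    directions = target - origins + np.random.normal(0, 0.5, (2, 3))
    estimate = nearest_point_to_rays(origins, directions)
    # If the residual distance from the estimate to each ray is small, the
    # lines "intersect" at approximately the same point, and the region is
    # treated as an object.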

     The architecture of the DARE system is built upon 6 main components. Images are
received from a camera unit and the images are searched for new bodies. If a body is
detected, then new pointing angles are calculated and passed to the scheduler. The image
processing unit performs filtering and image enhancement functions, while the region
segmentation processing unit performs the necessary data clustering. Regions are
calculated based on background pixel intensity and are “grown” from the ratio image
where each pixel gets a value that takes into account the mean and standard deviation of
statistical clustering functions. A registration unit tags each object and frame based on
the body of the object, the orientation of the object in reference to the background stars or
based on the object’s geometric characteristics. Once an object has been tagged and
identified, the matching unit finds similarities between regions in consecutive images in
an effort to begin the tracking process. Because at any one time there may be a host of
potential object candidates and, depending on the sampling rate, a large number of
images that must be processed, the amount of possible object-image combinations
becomes immense. In an effort to overcome this combinatorial explosion, tree pruning is
used as a method to reduce the amount of possible states. After an object has been
identified among several frames, the track-file maintenance unit follows the object as it
is placed into one of three tracks: failing, reserved, and object. The failing track is a
registry of those objects that do not appear to be good object candidates because of
conflicts with previous images; the objects in the failing track are subject to pruning.
The reserved
track maintains a registry of those objects that are more worthy than those objects in the
failing track but that demonstrate ambiguous behavior over the course of tracking. The
object track is a registry of those items that are most certain to be objects of interest. The
final unit, the detection-reporting unit, creates a record of those objects that survive over
the course of object tracking and which are candidates to be returned to Earth for further
study or given to a planner module in the case of autonomous re-planning.

    The use of image tracking and the need to deal with combinatorial explosion place
special emphasis on the proper management of system resources and the design of the
overall system architecture. Aside from the combinatorial explosion due to the various
objects that are being tracked over the course of several images, errors that are generated
during geometric analysis of object paths and orientation, insufficient parallax
computations and cosmic ray noise add further computational strain on the entire system.
For example, in the case where geometric errors are prevalent, it is possible that the
DARE system will maintain many tracks, all of which may correspond to the same object
but with slightly different geometric and path parameters.

    The DARE system has been tested and validated using an autonomous on-board
system. Using the record of objects that are produced by the detection-reporting unit, a
science processing unit searches for scientifically interesting events and sends discovered
objects to the planner. The planner produces a sequence of mission redirection
commands in response to a set of mission goals. Continued object observation can also
be planned in the case that more object information is desired. The planner module can
also opt to send object information to mission personnel on Earth in an effort to attain a
“second opinion” or in the case of an unresolved conflict in the planner module.

2.4 The Autonomous Small Planet In Situ Reaction to Events (ASPIRE) Project

Related to the DARE system is the Autonomous Small Planet In situ Reaction to Events
(ASPIRE) project. The goal of the ASPIRE system is to safely perform a close-
proximity rendezvous with a low-activity comet. The target for ASPIRE is the comet
du Toit-Hartley, which will be at perihelion on 15 February 2003. To achieve this goal,
the ASPIRE system employs a host of autonomous activities, including close proximity
navigation planning and execution, onboard pointing, onboard execution, and onboard
mission planning and sequencing.

    A great amount of interest lies in the study of comets due to the fact that they are
composed of the “building blocks” of the solar system; their compositional data is of
extremely high scientific value. Unfortunately, comets have the tendency to fracture, and
hence close rendezvous can only be performed if an agent is able to intelligently detect
and respond to any fracture events. A fracture event can be of varying intensities,
ranging from the expulsion of gasses to a total comet breakup. Collection of comet
gasses as well as collection of comet fragments requires an agent that can autonomously
maneuver to within a few kilometers of an active nucleus of a comet. The autonomous
agent must be able to effectively maneuver while being bombarded by high-speed
particles that are being ejected from the comet rock nucleus.




The ASPIRE project aims to integrate the detection, analysis and investigation of
short-term comet events. A science mission can have one or a series of high-order goals,
and it is the goal of the ASPIRE agent to re-plan science objectives in response to events
in the current environment. The agent will approach the comet from the sunward side (at
the time of perihelion, the sunward side of the comet is away from the stream of gasses
that form the comet tail) and will perform a series of maneuvers so that at arrival into
orbit the agent will have a velocity of 0 relative to the comet [4]. Leading up to and once
in orbit around the comet, the ASPIRE agent will function in one of three main modes:
a parking mode, an orbiting mode, and a hovering mode. An escape mode is also
available in the case that the comet exhibits break-up characteristics and endangers the
spacecraft.

    The parking mode involves the maneuvers that bring the agent to a safe and close
proximity of the comet. From the parking stage, the agent can move into the even-closer
orbiting mode or choose to remain in the parking stage depending on the condition of the
comet. Within the orbiting stage, the spacecraft flies a sequence of orbits around the
comet and captures data through the use of wide-field and narrow-field view cameras.
The hovering state involves a close flyby of the comet surface in an effort to closely
investigate various features of the comet. In both the orbiting and hovering states, the
spacecraft must complete the planned routine before choosing to go to the next hovering
or orbiting state—except in the case of comet break up, in which case the spacecraft
enters escape mode, leaves the proximity of the comet and returns acquired data to Earth.
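
    A toy state machine capturing these mode transitions is sketched below. The
transition triggers are simplified placeholders rather than the actual ASPIRE decision
logic; the one faithful feature is that comet breakup preempts everything and forces the
escape mode from any state.

    TRANSITIONS = {
        "parking":  {"approach_safe": "orbiting"},
        "orbiting": {"routine_done": "hovering", "return": "parking"},
        "hovering": {"routine_done": "orbiting"},
    }

    def next_mode(mode, event):
        # Comet breakup preempts everything: escape is reachable from any mode.
        if event == "comet_breakup":
            return "escape"
        return TRANSITIONS.get(mode, {}).get(event, mode)

    mode = "parking"
    for event in ("approach_safe", "routine_done", "comet_breakup"):
        mode = next_mode(mode, event)
    print(mode)  # escape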

    An example scenario includes the approach of the spacecraft to within 20km of the
comet. An event is detected by the science module, and the spacecraft points the narrow-
field of view camera to the location where the event was detected. The exact coordinates
of the event from the science module are used to construct a close-flyby (the hovering
mode) trajectory. The close fly-by is performed, image data is collected, and the
spacecraft returns to the safe 20km distance from the comet. The next event involves the
ejection of two particles from the comet’s body. The ejection event is detected, analyzed
and further investigated by use of the narrow-field of view camera. Further breakage
occurs and the comet begins “total breakup”, at which time the spacecraft enters escape mode.

    The science module of the ASPIRE system must be proficient in detecting events of
scientific interest. There are seven components to the science module, including a mode
handler component, a viewing angles calculation component, a change detection
algorithm component, a confidence measure component, a clustering component, and a
classification component. The mode handler component of the ASPIRE system receives
the current mode of the spacecraft and appropriately determines what to do with the
acquired (and stored) data images. The mode handler chooses to store images, compare
images or request to acquire additional images. The viewing angles calculation
component uses the spacecraft’s position in reference to various astronomical markers as
well as the tilt of the camera to determine the focus of the onboard cameras. This “angles
calculation” procedure allows the ASPIRE system to compare currently acquired images
to those images that are stored in the database. Subsequently, images in the database can
be replaced or appended so that a time-delimited record of events can be stored.



The change detection algorithm uses individual images of a specific region to
determine any changes—events—that might be underway. The changes are detected by
dividing the images into small tiles. Various correlation, interpolation and a host of
statistical analysis techniques are used to detect the displacement of each tile in reference
to past images of the same location. The statistical analysis techniques form a part of the
confidence measure component of the science module where threshold values, upper-
bound and lower-bound variables help determine confidence measures. Clustering
methods are used to combine displacements of similar magnitude and direction so that
large-scale events—events that span several image sets—can be detected. In the case of
complete comet breakup or large particle ejection, various components of the comet will
be detected through the application of the clustering component of the science module
[4]. Finally, the classification component of the science module is responsible for the
classification of events into one of several categories, including jet formation, surface
fracture, fragment ejection or complete comet breakup. Depending on the nature of the
event, the results of the statistical analysis module will help determine if the spacecraft
should collect gas samples, collect particle samples or escape due to comet breakup.
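
    A schematic version of the tile-based change detection appears below: a current
image is divided into small tiles, the displacement of each tile relative to a reference
image is estimated by an exhaustive correlation search, and only tiles whose displacement
exceeds a confidence threshold are kept for further clustering. The correlation measure,
search radius, and thresholds are simplified placeholders, not the statistical machinery
of the actual science module.

    import numpy as np

    def tile_displacements(ref, cur, tile=16, search=4):
        """Best integer (dy, dx) shift of each tile, by exhaustive correlation."""
        h, w = ref.shape
        shifts = {}
        for y in range(0, h - tile, tile):
            for x in range(0, w - tile, tile):
                patch = ref[y:y + tile, x:x + tile]
                best, best_score = (0, 0), -np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy and yy + tile <= h and 0 <= xx and xx + tile <= w:
                            score = np.sum(patch * cur[yy:yy + tile, xx:xx + tile])
                            if score > best_score:
                                best, best_score = (dy, dx), score
                shifts[(y, x)] = best
        return shifts

    def moving_tiles(shifts, threshold=1.0):
        """Keep tiles whose displacement magnitude exceeds the threshold."""
        return {pos: d for pos, d in shifts.items() if np.hypot(*d) >= threshold}

    ref = np.random.rand(64, 64)
    cur = np.roll(ref, shift=2, axis=0)  # simulate a large-scale displacement
    print(len(moving_tiles(tile_displacements(ref, cur))))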

2.5 The Techsat-21 Autonomous Science Craft Constellation Demonstration

The Autonomous Science Craft Constellation (ASC) flight is scheduled to fly on board
the Air Force’s TechSat-21 mission in October 2004. The goals of the ASC mission include
onboard science analysis and algorithms, re-planning, robust execution, model-based
estimation and control, and the execution of formation flying in an effort to increase
science return. As is the case with the MISUS system, a series of independent agents will
work together in the ASC project. Distributing functionality among the different agents
will increase system robustness and maintainability, and in turn the cost of the mission
will be reduced when compared to a multi-satellite mission where each spacecraft possesses
a complete set of autonomous routines and subprograms and where each micro satellite
performs the same set of duties.

    The micro satellites will at times function as one “virtual” satellite, while during other
times each micro satellite will be performing special tasks. The satellites will fly as far as
5km apart or fly in formation where they are separated by as little as 100m. Radiation
hardened 175 MIPS processors will be on board each micro satellite, and OSE 4.3 will be
the operating system of choice because of the “message passing” nature of the program
[16].

    The onboard science algorithms fall into two main categories: image formation and
onboard science. Image formation is responsible for creating image products, while
target size, pointing parameters and radar attributes will be used to determine the
resolution for an image so that the best image in the least amount of time can be created.
The reduced resolution images also allow science products to be scaled and compressed,
and so more data can be stored onboard the spacecraft. The onboard science algorithms
will analyze the images produced by the image formation component and will output
derived science products, if any. Trigger conditions across time-variant images will be
used to detect anomalous events, for example backscatter properties that may be an
indication of recent change [24]. Trigger conditions will be checked not only on
successive images but also on images that span a large amount of time; such a procedure
will be able to detect slow-change events such as the movement of ice plates. Statistics
will be used to determine if significant changes have occurred as measured by region
size, location, boundary morphology and the image histogram. Different missions will
require different computational requirements due to the nature of the content of the
images, and so the statistical algorithms scale linearly to accommodate an image of
varying pixel resolution. The detection of an anomalous event will trigger a series of
events, including the continued monitoring and analysis of an event and, if necessary, the
immediate downlink of information to the mission command center.

    The Spacecraft Command Language (SCL) will handle the robust execution aspect of
the ASC system where procedural programming with a real-time, forward-chaining
system will be used. The communication software will be of a publish/subscribe nature,
where notification and request messages will be handled appropriately. The Continuous
Activity Scheduling Planning Execution and Replanning (CASPER) system will have the
ability to plan and schedule various SCL scripts, and spacecraft telemetry information
from the different micro satellites will be gathered by one of the satellites and used as
input for an integrated expert system (For more information about the CASPER planner,
see the section on the ASPEN planner, which is a forerunner of CASPER). This
integrated expert system will use trigger rules for fault detection, mission recovery
procedures, mission constraint checking and pre-processing initiatives [6].

     The model-based monitoring and reconfiguration system will be responsible for the
reduction of goals into a control sequence. To function effectively, the model-based
monitoring and reconfiguration system will oversee the proper function of all
components that exist along the command path so that damaged components are not
initiated. Not only must the reconfiguration system ascertain the health state of various
components, but also it must consider repair options and be able to produce a plan from
among the components that are working. In effect, the monitoring and reconfiguration
component must perform a great deal of reasoning; a model-based executive is used so
that effective choices can be made.

    A model-based executive will be used to track planner goals, confirm hardware
modes, reconfigure hardware, generate command sequences, detect anomalies, isolate
faults, diagnose various components and perform repairs. The model executive receives
a stream of hardware configuration goals and sensor information that it uses to infer the
state of the hardware. The model executive continually tries to transition the hardware
towards a state that satisfies the current configuration goals; this allows the model-based
executive to immediately react to changes in goals and failures [5]. Control actions are
incrementally generated using the new observations and goals given in each state.

    The model-based executive uses mode estimation, mode reconfiguration and model-
based reactive planning to determine a desired control sequence. The mode estimation
stage involves the setup of the planning problem where initial and target states are
identified. A set of most likely state trajectories is incrementally generated. The mode
reconfiguration stage sets up the planning problem, identifies the initial and target states,
and tries to determine a reachable hardware state that satisfies the current goal
configuration. The model-based reactive planning stage (which is based on an enhanced
version of a Burton system [16]) reactively generates a solution plan by generating the
first action in a control sequence that moves the system from the most likely current state
to the target state. The model-based executive’s link with the SCL provides execution
capabilities with an expressive scripting language. This makes it possible to generate
novel responses to anomalous events.
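
    The reactive planning step can be illustrated with a highly simplified sketch: given
a model of mode transitions for a healthy component, search for the shortest action
sequence from the most likely current state to the state satisfying the configuration
goal, and issue only the first action. The component, its modes, and its transitions are
invented for illustration; the actual Burton-based planner operates over far richer
models.

    from collections import deque

    # (state, action) -> next state, for one hypothetical switch component.
    MODEL = {
        ("off", "power_on"): "standby",
        ("standby", "arm"): "armed",
        ("armed", "fire"): "firing",
        ("armed", "safe"): "standby",
    }

    def first_action(current, goal):
        """Breadth-first search over the mode model; return the first action
        of the shortest action sequence that reaches the goal state."""
        queue = deque([(current, [])])
        seen = {current}
        while queue:
            state, actions = queue.popleft()
            if state == goal:
                return actions[0] if actions else None
            for (s, a), nxt in MODEL.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [a]))
        return None  # the goal is unreachable with the healthy transitions

    print(first_action("off", "armed"))  # power_on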

    The coordination of formation flying—cluster management—will be carried out by
decomposing control systems into agents consisting of multi-threaded processes. Each
process consists of a message that has a content field that is used to identify the purpose
of the message and its contents. Different agents will be loaded at different times, and
hence can be configured at the time of deployment. As part of the coordination process
among the micro satellites, agents will search out other agents that can provide needed
inputs. Because of this architecture, agents are easily created while control and
estimation procedures can be effectively integrated.

2.6 The Three Corner Sat Mission

The Three Corner Sat (3CS) mission—a cooperative project between The University of
Colorado, Arizona State University and New Mexico State University—includes robust
space command language (SCL) execution, continuous planning, onboard science
validation, anomaly detection and basic spacecraft coordination. Three 15-kg nano-
satellites will fly in formation, during which time each nano-satellite will be tumbling,
having no mechanisms for orientation control and stabilization.

    Onboard science validation will involve the analysis of images. Because each nano-
satellite will be tumbling and taking photos at various time intervals, part of the photos
will be of Earth, while others will be of outer space. The analysis of the images will
involve the compression of the data into a series of 1s and 0s, where each pixel of an
image will be assigned either a value of 1 (a threshold brightness value has
been reached, hence it is most likely that the pixel is part of an image of Earth) or a value
of 0 (a threshold brightness has not been reached, hence it is most likely that the pixel is
part of an image of outer space). Images will receive an aggregated value based on the
sum of “bright” pixels, and only those images meeting a certain brightness threshold will
be used for further analysis.
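
    This brightness filter translates almost directly into code. The sketch below uses
hypothetical threshold values; a real implementation would tune both thresholds to the
camera characteristics and the downlink budget.

    import numpy as np

    PIXEL_THRESHOLD = 0.5   # brightness above which a pixel is taken to "see" Earth
    IMAGE_THRESHOLD = 0.3   # fraction of bright pixels needed to keep an image

    def keep_image(image):
        bright = image > PIXEL_THRESHOLD         # the per-pixel map of 1s and 0s
        return bright.mean() >= IMAGE_THRESHOLD  # aggregate brightness score

    images = [np.random.rand(32, 32) for _ in range(5)]
    selected = [im for im in images if keep_image(im)]  # images worth analyzing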

    The robust execution will use a standard SCL that was first developed for embedded
flight environments. It is hoped that the SCL language will be reusable across different
control systems. The SCL allows data from multiple sources to be processed, and
includes scripts, constraints, data formats, templates and definitions. The goal of the SCL
is to fully define all data and allow for the control system to organize and validate
different system processes. It is hoped that the SCL will allow the operating system to
take actions depending on a variety of inputs, including time dependent events, operator
directives and the state of various system components [3]. The general procedure of the
SCL involves two main steps, the acquisition of data and the modification of the SCL
database by means of an inference engine. A data IO module will acquire, filter, smooth,
and convert the data to engineering units, while the Real Time Engine (RTE), the
collective of an inference engine, command interpreter, scripts scheduler, and execution
manager, will capture all SCL database updates and process the different rules associated
with different database items. SCL scripts will perform imaging, manage communication
links between the three nano-satellites, perform resource management and coordinate
communication with ground control.

    The 3CS mission will use the CASPER planning software as well as the Selective
Monitoring System (SELMON). The SELMON system uses multiple anomaly models to
identify and isolate phenomena and anomalous events. Outlier analysis will be used as an
anomaly detection tool, while attention focusing will attempt to determine how much of
the system, and which components, are being affected by an anomalous event. During the first
few moments of an anomalous event, different components of a system may be
bombarded with data and a host of abnormal signals, all of which should be analyzed.
The goal of the attention focusing feature is to help organize and catalog the state of the
various components of a system at the time of anomaly detection.

2.7 Challenges in the Design of Autonomous Agents and Mission Planners

Just as there is no one perfect autonomous agent prototype, there are no specific obstacles
and challenges that are applicable to the design and implementation of all autonomous
agent missions. Unique mission goals will present unique challenges. General software
design and mission planning concepts are, however, widely applicable. Different
components of a system must be effectively coordinated; a system language must be
exact enough to allow for intricate expressions but yet simple enough for debugging and
transmission purposes; image recognition and analysis tools should be effective yet not
require extensive amounts of computational resources.

    Texture analysis algorithms must address resource availability issues. Mathematical
and statistical programs, although very effective, might only be used to a limited extent
when implemented in an autonomous agent with limited computational power. Tools
that require less computational power must be created, or current computationally
intensive tools must either be adjusted or applied only in instances when absolutely
necessary.

    Of key concern to the MISUS project are issues related to coordination among a host
of robots and the ability to determine how to form a hypothesis. Several rovers will be
operating in tandem in an effort to collectively increase the amount of science data that
is collected. Communication and coordination must be managed, possibly
through the delegation of a “master” robot. The MISUS system must also be able to
effectively produce a hypothesis, and then either refute or refine the hypothesis in
response to test data. How can such decisions be made effectively and “intelligently”
with limited use of hard-coded parameters and without the use of preset upper or lower
bounds? If hard-coded parameters and threshold values are used, how certain are we that
an anomalous, yet undiscovered, phenomenon will be effectively observed and analyzed?



Design issues in the DARE system focus on the ability to recognize and track new
events. The computational complexity of detecting and tracking objects through the
analysis of successive images favors the pruning of the knowledge space so that
computational resources are not starved. Mathematical analysis and a host of statistical
analysis procedures may be used so that object tracking is more effective. As is the case
with the image texture analysis algorithms, mathematical analysis tools must be managed
in light of limited onboard computational resources.

    The ASPIRE project is an example of a close proximity rendezvous mission with a
comet. The Deep Space 4 / Champollion (ST4) mission, in fact, proposes to land on a
comet, an even more daunting task considering the unstable nature of comet surfaces. In
a landing scenario, re-planning and scheduling algorithms must be very fast and efficient.
Because of actual contact with a comet, a break-up activity must be detected almost
instantly and likewise a plan for escape must also be initiated immediately. An
autonomous agent landing on a comet will not have time for image analysis. More
importantly, images that are taken while an agent is anchored to a comet surface do not
provide the ability to see the comet from far away; hence there must be another way to
detect comet breakup.

    The Three Corner Sat (3CS) Mission is also unique because it employs a series of
nano-satellites that will be tumbling and taking images at spaced intervals. The nano-
satellites must be coordinated, and collected data must be analyzed so that only “usable”
science products are returned for analysis. In the 3CS mission, there is a need to keep
mission costs low while at the same time employing new technologies that can be costly
to develop and operate. The tumbling feature of the 3CS mission means that the satellites
cannot adjust their orientation and attitude, so spacecraft construction costs are reduced.
The image analysis algorithms must be efficient and yet functional on a system with
limited computation resources. The advent of new autonomous technologies will surely
help advance the science of space exploration, but the extent to which the new
technologies can be applied must be taken into consideration. This is similar to the image
and texture analysis projects where powerful tools have been created but, because of
computational or resource constraints, can only be implemented to a limited extent.




3. Support Technology for Autonomous Agents

There exist a variety of technologies that support the development of autonomous
agents. The products of research on these technologies provide better and more
autonomous functionalities that can be used for space missions and autonomous space
probes. Machine learning, the process by which an agent organizes information and then
uses that organized information to improve its future performance in some particular task,
is one technology that is highly applicable to autonomous space missions. Discussed
here is the general process of machine learning as well as the concept of neural networks,
a technology that allows machine learning algorithms to be efficient and reliable.

3.1 Machine Learning

Machine learning is concerned with the design and deployment of algorithms that
improve their performance based on past experience or data. There are various
approaches to machine learning, including classification, clustering, regression (linear
and non-linear), hypothesis modeling, and the use of neural networks. Classification
employs feature identification tools such as template matching algorithms, clustering
includes image segmentation and texture analysis techniques, regression involves the use
of predictive models, and hypothesis modeling uses simulations to better define
classification models. Each of the above techniques uses a host of statistical and
graphical tools, and in many cases the exact design of a machine learning component of
an autonomous agent is specific to a mission and is tailored to expected available
resources. For example, analysis and classification can be phenomenological or
hypothesis based. Phenomenological analysis includes the detection and classification of
objects based on anticipated phenomena, while hypothesis based detection and
classification schemes involve looking for data that either supports or refutes a hypothesis
that is created based on data-specific knowledge.

    The general machine learning process involves a series of steps, including
observation, hypothesis generation, model formulation, testing, and refinement/rejection
[10]. The first phase of machine learning involves the organization and exploration of
high-dimensional data. Dimension reduction and generalization algorithms may be
employed to make the data more manageable, considering that this early stage of machine
learning involves a large amount of data. As data is collected, classification and anomaly
detection is performed. It is the goal of classification to spatially order and categorize
data, while outlier analysis—a data mining technique—is used to screen for potentially
“interesting” events that can be good candidates for further investigation. During the
second stage of the machine learning process, a hypothesis is formed. Clustering
techniques are used to fit data into a series of probability distributions and data is made to
fit into predictive models that can be used to refine classification schemes. These
probability distributions, models and categorized data sets can be applied to nonlinear
regression techniques that attempt to transform observed patterns into potential
hypotheses.




The third major step in the machine learning process involves the formulation of a
model in an attempt to explain the phenomena, classification categories and proposed
hypotheses. Nonlinear regression again can be used, as well as a host of various
mathematical analysis tools such as trainable Markov Random Fields and Bayes
Networks. These analysis tools help interpret data and analyze patterns [32]. The goal is
to label data points as instances of interesting or non-interesting phenomena so that only
relevant data points are used and analyzed when formulating or fitting a model. Labeling
data as either interesting or non-interesting involves the use of generative and
discriminative models. Generative models create a union of the various attributes of a
dataset in an attempt to define the underlying physical phenomena, while discriminative
models try to make a distinction between interesting and non-interesting data points. The
generative model can be considered as a bottom-up technique while the discriminative
model merely looks at the “whole picture” without concern for the underlying attributes.

    At this point, classification schemes have been developed, one or more models have
been proposed to explain the classification schemes and phenomena, and additional data
sets and data points are acquired to test the validity of the model. This fourth step of
the machine learning process is concerned with determining what predictions to make and
what data to gather to test those predictions. Several data-point selection criteria have
been proposed over the years, including choosing the data point for which the current
machine learning model is least certain of its classification; this is an attempt to
collect and label the most informative data point.
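
    This least-certain selection criterion can be sketched in a few lines. The helper
below is a hypothetical illustration; it assumes a fitted scikit-learn-style classifier
exposing a predict_proba method and a pool of unlabeled observations.

   # Choose the unlabeled point whose classification the current model is
   # least certain of; labeling that point is expected to be most informative.
   import numpy as np

   def least_certain_index(model, unlabeled_pool):
       probabilities = model.predict_proba(unlabeled_pool)
       confidence = probabilities.max(axis=1)   # certainty of most likely class
       return int(np.argmin(confidence))        # index of least certain point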

    The final step of the machine learning method involves deciding whether to refine,
refute, or repeat the hypotheses and model predictions. In essence, the machine learning
process now attempts to answer, “Is there enough evidence to refute or confirm a
hypothesis? If no, should the hypothesis production and model development protocols be
repeated? If yes, should the hypothesis be included in the database of knowledge?”
Several issues at this stage concern the use of static lower and upper bounds. A
predetermined, hard-coded sentinel value may be used to determine whether some
characteristic of a data model or dataset is extreme enough to merit rejection or
acceptance of a hypothesis or prediction. Sentinel values and hard-coded trigger
thresholds can be used as confidence markers, under the assumption that future models
will produce results similar to previous models and model parameters.

    Many programs that use machine learning protocols are being developed or are
already in use within medicine, mathematics and space exploration. The 3-D Computer-
Automated Threshold Amsler Grid Test performs real-time medical analysis that can aid
in the diagnosis of eye diseases. The standard Amsler Grid Test is administered to a
patient with the aid of a touch-sensitive computer screen. The patient’s responses are
used as input for a modeling protocol that simulates successive Amsler Grid Tests at
various grayscale levels. The use of additional grayscales allows scotomas (portions of
the retinal field that are non-functional) to be depicted in 3-dimensions as opposed to 2-
dimensions, as is the case when using the standard Amsler Grid Test. The output of the
program is a depiction of a patient’s visual field [11]. The Cellerator™ project uses the
Mathematica® package to generate equations that model biological compounds.
Ordinarily, chemical networks must be manually translated from cartoon-like
diagrams to chemical equations and fitted to ordinary differential equations [13]. The
manual translation process is often a tedious task that is hard to automate because of the
many variations and configuration states of different biological elements. The Cellerator
program automatically translates chemical networks into ordinary differential equations,
which are more easily modeled. Once the Cellerator program generates the ordinary
differential equations, their components are used to model, and eventually solve for,
various chemical interactions [12].

    The Diamond Eye project is concerned with the cataloging and identification of
geological features for scientific analysis. The amount of scientific data that is acquired
grows with the launch of each scientific mission because of continued improvements in
data-gathering techniques. Filtering and on-board autonomous analysis capabilities allow
an autonomous agent to perform some of the analysis that otherwise would have to be
done by hand, a very time-consuming process [20]. The architecture of the Diamond Eye
project is based on a distributed software methodology where scientists can interact with
several data repositories [14,18]. An adaptive recognition algorithm mathematically
processes the low-level pixels of an image and uses several training models to construct
feature recognizers. Spatial structures, various object templates, and temporal analysis
tools are also used.

3.2 Neural Networks

The concept of artificial neural networks is inspired by biological nervous systems, is
concerned with information processing, and is part of the machine learning field.
Artificial neurons have one or more inputs and generally one output, and a network
operates in two main “modes”: training and production. Each input has an associated
weight, which helps determine the importance of that input’s data. Training involves
presenting information to a neural network so that the input weights can be determined,
and is usually done in one of two ways: supervised or unsupervised. The supervised
training method involves providing both the inputs and the desired output to a neural
system and then using back propagation to adjust the input weights; the errors between
the desired and actual outputs are analyzed. During unsupervised (or adaptive) training,
a neural network is not provided with the output, so the network must determine which
features are important in the input data set and how various features can be grouped and
analyzed so that effective input weights can be set. Multi-layer neural network
architectures, error detection and modification, data reduction, and several other issues
are pertinent to effective neural networks and help further refine and enhance them.

    There are several advantages to using a neural network, one of which is pattern
recognition that can lead to effective data organization methods. In the case of
unsupervised learning, organization is performed without any a priori knowledge;
patterns in multi-dimensional space can be transformed to a lower-dimensional space that
can be more easily ordered [15].
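
    As a concrete illustration of inputs, weights, and supervised training by back
propagation, the sketch below (added for this survey; it assumes only the NumPy library)
trains a tiny two-layer network on the exclusive-or function. Both the inputs and the
desired outputs are provided, and the error between desired and actual output is
propagated backward to adjust the weights.

   # Supervised training of a small neural network by back propagation.
   import numpy as np

   def sigmoid(x):
       return 1.0 / (1.0 + np.exp(-x))

   rng = np.random.default_rng(0)
   X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
   t = np.array([[0], [1], [1], [0]], dtype=float)  # desired outputs (XOR)

   W1 = rng.normal(size=(2, 4))   # input-to-hidden weights
   W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

   for _ in range(5000):
       # Forward pass: weighted inputs are passed through each neuron.
       hidden = sigmoid(X @ W1)
       output = sigmoid(hidden @ W2)
       # The error between desired and actual output drives the weight updates.
       error = t - output
       delta_out = error * output * (1 - output)
       delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
       # Back propagation: each weight is adjusted according to its share of
       # the blame for the output error.
       W2 += 0.5 * hidden.T @ delta_out
       W1 += 0.5 * X.T @ delta_hid

   print(np.round(output.ravel(), 2))   # should approach [0, 1, 1, 0]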



4. Autonomous Agent Planners

The planning software for an autonomous space agent generates mission plans so that
onboard science can be scheduled and performed, so that useful data can be acquired, and
so that mission objectives can be met. The planner in an autonomous space agent must
manage resources and must be able to resolve conflicts between different subsystems. In
addition, the planner must produce commands that are valid, are not redundant, are
rational and adhere to flight rules. As is the case with the overall architecture of an
autonomous agent, there does not exist a recipe for a “perfect” autonomous planner.

    This section includes an overview of planner functions, batch planners, dynamic
planners, and Hierarchical Task Network (HTN) planners. The Automated Scheduling
and Planning Environment (ASPEN) and the Simple Hierarchical Ordered Planner
(SHOP) are also discussed.

4.1 Coordination of Subsystems

Different subsystems of a space agent require instructional commands over the course of
a mission. Some components of an autonomous spacecraft may be turned “on” for only
brief amounts of time, in which case appropriate command sequences must be sent to
those components at the right moment. An example is the deployment of a heat shield
when a spacecraft enters the atmosphere of a planet: the deployment of the shield must
be timed just right so that the spacecraft is not damaged. Other subsystems of an
autonomous agent may be functional throughout the entire mission and may require
updated commands on a periodic basis. A navigation system, for example, must be
operational at all times and must receive data from orientation sensors. Still other
components may be turned on only in response to special events. If a spacecraft detects
an anomaly, a camera can be turned on to gather information so that science data may be
returned to Earth for analysis.

   Because a spacecraft contains several subsystems, the planner must be able to
coordinate many tasks. Some sequences of events are valid only when performed in a
special order. For example, if a spacecraft approaches an anomalous object, the order in
which different actions are performed is very important:

   1)   Turn thrusters off
   2)   Turn camera on
   3)   Take 30 images in 30 seconds
   4)   Turn camera off
   5)   Turn thrusters on
   6)   Resume previous flight path.

   If the actions are performed out of sequence, then the camera might not record the
anomalous object. Even worse, if the wrong actions are performed, then the spacecraft
may collide with the anomalous object and become inoperative.
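
    A planner can enforce such orderings mechanically. The sketch below is an
illustration added to this survey; the command names and constraint pairs are
hypothetical stand-ins for a real flight-rule database.

   # Checking ordering constraints on a command sequence -- a sketch.
   ORDERING = [                        # (before, after) flight-rule pairs
       ("thrusters_off", "camera_on"),
       ("camera_on", "take_images"),
       ("take_images", "camera_off"),
       ("camera_off", "thrusters_on"),
   ]

   def sequence_is_valid(commands):
       """Return True if every (before, after) constraint is respected."""
       position = {command: i for i, command in enumerate(commands)}
       return all(before in position and after in position
                  and position[before] < position[after]
                  for before, after in ORDERING)

   plan = ["thrusters_off", "camera_on", "take_images",
           "camera_off", "thrusters_on", "resume_flight"]
   print(sequence_is_valid(plan))   # True; reordering the steps fails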




In order for the planner to produce effective plans, the planner must take into
consideration a variety of factors. Different subsystems can directly or indirectly affect
other subsystems, so the planner must anticipate—and potentially resolve—possible
conflicts. Each command that is generated by the planner must be valid and must
conform to pre-defined rules, if any. The planner must also make sure that plans do not
violate flight objectives, and the planner must make sure that plans are not redundant.
The planner must also be rational, meaning that it arrives at the right solution for
the right reasons.

4.2 System Languages and Planner Logic

The ability of the planner to validate and check a command sequence is highly dependent
on the underlying system language and the way that logical statements are represented.
The system language must be able to easily and effectively express command sequences,
flight rules, the states of different subsystems, and the contents of databases. In order for
the planner to check and validate different commands, logical constructs must be allowed
so that a variety of statements can be verified. If the camera on board the spacecraft can
be turned on only when the spacecraft burners are turned off (the combined energy
requirement of the two components may be more than is available at any one time), then
there must be a way for the system language to represent that requirement. Similar
axioms, corollaries, and logical facts must be represented, as well as while-loops, if-
statements, and other programming constructs. The system language’s ability to
combine, manipulate, validate, classify, and order a variety of statements comes through
the use of logic [27]. The sum of all logical statements can be used to define a model of a
spacecraft’s subsystems, and this is how knowledge can be represented: simple facts are
combined and relationships are made to form complex ideas and expressions. Most
importantly, the use of logical statements and logical operators allows command
sequences to be validated. Conditionals and iterations of command sequences can be
represented using logical constructions and an effective modeling language [29].

    The actual way in which logical statements are represented in an autonomous agent
depends on the type of logic that is used. Logic itself stems from natural language
constructs and generally includes two types: propositional and predicate logic. Both
propositional and predicate logics allow for meaningful relationships to be assigned, but
the two types differ in their use of sentential connectives and predicates. Propositional
logic allows for reasoning about statements only in terms of “true” and “false” values,
but predicate logic allows reasoning about true and false statements and about
individuals; i.e., a statement can be quantified.

    In propositional logic, a sentential connective is an operator that combines one or
more complete ideas into a new aggregate idea. Truth-functional operators determine the
truth of a statement from the truth values of the statement’s individual components. For
example, the statement “the sky is blue” can be combined with the statement “ostriches
don’t fly” to form the sentence, “The sky is blue and ostriches don’t fly”. This sentence
is a construct of the two statements and is called a conjunctive construct because the
word “and” is used. The sentence is true if, and only if, each of the two statements is
true. Aside from the conjunctive construct, there are the disjunction and negation
constructs, as well as a range of constructs that can be formed using boolean operators
such as, “if, and, or, nor, implies,” etc. The ability to combine true or false statements
into aggregate ideas is the underlying concept of propositional logic; it is assumed that
all propositions have a definite truth value, namely either true or false.
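
    The truth-functional behavior of these connectives maps directly onto the boolean
operators of an ordinary programming language. The sketch below uses Python’s operators
as a stand-in for a spacecraft modeling language; it is an illustration added here, not a
representation used by any planner discussed in this report.

   # Propositional logic sketch: connectives combine complete statements,
   # and the truth of the aggregate follows from its components.
   sky_is_blue = True
   ostriches_dont_fly = True

   conjunction = sky_is_blue and ostriches_dont_fly       # true iff both true
   disjunction = sky_is_blue or ostriches_dont_fly        # true if either true
   negation = not sky_is_blue
   implication = (not sky_is_blue) or ostriches_dont_fly  # material "implies"

   print(conjunction, disjunction, negation, implication)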

    Predicate logic, however, is concerned not only with sentential connectives but also
with the actual structure of atomic propositions. A predicate is whatever is said of the
subject of a statement; a function that maps individuals to truth-values [30]. Atomic
sentences are constructed by applying predicates to individuals, and this is how unique
individuals can be quantified. For example, the two statements, “all men are mortal,” and
“Socrates is a man”, can be combined to form the sentence “Socrates is mortal”.
Predicate logic, then, deals with propositions whose subjects and predicates are
separately signified. There are several types of predicate logic, including first-order,
higher-order, inclusive, monadic, and polyadic predicate logics, each of which differs in
what arguments the predicates take. First-order logic, for example, is where predicates
take only individuals as arguments and quantifiers bind only individual variables; the
“Socrates is mortal” example is an instance of first-order predicate logic.

4.3 Batch Planners

A batch planner—often considered the “traditional” model—divides a mission into a
series of planning horizons, where each horizon lasts for a predefined amount of time and
during which a plan is set into action. At each instance when the mission timeline
approaches one of these planning horizons, the planner projects what the state of the
agent will be at the moment when the current plan expires. The planner generates a new
plan for the next plan horizon, taking into account the expected end state of the current
plan and the goals of the mission.




            Figure 1: Batch Planner. Plans are joined end-to-end, and each
            plan must run to completion before the next plan is implemented;
            the plans are separated by plan horizons. Before each plan
            horizon, the planner uses the current state values to project the
            state of the agent at the completion of the current plan. This
            projection is used as the starting point for the next phase. [16]
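
    The batch cycle can be sketched in a few lines of code. Everything in the example
below is an illustrative assumption (the state variable, the planning and projection
helpers, and the horizon lengths); it is meant only to show that each plan is built from
a projection rather than from observed state.

   # Sketch of a batch planner's control cycle: plans are joined end to end,
   # and each new plan starts from a *projection* of the end of the last one.
   def generate_plan(projected_state, horizon):
       # Hypothetical planning step: one action per time unit in the horizon.
       return [f"action_at_t{t}" for t in range(horizon)]

   def project_end_state(state, plan):
       # Hypothetical projection: assume each action uses one unit of power.
       return {"power": state["power"] - len(plan)}

   state = {"power": 100}
   for horizon in (10, 10, 10):            # three fixed planning horizons
       plan = generate_plan(state, horizon)
       state = project_end_state(state, plan)
       # The plan now runs to completion; events occurring mid-horizon cannot
       # alter it until the next plan horizon boundary is reached.
   print(state)                            # {'power': 70}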

    This traditional planning model has the advantage that the length of time between
plan horizons is known in advance, so at the time of actual plan generation the planner
routine knows exactly how much time remains, and continued revisions can be made to the
plan until the horizon occurs. The traditional planning model, however, also has several
limitations, including the need for dedicated resources, an inability to produce a new
plan quickly, and an overall inability to produce the most effective plan.

    The batch concept of the traditional planner requires that the planning phase be an
off-line process, where the planning routine is invoked only when a time horizon
approaches. This off-line scenario means that if system resources are limited, then the
planner cannot be invoked. Alternatively, the planner may be allotted dedicated system
resources, but then while the planner is not running, those dedicated resources cannot
be accessed by other subsystems and are left unused.

    If an anomalous event occurs, whether positive or negative, then the response time
may be long because a new plan can be implemented only at the next plan horizon, which
may not arrive for a significant amount of time. A negative event may require an
immediate response, and a positive event may be a short-lived science opportunity
during which important science information could be collected. Because the next plan
horizon may be far off, a fortuitous opportunity for data acquisition may be missed
because a new plan cannot be implemented quickly enough.

    The traditional planning model also may not be able to produce the most efficient
plan if the planner is especially slow or if the planner routine must be initiated far in
advance of the plan horizon. Because the batch planning method tries to project the most
likely state at the end of the current batch time phase, an event or change in
environmental variables that happens after the start of the planner but before the next
time horizon will not be taken into consideration by the planner routine. The projection
of the state at the end of the current time phase may be grossly wrong if the planning
algorithm starts
well in advance of the time horizon. Alternatively, the projection of the state at the end
of the current time phase cannot be calculated too close to the plan horizon because the
planner must be given enough time to construct a plan that is consistent with mission
goals.

4.4 Dynamic Planners

A dynamic/responsive planner, on the other hand, maintains a current goal set, a plan, a
current state, and a model of the expected future state. The dynamic/responsive planner
can update the goals, the current state, or the plan horizon at any time; when an update
occurs, the current plan is altered and the planner process is invoked. An update may be
triggered by a host of events or by a malfunction. The dynamic/responsive planner is able
to maintain a satisfactory plan because the most current sensory and goal integration
data is used.




            Figure 2: Continuous Planner. Instead of several plans joined
            end-to-end, there is only a current plan that is repeatedly
            modified in response to the current state of the agent. Goals and
            state representations are constantly being updated. Instead of
            waiting for a plan horizon, the planner constantly updates the
            plan [3].

    A dynamic/responsive planner integrates a new plan by means of a simplified cycle
where the goals and initial state of the current plan are appropriately updated, the effects
of the changes to the initial state and goals are propagated through the entire current plan,
potential conflicts are identified, and plan repair algorithms remove anticipated conflicts.
Conflict resolution involves tracking a host of agent systems, among them the
communications, science data acquisition, and engineering activities, as they pertain to
any changes that have been made to the plan. The dynamic/responsive planner therefore
has a significant advantage over the batch planner because changes to the plan can be
made immediately.
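
    The cycle can be sketched as an event loop. The updates, goals, and repair steps
below are illustrative assumptions only; the point is that each incoming update modifies
the one current plan immediately rather than waiting for a plan horizon.

   # Sketch of a continuous (dynamic) planner: the current plan is repaired
   # in place as goal and state updates arrive.
   from collections import deque

   updates = deque([{"new_goal": "image_anomaly"},
                    {"fault": "wheel_stuck"}])
   plan = ["cruise", "downlink"]

   while updates:
       update = updates.popleft()
       if "new_goal" in update:
           plan.insert(0, update["new_goal"])    # integrate the new goal now
       if "fault" in update:
           # Remove activities invalidated by the fault and add a response.
           plan = [a for a in plan if a != "cruise"] + ["safe_mode"]
       # Conflict identification and plan repair would run here each cycle,
       # keeping the plan consistent with the most current state.
   print(plan)   # ['image_anomaly', 'downlink', 'safe_mode']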

4.5 Hierarchical Task Network Planners

Hierarchical task network (HTN) planners aim to decrease planning time by hierarchical
decomposition. HTN planners reduce a problem by recursively decomposing tasks into
subtasks, stopping only when primitives have been reached. A primitive cannot be
decomposed into a simpler unit. Planning operators are used on the primitives to perform
various tasks. Sets of available methods that define how different tasks can be
decomposed are used; each method provides a schema for decomposing a task into
several subtasks. Because there are many ways to decompose a large task (which may be
a conjunction of multiple tasks which themselves can be decomposed), several methods
can be applied effectively [25].

     The input to the planner consists of a task network, a set of operators, and a set of
methods—a triplet. The task network is the entire problem that needs to be solved, where
each task is one specific thing that needs to be done. A task has a name and a list of
arguments that include variables, constants, and various attributes. Task and network
attributes may include constraints that restrict or prevent the use of some variables, or
constraints that require that a series of tasks be performed in a certain order. Tasks can
be primitive, meaning that they can be performed directly; tasks can be compound, in
which case the planner must decide how to decompose the tasks; or tasks can be goals, in
which case they are just properties that must be made true. Available operators
enumerate the effects of each of the primitive tasks. Methods indicate how to perform
various non-primitive tasks and are defined as a pair (x, y), where x is a task and y is a
task network that accomplishes it. Because planning problems (the triplets) are defined
mathematically, restrictions, reductions, and various comparisons can be performed
effectively using the available operational semantics. A planning domain is defined as
D = <Op, Me>, where Op is a set of operators and Me is a set of methods. The planning
problem, therefore, is defined as P = <d, I, D>, where D is the planning domain, I is the
initial state, and d is the task network that the plan needs to solve. Using HTN
planners, the overall planning process can be summarized as follows:

   1. Receive the problem P
   2. If P is comprised of all primitives, then
          a. Resolve conflicts in P
          b. If Resolved, return P
          c. Else, Return Failure
   3. Choose a non-primitive task t in P
   4. Choose a decomposition for t
   5. Replace t with decomposition
   6. Use critics to find interactions and resolutions
   7. Apply the resolutions
   8. Return to Step 2

    Critics (Step 6) are functions that handle ordering constraints and resource limits,
and they provide domain-specific guidance in the case that a planner has been designed
for a specific job and tailored algorithms/methods have been developed. Step 2 of the
overall process either returns a plan in which all the primitives have been resolved
(operators have been used successfully on all the tasks), or a failure is returned
because the plan cannot be solved (an operator does not exist for some primitive task).
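
    The core decomposition loop is easy to sketch. The tasks and methods below are
hypothetical examples, and the sketch handles only decomposition, omitting critics,
conflict resolution, and nondeterministic choice among competing methods.

   # Minimal HTN decomposition: compound tasks are recursively replaced by
   # subtasks until only primitive tasks remain.
   METHODS = {   # each method is a schema for decomposing one task
       "observe_anomaly": ["approach", "image_target", "depart"],
       "image_target": ["camera_on", "take_images", "camera_off"],
   }
   PRIMITIVES = {"approach", "camera_on", "take_images",
                 "camera_off", "depart"}

   def decompose(task):
       if task in PRIMITIVES:
           return [task]                 # a primitive cannot be decomposed
       if task not in METHODS:
           raise ValueError(f"no method for non-primitive task {task!r}")
       subtasks = []
       for subtask in METHODS[task]:     # apply the method's schema
           subtasks.extend(decompose(subtask))
       return subtasks

   print(decompose("observe_anomaly"))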

     There is a drawback to the use of HTN planners, however. When large, complex
initial tasks are used and when interactions among non-primitives are complex, the
planner may not be able to find solutions, especially if subtasks in the expanded,
non-decomposed task list are interleaved. In such complex cases, the HTN planning
problem is said to be “undecidable”.

4.6 The Automated Scheduling and Planning Environment (ASPEN)

The ASPEN planning and scheduling program is a re-configurable framework that can
support a wide variety of applications. High-level goals are translated into low-level
commands that help achieve the objectives of a particular mission. It is the goal of the
ASPEN project to reduce mission costs and eventually permit scientists to directly issue
commands to a spacecraft. Rather than being dependent on mission personnel to provide
command sequences, a spacecraft will receive mission goals from a scientist and in
response autonomously generate a plan and schedule sequence. Under an automated
scheduling and planning system, opportunistic and short-lived events can be effectively
monitored.

    The ASPEN software provides an expressive constraint modeling language, a
management system that is used for the maintenance of spacecraft operations and
resources, a host of search strategies, a reasoning system that is used for maintaining
temporal constraints, a language for representing plan preferences, various graphical
interfaces, and real-time re-planning capabilities [17]. Knowledge is stored as several
classes, including activities, parameters, temporal constraints, reservations, resource
variables, parameter dependencies and state variables. Different knowledge constructs
are used to define system components that are used to produce sequences of commands.

    The architecture of the ASPEN system uses iterative algorithms, heuristics, local
algorithms and parameters. Iterative algorithms permit re-planning to be used at any time
(in contrast to the batch planning protocol that is described above), which is one large
advantage in the case that anomalous events or short-lived opportunistic science
observations are expected. The use of heuristics allows for pruning of search trees and
knowledge spaces and may allow for a quick discovery of a higher quality solution [18].
Local algorithms do not have a computational overhead associated with intermediate
plans or previous failed plans. Consequently, local algorithms do not guarantee that
unsuccessful modifications to a plan will not be retried. The use of parameters and the
adherence to parameter constraints (in contrast to least-commitment techniques) allows
resource values to be easily computed [17].

    After a model is developed, ASPEN parses it into data structures that enable efficient
reasoning capabilities, where seven basic components are used. Parameters are used to
store simple variables and are used in parameter dependency functions. Dependencies
are represented and maintained in a Parameter Dependency Network (PDN), which
maintains all dependencies between parameters; at any given time, all of the dependency
relationships can be checked to ensure that they are satisfied [18]. Temporal
constraints are used to define relationships between the start and end times of two
different activities. Temporal constraints allow for the derivation of complicated
expressions, especially when used with conjunctive and disjunctive operators. Resources
are profiles that represent actual system resources or variables and permit the use of
restrictions. State variables describe the possible values of a system variable over time,
for example Busy, Idle, or Corrupt. Reservations allow activities to have resource
usage constraints, and can be modified, turned on or turned off depending on the need to
regulate activities. Finally, activity hierarchies allow for the breaking up of a task into a
series of sub-activities. The activity hierarchies are efficient in mandating the order in
which tasks must be performed, including configurations in series or parallel. The use of
these seven components in a variety of combinations allows for the design of plans and
the repair of plan conflicts.
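
    A parameter dependency network, the second of these components, can be sketched as a
set of relationships that are re-checked whenever a parameter changes. The parameter
names and the dependency function below are illustrative assumptions, not part of the
actual ASPEN modeling language.

   # Sketch of a parameter dependency network (PDN): each dependency ties a
   # parameter to a function of other parameters, and every relationship can
   # be checked at any time.
   parameters = {"exposure_time": 5, "images": 6, "total_time": 30}

   dependencies = [
       ("total_time", lambda p: p["exposure_time"] * p["images"]),
   ]

   def dependencies_satisfied(params, deps):
       return all(params[name] == fn(params) for name, fn in deps)

   print(dependencies_satisfied(parameters, dependencies))   # True
   parameters["images"] = 8   # an update made elsewhere in the plan...
   print(dependencies_satisfied(parameters, dependencies))   # ...now violated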




ASPEN defines ten basic types of conflicts, including abstract activities, unassigned
parameters, violated parameter dependencies, unassigned temporal constraints, violated
temporal constraints, unassigned reservations, depletable resources, non-depletable
resources, state requirement conflicts and state transition conflicts [19]. An abstract
activity conflict occurs when a command has not been decomposed into the appropriate
sub-commands or if there exist several possible command decompositions, in which case
an algorithm must decide which decomposition should be performed. An unassigned
parameter conflict represents the condition when a parameter has no value but has been
included in a plan; a parameter can be unassigned, but once part of a plan, a parameter
must have a concrete value. When two parameters violate a functional relationship, a
parameter dependency violation occurs. Parameter dependencies must be constantly
checked because of continued updates to parameters. When a temporal constraint exists
for an activity instance that has not been selected to satisfy that constraint, then an
unassigned temporal constraint conflict occurs. A violated temporal constraint conflict
occurs when a temporal constraint is assigned to a relationship that does not hold; this
prevents the setting of constraints that are potentially impossible to maintain. An
unassigned reservation conflict occurs when there exists a reservation in an activity that
has not yet been assigned to a resource. Timeline conflicts address issues of the use of
depletable and non-depletable resources. An upper or lower bound limit is set for most
variables, and the use of resources is closely monitored so that any one spacecraft
component does not exceed the allotted use of a resource. Timeline conflicts are
generally the most difficult to recover from because the solution may require that many
components of a system be adjusted. Finally, state variable conflicts result when either a
reservation mandates the use of a state that is not available, in which case a state
requirement conflict occurs, or when a reservation is changed to a condition that is not
allowed by a state variable, in which case a state transition conflict occurs.

    ASPEN performs iterative repair searches while taking into consideration different
constraints. Different types of constraints are organized into classifications depending on
how a constraint can be violated. The different violation types have appropriate repair
methods, and the search space includes all permissible repair methods as applicable to all
possible conflict types in all possible combinations. The iterative repair algorithm
searches the space of schedule components and makes decisions at key instances.
Choices are made when one of several elements must be selected, including a conflict, a
repair method, an activity for a repair method, a start time for an activity, a duration
for an activity, a timeline for a reservation, a decomposition, a change in a parameter,
or a value for a parameter.
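
    The shape of this search can be sketched as a loop over detected conflicts. The
conflict types, detection tests, and repair methods below are simplified stand-ins for
ASPEN’s ten conflict classes and their associated repairs.

   # Iterative repair sketch: repeatedly select a conflict and apply one of
   # its permissible repair methods until no conflicts remain.
   REPAIRS = {
       "unassigned_parameter":
           lambda plan: {**plan, "parameter_assigned": True},
       "violated_temporal_constraint":
           lambda plan: {**plan, "activity_moved": True},
   }

   def detect_conflicts(plan):
       conflicts = []
       if not plan.get("parameter_assigned"):
           conflicts.append("unassigned_parameter")
       if not plan.get("activity_moved"):
           conflicts.append("violated_temporal_constraint")
       return conflicts

   plan = {}
   conflicts = detect_conflicts(plan)
   while conflicts:
       conflict = conflicts[0]                # heuristic choice point
       plan = REPAIRS[conflict](plan)         # apply the paired repair method
       conflicts = detect_conflicts(plan)
   print("conflict-free plan:", plan)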

    The continuous planning algorithm receives a current plan, a current goal set, and a
current state. The current goal set is updated to reflect new goals, and conflicts are detected.
When a schedule contains several conflicts, the iterative repair algorithm selects a
conflict to attack and chooses a repair method. Because the conflict search space
contains pairs of conflicts and associated repair methods, a search for the conflict also
returns an appropriate repair method(s). There are many possible classes of repair
methods, including moving an activity to a different location in the plan, creating a new
activity, creating a reservation, canceling a reservation, or deleting an activity. During
the decision stage, the heuristics feature of the ASPEN system helps to prune the tree when
several solution methods exist. Several domain-independent heuristics have been
developed, including a heuristic for the sorting of conflicts according to type, a heuristic
for selecting the repair method when more than one repair method exists, and a heuristic
for determining start and end times of activities that are being shuffled to different
locations.

    The iterative planning algorithm releases appropriate non-conflicting sections of a
plan to an executive for execution. During the entire iterative planning process, the state
of the spacecraft is represented by a series of timelines that portray the current and
potential future states of the system. The algorithm continually updates the timelines in
response to new events and actual plan consequences. Most importantly, the planning
algorithm tries to project the state of the system only into the near future; projections for
the long term are generally very abstract so that modifications can be made easily [21].

4.7 The Simple Hierarchical Ordered Planner (SHOP)

The AI Planning group at the University of Maryland is designing the Simple
Hierarchical Ordered Planner (SHOP). SHOP uses Ordered Task Decomposition, a special
case of Hierarchical Task Network (HTN) planning in which the planner generates tasks
in the same order that the tasks will later be executed.

    The syntax and semantics of the SHOP planning system define logical symbols,
logical inferences, tasks, operators, plans, methods, domains and problems; first-order
logic is used. Logical symbols include constant, function, predicate, and variable
symbols, as well as terms, atoms, ground atoms, conjuncts of atoms, Horn clauses,
substitutions, and most-general unifiers. Constant symbols are “individuals”, as for
example, “Susan” and “3”. Function symbols map individuals to other individuals, as
for example, “age of (Susan) = 3”. Logical inference is through the use of states, axioms,
and satisfiers. A state is defined as a list of ground atoms, while an axiom is a set of
Horn clauses. (A Horn clause is a clause containing at most one positive literal; for
example, “has_wings(bird)”, indicating that a bird has wings.) Satisfiers are substitutions that help
make a conjunction true. Tasks are divided into two types, primitive task symbols and
non-primitive task symbols.

A task is of the form

   (s t1 t2 … tn)

where s is the task’s name and t1, t2, …, tn are the terms which define a task’s
arguments. A task list is defined as a list of tasks, either primitive or non-primitive.

Operators are expressions of the form

   (: operator h D A c)



where h is a primitive task, D and A are lists of atoms with no variables, and c is a
number, which is the cost of the primitive task h. The operator specifies that h is
accomplished by removing every atom in the list D from the current state and adding
every atom in the list A; hence the operator defines a “before” and “after” state. A plan
is defined as a list of operator instances [31].

Methods are of the form:

   (: method h C ‘T)

where h is a task, C is a conjunct (a precondition), while T is a task list.

Domains and problems are represented as a set of axioms, operators, and methods.
Planning problems are triplets, (S,T,D), where S is the state, T is a task list, and D is a
representation of a domain. For example, consider the following definitions for State S,
operator o, the substitution u, and the plan P:

   S = ((on a b) (ontable b) (clear a) (handempty));

   o = (:operator (!unstack ?x ?y)
   ((clear ?x) (on ?x ?y) (handempty))
   ((holding ?x) (clear ?y)));

   u = ((?x . a) (?y . b));

   P = ((!unstack a b) (!putdown b));

Lisp is the programming language that is used in SHOP, but no in-depth knowledge of
Lisp is necessary to understand the example. The state of the system is represented by
four statements, namely that “object a is on object b”, [(on a b)]; “b is on the table”,
[(ontable b)]; “there is nothing on top of a”, [(clear a)]; and “the hand is empty”,
[(handempty)]. The operator o is a function that “unstacks” objects x and y
[(!unstack ?x ?y)], first making sure that object x is on top of object y [(on ?x ?y)],
that the hand is empty [(handempty)], and that there is nothing on top of object x
[(clear ?x)]. The hand in this case can be a robotic hand, and hence must be empty if the
robot is to pick up the object x. Likewise, there should be nothing on top of object x.
Note that the use of question and exclamation marks is not analogous to their use in
standard written language; in this case the ? indicates that x and y are variables, while
the ! indicates that unstack is a primitive task. These checks that the operator function
performs are the preconditions, meaning that they must be true if the operator is to be
successful. The final line of the o operator is the “result” component, indicating the
end state after the operator has performed the action. The substitution u specifies that
the variables x and y are bound to the constants a and b; this relates the operator to
the specific objects in this example, namely objects a and b as defined in the state S.
The plan P is composed of two statements, “unstack objects a and b”, [(!unstack a b)],
and “put object b down”, [(!putdown b)]. The subgoal unstack is performed first, which
would result in the
following:

   (o)u = (!unstack a b)

indicating that the substitution u was applied to the operator o, taking into consideration
the current state S. The unstack routine would then be performed, which would first
check that there is nothing on top of a, a is on top of b, and that the hand is empty.
Although this is a simple example, it demonstrates how a language can be used to
represent a state as well as define operations. Note also that first-order logic is evident
here in the preconditions of the operator o and in the state S: (clear ?x), (on ?x ?y),
and (handempty) must all first be true in order for the operator to function, while
(on a b) and (ontable b) ground the variables x and y as referring to the specific
objects a and b.

The SHOP algorithm can be summarized as follows:

   procedure SHOP(S,T,D)
   1.  if T = nil then return nil endif
   2.  t = the first task in T
   3.  U = the remaining tasks in T
   4.  if t is primitive and there is a simple plan for t then
   5.      nondeterministically choose a simple plan p for t
   6.      P = SHOP(result(S,p),U,D)
   7.      if P = FAIL then return FAIL endif
   8.      return cons(p,P)
   9.  else if t is non-primitive and there is a simple reduction of t in S then
   10.     nondeterministically choose any simple reduction R of t in S
   11.     return SHOP(S,append(R,U),D)
   12. else
   13.     return FAIL
   14. endif
   end SHOP

where S is the state, T is a task list, and D is a representation of a domain.




5. Future Work

This survey report includes several unresolved issues and general concerns regarding the
design and use of autonomous agents in space missions. Section 2.7 of this report lists
several design and implementation issues for a few autonomous agent technologies and
several autonomous agent space missions. Specific issues can be addressed in more
detail, different autonomous technologies can be implemented in an actual robot, and
general autonomous behavior techniques can be modified and made compatible with the
resources available on autonomous spacecraft. Discussed in this section are three
possibilities for future work.

    The discussion of image analysis tools in this survey report notes several limitations
on the use of object detection tools in autonomous agents. The primary obstacles in
implementing image analysis tools onboard spacecraft are related to the availability of
resources. Image analysis algorithms use mathematical modeling, statistical analysis
functions and a variety of image acquisition and storage features—all of which require a
great amount of computational power, time, and energy. An image analysis tool can be
very effective when implemented on a computer system with unlimited access to a fast
multi-processor, a large database, and an unlimited energy source. In comparison, a
spacecraft has access to very limited resources. Space flight logistics, a limited source of
energy and a limited amount of computational power are all features of spacecraft that
limit the extended use of powerful image analysis tools. Future work could focus on the
scalability of these powerful ground-based image analysis tools and on the development
of comparable algorithms that require fewer resources. Different aspects of current image
analysis tools could be optimized and new functions could be developed. Related to the
development of such new tools are systems that can model space environments and
provide a “virtual” test platform. Image analysis tools could be modified or developed,
and then tested within an environment that mimics the conditions of space.

    Several of the planner features discussed in this paper could be implemented in an
autonomous robot with simple path-planning capabilities. As is the case when image
analysis tools are integrated into an autonomous agent, adding path-planning capabilities
to a robot would require the modification of computer code in response to the resources
of the robot. Energy resources, sensors, the planning module and a command executive
must all be integrated to function in collaboration. The robot could be tested on a mock
alien terrain to ensure that the planner functions properly.

    A detailed analysis of existing planners could ascertain whether different planning
programs are scalable for use in autonomous space missions. Different features of a
planner could be analyzed, including the ability of the planner to produce valid sequences,
the ability of the planner to quickly generate command sequences in response to
anomalous events, and the ability of the planner to recover from malfunctions. After
various planners are analyzed, similarities and differences of the different benchmark
studies could be used to propose a list of features and abilities that are required by a
space-worthy autonomous planner.




6. References

   1.  The MISUS Multi-Rover Project, website,
       http://www-aig.jpl.nasa.gov/public/msl.

   2.  B. Pell, D.E. Bernard, S.A. Chien, E. Gat, “An Autonomous Spacecraft
       Agent Prototype”, ACM, pp. 253-261, 1997.

   3.  S. Chien, B. Engelhardt, R. Knight, G. Rabideau, R. Sherwood, E. Hansen,
       A. Rotiviz, C. Wilklow, S. Wichman, “Onboard Autonomy on the Three
       Corner Sat Mission”, Proceedings of the 7th Symposium on Artificial
       Intelligence, Robotics, and Automation in Space (I-SAIRAS 2001),
       Canadian Space Agency, Montreal, 2001.

   4.  Autonomous Small Planet In situ Reaction to Events (ASPIRE) Project,
       website, http://www-aig.jpl.nasa.gov/public/mls/aspire/aspire.html.

   5.  P.G. Backes, G. Rabideau, K.S. Tso, S. Chien, “Automated Planning and
       Scheduling for Planetary Rover Distributed Operations”, Jet Propulsion
       Laboratory, California Institute of Technology.

   6.  “Casper: Space Exploration through Continuous Planning”, IEEE
       Intelligent Systems, September/October 2001.

   7.  R. Castano, T. Mann, E. Mjolsness, “Texture Analysis for Mars Rover
       Images”, website, http://www-aig.jpl.nasa.gov/public/mls/mls_papers.html.

   8.  R. Congalton, “A Review of Assessing the Accuracy of Classifications of
       Remotely Sensed Data”, Remote Sensing of Environment, Volume 37,
       Issue 1, 1991.

   9.  T.J. Ellis, M. Mirmehdi, G.R. Dowling, “Tracking Image Features Using a
       Parallel Computational Model”, Technical Report TCU/CS/1992/27, City
       University, Department of Computer Science, 1992.

   10. E. Mjolsness and D. Decoste, “Machine Learning for Science: State of
       the Art and Future Prospects”, Science, 293, pp. 2051-2055,
       September 2001.

   11. The 3-D Computer-Automated Threshold Amsler Grid Test, website,
       http://www-aig.jpl.nasa.gov/public/mls/home/wfink/3DVisualFieldTest.htm.

   12. B. Shapiro and E. Mjolsness, “Developmental Simulations with
       Cellerator”, Second International Conference on Systems Biology,
       November 2001.

   13. Cellerator, Jet Propulsion Laboratory, California Institute of
       Technology, website, http://www-aig.jpl.nasa.gov/public/mls/cellerator/.




                                             33
Master's Independent Study Final Report.doc
Master's Independent Study Final Report.doc

More Related Content

What's hot

Signal descriptors of 8086
Signal descriptors of 8086Signal descriptors of 8086
Signal descriptors of 8086
aviban
 
Current and power using hall sensors
Current and power using hall sensorsCurrent and power using hall sensors
Current and power using hall sensors
Prasad Deshpande
 

What's hot (20)

Diode Current Equation
Diode Current EquationDiode Current Equation
Diode Current Equation
 
Depletion MOSFET and Digital MOSFET Circuits
Depletion MOSFET and Digital MOSFET CircuitsDepletion MOSFET and Digital MOSFET Circuits
Depletion MOSFET and Digital MOSFET Circuits
 
Bobinas O Inductores
Bobinas O InductoresBobinas O Inductores
Bobinas O Inductores
 
Signal descriptors of 8086
Signal descriptors of 8086Signal descriptors of 8086
Signal descriptors of 8086
 
Pin diode
Pin diodePin diode
Pin diode
 
Semiconductor and it's types
Semiconductor and it's typesSemiconductor and it's types
Semiconductor and it's types
 
A Report on Routers
A Report on RoutersA Report on Routers
A Report on Routers
 
Current and power using hall sensors
Current and power using hall sensorsCurrent and power using hall sensors
Current and power using hall sensors
 
Z parameters
Z parametersZ parameters
Z parameters
 
TUNNEL DIODE
TUNNEL DIODETUNNEL DIODE
TUNNEL DIODE
 
semiconductor - description and application
semiconductor - description and applicationsemiconductor - description and application
semiconductor - description and application
 
Basics of MOSFET
Basics of MOSFETBasics of MOSFET
Basics of MOSFET
 
Unit-III Waveform Generator
Unit-III Waveform GeneratorUnit-III Waveform Generator
Unit-III Waveform Generator
 
Electrónica de 4º E. S. O.
Electrónica de 4º E. S. O.Electrónica de 4º E. S. O.
Electrónica de 4º E. S. O.
 
Tunnel Diode
Tunnel DiodeTunnel Diode
Tunnel Diode
 
Carbon nanotube based Field Effect Transistor
Carbon nanotube based Field Effect TransistorCarbon nanotube based Field Effect Transistor
Carbon nanotube based Field Effect Transistor
 
Chapter 5 stp
Chapter 5   stpChapter 5   stp
Chapter 5 stp
 
Tipos de condensadores
Tipos de condensadoresTipos de condensadores
Tipos de condensadores
 
application of fibre optics in communication
application of fibre optics in communicationapplication of fibre optics in communication
application of fibre optics in communication
 
Basics of JFET
Basics of JFETBasics of JFET
Basics of JFET
 

Viewers also liked

Abstract.doc.doc
Abstract.doc.docAbstract.doc.doc
Abstract.doc.doc
butest
 
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
butest
 
Chapter 10 - Section 179 and Additional 1st Year Depreciation
Chapter 10 - Section 179 and Additional 1st Year DepreciationChapter 10 - Section 179 and Additional 1st Year Depreciation
Chapter 10 - Section 179 and Additional 1st Year Depreciation
butest
 
Resume
ResumeResume
Resume
butest
 
Annual Report
Annual ReportAnnual Report
Annual Report
butest
 
TRAINING
TRAININGTRAINING
TRAINING
butest
 
Outline D
Outline DOutline D
Outline D
butest
 
BIOSKETCH
BIOSKETCHBIOSKETCH
BIOSKETCH
butest
 
Module-related pages
Module-related pagesModule-related pages
Module-related pages
butest
 
CP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial IntelligenceCP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial Intelligence
butest
 
What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...
butest
 
Nabila__proposal4.doc
Nabila__proposal4.docNabila__proposal4.doc
Nabila__proposal4.doc
butest
 
Text Mining: Beyond Extraction Towards Exploitation
Text Mining: Beyond Extraction Towards ExploitationText Mining: Beyond Extraction Towards Exploitation
Text Mining: Beyond Extraction Towards Exploitation
butest
 
What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...
butest
 
Summer Internships at the Center for Advanced Research The Center ...
Summer Internships at the Center for Advanced Research The Center ...Summer Internships at the Center for Advanced Research The Center ...
Summer Internships at the Center for Advanced Research The Center ...
butest
 
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
butest
 
Machine Learning and Robotics
Machine Learning and RoboticsMachine Learning and Robotics
Machine Learning and Robotics
butest
 

Viewers also liked (18)

Abstract.doc.doc
Abstract.doc.docAbstract.doc.doc
Abstract.doc.doc
 
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
Dr. Emad A. Rahim Entrepreneurship and Small Business Management ...
 
Chapter 10 - Section 179 and Additional 1st Year Depreciation
Chapter 10 - Section 179 and Additional 1st Year DepreciationChapter 10 - Section 179 and Additional 1st Year Depreciation
Chapter 10 - Section 179 and Additional 1st Year Depreciation
 
Resume
ResumeResume
Resume
 
Annual Report
Annual ReportAnnual Report
Annual Report
 
TRAINING
TRAININGTRAINING
TRAINING
 
Outline D
Outline DOutline D
Outline D
 
BIOSKETCH
BIOSKETCHBIOSKETCH
BIOSKETCH
 
Module-related pages
Module-related pagesModule-related pages
Module-related pages
 
CP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial IntelligenceCP2083 Introduction to Artificial Intelligence
CP2083 Introduction to Artificial Intelligence
 
What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...
 
Nabila__proposal4.doc
Nabila__proposal4.docNabila__proposal4.doc
Nabila__proposal4.doc
 
Text Mining: Beyond Extraction Towards Exploitation
Text Mining: Beyond Extraction Towards ExploitationText Mining: Beyond Extraction Towards Exploitation
Text Mining: Beyond Extraction Towards Exploitation
 
What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...What's Available in Assistive Technology for Students with ...
What's Available in Assistive Technology for Students with ...
 
Summer Internships at the Center for Advanced Research The Center ...
Summer Internships at the Center for Advanced Research The Center ...Summer Internships at the Center for Advanced Research The Center ...
Summer Internships at the Center for Advanced Research The Center ...
 
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
Julie Acker, M.S.W., CMHA Lambton Julie Acker holds a Masters ...
 
Dagan
DaganDagan
Dagan
 
Machine Learning and Robotics
Machine Learning and RoboticsMachine Learning and Robotics
Machine Learning and Robotics
 

Similar to Master's Independent Study Final Report.doc

Design of an orbital inspection satellite
Design of an orbital inspection satelliteDesign of an orbital inspection satellite
Design of an orbital inspection satellite
Clifford Stone
 
Innovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based PersonInnovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based Person
Austin Jensen
 
Spatial_Data_Analysis_with_open_source_softwares[1]
Spatial_Data_Analysis_with_open_source_softwares[1]Spatial_Data_Analysis_with_open_source_softwares[1]
Spatial_Data_Analysis_with_open_source_softwares[1]
Joachim Nkendeys
 
Pranav_Shah_Report
Pranav_Shah_ReportPranav_Shah_Report
Pranav_Shah_Report
Pranav Shah
 

Similar to Master's Independent Study Final Report.doc (20)

artificial inteliigence in spacecraft power application
artificial inteliigence in spacecraft  power applicationartificial inteliigence in spacecraft  power application
artificial inteliigence in spacecraft power application
 
PhD_main
PhD_mainPhD_main
PhD_main
 
PhD_main
PhD_mainPhD_main
PhD_main
 
PhD_main
PhD_mainPhD_main
PhD_main
 
Design of an orbital inspection satellite
Design of an orbital inspection satelliteDesign of an orbital inspection satellite
Design of an orbital inspection satellite
 
thesis-2
thesis-2thesis-2
thesis-2
 
main
mainmain
main
 
Innovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based PersonInnovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based Person
 
thesis.compressed
thesis.compressedthesis.compressed
thesis.compressed
 
ltu-cover6899158065669445093
ltu-cover6899158065669445093ltu-cover6899158065669445093
ltu-cover6899158065669445093
 
ARTIFICIAL intelligence technique for space exploration TECHNICAL SEMINAR.pptx
ARTIFICIAL intelligence technique for space exploration TECHNICAL SEMINAR.pptxARTIFICIAL intelligence technique for space exploration TECHNICAL SEMINAR.pptx
ARTIFICIAL intelligence technique for space exploration TECHNICAL SEMINAR.pptx
 
ARTIFICIAL intelligence technique for space exploration TECHNICAL SEMINAR.pptx
Master's Independent Study Final Report.doc

Table Of Contents

Abstract
Table Of Contents
Acknowledgements
1. Introduction
2. Autonomous Mission Designs and Different Autonomous Technologies
   2.1 Texture Analysis for Mars Rover Images
   2.2 The Multi-Rover Integrated Science Understanding System (MISUS)
   2.3 The Distant Autonomous Recognizer of Events (DARE) System
   2.4 The Autonomous Small Planet In Situ Reaction to Events (ASPIRE) Project
   2.5 The Techsat-21 Autonomous Science Craft Constellation Demonstration
   2.6 The Three Corner Sat Mission
   2.7 Challenges in the Design of Autonomous Agents and Mission Planners
3. Support Technology for Autonomous Agents
   3.1 Machine Learning
   3.2 Neural Networks
4. Autonomous Agent Planners
   4.1 Coordination of Subsystems
   4.2 System Languages and Planner Logic
   4.3 Batch Planners
   4.4 Dynamic Planners
   4.5 Hierarchical Task Network Planners
   4.6 The Automated Scheduling and Planning Environment (ASPEN)
   4.7 The Simple Hierarchical Ordered Planner (SHOP)
5. Future Work
6. References
Acknowledgements

I wish to thank Dr. Frank Klassner for his support and continued suggestions throughout the course of this independent study project. I also wish to thank Dr. Don Goelman and Dr. Lillian Cassel for directing me to invaluable reference sources on data mining and information technology.
1. Introduction

On July 4, 1997, the Mars Pathfinder's Sojourner Rover rolled down the ramp of the lander spacecraft and began a 90-day survey mission of the Martian surface. The trip from Earth to Mars had lasted approximately 6 months, during which time data was periodically uploaded to and downloaded from the Pathfinder probe via NASA's Deep Space Network's 34-meter antenna at Goldstone, California [23]. A UHF antenna on board the Sojourner Rover was used to communicate with mission control on Earth; an Earth-based operator controlled the 6-wheeled rover. The roundtrip time for the relay of communications between Earth and Mars is approximately 20 minutes, but control commands for Sojourner were sent only approximately every 12 hours because the process of generating a new command sequence was lengthy. As downlink data was received, engineers determined the state of the rover, noting the state of various system components. The rover data was then given to scientists, who produced a high-level science plan, which in turn was given to engineers, who generated a low-level command sequence that was returned to the Pathfinder rover for execution.

Although Sojourner did have a few autonomous capabilities, which were tested a few times, it never encountered events that it had to respond to immediately, and it did not have the ability to generate a plan for all of its subsystems. Sojourner was able to drive around objects that were in its way and could plan a path between two different points, but it was not able to continually operate in an autonomous mode. Future missions to Mars, other planets, and regions beyond Pluto will require that spacecraft be able to function without human intervention for long periods of time and perform several autonomous functions—this is one motivation for the integration of autonomous technologies into space missions. If many components of a space mission could be automated, then mission costs could be decreased and the amount of necessary communication time between Earth and a spacecraft could be greatly reduced. If a spacecraft can generate valid command sequences for itself in response to sensor inputs, can manage its own onboard resources, can monitor the condition of different spacecraft components, and can intelligently gather and analyze data, then more time can be spent on exploration and less time on waiting for new command sequences from Earth.

An autonomous spacecraft is one that performs its own science planning and scheduling, translates schedules into executable sequences, verifies that the produced executable sequences will not damage the spacecraft, and executes the derived sequences without human intervention. Because the goals of each mission are unique, there is no "perfect" recipe for constructing an autonomous agent. Different missions require various levels of autonomy, and depending on the sensor types and overall architecture of a spacecraft, a successful autonomous agent architecture and set of algorithms for one mission might not be effective when applied to another mission with different objectives and a different developmental history. Several common topics that reappear in different autonomous agent designs can, however, provide a general roadmap for the design of a successful autonomous agent.
Regardless of whether an autonomous agent is required to exhibit autonomous behavior for 5 seconds or for 100 years, several issues relevant to the successful continued functioning of the spacecraft must be addressed. Resource constraints and hard deadlines require that resources be managed effectively. Non-renewable fuels must be used sparingly, and alternative, renewable energy sources, together with methods to acquire that energy, should be used. For example, solar power can be used as a source of energy, and a calculated close flyby of a planet can produce a "slingshot" effect in which the spacecraft is accelerated by the transfer of orbital energy from the planet to the spacecraft. Resources must also be managed with respect to the number of sensors used; although additional sensors can increase the validity and precision of data, each new sensor requires more energy. Effective management of a spacecraft's resources also includes the effective maintenance of concurrent activities. A planner/scheduler must be able to schedule concurrent activities in different parts of the spacecraft, which raises the possibility of communication overload or an energy shortage.

The architecture of an autonomous agent must facilitate effective communication, scheduling, and schedule execution functions. An effective data bus and/or communication system must allow easy access to mission goals and the mission databases, including both hard-coded data and data that has been acquired during the mission. The planner and scheduler require a host of communication capabilities to transmit goals and spacecraft status parameters; the derived schedule may be broken into various segments depending on the design of the planner. Schedule segments are sent from the planner to the executive module, which executes the various schedule components. Messages are repeatedly passed between the various components of a spacecraft operating system, and the hardware and software architecture of the spacecraft must be designed so that spacecraft communications systems can function at an optimal level.

The function of the planner/scheduler of an autonomous agent is to generate a set of high-level commands that will help achieve mission goals. Planning and scheduling aspects must be tightly integrated, and any planned activities must be coordinated with the availability of global resources. Different subsystems of a spacecraft can also have their own subroutines, so a planner must be able to integrate different spacecraft functions and be aware of the different energy requirements and working parameters of different spacecraft components.
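To make the resource-coordination requirement concrete, the sketch below shows one simple way a planner/scheduler might admit activities against a shared power budget and a single-instrument constraint. It is a minimal illustration; the activity names, numbers, and greedy admission rule are invented for the example, and real planners such as ASPEN (discussed later in this report) use far richer models.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: int        # start time (minutes into the plan window)
    duration: int     # minutes
    power: float      # watts drawn while active
    instrument: str   # resource that cannot be shared

POWER_BUDGET = 100.0  # watts available at any instant (illustrative)

def conflicts(a: Activity, b: Activity) -> bool:
    """Two activities overlap in time."""
    return a.start < b.start + b.duration and b.start < a.start + a.duration

def admissible(candidate: Activity, schedule: list) -> bool:
    """Check the candidate against instrument exclusivity and the power budget."""
    overlapping = [s for s in schedule if conflicts(candidate, s)]
    if any(s.instrument == candidate.instrument for s in overlapping):
        return False
    # Worst-case concurrent draw: candidate plus every overlapping activity.
    if candidate.power + sum(s.power for s in overlapping) > POWER_BUDGET:
        return False
    return True

# Greedily admit goal activities in priority order.
goals = [
    Activity("image_target",  start=0,  duration=20, power=60.0, instrument="camera"),
    Activity("downlink_pass", start=10, duration=30, power=50.0, instrument="antenna"),
    Activity("spectrometer",  start=5,  duration=15, power=30.0, instrument="spectrometer"),
]
schedule = []
for goal in goals:
    if admissible(goal, schedule):
        schedule.append(goal)

print([a.name for a in schedule])  # the downlink is rejected: it would exceed power
```

Running the sketch admits the imaging and spectrometer activities but rejects the downlink pass, whose concurrent power draw would exceed the budget; a real planner would instead shift the downlink later rather than discard it.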
The hybrid executive performs process and dependency synchronization and maintains hardware configurations in an effort to manage system resources. Because an autonomous agent can potentially encounter problems, both due to the incorrect functioning of the spacecraft and when introduced to an environment for which the agent is not suited, the agent must be able to enter a stable, "safe" state. Different agents use various forms and styles of a host of available abstract languages as a means of communication between different system components. Regardless of what language is used, the hybrid executive must be able to process different spacecraft modes using the available abstract language. In the case when an agent enters a "safe" mode in response to an error, the executive must be able to process the directives that are provided by the planner. The ability of the executive to understand abstract language terms and a host of plan initiatives allows for the continued execution of a plan under a variety of execution environments.

An autonomous agent should also be able to identify the current operating state of each component of the spacecraft; this allows an executive to reason about the state of the spacecraft in terms of component modes rather than in terms of low-level sensor values. This approach makes mode confirmation, anomaly detection, fault isolation and diagnosis, and token tracking possible. Mode confirmation provides a confirmation that a command has been completed successfully, while anomaly detection is simply the identification of inconsistent subsystems. In response to anomaly detection, the executive should also be able to isolate and diagnose faulty components and monitor the execution of the commands that have been produced by the planner.

In an effort to perform all of the above functions, the agent uses algorithms that describe and monitor the current state of the spacecraft. The mode of each component of the spacecraft can be modeled using propositional logic; the resulting component representations can be used by the planner and by the hybrid executive modules. The way that knowledge is represented can have a great effect on the proper continued working status of an autonomous agent. For example, in the case of heterogeneous knowledge representation, which is defined as "a non-uniform method used in loosely-coupled architectures for storing specialty knowledge not for general use [2]", coverage and mode identification is efficient; at any one time there may be several representations of the same spacecraft component. Heterogeneous knowledge representation provides multiple views of the condition of various components of a spacecraft and allows various component representations to be combined to form a thorough, complete picture of the state of a system. As a result, task and component specialization is possible, but model representations can likewise diverge and leave an autonomous system with "too many" views of the state of the autonomous agent. The knowledge representation language must therefore be concise, yet simple enough that quick calculations and necessary modifications to different spacecraft systems can be made.

Various components and technologies can be combined to make an "autonomous agent", including machine learning algorithms, artificial neural networks, and expert intelligence systems. The effective, coordinated functioning of the different autonomous components of an autonomous agent depends on many factors, including, among others, the management of resources, the overall system architecture, efficient scheduling and planning techniques, and the ability to handle errors. Additionally, any one mission that requires the use of autonomous agents may need to employ a host of data mining, knowledge discovery, and pattern matching capabilities, as well as be able to classify, cluster, and effectively organize sensor data. Different missions require the use of different autonomous technologies, so each autonomous component of a spacecraft must not only perform its job but also fit well within an overall autonomous architecture.
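The idea of reasoning over component modes rather than raw sensor values can be illustrated with a small sketch. Each component's candidate modes are expressed as propositional constraints over sensed values; a mode is confirmed when all of its constraints hold, and an anomaly is flagged when no known mode fits. The component names, modes, and thresholds below are invented for illustration and do not come from any of the surveyed systems.

```python
# Each mode is a set of propositional constraints over named sensor readings.
# A constraint is a predicate; a mode holds when all of its predicates hold.
MODES = {
    "camera": {
        "off":     [lambda s: s["cam_current"] < 0.1],
        "standby": [lambda s: 0.1 <= s["cam_current"] < 0.5,
                    lambda s: s["cam_temp"] < 40.0],
        "imaging": [lambda s: s["cam_current"] >= 0.5,
                    lambda s: s["cam_temp"] < 60.0],
    },
    "antenna": {
        "idle":         [lambda s: s["ant_power"] < 1.0],
        "transmitting": [lambda s: s["ant_power"] >= 1.0],
    },
}

def estimate_mode(component, sensors):
    """Return the first mode whose constraints are all satisfied, else None."""
    for mode, constraints in MODES[component].items():
        if all(check(sensors) for check in constraints):
            return mode
    return None  # no consistent mode: an anomaly for this component

sensors = {"cam_current": 0.7, "cam_temp": 72.0, "ant_power": 2.3}
for component in MODES:
    mode = estimate_mode(component, sensors)
    if mode is None:
        print(f"ANOMALY: {component} matches no known mode")  # cue for fault isolation
    else:
        print(f"{component}: {mode}")
```

With these readings the camera draws imaging-level current but is too hot for any defined mode, so it is flagged as anomalous; mode confirmation after a commanded transition would similarly compare the commanded mode against the estimated one.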
2. Autonomous Mission Designs and Different Autonomous Technologies

Included in this section are several short reviews of a variety of space missions that employ autonomous agent technologies. In some cases, only certain aspects of an entire system exhibit autonomous behavior, while other spacecraft are designed to be fully autonomous. Included in this section are examples of an autonomous image recognition algorithm, autonomous agent architecture designs, event recognition and classification agents, autonomous systems that employ reactive planners, and multi-rover/multi-spacecraft missions. The variety of goals, design features, and methodologies inherent in each of these missions is a good indicator of how "autonomous" technology can manifest itself in many forms depending on mission objectives. Several concepts are nevertheless consistent across many different autonomous projects. There is no one "perfect" recipe for constructing an autonomous agent; instead, different autonomous agent technologies are combined so that mission goals can be achieved.

2.1 Texture Analysis for Mars Rover Images

Texture analysis through the use of image pixel clustering and image classification is one example of an autonomous functionality—made possible by machine learning—that can be part of a larger autonomous system. The goal of image analysis is to enable an agent to intelligently select data targets; this is an important goal considering that the amount of data acquired by a system tends to far exceed the storage capacity or the transmission capacity of an agent while on an exploration mission [7]. Different programs use different algorithms for the purpose of image analysis. Discussed here is the technique of analyzing textures as a means of image and object classification.

One way to extract texture information from an image is through the application of filters and the use of different mathematical functions; image data, for example, can be translated into the frequency domain through the use of the Fourier Transform. Several filters have been popular in this endeavor, including the Gabor filters first introduced by J. G. Daugman [8]. Different cells in the visual cortex can be modeled through the use of two-dimensional functions, which allows different filters to discriminate between various visual inputs in the same way that different cells in the visual cortex are responsive to different stimuli. A bank of filters can be stored in a database so that a host of anticipated textures can be analyzed and classified. Mission resources can dictate which filters should be supplied, and different filters can be rejected or accepted depending on the amount of computational power required to pass a filter across an image. Additionally, various filters can be applied in tandem or in series so that a complete mathematical representation of an image can be calculated. Not only can different filters be applied, but different filters also have different parameters, many of which can be adjusted, ignored, or otherwise altered in an attempt to better suit a filter to the specific data domain.

Once the set of filters is run across an image, clustering tools can be used to classify the pixels of the image into one of several categories. Statistical analysis can then be performed to ensure that classification schemes have been properly developed and that the differences between separate classifications are significant enough to ensure proper separation of the pixel data into unique categories [8].
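As a concrete illustration of this filter-then-cluster pipeline, the sketch below builds a small bank of real-valued Gabor filters, convolves them across an image, and clusters the per-pixel response vectors with k-means. It assumes numpy, scipy, and scikit-learn are available; the filter sizes, wavelengths, and cluster count are illustrative choices, not the parameters used in [8].

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued 2-D Gabor kernel: a sinusoid modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def texture_features(image, wavelengths=(4, 8), n_orientations=4):
    """Stack per-pixel filter responses into a feature vector per pixel."""
    responses = []
    for lam in wavelengths:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = gabor_kernel(size=15, wavelength=lam, theta=theta, sigma=lam / 2)
            # Magnitude of the response captures local texture energy.
            responses.append(np.abs(convolve(image, kern, mode="reflect")))
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

# Cluster pixels into texture classes (e.g., rock vs. soil vs. sky).
image = np.random.rand(64, 64)            # stand-in for a rover image
features = texture_features(image)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
label_map = labels.reshape(image.shape)   # per-pixel texture class
```

In a mission setting, the resulting label map would feed a target-selection step, for example preferring to downlink image regions whose texture class is rare.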
2.2 The Multi-Rover Integrated Science Understanding System (MISUS)

The Jet Propulsion Laboratory (JPL) Machine Learning Group is developing the MISUS system. The system is ideally suited for Mars exploration, using multiple autonomous agents working together to achieve science learning goals [1]. The MISUS system employs autonomous generation of science goals, uses learned clustering methods, and attempts to use algorithms capable of hypothesis formulation, testing, and refinement. The autonomous aspect of the MISUS system selects and maps out, at appropriate scales, those areas of the surface most likely to return information with maximum science value.

The MISUS system is based on a framework capable of autonomously generating and achieving planetary science goals. Machine learning and planning techniques are used, including a simulation environment where different terrains and scenarios are modeled. The system learns by clustering data that it attains through spectral mineralogical analysis of the environment [1]. The clustering method attempts to find similarities among various samples and builds similarity classes. A model of the rock type distribution is created and continually updated as the robots attain more and more sensor data on the different terrain specimens. In pursuit of both a large number of samples and a uniform collection pattern of specimens, the planning and scheduling component of the MISUS system determines and monitors the activities of the various robots.

Within the framework of the MISUS system there exists an overall goal of "science data acquisition". In particular, the MISUS system developers aim to "test the AI algorithms on the resulting mineral distribution and reconstruct sub-surface mineral stratigraphy in candidate areas to determine possible extent (temporally and spatially) of hydrothermal activity, while considering the implications for Mars biology and the usefulness of mineral deposits [1]". Requirements to meet the science acquisition goal are continually updated in response to the activities of the individual robots. An iterative repair algorithm continually refines goals and activity schedules, while a statistical modeling algorithm that employs a stochastic parameterized grammar is used to formulate, test, and then either refine or discard a hypothesis.

2.3 The Distant Autonomous Recognizer of Events (DARE) System

The goal of the DARE system is to detect a new, potentially dangerous event sufficiently far in advance of an encounter that there is enough time to autonomously perform mission re-planning. Autonomous agents that come within close proximity of comets or otherwise maneuver in dangerous landscapes are thought to benefit most from this type of technology. The DARE system performs autonomous mission redirection calculations in response to specific sensor events. Image segmentation, clustering methods, and a host of other techniques are employed within a six-component architecture [22].
The motivation to design a re-planning autonomous agent stems from the spacecraft Galileo's discovery, in 1993, of Dactyl, a small satellite of the asteroid Ida. At the time it was believed that Ida's satellite was an anomaly, but in 1998 another such asteroid-satellite couple was encountered. Because the spacecraft Galileo did not possess the autonomous probe capabilities widely in use today, or those currently in the development and deployment stages, mission re-planning was not possible.

In an effort to locate and identify anomalous events, image segmentation and clustering techniques are used. Tracking regions through a sequence of images allows noise to be effectively removed. The tracking of regions takes into account a variety of scenarios, including tracking from a stationary or moving platform, tracking a moving or stationary body, tracking "noisy" or "clean" events, and tracking against stars or from images alone [9]. The agent platform can be assumed to be stationary if it is moving slowly relative to the image sampling rate, but in cases where the sampling rate is low or the movement of the agent spacecraft relative to an event is sporadic, the projection in the image plane can be approximated by a straight line with near-constant velocity. If the sampled object moves significantly from frame to frame and the sampling rate is low, then tracking of the object must be done in three dimensions. To track an object in universal coordinates, lines of sight are drawn from the imaging device through the detected object in the image plane; when lines through regions in various images intersect at approximately the same point, the intersection regions represent an object.

The architecture of the DARE system is built upon six main components. Images are received from a camera unit and are searched for new bodies. If a body is detected, then new pointing angles are calculated and passed to the scheduler. The image processing unit performs filtering and image enhancement functions, while the region segmentation processing unit performs the necessary data clustering. Regions are calculated based on background pixel intensity and are "grown" from the ratio image, where each pixel gets a value that takes into account the mean and standard deviation of statistical clustering functions. A registration unit tags each object and frame based on the body of the object, the orientation of the object in reference to the background stars, or the object's geometric characteristics. Once an object has been tagged and identified, the matching unit finds similarities between regions in consecutive images in an effort to begin the tracking process. Because at any one time there may be a host of potential object candidates and, depending on the sampling rate, a large number of images that must be processed, the number of possible object-image combinations becomes immense. In an effort to overcome this combinatorial explosion, tree pruning is used as a method to reduce the number of possible states. After an object has been identified among several frames, the track-file maintenance unit follows the object as it is placed into one of three tracks: failing, reserved, and object. The failing track is a registry of those objects that do not appear to be good object candidates because of conflicts with previous images; objects in the failing track are subject to pruning.
The reserved track maintains a registry of objects that are stronger candidates than those in the failing track but that demonstrate ambiguous behavior over the course of tracking.
The object track is a registry of those items that are most certain to be objects of interest. The final unit, the detection-reporting unit, creates a record of those objects that survive over the course of object tracking and that are candidates to be returned to Earth for further study or given to a planner module in the case of autonomous re-planning.

The use of image tracking, together with the combinatorial explosion issues described above, places special emphasis on the proper management of system resources and the design of the overall system architecture. Aside from the combinatorial explosion due to the various objects being tracked over the course of several images, errors generated during geometric analysis of object paths and orientation, insufficient parallax computations, and cosmic ray noise add further computational strain on the entire system. For example, in the case where geometric errors are prevalent, it is possible that the DARE system will maintain many tracks, all of which may be the same object but with only slightly different geometric and path parameters.

The DARE system has been tested and validated using an autonomous on-board system. Using the record of objects produced by the detection-reporting unit, a science processing unit searches for scientifically interesting events and sends discovered objects to the planner. The planner produces a sequence of mission redirection commands in response to a set of mission goals. Continued object observation can also be planned in the case that more object information is desired. The planner module can also opt to send object information to mission personnel on Earth in an effort to attain a "second opinion" or in the case of an unresolved conflict in the planner module.
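The three-track bookkeeping described above can be sketched as a small state machine: each track accumulates a match score as new frames arrive, is promoted or demoted between the failing, reserved, and object registries, and is pruned when it stays in the failing registry too long. The scoring rule and thresholds here are invented for illustration; DARE's actual criteria are statistical and considerably richer.

```python
from enum import Enum

class Registry(Enum):
    FAILING = 0
    RESERVED = 1
    OBJECT = 2

class Track:
    def __init__(self, track_id):
        self.track_id = track_id
        self.registry = Registry.RESERVED  # new tracks start as ambiguous
        self.score = 0
        self.misses = 0

    def update(self, matched):
        """Fold in one frame's evidence: a region match or a miss."""
        if matched:
            self.score += 1
            self.misses = 0
        else:
            self.score -= 1
            self.misses += 1
        # Promote/demote based on accumulated evidence (illustrative thresholds).
        if self.score >= 3:
            self.registry = Registry.OBJECT
        elif self.score <= -2:
            self.registry = Registry.FAILING
        else:
            self.registry = Registry.RESERVED

def prune(tracks, max_misses=3):
    """Drop failing tracks that have gone unmatched too many frames in a row."""
    return [t for t in tracks
            if not (t.registry is Registry.FAILING and t.misses >= max_misses)]

# Simulate per-frame match outcomes for three candidate tracks.
outcomes = {1: [True, True, True, True],      # consistent: promoted to OBJECT
            2: [True, False, True, False],    # ambiguous: stays RESERVED
            3: [False, False, False, False]}  # inconsistent: FAILING, then pruned
tracks = [Track(i) for i in outcomes]
for frame in range(4):
    for t in tracks:
        t.update(outcomes[t.track_id][frame])
    tracks = prune(tracks)
print([(t.track_id, t.registry.name) for t in tracks])  # [(1, 'OBJECT'), (2, 'RESERVED')]
```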
2.4 The Autonomous Small Planet In Situ Reaction to Events (ASPIRE) Project

Related to the DARE system is the Autonomous Small Planet In Situ Reaction to Events (ASPIRE) project. The goal of the ASPIRE system is to safely perform a close-proximity rendezvous with a low-activity comet. The target for ASPIRE is the comet du Toit-Hartley, which will be at perihelion on 15 February 2003. To achieve this goal, the ASPIRE system employs a host of autonomous activities, including close-proximity navigation planning and execution, onboard pointing, onboard execution, and onboard mission planning and sequencing.

A great amount of interest lies in the study of comets because they are composed of the "building blocks" of the solar system; their compositional data is of extremely high scientific value. Unfortunately, comets have a tendency to fracture, and hence a close rendezvous can only be performed if an agent is able to intelligently detect and respond to any fracture events. A fracture event can be of varying intensity, ranging from the expulsion of gases to a total comet breakup. Collection of comet gases, as well as collection of comet fragments, requires an agent that can autonomously maneuver to within a few kilometers of an active comet nucleus. The autonomous agent must be able to maneuver effectively while being bombarded by high-speed particles ejected from the comet's rocky nucleus.

The ASPIRE project aims to integrate the detection, analysis, and investigation of short-term comet events. A science mission can have one or a series of high-order goals, and it is the goal of the ASPIRE agent to re-plan science objectives in response to events in the current environment. The agent will approach the comet from the sunward side (at the time of perihelion, the sunward side of the comet is away from the stream of gases that form the comet tail) and will perform a series of maneuvers so that at arrival into orbit the agent will have a velocity of zero relative to the comet [4].

Leading up to and once in orbit around the comet, the ASPIRE agent will function in one of three main modes: a parking mode, an orbiting mode, and a hovering mode. An escape mode is also available in case the comet exhibits break-up characteristics and endangers the spacecraft. The parking mode involves the maneuvers that bring the agent to a safe and close proximity of the comet. From the parking mode, the agent can move into the even-closer orbiting mode or choose to remain parked, depending on the condition of the comet. Within the orbiting mode, the spacecraft flies a sequence of orbits around the comet and captures data through the use of wide-field and narrow-field view cameras. The hovering mode involves a close flyby of the comet surface in an effort to closely investigate various features of the comet. In both the orbiting and hovering modes, the spacecraft must complete the planned routine before choosing to go to the next hovering or orbiting state—except in the case of comet breakup, in which case the spacecraft enters escape mode, leaves the proximity of the comet, and returns acquired data to Earth.

An example scenario includes the approach of the spacecraft to within 20 km of the comet. An event is detected by the science module, and the spacecraft points the narrow-field-of-view camera to the location where the event was detected. The exact coordinates of the event from the science module are used to construct a close-flyby (hovering mode) trajectory. The close flyby is performed, image data is collected, and the spacecraft returns to the safe 20 km distance from the comet. The next event involves the ejection of two particles from the comet's body. The ejection event is detected, analyzed, and further investigated by use of the narrow-field-of-view camera. Further breakage occurs and the comet begins "total breakup", at which time the spacecraft enters escape mode.
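The mode structure just described lends itself to a small state-machine sketch: transitions between parking, orbiting, hovering, and escape are driven by classified science events, and breakup pre-empts everything else. The event names and transition rules below are a simplification invented for illustration, not ASPIRE's actual mode logic.

```python
# Modes and the events that drive transitions between them (illustrative).
PARKING, ORBITING, HOVERING, ESCAPE = "parking", "orbiting", "hovering", "escape"

TRANSITIONS = {
    (PARKING,  "comet_quiet"):      ORBITING,  # safe to move closer
    (ORBITING, "event_detected"):   HOVERING,  # plan a close flyby of the event site
    (HOVERING, "routine_complete"): ORBITING,  # return to the standoff orbit
    (ORBITING, "comet_active"):     PARKING,   # back off while activity is high
}

def next_mode(mode, event):
    # Total breakup pre-empts the normal transition table from any mode.
    if event == "total_breakup":
        return ESCAPE
    return TRANSITIONS.get((mode, event), mode)  # otherwise hold the current mode

mode = PARKING
for event in ["comet_quiet", "event_detected", "routine_complete", "total_breakup"]:
    mode = next_mode(mode, event)
    print(event, "->", mode)
# parking -> orbiting -> hovering -> orbiting -> escape
```

The requirement that a planned orbiting or hovering routine must run to completion before the next transition is captured here by only firing the hovering-to-orbiting transition on a routine_complete event.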
The science module of the ASPIRE system must be proficient in detecting events of scientific interest. There are seven components to the science module, including a mode handler component, a viewing-angles calculation component, a change detection algorithm component, a confidence measure component, a clustering component, and a classification component. The mode handler component receives the current mode of the spacecraft and appropriately determines what to do with the acquired (and stored) data images. The mode handler chooses to store images, compare images, or request the acquisition of additional images. The viewing-angles calculation component uses the spacecraft's position in reference to various astronomical markers, as well as the tilt of the camera, to determine the focus of the onboard cameras. This "angles calculation" procedure allows the ASPIRE system to compare currently acquired images to those images that are stored in the database. Subsequently, images in the database can be either replaced or appended so that a time-delimited record of events can be stored.
The change detection algorithm uses individual images of a specific region to determine any changes—events—that might be underway. Changes are detected by dividing the images into small tiles. Various correlation, interpolation, and statistical analysis techniques are used to detect the displacement of each tile relative to past images of the same location. The statistical analysis techniques form part of the confidence measure component of the science module, where threshold values and upper-bound and lower-bound variables help determine confidence measures. Clustering methods are used to combine displacements of similar magnitude and direction so that large-scale events—events that span several image sets—can be detected. In the case of complete comet breakup or large particle ejection, the various components of the comet will be detected through the application of the clustering component of the science module [4]. Finally, the classification component of the science module is responsible for the classification of events into one of several categories, including jet formation, surface fracture, fragment ejection, and complete comet breakup. Depending on the nature of the event, the results of the statistical analysis module will help determine whether the spacecraft should collect gas samples, collect particle samples, or escape due to comet breakup.
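A bare-bones version of the tile-displacement idea can be written with normalized cross-correlation: each tile of the current image is searched for within a small window of the reference image, and the best-matching offset is reported as that tile's displacement. This sketch assumes numpy and omits the interpolation, confidence thresholds, and displacement clustering that the text describes.

```python
import numpy as np

def tile_displacement(ref, cur, y, x, tile=8, search=4):
    """Best (dy, dx) aligning cur's tile at (y, x) with ref, by correlation."""
    patch = cur[y:y + tile, x:x + tile].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + tile > ref.shape[0] or rx + tile > ref.shape[1]:
                continue
            cand = ref[ry:ry + tile, rx:rx + tile].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((patch * cand).sum())  # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Reference image, and a current image in which everything shifted 2 pixels right.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=2, axis=1)
print(tile_displacement(ref, cur, y=12, x=12))  # (0, -2): the tile came from 2 px left
```

Tiles whose best correlation remains low would receive low confidence, and clustering displacement vectors of similar magnitude and direction recovers the large-scale events mentioned above.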
2.5 The Techsat-21 Autonomous Science Craft Constellation Demonstration

The Autonomous Science Craft Constellation (ASC) flight is scheduled to fly on board the Air Force's TechSat-21 mission in October 2004. The goals of the ASC mission include onboard science analysis and algorithms, re-planning, robust execution, model-based estimation and control, and the execution of formation flying in an effort to increase science return. As is the case with the MISUS system, a series of independent agents will work together in the ASC project. Distributing functionality among the different agents will increase system robustness and maintainability, and in turn the cost of the mission will be reduced when compared to a multi-satellite mission in which each spacecraft possesses a complete set of autonomous routines and subprograms and each micro-satellite performs the same set of duties. The micro-satellites will at times function as one "virtual" satellite, while at other times each micro-satellite will perform special tasks. The satellites will fly as far as 5 km apart or fly in formation where they are separated by as little as 100 m. Radiation-hardened 175 MIPS processors will be on board each micro-satellite, and OSE 4.3 will be the operating system of choice because of the "message passing" nature of the program [16].

The onboard science algorithms fall into two main categories: image formation and onboard science. Image formation is responsible for creating image products; target size, pointing parameters, and radar attributes will be used to determine the resolution for an image so that the best image can be created in the least amount of time. Reduced-resolution images also allow science products to be scaled and compressed, so that more data can be stored onboard the spacecraft. The onboard science algorithms will analyze the images produced by the image formation component and will output derived science products, if any. Trigger conditions across time-variant images will be used to detect anomalous events, for example backscatter properties that may be an indication of recent change [24]. Trigger conditions will be checked not only on successive images but also on images that span a large amount of time; such a procedure will be able to detect slow-change events such as the movement of ice plates. Statistics will be used to determine whether significant changes have occurred as measured by region size, location, boundary morphology, and the image histogram. Different missions will have different computational requirements due to the nature of the content of the images, so the statistical algorithms scale linearly to accommodate images of varying pixel resolution. The detection of an anomalous event will trigger a series of events, including the continued monitoring and analysis of the event and, if necessary, the immediate downlink of information to the mission command center.

The Spacecraft Command Language (SCL) will handle the robust execution aspect of the ASC system, where procedural programming with a real-time, forward-chaining system will be used. The communication software will be of a publish/subscribe nature, where notification and request messages will be handled appropriately. The Continuous Activity Scheduling Planning Execution and Replanning (CASPER) system will have the ability to plan and schedule various SCL scripts, and spacecraft telemetry information from the different micro-satellites will be gathered by one of the satellites and used as input for an integrated expert system. (For more information about the CASPER planner, see the section on the ASPEN planner, which is a forerunner of CASPER.) This integrated expert system will use trigger rules for fault detection, mission recovery procedures, mission constraint checking, and pre-processing initiatives [6].
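CASPER's continuous-planning approach, in which an existing plan is incrementally repaired against updated state rather than regenerated from scratch, can be caricatured in a few lines. The conflict rule and repair move below are toy stand-ins invented for illustration, not CASPER's actual algorithms.

```python
import random

def find_conflicts(plan, state):
    """A conflict is any activity whose power need exceeds what remains (toy rule)."""
    return [a for a in plan if a["power"] > state["power_available"]]

def repair(plan, conflict):
    """Repair by deferring the conflicting activity and halving its power draw."""
    fixed = dict(conflict, start=conflict["start"] + 10, power=conflict["power"] / 2)
    return [fixed if a is conflict else a for a in plan]

def continuous_planning_step(plan, state, max_iterations=20):
    """Iterative repair: fix one conflict at a time until the plan is consistent."""
    for _ in range(max_iterations):
        conflicts = find_conflicts(plan, state)
        if not conflicts:
            return plan  # plan is now consistent with the current state
        plan = repair(plan, random.choice(conflicts))
    raise RuntimeError("could not repair plan")

plan = [{"name": "radar_image", "start": 0, "power": 80.0},
        {"name": "downlink",    "start": 5, "power": 40.0}]
state = {"power_available": 50.0}   # telemetry update: less power than expected
plan = continuous_planning_step(plan, state)
print(plan)  # radar_image deferred and throttled until it fits the power budget
```

The same iterative-repair idea appears in the MISUS system described earlier and in the ASPEN planner discussed later in this report.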
The model-based monitoring and reconfiguration system will be responsible for the reduction of goals into a control sequence. To function effectively, the model-based monitoring and reconfiguration system will oversee the proper function of all components that exist along the command path so that damaged components are not initiated. Not only must the reconfiguration system ascertain the health state of various components, but it must also consider repair options and be able to produce a plan from among the components that are working. In effect, the monitoring and reconfiguration component must perform a great deal of reasoning; a model-based executive is used so that effective choices can be made. A model-based executive will be used to track planner goals, confirm hardware modes, reconfigure hardware, generate command sequences, detect anomalies, isolate faults, diagnose various components, and perform repairs. The model-based executive receives a stream of hardware configuration goals and sensor information that it uses to infer the state of the hardware. The executive continually tries to transition the hardware towards a state that satisfies the current configuration goals; this allows it to react immediately to changes in goals and to failures [5]. Control actions are incrementally generated using the new observations and goals given in each state.

The model-based executive uses mode estimation, mode reconfiguration, and model-based reactive planning to determine a desired control sequence. The mode estimation stage involves the setup of the planning problem, where initial and target states are identified; a set of most likely state trajectories is incrementally generated. The mode reconfiguration stage likewise sets up the planning problem, identifies the initial and target states, and tries to determine a reachable hardware state that satisfies the current goal configuration. The model-based reactive planning stage (which is based on an enhanced version of a Burton system [16]) reactively generates a solution plan by producing the first action in a control sequence that moves the system from the most likely current state to the target state. The model-based executive's link with the SCL provides execution capabilities with an expressive scripting language. This makes it possible to generate novel responses to anomalous events.

The coordination of formation flying—cluster management—will be carried out by decomposing control systems into agents consisting of multi-threaded processes. Each process consists of a message that has a content field used to identify the purpose of the message and its contents. Different agents will be loaded at different times, and hence can be configured at the time of deployment. As part of the coordination process among the micro-satellites, agents will search out other agents that can provide needed inputs. Because of this architecture, agents are easily created, while control and estimation procedures can be effectively integrated.

2.6 The Three Corner Sat Mission

The Three Corner Sat (3CS) mission—a cooperative project between the University of Colorado, Arizona State University, and New Mexico State University—includes robust Spacecraft Command Language (SCL) execution, continuous planning, onboard science validation, anomaly detection, and basic spacecraft coordination. Three 15 kg nano-satellites will fly in formation, during which time each nano-satellite will be tumbling, having no control mechanisms for orientation control and stabilization.

Onboard science validation will involve the analysis of images. Because each nano-satellite will be tumbling and taking photos at various time intervals, some of the photos will be of Earth, while others will be of outer space. The analysis of the images will involve the compression of the data into a series of 1s and 0s, where different pixels of different images will be assigned either a value of 1 (a threshold brightness value has been reached, hence it is most likely that the pixel is part of an image of Earth) or a value of 0 (a threshold brightness has not been reached, hence it is most likely that the pixel is part of an image of outer space). Images will receive an aggregated value based on the sum of "bright" pixels, and only those images meeting a certain brightness threshold will be used for further analysis.
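The Earth-versus-space screening step translates almost directly into code. The sketch below binarizes an image against a per-pixel brightness threshold, sums the bright pixels, and keeps the image only if the bright fraction clears an acceptance threshold; both threshold values are invented for illustration, since the text does not give 3CS's actual settings.

```python
import numpy as np

PIXEL_THRESHOLD = 0.3   # brightness above which a pixel counts as "Earth" (assumed)
IMAGE_THRESHOLD = 0.25  # minimum fraction of bright pixels to keep an image (assumed)

def validate_image(image):
    """Keep the image only if enough pixels exceed the brightness threshold."""
    bright = (image >= PIXEL_THRESHOLD).astype(np.uint8)  # the 1s-and-0s encoding
    bright_fraction = bright.sum() / bright.size
    return bright_fraction >= IMAGE_THRESHOLD

rng = np.random.default_rng(1)
earth_frame = np.clip(rng.normal(0.5, 0.2, (64, 64)), 0, 1)   # mostly bright frame
space_frame = np.clip(rng.normal(0.05, 0.05, (64, 64)), 0, 1)  # mostly dark frame
print(validate_image(earth_frame))  # True: worth keeping for further analysis
print(validate_image(space_frame))  # False: discarded on board
```

The appeal of this scheme for a tumbling nano-satellite is its cost: one comparison and one sum per image, with no pointing knowledge required.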
Robust execution will use a standard SCL that was first developed for embedded flight environments. It is hoped that the SCL language will be reusable across different control systems. The SCL allows data from multiple sources to be processed, and includes scripts, constraints, data formats, templates, and definitions. The goal of the SCL is to fully define all data and allow the control system to organize and validate different system processes. It is hoped that the SCL will allow the operating system to take actions depending on a variety of inputs, including time-dependent events, operator directives, and the state of various system components [3]. The general procedure of the SCL involves two main steps: the acquisition of data and the modification of the SCL database by means of an inference engine. A data I/O module will acquire, filter, smooth, and convert the data to engineering units, while the Real Time Engine (RTE), the collective of an inference engine, command interpreter, script scheduler, and execution manager, will capture all SCL database updates and process the different rules associated with different database items. SCL scripts will perform imaging, manage communication links between the three nano-satellites, perform resource management, and coordinate communication with ground control.

The 3CS mission will use the CASPER planning software as well as the Selective Monitoring System (SELMON). The SELMON system uses multiple anomaly models to identify and isolate phenomena and anomalous events. Outlier analysis will be used as an anomaly detection tool, while attention focusing will attempt to determine how much of the system, and which components, are being affected by an anomalous event. During the first few moments of an anomalous event, different components of a system may be bombarded with data and a host of abnormal signals, all of which should be analyzed. The goal of the attention focusing feature is to help organize and catalog the state of the various components of a system at the time of anomaly detection.
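Outlier analysis of the kind SELMON performs can be approximated with a robust z-score over a telemetry channel's recent history: readings far from the running median are flagged. This is a generic stand-in for illustration, since the report does not describe SELMON's actual anomaly models, and the telemetry values are invented.

```python
import numpy as np

def flag_outliers(readings, z_threshold=3.5):
    """Flag readings whose modified z-score (median/MAD based) is extreme."""
    median = np.median(readings)
    mad = np.median(np.abs(readings - median)) + 1e-9  # median absolute deviation
    modified_z = 0.6745 * (readings - median) / mad
    return np.abs(modified_z) > z_threshold

# A bus-voltage channel with one anomalous dip (values are illustrative).
voltage = np.array([28.1, 28.0, 28.2, 27.9, 28.1, 22.4, 28.0, 28.1])
print(np.nonzero(flag_outliers(voltage))[0])  # -> [5], the anomalous sample
```

An attention-focusing layer would then rank the components whose channels fire such flags simultaneously, so that diagnosis starts with the most affected subsystem.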
2.7 Challenges in the Design of Autonomous Agents and Mission Planners

Just as there is no one perfect autonomous agent prototype, there are no specific obstacles and challenges that are applicable to the design and implementation of all autonomous agent missions. Unique mission goals will present unique challenges. General software design and mission planning concepts are, however, widely applicable. Different components of a system must be effectively coordinated; a system language must be exact enough to allow for intricate expressions, yet simple enough for debugging and transmission purposes; and image recognition and analysis tools should be effective without requiring extensive amounts of computational resources.

Texture analysis algorithms must address resource availability issues. Mathematical and statistical programs, although very effective, might only be used to a limited extent when implemented in an autonomous agent with limited computational power. Tools that require less computational power must be created, or current computation-intensive tools must either be adjusted or applied only when absolutely necessary.

Of key concern to the MISUS project are issues related to coordination among a host of robots and the ability to determine how to form a hypothesis. Several robots will be operating in tandem in an effort to collectively increase the amount of science data that is collected. Communication and coordination must be managed, possibly through the delegation of a "master" robot. The MISUS system must also be able to effectively produce a hypothesis, and then either refute or refine the hypothesis in response to test data. How can such decisions be made effectively and "intelligently" with limited use of hard-coded parameters and without the use of preset upper or lower bounds? If hard-coded parameters and threshold values are used, how certain are we that an anomalous, as yet undiscovered, phenomenon will be effectively observed and analyzed?

Design issues in the DARE system focus on the ability to recognize and track new events. The computational complexity of detecting and tracking objects through the analysis of successive images favors the pruning of the knowledge space so that computational resources are not starved. Mathematical analysis and a host of statistical analysis procedures may be used so that object tracking is more effective. As is the case with the image texture analysis algorithms, mathematical analysis tools must be managed in light of limited onboard computational resources.

The ASPIRE project is an example of a close-proximity rendezvous mission with a comet. The Deep Space 4 / Champollion (ST4) mission, in fact, proposes to land on a comet, an even more daunting task considering the unstable nature of comet surfaces. In a landing scenario, re-planning and scheduling algorithms must be very fast and efficient. Because of actual contact with the comet, a break-up event must be detected almost instantly, and likewise a plan for escape must be initiated immediately. An autonomous agent landing on a comet will not have time for image analysis. More importantly, images taken while an agent is anchored to a comet surface do not provide the ability to see the comet from far away; hence there must be another way to detect comet breakup.

The Three Corner Sat (3CS) mission is also unique because it employs a series of micro-satellites that will be tumbling and taking images at spaced intervals. The micro-satellites must be coordinated, and collected data must be analyzed so that only "usable" science products are returned for analysis. In the 3CS mission, there is a need to keep mission costs low while at the same time employing new technologies that can be costly to develop and perform. The tumbling feature of the 3CS mission means that the satellites cannot adjust their orientation and attitude, so spacecraft construction costs are reduced. The image analysis algorithms must be efficient and yet functional on a system with limited computational resources. The advent of new autonomous technologies will surely help advance the science of space exploration, but the extent to which the new technologies can be applied must be taken into consideration. This is similar to the image and texture analysis projects, where powerful tools have been created but, because of computational or resource constraints, can be implemented only to a limited extent.
3. Support Technology for Autonomous Agents

A variety of technologies exists to support the development of autonomous agents. The products of research on these technologies provide better and more capable autonomous functionalities that can be used for space missions and autonomous space probes. Machine learning, the process by which an agent organizes information and then uses that organized information to improve its future performance on some particular task, is one technology that is highly applicable to autonomous space missions. Discussed here is the general process of machine learning, as well as the concept of neural networks, a technology that helps make machine learning algorithms efficient and reliable.

3.1 Machine Learning

Machine learning is concerned with the design and deployment of algorithms that improve their performance based on past experience or data. There are various approaches to machine learning, including classification, clustering, regression (linear and non-linear), hypothesis modeling, and the use of neural networks. Classification employs feature identification tools such as template matching algorithms; clustering includes image segmentation and texture analysis techniques; regression involves the use of predictive models; and hypothesis modeling uses simulations to better define classification models. Each of these techniques uses a host of statistical and graphical tools, and in many cases the exact design of the machine learning component of an autonomous agent is specific to a mission and is tailored to the expected available resources. For example, analysis and classification can be phenomenological or hypothesis-based. Phenomenological analysis includes the detection and classification of objects based on anticipated phenomena, while hypothesis-based detection and classification schemes involve looking for data that either supports or refutes a hypothesis created from data-specific knowledge.

The general machine learning process involves a series of steps: observation, hypothesis generation, model formulation, testing, and refinement/rejection [10]. The first phase involves the organization and exploration of high-dimensional data. Dimension reduction and generalization algorithms may be employed to make the data more manageable, considering that this early stage of machine learning involves a large amount of data. As data is collected, classification and anomaly detection are performed. The goal of classification is to spatially order and categorize data, while outlier analysis—a data mining technique—is used to screen for potentially "interesting" events that can be good candidates for further investigation.

During the second stage of the machine learning process, a hypothesis is formed. Clustering techniques are used to fit data into a series of probability distributions, and data is made to fit predictive models that can be used to refine classification schemes. These probability distributions, models, and categorized data sets can be applied to nonlinear regression techniques that attempt to transform observed patterns into potential hypotheses.
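The first two stages (screening for outliers and fitting data to probability distributions) can be illustrated with a Gaussian mixture model: the fitted mixture gives cluster assignments, and samples with very low likelihood under the mixture are flagged as the "interesting" candidates worth a closer look. The synthetic data and the 2% likelihood cutoff are invented for the example; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic "mineral signature" clusters plus one anomalous sample.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
cluster_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(100, 2))
anomaly = np.array([[1.5, -2.0]])
data = np.vstack([cluster_a, cluster_b, anomaly])

# Fit a two-component mixture: each component is one probability distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = gmm.predict(data)                # similarity classes (clusters)
log_likelihood = gmm.score_samples(data)  # per-sample fit to the learned model

# Flag the least-likely 2% of samples as candidate "interesting" events.
cutoff = np.quantile(log_likelihood, 0.02)
outliers = np.nonzero(log_likelihood <= cutoff)[0]
print("clusters found:", np.unique(labels).size, "| outlier indices:", outliers)
```

The anomalous sample sits far from both fitted components, so it lands in the flagged set; in the survey's terms, it is a data point screened by outlier analysis for further investigation.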
The third major step in the machine learning process involves the formulation of a model in an attempt to explain the phenomena, the classification categories, and the proposed hypotheses. Nonlinear regression again can be used, as well as a host of mathematical analysis tools such as trainable Markov Random Fields and Bayes Networks. These analysis tools help interpret data and analyze patterns [32]. The goal is to label data points as instances of interesting or non-interesting phenomena so that only relevant data points are used when formulating or fitting a model. Labeling data as either interesting or non-interesting involves the use of generative and discriminative models. Generative models combine the various attributes of a dataset in an attempt to define the underlying physical phenomena, while discriminative models merely try to distinguish interesting from non-interesting data points. The generative model can be considered a bottom-up technique, while the discriminative model looks only at the "whole picture" without concern for the underlying attributes.

At this point, classification schemes have been developed, one or more models have been proposed to explain the classification schemes and phenomena, and additional data sets and data points are acquired to test the validity of the model. This fourth step of the machine learning process is concerned with determining what predictions to make and what data to gather to test those predictions. Several data point selection criteria have been proposed over the years, including choosing the data point for which the current machine learning model is least certain of its classification; this is an attempt to collect and label the most informative data point.

The final step of the machine learning method involves deciding whether to refine, refute, or retest the hypotheses and model predictions. In essence, the machine learning process now attempts to answer: "Is there enough evidence to refute or confirm a hypothesis? If not, should the hypothesis production and model development protocols be repeated? If so, should the hypothesis be included in the database of knowledge?" At this stage there are several issues concerning the use of static lower and upper bounds. A predetermined, hard-coded sentinel value may be used to determine whether a sufficiently high or low characteristic of a data model or dataset is significant enough to merit rejection or acceptance of a hypothesis or prediction. Sentinel values and hard-coded trigger thresholds can be used as confidence markers under the assumption that future models will produce results similar to previous models and model parameters.
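The least-certain selection criterion mentioned above (often called uncertainty sampling in the machine learning literature) is simple to state in code. The sketch below is a minimal illustration in Python; the probability array and the two-class setup are assumptions made for the example.

# Minimal sketch of the least-certain selection criterion: given
# predicted class probabilities for unlabeled points, pick the point
# whose top prediction is weakest. Any model that returns class
# probabilities could stand in for the array used here.
import numpy as np

def least_certain_index(probabilities: np.ndarray) -> int:
    """probabilities: (n_points, n_classes) array of predictions.
    Returns the index of the point the model is least certain about."""
    confidence = probabilities.max(axis=1)   # certainty of best guess
    return int(confidence.argmin())          # least-confident point

# Three unlabeled points; the middle one is nearly a coin flip,
# so it is the most informative one to label next.
probs = np.array([[0.95, 0.05],
                  [0.52, 0.48],
                  [0.80, 0.20]])
print(least_certain_index(probs))  # -> 1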
Many programs that use machine learning protocols are being developed or are already in use within medicine, mathematics, and space exploration. The 3-D Computer-Automated Threshold Amsler Grid Test performs real-time medical analysis that can aid in the diagnosis of eye diseases. The standard Amsler Grid Test is administered to a patient with the aid of a touch-sensitive computer screen. The patient's responses are used as input for a modeling protocol that simulates successive Amsler Grid Tests at various grayscale levels. The use of additional grayscales allows scotomas (portions of the retinal field that are non-functional) to be depicted in three dimensions rather than the two dimensions of the standard Amsler Grid Test. The output of the program is a depiction of a patient's visual field [11].

The Cellerator™ project uses the Mathematica® program package to generate equations that model biological compounds. Ordinarily, chemical networks must be manually translated from cartoon-like diagrams to chemical equations and fitted to ordinary differential equations [13]. The manual translation process is often a tedious task that is hard to automate because of the many variations and configuration states of different biological elements. The Cellerator program automatically translates chemical networks into ordinary differential equations, which are more easily modeled. Once the Cellerator program generates and translates chemical equations into ordinary differential equations, various components of those equations are used to model and eventually solve for various chemical interactions [12].
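A toy sketch can convey the kind of translation Cellerator automates. Under the standard mass-action assumption, each reaction contributes a rate term, the rate constant times the product of the reactant concentrations, which is subtracted from each reactant's derivative and added to each product's derivative. The Python below illustrates that idea only; it is not Cellerator's actual interface, which is built on Mathematica.

# Toy illustration: translate a chemical reaction network into ODE
# right-hand sides via mass-action kinetics.
def reactions_to_odes(reactions):
    """Each reaction is (reactants, products, rate constant).
    Returns a function mapping concentrations to d[species]/dt."""
    def derivatives(conc):
        d = {s: 0.0 for s in conc}
        for reactants, products, k in reactions:
            rate = k
            for s in reactants:          # mass action: k * [A] * [B] ...
                rate *= conc[s]
            for s in reactants:
                d[s] -= rate
            for s in products:
                d[s] += rate
        return d
    return derivatives

# A + B -> C with rate constant 0.1, and C -> A with rate constant 0.05
odes = reactions_to_odes([(["A", "B"], ["C"], 0.1),
                          (["C"], ["A"], 0.05)])
print(odes({"A": 1.0, "B": 2.0, "C": 0.5}))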
The Diamond Eye project is concerned with the cataloging and identification of geological features for scientific analysis. The amount of scientific data that is acquired grows with the launch of each scientific mission because of continued improvements in data-gathering techniques. Filtering and onboard autonomous analysis capabilities allow an autonomous agent to perform some of the analysis that otherwise would have to be done by hand, a very time-consuming process [20]. The architecture of the Diamond Eye project is based on a distributed software methodology in which scientists can interact with several data repositories [14,18]. An adaptive recognition algorithm mathematically processes the low-level pixels of an image and uses several training models in an effort to construct recognizer features. Spatial structures, various object templates, and temporal analysis tools are also used.

3.2 Neural Networks

The concept of artificial neural networks is inspired by biological nervous systems, is concerned with information processing, and is part of the machine learning field. Artificial neurons have one or more inputs, generally one output, and two main modes, training and production. Each input has an associated weight, which helps determine the importance of that input's data. Training involves presenting information to a neural network in an effort to determine the input weights and is usually done in one of two ways, supervised or unsupervised. The supervised training method involves providing both the inputs and the desired output to a neural system and then using back propagation to adjust the input weights; the error between the desired and actual outputs is analyzed. During unsupervised (or adaptive) training, a neural network is not provided with the output, so the network must determine which features in the input data set are important and how various features can be grouped and analyzed so that effective input weights can be set. Multi-layer network architectures, error detection and correction, data reduction, and several other issues are pertinent to effective neural networks and help further refine and enhance them. There are several advantages to using a neural network, one of which is pattern recognition that can lead to effective data organization methods. In the case of unsupervised learning, organization is performed without any a priori knowledge; patterns in multi-dimensional space can be transformed to a lower-dimensional space that can be more easily ordered [15].
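A single artificial neuron is enough to illustrate the supervised mode described above: inputs are weighted, an output is produced, and the error between the desired and actual outputs drives the weight updates. The sketch below uses one sigmoid neuron and a delta-rule update (a full multi-layer network would propagate the error backwards layer by layer); the learning rate, epoch count, and OR-function data are illustrative assumptions.

# Minimal sketch of supervised training for one artificial neuron.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_neuron(inputs, targets, lr=0.5, epochs=2000):
    rng = np.random.default_rng(1)
    w = rng.normal(size=inputs.shape[1])     # one weight per input
    b = 0.0                                  # bias term
    for _ in range(epochs):
        out = sigmoid(inputs @ w + b)        # production pass
        err = targets - out                  # desired minus actual
        grad = err * out * (1 - out)         # sigmoid derivative term
        w += lr * inputs.T @ grad            # adjust the input weights
        b += lr * grad.sum()
    return w, b

# Learn a simple OR function from labeled examples (supervised mode).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
w, b = train_neuron(X, y)
print(np.round(sigmoid(X @ w + b)))  # -> [0. 1. 1. 1.]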
4. Autonomous Agent Planners

The planning software for an autonomous space agent generates mission plans so that onboard science can be scheduled and performed, useful data can be acquired, and mission objectives can be met. The planner in an autonomous space agent must manage resources and must be able to resolve conflicts between different subsystems. In addition, the planner must produce commands that are valid, are not redundant, are rational, and adhere to flight rules. As is the case with the overall architecture of an autonomous agent, there is no recipe for a "perfect" autonomous planner. This section includes an overview of planner functions, batch planners, dynamic planners, and Hierarchical Task Network (HTN) planners. The Automated Scheduling and Planning Environment (ASPEN) and the Simple Hierarchical Ordered Planner (SHOP) are also discussed.

4.1 Coordination of Subsystems

Different subsystems of a space agent require instructional commands over the course of a mission. Some components of an autonomous spacecraft may be turned on for only brief amounts of time, in which case appropriate command sequences must be sent to those components at the right moment. An example is the deployment of a heat shield when a spacecraft enters the atmosphere of a planet; the deployment of the shield must be timed just right so that the spacecraft is not damaged. Other subsystems of an autonomous agent may be functional throughout the entire mission and may require updated commands on a periodic basis. A navigation system, for example, must be operational at all times and must receive data from orientation sensors. Still other components may be turned on only in response to special events. If a spacecraft detects an anomaly, a camera can be turned on to gather information so that science data may be returned to Earth for analysis.

Because a spacecraft contains several subsystems, the planner must be able to coordinate many tasks. Some sequences of events are valid only when performed in a specific order. For example, if a spacecraft approaches an anomalous object, the order in which different actions are performed is very important:

1) Turn thrusters off
2) Turn camera on
3) Take 30 images in 30 seconds
4) Turn camera off
5) Turn thrusters on
6) Resume previous flight path

If the actions are performed out of sequence, then the camera might not record the anomalous object. Even worse, if the wrong actions are performed, then the spacecraft may collide with the anomalous object and become inoperative.
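One simple way a planner can enforce such orderings is to store them as before/after pairs and reject any command list that violates a pair. The sketch below is a minimal illustration; the rule table and command names are assumptions made for this example, not an actual flight-rule database.

# Minimal sketch of ordering-constraint checking for a command list.
ORDER_RULES = [
    ("thrusters_off", "camera_on"),    # camera only after thrusters off
    ("camera_on", "take_images"),
    ("take_images", "camera_off"),
    ("camera_off", "thrusters_on"),
    ("thrusters_on", "resume_flight"),
]

def sequence_is_valid(commands):
    """Check that every (before, after) rule is respected."""
    position = {cmd: i for i, cmd in enumerate(commands)}
    for before, after in ORDER_RULES:
        if before in position and after in position:
            if position[before] > position[after]:
                return False
    return True

good = ["thrusters_off", "camera_on", "take_images",
        "camera_off", "thrusters_on", "resume_flight"]
bad = ["camera_on", "thrusters_off", "take_images",
       "camera_off", "thrusters_on", "resume_flight"]
print(sequence_is_valid(good), sequence_is_valid(bad))  # True False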
In order for the planner to produce effective plans, the planner must take into consideration a variety of factors. Different subsystems can directly or indirectly affect other subsystems, so the planner must anticipate, and potentially resolve, possible conflicts. Each command that is generated by the planner must be valid and must conform to pre-defined rules, if any. The planner must also make sure that plans do not violate flight objectives and that plans are not redundant. Finally, the planner must be rational, meaning that it arrives at the right solution for the right reasons.

4.2 System Languages and Planner Logic

The ability of the planner to validate and check a command sequence is highly dependent on the underlying system language and the way that logical statements are represented. The system language must be able to easily and effectively express command sequences, flight rules, the states of different subsystems, and the contents of databases. In order for the planner to check and validate different commands, logical constructs must be allowed so that a variety of statements can be verified. If the camera on board the spacecraft can be turned on only when the spacecraft burners are turned off (the combined energy requirement of the two components may be more than is available at any one time), then there must be a way for the system language to represent that requirement. Similar axioms, corollaries, and logical facts must be representable, as must while-loops, if-statements, and other programming constructs. The ability of the system language to combine, manipulate, validate, classify, and order a variety of statements comes through the use of logic [27]. The sum of all logical statements can be used to define a model of a spacecraft's subsystems, and this is how knowledge can be represented: simple facts are combined and relationships are made to form complex ideas and expressions. Most importantly, the use of logical statements and logical operators allows command sequences to be validated. Conditionals and iterations of command sequences can be represented using logical constructions and an effective modeling language [29].

The actual way in which logical statements are represented in an autonomous agent depends on the type of logic that is used. Logic stems from natural language constructs and generally includes two types, propositional and predicate logic. Both propositional and predicate logics allow meaningful relationships to be expressed, but the two types differ in their use of sentential connectives and predicates. Propositional logic allows for reasoning about statements only in terms of "true" and "false" values, while predicate logic allows reasoning about true and false statements and about individuals; i.e., a statement can be quantified.

In propositional logic, a sentential connective is an operator that operates on one or more complete ideas to form a new aggregate idea. Truth-functional operators determine the truth of a statement from the truth values of the statement's individual components. For example, the statement "the sky is blue" can be combined with the statement "ostriches don't fly" to form the sentence "The sky is blue and ostriches don't fly". This sentence is a construct of the two statements and is called a conjunctive construct because the word "and" is used. The sentence is true if, and only if, each of the two statements is true. Aside from the conjunctive construct, there are the disjunction and negation constructs, as well as a range of constructs that can be formed using boolean operators such as "if", "and", "or", "nor", and "implies". The ability to combine true or false statements into aggregate ideas is the underlying concept of propositional logic; it is assumed that all propositions have a definite truth value, namely either true or false.
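The camera/burner requirement above is exactly the kind of statement a propositional encoding can capture: the rule "the camera may be on only when the burners are off" is the conditional camera_on implies not burners_on. The sketch below illustrates this with one such rule; the state names and rule set are assumptions made for the example.

# Minimal sketch of a flight rule as a propositional constraint over
# subsystem states. camera_on -> not burners_on is encoded with the
# usual material-conditional equivalence (not p) or q.
from typing import Dict

def camera_power_rule(state: Dict[str, bool]) -> bool:
    return (not state["camera_on"]) or (not state["burners_on"])

def state_is_legal(state: Dict[str, bool], rules) -> bool:
    return all(rule(state) for rule in rules)

rules = [camera_power_rule]
print(state_is_legal({"camera_on": True, "burners_on": False}, rules))  # True
print(state_is_legal({"camera_on": True, "burners_on": True}, rules))   # False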
Predicate logic, however, is concerned not only with sentential connectives but also with the actual structure of atomic propositions. A predicate is whatever is said of the subject of a statement: a function that maps individuals to truth values [30]. Atomic sentences are constructed by applying predicates to individuals, and this is how unique individuals can be quantified. For example, the two statements "all men are mortal" and "Socrates is a man" can be combined to derive the sentence "Socrates is mortal". Predicate logic, then, deals with propositions whose subjects and predicates are separately signified. There are several types of predicate logics, including first-order, higher-order, inclusive, monadic, and polyadic predicate logics, which differ in what arguments their predicates may take. In first-order logic, for example, predicates take only individuals as arguments and quantifiers bind only individual variables; the "Socrates is mortal" example is an instance of first-order predicate logic.

4.3 Batch Planners

A batch planner, often considered the "traditional" model, divides a mission into a series of planning horizons, where each horizon lasts for a predefined amount of time and during which a plan is set into action. Each time the mission timeline approaches one of these planning horizons, the planner projects what the state of the agent will be at the moment when the current plan expires. The planner generates a new plan for the next plan horizon, taking into account the expected end state of the current plan and the goals of the mission.

Figure 1: Batch Planner. Plans are joined end-to-end, and each plan must run to completion before the next plan is implemented; the plans are separated by plan horizons. Before each plan horizon, the planner uses the current state values to project the state of the agent at the completion of the current plan. This projection is used as the starting point for the next phase. [16]

This traditional planning model has the advantage that the length of time between event horizons is known, so at the time of actual plan generation the planner routine knows exactly how much time remains, and continued revisions can be made to the plan until the event horizon occurs.
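The batch cycle can be summarized in a few lines of code: before each horizon, project the state at the end of the current plan and generate the next plan from that projection. The sketch below is a minimal illustration; the helper functions are stand-ins for mission-specific planning and simulation logic.

# Minimal sketch of the batch planning cycle.
def project_end_state(state, plan):
    """Stand-in: simulate the current plan through to its completion."""
    for action in plan:
        state = action(state)
    return state

def batch_planner(initial_state, goals, make_plan, num_horizons):
    state, plans = initial_state, []
    for horizon in range(num_horizons):
        plan = make_plan(state, goals)          # plan for the next horizon
        plans.append(plan)
        state = project_end_state(state, plan)  # projection becomes the
    return plans                                # next plan's start state

# Toy usage: each "action" just increments a counter in the state.
increment = lambda s: {"steps": s["steps"] + 1}
make_plan = lambda state, goals: [increment] * 3
print(len(batch_planner({"steps": 0}, None, make_plan, 4)))  # -> 4 plans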
The traditional planning model, however, also has several limitations, including the need for dedicated resources, an inability to produce a new plan quickly, and an overall inability to produce the most effective plan. The batch concept of the traditional planner requires that the planning phase be an off-line process, where the planning routine is invoked only when a time horizon approaches. This off-line scenario means that if system resources are limited, then the planner cannot be invoked. Alternatively, the planner may be allotted dedicated system resources, but then when the planner is not running, those dedicated system resources cannot be accessed and are left unused.

If an anomalous event occurs (either positive or negative), then the response time may be significant, because a new plan can be implemented only at the next plan horizon, which may not occur for a significant amount of time. A negative event may require an immediate response, and a positive event may be a short-lived science opportunity during which important science information could be collected. Because the next plan horizon may be far off, a fortuitous opportunity for data acquisition may be passed by because a new plan cannot be implemented in a short enough time.

The traditional planning model also may not be able to produce the most efficient plan if the planner is especially slow or if the planner routine must be initiated far in advance of the plan horizon. Because the batch planning method tries to project the most likely state at the end of the current batch time phase, an event or change in environmental variables that happens after the start of the planner but before the next time horizon will not be taken into consideration by the planner routine. The projection of the state at the end of the current time phase may be grossly wrong if the planning algorithm starts well in advance of the time horizon. Alternatively, the projection cannot be calculated too close to the plan horizon, because the planner must be given enough time to construct a plan that is consistent with mission goals.

4.4 Dynamic Planners

A dynamic/responsive planner, on the other hand, maintains a current goal set, a plan, a current state, and a model of the expected future state. The dynamic/responsive planner can update the goals, the current state, or the plan horizon at any time; when an update occurs, the current plan is altered and the planner process is invoked. An update may be any of a host of events, such as a malfunction. The dynamic/responsive planner is able to maintain a satisfactory plan because the most current sensory and goal integration data is used.
Figure 2: Continuous Planner. Instead of several plans joined end-to-end, there is only a current plan that is repeatedly modified in response to the current state of the agent. Goals and state representations are constantly being updated. Instead of waiting for a plan horizon, the planner constantly updates the plan [3].

A dynamic/responsive planner integrates a new plan by means of a simplified cycle in which the goals and initial state of the current plan are appropriately updated, the effects of the changes to the initial state and goals are propagated through the entire current plan, potential conflicts are identified, and plan repair algorithms remove anticipated conflicts. Conflict resolution involves the tracking of a host of agent systems, among them the communications, science data acquisition, and engineering activities, as they pertain to any changes that have been made to the plan. The dynamic/responsive planner therefore has a big advantage over the batch planner because changes to the plan can be made immediately.
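The cycle described above can be sketched as a rolling loop over incoming updates: fold each update into the current state, detect conflicts against the current plan, and repair the plan in place rather than waiting for a horizon. The toy example below uses a single power-budget conflict rule and a single repair method, both invented for the illustration.

# Minimal sketch of the continuous/dynamic planning cycle.
def detect_conflicts(plan, power_available):
    """A single toy rule: total planned power draw must fit the budget."""
    draw = sum(cost for _, cost in plan)
    return ["power_overrun"] if draw > power_available else []

def repair(plan):
    """Toy repair method: drop the most expensive activity."""
    return sorted(plan, key=lambda a: a[1])[:-1]

def continuous_planner(plan, power_available, events):
    for delta in events:                  # e.g. a degrading power supply
        power_available += delta          # fold the update into the state
        while detect_conflicts(plan, power_available):
            plan = repair(plan)           # repair the existing plan
        yield list(plan)                  # release the valid plan at once

plan = [("imaging", 5), ("downlink", 8), ("heating", 4)]
for p in continuous_planner(plan, 20, [-3, -6]):
    print(p)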
4.5 Hierarchical Task Network Planners

Hierarchical task network (HTN) planners aim to decrease planning time through hierarchical decomposition. HTN planners reduce a problem by recursively decomposing tasks into subtasks, stopping only when primitives have been reached. A primitive cannot be decomposed into a simpler unit. Planning operators are used on the primitives to perform various tasks. Sets of available methods define how different tasks can be decomposed; each method provides a schema for decomposing a task into several subtasks. Because there are many ways to decompose a large task (which may be a conjunction of multiple tasks that can themselves be decomposed), several methods can be applied effectively [25].

The input to the planner consists of a task network, a set of operators, and a set of methods: a triplet. The task network is the entire problem that needs to be solved, where each task is one specific thing that needs to be done. A task has a name and a list of arguments that include variables, constants, and various attributes. Task and network attributes may include constraints that restrict or prevent the use of some of the variables, or constraints that require that a series of tasks be performed in a certain order. Tasks can be primitive, meaning that they can be performed directly; tasks can be compound, in which case the planner must decide how to decompose them; or tasks can be goals, in which case they are just properties that must be made true. Available operators enumerate the effects of each of the primitive tasks. Methods indicate how to perform various non-primitive tasks and are defined as a pair (x,y), where x is a task that is performed on a network y. Because planning problems (the triplets) are defined mathematically, restrictions, reductions, and various comparisons can be performed effectively using the available operational semantics. A planning domain is defined as D = <Op, Me>, where Op is a set of operators and Me is a set of methods. The planning problem, therefore, is defined as P = <d, I, D>, where D is the planning domain, I is the initial state, and d is the task network that the planner needs to solve.

Using HTN planners, the overall planning process can be summarized as follows:

1. Receive the problem P
2. If P is comprised of all primitives, then
   a. Resolve conflicts in P
   b. If resolved, return P
   c. Else, return failure
3. Choose a non-primitive task t in P
4. Choose a decomposition for t
5. Replace t with the decomposition
6. Use critics to find interactions and propose resolutions
7. Apply the resolutions
8. Return to Step 2

Critics (Step 6) are functions that handle ordering constraints and resource limits and that provide domain-specific guidance in the case that a planner has been designed for a specific job and tailored algorithms/methods have been developed. Step 2 of the overall process either returns a plan in which all the primitives have been resolved (operators have been used successfully on all the tasks) or returns a failure because the plan cannot be solved (an operator does not exist for some primitive task).

There is a drawback to the use of HTN planners, however. When large, complex initial tasks are used and when interactions among non-primitives are complex, the planner may not be able to find solutions, especially if subtasks in the expanded, non-decomposed task list are interleaved. In such complex cases, HTN tasks are said to be "undecidable".
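The recursive reduction at the heart of HTN planning is compact enough to sketch directly. In the toy domain below, compound tasks are replaced by subtasks from a method table until only primitives remain; the tasks and methods are invented for the illustration, and real HTN planners additionally handle critics, constraints, and alternative decompositions.

# Minimal sketch of HTN-style task decomposition.
METHODS = {  # each method maps a compound task to a list of subtasks
    "observe_anomaly": ["safe_spacecraft", "collect_images"],
    "safe_spacecraft": ["thrusters_off"],
    "collect_images": ["camera_on", "take_images", "camera_off"],
}
PRIMITIVES = {"thrusters_off", "camera_on", "take_images", "camera_off"}

def decompose(task_network):
    plan = []
    for task in task_network:
        if task in PRIMITIVES:
            plan.append(task)                      # directly executable
        elif task in METHODS:
            plan.extend(decompose(METHODS[task]))  # recurse on subtasks
        else:
            raise ValueError(f"no method or operator for task: {task}")
    return plan

print(decompose(["observe_anomaly"]))
# -> ['thrusters_off', 'camera_on', 'take_images', 'camera_off']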
4.6 The Automated Scheduling and Planning Environment (ASPEN)

The ASPEN planning and scheduling program is a re-configurable framework that can support a wide variety of applications. High-level goals are translated into low-level commands that help achieve the objectives of a particular mission. It is the goal of the ASPEN project to reduce mission costs and eventually permit scientists to directly issue commands to a spacecraft. Rather than being dependent on mission personnel to provide command sequences, a spacecraft will receive mission goals from a scientist and, in response, autonomously generate a plan and schedule sequence. Under an automated scheduling and planning system, opportunistic and short-lived events can be effectively monitored.

The ASPEN software provides an expressive constraint modeling language, a management system that is used for the maintenance of spacecraft operations and resources, a host of search strategies, a reasoning system that is used for maintaining temporal constraints, a language for representing plan preferences, various graphical interfaces, and real-time re-planning capabilities [17]. Knowledge is stored as several classes, including activities, parameters, temporal constraints, reservations, resource variables, parameter dependencies, and state variables. Different knowledge constructs are used to define the system components that produce sequences of commands.

The architecture of the ASPEN system uses iterative algorithms, heuristics, local algorithms, and parameters. Iterative algorithms permit re-planning at any time (in contrast to the batch planning protocol described above), which is a large advantage when anomalous events or short-lived opportunistic science observations are expected. The use of heuristics allows for pruning of search trees and knowledge spaces and may allow for the quick discovery of a higher quality solution [18]. Local algorithms do not have the computational overhead associated with intermediate plans or previous failed plans; consequently, local algorithms do not guarantee that unsuccessful modifications to a plan will not be retried. The use of parameters, and the adherence to parameter constraints (in contrast to least-commitment techniques), allows resource values to be easily computed [17].

After a model is developed, ASPEN parses it into data structures that enable efficient reasoning, using seven basic components. Parameters store simple variables and are used in parameter dependency functions. Dependencies are represented and maintained in a Parameter Dependency Network (PDN), which maintains all dependencies between parameters; at any given time, all of the dependency relationships can be checked to ensure that they are satisfied [18]. Temporal constraints define relationships between the start and end times of two different activities and allow for the derivation of complicated expressions, especially when used with conjunctive and disjunctive operators. Resources are profiles that represent actual system resources or variables and permit the use of restrictions. State variables describe the possible values of a system variable over time, for example, Busy, Idle, or Corrupt. Reservations allow activities to have resource usage constraints and can be modified, turned on, or turned off depending on the need to regulate activities. Finally, activity hierarchies allow a task to be broken up into a series of sub-activities; the hierarchies are effective in mandating the order in which tasks must be performed, including configurations in series or in parallel. The use of these seven components in a variety of combinations allows for the design of plans and the repair of plan conflicts.
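To make the component vocabulary concrete, the sketch below represents an activity, a reservation, and one temporal constraint as plain Python structures. This mirrors the concepts just described, not ASPEN's actual modeling language or internal data structures.

# Minimal sketch of a few ASPEN-style model components.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: int          # start time (e.g. seconds into the plan)
    duration: int

@dataclass
class Reservation:
    activity: str       # which activity holds the reservation
    resource: str       # e.g. "power"
    amount: float       # usage charged against the resource profile

def ends_before_starts(a: Activity, b: Activity) -> bool:
    """A simple temporal constraint: activity a must finish before b."""
    return a.start + a.duration <= b.start

warmup = Activity("camera_warmup", start=0, duration=30)
imaging = Activity("take_images", start=30, duration=60)
res = Reservation("take_images", "power", 12.5)
print(ends_before_starts(warmup, imaging), res.resource)  # True power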
ASPEN defines ten basic types of conflicts: abstract activities, unassigned parameters, violated parameter dependencies, unassigned temporal constraints, violated temporal constraints, unassigned reservations, depletable resources, non-depletable resources, state requirement conflicts, and state transition conflicts [19]. An abstract activity conflict occurs when a command has not been decomposed into the appropriate sub-commands or when there exist several possible command decompositions, in which case an algorithm must decide which decomposition should be performed. An unassigned parameter conflict represents the condition when a parameter has no value but has been included in a plan; a parameter can be unassigned, but once part of a plan, a parameter must have a concrete value. When two parameters violate a functional relationship, a parameter dependency violation occurs. Parameter dependencies must be constantly checked because of continued updates to parameters. When a temporal constraint exists for an activity instance that has not been selected to satisfy that constraint, an unassigned temporal constraint conflict occurs. A violated temporal constraint conflict occurs when a temporal constraint is assigned to a relationship that does not hold; this prevents the setting of constraints that are potentially impossible to maintain. An unassigned reservation conflict occurs when there exists a reservation in an activity that has not yet been assigned to a resource. Timeline conflicts address issues in the use of depletable and non-depletable resources. An upper or lower bound is set for most variables, and the use of resources is closely monitored so that no one spacecraft component exceeds its allotted use of a resource. Timeline conflicts are generally the most difficult to recover from because the solution may require that many components of a system be adjusted. Finally, state variable conflicts result either when a reservation mandates the use of a state that is not available, in which case a state requirement conflict occurs, or when a reservation is changed to a condition that is not allowed by a state variable, in which case a state transition conflict occurs.

ASPEN performs iterative repair searches while taking different constraints into consideration. Constraints are organized into classifications depending on how a constraint can be violated. The different violation types have appropriate repair methods, and the search space includes all permissible repair methods as applicable to all possible conflict types in all possible combinations. The iterative repair algorithm searches the space of schedule components and makes decisions at key instances. Choices are made when one of several elements must be selected, including a conflict, a repair method, an activity for a repair method, a start time for an activity, a duration for an activity, a timeline for a reservation, a decomposition, a change in a parameter, or a value for a parameter. The continuous planning algorithm receives a current plan, a current goal set, and a current state. The current goal set is updated to reflect new goals, and conflicts are detected. When a schedule contains several conflicts, the iterative repair algorithm selects a conflict to attack and chooses a repair method. Because the conflict search space contains pairs of conflicts and associated repair methods, a search for a conflict also returns the appropriate repair method(s).
There are many possible classes of repair methods, including moving an activity to a different location in the plan, creating a new activity, creating a reservation, canceling a reservation, and deleting an activity.
During the decision stage, the heuristics feature of the ASPEN system helps to prune the search tree when several solution methods exist. Several domain-independent heuristics have been developed, including a heuristic for sorting conflicts according to type, a heuristic for selecting the repair method when more than one repair method exists, and a heuristic for determining the start and end times of activities that are being shuffled to different locations. The iterative planning algorithm releases appropriate non-conflicting sections of a plan to an executive for execution. During the entire iterative planning process, the state of the spacecraft is represented by a series of timelines that portray the current and potential future states of the system. The algorithm continually updates the timelines in response to new events and actual plan consequences. Most importantly, the planning algorithm tries to project the state of the system only into the near future; projections for the long term are generally very abstract so that modifications can be made easily [21].
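The repair loop itself can be sketched compactly: select a conflict, apply a repair method keyed to its type, and repeat until the schedule is conflict-free or an iteration budget runs out. In the toy example below, the only conflict type is a timeline overlap and the only repair method shifts an activity later; both are assumptions made for the illustration, not ASPEN's actual conflict taxonomy.

# Minimal sketch of iterative repair over a schedule.
def iterative_repair(schedule, detect, repair_methods, max_iters=100):
    for _ in range(max_iters):
        conflicts = detect(schedule)
        if not conflicts:
            return schedule                 # conflict-free schedule
        conflict = conflicts[0]             # heuristic: attack first conflict
        schedule = repair_methods[conflict["type"]](schedule, conflict)
    raise RuntimeError("iteration budget exhausted with conflicts left")

# Toy domain: consecutive activities overlap on one shared instrument.
def detect(schedule):
    out = []
    for a, b in zip(schedule, schedule[1:]):
        if a["end"] > b["start"]:           # overlap on the timeline
            out.append({"type": "overlap", "activity": b})
    return out

def move_activity(schedule, conflict):      # repair: shift activity later
    conflict["activity"]["start"] += 1
    conflict["activity"]["end"] += 1
    return schedule

plan = [{"name": "warmup", "start": 0, "end": 3},
        {"name": "image", "start": 2, "end": 5}]
print(iterative_repair(plan, detect, {"overlap": move_activity}))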
4.7 The Simple Hierarchical Ordered Planner (SHOP)

The AI Planning group at the University of Maryland is designing the Simple Hierarchical Ordered Planner (SHOP), which implements Ordered Task Decomposition, a special case of Hierarchical Task Network (HTN) planning. Using Ordered Task Decomposition, a planner generates tasks in the same order in which the tasks will later be executed. The syntax and semantics of the SHOP planning system define logical symbols, logical inferences, tasks, operators, plans, methods, domains, and problems; first-order logic is used.

Logical symbols include constant, function, predicate, and variable symbols, as well as terms, atoms, ground atoms, conjuncts of atoms, Horn clauses, substitutions, and most-general unifiers. Constant symbols are individuals, for example, "Susan" and "3". Function symbols map individuals to other individuals, for example, "age_of(Susan) = 3". Logical inference is through the use of states, axioms, and satisfiers. A state is defined as a list of ground atoms, while an axiom is a set of Horn clauses. (A Horn clause is a clause containing at most one positive literal; for example, "has_wings(bird)", indicating that a bird has wings.) Satisfiers are substitutions that help make a conjunction true.

Tasks are divided into two types, primitive task symbols and non-primitive task symbols. A task is of the form (s t1 t2 … tn), where s is the task's name and t1, t2, …, tn are the terms that define the task's arguments. A task list is defined as a list of tasks, either primitive or non-primitive. Operators are expressions of the form

(:operator h D A c)

where h is a primitive task, D and A are lists of atoms with no variables, and c is a number, the cost of the primitive task h. The operator specifies that h can be accomplished by deleting every atom in the list D from the current state and adding every atom in the list A; hence the operator defines a "before" and an "after" state. A plan is defined as a list of operator instances [31]. Methods are of the form

(:method h C 'T)

where h is a task, C is a conjunct (a precondition), and T is a task list. Domains and problems are represented as a set of axioms, operators, and methods. Planning problems are triplets (S,T,D), where S is the state, T is a task list, and D is a representation of a domain. For example, consider the following definitions for the state S, the operator o, the substitution u, and the plan P:

S = ((on a b) (ontable b) (clear a) (handempty));
o = (:operator (!unstack ?x ?y)
      ((clear ?x) (on ?x ?y) (handempty))
      ((holding ?x) (clear ?y)));
u = ((?x . a) (?y . b));
P = ((!unstack a b) (!putdown b));

Lisp is the programming language used in SHOP, but no in-depth knowledge of Lisp is necessary to understand the example. The state of the system is represented by four statements, namely that "object a is on object b", [(on a b)]; "b is on the table", [(ontable b)]; "there is nothing on top of a", [(clear a)]; and "the hand is empty", [(handempty)]. The operator o is a function that "unstacks" objects x and y, [(!unstack ?x ?y)], first making sure that object x is on top of object y, [(on ?x ?y)], that the hand is empty, [(handempty)], and that there is nothing on top of object x, [(clear ?x)]. The hand in this case can be a robotic hand, and hence must be empty if the robot is to pick up object x; likewise, there should be nothing on top of object x. Note that the use of question and exclamation marks is not analogous to their use in standard written language; here the ? indicates that x and y are variables, while the ! indicates that unstack is a primitive action. The checks that the operator performs are the preconditions, meaning that they must be true if the operator is to be applied. The final component of the operator o is the "result", indicating the end state after the operator has performed the action. The substitution u specifies that the variables x and y can be bound to the names a and b; this relates the operator to the concrete objects a and b defined in the state S. The plan P is composed of two statements, "unstack objects a and b", [(!unstack a b)], and "put object b down", [(!putdown b)].
The subgoal unstack is performed first, which would result in the following:

(o)u = (!unstack a b)

indicating that the substitution u was applied to the operator o, taking into consideration the current state S. The unstack routine would then be performed, which would first check that there is nothing on top of a, that a is on top of b, and that the hand is empty. Although this is a simple example, it demonstrates how a language can be used to represent a state as well as define operations. Note also that first-order logic is evident here in the precondition states of the operator o and the state S: (clear ?x), (on ?x ?y), and (handempty) must all first be true in order for the operator to be applied, while (on a b) and (ontable b) ground the variables x and y to the specific objects a and b.

The SHOP algorithm can be summarized as follows:

procedure SHOP(S,T,D)
1. if T = nil then return nil endif
2. t = the first task in T
3. U = the remaining tasks in T
4. if t is primitive and there is a simple plan for t then
5.   nondeterministically choose a simple plan p for t
6.   P = SHOP(result(S,p),U,D)
7.   if P = FAIL then return FAIL endif
8.   return cons(p,P)
9. else if t is non-primitive and there is a simple reduction of t in S then
10.  nondeterministically choose any simple reduction R of t in S
11.  return SHOP(S,append(R,U),D)
12. else
13.  return FAIL
14. endif
end SHOP

where S is the state, T is a task list, and D is a representation of a domain.
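For readers who prefer running code, the sketch below mirrors the SHOP recursion in Python for a one-operator blocks-world fragment of the example above. Substitutions, costs, and backtracking over alternative reductions are omitted, and the task and method names are assumptions made for the illustration.

# Minimal sketch of the SHOP recursion. Tasks are strings, operators
# apply primitive tasks to a state of ground atoms, and methods
# reduce non-primitive tasks to ordered subtask lists.
def shop(state, tasks, operators, methods):
    if not tasks:
        return []                        # line 1: empty task list -> nil plan
    t, rest = tasks[0], tasks[1:]
    if t in operators:                   # t is primitive
        new_state = operators[t](state)
        if new_state is None:
            return "FAIL"                # preconditions not met
        plan = shop(new_state, rest, operators, methods)
        return "FAIL" if plan == "FAIL" else [t] + plan
    if t in methods:                     # t is non-primitive: reduce it
        return shop(state, methods[t] + rest, operators, methods)
    return "FAIL"

def unstack_a_b(state):                  # operator with delete/add lists
    pre = {("on", "a", "b"), ("clear", "a"), ("handempty",)}
    if not pre <= state:
        return None
    return (state - {("on", "a", "b"), ("handempty",)}) \
           | {("holding", "a"), ("clear", "b")}

state = {("on", "a", "b"), ("ontable", "b"), ("clear", "a"), ("handempty",)}
operators = {"!unstack-a-b": unstack_a_b}
methods = {"clear-b": ["!unstack-a-b"]}  # method reduces task to subtasks
print(shop(state, ["clear-b"], operators, methods))  # -> ['!unstack-a-b']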
5. Future Work

This survey report has touched on several unresolved issues and general concerns in the design and use of autonomous agents in space missions. Section 2.7 of this report lists several design and implementation issues for a few autonomous agent technologies and several autonomous agent space missions. Specific issues can be addressed in more detail, different autonomous technologies can be implemented in an actual robot, and general autonomous behavior techniques can be modified and made compatible with the resources available on autonomous spacecraft. Discussed in this section are three possibilities for future work.

The image analysis tools discussed in this survey report have several limitations when object detection is used in autonomous agents. The primary obstacles to implementing image analysis tools onboard spacecraft are related to the availability of resources. Image analysis algorithms use mathematical modeling, statistical analysis functions, and a variety of image acquisition and storage features, all of which require a great amount of computational power, time, and energy. An image analysis tool can be very effective when implemented on a computer system with unrestricted access to a fast multi-processor, a large database, and an unlimited energy source. In comparison, a spacecraft has access to very limited resources. Space flight logistics, a limited source of energy, and a limited amount of computational power are all features of spacecraft that limit the extended use of powerful image analysis tools. Future work could focus on the scalability of these powerful ground-based image analysis tools and on the development of comparable algorithms that require fewer resources. Different aspects of current image analysis tools could be optimized and new functions could be developed. Related to the development of such new tools are systems that can model space environments and provide a "virtual" test platform. Image analysis tools could be modified or developed, and then tested within an environment that mimics the conditions of space.

Several of the planner features discussed in this paper could be implemented in an autonomous robot with simple path-planning capabilities. As is the case when image analysis tools are integrated into an autonomous agent, adding path-planning capabilities to a robot would require the modification of computer code in response to the resources of the robot. Energy resources, sensors, the planning module, and a command executive must all be integrated to function in collaboration. The robot could be tested on a mock alien terrestrial surface to ensure that the planner functions properly.

A detailed analysis of existing planners could ascertain whether different planning programs are scalable for use in autonomous space missions. Different features of a planner could be analyzed, including the ability of the planner to produce valid sequences, the ability of the planner to quickly generate command sequences in response to anomalous events, and the ability of the planner to recover from malfunctions. After various planners are analyzed, the similarities and differences among the benchmark studies could be used to propose a list of features and abilities required of a space-worthy autonomous planner.
6. References

1. The MISUS Multi-Rover Project, website, http://www-aig.jpl.nasa.gov/public/msl.
2. B. Pell, D.E. Bernard, S.A. Chien, E. Gat, "An Autonomous Spacecraft Agent Prototype", ACM, pp. 253-261, 1997.
3. S. Chien, B. Engelhardt, R. Knight, G. Rabideau, R. Sherwood, E. Hansen, A. Rotiviz, C. Wilklow, S. Wichman, "Onboard Autonomy on the Three Corner Sat Mission", Proceedings of the 7th Symposium on Artificial Intelligence, Robotics, and Automation in Space (i-SAIRAS 2001), Canadian Space Agency, Montreal, 2001.
4. Autonomous Small Planet In situ Reaction to Events (ASPIRE) Project, website, http://www-aig.jpl.nasa.gov/public/mls/aspire/aspire.html.
5. P.G. Backes, G. Rabideau, K.S. Tso, S. Chien, "Automated Planning and Scheduling for Planetary Rover Distributed Operations", Jet Propulsion Laboratory, California Institute of Technology.
6. "Casper: Space Exploration through Continuous Planning", IEEE Intelligent Systems, September/October 2001.
7. R. Castano, T. Mann, E. Mjolsness, "Texture Analysis for Mars Rover Images", website, http://www-aig.jpl.nasa.gov/public/mls/mls_papers.html.
8. R. Congalton, "A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data", Remote Sensing of Environment, Volume 21, Issue 1, 1991.
9. T.J. Ellis, M. Mirmehdi, G.R. Dowling, "Tracking Image Features Using a Parallel Computational Model", Technical Report TCU/CS/1992/27, City University, Department of Computer Science, 1992.
10. E. Mjolsness and D. Decoste, "Machine Learning for Science: State of the Art and Future Prospects", Science, 293, pp. 2051-2055, September 2001.
11. The 3-D Computer-Automated Threshold Amsler Grid Test, website, http://www-aig.jpl.nasa.gov/public/mls/home/wfink/3DVisualFieldTest.htm.
12. B. Shapiro and E. Mjolsness, "Developmental Simulations with Cellerator", Second International Conference on Systems Biology (2001).
13. Cellerator, Jet Propulsion Laboratory, California Institute of Technology, website, http://www-aig.jpl.nasa.gov/public/mls/cellerator/