The Answer to Our Prayers?
Åsa Hagström, 751227-0047
This report is written as part of the course in media informatics (TDDB 55) given at
IDA during the fall of 1998. Each student is required to carry out a personal project
on one of a number of predefined subjects. I have chosen to study intelligent software
agents. The purpose of my project has been to examine agent architectures, the
problems that the agent paradigm gives rise to and how these problems may be
addressed in the design of software agents.
I have divided this report into two separate parts: “Background” and “Problems”.
The first part consists of a general introduction to agent concepts and the terminology
used. In order to simplify the reading and understanding of the report, as well as to
define my own view of an agent, I begin with an overview of agent characteristics
and show some example systems. Next, I give a survey of the three different types of agent architectures.
The second part begins with a presentation of some expectations of agent technology.
These are followed by a discussion on its limitations, an overview of problems in
agent technology and some thoughts on how they can be overcome.
The target group for the report consists of fourth-year students of computer science, but I hope that readers without a background in computer science will also be able to understand most of the text.
2. What is an agent?
There does not seem to be an established, generally accepted definition of the concept of software agents. Perhaps this is because the subject is relatively new, but there is a clear parallel to the difficulty AI research has had in pinpointing the exact meaning of intelligence (not so strange, since software agents are often described as intelligent). A lot of effort has gone into the search for a general definition, but as yet no consensus has been reached. I will therefore try to give a comprehensive and neutral overview of the characteristics and properties of a software agent, rather than enter the debate on the more detailed definitions.
A popular metaphor for the agent model is the digital butler; someone that "answers
the phone, recognizes the callers, disturbs you when appropriate, and may even tell a
white lie on your behalf. The same agent is well trained in timing, versed in finding
the opportune moments, and respectful of idiosyncrasies. People who know the agent
enjoy considerable advantage over a total stranger. That is just fine." Negroponte
compares agent technology to his sister-in-law: "When I want to go out to the movies, rather than read reviews, I ask my sister-in-law. We all have an equivalent who is both an expert on movies and an expert on us. What we need to build is a digital sister-in-law."
"[An agent is] a software program that can perform specific tasks for a user and
possesses a degree of intelligence that permits it to perform parts of its tasks
autonomously and to interact with its environment in a useful manner." 
The use of the word agent in the term intelligent software agent implies that it is an
entity that acts on behalf of the user: it carries out a specific task that has been
delegated to it. However, unlike ordinary computer programs the agent is supposed to
act in an autonomous way and be able to understand vague task descriptions, using its
knowledge about the user and the situation. It should adapt to the situation and ask the
questions needed to clarify its tasks. These three characteristics - autonomy,
adaptivity and inferential capability (the ability to act on abstract task definitions) -
seem to be those most commonly agreed upon.
Other characteristics that researchers want to bestow on intelligent agents include:
• Reactivity/Responsiveness: the ability to selectively sense and act; to perceive the
environment and respond to changes in it. This is often achieved with different kinds of sensors that detect changes in the environment, and actuators that act upon it. Reactivity is
one of the most basic agent characteristics, and is prominent in, for example, watcher agents, which monitor some environment (e.g. the Internet) and alert the
user when some changes occur. [1, 3, 4, 6]
• Communicative/Collaborative behaviour/Social ability: the ability to interact or
work together with the user and/or other agents towards a common goal. Agents
communicate with other agents by exchanging messages in an expressive agent
communication language and with users by some kind of (graphical/text/sound)
interface. This is useful for three reasons: first, to clarify its goals; second, to work
towards its goals, with help from other agents; third, to express the results to the
user. [1, 3, 4, 6]
• Proactivity/Goal-orientation: the ability to exhibit opportunistic, goal-oriented
behaviour and take the initiative when appropriate; not just waiting for changes in
the environment to react to. The agent should not have to be told what to do next,
but rather make suggestions to the user. [1, 3, 4]
• Mobility: the ability to migrate in a self-directed way from one host platform to
another. Instead of sending a remote procedure call (RPC) to some server, whose
services it wants to use, the agent uses remote programming and moves its own
process to the server. The main advantage of this technique is that it reduces the network load; instead of sending a large number of database queries and receiving large amounts of data back, the two transfers of the agent itself suffice. [1]
• Reasoning/Rationality: the ability to observe the environment and to make specific decisions when changes occur in it, based on the contents of the agent’s knowledge base. Doing this in a rational way means that the agent will act in order to achieve its goals, and will not act in a way that prevents its goals from being achieved (at least based on the agent’s beliefs). [1, 4]
• Learning: the ability to learn from previous experiences and adapt the behaviour
to the environment (modifying its knowledge base). 
• Personality/Character/Anthropomorphism: the ability to manifest certain
"human" attributes ranging from emotion to facial expressions and speech synthesis. [1, 6]
• Temporal continuity: a persistence of (identity and) state over long periods of time.
• Benevolence: the assumption that agents do not have conflicting goals, so that they
will always try to do what is asked of them. 
• Veracity: the assumption that an agent will not knowingly communicate false information.
The above list is compiled from many different sources, and of course a software
system does not need to exhibit all of these characteristics to be called an agent.
Indeed, most systems of today that are called agents hardly meet any of these
demands. However, the area is expanding quickly, and much research is going into
finding ways to implement the desired characteristics. Some people have a strict
definition and believe that only a program that meets all the criteria they have set up
can be called an intelligent agent; others are more liberal and allow some of the
characteristics to be absent.
But maybe it's not the exact characteristics but something more subtle that makes the
difference between a software agent and a piece of "smart code". Even if software
companies like to call their products agents simply because it makes them seem
better, I believe that most researchers that fuss over the definition share a common
view of what can be termed an agent and what cannot. It is more a question of a “gut
feeling” than of checklists.
In order to give a clearer description of agentry, as well as to provide a base for the
subsequent parts of this report, I will now give some examples of agent-based systems
that are or have been in use.
3.1. Example systems
In order to give the reader a picture of what agents can be used for, I have found some examples of real-life agent systems. These sample agents are quite different from one another: NewsHound produces a personalized newspaper, the norns in Creatures are artificial life forms, and AMROSE is part of an industrial robot. This illustrates the breadth of possible agent applications.
NewsHound is an agent that produces personalized newspapers. The user uses a web
or email form to specify a profile, telling the agent what he is interested in and what
articles he likes to read. Using the profile, NewsHound searches the articles in a number of newspapers to find those that match the user’s interests, and sends them to him. This procedure gives the users personalized newspapers, delivered every
morning by email. 
“Creatures” is a life simulating computer game, created by the software company
CyberLife Technology. In an interactive world called Albia, various life forms eat,
mate and play. The player’s goal is to guide a creature called a norn through its life in this world.
The norns are actually very complex agents, with brains, DNA, hormones and
instincts. The brain is a software-based neural network with 1000 nodes divided into
nine “lobes” dealing with attention, concepts, decisions etc (see front page).
Whenever the norn interacts with an object (such as a carrot), signals from the norn’s
senses feed through to the brain, which takes decisions on what actions to take and
learns from the results. Learning is done by adjusting the brain’s neural network.
Not only does the norn observe what happens around him; he also learns from how
his body reacts. When he eats the carrot, emitters in the brain release virtual starch.
Reactions within the norn’s “body” convert the starch into glycogen as well as a
chemical that reduces the level of the hunger drive. Now, since eating the carrot was
good for the norn, receptors in the norn’s brain reduce the firing thresholds involved
in recognising carrots and deciding to eat them. Next time, the norn will recognise a
carrot more easily, and know that it is good to eat.
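The learning mechanism described above can be sketched in a few lines of code. This is only an illustration of the principle of lowering firing thresholds after a reward; the class, names and parameters are my own inventions and have nothing to do with the actual Creatures implementation.

```python
# Sketch of threshold-adjustment learning: when an action leads to a reward
# (e.g. hunger drops after eating a carrot), the firing thresholds of the
# neurons involved are lowered, so the same stimulus triggers the same
# decision more easily next time.

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def fires(self, signal):
        return signal >= self.threshold

def reinforce(neurons, reward, rate=0.1, floor=0.1):
    """Lower the thresholds of the recently active neurons by the reward."""
    for n in neurons:
        n.threshold = max(floor, n.threshold - rate * reward)

recognise_carrot = Neuron(threshold=1.0)
decide_to_eat = Neuron(threshold=1.2)

# Eating the carrot reduced the hunger drive: a positive reward.
reinforce([recognise_carrot, decide_to_eat], reward=2.0)

# Next time, a weaker stimulus is enough to trigger the same behaviour.
print(recognise_carrot.fires(0.9))  # True after reinforcement
```

A negative reward would raise the thresholds again, making the behaviour less likely, which corresponds to the norn learning from bad experiences as well.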
The use of this technique has proven to be very fruitful. Even the creator of the norns,
Stephen Grand, has seen his hopes of norns displaying complex behaviours not only
fulfilled, but surpassed. The technology has also found other uses. For example, it is
used in flight simulators, providing more realistic enemies that fight with the skill and
ingenuity of human pilots. 
At Odense shipyard, ocean-going vessels are constructed with double hulls (two
complete steel surfaces with a space between them). Some of the construction work
has to be done in the narrow space between the hulls when they are in place, thereby
challenging traditional tooling. AMROSE is an intelligent agent that controls a robot
arm that snakes its way into the space and maneuvers the special tools or welding sets
that are required there.
(Figure: the AMROSE robot arm, with the message ”Move me to location X” propagating to the agents in all joints.)
The robot arm consists of nineteen segments, and each joint in the arm is an agent
that controls the segment ahead of it. The agent in the first joint, holding the head segment (the one with the tool), computes where it would need to be, then passes this
message to the second agent. This agent in turn computes where it would need to be
in order to get the first agent to its desired position, and so on, down to the nineteenth
arm joint. 
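The chained control scheme can be illustrated with a small sketch. The one-dimensional geometry and the fixed segment length are simplifying assumptions of mine; the real AMROSE system of course computes positions in three dimensions.

```python
# Each joint agent, asked to bring the segment ahead of it to a target
# position, computes where it must sit itself and forwards that request
# down the chain of nineteen joints.

SEGMENT_LENGTH = 1.0

class JointAgent:
    def __init__(self, name, next_agent=None):
        self.name = name
        self.next_agent = next_agent  # the agent one joint further down the arm
        self.position = 0.0

    def move_me_to(self, target):
        """Handle a "Move me to location X" message from the agent ahead."""
        self.position = target
        if self.next_agent is not None:
            # To hold this joint at `target`, the joint behind must sit one
            # segment length further back (trivially so in this 1-D sketch).
            self.next_agent.move_me_to(target - SEGMENT_LENGTH)

# Build the chain of nineteen joint agents, head joint first.
chain = None
for i in range(19, 0, -1):
    chain = JointAgent(f"joint-{i}", chain)

head = chain              # joint-1 holds the head segment with the tool
head.move_me_to(10.0)     # the request ripples down to joint-19
print(head.position)      # 10.0
```

The point of the sketch is the message passing: no joint knows about the whole arm, only about its immediate neighbour.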
4. Agent architectures
Now that we know in theory what an agent is, how can we use this knowledge to design and implement a software agent in practice? How do we construct computer
systems that satisfy the properties we want and need?
These are the problems that are addressed in the area of agent architectures: they
“specify how the agent can be decomposed into the construction of a set of
component modules and how these modules should be made to interact. The total set
of modules and their interactions has to provide an answer to the question of how the
sensor data and the current internal state of the agent determine the actions [...] and
future internal state of the agent. An architecture encompasses techniques and
algorithms that support this methodology.“ 
The choice of a specific architecture is quite a crucial one in the construction of an
agent, and will have consequences on many of the agent’s characteristics. Therefore,
it is important to be aware of the benefits and disadvantages of each model.
I will not go so far into the different architectures as to examine the algorithms used,
but merely give an overview of the existing alternatives and the problems with each
one. Although there are of course dozens of variations, agent architectures are usually and traditionally divided into three categories: deliberative, reactive, and hybrid architectures.
4.1. Deliberative architectures
A deliberative agent or agent architecture is one that contains an explicitly
represented, symbolic model of the world, and in which decisions (for example about
what actions to perform) are made via (pseudo-)logical reasoning, based on pattern matching and symbolic manipulation. Two problems arise in the implementation of such an architecture:
• The transduction problem: how can we translate the real world into a
symbolic description that the agent can understand, in time for it to be useful?
• The representation/reasoning problem: how can we symbolically represent a
complex world, and how can agents use the information to reason about a
problem in time for the results to be useful? 
The deliberative agent is often anthropomorphized: The internal state is called the
mental state, and we talk about such abstract characteristics as beliefs, desires, goals,
plans and intentions. Therefore, deliberative agents are sometimes called BDI (belief,
desire, intention) agents. The following picture gives an overview of the structure of the mental state:
As can be seen from the picture, the desires are formed first, based on the agent’s beliefs. Desires are the most vague and unstructured of the agent’s hopes for
future situations (for example, that a specific state of the outside world occurs) and
are allowed to be conflicting and unrealistic. A subset of the desires are the goals.
These are basically desires that do not conflict with each other and that are realistic.
Intentions are goals that the agent has decided to follow, and the plans consist of the
single actions necessary to do so.
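The refinement from desires via goals and intentions to plans can be sketched as follows. The predicates and the tiny plan library are invented for illustration and are not taken from any real BDI system.

```python
# Desires may be conflicting and unrealistic; goals are the realistic,
# non-conflicting subset; intentions are the goals the agent commits to,
# and each intention is backed by a plan of single actions.

def form_goals(desires, realistic, conflicts):
    """Goals are the realistic desires that do not conflict with each other."""
    goals = []
    for d in desires:
        if realistic(d) and not any(conflicts(d, g) for g in goals):
            goals.append(d)
    return goals

def commit(goals, plan_library):
    """Intentions are the goals the agent commits to; each gets a plan."""
    intentions = [g for g in goals if g in plan_library]
    plans = {g: plan_library[g] for g in intentions}
    return intentions, plans

desires = ["drink coffee", "sleep", "fly unaided"]
realistic = lambda d: d != "fly unaided"
conflicts = lambda a, b: {a, b} == {"drink coffee", "sleep"}
plan_library = {"drink coffee": ["boil water", "brew", "drink"]}

goals = form_goals(desires, realistic, conflicts)
intentions, plans = commit(goals, plan_library)
print(goals, intentions, plans)
```

Here "sleep" survives the realism test but is dropped because it conflicts with an earlier goal, and "fly unaided" is filtered out as unrealistic, mirroring the narrowing described above.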
Of course, a BDI agent needs some more components than its mental state in order to
function. Here is an overview of the architecture:
(Figure: the BDI architecture, in which an information receiver feeds a knowledge base (the symbolic environment model), which in turn drives a planner, a scheduler and an executor.)
The figure shows that the information receiver has a very small role to play in the
architecture. This means that the internal model of the world only can be modified to
a limited extent, which is problematic. The rigid structure of the plan-based system
and the minimal updating of the internal model make the agent unsuitable for a
dynamically changing environment.
Another problem is the time necessary for the agent to reach a decision. As stated
earlier, a BDI agent uses logic to form its intentions from its beliefs. Probably as a result of the researchers’ passion for mathematical proofs, the BDI model
is not constructed with time efficiency in mind. The algorithms that come up with the
plans for the agents are designed for perfect, provable results, something that makes
them quite slow.
We have seen that the BDI architecture is quite rigid and slow. Now, we will take a look at a very different architecture – the reactive one.
4.2. Reactive architectures
Reactive agents do not have an explicit internal model of their environment, as
deliberative agents do. Nor can they use logic to reason about their knowledge and
reach decisions. So what good are they?
The emphasis of the reactive architecture lies on the interaction side. The advocates
of this architecture insist that the intelligence of a system is not contained within its
separate agents, but is implicitly a part of the environment as a whole.  This is why
they have centered the design on the interaction module.
Reactive agents possess a number of competence modules that handle specific classes
of tasks. The sensors observe the environment and call the appropriate competence
modules, which then produce a reaction to the input that is transferred to the environment via the actuators:
The competence modules each have a clearly defined task that they are responsible for, and since there is no central manager, they must possess all skills required to
complete their respective tasks. This means that a reactive agent can’t solve a problem
that it has no competence module for, something that limits its usefulness. BDI agents
are designed in a more generic way, not being restricted to a specific class of tasks,
but are limited by the amount of knowledge in their knowledge base.
A major advantage of the reactive model is its fault tolerance. Since the competence
modules work in parallel, it is not catastrophic if one of them should fail – the agent
can probably carry out its task anyway.
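The dispatch scheme described above can be sketched in a few lines. The event names and handler functions are invented for illustration; the point is that there is no world model and no planner between sensing and acting.

```python
# Sensed events are dispatched straight to competence modules. Unknown
# events are simply ignored: the agent cannot handle a task it has no
# competence module for.

class ReactiveAgent:
    def __init__(self):
        self.modules = {}  # event class -> competence module

    def register(self, event_class, module):
        self.modules[event_class] = module

    def sense(self, event_class, data):
        """Sensors call the matching module; the result goes to the actuators."""
        module = self.modules.get(event_class)
        return module(data) if module else None

agent = ReactiveAgent()
agent.register("obstacle", lambda d: f"turn away from {d}")
agent.register("target",   lambda d: f"move towards {d}")

print(agent.sense("obstacle", "wall"))  # 'turn away from wall'
print(agent.sense("sunset", "sky"))     # None: no competence module
```

The fault tolerance mentioned above also shows up here: a failing module only disables one event class, while the rest of the agent keeps working.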
One of the few actual implementations of the reactive model is the subsumption
architecture by Rodney Brooks. He has used it to construct some very simple systems
that nevertheless carry out very complex tasks that “would be impressive if they
were accomplished by symbolic AI systems”. 
Both the deliberative and the reactive architectures have their drawbacks and their
advantages. The third architecture we examine uses the benefits of both models.
4.3. Hybrid architectures
An agent of the hybrid model has both a reactive and a deliberative part. The reactive
module is concerned with interaction with the environment, and the deliberative part
maintains the knowledge base and does the reasoning, planning and decision making.
The relationship between these two modules differs between implementations of hybrid agents, placing the emphasis on one part or the other.
The AMROSE system, described above, has probably been designed with a hybrid
architecture: it gets a message from another agent and reacts to this by computing a
new position for itself and asking the next agent to take it there. This part is purely
reactive, but the agent also needs to store information about the space that it moves
in: within which boundaries it is allowed to move etc. Also, the message it sends to
the next agent is for a desired position, something that indicates that the agent has
desires and intentions, and thus a BDI part as well. In this case, the reactive part
should probably be the larger one.
The design of the norns also seems to have been based on the hybrid architecture. As
can be seen on the front page, the centre of the reactive part is the brain with the lobes
as its different competence modules. At the same time, the brain stores information
about the mental state in the nodes, so the BDI part and the reactive part seem to be intertwined.
It is a bit hard to decide which part is the most prominent in the case of the norns;
when they learn they react to things (reactive part), but at the same time they act on
their instincts and desires (BDI part). Maybe it’s not even worth discussing what part
is more important in such a complex case as this one.
Whenever technical progress is made and a new concept is born, its advocates praise
the advantages and expect it to save the world. Every new paradigm is thought to be
the solution to all our problems. The area of intelligent agents is a good example of
this trust in technical progress, and there is reason to get excited over this new
technology. However, there are limitations even to this approach. In this part of the
report, I will examine some of the criticism of the agent paradigm that has arisen as
well as some of the major problems that researchers face in the attempt to construct intelligent agents.
5. Expectations of agent technology
The agent paradigm has breathed new life into a great many research areas: AI, speech recognition, dialogue management, computer interface design, etc. It has brought together these technologies, inspiring and motivating further research. Many
researchers see in agent technology the future for their particular field, and they have
high expectations for the evolution of it. They expect the agents of tomorrow to
behave like humans, to learn from their experiences, almost to read the user’s mind
and meet his desires before he has even acknowledged them himself.
We have here identified one of the goals for agent researchers: agents that behave like
humans, that can talk and understand human speech, that have facial expressions, feelings and
personal traits. One of the most excited researchers is Nicholas Negroponte of MIT
Media Laboratory, who criticizes these dreams for shooting too low! He believes that
simply aiming at human traits in an agent is not ambitious enough - there are
probably other channels of communication that no one has even thought about yet. 
However, the main goal for agent research seems to be to help people cope with the
increasing volumes of information that they are exposed to in everyday life. Agents
can help people with one of the greatest problems of the information age – to find the
information they need and no more. In IR terms: to increase the precision, even at the cost of recall. One of the dreams for this area is for agents to be able to assemble context
dependent virtual documents from distributed information. For example, you could
have an agent that collected news information from various sources and compiled a
personal newspaper based on the user’s interest, the time of week etc – like the
NewsHound agent, but much more sophisticated.
6. Limitations of agent technology
Even if the agent paradigm brings with it many hopes and promises, there are
limitations to it, and many critical issues remain unsolved. Any problem that can be solved with an agent approach may also have other, more appropriate solutions. When considering an agent-based system, it should be kept in mind that the very nature of agent technology leads to a number of inherent characteristics that make it unsuitable for some areas, as well as some intricate aspects that should be carefully thought over (e.g. security and interaction aspects).
Two general limitations of agent-based applications arise because of their lack of global perspective:
• No overall system controller. Since agents are separate entities, applications
where global constraints have to be maintained are not very suitable for agent
applications. Nor are systems that need a guarantee of a real-time response or
where deadlock or livelock must be avoided.
• No global perspective. This is another limitation that is caused by the agent’s
local perspective. Since the agents in a system work towards their own goals,
they will not obtain globally optimal performance without external help.
These limitations are clearly present in the AMROSE system, described above.
Imagine that the robot (actually the head segment) is given a goal, for example to
weld along a given curve. If all the arm joint agents act only to fulfill their own goals,
it is easy to understand that deadlock-like situations might occur: if the joints bend in
unsuitable ways, they may end up in a position where they can get no further, but
must return to some previous state. Having no global perspective may lead to serious
dysfunction in the system. I don’t know how this problem has been addressed in this
particular case, but it is likely that there is some kind of global control.
In the rest of the report I will examine some of the more specific problems that
surround agent technology, putting the emphasis on security, interaction and standardisation.
7. Security aspects
Of course agents suffer from the same security problems that other software systems
do: there are Trojan horses, viruses and worms, weeds, zombies and the like. Some
of these phenomena are even more worrying when it comes to agents. This applies
particularly to mobile agents, which introduce a new dimension in security thinking.
Electronic commerce issues also arise with the use of shopping agents. These specific
problems are discussed below.
Software agents almost always exist in a networking environment. This means that
they are exposed to risks of unauthorized access to their data, and that the information
retrieved in the network may be tampered with. Also, enemy entities may take on the
identity of an agent to hide behind when performing illegal actions, thus putting the
blame for the action on the innocent agent. The use of public key encryption and
digital signatures is one way to reduce these risks. 
7.1. Mobile agents
In an environment where mobile agents exist, security is definitely one of the
cornerstone issues. Because of the free roaming property of mobile agents, security
considerations become much more important than for stationary agents. The basic
problem is to find a useful compromise between the desire to isolate the agent from
the (possibly malicious) server system and the need to provide access in order for the
agent to complete its tasks.
It is critical to ensure that arriving agents don’t bring any threats into the system. The
agent shouldn’t be trusted any more than the least trusted person, program or system
that it has been in contact with, or it could take advantage of the user privileges it is
granted. For this reason, reliable authentication methods (using for example digital
signatures) are necessary to establish the true sender of the agent. 
If agents are allowed to reproduce, viruses and worms in the mother agent might be
passed on to the descendants. This kind of reproduction must be controlled in some way, for example by limiting the number of descendants an agent may have.
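One conceivable way to enforce such a limit is to give each agent a reproduction budget that its offspring inherit shares of. This scheme and all the names in it are my own illustration, not a description of any existing system.

```python
# Each agent carries a reproduction budget. Spawning a child consumes one
# unit and hands half of the remainder to the child, so the total number of
# descendants in the whole family tree can never exceed the initial budget.

class MobileAgent:
    def __init__(self, budget):
        self.budget = budget  # descendants this line of agents may still spawn

    def reproduce(self):
        if self.budget == 0:
            return None            # cap reached: reproduction refused
        self.budget -= 1           # one unit is consumed by this spawn...
        share = self.budget // 2
        self.budget -= share       # ...and the rest is split with the child
        return MobileAgent(share)

parent = MobileAgent(budget=3)
child = parent.reproduce()
print(parent.budget, child.budget)  # 1 1
print(parent.reproduce().budget)    # 0; the pool is nearly exhausted
print(parent.reproduce())           # None: no further reproduction allowed
```

Because budget units are only ever consumed, never created, a virus hiding in the mother agent can spread to at most a bounded number of descendants.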
From the user’s (the sender of the agent) point of view, two main security issues are
at hand: how can you be sure that no one gains access to or changes the data the agent
encapsulates, and how can you be sure that no one tampers with the agent itself? 
As Chess et al. remark, “it is impossible to hide anything within an agent without
the use of cryptography. If part (or all) of an agent is to be private, it has to be
protected cryptographically.”  Thus, if the data in the agent is encrypted, no
unauthorised entity can read the information. The storage of the encryption key is an
interesting question: in a symmetric cryptography system, if the agent carries the key with it, anyone can read it. The answer to the problem is asymmetric (public key)
cryptography. If the agent encrypts the information with its own public key, no one
else can read it, and the data can be decrypted when the agent returns “home”. To detect unauthorised changes to the information, digital signatures provide a good solution.
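The scheme can be illustrated with a textbook-toy RSA example. The tiny primes and the absence of padding make this completely insecure in practice; it only shows the principle that the agent can encrypt with its own public key while the private key stays home.

```python
# Textbook RSA with the classic toy parameters p=61, q=53. The public pair
# (e, n) travels with the agent; the private exponent d never leaves home.

p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent (carried by the agent)
d = 2753         # private exponent (stays with the user at home)

def encrypt(m, e, n):
    return pow(m, e, n)   # anyone, including the agent itself, may do this

def decrypt(c, d, n):
    return pow(c, d, n)   # only possible with the private key at home

collected = 65                       # some datum the agent picked up en route
ciphertext = encrypt(collected, e, n)
print(ciphertext)                    # 2790: unreadable without d
print(decrypt(ciphertext, d, n))     # 65, recovered once the agent is home
```

Even if a malicious server reads the agent's memory, it finds only the ciphertext and the public key, neither of which reveals the collected data.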
Chess and his team also have a solution for the tampering problem: all AMPs (Agent
Meeting Points; the part of a server that deals with agents) must be equipped with
trusted, tamper-proof hardware, allowing only certified agents to execute on the
AMP. If the agent always uses such trusted AMPs, the risks are practically
eliminated. However, such restrictions are not really desired in a mobile agent: the
purpose of the whole concept is for it to roam free on the Internet. This trade-off
between security and freedom of movement for the agent is a question that has to be
considered by the designer of the agent. 
7.2. Electronic commerce and legal issues
If the agent has to pay for the services provided by the AMPs (either for the computer
resources used, or actual transactions for ordered goods), other interesting questions
arise. Since the agent acts autonomously at the AMP, the user has no real control over
it. Therefore, it is important for the user to have limited liability to pay for these
services. The agent company General Magic has solved this by using so-called
Teleclicks: the agent has a number of these fictional currency units, which it can
spend freely as temporary payment. The user is then liable to pay for the Teleclicks
used.  Another way to assure oneself of limited liability is to use some kind of
existing general digital currency system, such as DigiCash. The agent gets a limited
amount of money from the user and can spend this, but no more.
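The limited-liability idea can be sketched as a simple wallet that the agent carries. The currency unit and the class are invented for illustration; real systems such as DigiCash involve cryptographic tokens, not a plain counter.

```python
# The user loads the agent's wallet with a fixed amount of digital currency.
# The agent may spend freely within that amount, but any attempt to exceed
# it is refused, so the user's liability is capped in advance.

class Wallet:
    def __init__(self, funds):
        self.funds = funds

    def pay(self, amount):
        if amount > self.funds:
            raise ValueError("insufficient funds: the user's liability is capped")
        self.funds -= amount
        return amount

wallet = Wallet(100)   # the user is liable for at most 100 units,
wallet.pay(30)         # however the agent behaves at remote servers
wallet.pay(50)
print(wallet.funds)    # 20 units remain
```

However autonomously the agent acts at the AMPs, the worst case for the user is the loss of the amount loaded into the wallet.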
There are quite a lot of legal questions that arise around the use of mobile agents that
are allowed to spend money. What exactly are the potential consequences of
permitting these cyberspace androids to transact business on behalf of individuals?
Can a software agent legally assume the contracting authority it is asked to undertake
on behalf of a user? Can the current legal frameworks covering electronic commerce
handle issues arising out of the use of intelligent agents?
Probably, seeing that commercial uses of agents are emerging, legal issues will
become more prominent in agent design. To be able to show that an agent acted
according to its design specification will most likely be of interest to lawyers as well
as to technicians. Since an agent could be storing confidential information about its
users/clients, it is vital that it does what it should in a stable and predictable manner.
Awareness of these issues as well as carefulness in the design phase (and the
development of standards, see below) is probably the best way to tackle the problems.
The commercial use of agents brings up alarming visions of the future: hordes of
trading agents might cause network congestion and server crashes on the Internet.
Even market crashes might be the result if we entrust stock broking to intelligent agents.
Privacy for the user is yet another aspect of security. For one thing, an agent probably
knows confidential information about its user, and for a mobile agent that needs to
bring this data with it, there really is no way to hide it, if it wants to be able to read it.
If it encrypts the information, it can’t decrypt it again until it is safely home. Bringing
the key with it means that anyone could decrypt the information.
There might also be agents in our environment that serve somebody else, and try to
get information about us. For example, there are agent systems that schedule meetings
for their users by comparing the agendas of all attendants. If we allow such agents to search our email or calendars in order to make things easier for our friends and ourselves, there is no guarantee that foreign agents won’t get at the same information. Not only
should we make our agents use safe AMPs, we must also find a way to authenticate
the agents that get into our own system.
8. Interaction aspects
“As with most user interface designs, the challenge is to create a mechanism that is
powerful, yet comprehensible, predictable, and controllable. A well-designed agent
interface would enable users to specify tasks rapidly, be confident that they will get
what they want, and have a sense of accomplishment when the job is done.“ 
These goals aren’t simple to attain, but probably necessary for the success of an
agent, or any other user interface. If the system fails to live up to them, the user will
probably abandon it: who wants a system that can’t be controlled or predicted? Who
wants to delegate tasks if they’re not sure that they will be performed in a satisfactory way?
This part is about problems in the interaction between agents and their user,
something that must not be forgotten in agent research. After all, if users don’t trust their agents and therefore don’t use them, what good are they?
8.1. Adaptivity
Maybe the most important characteristic of an agent is that of its adaptivity. However,
this characteristic sometimes conflicts with the above requirements of a good agent.
Erickson has identified three steps of adaptive functionality:
• Noticing: to monitor the user’s activities, trying to find patterns in his/her
behaviour or other relevant information
• Interpreting: to try and make sense of the noticed events by applying some
set of rules
• Responding: to react to the events as interpreted, by applying some set of actions.
Thus, there are three main parts of the adaptivity that can potentially fail, either alone
or together: the agent might for example fail to notice something that the user does, or
it may respond erroneously to something that it has noticed. The user, however, will
not be aware of this failure, and will most likely interpret its result as something the
agent did on purpose. If the action was wrong from the user’s point of view, his trust
in the system will probably diminish.
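To make the three steps concrete, they could be sketched as a small pipeline. All patterns, rules and responses below are invented for illustration; note how a failure at any step (a pattern missing from the rules, a wrong response entry) propagates silently, just as described above.

```python
# Erickson's three steps of adaptive functionality, as a toy pipeline.
# The patterns, rules and responses are all hypothetical.

RULES = {
    # noticed pattern          -> interpretation
    "opened_mail_from_boss": "mail_from_boss_is_important",
}

RESPONSES = {
    # interpretation           -> action taken on the user's behalf
    "mail_from_boss_is_important": "move_boss_mail_to_top",
}

def notice(activity_log):
    """Noticing: monitor the user's activities for known patterns."""
    return [event for event in activity_log if event in RULES]

def interpret(events):
    """Interpreting: make sense of the noticed events with a set of rules."""
    return [RULES[event] for event in events]

def respond(interpretations):
    """Responding: react to the interpreted events with a set of actions."""
    return [RESPONSES[i] for i in interpretations if i in RESPONSES]

log = ["opened_mail_from_boss", "played_solitaire"]
actions = respond(interpret(notice(log)))
print(actions)  # → ['move_boss_mail_to_top']
```

The second event never triggers anything, and the user is given no hint of whether it was not noticed, wrongly interpreted, or simply not responded to.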
Also, more complex problems can arise as the user interacts with an adaptive system.
As the agent studies the user and tries to adapt to him/her, the user at the same time
watches the agent. This might lead to a better understanding of the agent’s
functionality, but it might also lead to misunderstandings of the agent’s intentions.
The purpose of the adaptivity is to be more or less invisible in some situations, but if
the user gets suspicious of the agent, the effect achieved is quite the opposite. Instead,
the user may try to trick the agent into another behaviour just to figure out how it works.
8.2. Being in control
What we also need is an interface that is controllable. If the system makes an error,
there should be a way for the user to take command of it or shut it down. However,
this is not as easy as it sounds. When a “normal” program such as a word processor
doesn’t behave as intended, there is often a way to undo the last function called.
When an agent makes an error, the situation is more complicated. For one thing, the
agent is autonomous and therefore has no interface to its innermost thoughts, plans
and actions. Another problem is that the user may not understand why the (faulty)
event took place at all – maybe he will not even realize that it was the agent that made
it happen. A third problem is the difference between the representations of the event
in the user’s and the agent’s minds. The user may not find the wanted event in the
agent’s representation of the world if it is just a side effect of something else the agent did.
The very term “intelligent agent” might reduce the user’s feeling of responsibility: if
the user is convinced that the computer is intelligent, he might simply blame the
machine when something goes wrong. 
If we are to solve these kinds of problems, we must first make it possible for the
user to take control of the agent, and second make the user understand that this is
indeed possible. Also, the agent must be able to present its mental state to the user
in a comprehensible way.
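One possible shape of such an interface is an agent that records every autonomous action together with the reason it was taken, lets the user inspect that record, and allows undoing an action or shutting the agent down. The sketch below is purely hypothetical; no real agent system is implied.

```python
# A hypothetical "see-through" agent interface: every action is recorded
# with its reason, and can be inspected, undone, or stopped by the user.

class ControllableAgent:
    def __init__(self):
        self.history = []    # (action, reason) pairs, newest last
        self.enabled = True  # the user can switch the agent off

    def act(self, action, reason):
        """Perform an autonomous action, recording why it was taken."""
        if self.enabled:
            self.history.append((action, reason))

    def explain(self):
        """Present the agent's 'mental state': what it did, and why."""
        return [f"{action}: because {reason}" for action, reason in self.history]

    def undo_last(self):
        """Let the user take back the agent's most recent action."""
        return self.history.pop() if self.history else None

    def shut_down(self):
        """Let the user take full control at any time."""
        self.enabled = False

agent = ControllableAgent()
agent.act("deleted newsletter", "sender matched your junk-mail profile")
print(agent.explain())  # the user can see what happened and why
agent.undo_last()       # ...disagree and reverse it...
agent.shut_down()       # ...or switch the agent off entirely
```

The point of the design is that autonomy and control need not exclude each other: the agent acts on its own, but nothing it does is hidden or irreversible.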
8.3. The agent metaphor
Another issue that arises when it comes to the interaction aspects is that of the agent
metaphor (by this we mean agents that are designed to imitate human behaviour and
character). How do people react to this metaphor?
Many studies have shown that people tend to regard agents with even very crude
human traits as some kind of intelligent systems. If the system is human in some
respect, it is expected to be human in others too. This may lead to very high
expectations of the system’s functionality, and if it fails to meet them, the users can
be disappointed. I will illustrate this with an example: the Guides system was an
interface to an encyclopedia of American history, intended to be used by high school
students. The user was guided through the program by a travel guide with a
stereotyped character (such as an Indian or a settler) who would give tips on related
topics. In some cases the students became emotionally engaged with the agents,
getting mad when they didn’t give them good tips. On one occasion, the agent
disappeared due to a software bug, and the student assumed that it had gotten mad at
him for not following its recommendation.
Another reaction to the agent metaphor is to apply social rules to the agent’s actions,
ranging from “rules about politeness, to gender biases, to attributions about
expertise.” People simply tend to make the same judgements about human-like
machines as they do about humans, and use social rules to interpret their behaviour.
These results show that the agent metaphor can be evoked quite easily in the human
mind, but they also point to a potential weakness in its use:
what if the agent fails to meet the expectations it evokes?
Some people think it immoral to portray agents with human characteristics – it must
inevitably be a false promise of the system’s capabilities, at least with the level of
technology we have today. The more human-like the interface, the higher the
expectations of it become. An interesting question is why people want human-like
interfaces at all.
Portraying agents as humans does have its advantages, however. One is that people
know how to interact with people, and making agents look human means that most
people will intuitively know how to deal with them – and predict their actions. Thus,
anthropomorphizing the agents solves some of the interaction problems that will
otherwise need attention.
8.4. Trust
Probably the hardest problem of all to solve is how to make people trust their agents,
yet this is also one of the most crucial problems. In order to make real use of the
agent, it has to be entrusted with personal information for most applications.
To give away personal information to a software program, the user has to feel safe
that no one else will be able to get this information. Thus, security issues are
important also in the more psychological aspects of agent use. What is more, I believe
that the user wouldn’t want the agent to know things that it doesn’t need to know. In
other words: if the user can’t see any reason for the agent to know a certain piece of
information, he or she probably won’t give it. Here, the problem of the user not
understanding the agent’s motives appears again. If the agent can show why it needs to
know something, the user will probably feel more at ease telling it what it asks for.
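Such a dialogue might look like the following sketch, where a piece of information without a stated purpose is simply never asked for. All items and purposes here are made up.

```python
# Hypothetical purpose-annotated requests for personal information:
# the agent only asks for data it can explain a need for.

NEEDS = {
    # piece of information -> why the agent needs it (None = no known purpose)
    "home address": "to search for flights departing near you",
    "salary": None,
}

def request(item):
    """Phrase a request for personal data, including the motive."""
    reason = NEEDS.get(item)
    if reason:
        return f"I need your {item} {reason}. May I have it?"
    # No visible motive: the user would probably refuse, so don't ask.
    return f"(request for {item} withheld: no purpose to show the user)"

print(request("home address"))
print(request("salary"))
```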
For the user to trust an agent, he also has to be reassured that everything is working
according to plan. Again, we see a need for a “see-through” interface that lets users
check what their agents are up to. Maybe these monitoring functions are only needed
in the first stage of introducing agent technology to the masses – when people start to
trust their agents, they won’t need the reassurance any more.
8.5. Ethical aspects
Here is a scenario that was used at a conference held by FIPA (Foundation for
Intelligent Physical Agents) with the title “The Impact of Agents on Communications
and Ethics: What do and don’t we know?”:
“Stan McGregor is confused. As senior research engineer for a major
telecommunication and computer company, he is on the leading edge of developing
important new technology called ‘choreographed intelligent agents.’ His synthesis of
several types of software would allow his company to sell ‘need-fulfilling’ services to
clients over the Internet. Stan’s invention would, for example, enable a client to tell
his computer that he is lonely, and the choreographed intelligent agents would
automatically produce a long list of options for the customer to relieve loneliness.
Everything from a list of phone numbers of friends and relatives to a list of phone
numbers of escort services and on-air psychologists would appear on the computer
screen – and lots more – all within one minute. Now Stan McGregor is facing a
serious list of ethical issues...”
The agents in this scenario could obviously be used for questionable goals. The first
issue is of course whether it is morally correct at all to make money on needs of this
kind. The computer company will gain a clear advantage over the clients, since they
claim to be able to fulfill people’s needs. However, assuming that the company
ignored that problem and created the agents anyway, other problems arise. What if the company
designing and implementing the agent has commercial interests in any of the
suggestions made? Won’t they present that option in a more tempting way? Also,
since lonely people could be very easily influenced, maybe presenting the option of
an escort service will lead the user into doing things they will later regret. How will
the agent know what is the appropriate thing to suggest to a person in such a delicate
situation? Nobody knows how people would react to services of this kind, but I fear
that people who are really lonely – not just temporarily – might become dependent on the
service, developing a kind of “agent abuse”. It is dangerous to believe that machines
can permanently solve people’s personal problems, even if it is tempting.
Brenda Laurel points out another interesting ethical problem with (human-like)
agents: if an agent looks and acts just like a living person, and you can treat it
however you like – shouting at it, being impolite etc. – won’t you start treating real
people in the same way too? However, judging from history, this is not a very
realistic scenario. Problems of that kind lie more in the fundamental ethics of society.
As agents become more and more elaborate and complex, and thus more and more
human-like, what will happen to our relationship with them? If the agent can
take on a personality we choose, look any way we like, and talk to us like a real
person, why shouldn’t we start to develop feelings for them? People are known to
have fallen in love with persons they have only met in cyberspace, so obviously
physical presence is no requirement for romantic feelings...
9. Implementation aspects
Having so far discussed some of the more abstract views, I will now look into the
implementation side of software agents.
Of course it is hard to implement an agent – with all the complex disciplines
involved, no one would expect it to be easy. However, it is not these kinds of
problems that I have studied, but rather the problems that surround the rise of a new technology.
A major problem here is the current lack of standards in the technology. For example,
it is virtually impossible for agents of one type to communicate with agents of another
type. Until developers have agreed upon a common language for agent applications,
this issue will probably remain unsolved. KQML (Knowledge Query and Manipulation
Language) has been proposed as a standard communication language for distributed
agent applications, but so far consensus has yet to be reached on a number of
important issues. Without a standard, you can’t be sure that a question posed by
an agent will be interpreted in the right way by the answering agent. This could in the
worst case lead to legal problems of the kind described above for commercial agents.
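As an illustration, a KQML message is essentially a performative followed by keyword parameters, written in a LISP-like s-expression style as described by Finin et al. The helper below composes such a message; the agent names and content are invented.

```python
# Composing a KQML message string: a performative plus parameters,
# in the s-expression style of Finin et al. (all values made up).

def kqml(performative, **params):
    """Build a KQML message from a performative and its parameters."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

msg = kqml("ask-one",
           sender="buyer-agent",
           receiver="stock-server",
           language="prolog",
           content='"price(ibm, Price)"')
print(msg)
# → (ask-one :sender buyer-agent :receiver stock-server :language prolog :content "price(ibm, Price)")
```

Without an agreed standard, a receiving agent may parse the same string but attach a different meaning to the performative or the content language – exactly the interoperability problem noted above.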
Another thing standing in the way of agent interoperability is a lack of supporting
infrastructure. The more servers, databases etc. that are designed with support of
agents in mind, the more agents will be able to achieve. As Bradshaw et al. put it:
“...the last thing anyone wants is an agent architecture that can accommodate only a
single native language and a limited set of proprietary services to which it alone can
provide access.”  In order to achieve a secure environment for agents and servers
alike, there needs to be certifying standards and procedures.
10. Conclusions
As noted by Jennings and Wooldridge (1998), agent technology should not be
oversold: “Most applications that currently use agents could be built using non-agent
techniques. Thus the mere fact that a particular problem domain has distributed data
sources [...] does not necessarily imply that an agent-based solution is the most
appropriate one – or even that it is feasible.”
Until the security issues in agent technology are resolved, we will probably not see a
large commercial breakthrough in this industry. People will not trust an agent with
confidential or personal information until they feel safe that it won’t be spread. Also –
at least as important – ordinary people will have to get used to the agent metaphor.
The normal way today to deal with computers is to do everything yourself, explicitly
calling the programs needed at each instant. To go from this to accepting the
delegation of everyday tasks to an agent is not as simple as it sounds.
However, when studying books and reports for this project, I have become very
impressed with what can be done with agent technology. The large amount of
different applications – from shopping agents to industrial applications to flight
simulators to computer games – shows that this is a technology that has many
different uses and can be modified to fit in almost anywhere. There are many
problems that remain to be solved, but with the huge amount of research going on in
this area, I would be surprised if most of them were not solved. My answer to the question
asked on the title page of this report would be: not all, but some, and in due time.
I would like to conclude with some words from Donald A. Norman: “...along with the
promise comes potential danger. Agents are unlike other artifacts in society in that
they have some level of intelligence, some form of self-initiated, self-determined
goals. Along with their benefits and capabilities come the potential for social
mischief, for systems that run amok, for a loss of privacy, and for further alienation
of society from technology through a diminishing sense of control. None of these
negative aspects of agents are inevitable. All can be eliminated or minimized, but
only if we consider these aspects in the design of our intelligent systems.”
 Walter Brenner, Rüdiger Zarnekow and Hartmut Wittig: Intelligent Software
Agents: Foundations and Applications. Springer-Verlag, 1998.
 H. Van Parunak: Practical and Industrial Applications of Agent-Based
Systems. http://www.cs.umbc.edu/agents/papers/apps98.pdf, 1998.
 Nicholas R. Jennings and Michael J. Wooldridge: Applications of Intelligent
Agents. http://www.cs.umbc.edu/agents/introduction/jennings98.pdf, 1998.
 Nicholas R. Jennings and Michael J. Wooldridge: Intelligent Agents: Theory
and Practice. http://www.doc.mmu.ac.uk/STAFF/mike/ker95/ker95-html, 1995.
 Alper K. Caglayan and Colin G. Harrison: Agent Sourcebook (chapter three).
Jeffrey Bradshaw, Editor: Software Agents. The AAAI Press/The MIT Press, 1997.
 Chapter 1 – Jeffrey Bradshaw: Introduction
 Chapter 2 – Donald A. Norman: How Might People Interact with Agents
 Chapter 3 – Nicholas Negroponte: Agents: From Direct Manipulation to Delegation
 Chapter 4 – Brenda Laurel: Interface Agents: Metaphors with Character
 Chapter 5 – Thomas Erickson: Designing Agents as if People Mattered
 Chapter 6 – Ben Shneiderman: Direct Manipulation Versus Agents: Paths to
Predictable, Controllable, and Comprehensible Interfaces
 Chapter 14 – Tim Finin, Yannis Labrou and James Mayfield: KQML as an
Agent Communication Language
 Chapter 17 – Jeffrey M. Bradshaw, Stewart Dutfield, Pete Benoit and John D.
Woolley: KAoS: Toward An Industrial-Strength Open Agent Architecture
 Chapter 18 – Philip R. Cohen and Hector J. Levesque: Communicative
Actions for Artificial Agents
 David Chess: Things that go Bump in the Net. http://www.research.ibm.com/
 David Chess et al: Itinerant Agents for Mobile Computing.
 Colin G. Harrison, David M. Chess and Aaron Kershenbaum: Mobile Agents:
Are they a good idea? http://www.research.ibm.com/massdist/mobag.ps, 1997.
 Clive Davidson: Agents from Albia.