2. INTRODUCTION
One impact of increasing globalisation is information overload: the user
faces the tedious tasks of identifying and keeping track of relevant
information sources, of dealing efficiently with the different levels of
abstraction at which information is modelled at those sources, and of
combining partially relevant information from potentially billions of
sources. A special type of intelligent software agent, the so-called
information agent, is meant to cope with these difficulties. This implies
the ability to semantically broker information by providing pro-active
resource discovery, resolving the information impedance between
information consumers and providers on the Internet, and offering
value-added information services and products to the user or to other
agents.
3. WHAT IS AN INTELLIGENT INFORMATION AGENT?
An intelligent information agent is an autonomous,
computational software entity that has access to one or
more heterogeneous and geographically distributed
information sources, and which is able to pro-actively
acquire, mediate, and maintain relevant information on
behalf of its users or other agents, preferably just-in-time.
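The acquire/mediate/maintain duties in this definition can be sketched as a small interface. This is a minimal illustration only; the class and method names below are invented for the sketch and are not taken from any of the systems discussed later:

```python
from abc import ABC, abstractmethod

class InformationAgent(ABC):
    """Interface capturing the three duties named in the definition."""

    @abstractmethod
    def acquire(self, query):
        """Pro-actively gather information relevant to a query."""

    @abstractmethod
    def mediate(self, query):
        """Broker between information consumers and providers."""

    @abstractmethod
    def maintain(self):
        """Keep previously acquired information up to date."""

class SearchAgent(InformationAgent):
    """Toy agent over an in-memory map of source name -> documents."""

    def __init__(self, sources):
        self.sources = sources
        self.cache = {}          # query -> last known results

    def acquire(self, query):
        hits = [doc for docs in self.sources.values()
                for doc in docs if query in doc]
        self.cache[query] = hits
        return hits

    def mediate(self, query):
        # Name the sources able to answer the query at all.
        return [name for name, docs in self.sources.items()
                if any(query in doc for doc in docs)]

    def maintain(self):
        # Refresh every cached result against the (possibly changed) sources.
        for query in self.cache:
            self.acquire(query)
```

The in-memory map stands in for the heterogeneous, distributed sources of the definition; a real agent would wrap network access, wrappers, and ontologies behind the same three operations.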
4. HETEROGENEITY: A MAJOR PROBLEM
Heterogeneous information sources exist in huge numbers and hold large
volumes of unstructured, semi-structured, and structured data that is
often volatile (dangling links, relocated pages) and redundant (mirrored
or copied content).
The resulting "information overload" makes searching for relevant
information a needle-in-a-haystack problem for the user.
5. CHARACTERISTICS
An intelligent information agent pro-actively acquires, mediates, and
maintains relevant information on behalf of its user(s) or other agents,
preferably just-in-time. These three roles can be summarised as follows:
Acquires: assists users (visualization, etc.), adapts to the user's
needs, and collaborates with other agents on demand.
Mediates: initiates, coordinates, and controls the distribution of
information (including query processing, brokering, and matchmaking).
Maintains: handles storage, caching, and consistency of information.
The Intelligent Information Agents architecture is a Java framework for
constructing a hybrid system of intelligent information software agents.
6. DEVELOPMENT OF INTELLIGENT
INFORMATION AGENTS
The European AgentLink special interest group on
intelligent information agents (I2A SIG) was
founded in 1998.
The I2A SIG has been co-ordinated by Matthias Klusch
from the German Research Centre for Artificial
Intelligence (DFKI) since 1998, jointly with Sonia
Bergamaschi from University of Bologna, Italy, since
July 2001.
The mission of the AgentLink I2A SIG is to promote
advanced research on and development of intelligent
information agents across Europe.
7. MAIN FUNCTIONALITIES OF THE
INTELLIGENT INFORMATION AGENTS
The main functionalities of the intelligent information
agents include
intelligent search,
navigation guide,
auto-notification,
personal information management, and
dynamic personalized Web page retrieval.
8. CLASSES OF INFORMATION AGENTS
Non-cooperative or cooperative information agents
The distinction depends on whether the agents do or do not cooperate
with each other in executing their tasks. Several protocols and methods
are available for achieving cooperation among autonomous information
agents in different scenarios, such as hierarchical task delegation,
contracting, and decentralised negotiation.
Examples of non-cooperative information agents: searchbots and
meta-searchbots.
Examples of cooperative information agents: RETSINA, InfoSleuth,
IMPACT, BIG, PLEIADES, MAVA, TSIMMIS, etc.
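Contracting, one of the cooperation methods mentioned above, is commonly realised as a contract-net-style exchange: a manager announces a task, contractor agents bid, and the task is awarded to the best bid. A minimal single-round sketch, with invented agent names and a simple cost-based bid model:

```python
def contract_net(task, contractors):
    """One round of a contract-net-style protocol.

    contractors: mapping of agent name -> bid function; a bid function
    returns the agent's cost for the task, or None to decline.
    Returns the name of the winning (cheapest) contractor, or None
    if every contractor declined.
    """
    bids = {}
    for name, bid in contractors.items():   # 1. announce task, 2. collect bids
        cost = bid(task)
        if cost is not None:
            bids[name] = cost
    if not bids:
        return None
    return min(bids, key=bids.get)          # 3. award to the best (lowest) bid
```

In a real multi-agent system the announce/bid/award steps would be separate messages; collapsing them into one function call keeps the sketch self-contained.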
9. ADAPTIVE INFORMATION AGENTS
Adaptive information agents are able to adapt themselves
to changes in networks and information environments.
Examples of such agents are learning personal assistants
on the Web.
Examples of adaptive information agents: InfoSpiders,
Butterfly, ExpertFinder, Let’s Browse, Amalthaea,
Firefly, and LikeMinds.
10. RATIONAL INFORMATION AGENTS
Rational information agents behave in a utilitarian way
in an economic sense. They act, and may even
collaborate, to increase their own benefits.
The main application domains of such agents are automated trading and
electronic commerce on the Internet. Examples include the variety of
shopbots and systems for agent-mediated auctions on the Web: Kasbah,
Bazaar, FCSI/COALA, FishMarket, AuctionBot, and UMDL.
11. MOBILE INFORMATION AGENTS
Mobile information agents are able to travel
autonomously through the Internet.
Such agents enable dynamic load balancing in large-
scale networks, reduction of data transfer among
information servers, and migration of small business
logic within medium-range corporate intranets on
demand.
Examples of mobile information agents: D’Agents/SMART and
MIAOW/InfoSphere.
12. SOME SOFTWARE SYSTEMS
o InfoSleuth (cooperative information agents)
o InfoSpiders (adaptive information agents)
o Kasbah (rational information agents)
o Scalable Mobile and Reliable Technology
(SMART) (mobile information agents)
13. INFOSLEUTH (COOPERATIVE INFORMATION AGENTS)
InfoSleuth is an agent-based software application that
performs information retrieval and fusion, event detection,
data analysis, knowledge discovery, and trend analysis
using existing databases or the Internet as data sources.
Information retrieval and fusion: InfoSleuth agents
access and fuse information from a wide variety of types
of information sources, including external machines,
databases, text and image repositories and the World
Wide Web.
14. INFOSLEUTH (COOPERATIVE INFORMATION AGENTS)
Monitoring capabilities: InfoSleuth gives the user, on
request, dynamic focused notification as the world of
data changes. The user need only specify the type of
information to be monitored. InfoSleuth transparently
maps this to event monitoring on the appropriate
resources.
Distributed processing: InfoSleuth processes data where the
data is. It enhances efficiency by distributing the processing
of queries and data manipulations among multiple agents,
each responsible for some subpart of the entire world of
information.
15. INFOSLEUTH (COOPERATIVE INFORMATION AGENTS)
Collaborative Processing: InfoSleuth Agents cooperate
with each other by pooling their resources to answer
complex queries.
Dynamic Architecture: InfoSleuth agents can come and
go, i.e. be initiated, killed, or moved, and the system
scales up (or degrades) gracefully, using whatever
services are available through the currently running set
of agents.
Scalability: InfoSleuth is extensible to a changing
distributed world of information under a paradigm
similar to that allowing growth of the Internet.
16. HOW INFOSLEUTH WORKS
InfoSleuth is designed as an agent-based, object-oriented
system.
The InfoSleuth system consists of agents, clients, and tools.
Clients are user interfaces built using a common API.
Agents are designed as instances of a set of Java classes
called the generic agent shell.
Agents communicate via conversations using a language
called KQML (Knowledge Query and Manipulation
Language).
Tools such as the ontology creation and maintenance tools
are built independently and with no overriding architectural
hierarchy or relationship.
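KQML messages are Lisp-like performatives with keyword parameters such as :sender, :receiver, and :content. A sketch of how such a message could be composed; the helper function and the field values are illustrative, not InfoSleuth's actual API:

```python
def kqml_message(performative, **params):
    """Render a KQML performative such as ask-one or tell.

    params: keyword parameters (sender, receiver, content, ...),
    rendered in sorted key order for a deterministic result.
    """
    fields = " ".join(f":{key} {value}" for key, value in sorted(params.items()))
    return f"({performative} {fields})"
```

For example, `kqml_message("ask-one", sender="client", receiver="broker", content="price")` yields a string of the form `(ask-one :content ... :receiver ... :sender ...)`; a conversation is then a sequence of such performatives exchanged between agents.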
17. INFOSPIDERS
The InfoSpiders system was implemented to test the
feasibility, efficiency, and performance of adaptive, on-
line, browsing Web agents.
InfoSpiders design and implementation:
Algorithm
Agent architecture
Adaptive agent representation
20. INFOSPIDER AGENT ARCHITECTURE
The agent interacts with the information environment, which
consists of the actual networked collection (the Web) plus
data kept on local disks (e.g., relevance-feedback data and
cache files).
The user interacts with the environment by accessing data on
the local client (current status of a search) and on the Web
(viewing a document suggested by agents) and by making
relevance assessments that are saved locally on the client and
will be accessed by agents as they subsequently report to the
user/client.
21. INFOSPIDER AGENT ARCHITECTURE
The InfoSpiders prototype is written in C and runs on
UNIX and MacOS platforms.
The Web interface is based on the W3C library.
Agents employ standard information retrieval tools,
such as a noise-word filter and a stemmer based on
Porter’s algorithm.
Agents store an efficient representation of visited
documents in a shared cache on the client machine.
Each document is represented by a list of stemmed
keywords and links (with their relative positions).
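The cached document representation described above can be sketched as follows. Note the assumptions: the noise-word list is a tiny illustrative sample, and `crude_stem` is a deliberately crude suffix stripper standing in for Porter's algorithm, which has many more rules:

```python
NOISE_WORDS = {"the", "a", "an", "of", "and", "to", "in"}  # tiny sample list

def crude_stem(word):
    """Very crude suffix stripping; a stand-in for Porter's algorithm."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index_document(text, links):
    """Represent a page as stemmed keywords plus its outgoing links
    paired with their relative positions, as the shared cache does."""
    words = [w.lower() for w in text.split()]
    keywords = [crude_stem(w) for w in words if w not in NOISE_WORDS]
    return {"keywords": keywords, "links": list(enumerate(links))}
```

Keeping only stems and positioned links is what makes the cache compact enough to share among all agents on the client.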
22. INFOSPIDER AGENT REPRESENTATION
The adaptive representation of InfoSpiders consists of the
genotype.
The first component of an agent’s genotype is the
parameter β ∈ [0, 1]; it represents the degree to which an
agent trusts the descriptions that a page contains about
its outgoing links.
Each agent’s genotype also contains a list of keywords,
initialized with the query terms.
Genotypes also comprise a vector of real-valued weights,
initialized randomly with uniform distribution in a small
interval [−w0, +w0].
23. INFOSPIDER AGENT REPRESENTATION
The keywords represent an agent’s opinion of what terms
best discriminate documents relevant to the user from the
rest.
The weights represent the interactions of such terms with
respect to relevance. The association of an agent’s
keyword vector with its neural net highlights the
significant difference between the representation in this
model and the vector space model.
The neural net has a real-valued input for each keyword
in its genotype and a single output unit.
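The genotype-plus-net idea can be sketched as a single-layer network: one real-valued input per genotype keyword (e.g. a match score for that keyword near a link) feeding one output unit. The class name, the tanh activation, and the default values below are illustrative assumptions, not the prototype's actual code:

```python
import math
import random

class AgentGenotype:
    """Sketch of an InfoSpiders-style genotype: a trust parameter beta,
    a keyword list, and one weight per keyword in [-w0, +w0]."""

    def __init__(self, keywords, beta=0.5, w0=0.1, seed=None):
        rng = random.Random(seed)
        self.beta = beta                    # trust in link descriptions
        self.keywords = list(keywords)      # initialized with query terms
        # one real-valued weight per keyword, uniform in [-w0, +w0]
        self.weights = [rng.uniform(-w0, w0) for _ in self.keywords]

    def score(self, inputs):
        """Single output unit: squashed weighted sum of the per-keyword
        inputs (one input value per genotype keyword)."""
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return math.tanh(total)
```

Because the weights interact with the keywords rather than being fixed term frequencies, the scoring differs from a plain vector-space match, which is the distinction the text draws.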
24. KASBAH (RATIONAL INFORMATION AGENTS )
Kasbah is a Web-based system which allows users to
create autonomous agents which buy and sell goods on
their behalf.
Kasbah is a Website where users go to buy and sell
things. They do this by creating buying and selling
agents, which then interact in the marketplace.
The marketplace needs to ensure that the agents
participating in it speak a common language.
25. KASBAH (RATIONAL INFORMATION AGENTS )
The marketplace will direct agents to areas of common interest
within it. This means that when an agent enters, the
marketplace asks what it is buying or selling, and
directs it to other agents buying and selling the same kinds of
things.
The other agents in the marketplace are also notified of the
arrival of the new agent.
For example, there might be a tent for cars, a tent for
apartments in Cambridge, a tent for stereo equipment, etc.
The marketplace also determines the terminology spoken,
that is, how goods are described. In Kasbah, this terminology
will be extendible by users.
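The routing described above can be sketched as a registry of "tents": each arriving agent is placed by category, and the counterpart agents already present (buyers for a seller, sellers for a buyer) are the ones notified. Class and method names are illustrative, not Kasbah's API:

```python
class Marketplace:
    """Toy marketplace that routes agents to the tent for their category."""

    def __init__(self):
        self.tents = {}     # category -> list of (agent, role) pairs

    def enter(self, agent, role, category):
        """Place an agent in the tent for its category and return the
        counterpart agents already present, i.e. those that would be
        notified of (and interested in) the new arrival."""
        tent = self.tents.setdefault(category, [])
        counterparts = [a for a, r in tent if r != role]
        tent.append((agent, role))
        return counterparts
```

A seller entering the "cars" tent thus immediately learns which buyer agents are waiting there, and vice versa, which is exactly the matchmaking the tents exist for.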
27. SCALABLE MOBILE AND RELIABLE TECHNOLOGY
(SMART) (MOBILE INFORMATION AGENTS )
SMART is a four-tiered architecture built on top of the Java
virtual machine.
The lowest layer is the Region Administrator, which runs directly on
the Java virtual machine. It manages a set of agent systems and
enforces security policies on them. The Finder module at this level
offers the region administrators and the layers above it the naming
service.
The next layer is the Agent System layer. This layer acts as the
world of agents, allowing them to be created, to migrate, and to be
destroyed within that world.
This layer can host multiple contexts, called places, where agents
execute. The layer on top of this one is the agent context layer,
also called the Place.
28. SCALABLE MOBILE AND RELIABLE TECHNOLOGY
(SMART) (MOBILE INFORMATION AGENTS )
The topmost layer is the Agent Proxy layer. This layer
constitutes the mobile agent API, which can be used by
applications written in SMART.
The agent proxy communicates with the place server on
which the agent currently resides. The place server is
designed as an RMI server (a non-CORBA object).
The place server interacts with the agent system, to which
the agent wants to migrate.
The place server may also communicate with its parent
agent system for certain agent management operations
such as register, locate and unregister.
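The register/locate/unregister interplay between a place and the region's naming service might look like the following sketch. The class names follow the text's terminology, but every method signature here is an assumption for illustration, not SMART's real API:

```python
class Finder:
    """Naming service at the Region Administrator layer."""

    def __init__(self):
        self.registry = {}                 # agent name -> place name

    def register(self, agent, place):
        self.registry[agent] = place

    def locate(self, agent):
        return self.registry.get(agent)

    def unregister(self, agent):
        self.registry.pop(agent, None)

class Place:
    """Agent context holding the agents executing in it, delegating
    naming operations to the region's Finder."""

    def __init__(self, name, finder):
        self.name = name
        self.finder = finder
        self.agents = set()

    def admit(self, agent):
        self.agents.add(agent)
        self.finder.register(agent, self.name)

    def migrate(self, agent, target):
        """Move an agent to another place; the naming service is
        updated when the target place admits the agent."""
        self.agents.discard(agent)
        target.admit(agent)
```

A migrating agent thus remains locatable throughout: the Finder always maps it to the place that most recently admitted it.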
30. CONCLUSION
The future of information agents in database information
retrieval and in Web search is promising. There is no
getting around the fact that there are simply too many
sources out there for a person sitting at his/her computer
to sift through in search of specific information. It is
much easier to submit a query to an agent and let it find
the information you need. This saves both time and
frustration in following links across the Internet, and it
promises a bright future for intelligent search agents.