The document describes the development of an intelligent agent using JADE that is linked to a knowledge base implemented with Protégé and Algernon. The agent delivers useful information to users from the web or from other agents based on their preferences. The knowledge base contains ontologies defined in Protégé and facts that can be queried using if-then rules in Algernon. The example application was developed in Java Studio Creator to demonstrate an intelligent information agent.
Computer Aided Development of Fuzzy, Neural and Neuro-Fuzzy Systems (IJEACS)
Development of an expert system is difficult because of two challenges involved in it. The first is that an expert system is a high-level system that deals with knowledge, which makes it difficult to handle. Second, systems development is more art than science; hence there is little guidance available about the development process. This paper describes computer-aided development of intelligent systems using modern artificial intelligence technology. It illustrates the design of a reusable generic framework to support friendly development of fuzzy, neural network, and hybrid systems such as neuro-fuzzy systems. Reusable component libraries for fuzzy logic based systems, neural network based systems, and hybrid systems such as neuro-fuzzy systems are developed and accommodated in this framework. The paper demonstrates code snippets, interface screens, and an overview of the class libraries with the necessary technical details.
A Comparative Study of Recent Ontology Visualization Tools with a Case of Dia... (IJORCS)
An ontology is a conceptualization of a domain in a machine-readable format. Ontologies are becoming increasingly popular modelling schemas for knowledge management services and applications. Focus on developing tools to graphically visualise ontologies is rising to aid their assessment and analysis. Graph visualisation helps users browse and comprehend the structure of ontologies. A number of ontology visualizations exist that have been embedded in ontology management tools. The primary goal of this paper is to analyze recently implemented ontology visualization tools and their contributions to the enrichment of users' cognitive support. This work also presents the preliminary results of an evaluation of three visualization tools to determine the suitability of each method for end-user applications where ontologies are used as browsing aids, with a case of diabetes data.
Concept integration using edit distance and n-gram match (ijdms)
The rapid growth of information on the World Wide Web (WWW) has made it necessary to make all this information available not only to people but also to machines. Ontologies and tokens are widely used to add semantics to data processing and information processing. A concept formally refers to the meaning of a specification encoded in a logic-based language; explicit means that the concepts and properties of the specification are machine readable; and a conceptualization models how people think about things in a particular subject area. In the modern scenario, many ontologies have been developed on various topics, resulting in increased heterogeneity of entities among the ontologies. Concept integration has become vital over the last decade as a tool to minimize heterogeneity and empower data processing. There are various techniques to integrate concepts from different input sources based on semantic or syntactic match values. In this paper, an approach is proposed to integrate concepts (ontologies or tokens) using edit distance or n-gram match values between pairs of concepts, with concept frequency used to guide the integration process. The performance of the proposed techniques is compared with semantic-similarity-based integration techniques on quality parameters such as recall, precision, F-measure, and integration efficiency over different sizes of concepts. The analysis indicates that edit-distance-based integration outperformed n-gram integration and semantic similarity techniques.
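The two syntactic measures this abstract names (edit distance and character n-gram match) can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the function names, the bigram choice n=2, and the thresholding idea are assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming over a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def ngram_match(a: str, b: str, n: int = 2) -> float:
    # Dice coefficient over sets of character n-grams, in [0.0, 1.0].
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a), grams(b)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# In a scheme like the one described, two concept labels would be
# merged when either measure clears a chosen threshold.
print(edit_distance("colour", "color"))                 # 1
print(round(ngram_match("ontology", "ontologies"), 2))  # 0.75
```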
Army Study: Ontology-based Adaptive Systems of Cyber Defense (RDECOM)
The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.
Artificial Neural Networks: Applications In Management (IOSR Journals)
With the advancement of computer and communication technology, the tools used for management decisions have undergone a gigantic change. Finding more effective solutions and tools for managerial problems is one of the most important topics in management studies today. Artificial Neural Networks (ANNs) are one such tool and have become a critical component of business intelligence. The purpose of this article is to describe the basic behavior of neural networks as well as the work done on their application in the management sciences, and to stimulate further research interest and effort in the identified topics.
UML MODELING AND SYSTEM ARCHITECTURE FOR AGENT BASED INFORMATION RETRIEVAL (ijcsit)
In the current technological era, there is an enormous increase in the information available on the web and in online databases. This abundance of information increases the complexity of finding relevant information. To solve such challenges, there is a need for improved, intelligent systems for efficient search and retrieval. Intelligent agents can be used for better search and information retrieval in a document collection. The information required by a user is scattered across a large number of databases. In this paper, the object-oriented modeling of an agent-based information retrieval system is presented. The paper also discusses the framework of an agent architecture for obtaining the best combination of terms to serve as an input query to the information retrieval system. The communication and cooperation among the agents are also explained. Each agent has a task to perform in information retrieval.
A Semi-Automatic Ontology Extension Method for Semantic Web Services (IDES Editor)
This paper provides a novel semi-automatic ontology extension method for Semantic Web Services (SWS). This is significant because the ontology extension methods existing in the literature mostly deal with the semantic description of static Web resources such as text documents. Hence, there is a need for methods that can serve dynamic Web resources such as SWS. The method developed in this paper avoids redundancy and respects consistency so as to assure the high quality of the resulting shared ontologies.
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ... (ijaia)
Objects or structures that are regular have uniform dimensions. Based on the concepts of regular models, our previous research work developed a regular ontology that models learning structures in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology has led to the modelling of a classified-rules learning algorithm that predicts the actual number of rules needed for inductive learning processes and decision making in a multiagent system. But not all processes or models are regular. Thus, this paper presents a system of polynomial equations that can estimate and predict the required number of rules of a non-regular ontology model, given some defined parameters.
The Optimization of choosing Investment in the capital markets using artifici... (inventionjournals)
Optimization is one of the crucial topics in the behavioural sciences. These days the use of metaheuristics has grown considerably in all fields. In this study, we look for an optimal selection from a portfolio of investment opportunities, using a metaheuristic algorithm called artificial neural networks as the selection logic. The results showed that using the artificial neural network algorithm optimized decision-making and the selection of investment opportunities. Considering its purpose, the research is applied, and it seeks to develop knowledge in a particular field.
AUTOMATED DISCOVERY OF LOGICAL FALLACIES IN LEGAL ARGUMENTATION (ijaia)
This paper presents a model of an algorithmic framework and a system for the discovery of non sequitur fallacies in legal argumentation. The model operates on formalised legal text implemented in Prolog. Different parts of the formalised legal text for legal decision-making processes, such as the claim of a plaintiff, the piece of law applied to the case, and the decision of the judge, are assessed by the algorithm to detect fallacies in an argument. We provide a mechanism designed to assess the coherence of every premise of a claim, its logical structure, and its legal consistency with the corresponding piece of law at each stage of the argumentation. The modelled system checks the validity and soundness of a claim, as well as the sufficiency and necessity of the premises of arguments. We assert that dealing with the challenges of validity, soundness, sufficiency, and necessity resolves fallacies in argumentation.
ONTOLOGY VISUALIZATION PROTÉGÉ TOOLS – A REVIEW (ijait)
Protégé is one of the most popular tools for ontology visualization. The Protégé tools are applied in various disciplines to further the understanding of knowledge. These tools commonly use four methods of ontology visualization, namely indented list, node-link and tree, zoomable, and focus+context. The purpose of this work is to present a study of the application of these four methods in the development of different kinds of Protégé visualization tools, and to categorize their characteristics and features so as to assist in method selection and promote future research in the area of ontology visualization.
Semantic Web in Action: Ontology-driven information search, integration and a... (Amit Sheth)
Amit Sheth's Keynote talk given at: “Semantic Web in Action: Ontology-driven information search, integration and analysis,” Net Object Days 2003 and MATES03, Erfurt, Germany, September 23, 2003. http://knoesis.org
Note: slides 51-55 have audio.
Towards Ontology Development Based on Relational Database (ijbuiiir1)
Ontology is defined as a formal, explicit specification of a shared conceptualization. It has been widely used in almost all fields, especially artificial intelligence, data mining, and the semantic web. It is constructed using various sets of resources. It has now become a very important task to improve the efficiency of ontology construction. In order to improve efficiency, an automated method of building an ontology from a database resource is needed. Since manual construction has been found to be error-prone and below expectations, automatic construction of an ontology from a database was devised. Construction rules for ontology building from relational data sources are then put forward. Finally, an ontology for "automated building of ontology from relational data sources" has been implemented.
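The kind of table-to-class, column-to-property mapping this abstract alludes to can be illustrated with a minimal Python sketch. The rule set and the triple vocabulary used here are simplified assumptions for illustration, not the paper's actual construction rules.

```python
def schema_to_triples(schema: dict) -> list:
    # Hypothetical simplified mapping: each table becomes an OWL class,
    # each column becomes a datatype property whose domain is that class.
    triples = []
    for table, columns in schema.items():
        triples.append((table, "rdf:type", "owl:Class"))
        for col in columns:
            prop = f"{table}.{col}"
            triples.append((prop, "rdf:type", "owl:DatatypeProperty"))
            triples.append((prop, "rdfs:domain", table))
    return triples

for t in schema_to_triples({"Patient": ["name", "dob"]}):
    print(t)
```

A real system would also map foreign keys to object properties and column types to XSD datatypes; this sketch only shows the core idea.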
Feature analysis of ontology visualization methods and tools (CSITiaesprime)
Visualization is a technique of creating images, graphs or animations to share knowledge. Different kinds of visualization methods and tools are available to envision the data in an efficient way. The visualization tools and techniques enable the user to understand the knowledge in an easy manner. Nowadays most of the information is presented semantically which provides knowledge based retrieval of the information. Knowledge based visualization tools are required to visualize semantic concepts. This article analyses the existing semantic based visualization tools and plug-ins. The features and characteristics of these tools and plug-ins are analyzed and tabulated.
Novel Database-Centric Framework for Incremental Information Extraction (ijsrd.com)
Information extraction (IE) has been an active research area that seeks techniques to uncover information from large collections of text. IE is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents. In most cases this activity concerns processing human-language texts by means of natural language processing (NLP). Recent activities in document processing, such as automatic annotation and content extraction, can be seen as information extraction. Many applications call for methods that enable automatic extraction of structured information from unstructured natural-language text. Due to the inherent challenges of natural language processing, most existing methods for information extraction from text tend to be domain specific. In this project, a new paradigm for information extraction is proposed. In this extraction framework, the intermediate output of each text-processing component is stored, so that only an improved component has to be redeployed over the entire corpus. Extraction is then performed on both the previously processed data from the unchanged components and the updated data generated by the improved component. Performing such incremental extraction can result in a tremendous reduction of processing time, and there is a mechanism to generate extraction queries from both labeled and unlabeled data. Query generation is critical so that casual users can specify their information needs without learning the query language.
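The incremental idea described above (cache each component's intermediate output so that only an improved component is re-run over the corpus) can be sketched as follows. The component names, the versioning scheme, and the toy tokenizer/extractor are illustrative assumptions, not the framework's actual design.

```python
cache = {}  # (component, version, doc_id) -> stored intermediate output

def run_component(component, version, fn, doc_id, data):
    # Reuse the stored output when this component version already ran
    # on this document; otherwise compute and store it.
    key = (component, version, doc_id)
    if key not in cache:
        cache[key] = fn(data)
    return cache[key]

def extract(doc_id, text, extractor_version=1):
    # Tokenizer output is cached, so upgrading only the extractor
    # (bumping extractor_version) does not re-tokenize the corpus.
    tokens = run_component("tokenize", 1, str.split, doc_id, text)
    return run_component("extract", extractor_version,
                         lambda toks: [w for w in toks if w.istitle()],
                         doc_id, tokens)

print(extract("d1", "Alice met Bob in Paris"))  # ['Alice', 'Bob', 'Paris']
```

In a database-centric framework the `cache` dictionary would be a table of intermediate results, but the reuse logic is the same.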
Implementing an ATL Model Checker tool using Relational Algebra concepts (infopapers)
Florin Stoica, Laura Florentina Stoica, Implementing an ATL Model Checker tool using Relational Algebra concepts, Proceedings of the 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split-Primosten, Croatia, 2014
Deliver Dynamic and Interactive Web Content in J2EE Applications (infopapers)
F. Stoica, Deliver dynamic and interactive Web content in J2EE applications, Proceedings of the Central and East European Conference in Business Information Systems, Cluj-Napoca, Romania, ISBN 973-656-648-X, pp. 780-789, 2004
An Executable Actor Model in Abstract State Machine Language (infopapers)
F. Stoica, An executable Actor model in Abstract State Machine Language, The Proceedings of the International Conference on Computers and Communications, Oradea, ISBN 973-613-542-X, pp. 388-393, 2004
Laura F. Cacovean, Florin Stoica, Dana Simian, A New Model Checking Tool, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, pp. 358-363, April 28-30, 2011
CTL Model Update Implementation Using ANTLR Tools (infopapers)
L. Cacovean, F. Stoica, CTL Model Update Implementation Using ANTLR Tools, Proceedings of the 13th WSEAS International Conference on COMPUTERS, Rodos, Greece, July 23-25, 2009, ISSN: 1790-5109, ISBN: 978-960-474-099-4
Generating JADE agents from SDL specifications (infopapers)
F. Stoica, Generating JADE agents from SDL specifications, International Journal of Computers, Communications & Control, Supplementary Issue, Volume I, ISSN 1841-9836, pp. 429-438, 2006
A general frame for building optimal multiple SVM kernels (infopapers)
Dana Simian, Florin Stoica, A General Frame for Building Optimal Multiple SVM Kernels, Large-Scale Scientific Computing, Lecture Notes in Computer Science, 2012, Volume 7116/2012, 256-263, DOI: 10.1007/978-3-642-29843-1_29
Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Be... (infopapers)
Dana Simian, Florin Stoica, Corina Simian, Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Behaviour, Lecture Notes in Computer Science, LNCS 5910 (2010), I. Lirkov, S. Margenov, and J. Wasniewski (Eds.), Springer-Verlag Berlin Heidelberg, pp. 361-368
An evolutionary method for constructing complex SVM kernels (infopapers)
D. Simian, F. Stoica, An Evolutionary Method for Constructing Complex SVM Kernels, Recent Advances in Mathematics and Computers in Biology and Chemistry, Proceedings of the 10th International Conference on Mathematics and Computers in Biology and Chemistry, MCBC'09, Prague, Czech Republic, WSEAS Press, ISBN 978-960-474-062-8, ISSN 1790-5125, pp. 172-178, 2009
Evaluation of a hybrid method for constructing multiple SVM kernels (infopapers)
Dana Simian, Florin Stoica, Evaluation of a hybrid method for constructing multiple SVM kernels, Recent Advances in Computers, Proceedings of the 13th WSEAS International Conference on Computers, Recent Advances in Computer Engineering Series, WSEAS Press, Rodos, Greece, July 23-25, 2009, ISSN: 1790-5109, ISBN: 978-960-474-099-4, pp. 619-623
Interoperability issues in accessing databases through Web Services (infopapers)
Florin Stoica, Laura Florentina Cacovean, Interoperability Issues in Accessing Databases through Web Services, Proceedings of the Recent Advances in Neural Networks, Fuzzy Systems & Evolutionary Computing, 13-15 June 2010, Iaşi, Romania, ISSN: 1790-2769, ISBN: 978-960-474-194-6, pp. 279-284
Using Ontology in Electronic Evaluation for Personalization of eLearning Systems (infopapers)
I. Pah, F. Stoica, L. F. Cacovean, E. M. Popa, Using Ontology in Electronic Evaluation for Personalization of eLearning Systems, Proceedings of the 8th WSEAS International Conference on APPLIED INFORMATICS and COMMUNICATIONS (AIC’08), Rhodes, Greece, August 20-22, ISSN: 1790-5109, ISBN: 978-960-6766-94-7, pp. 332-337, 2008
An AsmL model for an Intelligent Vehicle Control System (infopapers)
F. Stoica, An AsmL model for an Intelligent Vehicle Control System, Proceedings of the 11th WSEAS Int. Conf. on COMPUTERS: Computer Science and Technology, vol. 4, Crete Island, Greece, ISBN: 978-960-8457-92-8, pp. 323-328, July 2007
Using genetic algorithms and simulation as decision support in marketing stra... (infopapers)
F. Stoica, L. F. Cacovean, Using genetic algorithms and simulation as decision support in marketing strategies and long-term production planning, Proceedings of the 9th WSEAS International Conference on SIMULATION, MODELLING AND OPTIMIZATION (SMO '09), Budapest Tech, Hungary, September 3-5, ISSN: 1790-2769, ISBN: 978-960-474-113-7, pp. 435-439, 2009
Models for a Multi-Agent System Based on Wasp-Like Behaviour for Distributed ... (infopapers)
D. Simian, F. Stoica, C. Simian, Models for a Multi-Agent System Based on Wasp-like Behaviour for Distributed Patients Repartition, Proceedings of the 9th WSEAS International Conference on Evolutionary Computing, Sofia, Bulgaria, ISBN 978-960-6766-58-9, ISSN 1790-5109, pp. 82-86, May 2008
F. Stoica, D. Simian, C. Simian, A new co-mutation genetic operator, Proceedings of the 9th WSEAS International Conference on Evolutionary Computing, Sofia, Bulgaria, ISBN 978-960-6766-58-9, ISSN 1790-5109, pp. 76-81, May 2008
Modeling the Broker Behavior Using a BDI Agent (infopapers)
Laura Florentina Cacovean, Florin Stoica, Modeling the Broker Behavior Using a BDI Agent, Proceedings of the 14th WSEAS International Conference on Computers (CSCC), 22-25 July, 2010, Corfu, Greece, ISSN: 1792-4391, ISBN: 978-960-474-206-6, pp. 699-703
Algebraic Approach to Implementing an ATL Model Checker (infopapers)
Laura Florentina Stoica, Florian Mircea Boian, Algebraic Approach to Implementing an ATL Model Checker, STUDIA Univ. Babes Bolyai, INFORMATICA, Volume LVII, Number 2, 2012, pp. 73-82
Generic Reinforcement Schemes and Their Optimization (infopapers)
Dana Simian, Florin Stoica, Generic Reinforcement Schemes and Their Optimization, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, April 28-30, 2011, pp. 332-337
Richard's entangled adventures in wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
This pdf is about the Schizophrenia.
For more details visit on YouTube; @SELF-EXPLANATORY;
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Thanks...!
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for the ultra-fast high-resolution imaging of cellular processes over time and space and were studied in its natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provide insights into the progression of disease, response to treatments or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enables researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction as well as vascularization and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allows for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancements of novel therapeutic strategies.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules - a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Kreb's cycle. The Kreb's cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
introduction to WARBERG PHENOMENA:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARNBURG EFFECT : cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Intelligent agents in ontology-based applications
FLORIN STOICA
Computer Science Department
“Lucian Blaga” University Sibiu
Str. Dr. Ion Ratiu 5-7, 550012, Sibiu
ROMANIA
IULIAN PAH
Department of Sociology
“Babes-Bolyai” University Cluj-Napoca
Bd.21 decembrie 1989, no.128-130, 400604,
Cluj-Napoca, ROMANIA
Abstract: - Development of intelligent agents is not a trivial task. In this paper, a Web-interfaced JADE agent is linked
to a knowledge-base system in order to get intelligent behaviour. The knowledge-base system was implemented using
Protégé-2000 as a tool for modeling ontologies and Algernon as the inference engine. The example application was
developed in Sun Java Studio Creator 2, an Integrated Development Environment (IDE) for developing state-of-the-art
JavaServer Faces Web applications.
Key-Words: - agent, JADE, Protégé, Algernon, Knowledge-based systems
1 Introduction
When we talk about intelligent agents, the question often arises: what do we mean by intelligence? In this paper,
we consider that an agent is intelligent if it acts
rationally. More specifically, intelligence refers to the
ability of the agent to capture and apply application
domain-specific knowledge and processing to solve
problems.
Intelligent behavior can be produced by the
manipulation of symbols. Symbols are tokens that
represent real-world objects or ideas and can be
represented inside a computer by character strings or by
numbers. In this approach, a problem must be
represented by a collection of symbols, and then an
appropriate algorithm must be developed to process
these symbols.
There are several typical ways of manipulating
symbols that have proven useful in solving problems.
The most common approach is to use symbols in
formulations of if-then rules that are processed using
reasoning techniques called forward and backward
chaining.
In this paper, we develop an example application based on an information agent, used to deliver useful information to the user according to his preferences. This information is gathered by the agent from the Web using web services or through collaboration with other agents, and inserted into a knowledge base. The knowledge base is then queried using a set of if-then rules and an inference engine. The rules are defined dynamically by the agent according to user preferences.
We are investigating some tools (Protégé, Algernon,
JADE) used in the implementation of an intelligent agent
with reasoning capabilities.
2 Knowledge representation
What is knowledge? We adopt the following definition
for knowledge: “the fact or condition of being aware of
something” [8]. But how do we make a computer aware
of something? This problem, called knowledge
representation, is one of the first, most fundamental
issues that researchers in artificial intelligence had to
face. There are many different kinds of knowledge we
may want to represent: simple facts or complex
relationships, rules for natural language syntax,
associations between related concepts, inheritance
hierarchies between classes of objects. In addition to
being easy to use, a good knowledge representation also
must be easily modified and extended, either by
changing the knowledge using a GUI tool, or through
automatic techniques.
The most popular knowledge representation is
declarative representation. In declarative knowledge
representation, a user simply states the ontology and
specific instances of data, which represent pure
knowledge.
An ontology is a formal explicit description of concepts in a domain of discourse (classes, sometimes called concepts), of the properties of each concept describing its various features and attributes (slots, sometimes called roles or properties), and of the relationships between the concepts.
The knowledge base is the central repository of
information containing the ontology and specific
instances of data (a set of individual instances of classes
- the facts known about objects).
The process of mapping the set of knowledge in a
particular problem domain and converting it into a
knowledge base is called knowledge engineering or
knowledge acquisition. While essential, the knowledge
acquisition is a difficult and costly process, in which
12th WSEAS International Conference on COMPUTERS, Heraklion, Greece, July 23-25, 2008
ISBN: 978-960-6766-85-5 274 ISSN: 1790-5109
several distinct roles are identified: a domain expert and a
knowledge engineer. A knowledge engineer is a person
who can take the domain knowledge and represent it in a
form for use by the reasoning system. Using an
appropriate tool for this task (such as Protégé) may be
very useful.
A reasoning system is used in conjunction with a
knowledge base and with a set of if-then rules to answer
questions and solve problems regarding the domain.
3 Knowledge-based systems
A knowledge-based system is the common term used to
describe a rule-based processing system. If-then rules are
easily manipulated by reasoning systems because if-then
rules are easily understandable, each rule can be viewed
as standalone unit of information, new knowledge can be
easily added, and existing knowledge can be easily
changed by creating or modifying individual rules.
A knowledge-based system consists of four major
elements: a knowledge base (ontology + a set of
individual instances of classes), a set of if-then rules, a
working memory or database of derived facts and data,
an inference engine, which contains the reasoning logic
used to process rules and data.
Fig. 1 Architecture of a knowledge-based system
The reasoning logic of an inference engine is based
on forward chaining and backward chaining [8].
Forward chaining is a data-driven reasoning process in which a set of rules is used to derive new facts from an initial set of data. The forward-chaining algorithm generates new data by calling effector procedures (procedural program code) or by firing rules.
Backward chaining is often called goal-directed inferencing, because a particular consequence or goal clause is evaluated first, and then the algorithm works backward through the rules. Backward chaining uses rules to answer questions about whether a goal clause is true or not, processing only the rules that are relevant to the question. One advantage of backward chaining is that, because the inferencing is directed, information can be requested from the user when it is needed.
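The two reasoning styles can be sketched in a few lines of plain Java (an illustrative toy, not the algorithm of any particular engine; the fact and rule names are invented for the example):

```java
import java.util.*;

// Minimal illustration of forward vs. backward chaining over if-then rules.
// Facts are plain strings; a rule derives its consequent once all of its
// antecedents are present in the fact set.
public class ChainingDemo {
    record Rule(List<String> ifAll, String then) {}

    static final List<Rule> RULES = List.of(
        new Rule(List.of("departure-ok", "return-ok", "price-ok"), "rank-best"),
        new Rule(List.of("rank-best"), "notify-user"));

    // Forward chaining: fire every applicable rule until no new facts appear.
    public static Set<String> forward(Set<String> facts) {
        Set<String> derived = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : RULES)
                if (derived.containsAll(r.ifAll()) && derived.add(r.then()))
                    changed = true;
        }
        return derived;
    }

    // Backward chaining: a goal holds if it is a known fact, or if some rule
    // concludes it and all of that rule's antecedents hold recursively.
    public static boolean backward(String goal, Set<String> facts) {
        if (facts.contains(goal)) return true;
        for (Rule r : RULES)
            if (r.then().equals(goal)
                    && r.ifAll().stream().allMatch(g -> backward(g, facts)))
                return true;
        return false;
    }

    public static void main(String[] args) {
        Set<String> facts = Set.of("departure-ok", "return-ok", "price-ok");
        System.out.println(forward(facts).contains("notify-user")); // true
        System.out.println(backward("notify-user", facts));         // true
    }
}
```

Note that forward chaining derives everything derivable from the data, while backward chaining touches only the rules relevant to the goal clause.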
4 Tools for implementing a Knowledge-based system
4.1 Comparative study of tools
When starting out on an ontology project, the first and
reasonable reaction is to find a suitable ontology
software editor. The ability to organize and manage an
emerging ontology is very important to an editor's
usability. Convenient and intuitive presentations and
manipulations of ontology’s interlinking concepts and
relations are essential. Because many ontology models support multiple inheritance in the concept and relation hierarchies, keeping the associations straight is a challenge. A graph presentation is less common, although it can be quite useful for actual ontology-editing functions that change concepts and relations.
Finally, it is worth considering the inferencing support
afforded by the ontology editor (beyond classification in
description logic editors). While ontologies themselves
can be treated as standalone specifications, they are
ultimately used to help answer queries about a body of
information. Some editors incorporate the ability to add
additional axioms and deductive rules to the ontology for
evaluation within the defined target of the development
environment.
The survey presented in [10] covers software tools
that have ontology editing capabilities and are in use
today. The tools may be useful for building ontology
schemas (terminological component) alone or together
with instance data.
The following implementation levels were considered: 0 = Nil, 1 = Poor, 2 = OK, 3 = Good, 4 = Very Good, 5 = Excellent.
Criteria                    Ontolingua   Protégé-2000   OntoEdit
Clarity of interface             3            5            5
Interface consistency            4            5            5
Meaning of commands              2            4            4
Visualization                    2            5            5
Ontology overview                2            5            5
HCI                             13           24           24
Local installation               0            5            5
Updating speed                   2            4            4
Help system                      4            5            2
Operational Aspects              6           14           11
Stability of the tool            3            5            5
User support                     4            5            0
Features of free version         5            5            2
Tool Support Features           12           15            7
Total                           31           53           42

Table 1 Comparison of Usability Aspects [10]
Criteria                    Ontolingua   Protégé-2000   OntoEdit
Multiple inheritance             4            5            0
Exhaustive decomposition         4            0            0
Disjoint decomposition           5            5            5
Structural aspects              13           10            5
Example ontologies               3            5            0
Ontology library                 5            3            0
Library                          8            8            0
Java based                       0            5            5
Database backend                 0            5            0
Implementation features          0           10            5
Total                           21           28           10

Table 2 Comparison of Ontological Aspects [10]
4.2 Protégé-2000 and Algernon
In order to implement a knowledge-based system, we are using Protégé-2000 for ontology development and knowledge acquisition, and Algernon as the inference engine.
Why select Protégé? The Protégé-2000 tool provides access to all of its functionality through a uniform GUI (graphical user interface) whose top level consists of overlapping tabs for compact presentation of the parts and for convenient co-editing between them.
This "tabbed" top-level design permits an integration of the modeling of an ontology of classes describing a particular subject, the creation of a knowledge-acquisition tool for collecting knowledge, the entering of specific instances of data and creation of a knowledge base, and the execution of applications [6].
Besides its user-friendly interface, plug-in architecture and other features mentioned above, Protégé 2000 supports collaborative ontology editing. In multi-user mode, Protégé 2000 allows multiple clients to simultaneously edit the same ontology hosted on a Protégé server. All changes made by one client are immediately visible to the other clients.
The main assumption of Protégé-2000 is that
knowledge-based systems are usually very expensive to
build and maintain. Protégé-2000 is designed to guide
developers and domain experts through the process of
system development. Protégé-2000 is designed to allow
developers to reuse domain ontologies and problem-solving methods, thereby shortening the time needed for
development and program maintenance. Several
applications can use the same domain ontology to solve
different problems, and the same problem-solving
method can be used with different ontologies.
Algernon can be used from stand-alone applications
(in our example, the Algernon API will be used in order
to call the Algernon engine from a JADE agent) or as a
Protégé tab plug-in that allows all operations to be
performed from within the Protégé GUI.
Algernon is an inference engine that supports both
forward and backward chaining rules. Also, Algernon is
suitable for reading and writing Protégé knowledge
bases, providing a concise way to retrieve and store
information in a knowledge base (KB). Likewise, calls
out to Java and LISP for non-KB calculations are
possible [7].
Algernon's syntax for a clause is like a predicate:
(slot frame value). A path is a sequence of clauses.
In Algernon, a variable starts with a question mark
(example: ?name). Variables do not have to be declared,
but they are implicitly assigned a type according to their
first use. They keep that type throughout the scope of
their current path. A variable may be bound (assigned a
value) through a query or through explicit assignment
(with the :BIND command). A variable’s binding is
passed to all succeeding clauses in the path.
A ground clause (assert) either contains no variables,
or all of its variables have been bound by previous
clauses in the path. When Algernon processes a ground
clause, it will assert the information into the KB if it is
not there already. If it is already there the clause will
succeed, but will not change the KB. Asserting new
information into the KB will cause Algernon to fire
relevant forward-chaining rules.
A non-ground clause (query) contains an unbound
variable. When Algernon processes a non-ground clause,
i.e. the knowledge base is queried, it will first fire any
relevant backward-chaining rules, and then it will query
the KB for the information in the clause.
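Using only the constructs just described (the (slot frame value) clause form and ?-prefixed variables), the contrast can be sketched as follows; the frame name F101 is an illustrative example, not an instance from the actual knowledge base:

```
;; Ground clause: no unbound variables. Processing it asserts the
;; fact into the KB (if not already there) and fires any relevant
;; forward-chaining rules.
((price F101 95))

;; Non-ground clause: ?p is unbound, so this is a query. Relevant
;; backward-chaining rules fire first, then the KB is searched.
((price F101 ?p))
```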
In order to fire, the key clause of the rule must match
the new fact. In forward chaining rules the key clause is
the first clause in the antecedent. In backward chaining
rules the key clause is the first clause in the consequent.
A rule can contain commands that perform KB
retrievals, KB assertions, KB class, instance and slot
creation commands, and other Algernon operators that
print output, retrieve the current date, call external Java
or LISP routines, etc.
Algernon's facilities include [7]: interleaved forward and backward chaining; direct manipulation of and interaction with Protégé knowledge bases; operators that create and delete classes, instances and slots, and retrieve and store slot values; access to multiple concurrent KBs; and a Protégé tab plug-in that allows all operations to be performed from within the Protégé GUI.
5 General architecture of ontology-based application
One of the main reasons for building an ontology-based
application is to use a reasoner to derive additional truths
about the concepts we are modeling and/or to answer
queries and solve problems regarding the domain.
Figure 2, taken from [9], shows the proposed layers of Tim Berners-Lee's Semantic Web architecture, with the higher-level languages using the syntax (and semantics) of the lower-level languages.
Fig. 2 Tim Berners-Lee's Semantic Web layered model
Our application focuses primarily on the ontology development level, on the rules describing the logic used by an inference engine to answer queries about requested information, and on the sort of agent-based computing that enables exploitation of the constructed knowledge-based system.
6 Reasoning JADE agents
JADE is a middleware that facilitates the development of
multi-agent systems and applications conforming to
FIPA standards for intelligent agents [1]. It includes: a
runtime environment where JADE agents can “live” and
that must be active on a given host before one or more
agents can be executed on that host, a library of classes
that programmers have to/can use (directly or by
specializing them) to develop their agents, and a suite of graphical tools that allow administering and monitoring the activity of running agents.
The computational model of an agent is multitask,
where tasks (or behaviours) are executed concurrently.
Each functionality/service provided by an agent should
be implemented as one or more behaviours. A scheduler,
internal to the base Agent class and hidden to the
programmer, automatically manages the scheduling of
behaviours.
A behaviour represents a task that an agent can carry
out and is implemented as an object of a class that
extends jade.core.behaviours.Behaviour. In order to
make an agent execute the task implemented by a
behaviour object it is sufficient to add the behaviour to
the agent by means of the addBehaviour() method of the
Agent class.
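The scheduling model can be pictured with a plain-Java sketch (illustrative only; JADE's real scheduler lives inside jade.core.Agent and is not reproduced here). Each behaviour exposes action() and done(), as JADE's Behaviour class does, and the scheduler runs the active behaviours round-robin, dropping each one once it reports completion:

```java
import java.util.*;

// Illustrative cooperative round-robin scheduler mimicking JADE's
// behaviour model. A Behaviour exposes action() (one unit of work)
// and done() (has the task completed?). This is a sketch of the idea,
// not JADE's actual implementation.
public class BehaviourDemo {
    public interface Behaviour { void action(); boolean done(); }

    public static final List<String> LOG = new ArrayList<>();

    // A behaviour that performs a fixed number of work units.
    public static Behaviour counting(String name, int times) {
        return new Behaviour() {
            int runs = 0;
            public void action() { LOG.add(name + runs++); } // one unit of work
            public boolean done() { return runs >= times; }
        };
    }

    // Round-robin: execute one action() per behaviour per pass,
    // removing each behaviour once it reports done().
    public static void schedule(List<Behaviour> behaviours) {
        List<Behaviour> queue = new ArrayList<>(behaviours);
        while (!queue.isEmpty())
            queue.removeIf(b -> { b.action(); return b.done(); });
    }

    public static void main(String[] args) {
        schedule(List.of(counting("a", 1), counting("b", 2)));
        System.out.println(LOG); // interleaved: [a0, b0, b1]
    }
}
```

The interleaved log shows why each agent service should be split into behaviours: a long task written as one monolithic action() would starve the others, since the scheduling is cooperative, not preemptive.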
In order to add reasoning capabilities to a JADE agent, it must be interfaced with Algernon. Using the Algernon Java API, the code is simple:
// Algernon engine instance
protected Algernon f_algy = null;
protected AlgernonKB f_kb = null;
// Protégé knowledge base project file
protected String f_prjFile = "airfarekb.pprj";

ErrorSet errors = new ErrorSet();
f_algy = new Algernon();
f_kb = new AlgernonProtegeKB(f_algy, f_prjFile);
f_algy.addKB(f_kb);
Result result = (Result) f_algy.ask(query, errors);
where query represents a valid Algernon path.
7 The example Web application
In the following, we describe a sample JADE agent which uses a rule base and backward chaining to determine when a discovered airfare is of interest to the user. The JADE agent is part of a Web application developed with Java Studio Creator 2, a powerful IDE based on JavaServer Faces technology [2].
When scheduling a trip, a user will often find that the
airfare fluctuates over time, based on different factors:
the number of seats available, how far in advance the trip
is booked, etc. To get the best fares available, the user
needs to keep checking the airline or travel services Web
site for flight schedules that meet his travel criteria,
waiting for a convenient price.
We suppose that our agent is getting the information
about published airfares by invoking web services or by
querying other agents.
Let's further suppose that the user desires to fly from Sibiu to Bucharest on April 1st and to return on April 11th. The user is willing to pay up to 100 € for his ideal flight times: he desires to leave Sibiu between 8 and 12 o'clock or after 17 o'clock, and to catch his return flight between 16 and 22 o'clock.
But often, when scheduling a trip, a user is willing to settle for less than ideal if the price is right. Thus, the user may take a flight that departs before 8 o'clock and returns between 16 and 22 o'clock, if the price is less than 80 €. Or he may be willing to leave before 8 o'clock and return after 22 o'clock if the price is less than 50 €.
Without using an agent, the user would periodically go to the Web site, enter his dates, and check the flights that are returned to see if any match his flight schedule and pricing criteria. If the prices are changing rapidly, the user may need to repeat this process quite often in order to get the right flights at the desired price. Our JADE agent does just that.
[Fig. 2 labels: Unicode; Universal Resource Indicator (URI); XML + namespaces + XML Schema + XML Query (self-describing documents); RDF + RDF Schema (meaning of data); Ontology; Logic + rules (describing logic); Proof; Trust; Digital signature; Encryption; Inference.]
7.1 Ontology development
Our proposed ontology contains three classes: Flight, Option and Rank. The slots of class Flight are described in figure 4 and have the following meaning (table 3):
Fig. 3 Classes from the Protégé ontology
Fig. 4 Slots of class Flight
Slot name             Description
d_date (r_date)       Departure (return) date
d_time (r_time)       Departure (return) time
d_option (r_option)   Captures the user preferences: the departure (return) time is desirable or undesirable. Values of these slots are instances of class Option.
from_City             Departure city
to_City               Destination city
number                Flight number
price                 Price of the trip
rank                  Captures the user preferences, involving the values of the slots d_option, r_option and price

Table 3. Ontology slots and their meaning
The class Rank has three instances: good, better and
best.
For testing purposes, we can use the Instance Editor of Protégé to introduce some instances of class Flight into the knowledge base (figure 5), and then we can test the reasoning rules with the Algernon plug-in, using the Algernon tab (figure 6).
Fig. 5 Protégé Instance Editor
Fig. 6 Testing rules in Algernon tab
7.2 Generating rules
After the user is questioned about his preferences, the JADE agent will generate rules to update the knowledge base (setting values for the slots d_option, r_option and rank of the instances of class Flight) and to deliver useful information to the user.
For example, the following backward-chaining rule will be used to decide whether the return time is in the desired range:
((:add-rule Flight
  ((r_option ?flight undesirable) <-
     (r_time ?flight ?time)
     (:TEST (:LISP (string< ?time "16.00"))))
  ((r_option ?flight desirable) <-
     (r_time ?flight ?time)
     (:TEST (:LISP (and (string> ?time "16.00")
                        (string< ?time "22.00")))))
  ((r_option ?flight undesirable) <-
     (r_time ?flight ?time)
     (:TEST (:LISP (string> ?time "22.00"))))
))
The following rule will be used to rate each set of flight options, and will assert a value for the rank slot. This slot can take the value good, better or best (instances of class Rank).
((:add-rule Flight
  ((rank ?flight best) <-
     (d_option ?flight ?d) (r_option ?flight ?r) (price ?flight ?p)
     (:NAME ?d ?dn) (:NEQ ?dn "undesirable")
     (:NAME ?r ?rn) (:NEQ ?rn "undesirable")
     (:TEST (:LISP (< ?p 100))))
  ...
  ((rank ?flight good) <-
     (d_option ?flight ?d) (r_option ?flight ?r) (price ?flight ?p)
     (:NAME ?d ?dn) (:NEQ ?dn "desirable")
     (:NAME ?r ?rn) (:NEQ ?rn "desirable")
     (:TEST (:LISP (< ?p 50))))
))
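The intent of these ranking rules can be restated in plain Java (a hedged paraphrase for readability: the elided "better" rule is not reproduced, the thresholds are those from the rule text, and the option values are compared as the instance names of class Option):

```java
// Plain-Java restatement of the two rank rules shown above. The elided
// "better" rule is intentionally omitted; cases it would cover fall
// through to "unranked" here.
public class RankDemo {
    public static String rank(String dOption, String rOption, int price) {
        // best: neither departure nor return is undesirable, price < 100
        if (!dOption.equals("undesirable") && !rOption.equals("undesirable")
                && price < 100)
            return "best";
        // good: neither departure nor return is desirable, price < 50
        if (!dOption.equals("desirable") && !rOption.equals("desirable")
                && price < 50)
            return "good";
        return "unranked"; // cases handled by the elided rule(s)
    }

    public static void main(String[] args) {
        System.out.println(rank("desirable", "desirable", 95));     // best
        System.out.println(rank("undesirable", "undesirable", 45)); // good
    }
}
```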
The defined backward rules are invoked by executing the following queries:
((:INSTANCE Flight ?f) (:trace :verbose)
(:CLEAR-RELATION ?f d_option)
(d_option ?f ?o))
((:INSTANCE Flight ?f) (:trace :verbose)
(:CLEAR-RELATION ?f r_option)
(r_option ?f ?r))
((:INSTANCE Flight ?f) (:trace :verbose)
(:CLEAR-RELATION ?f rank) (rank ?f ?r))
Finally, a last query is constructed to deliver the requested information to the user, according to his preferences captured from the application's Web page. Such a query may take the following form:
((:INSTANCE Flight ?f) (rank ?f better)
(d_date ?f "01.04.2008")
(r_date ?f "11.04.2008")
(from_City ?f "Sibiu")(to_City ?f "Bucuresti")
(:NAME ?f ?fn)(number ?f ?nr))
7.3 The JADE ProxyAgent
First of all, the JADE agent must be interfaced with a Web browser. A technical solution for this problem can be found in [4], which provides a general method for linking a JADE agent to a JavaServer Faces (JSF) component, in order to allow Web applications to be interfaced with a JADE platform.
The example JSF application is developed in Sun
Java Studio Creator 2, an Integrated Development
Environment (IDE) for developing state-of-the-art web
applications. Based on JSF technology [3], this IDE
simplifies writing Java code by providing well-defined
event handlers for incorporating business logic, without
requiring developers to manage details of transactions,
persistence, and other complexities.
The Web application is interfaced with a proxy-agent
running on a JADE platform, in order to retrieve and
display the requested information to the user. Each
user’s request is linked to a behavior of the JADE
ProxyAgent in charge of handling the request. In fact,
the ProxyAgent has three behaviours in order to provide
the required functionality:
- behaviour AddRules, executed only once, to define
the backward chaining rules;
- behaviour UpdateKB, executed periodically,
querying the Web for information about flights
which satisfy the user preferences;
- behaviour GetFlights, executed at each user
request, and handling that request by querying the
knowledge base and displaying the information
retrieved.
All behaviours use the Algernon API in order to access the Algernon inference engine, as mentioned in section 6.
Figure 7 shows the ProxyAgent running within a JADE platform, and figure 8 depicts the Web interface of the application, which interacts with the user.
Fig. 7 The reasoning ProxyAgent
References:
[1] F. Bellifemine, G. Caire, T. Trucco, G.
Rimassa, JADE programmer's guide,
http://jade.tilab.com
Fig. 8 The Web interface of the JSF application
[2] Java Studio Creator Field Guide, 2nd ed.,
Sun Microsystems,
http://developers.sun.com/jscreator/
learning/bookshelf/
[3] D. Geary, C. Horstmann, Core JavaServer
Faces, Prentice Hall, 2004, ch. 1-5.
[4] F. Stoica, Building a Web-bridge for JADE
agents, Proceedings of the RoEduNet IEEE
International Conference, 2006, “Lucian
Blaga” University of Sibiu Printing House,
ISBN 973-739-277-9
[5] Protégé-Frames User's Guide,
http://protege.stanford.edu/doc/
users_guide/index.html
[6] H. Knublauch, An AI tool for the real
world - Knowledge modeling with Protégé,
http://www.javaworld.com/javaworld/jw-
06-2003/jw-0620-protege.html
[7] M. Hewett, Algernon in Java,
http://algernon-j.sourceforge.net/doc/
[8] J. P. Bigus, J. Bigus, Constructing
Intelligent Agents using Java, 2nd ed., John
Wiley & Sons, Inc., 2001
[9] M. Davis, Next-Wave Publishing,
Revolutions in Content, The Seybold
Report, Vol. 3, No. 23, March 2004
[10] R. Jakkilinki, N. Sharda, I. Ahmad,
Ontology-Based Intelligent Tourism
Information Systems: An overview of
Development Methodology and
Applications, Tourism Enterprise
Strategies: Thriving – and Surviving – in
an Online Era, 11-12 July 2005, Centre for
Hospitality and Tourism Research (CHTR),
Victoria University, Melbourne, Australia