KQML (Knowledge Query and Manipulation Language) is both a language and a protocol for communication among agents in multiagent systems. Researchers have worked to improve its structure, and the latest version, KQML Improved, supports the existing features while extending the set of performatives and parameters and adding a novel KQML-based communication protocol. It also uniquely contributes security-related performatives, limiting the damage a destructive agent can do in a system. The paper presents a comprehensive evaluation of KQML Improved against the available metrics, along with a comparison with its predecessor.
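For readers unfamiliar with the notation, a standard KQML message wraps a performative around keyword parameters. The fragment below uses the classic ask-one performative from the original KQML specification; the agent names, ontology, and content expression are invented for illustration, and the security performatives added by KQML Improved are not shown, since their syntax is not reproduced here:

```
(ask-one
  :sender    buyer-agent
  :receiver  seller-agent
  :language  KIF
  :ontology  trading
  :content   (price widget-17 ?p))
```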
This document discusses collaborative modeling and metamodeling. It describes how collaboration allows multiple engineers to work together in defining modeling languages and creating models. Collaboration enables language definitions and models to be integrated, checked early, and developed faster. It also improves scalability by allowing large teams to work together on large models using multiple integrated modeling languages.
Requirements identification for distributed agile team communication using hi... (journalBEEI)
Communication plays an important role in delivering correct information. However, communication becomes challenging for agile software teams that are geographically distributed: problems arise when information is exchanged over unstructured communication platforms, when the information communicated is misunderstood, and when documentation is lacking. The aim of this study is to propose a text classification technique for requirements identification in text messages. We adopted the cascade-and-cluster classification concept of Carotene, which relies on the hashtag function, and applied it to classify text messages into requirement types rather than job titles. This technique, called the high-level carotene (HLC) technique, is embedded into a tool that identifies functional and non-functional requirements. The results show that most of the evaluated criteria achieved more than 85% effectiveness in identifying both kinds of requirement in text messages.
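The cascade idea can be pictured as a two-stage classifier: stage one assigns the coarse requirement type (using a hashtag when one is present, echoing Carotene's hashtag function), and stage two refines non-functional requirements into subtypes. This is a hypothetical sketch, not the HLC implementation; every hashtag, keyword cue, and subtype label below is invented.

```python
def coarse_classify(message):
    """Stage 1: coarse class from a hashtag when present, else keyword cues."""
    text = message.lower()
    if "#fr" in text:
        return "functional"
    if "#nfr" in text:
        return "non-functional"
    # Fallback: simple quality-attribute cues suggest a non-functional requirement.
    nfr_cues = ("performance", "security", "usability", "response time")
    return "non-functional" if any(c in text for c in nfr_cues) else "functional"

def fine_classify(message, coarse):
    """Stage 2: cluster non-functional requirements into subtypes."""
    subtypes = ("performance", "security", "usability")
    if coarse == "non-functional":
        for label in subtypes:
            if label in message.lower():
                return label
    return coarse

msg = "The login page must respond within 2s #nfr performance"
coarse = coarse_classify(msg)
print(coarse, fine_classify(msg, coarse))  # non-functional performance
```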
This document summarizes a research paper that proposes using business process models and lifecycle management to provide context and improve the development and use of message standards for systems integration. Key points:
- Message standards are increasingly complex, making them difficult to use for specific integration cases. Context is needed to define how standards should be adapted and applied.
- The researchers developed tools called MSSRT and BPCCS that use business process models and lifecycle management to provide context definitions for message standards.
- BPCCS classifies and catalogs business process models using classification schemes to define contexts. MSSRT then applies these contexts to generate customized message standard profiles for specific integration cases.
The document summarizes the history and development of the C programming language. It discusses how C was created by Dennis Ritchie at Bell Labs in 1972 to develop the Unix operating system. C became widely popular due to its portability, efficiency, and simplicity. The document outlines the ancestral languages that influenced C, such as CPL and BCPL, and how C addressed limitations in its predecessors. It also describes the generic structure of C programs and how C has evolved into other object-oriented languages like C++ and influenced the development of many other programming languages overall.
Development of online learning groups based on MBTI learning style and fuzzy ... (TELKOMNIKA JOURNAL)
Group development is an initial step in, and an important influence on, learning collaborative problem solving (CPS) in a digital learning environment (DLE). Group development based on the Myers-Briggs Type Indicator (MBTI) rules has proved successful in educational and industrial settings. The MBTI ideal-group rules are met when the group leader has the highest level of leadership and the group members are compatible; both the level of leadership and the suitability of group members are determined from the MBTI learning style (LS). Problems arise when the population of the MBTI LS with the highest level of leadership is exhausted, which leads to dual-leadership problems and group disharmony. This study proposes intelligent agent software for developing the ideal MBTI group using a fuzzy algorithm. The intelligent agent was developed on the SKACI platform, a DLE for CPS learning. The fuzzy algorithm resolves dual-leadership problems within a group by grading the MBTI LS population into three levels, namely low, medium, and high; enlarging the graded population in this way increases the probability of forming an ideal MBTI group. The intelligent agents were tested through a quantitative comparison between experimental classes (applying intelligent agents) and control classes (without intelligent agents). The experimental results show better performance and productivity in the experimental class than in the control class, so it was concluded that the intelligent agents had a positive impact on group development based on the MBTI LS.
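One plausible reading of the three-level grading is ordinary triangular fuzzy membership functions over a leadership score. The sketch below is hypothetical: the paper does not give its membership functions, and the 0-10 score scale and the breakpoints are invented for illustration.

```python
def triangular(x, a, b, c):
    """Membership rises linearly from a to the peak at b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def leadership_levels(score):
    """Grade a leadership score (assumed 0..10) into low / medium / high."""
    return {
        "low": triangular(score, -1, 0, 5),
        "medium": triangular(score, 2, 5, 8),
        "high": triangular(score, 5, 10, 11),
    }

grades = leadership_levels(6.0)
print(max(grades, key=grades.get))  # "medium" edges out "high" at a score of 6.0
```

Grading members into overlapping levels rather than a single hard category is what lets more than one member qualify as a plausible leader, which is how a fuzzy approach can sidestep the dual-leadership clash.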
This document discusses transforming an e-Kanban business process, a vendor-managed inventory system, from the Web Ontology Language (OWL) to the Business Process Execution Language (BPEL). E-Kanban uses the Kanban system to manage material flows in manufacturing by having suppliers deliver parts just in time, based on consumption rates. Transforming the e-Kanban process to BPEL formalizes the specification and allows the composition and coordination of activities. The transformation is done with the Simple Transformer (SiTra) framework, which uses Java for its specifications to ease the learning curve compared to other tools; SiTra's transformation engine executes the transformations. This case study increases the information retrieved from the e-Kanban process.
This document provides an introduction to generic programming. It discusses the motivation for generic programming, which includes providing better abstraction and reusability in programming languages. It describes two main models for generic programming - parametric polymorphism in statically typed languages and dynamic typing in dynamically typed languages. The document also discusses implementation issues around generic programming such as efficiency, code generation, and language details.
IMPROVING RULE-BASED METHOD FOR ARABIC POS TAGGING USING HMM TECHNIQUE (csandit)
A part-of-speech (POS) tagger plays an important role in natural language applications such as speech recognition, natural language parsing, information retrieval, and multi-word term extraction. This study proposes building an efficient and accurate POS tagging technique for the Arabic language using a statistical approach. The Arabic rule-based method suffers from misclassified and unanalyzed words due to ambiguity. To overcome these two problems, we propose a Hidden Markov Model (HMM) integrated with the Arabic rule-based method. Our POS tagger generates a set of four POS tags: Noun, Verb, Particle, and Quranic Initial (INL). The proposed technique uses the contextual information of the words together with a variety of features that help predict the various POS classes. To evaluate its accuracy, the proposed method was trained and tested on the Holy Quran Corpus, which contains 77,430 terms of undiacritized Classical Arabic. The experimental results demonstrate the efficiency of our method for Arabic POS tagging: the obtained accuracies are 97.6% for our method and 94.4% for the rule-based tagger.
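The HMM decoding step at the heart of such a tagger can be pictured with a tiny Viterbi decoder. This is a generic sketch, not the paper's tagger: the tag set is a subset of the paper's (INL omitted), the words are transliterated stand-ins, and every probability below is invented for illustration.

```python
import math

tags = ["NOUN", "VERB", "PART"]
start = {"NOUN": 0.5, "VERB": 0.3, "PART": 0.2}
trans = {  # P(next tag | current tag), rows sum to 1
    "NOUN": {"NOUN": 0.3, "VERB": 0.5, "PART": 0.2},
    "VERB": {"NOUN": 0.6, "VERB": 0.1, "PART": 0.3},
    "PART": {"NOUN": 0.7, "VERB": 0.2, "PART": 0.1},
}
emit = {  # P(word | tag); unseen words get a tiny floor probability
    "NOUN": {"kitab": 0.6, "qara": 0.1, "fi": 0.05},
    "VERB": {"kitab": 0.1, "qara": 0.7, "fi": 0.05},
    "PART": {"kitab": 0.05, "qara": 0.05, "fi": 0.8},
}

def viterbi(words):
    """Return the most probable tag sequence under the toy HMM above."""
    V = [{t: math.log(start[t]) + math.log(emit[t].get(words[0], 1e-6))
          for t in tags}]
    back = []
    for w in words[1:]:
        prev, scores, ptr = V[-1], {}, {}
        for t in tags:
            best = max(tags, key=lambda p: prev[p] + math.log(trans[p][t]))
            scores[t] = prev[best] + math.log(trans[best][t]) \
                        + math.log(emit[t].get(w, 1e-6))
            ptr[t] = best
        V.append(scores)
        back.append(ptr)
    # Trace the best path backwards from the highest-scoring final tag.
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["qara", "kitab"]))  # ['VERB', 'NOUN'] with these toy numbers
```

In the hybrid scheme the rule-based tagger would supply analyses where it is confident, leaving the HMM to disambiguate the words the rules misclassify or cannot analyze.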
A Metamodel and Graphical Syntax for NS-2 Programming (Editor IJCATR)
One of the most important issues that manufacturers around the world pay special attention to is promoting their activities so that they can deliver high-quality products or services, and perhaps the first advice for achieving this goal is simulation. Simulation software packages with different properties have accordingly been made available; one of the most widely used simulators is NS-2, which suffers from internal complexity. Domain-Specific Modeling Languages, on the other hand, can provide a level of abstraction through which we can overcome the complexity of NS-2, increase production speed, and improve efficiency. In this paper we therefore introduce a domain-specific metamodel for NS-2. The new metamodel paves the way for introducing an abstract syntax and a modeling-language syntax; in addition, the syntax created in the Domain-Specific Modeling Language is supported by a graphical modeling tool.
This paper presents a vision of how the software development process could become a fully unified, mechanized process by benefiting from advances in the fields of Natural Language Processing and Program Synthesis. The process begins with requirements written in natural language, which are translated into sentences in logical form. A program synthesizer takes those sentences in logical form (the translator's output) and generates source code; finally, a compiler produces ready-to-run software. To find out how the building blocks of our proposed approach work, we conducted an exploratory survey of the literature in the fields of Requirements Engineering, Natural Language Processing, and Program Synthesis. Currently, this approach is difficult to accomplish in a fully automatic way due to the ambiguities inherent in natural language, the reasoning context of the software, and the limitations of program synthesizers in generating source code from logic.
Information system project management (IT-project management) is a complex iterative process, and records of the development lifecycle (LC) play an important role in the development of complex IT projects. The article presents an analysis of the effectiveness of work on creating IT projects based on two modified lifecycle models: cascade and spiral. The analysis of IT-project management effectiveness was carried out through simulation, performed with AnyLogic tools on the example of developing a geoinformation system (GIS). It is shown that it is advisable to design a GIS on the basis of a modified spiral LC with the flow of requirements split at the input. The peculiarity of the proposed study is that requirements are taken into account in the form of communicative interactions of different types; communicative interactions are understood as all interactions between the subjects of the IT-project creation process, verbal and non-verbal, carried out on the basis of CASE tools.
An Adjacent Analysis of the Parallel Programming Model Perspective: A Survey (IRJET Journal)
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
The document discusses several key features of a good programming language:
1. Clarity, simplicity, and unity - the language should have a clear set of minimal concepts that are combined in a simple, regular way.
2. Orthogonality - changing one aspect of the language does not require changing another, making the language easier to learn and programs easier to write.
3. Support for abstraction - the language allows programmers to define abstract data types and operations that match the problem solution, rather than being limited by language implementations.
Compiler Design Using Context-Free Grammar (IRJET Journal)
This document discusses the use of context-free grammars (CFGs) in compiler design. It provides an overview of CFGs and their role in specifying the syntax of programming languages. The document also examines some of the challenges of using CFGs, such as ambiguity issues, and discusses solutions. It describes the process of designing a compiler that uses a CFG to parse a custom programming language, including implementing a lexer, parser, code generator, and testing the compiler.
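As a concrete miniature of the parsing stage such a compiler needs, the sketch below hand-writes a recursive-descent parser for a small, unambiguous arithmetic CFG (E → T (('+'|'-') T)*, T → F (('*'|'/') F)*, F → NUMBER | '(' E ')'). The grammar and token set are invented for illustration and are far smaller than a real language's; note how placing '*' lower in the grammar resolves the precedence ambiguity the document mentions.

```python
import re

def tokenize(src):
    """Split the source into number and operator/parenthesis tokens."""
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok=None):
        t = self.tokens[self.pos]
        if tok and t != tok:
            raise SyntaxError(f"expected {tok}, got {t}")
        self.pos += 1
        return t

    def expr(self):  # E -> T (('+'|'-') T)*
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())
        return node

    def term(self):  # T -> F (('*'|'/') F)*
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):  # F -> NUMBER | '(' E ')'
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        return int(self.eat())

ast = Parser(tokenize("2+3*4")).expr()
print(ast)  # ('+', 2, ('*', 3, 4)) -- '*' binds tighter than '+'
```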
FRAMEWORKS BETWEEN COMPONENTS AND OBJECTS (acijjournal)
Before the emergence of component-based frameworks, similar issues were addressed by other software development paradigms, including Object-Oriented Programming (OOP), Component-Based Development (CBD), and object-oriented frameworks. In this study these approaches, especially object-oriented frameworks, are compared with component-based frameworks and their relationships are discussed, along with the impact of different software reuse methods on architectural patterns and on support for application extension and versioning. It is concluded that many of the mechanisms provided by component-based frameworks can be enabled by software elements at a lower level; the main contribution of component-based frameworks is their focus on component development. All of these approaches can be built on one another in a layered manner by adopting suitable design patterns, although some questions remain open, such as which method to use for developing and upgrading an existing application to another approach.
Is Fortran still relevant? Comparing Fortran with Java and C++ (ijseajournal)
This paper presents a comparative study to evaluate and compare Fortran with the two most popular programming languages, Java and C++. Fortran went through major and minor extensions in the years 2003 and 2008. (1) How much have these extensions made Fortran comparable to Java and C++? (2) What are the differences and similarities in supporting features such as templates, object constructors and destructors, abstract data types, and dynamic binding? These are the main questions we try to answer in this study. An object-oriented ray tracing application is implemented in all three languages to compare them; by using only one program we ensured there was only one set of requirements, making the comparison homogeneous. Based on our literature survey, this is the first study to compare these languages by applying software metrics to the ray tracing application and relating the results to the similarities and differences found in practice. By providing binary analysis and profiling of the application, we motivate language implementers and compiler developers to improve Fortran's object handling and processing, and hence make it more prolific and general. This study facilitates and encourages the reader to further explore, study, and use these languages more effectively and productively, especially Fortran.
AN IMPROVED MT5 MODEL FOR CHINESE TEXT SUMMARY GENERATION (gerogepatton)
Complicated policy texts require a lot of effort to read, so there is a need for intelligent interpretation of Chinese policies. To better address the Chinese text summarization task, this paper used the mT5 model as the core framework and initial weights. In addition, the paper reduced the model size through parameter clipping, used the Gap Sentence Generation (GSG) method for unsupervised training, and improved the Chinese tokenizer. After training on a meticulously processed 30 GB Chinese corpus, the paper developed the enhanced mT5-GSG model. When fine-tuning on Chinese policy text, the paper adopted the idea of "Dropout Twice" and innovatively combined the probability distributions of the two dropout passes through the Wasserstein distance. Experimental results indicate that the proposed model achieved Rouge-1, Rouge-2, and Rouge-L scores of 56.13%, 45.76%, and 56.41%, respectively, on the Chinese policy text summarization dataset.
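The "Dropout Twice" combination can be pictured with a small sketch: run the model twice with different dropout masks, obtain two probability distributions over the same ordered support, and measure how far apart they are with the 1-D Wasserstein-1 distance, which for discrete distributions on a unit-spaced support is just the sum of absolute differences of the cumulative distributions. This is a generic illustration, not the paper's actual training objective; both distributions below are invented.

```python
def wasserstein_1d(p, q):
    """Wasserstein-1 distance between two discrete distributions that share
    the same ordered, unit-spaced support: sum of |CDF_p - CDF_q|."""
    cdf_p = cdf_q = total = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        total += abs(cdf_p - cdf_q)
    return total

p = [0.7, 0.2, 0.1]  # output distribution from the first dropout pass
q = [0.5, 0.3, 0.2]  # output distribution from the second dropout pass

print(round(wasserstein_1d(p, q), 3))  # 0.3
```

Used as a regularizer, a small distance penalizes disagreement between the two stochastic passes, pushing the model toward predictions that are stable under dropout.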
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been a significant effort to analyze, design, and implement information systems that process information and data and solve various problems. On the one hand, the complexity of contemporary systems and the striking increase in the variety and volume of information have led to a great number of components and elements and to a more complex structure and organization of information systems. On the other hand, it is necessary to develop systems that meet all of the stakeholders' functional and non-functional requirements. Considering that evaluating these requirements before the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture is available. One way to evaluate a software architecture is to create an executable model of it.
The present research used availability assessment, taking repair, maintenance, and accident-time parameters into consideration; failures of both software and hardware components were considered in the architecture of the software systems. To describe the architecture easily, the authors used the Unified Modeling Language (UML); however, because UML is informal, they also utilized Colored Petri Nets (CPN) for the assessment. Finally, the researchers evaluated a CPN-based executable model of the architecture with CPN Tools.
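Although the paper evaluates availability on an executable CPN model, the quantity being assessed can be illustrated with the standard steady-state formula A = MTBF / (MTBF + MTTR), composed multiplicatively over components connected in series. The component figures below are invented for illustration and are not from the paper.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of one component from its mean time
    between failures (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components):
    """A series chain is available only when every component is."""
    a = 1.0
    for c in components:
        a *= c
    return a

web = availability(mtbf_hours=900.0, mttr_hours=2.0)    # hypothetical web tier
db = availability(mtbf_hours=1500.0, mttr_hours=6.0)    # hypothetical database
print(round(series(web, db), 4))  # 0.9938
```

An architecture-level model such as a CPN generalizes this closed-form view to repair queues, redundancy, and failure interactions that the simple series formula cannot express.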
General Methodology for Developing UML Models from UI (ijwscjournal)
In the recent past, every discipline and every industry has had its own methods of developing products, whether in software development, mechanics, construction, psychology, and so on. These demarcations work fine as long as the requirements stay within one discipline; however, if a project extends over several disciplines, interfaces have to be created and coordinated between the methods of those disciplines. Performance is an important quality aspect of web services because of their distributed nature, and predicting the performance of web services during the early stages of software development is significant. In industry, a prototype of such applications is developed during the analysis phase of the Software Development Life Cycle (SDLC), whereas performance models are generated from UML models, and methodologies for predicting performance from UML models are available. Hence, this paper presents a methodology for developing a use case model and an activity model from the user interface, illustrated with a case study on Amazon.com.
General Methodology for developing UML models from UIijwscjournal
The document presents a methodology for developing UML models from a user interface prototype. The methodology involves identifying user interface elements from the prototype, developing a flow diagram of the elements, creating an activity model, and developing a use case model. The methodology is demonstrated through a case study of developing UML models for the login page of the Amazon.com website. Key steps include identifying UI elements like workspaces and functions, creating a flow diagram to show the main and exception flows, developing an activity model of the login process, and specifying a use case for login and authentication.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
This document discusses adopting aspect-oriented programming (AOP) in enterprise-wide computing. It provides a brief history of AOP, from its inception at Xerox PARC in the 1990s to the development of AspectJ in the late 1990s. It then reviews related work studying the benefits and challenges of using AOP, such as improved modularity and separation of concerns but also increased complexity. Many studies found quantitative benefits to maintenance from AOP but challenges in adoption. The document concludes by discussing uses of AOP in enterprises, noting both benefits like modularizing cross-cutting concerns, but also challenges such as difficulties aspectizing concurrency and failures.
A Review On Chatbot Design And Implementation Techniques (Joe Osborn)
The document reviews techniques for designing and implementing chatbots. It proposes using DialogFlow, TensorFlow, Android Studio, and machine learning/deep learning techniques like neural machine translation and deep reinforcement learning to develop a customizable chatbot. The review analyzes past works on chatbots and neural machine translation techniques like sequence-to-sequence models with encoder-decoder architectures. The proposed chatbot aims to improve on previous approaches by jointly learning to align and translate inputs to maximize performance.
Content-Based Image Retrieval (CBIR) systems have been used for searching relevant images in various research areas. In CBIR systems, features such as shape, texture and color are used. The extraction of features is the main step on which the retrieval results depend. Color features in CBIR include the color histogram, color moments and the conventional color correlogram. Color space selection is used to represent the color information of the pixels of the query image. Shape is the basic characteristic of segmented regions of an image. Different methods have been introduced for better retrieval using different shape representation techniques; earlier, global shape representations were used, but over time the field moved towards local shape representations. Local shape relates more to expressing the result than to the method. Local shape features may be derived from texture properties and color derivatives. Texture features have been used for document images, segmentation-based recognition, and satellite images. Texture features are used in different CBIR systems along with color, shape, geometrical structure and SIFT features.
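As a concrete illustration of the color histogram feature mentioned above, the sketch below quantizes each RGB channel into a small number of bins and counts pixels per bin. The 2-bins-per-channel quantization and the four-pixel "image" are illustrative assumptions, not taken from any particular CBIR system.

```python
# Toy sketch of a quantized RGB color histogram, the kind of color
# feature a CBIR system might index. Bin count and the tiny "image"
# are illustrative assumptions only.

def color_histogram(pixels, bins_per_channel=2):
    """Return a normalized histogram over quantized RGB bins."""
    n_bins = bins_per_channel ** 3
    hist = [0.0] * n_bins
    step = 256 // bins_per_channel          # width of each channel bin
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [count / total for count in hist]

image = [(255, 0, 0), (250, 10, 5), (0, 0, 255), (0, 255, 0)]  # 4 pixels
hist = color_histogram(image)   # two reddish pixels land in the same bin
```

Because nearby colors fall into the same bin, the two reddish pixels contribute to one bin, which is what makes the histogram robust to small color variations.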
This document discusses clickjacking attacks, which hijack users' clicks to perform unintended actions. It provides an overview of clickjacking, describes different types of attacks, and analyzes vulnerabilities that make websites susceptible. Experiments are conducted on a sample social networking site, applying various clickjacking techniques. Potential defenses are tested, including X-Frame-Options headers and frame busting code. A proposed solution detects transparent iframes to warn users and check for hidden mouse pointers to mitigate cursorjacking. Analysis of top Jammu and Kashmir websites found most were vulnerable, while browser behavior studies showed varying support for defenses.
A Metamodel and Graphical Syntax for NS-2 Programming (Editor IJCATR)
One of the most important issues that manufacturers around the world pay special attention to is promoting their activities so as to deliver high-quality products or services. Perhaps the first advice for achieving this goal is simulation. Accordingly, simulation software packages with different properties have been made available. One of the most widely used simulators is NS-2, which suffers from internal complexity. Domain Specific Modeling Languages, on the other hand, can provide a level of abstraction by which we can overcome the complexity of NS-2, increase production speed, and promote efficiency. In this paper, we therefore introduce a Domain Specific Metamodel for NS-2. This new metamodel paves the way for introducing an abstract syntax and a modeling language syntax. In addition, the syntax created in the Domain Specific Modeling Language is supported by a graphical modeling tool.
This paper presents a vision of how the software development process could become a fully unified, mechanized process by drawing on advances in the fields of Natural Language Processing and Program Synthesis. The process begins with requirements written in natural language, which are translated into sentences in logical form. A program synthesizer takes those sentences in logical form (the translator's outcome) and generates source code. Finally, a compiler produces ready-to-run software. To find out how the building blocks of our proposed approach work, we conducted exploratory research on the literature in the fields of Requirements Engineering, Natural Language Processing, and Program Synthesis. Currently, this approach is difficult to accomplish in a fully automatic way due to the ambiguities inherent in natural language, the reasoning context of the software, and the limitations of program synthesizers in generating source code from logic.
Information system project management (IT project management) is a complex iterative process. Records of the development lifecycle (LC) play an important role in the development of complex IT projects. The article presents an analysis of the effectiveness of work on the creation of IT projects based on two modified lifecycle models: cascade and spiral. The analysis of the effectiveness of IT project management was carried out by means of simulation. The modeling was performed with AnyLogic tools, using the development of a geoinformation system (GIS) as an example. It is shown that it is advisable to design a GIS on the basis of a modified spiral LC with splitting of the flow of requirements at the input. A peculiarity of the proposed study is that requirements are taken into account in the form of communicative interactions of different types. Communicative interactions are understood as all interactions between the subjects of the process of creating an IT project, verbal and non-verbal, carried out on the basis of CASE tools.
An Adjacent Analysis of the Parallel Programming Model Perspective: A Survey (IRJET Journal)
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
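The cost models the survey mentions can be made concrete for BSP: the standard cost of one superstep is T = w + h·g + l, where w is the maximum local work on any processor, h the maximum number of messages sent or received (the h-relation), g the per-message communication cost, and l the barrier synchronization latency. The sketch below evaluates that formula; the numeric parameters are made up for illustration.

```python
# Sketch of the standard BSP superstep cost model T = w + h*g + l:
#   w = max local work on any processor in the superstep
#   h = max messages sent or received by any processor (h-relation)
#   g = communication cost per message, l = barrier latency.
# Parameter values below are illustrative, not from the surveyed paper.

def bsp_superstep_cost(w, h, g, l):
    return w + h * g + l

def bsp_total_cost(supersteps, g, l):
    """Total program cost as the sum of its superstep costs."""
    return sum(bsp_superstep_cost(w, h, g, l) for w, h in supersteps)

# Three supersteps given as (local work, h-relation) pairs:
cost = bsp_total_cost([(100, 10), (50, 40), (200, 0)], g=4, l=20)
# (100+40+20) + (50+160+20) + (200+0+20) = 610
```

The additive structure is the point of BSP: computation, communication and synchronization are charged separately, so a program can be tuned by attacking whichever term dominates.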
The document discusses several key features of a good programming language:
1. Clarity, simplicity, and unity - the language should have a clear set of minimal concepts that are combined in a simple, regular way.
2. Orthogonality - changing one aspect of the language does not require changing another, making the language easier to learn and programs easier to write.
3. Support for abstraction - the language allows programmers to define abstract data types and operations that match the problem solution, rather than being limited by language implementations.
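Point 3 above can be illustrated with a small abstract data type: callers program against an interface that matches the problem, while the representation stays hidden. This is a generic, hypothetical example, not taken from the document being summarized.

```python
# Illustration of "support for abstraction": a FIFO queue ADT whose
# callers see only enqueue/dequeue, never the list used underneath.
# Hypothetical example, not from the original document.

class Queue:
    """FIFO queue abstract data type."""
    def __init__(self):
        self._items = []            # hidden representation

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.pop(0)   # remove from the front: FIFO order

    def __len__(self):
        return len(self._items)

q = Queue()
q.enqueue("a")
q.enqueue("b")
first = q.dequeue()                 # "a" comes out first
```

The representation could later be swapped (say, for a ring buffer) without touching any caller, which is exactly the property the list item describes.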
Compiler Design Using Context-Free Grammar (IRJET Journal)
This document discusses the use of context-free grammars (CFGs) in compiler design. It provides an overview of CFGs and their role in specifying the syntax of programming languages. The document also examines some of the challenges of using CFGs, such as ambiguity issues, and discusses solutions. It describes the process of designing a compiler that uses a CFG to parse a custom programming language, including implementing a lexer, parser, code generator, and testing the compiler.
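To make the role of a CFG concrete, here is a minimal grammar for arithmetic expressions together with a recursive-descent parser that follows it, one function per nonterminal. This is an illustrative sketch, not the compiler described in the document; note how giving `*`/`/` their own nonterminal resolves the precedence ambiguity the document mentions.

```python
# Minimal recursive-descent evaluator for the unambiguous grammar
#   expr   -> term (('+'|'-') term)*
#   term   -> factor (('*'|'/') factor)*
#   factor -> NUMBER | '(' expr ')'
# One function per nonterminal; integer division is used for '/'.

import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

def parse(src):
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == "(":
            eat()                    # '('
            value = expr()
            eat()                    # ')'
            return value
        return int(eat())

    def term():
        value = factor()
        while peek() in ("*", "/"):
            op = eat()
            value = value * factor() if op == "*" else value // factor()
        return value

    def expr():
        value = term()
        while peek() in ("+", "-"):
            op = eat()
            value = value + term() if op == "+" else value - term()
        return value

    return expr()
```

Because `term` binds tighter than `expr`, `2+3*4` parses as `2+(3*4)` with no ambiguity, which is the standard grammar-level fix the document alludes to.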
FRAMEWORKS BETWEEN COMPONENTS AND OBJECTS (acijjournal)
Before the emergence of Component-Based Frameworks, similar issues were addressed by other software development paradigms, including Object-Oriented Programming (OOP), Component-Based Development (CBD), and Object-Oriented Frameworks. In this study, these approaches, especially Object-Oriented Frameworks, are compared to Component-Based Frameworks and their relationships are discussed, along with the impact of different software reuse methods on architectural patterns and on support for application extensions and versioning. It is concluded that many of the mechanisms provided by Component-Based Frameworks can be enabled by software elements at a lower level. The main contribution of Component-Based Frameworks is the focus on component development. All of these approaches can be built on each other in a layered manner by adopting suitable design patterns. Some questions remain open, however, such as which method to use when developing and upgrading an existing application to another approach.
Is Fortran Still Relevant? Comparing Fortran with Java and C++ (ijseajournal)
This paper presents a comparative study to evaluate and compare Fortran with the two most popular programming languages, Java and C++. Fortran has gone through major and minor extensions in the years 2003 and 2008. (1) How much have these extensions made Fortran comparable to Java and C++? (2) What are the differences and similarities in supporting features like templates, object constructors and destructors, abstract data types and dynamic binding? These are the main questions we are trying to answer in this study. An object-oriented ray tracing application is implemented in these three languages to compare them. By using only one program we ensured there was only one set of requirements, thus making the comparison homogeneous. Based on our literature survey, this is the first study carried out to compare these languages by applying software metrics to the ray tracing application and comparing these results with the similarities and differences found in practice. We motivate the language implementers and compiler developers, by providing binary analysis and profiling of the application, to improve Fortran object handling and processing, and hence making it more prolific and general. This study facilitates and encourages the reader to further explore, study and use these languages more effectively and productively, especially Fortran.
AN IMPROVED MT5 MODEL FOR CHINESE TEXT SUMMARY GENERATION (gerogepatton)
Complicated policy texts require a lot of effort to read, so there is a need for intelligent interpretation of Chinese policies. To better solve the Chinese text summarization task, this paper utilized the mT5 model as the core framework and initial weights. In addition, this paper reduced the model size through parameter clipping, used the Gap Sentence Generation (GSG) method as an unsupervised training method, and improved the Chinese tokenizer. After training on a meticulously processed 30GB Chinese training corpus, the paper developed the enhanced mT5-GSG model. Then, when fine-tuning on Chinese policy text, this paper chose the idea of "Dropout Twice" and innovatively combined the probability distributions of the two Dropouts through the Wasserstein distance. Experimental results indicate that the proposed model achieved Rouge-1, Rouge-2, and Rouge-L scores of 56.13%, 45.76%, and 56.41% respectively on the Chinese policy text summarization dataset.
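As an aside on the Wasserstein distance the abstract invokes: for two equal-size one-dimensional samples, the Wasserstein-1 (earth mover's) distance between their empirical distributions reduces to the mean absolute difference of the sorted values. The sketch below shows only that generic 1-D fact; it is not the paper's actual training code, and the sample values are invented.

```python
# Generic 1-D fact, not the paper's implementation: for equal-size
# samples, the Wasserstein-1 distance between the two empirical
# distributions is the mean absolute difference of sorted values.

def wasserstein_1d(xs, ys):
    assert len(xs) == len(ys), "sketch assumes equal-size samples"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two hypothetical sets of per-token probabilities from two dropout passes:
d = wasserstein_1d([0.1, 0.4, 0.5], [0.2, 0.4, 0.6])
```

A distance of zero means the two passes produced identical distributions; a regularizer built on this quantity would penalize disagreement between them.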
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been a significant effort to analyze, design and implement information systems to process information and data and to solve various problems. On the one hand, the complexity of contemporary systems and the eye-catching increase in the variety and volume of information have led to a great number of components and elements, and to a more complex structure and organization of information systems. On the other hand, it is necessary to develop systems which meet all of the stakeholders' functional and non-functional requirements. Considering that evaluating and assessing these requirements prior to the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture is provided. One way to evaluate a software architecture is to create an executable model of it.
The present research used availability assessment and took repair, maintenance and accident time parameters into consideration. Failures of software and hardware components have been considered in the architecture of software systems. To describe the architecture easily, the authors used Unified Modeling Language (UML). However, due to the informality of UML, they utilized Colored Petri Nets (CPN) for assessment too. Eventually, the researchers evaluated a CPN-based executable model of architecture through CPN-Tools.
Performance Analysis of Audio and Video Synchronization using Spreaded Code D... (Eswar Publications)
Audio and video synchronization plays an important role in speech recognition and multimedia communication. Audio-video sync is a quite significant problem in live video conferencing, caused by the use of various hardware components that introduce variable delay, and by software environments. The objective of synchronization is to preserve the temporal alignment between the audio and video signals. This paper proposes audio-video synchronization using a spreading-codes delay measurement technique. The proposed method was evaluated on a home database and achieves 99% synchronization efficiency. The audio-visual signature technique provides a significant reduction in audio-video sync problems and enables effective performance analysis of audio and video synchronization. This paper also implements an audio-video synchronizer and analyses its performance in terms of synchronization efficiency, audio-video time drift and audio-video delay parameters. The simulation is carried out using MATLAB tools and Simulink. The system automatically estimates and corrects the timing relationship between the audio and video signals and maintains Quality of Service.
Owing to the availability of complicated devices in industry, models are developed for consumers at a lower cost of resources. Home automation systems have been developed by several researchers, but their limitations include complexity of architecture, higher equipment costs and interface inflexibility. This paper proposes a working system based on the PIC16F72 microcontroller which is secure, cost-efficient and flexible, leading to the development of efficient home automation systems. The system can control various home appliances such as fans, bulbs and tube lights. The paper describes the components used and the working of all connected components. The home automation system makes use of an Android app entitled "Home App", which provides flexibility and an easy-to-use GUI.
Semantically Enchanced Personalised Adaptive E-Learning for General and Dysle... (Eswar Publications)
E-learning plays an important role in providing required, well-formed knowledge to a learner. The medium of e-learning has achieved advancement in various fields, such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning approach for a computer science programme from a learner-centric perspective. Learners are categorized based on their proficiency in order to provide a personalized learning environment. Learning disorders on the e-learning platform still require a great deal of research; therefore, this paper also provides a personalized assessment theoretical model for alphabet learning with learning objects for children who face dyslexia.
Agriculture plays an important role in the economy of our country. Over 58 percent of rural households depend on the agriculture sector as their means of livelihood, and agriculture is one of the major contributors to Gross Domestic Product (GDP). Seeds are the soul of agriculture. This application helps reduce the time researchers as well as farmers need to determine seedling parameters. It helps farmers know the percentage of seedlings that will grow, which is essential in estimating the yield of a particular crop. Manual calculation may lead to error; to minimize that error, the developed app is used. Scientists and farmers require the app to determine physiological seed quality parameters and to take decisions regarding their farming activities. In this article, a desktop app for seed germination percentage and vigour index calculation is developed in the PHP scripting language.
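The two quantities such an app computes are simple arithmetic: germination percentage is germinated seeds divided by total seeds times 100, and a widely used seedling vigour index multiplies the germination percentage by mean seedling length. The article's app is written in PHP; the Python sketch below uses the common vigour-index formula as an assumption, since the abstract does not state which variant the app implements.

```python
# Sketch of the two quantities the app computes. The vigour index shown
# (germination % x mean seedling length in cm) is the common formula,
# assumed here because the article does not spell out its variant.

def germination_percentage(germinated, total):
    return 100.0 * germinated / total

def vigour_index(germ_pct, seedling_lengths_cm):
    mean_length = sum(seedling_lengths_cm) / len(seedling_lengths_cm)
    return germ_pct * mean_length

pct = germination_percentage(germinated=85, total=100)   # 85.0
vi = vigour_index(pct, [10.0, 12.0, 14.0])               # 85 * 12 = 1020.0
```

Automating this removes exactly the manual-calculation error the article mentions: the formulas are trivial, but counting and averaging by hand over many seed lots is where mistakes creep in.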
What happens when adaptive video streaming players compete in time-varying ba... (Eswar Publications)
Competition among adaptive video streaming players severely diminishes user-QoE. When players compete at a bottleneck link many do not obtain adequate resources. This imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems. However, added to the competition at the bottleneck link there is also the possibility of varying network bandwidth which can make the situation even worse. This work focuses on such a situation. It evaluates current heuristic adaptive video players at a bottleneck link with time-varying bandwidth conditions. Experimental setup includes the TAPAS player and emulated network conditions. The results show PANDA outperforms FESTIVE, ELASTIC and the Conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection System (Eswar Publications)
Security and performance are major issues which have to be addressed in cloud computing. Intrusion is one such basic and important security problem; consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high detection precision in a cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in a cloud computing environment. The cloud intrusion detection system uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a backpropagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with K-means and classic FCM. DARPA's KDD Cup 1999 dataset is used for simulation. The detailed performance analysis shows that the proposed system is able to detect anomalies with high detection accuracy and a low false alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case Study (Eswar Publications)
This report presents the outcome of an investigative study conducted to examine the modus operandi of the Academic Staff Union of Polytechnics (ASUP), YabaTech. The investigation covered the logistics and cost implications of spreading union activities among members. It was discovered that the cost of managing and disseminating information to members was high, and that logistics problems contributed to the loss of information in transit, cutting some members off from union activities. To curtail the problems identified, we proposed the design of a secure and dynamic website for spreading union activities among members and the public. The proposed system was implemented using HTML5 technology and interface frameworks such as Bootstrap and jQuery, which enable the responsive features of the application interface. The backend was designed using PHP and MySQL. Evaluation of the new system showed that the cost of managing information has reduced considerably, and the logistics problems identified in the old system have become a forgotten issue.
Identifying an Appropriate Model for Information Systems Integration in the O...Eswar Publications
Nowadays organizations are using information systems to optimize processes in order to increase coordination and interoperability across the organization. Since the oil and gas industry is one of the largest industries in the world, its information systems (IS), which consist of three categories of systems — Field IS, Plant IS and Enterprise IS — need to be compatible to create interoperability and, as a result, optimized processes. In this paper we introduce the different models of information systems integration, identify the types of information systems used in the upstream and downstream sectors of the petroleum industry, and finally, based on experts' opinions, identify a suitable model for information systems integration in this industry.
Link-and Node-Disjoint Evaluation of the Ad Hoc on Demand Multi-path Distance...Eswar Publications
This work illustrates the AOMDV routing protocol. Its ancestor, the AODV routing protocol, is also described. This tutorial demonstrates how forward and reverse paths are created by the AOMDV routing protocol. The formulation of loop-free paths is described, together with node- and link-disjoint paths. Finally, the performance of the AOMDV routing protocol is investigated along link- and node-disjoint paths. In terms of energy consumption, a WSN running AOMDV with link-disjoint paths performs better than one running AOMDV with node-disjoint paths.
Bridging Centrality: Identifying Bridging Nodes in Transportation NetworkEswar Publications
To identify the importance of nodes in a network, several centrality measures are used. The majority of these measures are dominated by node degree, owing to their focus on network topology. We propose a centrality measure, bridging centrality, based on both information flow and topological aspects. We apply bridging centrality to real-world networks, including a transportation network, and show that the nodes distinguished by bridging centrality are well located at connecting positions between highly connected regions. Bridging centrality can discriminate bridging nodes — nodes with more information flowing through them, located between highly connected regions — where other centrality measures cannot.
Nowadays we are living in an era of information technology in which every person becomes an IT user, intentionally or unintentionally. Technology has played a vital role in our day-to-day lives over the last few decades, and we all depend on it to obtain maximum benefit and comfort. This new era, equipped with the latest advances in technology, is enlightening the world in the form of the Internet of Things (IoT). The Internet of Things is a domain in which each object can perform tasks while communicating with other objects: a world full of devices, sensors and other objects that communicate and make human life far better and easier than ever. This paper provides an overview of current research work on IoT in terms of architecture, technologies used and applications. It also highlights the issues related to the technologies used for IoT, following a literature review of the research work. The main purpose of this survey is to present the latest technologies and their corresponding trends and details in the field of IoT in a systematic manner. It will be helpful for further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation SystemEswar Publications
In the past couple of decades there has been rapid growth in agricultural technology. Proper use of drip irrigation is very reasonable and proficient. Various drip irrigation methods have been proposed, but they have been found to be expensive and cumbersome to use. In a conventional drip irrigation system the farmer has to keep watch on the irrigation schedule, which differs for different types of crops. Remotely monitored embedded systems for irrigation have become a new essential, allowing the farmer to save energy, time and money by irrigating only when water is required. In this approach, soil test data on chemical constituents, water content, salinity and fertilizer requirements are collected wirelessly and processed to produce a better drip irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring system model using a Wireless Sensor Network (WSN) which helps the farmer improve the yield.
Multi- Level Data Security Model for Big Data on Public Cloud: A New ModelEswar Publications
With the advent of cloud computing, big data has emerged as a very crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of Infrastructure as a Service, where the storage service of public cloud providers is leveraged by an individual or organization. The paper emphasizes a model which can be used by anyone without any cost. Users can store confidential data without any security concerns, as the data is altered in such a way that it cannot be understood by an intruder, if any; yet the user can retrieve the original data in no time. The proposed security model effectively and efficiently provides robust security both while data resides on cloud infrastructure and while data is being migrated to or from it.
Impact of Technology on E-Banking; Cameroon PerspectivesEswar Publications
The financial services industry is experiencing rapid changes in service delivery and channel usage; financial companies and users of financial services are looking at new technologies as they emerge and deciding whether or not to embrace them and the opportunities they offer to save enormous time, cost and stress.
There is no doubt about the favourable and manifold impact of technology on e-banking, as pictured in this review paper: almost all banks offer at least the most accessible e-banking technological equipment, such as ATMs and cards. On the other hand, cheap and readily available technology has opened favourable competition in the e-banking services business, with a wide range of competitors challenging commercial banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: an evaluation study using...Eswar Publications
Attribute or feature selection plays an important role in the process of data mining. In general a data set contains a large number of attributes, but not all of them are relevant for effective classification.
Attribute selection is a technique used to extract a ranking of attributes. This paper therefore presents a comparative evaluation of classification algorithms before and after attribute selection using the Waikato Environment for Knowledge Analysis (WEKA). The evaluation study concludes that the performance metrics of the classification algorithms improve after attribute selection, which reduces the work of processing irrelevant attributes.
Mining Frequent Patterns and Associations from the Smart meters using Bayesia...Eswar Publications
In today’s world, migration of people from rural to urban areas is quite common. Health care services are among the most challenging needs of people in poor health. Advances in technology have led to smart homes, which contain various sensor and smart meter devices that automate the operation of other electronic devices. Additionally, these smart meters can capture the daily activities of patients and monitor their health conditions by mining the frequent patterns and association rules generated from the smart meter data. In this work we propose a model that monitors the activities of patients at home and sends the daily activities to the corresponding doctor. We extract frequent patterns and association rules from the log data, predict the health conditions of the patients, and give suggestions according to the prediction. Our work is divided into three stages. First, we record the daily activities of the patient over a specific time period at three regular intervals. Second, we apply frequent-pattern growth to extract association rules from the log file. Finally, we apply k-means clustering to the input and a Bayesian network model to predict the health behaviour of the patient, with precautions given accordingly.
Network as a Service Model in Cloud Authentication by HMAC AlgorithmEswar Publications
Cloud computing is an internet-based, pay-per-use technology for resource pooling that now rules the IT field. At present every organization trusts the web, yet information must flow without the data being retained, so all customers have come to use the cloud, which protects information in transit through security protocols. Even so, third-party observation, and certain circumstances in the flow and storage of packets in a virtual private cloud, leave data exposed. Global security statistics for 2017 estimate that roughly 75.35% of sensitive information in the cloud may be subject to hacking, and security analysts have warned that this figure could approach 100%. For this reason, this proposed research work concentrates on an authentication message-digest key for routing Network-as-a-Service packets with OSPF (Open Shortest Path First), implemented in a cloud emulated with GNS3 and tested for protection against attackers.
Microstrip patch antennas are widely used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation, ease of fabrication, etc. Microstrip patch antennas, also called printed antennas, suffer from narrow bandwidth in antenna array elements. To overcome these drawbacks, a flame-retardant material is used as the substrate: a rectangular microstrip patch antenna with an FR-4 substrate is well suited to explosive detection applications. The proposed printed antenna was designed with dimensions of 60 x 60 mm2. The FR-4 material has a dielectric constant of 4.3, a thickness of 1.56 mm, and a length and width of 60 mm each. One side of the substrate carries the ground plane, 60 x 60 mm2 of copper; the other side carries the patch, 34 x 29 mm2 of copper with a thickness of 0.03 mm. RMPA structures without a slot, with a vertical slot, with a double horizontal slot and with a centre slot were
designed, and the performance of the antennas was analysed for parameters such as gain, directivity, E-field, VSWR and return loss. From the performance analysis, the double horizontal slot RMPA provides the best result, with maximum gain (8.61 dB) and minimum return loss (-33.918 dB). Based on the E-field excitation value, the SEMTEX explosive material is detected; the design was simulated using CST software.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of global responsibility in addressing climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint — a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Int. J. Advanced Networking and Applications
Volume: 07 Issue: 06 Pages: 2931-2935 (2016) ISSN: 0975-0290
2931
Assessment of KQML Improved
Ankit Jagga
M.M Institute of Computer Technology and Business Management
ankitjagga2000@gmail.com
Aarti Singh
Maharishi Markandeshwar University, Mullana (Ambala)
singh2208@gmail.com
-----------------------------------------------------------------------ABSTRACT----------------------------------------------------------
KQML (Knowledge Query Manipulation Language) is both a language and a protocol for establishing communication among multiagent systems. Researchers have been making efforts to improve the existing structure of KQML. The latest version, KQML Improved, not only supports the existing features but also extends the list of performatives and parameters and introduces a novel KQML-based communication protocol. It also uniquely contributes security-related performatives, limiting agents from turning destructive in a system. This paper presents a comprehensive evaluation of KQML Improved with respect to the available metrics, as well as a comparison with its predecessor.
Keywords: Agent Communication Language, Knowledge Query Manipulation Language (KQML), Multiagent Systems, Metrics of Evaluation
--------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: May 26, 2016 Date of Acceptance: June 22, 2016
--------------------------------------------------------------------------------------------------------------------------------------------------
I. INTRODUCTION
A multiagent system [1,2] allows inter-agent communication, which in turn facilitates decision making among interacting agents. KQML [3,4] is one of the agent communication languages supporting such communication; it represents a message in a three-layered format. Layer 1 represents the knowledge and expression of the message to be transferred; layer 2 depicts the attributes of the message itself and the identities of the sender and recipient; and the third layer represents the type of message. Technically, a KQML message is known as a performative, and it actually performs an action. Like any other language, KQML contains a set of performatives, i.e. the keywords that make communication meaningful amongst KQML agents. Beyond the basic KQML, which initially contained 36 performatives, many new performatives have been proposed, though very few have actually been added to the list in practice. Further, very few works are available reflecting the evaluation of communication languages, and of KQML in particular [5]. This paper explores the possible metrics for evaluating KQML and applies them to analyze the extended version of KQML.
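As a concrete illustration of this layered structure, the sketch below builds an ask-one performative as an s-expression. The helper function and the parameter values are illustrative assumptions; only the performative name and parameter keywords such as :sender, :content, :reply-with, :language and :ontology come from classic KQML.

```python
# Hypothetical helper: serialize a performative and its :keyword parameters.
def kqml_message(performative, **params):
    parts = [performative]
    for key, value in params.items():
        # Python identifiers use "_", KQML parameter names use "-".
        parts.append(f":{key.replace('_', '-')} {value}")
    return "(" + " ".join(parts) + ")"

# Layer 1 is the :content expression, layer 2 the message attributes
# (:sender, :receiver, :reply-with), layer 3 the performative itself.
msg = kqml_message(
    "ask-one",
    sender="agentA",
    receiver="agentB",
    content="(price IBM ?p)",
    reply_with="q1",
    language="Prolog",
    ontology="NYSE-TICKS",
)
print(msg)
```

Running this prints a single s-expression carrying all three layers of the message.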
The paper is structured into four sections. Section 2 discusses the related work. Section 3 first presents the parameters explored from the literature and then analyses the extended version of KQML against them. Section 4 concludes with the scope of future work.
II. THE LITERATURE REVIEW
KQML has emerged from the efforts of many researchers; indeed, its developers have continually worked to improve it. The available literature [6,7] suggests that during its initial years of development KQML had only an informal and partial semantic description, and even now not many commercial applications using KQML are available. A communication language demands committed agents so that they can achieve the target or complete the delegated task. However, the language has drifted from fulfilling the requirements of researchers and developers working in the domain of Internet technologies. Further, to the best of our knowledge, the literature suggests that development of agent communication languages, and of KQML in particular, has not been a prime agenda of organizations.
For instance, Huhns, Bridgeland and Arni [8] specify that coordination is a property of a system of agents performing some activity in a shared environment, and that to achieve coordination, agents communicate to avoid conflicts over shared resources. Vaniya et al. [9] address the issues of KQML in particular. Covington [10] examined the encoding of speech acts in KQML and suggested ways of improving it. It is a well-known fact that agents in different MAS are usually incompatible, and a communication protocol lets them exchange messages; however, KQML does not offer a semantically enhanced approach for expressing the meaning of the messages being exchanged. Wu & Sun [11] highlight the research and design of an intelligent Multi-Agent System (MAS) using KQML as the agent communication language. Yu, Li and Deng in 2010 reviewed the features and problems of a multiagent cooperative information system (MACIS) [12]; their work also analyzed the structure of KQML and verified its rational use in MACIS. Work by Chen and Lien [13] highlights the technological challenges in the machine-to-machine domain. An in-depth survey comparing the pros and cons of various ACLs and protocols is given in [14]. Researchers [15,16] have been continuously making efforts to improve existing ACLs according to FIPA standards and to carry out tasks cooperatively to achieve the shared
goals. Authors in [17,18] have addressed the issue of semantics for KQML in particular. They described KQML as a language and an associated protocol by which intelligent software agents can communicate to share information and knowledge, and they believed that KQML, or something very much like it, will be important in building the distributed agent-oriented information systems of the future.
Mayfield and his coworkers [5] discussed the desirable features of an agent communication language, suggesting that content, semantics, implementation, networking, environment and reliability are among the most prominent desirable features of any communication language, and they further evaluated KQML against them. Improvements to the structure of KQML and to its set of performatives and parameters are also available in [14,19,20,21]. On this basis, the current structure of KQML is evaluated in the upcoming section.
III. THE EXPLORED METRICS
The literature suggests that an agent communication language should be evaluated on the basis of the metrics shown in figure 1. Since KQML is both a message-handling language and a communication protocol, the focus of this work is to evaluate it as both. The basic understanding of the listed metrics is drawn from the work of Mayfield [5]. Each metric and its implication for KQML is discussed as follows.
Figure 1: Metrics for Evaluating an Agent Communication Language
Declarative and Simple
A communication language should be declarative and simple so that it can be easily understood by its users, and its syntax should be easy to parse. Further, a language should be platform independent so that it can be used cross-platform.
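To illustrate how little machinery KQML's s-expression syntax demands of a parser, here is a minimal, hypothetical Python sketch; it handles only bare symbols and nested lists, not quoted strings or other token types:

```python
# Split on parentheses and whitespace to get a flat token stream.
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

# Recursive-descent parse: "(" opens a nested list, ")" closes it,
# anything else is a symbol returned as-is.
def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return token

msg = parse(tokenize("(ask-one :sender agentA :content (price IBM ?p))"))
print(msg[0])  # the performative name
```

The entire parser fits in a dozen lines, which is one way of making the "simple to parse" claim concrete.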
Well Defined Performatives
Since the language should be interoperable, the performatives should be clear and concise. The performatives should be able to convey the intended semantics, and the set should leave ample scope for improvement. The performatives should be such that they express not only the action associated with the message; the content of the message should also reflect the domain of focus.
Canonical Semantics
In general, the semantics of any communication language should be canonical, i.e. they should obey certain rules such that the associated meaning is obvious to any user. However, developing such semantics has always been a challenge for language designers: the words of a common language are almost unlimited, and designing a language with unlimited performatives all conveying the correct semantics is out of scope.
Qualitative Implementation
A language should be designed so that, once
implemented in practice, it executes quickly and
uses few resources; since KQML is a language for
networked environments, minimal bandwidth
utilization is also an evaluating factor. Not only the
existing version but also the improved version
should integrate easily with high-level languages
such as Java and C++.
Ease of Integration
Since nowadays all executions and implementations
are carried out in networked environments, the
language should support networking concepts
including connectionless, connection-oriented,
synchronous and asynchronous communication
modes.
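The connectionless and connection-oriented modes mentioned above can be sketched with standard sockets. A minimal illustration in Python (the agent names, message, host and port are placeholders, not part of any KQML specification):

```python
import socket

KQML_MSG = b'(tell :sender agentA :receiver agentB :content (ready))'

def send_connection_oriented(host: str, port: int) -> None:
    """Connection-oriented (TCP) mode: a stream is established before sending."""
    with socket.create_connection((host, port)) as s:
        s.sendall(KQML_MSG)

def send_connectionless(host: str, port: int) -> None:
    """Connectionless (UDP) mode: the message travels as a self-contained datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(KQML_MSG, (host, port))
```

In the connection-oriented case, delivery and ordering are handled by the transport; in the connectionless case, the language layer (e.g. message timestamps, discussed later for KQML Improved) must compensate.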
Int. J. Advanced Networking and Applications, Volume: 07, Issue: 06, Pages: 2931-2935 (2016), ISSN: 0975-0290
Support Interoperability
The agents in a multiagent system may be
homogeneous or heterogeneous and usually operate
in networked, distributed environments. To support
interoperability among agents active on different
platforms, the communicating agents must
understand the semantics of the messages being
exchanged independently of the language in which
the agents are implemented.
Reliable and Secure Communication
An agent communication language should offer
reliable communication such that the messages
being exchanged are secure. The designer should
provide for encryption and decryption of messages.
Further, a mechanism for authenticating and
authorizing the agents moving through the system
is highly desirable, especially for mobile agents
that can migrate from one hop to another; in such
cases there is a high probability of agents deviating
from their original track.
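One possible way to meet the authenticity and integrity requirements above is to attach a keyed hash to each message. A minimal sketch in Python using HMAC-SHA256; the `:signature` parameter name and its placement are our own convention for illustration, not part of any KQML specification:

```python
import hashlib
import hmac

def sign_message(message: str, shared_key: bytes) -> str:
    """Append an HMAC-SHA256 tag as a trailing :signature parameter so the
    receiver can check integrity and sender authenticity.
    Convention chosen for illustration only."""
    tag = hmac.new(shared_key, message.encode(), hashlib.sha256).hexdigest()
    return f'{message} :signature "{tag}"'

def verify_message(signed: str, shared_key: bytes) -> bool:
    """Recompute the tag over the message body and compare in constant time."""
    message, _, tag = signed.rpartition(' :signature ')
    expected = hmac.new(shared_key, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag.strip('"'), expected)

key = b'shared-secret'
signed = sign_message('(tell :sender agentA :content (done))', key)
print(verify_message(signed, key))                             # True
print(verify_message(signed.replace('agentA', 'rogue'), key))  # False
```

A tampered `:sender` field fails verification, which is exactly the property needed to keep a misbehaving or impersonated agent from being trusted by its peers.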
Extensible & Scalable
The language should be extensible in terms of the
addition of both new performatives and new
parameters. Any language must continue to
improve, because newly developed agents will offer
new features and thus demand new ontologies,
parameters and performatives for establishing
communication with other agents. Therefore, an
agent communication language should be both
extensible and scalable, i.e. it should allow new
agents to be integrated easily.
IV. THE ASSESSMENT OF KQML
IMPROVED
Thirty-six performatives, with their various parameters,
form basic KQML, and this structure is good enough to
address the needs of a moderately complex multiagent
system. However, today's multiagent systems are highly
complex and distributed, and their users demand early,
well-timed responses. Moreover, prioritizing tasks while
maintaining quality of response within time constraints
has been another challenge. KQML Improved attempts to
address these issues by introducing the :priority and
:capabilities parameters in the ask-one and advertise
performatives respectively. While the :priority parameter
represents execution time, task priority and quality of
response in a general way, the :capabilities parameter
specifies the exact execution time and quality of
response.
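As a sketch, an ask-one message carrying the :priority parameter described above might be assembled as follows. The parameter name comes from the paper, but the value encoding (`high` here) and the builder function are our illustrative assumptions; the excerpt does not fix a wire format:

```python
def make_ask_one(sender, receiver, content, priority=None):
    """Assemble an ask-one message; :priority is the KQML Improved extension.
    The 'high' value encoding used below is an assumption for illustration."""
    parts = ['ask-one', f':sender {sender}', f':receiver {receiver}',
             f':content {content}']
    if priority is not None:
        parts.append(f':priority {priority}')
    return '(' + ' '.join(parts) + ')'

msg = make_ask_one('agentA', 'agentB', '(price IBM ?p)', priority='high')
print(msg)
# (ask-one :sender agentA :receiver agentB :content (price IBM ?p) :priority high)
```

Because the parameter is optional, agents that predate KQML Improved can simply omit it, which is how the extension stays backward compatible with basic KQML.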
Further, since KQML is not only an agent communication
language but also a communication protocol, KQML
Improved suggests improvements at the protocol level. In
fact, it proposes a new communication protocol with five
new performatives reflecting the state of an agent,
extending the basic list of thirty-six performatives to
forty-one (see Figure 2). The new performatives are
active, acquire, waiting, busy and sleep. To maintain the
integrity and ordering of messages, parameters carrying a
message timestamp have been included; the number of
parameters supported by each performative has therefore
also increased. KQML Improved allows the :priority
parameter, the :capabilities parameter and msg_timestamp
in ask-one, advertise and the eight new performatives, of
which five contribute to the communication protocol and
three limit the autonomy of agents in a multiagent system.
Further, basic KQML supports one communication
protocol but considers nothing pertaining to message
timestamping and integrity. The novel protocol is an
additional feather in the cap: in addition to the basic
features that conventional KQML offers, it overcomes the
limitation mentioned above.
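A receiver can use the msg_timestamp parameter described above to restore message order and reject stale or replayed messages. A minimal sketch; the paper names the parameter, while the numeric-seconds representation and the monotonic-acceptance policy below are our assumptions:

```python
class MessageLog:
    """Accepts messages only if their msg_timestamp advances monotonically,
    rejecting out-of-order or replayed messages.
    Timestamp representation (float seconds) is an illustrative assumption."""
    def __init__(self):
        self.last_seen = float('-inf')

    def accept(self, msg_timestamp: float) -> bool:
        if msg_timestamp <= self.last_seen:
            return False   # stale, duplicated, or out of order
        self.last_seen = msg_timestamp
        return True

log = MessageLog()
print(log.accept(100.0))  # True
print(log.accept(101.5))  # True
print(log.accept(101.0))  # False: arrived out of order
```

This is the kind of integrity check that basic KQML, lacking a timestamp parameter, cannot perform at the language level.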
Figure 2: A Comparison of Basic KQML with KQML Improved
[Bar chart titled "Basic KQML Vs KQML Improved": counts of Performatives, Parameters and Protocol (scale 0-50) for Basic KQML and Improved KQML]
Further, since basic KQML, being performative-based, is
declarative and simple, KQML Improved extends the same
quality: the novel performatives and parameters that
constitute the improvement are also declarative and
simple, and the new performatives are well defined and
self-explanatory. KQML Improved supports
interoperability and is independent of agent
implementation. It is extensible, as it allows new agents to
be added with ease. Governing policies for ensuring the
authenticity and integrity of agents participating in
communication also guarantee that the communicating
entities are reliable and secure. It can therefore be
concluded that KQML Improved is on par with basic
KQML and contains all the desirable features of an agent
communication language. Table 1 lists the comparison of
basic KQML and KQML Improved.
Table 1: Comparison of Basic KQML with KQML Improved

Metric                     | Basic KQML                                 | KQML Improved
Declarative & Simple       | Based on core performatives                | Based on core as well as new performatives and parameters
Well Defined Performatives | 36 core performatives                      | 44 core performatives
Canonical Semantics        | Yes, but a few performatives are ambiguous | Yes (new parameters are semantically unambiguous)
Qualitative Implementation | Supports cross-platform implementation     | Supports cross-platform implementation
Ease of Integration        | Can be integrated with C, C++, Prolog      | Can be integrated with C, C++, Prolog, Java, JavaBeans
Reliable & Secure          | Does not ensure authenticity of agents     | Ensures message security by limiting agent autonomy
Extensible & Scalable      | Allows introduction of new agents easily   | Allows introduction of new agents easily
V. CONCLUSION
The paper began by defining metrics for evaluating an
agent communication language. To the best of our
understanding, a review of the literature did not reveal
many works proposing such metrics or evaluating agent
communication languages against them; therefore, a
detailed discussion of the desirable features of an agent
communication language, and thus of the evaluation
metrics, was presented. A comparison of basic KQML
with KQML Improved was then given, from which it can
be concluded that although basic KQML still forms the
backbone of KQML Improved, KQML Improved
contributes significantly in terms of new performatives,
parameters and a communication protocol.
REFERENCES
[1] Wooldridge, M. (2009). An introduction to
multiagent systems. John Wiley & Sons.
[2] Wooldridge, M., & Jennings, N. R. (1995). Agent
theories, architectures, and languages: a survey.
In Intelligent agents (pp. 1-39). Springer Berlin
Heidelberg
[3] Finin, T., Fritzson, R., McKay, D., & McEntire,
R. (1994, November). KQML as an agent
communication language. In Proceedings of the
third international conference on Information and
knowledge management (pp. 456-463). ACM.
[4] Labrou, Y., & Finin, T. (1997). A Proposal for a
new KQML Specification. Technical Report
Technical Report TR-CS-97-03, University of
Maryland Baltimore County.
[5] Mayfield, J., Labrou, Y., & Finin, T. (1995).
Evaluation of KQML as an agent communication
language. In Intelligent Agents II: Agent Theories,
Architectures, and Languages (pp. 347-360).
Springer Berlin Heidelberg.
[6] Labrou, Y., & Finin, T. (1994). A semantics
approach for KQML—a general purpose
communication language for software agents. In
Proceedings of the third international conference
on Information and knowledge management
(pp. 447-455). ACM.
[7] Neches, R., Fikes, R. E., Finin, T., Gruber, T.,
Patil, R., Senator, T. and Swartout, W. R. (1991)
‘Enabling technology for knowledge sharing.’ AI
magazine, Vol.12, No. 3, pp.36.
[8] Huhns, M. N., Bridgeland, D. M., & Arni, N. V.
(1990, October). A DAI communication aide.
Technical Report ACT-RA-317-90, MCC, Austin,
TX.
[9] Vaniya, S., Lad, B., & Bhavsar, S. (2011). A
survey on agent communication languages. In
International Conference on Innovation,
Management and Service, IPEDR Vol. 14,
Singapore.
[10] Covington, M. A. (1997). Speech acts, electronic
commerce, and KQML. Artificial Intelligence
Center, The University of Georgia, USA.
[11] Wu, X., & Sun, J. (2010, May). Study on a
KQML-based intelligent multi-agent system. In
2010 International Conference on Intelligent
Computation Technology and Automation (pp.
466-469). IEEE.
[12] Yu, Y., Li, Y., & Deng, Y. (2010, August).
Research and realization of KQML in Multiagent
Cooperative Information System. In 2010 5th
International Conference on Computer Science &
Education.
[13] Chen, K.-C., & Lien, S.-Y. (2014). Machine-to-
machine communications: Technologies and
challenges. Ad Hoc Networks, 18, 3-23.
[14] Jagga, A., Juneja, D., & Singh, A. Updates in
Knowledge Query Manipulation Language for
Complex Multiagent Systems.
[15] Kone, M. T., Shimazu, A., & Nakajima, T.
(2000). The state of the art in agent
communication languages. Knowledge and
Information Systems, 2(3), 259-284.
[16] Korzyk, A. (2000). Towards XML as a secure
intelligent agent communication language. In the
23rd National Information Systems Security
Conference, Baltimore Convention Center,
Baltimore, Maryland, USA.
[17] Labrou, Y. (2001). Standardizing agent
communication. In Multi-Agent Systems and
Applications (pp. 74-97). Springer Berlin
Heidelberg.
[18] Labrou, Y., & Finin, T. (1994). A semantics
approach for KQML—a general purpose
communication language for software agents. In
Proceedings of the third international conference
on Information and knowledge management
(pp. 447-455). ACM.
[19] Singh, R., Singh, A., & Mukherjee, S. (2016).
Governing the authenticity and degree of
autonomy of agents in multiagent system. Journal
of Network Communications and Emerging
Technologies (JNCET), www.jncet.org, 6(3).
[20] Jagga, A., Juneja, D., & Singh, A. (2015). KQML
based communication protocol for multi agent
systems. International Journal of Emerging
Technologies in Computational and Applied
Sciences (IJETCAS).
[21] Juneja, D., Singh, A., & Hooda, R. A three tier
secure KQML interface with novel performatives.
International Science Index, 8(3).