Agent-oriented software engineering is a promising new approach
to software engineering that uses the notion of an agent as the
primary entity of abstraction. The development of methodologies
for agent-oriented software engineering is an area that is currently
receiving much attention: several agent-oriented methodologies
have been proposed recently, and survey papers are starting to
appear. However, the authors feel that there is still much work
necessary in this area; current methodologies can be improved
upon. This paper presents a new methodology, the Styx Agent
Methodology, which guides the development of collaborative
agent systems from the analysis phase through to system
implementation and maintenance. A distinguishing feature of
Styx is that it covers a wider range of software development lifecycle
activities than do other recently proposed agent-oriented
methodologies. The key areas covered by this methodology are
the specification of communication concepts, inter-agent
communication and each agent's behaviour activation—but it
does not address the development of application-specific parts of
a system. It will be supported by a software tool which is
currently in development.
Integrated Analysis of Traditional Requirements Engineering Process with Agil... (zillesubhan)
In the past few years, the agile approach has emerged as one of the most attractive software development approaches, and it has quickly caught the attention of a large number of software development firms. However, this approach pays particular attention to the development side of a project while neglecting critical aspects of the requirements engineering process. In fact, there is no standard requirements engineering process in this approach, and requirements engineering activities vary from situation to situation. As a result, a large number of problems emerge which can lead software development projects to failure. One major drawback of the agile approach is that it is suited to small projects with limited team sizes; hence, it cannot easily be adopted for large projects. We claim that this approach can be used for large projects if a traditional requirements engineering process is combined with the agile manifesto. In fact, this combination can also help resolve many of the problems that exist in agile development methodologies. In software development the most important thing is to understand the customer's requirements clearly, which is also supported through modeling (data modeling, functional modeling, behavior modeling). Using UML we are able to build an efficient system from scratch towards the desired goal: we start from an abstract model and develop the required system by going into detail with the different UML diagrams.
Each UML diagram serves a different goal towards implementing the whole project.
A REVIEW OF SECURITY INTEGRATION TECHNIQUE IN AGILE SOFTWARE DEVELOPMENT (ijseajournal)
Agile software development has gained a lot of popularity in the software industry due to its iterative and
incremental approach as well as user involvement. Agile has also been criticized for its limited ability to
deliver secure software. In this paper, an extensive literature review has been performed in order to highlight the
existing security issues in agile software development. The majority of challenges reported in the literature
occur due to the lack of involvement of a security expert. Improving the security of a software system without
damaging the real essence of Agile can be achieved through the continuous involvement of a security engineer
throughout the development lifecycle, with clearly defined roles and responsibilities.
Exploring the Efficiency of the Program using OOAD Metrics (IRJET Journal)
This document proposes a methodology to analyze the efficiency of object-oriented programs using OOAD (Object Oriented Analysis and Design) metrics. The methodology involves compiling a program successively until it is error-free, recording the error rate at each compilation. These results are then compared to determine how many compilations were needed for the program to be error-free, indicating its efficiency. The methodology is experimentally validated on a sample Java program, with results showing the error rate decreasing with each compilation until the program is error-free after the 8th compilation, demonstrating good efficiency.
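The compile-and-record procedure described above can be sketched in a few lines of Python. The helper name `compilation_efficiency` and the sample error counts are illustrative, not taken from the paper:

```python
def compilation_efficiency(error_counts):
    """Given the number of errors recorded at each successive compilation,
    return the (1-based) compilation at which the program became error-free,
    or None if it never did."""
    for i, errors in enumerate(error_counts, start=1):
        if errors == 0:
            return i
    return None

# Hypothetical error counts from eight successive compilations of a Java program
errors_per_compile = [14, 11, 9, 6, 4, 3, 1, 0]
print(compilation_efficiency(errors_per_compile))  # → 8
```

Under this metric, fewer compilations needed to reach zero errors would indicate a more efficient program.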
Today, in the era of the software industry, there is no perfect framework available for
analysis and software development. An enormous number of software development
processes currently exist which can be implemented to stabilize the process of developing a software system, but no
perfect system has yet been recognized that can help software developers choose the best software
development process. This paper presents the framework of a skillful system combined with a Likert scale. With
the help of the Likert scale we define a rule-based model, delegate a mass score to every process, and
develop a tool named MuxSet which will help software developers select an appropriate
development process that may enhance the probability of system success.
Iceemas 119 - State of the art of metrics of aspect oriented programming (Mazen Ghareb)
The document summarizes Mazen Ismaeel Ghareb's conference presentation on aspect-oriented programming (AOP) metrics. It discusses AOP and various proposed AOP metrics from literature. The methodology analyzes 6 common AOP metrics from 23 papers. Results show most AOP metrics improve aspects of software development over object-oriented programming, though the effects depend on implementation and problem. The conclusion is that the chosen metrics generally demonstrate AOP techniques can enhance modularity, coupling, cohesion and other qualities.
Change management and version control of Scientific Applications (ijcsit)
The development process of scientific applications is largely dependent on scientific progress and the
experimental research results. Thus, dealing with frequent changes is one of the main problems faced by
the developers of scientific software. Taking into account the results of the survey conducted among
scientists in the HP-SEE project, the implementation of change management and version control software
processes is inevitable. In this paper, we propose software engineering principles that should be included
in the development process to improve the version control and change management. Moreover, we give
some specific recommendations for their implementation, thereby making a slight modification of already
generally accepted templates and methods. The development steps practiced by scientists should not be
replaced completely, but they need to be supplemented with appropriate practices, documents and formal
methods. We also emphasize the reasons for the inclusion of these two processes and the consequences that
may arise as a result of their non-application.
A Review on Software Mining: Current Trends and Methodologies (IJERA Editor)
With the evolution of software mining, it has come to play a crucial role in day-to-day activities. By empowering software with data mining methods, it aids software developers and managers at all managerial levels as a tool for using software engineering data (in the form of code, design documents, bug reports) to visualize a project's status, evolution and progress. Further, mining methods and algorithms help devise models to identify fault-prone parts of a real-time system before the testing phase or the evolution phase. This review paper highlights the different methodologies for software mining, along with its extension as a tool for fault tolerance, as well as a bibliography with special prominence on mining software engineering information.
Productivity Factors in Software Development for PC Platform (IJERA Editor)
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms, CRT and ANN, recommended by the Auto Classifier tool in SPSS Modeler, were used to determine the most important variables (attributes) of software development in a PC environment. While their classification accuracies for productive versus non-productive cases are relatively close (72% vs 69%), their rankings of important variables differ: CRT ranks Programming Language as the most important variable and Function Points as the least important, whereas ANN ranks Function Points as the most important, followed by team size and Programming Language.
A Novel Optimization towards Higher Reliability in Predictive Modelling towar... (IJECEIAES)
Although the area of software engineering has made remarkable progress in the last decade, there has been less attention to the concept of code reusability. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. We review the existing systems and find that no standard research approach toward code reusability has been introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study of a near real-time challenge and involve it in our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
Requirements effort estimation state of the practice - mohamad kassab (IWSM Mensura)
This document summarizes the results of a survey on requirements engineering practices. It found that while agile practices are widely used, many classic requirements techniques are still common, including interviews, prototyping, and use case modeling. Effort estimation is still important for projects of all sizes, with story points and expert judgment being frequently used. Both agile and waterfall projects reported high satisfaction with productivity and quality, though agile projects tended to have higher satisfaction. Overall the survey found that while agile has increased in popularity, many requirements practices are still applied across both agile and traditional methodologies.
In the present paper, the applicability and capability of A.I. techniques for effort estimation prediction have been investigated. It is seen that neuro-fuzzy models are very robust, characterized by fast computation and capable of handling distorted data. Due to the presence of data non-linearity, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using a hybrid learning algorithm. From the analysis it is seen that the effort estimation prediction model developed using the OHLANFIS technique performs well compared to the normal ANFIS model.
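As a rough illustration of the Gaussian membership functions and weighted-average defuzzification that ANFIS-style models rely on, here is a minimal two-rule Sugeno sketch in Python. The centers, widths and linear consequents are invented for illustration; they are not the fitted OHLANFIS parameters:

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership function, as used in ANFIS-style premise layers."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_effort(size_kloc):
    """Toy first-order Sugeno inference with two rules over project size.
    All parameters below are illustrative assumptions."""
    # Rule firing strengths from Gaussian membership of "small" and "large"
    w_small = gaussian_mf(size_kloc, center=10.0, sigma=8.0)
    w_large = gaussian_mf(size_kloc, center=60.0, sigma=20.0)
    # Rule consequents: effort (person-months) as linear functions of size
    y_small = 2.0 + 0.9 * size_kloc
    y_large = 5.0 + 1.4 * size_kloc
    # Weighted-average defuzzification
    return (w_small * y_small + w_large * y_large) / (w_small + w_large)

print(round(gaussian_mf(10.0, 10.0, 8.0), 3))  # membership peaks at 1.0 at the center
print(fuzzy_effort(30.0))
```

In a real ANFIS, the centers and sigmas would be initialized (e.g. by subtractive clustering) and then tuned, together with the consequent coefficients, by a hybrid learning algorithm.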
SETTA'18 Keynote: Intelligent Software Engineering: Synergy between AI and So... (Tao Xie)
2018 Keynote Speaker, Symposium on Dependable Software Engineering - Theories, Tools and Applications (SETTA 2018). "Intelligent Software Engineering: Synergy between AI and Software Engineering" http://confesta2018.csp.escience.cn/dct/page/65581
Model Driven Method Engineering. A Supporting Infrastructure (Mario Cervera)
The document proposes a methodological framework and software infrastructure to support model-driven method engineering. It applies model-driven development principles to define software production methods using standardized languages like SPEM, and then generate CASE tool support through model transformations. A prototype called 4ME was developed to demonstrate defining a sample OOWS-BP method, configuring it with technical fragments, and automatically generating a final CASE tool to support the method. The approach aims to address current gaps in method engineering tool support and adoption of standards.
Software engineering is defined as the process of analyzing user requirements and then designing, building, and testing a software application that will satisfy those requirements. ... It helps you to obtain, economically, software which is reliable and works efficiently on real machines.
5.0 Estimating Agile Development Projects (Glen Alleman)
The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them ‒ Steve McConnell
The document summarizes Tamara Lopez's PhD research proposal on reasoning about flaws in software design. The research aims to analyze software failures by taking a situational approach between the broad scope of systemic analyses and the narrow focus of means analyses. It will apply qualitative methods to examine how failures manifest and are addressed in software development. The goal is to better understand why some software fails and other software succeeds.
Motivation in Software Engineering: A Systematic Review Update
A. César C. França, Tatiana B. Gouveia, Pedro C. F. Santos, Celio A. Santana, Fabio Q. B. da Silva
Abstract - Background/Aim – Given the relevance and importance that the understanding of motivation has gained in the field of software engineering, this work was carried out in order to update the results of a literature review carried out in 2006 on motivation in software engineering. Method – Based on guidelines for this specific type of study, we replicated the original study protocol. Results – The combination of manual and automatic searches retrieved 6,534 papers, of which 53 relevant papers were selected for data extraction and analysis. Conclusions – Studies address motivation using several viewpoints and approaches, and even though the number of studies in this area has increased, the overall understanding of what actually motivates software engineers does not seem to have advanced significantly in the last five years.
Paper presented at Evaluation and Assessment in Software Engineering, Durham, 2011.
http://www.haseresearch.com
This thesis focuses on assessing, evaluating, and managing risks in software development projects for Indian software companies. It aims to identify project risks, analyze dimensions of specific risks through primary research, and study an existing project to identify risks and mitigation strategies. The document discusses related work, project and software development lifecycles, types of risks, the research methodology used, a case study of a project called ERWINA, and concludes with recommendations for improving risk management practices.
Intelligent Software Engineering: Synergy between AI and Software Engineering... (Tao Xie)
2018 Distinguished Speaker, the UC Irvine Institute for Software Research (ISR) Distinguished Speaker Series 2018-2019. "Intelligent Software Engineering: Synergy between AI and Software Engineering" http://isr.uci.edu/content/isr-distinguished-speaker-series-2018-2019
AN INVESTIGATION OF SOFTWARE REQUIREMENTS PRACTICES AMONG SOFTWARE PRACTITION... (ijseajournal)
This paper presents the results of a software requirements practices survey among software practitioners, a study in the city of Jeddah, Saudi Arabia. As software requirements are important and contribute to the success of a software development project, it is interesting to investigate current software requirements practices in the Kingdom of Saudi Arabia. As initial work, a survey was conducted in
Jeddah before extending it to the software industry of the Kingdom as a whole. The survey was conducted by distributing a questionnaire to software practitioners; 17 respondents completed the questionnaire out of the 50 distributed, a response rate of 34%. The result of this survey is promising, and it shows that the requirements management area should be the focus of
future improvement. In the future, the survey will cover software engineering and requirements engineering practices across the entire Kingdom of Saudi Arabia software industry.
Publish or Perish: Questioning the Impact of Our Research on the Software Dev... (Margaret-Anne Storey)
A Video for this talk can be found here: https://www.youtube.com/watch?v=DvRdBb9TEUI
Abstract: How often do we pause to consider how we, as a community, decide which developer problems we address, or how well we are doing at evaluating our solutions within real development contexts? Many of our research contributions in software engineering can be considered as purely technical. Yet somewhere, at some time, a software developer may be impacted by our research. In this talk, I invite the community to question the impact of our research on software developer productivity. To guide the discussion, I first paint a picture of the modern-day developer and the challenges they experience. I then present 4+1 views of software engineering research --- views that concern research context, method choice, research paradigms, theoretical knowledge and real-world impact. I demonstrate how these views can be used to design, communicate and distinguish individual studies, but also how they can be used to compose a critical perspective of our research at a community level. To conclude, I propose structural changes to our collective research and publishing activities --- changes to provoke a more expeditious consideration of the many challenges facing today's software developer.
(Thanks to Brynn Hawker for slide design and proposed new badges. brynn@hawker.me)
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
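A minimal sketch of estimation by analogy with a reuse discount, under assumed field names (`function_points`, `effort_pm`, `reuse_fraction`) and a deliberately simple one-dimensional similarity measure; the article's actual search and retrieval techniques are not reproduced here:

```python
def analogy_estimate(new_project, repository, k=2):
    """Estimate effort for `new_project` from the k most similar completed
    projects, then discount by the fraction of code judged reusable.
    Field names and the similarity measure are illustrative assumptions."""
    def distance(p):
        return abs(p["function_points"] - new_project["function_points"])
    nearest = sorted(repository, key=distance)[:k]
    base = sum(p["effort_pm"] for p in nearest) / len(nearest)
    # Reduce the analogy-based estimate by the reusable fraction
    return base * (1.0 - new_project.get("reuse_fraction", 0.0))

repo = [
    {"function_points": 100, "effort_pm": 12.0},
    {"function_points": 210, "effort_pm": 25.0},
    {"function_points": 420, "effort_pm": 55.0},
]
new = {"function_points": 200, "reuse_fraction": 0.25}
print(analogy_estimate(new, repo))  # → 13.875 (mean of 25 and 12, discounted by 25%)
```

The key idea matches the abstract: similar past projects anchor the estimate, and retrieved reusable code lowers it.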
The document discusses micro patterns in agile software development. It begins with an introduction to agile software development and the evolution of micro patterns. It then defines micro patterns, provides examples, and discusses Gil and Maman's approach to studying micro patterns in two software projects. Their methodology involved detecting micro patterns, surveying developers, linking bug reports to code, and analyzing how micro pattern usage changed over multiple versions of the projects. The conclusion was that classes not following micro patterns were more faulty, and certain micro patterns like function pointers were common across all versions.
The effects of duration based moving windows with estimation by analogy - sou... (IWSM Mensura)
Fixed-size and fixed-duration moving windows were evaluated with estimation by analogy (EbA) effort estimation. With fixed-size windows, the modified EbA showed moving windows became significantly advantageous over the growing portfolio at medium window sizes, while trends were similar but ranges differed from past studies. With fixed-duration windows, moving windows improved accuracy less significantly than fixed-size windows over smaller window sizes. Comparisons with past studies found overall trends the same but effective window sizes and ranges differed.
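The distinction between a growing portfolio, a fixed-size window and a fixed-duration window can be sketched as a training-set selection policy. Field names and time units below are assumptions for illustration:

```python
def windowed_training_set(history, window_size=None, window_span=None, now=100):
    """Select the training projects visible under a moving window.
    `history` is a list of (finish_time, project) pairs; `now` is the
    current time in the same (arbitrary) units."""
    history = sorted(history, key=lambda pair: pair[0])
    if window_size is not None:      # fixed-size: the N most recently finished projects
        return [p for _, p in history[-window_size:]]
    if window_span is not None:      # fixed-duration: projects finished within the span
        return [p for f, p in history if now - f <= window_span]
    return [p for _, p in history]   # growing portfolio: keep everything

hist = [(40, "A"), (70, "B"), (85, "C"), (95, "D")]
print(windowed_training_set(hist, window_size=2))             # → ['C', 'D']
print(windowed_training_set(hist, window_span=20, now=100))   # → ['C', 'D']
print(windowed_training_set(hist))                            # → ['A', 'B', 'C', 'D']
```

The studies compare estimation-by-analogy accuracy when the analogy pool is restricted by each policy rather than drawn from the full portfolio.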
ACM Chicago March 2019 meeting: Software Engineering and AI - Prof. Tao Xie, ... (ACM Chicago)
Join us as Tao Xie, Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign and ACM Distinguished Speaker, talks about Intelligent Software Engineering: Synergy between AI and Software Engineering. This is a joint meeting hosted by Chicago Chapter ACM / Loyola University Computer Science Department.
In software measurement, assessing the validity of software metrics is a very difficult task due to the lack of
theoretical and empirical methodology [41, 44, 45]. During recent years, a number of researchers have
addressed the issue of validating software metrics; at present, software metrics are validated theoretically
using properties of measures. Software measurement plays an important role in understanding and
controlling software development practices and products. The major requirement in software measurement
is that the measures must accurately represent the attributes they purport to quantify, so validation is
critical to the success of software measurement. Validation is a collection of analysis and testing activities
across the full life cycle that complements the efforts of other quality engineering functions, and it is a
critical task in any engineering project. Its objective is to discover defects in a system and to assess whether
or not the system is useful and usable in an operational situation. In software engineering, validation is one
of the disciplines that help build quality into software; the major objective of the software validation process
is to determine that the software performs its intended functions correctly and to provide information about
its quality and reliability. This paper discusses the validation methodology, techniques and the different
properties of measures that are used for software metrics validation. In most cases, theoretical and
empirical validations are conducted for software metrics validation in software engineering [1-50].
The document discusses concepts related to software project scheduling, including:
- Software project scheduling involves distributing estimated effort across the planned project duration by allocating effort to specific tasks.
- There are two perspectives on software scheduling - either working within a prescribed end date or setting the end date based on the software team's estimates.
- Basic principles of software scheduling include compartmentalizing tasks, determining dependencies, allocating time estimates, validating effort, and defining responsibilities, outcomes, and milestones.
- Tracking project schedules involves comparing actual progress to planned schedules through status meetings, reviews, and milestone completions.
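The compartmentalization and dependency principles above amount to a forward pass over a task graph. The sketch below computes each task's earliest finish time from illustrative durations and dependencies (a simplified critical-path calculation, not a full scheduling tool):

```python
def schedule(tasks):
    """Compute each task's earliest finish time from its dependencies and
    effort estimate. `tasks` maps name -> (duration, [dependency names]);
    the task names and durations below are illustrative."""
    finish = {}
    def finish_time(name):
        if name not in finish:
            duration, deps = tasks[name]
            # A task can start only after all of its dependencies finish
            start = max((finish_time(d) for d in deps), default=0)
            finish[name] = start + duration
        return finish[name]
    return {name: finish_time(name) for name in tasks}

plan = {
    "requirements": (2, []),
    "design":       (3, ["requirements"]),
    "coding":       (5, ["design"]),
    "testing":      (3, ["coding"]),
    "user_docs":    (2, ["design"]),
}
print(schedule(plan))  # the largest finish time marks the final milestone
```

Comparing these planned finish times against actual completion dates is one concrete way to track a schedule at milestone granularity.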
Multiagent-based methodologies have become an
important subject of research in advanced software engineering.
Several methodologies have been proposed, as a theoretical
approach, to facilitate and support the development of complex
distributed systems. An important question when facing the
construction of agent applications is deciding which
methodology to follow. Trying to answer this question, a
framework with several criteria is applied in this paper for the
comparative analysis of existing multiagent system
methodologies. The results of the comparison over two of them
conclude that those methodologies have not reached a sufficient
maturity level to be used by the software industry. The
framework has also proved its utility for the evaluation of any
kind of multiagent-based software engineering methodology.
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
A Novel Optimization towards Higher Reliability in Predictive Modelling towar...IJECEIAES
Although the area of software engineering has made remarkable progress in the last decade, little attention has been paid to the concept of code reusability. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. Our review of existing work finds that no standard research approach toward code reusability has been introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study of a near real-time challenge and incorporate it into our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
Requirements effort estimation state of the practice - mohamad kassabIWSM Mensura
This document summarizes the results of a survey on requirements engineering practices. It found that while agile practices are widely used, many classic requirements techniques are still common, including interviews, prototyping, and use case modeling. Effort estimation is still important for projects of all sizes, with story points and expert judgment being frequently used. Both agile and waterfall projects reported high satisfaction with productivity and quality, though agile projects tended to have higher satisfaction. Overall the survey found that while agile has increased in popularity, many requirements practices are still applied across both agile and traditional methodologies.
In the present paper, the applicability and capability of AI techniques for effort-estimation prediction have been investigated. Neuro-fuzzy models are found to be very robust, characterized by fast computation, and capable of handling distorted data. Because of the non-linearity present in the data, such a model is an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using a hybrid learning algorithm. The analysis shows that the effort-estimation prediction model developed with the OHLANFIS technique outperforms the standard ANFIS model.
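The Gaussian membership function at the core of ANFIS-style models can be written down directly; the center and width values below are illustrative, not the parameters reported by the paper.

```python
# Gaussian membership function of the kind tuned in ANFIS-style models:
# mu(x) = exp(-(x - c)^2 / (2 * sigma^2)), where c (center) and sigma
# (width) are the parameters a hybrid learning algorithm would optimize.
import math

def gaussian_mf(x, c, sigma):
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Membership is 1.0 at the center and decays with distance from it.
print(gaussian_mf(5.0, 5.0, 2.0))            # 1.0
print(round(gaussian_mf(7.0, 5.0, 2.0), 3))  # 0.607
```

In an actual ANFIS, the hybrid learning algorithm tunes `c` and `sigma` for each fuzzy rule against the training data.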
SETTA'18 Keynote: Intelligent Software Engineering: Synergy between AI and So...Tao Xie
2018 Keynote Speaker, Symposium on Dependable Software Engineering - Theories, Tools and Applications (SETTA 2018). "Intelligent Software Engineering: Synergy between AI and Software Engineering" http://confesta2018.csp.escience.cn/dct/page/65581
Model Driven Method Engineering. A Supporting InfrastructureMario Cervera
The document proposes a methodological framework and software infrastructure to support model-driven method engineering. It applies model-driven development principles to define software production methods using standardized languages like SPEM, and then generate CASE tool support through model transformations. A prototype called 4ME was developed to demonstrate defining a sample OOWS-BP method, configuring it with technical fragments, and automatically generating a final CASE tool to support the method. The approach aims to address current gaps in method engineering tool support and adoption of standards.
Software engineering is defined as the process of analyzing user requirements and then designing, building, and testing a software application that will satisfy those requirements. ... It helps you to obtain, economically, software that is reliable and works efficiently on real machines.
5.0 Estimating Agile Development ProjectsGlen Alleman
The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them ‒ Steve McConnell
The document summarizes Tamara Lopez's PhD research proposal on reasoning about flaws in software design. The research aims to analyze software failures by taking a situational approach between the broad scope of systemic analyses and the narrow focus of means analyses. It will apply qualitative methods to examine how failures manifest and are addressed in software development. The goal is to better understand why some software fails while other software succeeds.
Motivation in Software Engineering: A Systematic Review Update
A. César C. França, Tatiana B. Gouveia, Pedro C. F. Santos, Celio A. Santana, Fabio Q. B. da Silva
Abstract - Background/Aim – Given the relevance and importance that the understanding of motivation has gained in the field of software engineering, this work was carried out to update the results of a 2006 literature review on motivation in software engineering. Method – Based on guidelines for this specific type of study, we replicated the original study protocol. Results – The combination of manual and automatic searches retrieved 6,534 papers, of which 53 relevant papers were selected for data extraction and analysis. Conclusions – Studies address motivation using several viewpoints and approaches and, even though the amount of research in this area has increased, the overall understanding of what actually motivates software engineers does not seem to have significantly advanced in the last five years.
Paper presented at Evaluation and Assessment in Software Engineering, Durham, 2011.
http://www.haseresearch.com
This thesis focuses on assessing, evaluating, and managing risks in software development projects for Indian software companies. It aims to identify project risks, analyze dimensions of specific risks through primary research, and study an existing project to identify risks and mitigation strategies. The document discusses related work, project and software development lifecycles, types of risks, the research methodology used, a case study of a project called ERWINA, and concludes with recommendations for improving risk management practices.
Intelligent Software Engineering: Synergy between AI and Software Engineering...Tao Xie
2018 Distinguished Speaker, the UC Irvine Institute for Software Research (ISR) Distinguished Speaker Series 2018-2019. "Intelligent Software Engineering: Synergy between AI and Software Engineering" http://isr.uci.edu/content/isr-distinguished-speaker-series-2018-2019
AN INVESTIGATION OF SOFTWARE REQUIREMENTS PRACTICES AMONG SOFTWARE PRACTITION...ijseajournal
This paper presents the results of a survey of software requirements practices among software practitioners in the city of Jeddah, Saudi Arabia. Since software requirements are important and contribute to the success of a software development project, it is interesting to investigate current software requirements practices in the Kingdom of Saudi Arabia. As initial work, the survey was conducted in Jeddah as a study before extending it to the software industry of the whole Kingdom. The survey was conducted by distributing a questionnaire to software practitioners; 17 of the 50 distributed questionnaires were completed, a response rate of 34%. The results of this survey are promising and show that the requirements management area should be a focus for future improvement. In the future, the survey will address software engineering and requirements engineering practices across the entire Kingdom of Saudi Arabia software industry.
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...Margaret-Anne Storey
A Video for this talk can be found here: https://www.youtube.com/watch?v=DvRdBb9TEUI
Abstract: How often do we pause to consider how we, as a community, decide which developer problems we address, or how well we are doing at evaluating our solutions within real development contexts? Many of our research contributions in software engineering can be considered as purely technical. Yet somewhere, at some time, a software developer may be impacted by our research. In this talk, I invite the community to question the impact of our research on software developer productivity. To guide the discussion, I first paint a picture of the modern-day developer and the challenges they experience. I then present 4+1 views of software engineering research --- views that concern research context, method choice, research paradigms, theoretical knowledge and real-world impact. I demonstrate how these views can be used to design, communicate and distinguish individual studies, but also how they can be used to compose a critical perspective of our research at a community level. To conclude, I propose structural changes to our collective research and publishing activities --- changes to provoke a more expeditious consideration of the many challenges facing today's software developer.
(Thanks to Brynn Hawker for slide design and proposed new badges. brynn@hawker.me)
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
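Estimation by analogy, on which the proposed model is based, can be sketched as a nearest-neighbour lookup over a repository of past projects; the feature set and effort figures below are invented for illustration.

```python
# Sketch of estimation by analogy: estimate a new project's effort from
# the most similar completed project in a repository. Feature vectors
# ([size in KLOC, team size]) and effort values are hypothetical.
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

repository = [
    ([10.0, 3.0], 24.0),    # (features, effort in person-months)
    ([50.0, 8.0], 120.0),
    ([25.0, 5.0], 60.0),
]

def estimate_by_analogy(features):
    # Take the effort of the closest past project as the estimate.
    _, effort = min(repository, key=lambda p: euclidean(p[0], features))
    return effort

print(estimate_by_analogy([22.0, 5.0]))  # nearest is [25.0, 5.0] -> 60.0
```

The model described in the article would additionally search the retrieved project's code for reusable functions and subtract the corresponding effort, which is beyond this sketch.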
The document discusses micro patterns in agile software development. It begins with an introduction to agile software development and the evolution of micro patterns. It then defines micro patterns, provides examples, and discusses Gil and Maman's approach to studying micro patterns in two software projects. Their methodology involved detecting micro patterns, surveying developers, linking bug reports to code, and analyzing how micro pattern usage changed over multiple versions of the projects. The conclusion was that classes not following micro patterns were more faulty, and certain micro patterns like function pointers were common across all versions.
The effects of duration based moving windows with estimation by analogy - sou...IWSM Mensura
Fixed-size and fixed-duration moving windows were evaluated with estimation-by-analogy (EbA) effort estimation. With fixed-size windows, moving windows became significantly advantageous over the growing portfolio at medium window sizes; trends were similar to past studies, but the ranges differed. With fixed-duration windows, moving windows improved accuracy less significantly than fixed-size windows at smaller window sizes. Comparisons with past studies found the overall trends the same, but the effective window sizes and ranges differed.
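The two windowing policies compared above can be sketched as follows; the portfolio data are invented, and a real EbA study would estimate from the windowed set rather than print it.

```python
# Two windowing policies for selecting the EbA training set: keep only
# the N most recently completed projects (fixed-size), or only those
# completed within the last D months (fixed-duration). Completion dates
# are months since an arbitrary origin; all data are invented.
projects = [(1, 20.0), (6, 35.0), (14, 30.0), (20, 45.0), (26, 50.0)]

def fixed_size_window(portfolio, n):
    # N most recently completed projects.
    return sorted(portfolio)[-n:]

def fixed_duration_window(portfolio, now, months):
    # Projects completed within the last `months` months.
    return [p for p in portfolio if now - p[0] <= months]

print(fixed_size_window(projects, 2))
print(fixed_duration_window(projects, now=30, months=12))
```

Both calls here happen to select the same two projects; varying `n` and `months` is exactly what the compared studies do to find the effective window sizes.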
ACM Chicago March 2019 meeting: Software Engineering and AI - Prof. Tao Xie, ...ACM Chicago
Join us as Tao Xie, Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign and ACM Distinguished Speaker, talks about Intelligent Software Engineering: Synergy between AI and Software Engineering. This is a joint meeting hosted by Chicago Chapter ACM / Loyola University Computer Science Department.
In software measurement, assessing the validity of software metrics is a very difficult task due to the lack of theoretical and empirical methodology [41, 44, 45]. In recent years a number of researchers have addressed the issue of validating software metrics; at present, software metrics are validated theoretically using properties of measures. Software measurement plays an important role in understanding and controlling software development practices and products. The central requirement in software measurement is that the measures must accurately represent the attributes they purport to quantify, so validation is critical to the success of software measurement. Validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions, and it is a critical task in any engineering project. Its objective is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In software engineering, validation is one of the disciplines that help build quality into software; the major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodology, techniques, and the different properties of measures that are used for software metrics validation. In most cases, theoretical and empirical validations are conducted for software metrics validation in software engineering [1-50].
The document discusses concepts related to software project scheduling, including:
- Software project scheduling involves distributing estimated effort across the planned project duration by allocating effort to specific tasks.
- There are two perspectives on software scheduling - either working within a prescribed end date or setting the end date based on the software team's estimates.
- Basic principles of software scheduling include compartmentalizing tasks, determining dependencies, allocating time estimates, validating effort, and defining responsibilities, outcomes, and milestones.
- Tracking project schedules involves comparing actual progress to planned schedules through status meetings, reviews, and milestone completions.
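The scheduling principles listed above (compartmentalized tasks, dependencies, allocated effort) can be sketched as a minimal dependency-ordered schedule; the task names and effort figures are hypothetical.

```python
# Minimal illustration of schedule compartmentalization: tasks with
# estimated effort (days) and dependencies; earliest finish times are
# computed by walking the tasks in dependency order.
from graphlib import TopologicalSorter

effort = {"spec": 5, "design": 8, "code": 15, "test": 10}
deps = {"design": {"spec"}, "code": {"design"}, "test": {"code"}}

finish = {}
for task in TopologicalSorter(deps).static_order():
    # A task starts when its latest dependency finishes.
    start = max((finish[d] for d in deps.get(task, ())), default=0)
    finish[task] = start + effort[task]

print(finish["test"])  # elapsed duration along the whole chain: 38
```

Tracking, as described above, then amounts to comparing actual finish dates against these planned values at each milestone.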
Multiagent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, this paper applies a framework with several criteria to the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of multiagent-based software engineering methodology.
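A criteria-based comparison framework of the kind described can be sketched as weighted scoring; the criteria, weights, and scores below are invented, not those of the paper.

```python
# Sketch of a criteria-based methodology comparison: each methodology
# is scored (0-5) against weighted criteria and ranked by the weighted
# sum. All names, weights, and scores are illustrative.
criteria = {"notation": 0.3, "process coverage": 0.4, "tool support": 0.3}
scores = {
    "MethodologyA": {"notation": 4, "process coverage": 3, "tool support": 2},
    "MethodologyB": {"notation": 3, "process coverage": 4, "tool support": 1},
}

def weighted_score(name):
    return sum(criteria[c] * scores[name][c] for c in criteria)

ranking = sorted(scores, key=weighted_score, reverse=True)
print(ranking[0], round(weighted_score(ranking[0]), 2))
```

A maturity judgment like the paper's would then follow from whether the top-ranked methodology clears some threshold on the industry-relevant criteria.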
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to present. Fewer than 50 papers were found related to overall cost estimation, less than 25 for effort estimation, and only 9 for fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical
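The Basic COCOMO model mentioned above computes effort as a power function of size; the coefficients below are Boehm's published organic-mode values, while the project size is hypothetical.

```python
# Basic COCOMO (organic mode): effort in person-months from size in
# KLOC, E = a * KLOC^b, with Boehm's organic-mode coefficients
# a = 2.4 and b = 1.05. The 32 KLOC project is invented.
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b

effort = cocomo_basic_effort(32.0)
print(round(effort, 1))  # roughly 91 person-months
```

The fuzzy-logic and neural-network extensions surveyed in the document typically replace or tune these fixed coefficients rather than change the power-law form.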
Episode an extreme programming method for innovative software based on system...IJCSEA Journal
In software development, the waterfall model is commonly used, especially for large-scale software systems. For smaller-scale software development, agile approaches such as extreme programming or Scrum are used. Traditional software development methodologies are mainly targeted toward customer-centric development, and therefore new software methodologies are often not well received in industry. In this study, we propose a new software development methodology aimed at developing innovative software using artificial intelligence (AI), idea creation, value engineering, and systems design. Our method is named EPISODE (Extreme Programming method for Innovative SOftware based on systems DEsign). EPISODE supports the efficient and creative development of open source software (OSS) by small groups. We also describe a classroom evaluation of EPISODE.
1) The document discusses various ways that artificial intelligence can be applied to different phases of the software engineering lifecycle, including requirements specification, design, coding, testing, and estimation.
2) It provides examples of using techniques like natural language processing to clarify requirements, knowledge graphs to manage requirements information, and computational intelligence for requirements prioritization.
3) For design, the document discusses using intelligent agents to recommend patterns and designs to satisfy quality attributes from requirements and assist with assigning responsibilities to components.
Agile techniques that utilize iterative development are broadly used in industry projects as a lightweight development approach that can accommodate continuous changes in requirements; short iterations enable efficient product delivery. Traditional software development methods are not efficient or effective at handling rapid requirement changes. Despite the benefits of agile, critics argue that it fails to pay sufficient attention to architectural and design issues and is therefore bound to produce small design decisions. The past decade has seen numerous changes in systems development, with many organizations accepting agile techniques as a viable methodology for developing systems, and an increase in the number of research studies reveals the growing demand for and acceptance of agile methodologies. While most research has focused on the acceptance rate and adaptation of agile practices, there is very limited knowledge of their post-adoption usage and incorporation within organizations. Several factors explain the effective usage of agile methodologies. A combination of previous research in agile methodologies, diffusion of innovations, information systems implementation, and systems development has been carried out to develop a research model that identifies the main factors relevant to the propagation and effective usage of agile methodologies in organizations.
A Systematic Review On Software Cost Estimation In Agile Software DevelopmentBrooke Heidt
This document provides a summary of a systematic literature review on software cost estimation techniques for agile software development. It discusses the challenges of cost estimation for agile projects due to their dynamic nature. The review examines various cost estimation mechanisms that have been explored for agile methodologies and compares their accuracy based on parameters like magnitude of relative error, mean magnitude of relative error, and others. It aims to help agile practitioners understand current trends in cost estimation and determine which techniques may be suitable given different project circumstances.
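The accuracy parameters mentioned above (magnitude of relative error and its mean) have simple definitions; the actual/estimated effort pairs below are illustrative.

```python
# Magnitude of relative error: MRE = |actual - estimated| / actual.
# MMRE is the mean MRE over a set of projects; lower is better.
# The (actual, estimated) effort pairs are invented for illustration.
def mre(actual, estimated):
    return abs(actual - estimated) / actual

pairs = [(100.0, 80.0), (50.0, 60.0), (200.0, 190.0)]
mmre = sum(mre(a, e) for a, e in pairs) / len(pairs)
print(round(mmre, 3))  # 0.15
```

A common (if debated) rule of thumb in the estimation literature treats MMRE <= 0.25 as acceptable accuracy, which is the kind of threshold such reviews compare techniques against.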
Software metric analysis methods for product development maintenance projectsIAEME Publication
This document discusses various software metrics and methods for analyzing metrics to improve the software development process. It begins with an introduction to software metrics and their importance for project management. It then describes common software development phases and associated metrics that can be collected at each phase. The remainder of the document focuses on different methods for analyzing metrics, including pie charts, Pareto diagrams, bar charts, line charts, scatter diagrams, radar diagrams, and control charts. These analysis methods help identify areas for process improvement and determine if changes have led to desired outcomes.
Software metric analysis methods for product developmentiaemedu
This document discusses various software metrics and methods for analyzing metrics to improve the software development process. It begins with an introduction to software metrics and their importance for project management. It then describes common software development phases and associated metrics that can be collected at each phase, such as lines of code, defects, and staff hours. The document proceeds to explain different types of charts and diagrams that can be used to analyze and visualize metrics data, including pie charts, Pareto diagrams, histograms, line charts, scatter plots, radar diagrams, and control charts. These various analysis methods help identify areas for process improvement and determine whether changes have resulted in desired outcomes.
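Among the chart-based analyses described, control charts are the most algorithmic: points outside mean plus or minus three standard deviations signal an out-of-control process. The defect counts below are invented.

```python
# Control-chart limits for a metrics series: a point beyond
# mean +/- 3 * standard deviation of the baseline is flagged.
# Defect counts per build are invented for illustration.
from statistics import mean, pstdev

defects = [4, 6, 5, 7, 5, 6, 4, 20]
m, s = mean(defects[:-1]), pstdev(defects[:-1])  # baseline excludes last point
ucl = m + 3 * s                  # upper control limit
lcl = max(0.0, m - 3 * s)        # lower control limit (counts can't go below 0)
print(defects[-1] > ucl)         # the spike to 20 exceeds the upper limit
```

The other visualizations listed (Pareto diagrams, scatter plots, radar diagrams) are plotting choices over the same collected metrics rather than separate computations.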
DEVOPS ADOPTION IN INFORMATION SYSTEMS PROJECTS; A SYSTEMATIC LITERATURE REVIEWijseajournal
The word DevOps derives from two words: Development and Operations. DevOps has been recognized as an interesting and novel approach that extends the commonly used Agile software development methodology, raising the agility of the software development process. Practical issues with the Agile methodology emphasize the need for collaboration between software development and operations teams. This collaboration, achieved by the DevOps approach, engages with the Agile methodology to improve the quality, performance, and speed of software development. Since DevOps is an ascendant approach in the software development industry, this research conducted a literature review to study the evolution of the DevOps approach and its adoption in information systems projects. This was accomplished by reviewing the Agile methodology, issues with the Agile methodology, the DevOps approach, challenges and overcoming strategies of DevOps, and success factors of the DevOps approach. Finally, the paper provides a better understanding of DevOps adoption in information systems projects.
Software testing is a key part of software engineering used to evaluate software quality and identify errors. There are various software testing techniques and methods, but thoroughly investigating a complex software system is more important than following a specific procedure. Testing complex software cannot discover all errors, but it can help improve quality. Software engineering involves defining requirements, design, development, testing, and maintenance of software using methodologies such as agile development.
Paper 25 agent-oriented_software_testing_role_oriented_approachFraz Awan
This document proposes a role-oriented approach to testing agent-oriented software. It begins by discussing limitations in existing agent-oriented software engineering methodologies, noting that most do not adequately address the testing phase. It then introduces the concept of roles as an important attribute of agents. The proposed approach uses a V-model framework with testing occurring on the right side to mirror development activities on the left. Testing is focused on roles and responsibilities, beginning with unit testing of individual agent responsibilities, then integration testing of agent interactions, and finally system-level testing. A role schema is presented as a way to define roles, associated agents, goals, protocols, permissions and responsibilities to support the role-oriented testing approach.
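The role schema described can be sketched as a simple data structure, together with a unit-level check that an agent covers a role's responsibilities; all field names and the sample role are hypothetical.

```python
# Sketch of a role schema for role-oriented agent testing: a role with
# goals, protocols, permissions, and responsibilities, plus a trivial
# unit check that an agent's capabilities cover the role. Names invented.
from dataclasses import dataclass, field

@dataclass
class RoleSchema:
    name: str
    goals: list = field(default_factory=list)
    protocols: list = field(default_factory=list)
    permissions: list = field(default_factory=list)
    responsibilities: list = field(default_factory=list)

broker = RoleSchema(
    name="Broker",
    goals=["match buyers to sellers"],
    protocols=["request-quote", "accept-offer"],
    permissions=["read catalog"],
    responsibilities=["respond_to_quote_request"],
)

agent_capabilities = {"respond_to_quote_request", "log_trades"}
uncovered = [r for r in broker.responsibilities if r not in agent_capabilities]
print(uncovered)  # empty list: the agent covers every responsibility
```

In the V-model framing, such checks sit at the unit-test level; integration tests would then exercise the role's protocols between agents.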
For years, software projects have mostly exceeded budget, been delivered late, and failed to meet customer satisfaction. In the past, many traditional development models, such as waterfall, spiral, iterative, and prototyping methods, were used to build software systems. In recent years, agile models have been widely used in developing software products. The major reasons are simplicity, the ability to incorporate requirement changes at any time, a lightweight approach, and early delivery of a working product in short durations. Whatever development model is used, it remains a challenge for software engineers to accurately estimate the size, effort, and time required to develop a software system. This survey focuses on existing estimation models used in traditional as well as agile software development.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the performance of the bubble sort algorithm was evaluated on various computer specifications. Experimental results show that parallel programming can improve time performance by 61%-65% compared to serial programming.
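The variant of bubble sort usually chosen for parallelization is odd-even transposition sort, because every comparison within one phase touches disjoint pairs and can run concurrently; a serial sketch (not the study's implementation):

```python
# Odd-even transposition sort: the bubble-sort variant commonly used
# for parallelization. In each phase, all compared pairs are disjoint,
# so the swaps within a phase are independent of one another.
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):          # n phases guarantee a sorted result
        start = phase % 2           # alternate even- and odd-indexed pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

A parallel implementation would distribute the inner loop's independent comparisons across threads or processes, which is where the reported 61%-65% savings would come from.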
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSijseajournal
With the massive growth of organizations' files, the need for an archiving system becomes a must. A lot of time is consumed in collecting requirements from the organization to build an archiving system, and sometimes the resulting system does not meet the organization's needs. This paper proposes a domain-based requirements engineering system that efficiently and effectively develops different archiving systems, based on a newly suggested technique that merges the two most widely used agile methodologies: extreme programming (XP) and Scrum. The technique is tested on a real case study. The results show that the time and effort consumed during analysis and design of the archiving systems decreased significantly. The proposed methodology also reduces the system errors that may occur in the early stages of development.
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...butest
This document is a dissertation submitted by Christopher N. Bull for the degree of Bachelor of Computer Science with Software Engineering. The dissertation presents Jistory, a tool that implements a history-sensitive strategy for automatically detecting the design flaw of god classes in software systems. The document includes chapters on background research, a feasibility study of detection strategies, the design and implementation of Jistory, testing and evaluation of Jistory, and conclusions. The aim of the project is to evaluate whether history-sensitive detection strategies can improve the detection of design flaws compared to conventional strategies and whether such strategies can be automated to provide accurate results.
This document discusses the paradigm shift in software engineering from traditional methods to more agile methodologies. It introduces SEMAT (Software Engineering Method and Theory) which aims to support both the craftsmanship of software development through defined methods, as well as building a theoretical foundation. SEMAT defines a "kernel" with seven dimensions (alphas) to measure project progress, categorize necessary activities, and identify required competencies. The kernel provides a framework for practices like agile methods to advance projects in a predictable way. The goal is to develop a true discipline of software engineering through standardization of successful practices while maintaining flexibility.
Unified V- Model Approach of Re-Engineering to reinforce Web Application Deve...IOSR Journals
The document discusses approaches for reengineering web applications. It proposes using a unified V-model approach to reinforce web application development through reengineering. Specifically, it discusses:
1) Using reverse engineering to analyze existing web applications and recover designs, followed by forward engineering to restructure the applications based on new requirements.
2) Applying the V-model at each phase of the web development process during reengineering to incorporate methodology.
3) The reengineering process involves reverse engineering, transformations to adapt to new technologies/requirements, and forward engineering to implement the new design.
Similar to A Novel Agent Oriented Methodology – Styx Methodology (20)
A Novel Agent Oriented Methodology – Styx Methodology
Nwagu, Chikezie Kenneth¹; Omankwu, Obinnaya Chinecherem²; and Inyiama, Hycient³
¹ Computer Science Department, Nnamdi Azikiwe University, Awka, Anambra State, Nigeria; Nwaguchikeziekenneth@hotmail.com
² Computer Science Department, Michael Okpara University of Agriculture, Umudike, Umuahia, Abia State, Nigeria; saintbeloved@yahoo.com
³ Electronics & Computer Engineering Department, Nnamdi Azikiwe University, Awka, Anambra State, Nigeria
ABSTRACT
Agent-oriented software engineering is a promising new approach to software engineering that uses the notion of an agent as the primary entity of abstraction. The development of methodologies for agent-oriented software engineering is an area that is currently receiving much attention: several agent-oriented methodologies have been proposed recently, and survey papers are starting to appear. However, the authors feel that there is still much work necessary in this area; current methodologies can be improved upon. This paper presents a new methodology, the Styx Agent Methodology, which guides the development of collaborative agent systems from the analysis phase through to system implementation and maintenance. A distinguishing feature of Styx is that it covers a wider range of software development life-cycle activities than do other recently proposed agent-oriented methodologies. The key areas covered by this methodology are the specification of communication concepts, inter-agent communication and each agent's behaviour activation—but it does not address the development of application-specific parts of a system. It will be supported by a software tool which is currently in development.
Keywords
Agent-based software engineering, methodologies for agent-oriented software development.
1. INTRODUCTION
Agent-oriented software engineering (AOSE) is a promising new approach to software engineering that uses the notion of an agent as the primary entity of abstraction. The agent-oriented approach is rapidly emerging as a powerful paradigm for designing and developing complex software systems. AOSE researchers hope that the use of the agent abstraction will provide a significant improvement to current software engineering practice, similar to the improvements gained from structured programming, the object-oriented approach and design patterns.
The development of methodologies for AOSE is an area that is currently receiving much attention: several agent-oriented methodologies have been proposed recently (Elammari and Lalonde) and survey papers are starting to appear (Iglesias et al.). However, there is still much scope for work in this area, and the authors believe that the agent-oriented methodologies proposed so far can be improved.
The rest of this paper is structured as follows: the new ideas that Styx introduces are discussed in section 2 and the scope of Styx is outlined in section 3. The Styx Agent Methodology itself is presented in section 4 and it is compared to other methodologies in section 5. Future work is discussed in section 6 and conclusions are drawn in section 7.
2 Towards a Novel Methodology
Software development has been identified as a difficult task: software has a high inherent complexity, and its abstract, intangible nature adds further difficulties. Over the past two decades research in software engineering has improved the software development process significantly; nevertheless, many software projects are still late or over-budget. A key idea that has emerged from software engineering research is the use of software development methodologies, which are sets of procedures and methods to guide the software development process. The goal of developing and using such methodologies is to change software development from an ad-hoc practice to a well-structured engineering process that produces high-quality software within the constraints of limited resources and adhering to a predicted schedule.
Recent commercial systems have demonstrated that AOSE is potentially a powerful new software engineering paradigm. However, these systems have been developed without the support of agent-oriented methodologies; current methodologies are yet to be widely adopted and may not have been sufficiently mature to be useful in the development of these applications. For agent-oriented software engineering to become a widely accepted practice, as many agent researchers predict it will, it is important that mature tools and methodologies are developed.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 15, No. 8, August 2017
https://sites.google.com/site/ijcsis/
ISSN 1947-5500
Several agent-oriented methodologies have been described recently: High Level and Intermediate Models, Gaia, the ZEUS Methodology and Multiagent Systems Engineering. All of these offer approaches to the analysis and design of agent-oriented software systems. Ideas introduced in these methodologies have been drawn upon in the development of Styx; however, there are several new ideas that distinguish Styx from those previously presented in the literature.
Styx covers a wider range of software development life-cycle
activities than other methodologies, providing not only analysis
and design models but also skeleton source-code for the
implementation phase and support for the maintenance phase. It is
designed so that a software tool can automate the transformations
between analysis-level and design-level models and automatically
generate skeleton source-code from the design-level models. This
software tool will also informally verify the development process.
Domain concepts that will be used in communication between
agents are modeled at the analysis level, allowing a more
complete model of inter-agent communication at the design level.
Styx also utilizes the interaction protocols specified by the
Foundation for Intelligent Physical Agents (FIPA) in order to
support inter-agent communication.
2.1 Covering the Software Life Cycle
It is customary in the software engineering literature to prescribe a number of phases that constitute "the software development life cycle"; an example is given in table 1.
The software development life cycle covers the entire life
of a software system, starting from gathering the initial
requirements for the system, building models of the
system, implementing and finally maintaining the system.
Current work on agent-oriented methodologies
typically does not cover all phases of the software
development life cycle. The first phase, gathering
requirements, does not significantly change for agent-
oriented software projects. Techniques for requirements
gathering already exist, for example formalized
specifications, use cases or user stories. These are
generally not developed further for the agent-oriented
approach. Note however that the requirements are
interpreted using agent-oriented concepts in the analysis
phase.
The analysis and design of agent-oriented systems is significantly different from the analysis and design of other types of software systems.

Requirements
⇓
Analysis
⇓
Design
⇓
Implementation
⇓
Maintenance

Table 1: A typical software development life cycle

Although tools are often borrowed from object-oriented approaches, there are many differences between agents and objects that must be considered. This phase has received the most
attention from agent-oriented methodology researchers and
it forms a substantial part of the Styx methodology.
Implementation The implementation phase has received
less attention; it is not supported in the methodologies
reviewed in this paper. Since these methodologies are
intended to be useful over a wide range of agent types and
there are so many different types of agents, it is difficult for
a methodology to be both widely applicable and also
support the implementation phase.
To address this issue Styx introduces the idea of an abstract agent specification language. This is to be a text-based language that is sufficiently generic that it can be mapped to a large proportion of agent-oriented software development frameworks and toolkits. In the ideal case these mappings would be performed automatically; however, in the case that an unsupported framework or toolkit is used, a mapping could be achieved by hand or a new automatic mapping be developed.
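Since the abstract agent specification language has not yet been published, the following Python sketch only illustrates the kind of information such a specification might carry; every class, field and function name below is a hypothetical assumption, not part of the actual language.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch: an in-memory model of an abstract agent specification.
@dataclass
class ActionSpec:
    name: str
    preconditions: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)

@dataclass
class RoleSpec:
    name: str
    # responsibility name -> ordered list of actions that realise it
    responsibilities: Dict[str, List[ActionSpec]] = field(default_factory=dict)

@dataclass
class AgentSpec:
    name: str
    roles: List[RoleSpec] = field(default_factory=list)

def to_framework(spec: AgentSpec) -> str:
    """Stand-in for a mapping to one concrete toolkit: here, a plain summary."""
    lines = [f"agent {spec.name}"]
    for role in spec.roles:
        lines.append(f"  role {role.name}")
        for resp, actions in role.responsibilities.items():
            names = ", ".join(a.name for a in actions)
            lines.append(f"    {resp}: {names}")
    return "\n".join(lines)

buyer = RoleSpec("Buyer", {"Post buy-order": [ActionSpec("sendMessage")]})
summary = to_framework(AgentSpec("SupermarketAgent", [buyer]))
print(summary)
```

A real mapping would target a toolkit's own agent and behaviour classes rather than a text summary; the point is only that one neutral specification can feed several per-toolkit back ends.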
The implementation phase of the software development life-cycle involves transforming the set of design-level models into an executable implementation defined in some programming language. Supposing the ideal case, providing automated support for this phase would involve some form of automatic programming. However, this is not generally feasible, because including enough information in the design-level models for a program to be automatically generated would make the design phase far too complex. To ensure that it remains focused on the task at hand, Styx does not attempt to provide support for developing application-specific parts of a system, but rather focuses solely on the agent-oriented aspects. The authors believe that the implementation phase can be best supported by providing a skeleton implementation of the agent-based aspects of a software system, leaving programmers to focus on the application-specific aspects of their system. Note that the generation of this skeleton implementation would be a two-step process: first the design models would be mapped into the abstract agent specification language, and then the abstract specification would be mapped to a particular implementation language, toolkit or platform.
Automatically generating such a skeleton
implementation directly from the design-level models
would require that they contain a sufficient amount of
information about how the individual agents are
structured and how they will behave. The Internal Agent
Model and Conversation Model of the High Level and
Intermediate Models methodology and the Services
Model of the Gaia methodology all provide a table that
loosely resembles a finite state machine. The authors
believe that it would be feasible to automatically
transform models such as these to some skeleton
implementation.
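As a sketch of how such a table-to-skeleton transformation might work (the row layout and stub format here are assumptions, not drawn from any of the cited methodologies), a finite-state-machine-like table can be walked row by row to emit one method stub per action:

```python
# Assumed row shape: (responsibility, pre-condition, post-condition, action name).
rows = [
    ("Post buy-order", "user submits order", "message sent", "sendBuyOrder"),
    ("Receive fruit", "message received", "order closed", "recordDelivery"),
]

def generate_stubs(rows):
    """Emit one empty method stub per table row, annotated with its conditions."""
    out = []
    for responsibility, pre, post, action in rows:
        out.append(f"def {action}(self, content):")
        out.append(f'    """{responsibility} (pre: {pre}; post: {post})."""')
        out.append("    raise NotImplementedError  # to be filled in by the programmer")
    return "\n".join(out)

print(generate_stubs(rows))
```

The pre- and post-conditions survive into the generated code only as documentation; the conditions themselves remain informal, as the methodology intends.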
Maintenance The longest phase of the software
development life cycle is usually the maintenance phase. A
well-documented development process is important for
maintenance. However, aside from providing such documentation, there has been no support for maintenance in existing methodologies.
Documents outlining the analysis and design phases of system development form an important part of the support offered by Styx for the maintenance phase, as is true of other methodologies. However, the skeleton source-code generation proposed above provides an additional area in which the maintenance phase needs to be supported. During a system's lifetime changes may be made to the analysis and design models, as it is likely that requirements will change over time and thus require modifications to these models. Styx provides a source-code skeleton to support the implementation phase, which is generated from these analysis and design models. Since these models will change over the lifetime of the system, it is important that the methodology provides some way of updating the implementation skeleton in response to such changes. Simply regenerating this skeleton would not provide sufficient support, as the skeleton will be fleshed out with application-specific code during implementation. Thus an important feature of the proposed methodology is the ability not only to generate these skeletons but also to update them with the application-specific code in place.
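One plausible way to regenerate a skeleton without losing hand-written code (the marker convention below is my assumption; the paper does not specify a mechanism) is to harvest user code from marked regions of the old source before regenerating:

```python
import re

# Assumed convention: programmer code lives between BEGIN/END USER CODE markers.
USER_BLOCK = re.compile(r"# BEGIN USER CODE: (\w+)\n(.*?)# END USER CODE", re.S)

def harvest(old_source):
    """Collect user-written bodies from an old skeleton, keyed by action name."""
    return {name: body for name, body in USER_BLOCK.findall(old_source)}

def regenerate(actions, saved):
    """Emit a fresh skeleton, re-inserting any saved user code."""
    parts = []
    for name in actions:
        body = saved.get(name, "    pass\n")
        parts.append(f"def {name}(self, content):\n"
                     f"# BEGIN USER CODE: {name}\n{body}# END USER CODE\n")
    return "\n".join(parts)

# A design change adds the recordDelivery action; the custom sendBuyOrder body survives.
old = regenerate(["sendBuyOrder"], {"sendBuyOrder": "    print('custom')\n"})
new = regenerate(["sendBuyOrder", "recordDelivery"], harvest(old))
```

Real tools tend to use either such marker regions or a generation-gap pattern (generated base class, hand-written subclass); either would satisfy the requirement stated above.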
2.2 Designing for Automation
Several parts of the High Level and Intermediate Models methodology appear to be partially automatable; however, the authors believe that a methodology designed specifically to be supported by computer software could be more automatable, especially by providing automatic techniques for the transformation between the design and implementation phases. A key feature of Styx is the software tool that supports this methodology.
Note that automatic programming is not the goal of this
methodology; application specific aspects will be left to the
designers of each system. This tool will provide support for
drawing the various graphical models in the methodology,
generate and maintain skeleton design models and skeleton
implementation source code, and perform informal validation
of the development process.
2.3 Roles as Agent Classes
A strength of the object-oriented paradigm is the ability to
specify reusable classes of objects, rather than specifying
individual objects, which promotes reusability and
modularity. The Gaia methodology, as well as other work,
uses roles in a similar manner for agent-based systems.
Developing with roles means that one agent may be able to
play several roles, or a single role may be played by more
than one agent. Styx is strongly oriented towards developing
reusable roles that are later used for the construction of
agents.
2.4 Specifying Agent Message Content
It is non-trivial to develop a mapping between the content of agent communication language messages and the objects that the application-specific parts of the system will work with. Most of the methodologies discussed here do not address the specification of the content of agent messages; however, this has been included in the ZEUS methodology.
Styx incorporates a model at the analysis level which
specifies the possible content of agent messages. This model
uses the class diagram syntax of the Unified Modelling
Language (UML). UML class diagrams have been chosen
because they are a well-known modelling representation, and
because of the increasing usage of the object-oriented
paradigm in the professional software development
community, both for application-specific coding and also for
agent development toolkits and frameworks. Modelling
message content as object classes makes the conceptual gap
between application specific code and the content of agent
communication language messages smaller. This idea draws
upon recently published work.
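As a hedged illustration of this idea, a domain concept can be modelled as an object class whose instances are encoded into string message content; JSON here is only my stand-in for whatever content encoding a particular agent communication language uses, and the class fields are drawn from the fruit-market scenario used later in this paper.

```python
from dataclasses import dataclass, asdict
import json

# A domain concept modelled as an object class (fields from the fruit-market scenario).
@dataclass
class BuyOrder:
    fruit: str
    grade: str        # quality rating: A, B or C
    unit_price: float
    quantity: int

def to_acl_content(concept) -> str:
    """Encode a domain-concept instance as string-based message content."""
    return json.dumps(asdict(concept))

def from_acl_content(raw: str) -> BuyOrder:
    """Decode message content back into a domain-concept instance."""
    return BuyOrder(**json.loads(raw))

content = to_acl_content(BuyOrder("apples", "A", 0.40, 500))
round_trip = from_acl_content(content)
```

Because application code only ever touches `BuyOrder` objects, the string encoding can be swapped per toolkit without changing application-specific code, which is precisely the reduced conceptual gap argued for above.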
2.5 Reusing FIPA Interaction Protocols
Current agent-oriented methodologies either do not specify the conversations that will occur between agents in detail (for example Gaia), or otherwise expect that the system designers will invent new conversation protocols for each agent system (for example High Level and Intermediate Models and MaSE). Specifying agent conversations is an important part of a design methodology that seeks to support the implementation phase; however, it seems that reinventing conversation protocols for each agent system is often unnecessary. Styx will incorporate the specification of conversation protocols at the design level, but will draw these from a well-known pool: the interaction protocols introduced in the forthcoming FIPA 2000 specifications.
These cover a wide range of possible inter-agent interactions, from simple Request and Query protocols to more complex Dutch Auction and Iterated Contract Net protocols. Although the complete documentation for these is not yet available, based on the strength of previous versions of the FIPA interaction protocol specifications and on the wide range of proposed protocols, the authors believe that these protocols will form a sound basis upon which to build this part of the methodology. Note that reusing these protocols does not restrict Styx to agent systems that are based on the FIPA specifications because, with some slight modification, the protocols could be used with the different standards for message passing that exist in other types of agent system.
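For illustration, a FIPA-style Request protocol can be captured as a small state machine over message performatives; the transition set below is deliberately abridged (the actual FIPA specifications define the full protocol, including not-understood and deadline handling).

```python
# Simplified FIPA-style Request protocol: (state, performative) -> next state.
TRANSITIONS = {
    ("start", "request"): "requested",
    ("requested", "agree"): "agreed",
    ("requested", "refuse"): "done",
    ("agreed", "inform"): "done",
    ("agreed", "failure"): "done",
}

def step(state, performative):
    """Advance the conversation, rejecting messages the protocol disallows."""
    try:
        return TRANSITIONS[(state, performative)]
    except KeyError:
        raise ValueError(f"{performative!r} not allowed in state {state!r}")

state = "start"
for msg in ["request", "agree", "inform"]:
    state = step(state, msg)
```

A design tool can check conversations against such tables, which is one reason reusable protocols are attractive: the legality of each message is decidable without application knowledge.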
3 Scope of the Styx Agent Methodology
Styx is intended to be used in the development of collaborative agent systems. These are systems which tend to use static, coarse-grained agents and are typically used to solve problems more efficiently than a single centralised system. Problems may exceed the capabilities of a single centralised system because of resource limitations, the need to interoperate with multiple legacy systems, or because a problem is inherently distributed, for example distributed sensor networks, distributed data sources or distributed expertise.
Styx does not consider systems that contain a large number of roles—it is envisioned that applying Styx to a system that has significantly more than about twenty to thirty different roles would result in the analysis-level models becoming too complex to give a clear, high-level overview of the system. However, there is little restriction on how many agents can be instantiated from the roles defined in Styx; for example, a system may contain hundreds of telephone agents instantiated from a single role.

Figure 1: Overview of the Styx Agent Methodology (Requirements Specification; Analysis: Identify Roles and Concepts, Use Case Map, Domain Concepts Model; Design: Role Responsibility Models, Role Relationship Model, Deployment Model; Implementation: Agent Skeleton and Application-Specific Code; Final Implementation)
It is assumed that agent systems developed using Styx will be implemented using some agent development toolkit or framework that is based on an object-oriented language, and that application-specific code will also be developed in the same object-oriented language.
Such notions as planning, scheduling, mobility and learning, which are commonly associated with agent systems, are not explicitly handled by Styx. It is assumed that support for these would be provided either by the agent development toolkit or framework, or that they would be included by hand in the application-specific parts of the system.
4 Overview of the Styx Agent Methodology
A schematic overview of Styx is presented in figure 1. Styx is
briefly summarised in the next paragraph, and more complete
descriptions of each part of the methodology are given in
sections 4.1 to 4.6 with reference to a simple fruit-market
scenario.
The analysis phase starts by identifying agent roles and
domain concepts, followed by generating a high-level Use
Case Map, which gives an overview of the entire system, and a
Domain Concepts Model, which specifies what concepts will
be used in the communication between agents. In the design
phase Role Responsibility Models are generated for each
component of the Use Case Map,
where each responsibility of a component is specified in
more detail. The Role Relationship Model specifies how
roles are related to each other and the concepts about which
they will communicate. The Deployment Model maps the
roles identified in the Use Case Map to agents. The
implementation phase is supported by an Agent Skeleton and
together with application specific code this forms the final
implementation. Although it is not shown in the figure the
maintenance phase will be supported by both the models
detailed so far and also by changes in the analysis and design
models being reflected in the implementation skeletons.
Fruit-Market Scenario A simple multi-agent fruit-market scenario will be used to demonstrate the proposed methodology. This involves buyers and sellers of fruit in an electronic marketplace. A seller can notify the marketplace that some fruit is for sale with a sell-order, and buyers can notify the marketplace that they want to buy fruit with a buy-order. The marketplace attempts to match buy- and sell-orders; when a match is made it notifies the buyer concerned, who then sends payment details to the seller, who in turn sends delivery details for the order to the buyer. The actions of the buyer and seller agents are directed by human users: the intended users are farmers who will sell fruit, supermarkets that will buy fruit and warehouses that will both buy and sell fruit. When placing an order with the market, buyers and sellers name the type of fruit and specify a quality rating (A, B or C grade), the price per unit and the quantity. The marketplace assumes that buy orders can be filled by more than one seller, that sell orders can be split between buyers, and that all transactions are successful.
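The matching rule described above can be sketched as follows; this is a minimal sketch only, and the field names and the price-comparison rule (a sell fills a buy when the asking price does not exceed the bid) are my assumptions.

```python
def match(buy, sells):
    """Fill a buy order {fruit, grade, price, qty} from a list of sell orders.

    Buy orders may be filled by several sellers, and sell orders may be
    split between buyers, as the scenario assumes.
    """
    fills = []
    for sell in sells:
        if buy["qty"] == 0:
            break
        if (sell["fruit"], sell["grade"]) == (buy["fruit"], buy["grade"]) \
                and sell["price"] <= buy["price"] and sell["qty"] > 0:
            take = min(buy["qty"], sell["qty"])
            fills.append((sell, take))
            buy["qty"] -= take
            sell["qty"] -= take
    return fills

buy = {"fruit": "apples", "grade": "A", "price": 0.50, "qty": 300}
sells = [
    {"fruit": "apples", "grade": "A", "price": 0.40, "qty": 200},
    {"fruit": "apples", "grade": "A", "price": 0.45, "qty": 200},
]
fills = match(buy, sells)  # first sell taken in full, 100 units from the second
```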
4.1 Use Case Map
The first step of the analysis phase is to create a high-level Use Case Map (UCM) for the system. Use Case Maps model possible processes in a system as paths which traverse various components of the system. Components are drawn as boxes, while paths are drawn as lines crossing various components. The start of a path is indicated by a solid circle, while the end point is indicated by a bar. When a path crosses a component, that component is assigned one or more responsibilities associated with the path. The Use Case Map is labelled with component names, responsibility names and other explanatory notes. These maps provide a highly condensed notation suitable for modelling the high-level behaviour of a system.
To develop the Use Case Map, the roles that agents may play are identified in the requirements specification and placed in the Use Case Map as components. Interactions between roles are then identified in the requirements specification and placed in the Use Case Map as paths. Finally, when a path crosses a component, one or more responsibilities are assigned to that component. Thinking of components as agent roles and paths as interactions between agent roles means that the Use Case Map becomes more of an agent-oriented model, as opposed to the standard object-oriented interpretation outlined in Buhr's original work.

Figure 2: A UCM for the fruit-market scenario (components Buyer, Market and Seller; responsibilities B1–B3, S1–S2 and M1–M3).

Label  Responsibility
B1     Post buy-order
B2     Send payment
B3     Receive fruit
S1     Post sell-order
S2     Receive payment and send fruit
M1     Receive buy-order
M2     Receive sell-order
M3     Match buy- and sell-orders and notify buyer

Table 2: Responsibilities for the fruit-market UCM

A sample UCM is given for the fruit-market example in figure 2, and the responsibilities in this figure are expanded in table 2. This UCM was developed by first creating system components for each identifiable role in the requirements specification; then, for each interaction between entities, a path was drawn on the diagram and responsibilities were assigned.
4.2 Role Responsibility Model
Role Responsibility Models are created for each component of the
analysis level UCM, and take the form of a table with four
columns: responsibility, pre- and post-condition, and action. For
each responsibility in the analysis level UCM, an entry is made in
the appropriate role's Role Responsibility Model. The pre- and post-conditions are informal statements that can be either true or false. These are listed to guide the implementation phase; the implementation-level action will be performed when the pre-conditions are true, and a correct implementation will achieve the post-conditions unless it meets some error condition. One or more actions, which are simply named at this stage, are listed for each responsibility. These give some indication of the actions that are to be performed in order to achieve the responsibility. Several actions can be specified, allowing a responsibility to be broken down into a number of steps at this level.
There are several keywords used in the Role Responsibility Model: the pre-condition message received, the post-condition message sent and the action sendMessage. These keywords will be used during the generation of skeleton source code to link the actions specified in the Role Responsibility Model with the agent interactions specified in the Role Relationship Model.
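A minimal sketch of how the message received keyword could drive behaviour activation in generated code follows; the registry and decorator below are my assumptions about what a skeleton might contain, not the paper's actual mechanism.

```python
# Hypothetical skeleton fragment: actions whose pre-condition is
# "message received" register themselves against a domain-concept type,
# and an incoming message activates the matching action.
handlers = {}

def on_message(concept_type):
    """Register an action method for the 'message received' pre-condition."""
    def register(fn):
        handlers[concept_type] = fn
        return fn
    return register

@on_message("Buy-order")
def receive_buy_order(content):
    # Application-specific body, filled in by the programmer.
    return f"queued {content}"

def dispatch(concept_type, content):
    """Called by the conversation machinery when a message arrives."""
    return handlers[concept_type](content)

result = dispatch("Buy-order", "500 x grade-A apples")
```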
4.3 Role Relationship Model
The Role Relationship Model further elaborates the
relationships between roles that are indicated by the analysis
level Use Case Map; relationships exist where there is a path
linking two components of the Use Case Map. Each
relationship is assigned a type drawn from the interaction
protocols specified in the forthcoming FIPA 2000
specifications [13] and an object from the Domain Concepts
Model. The interpretation is that a conversation of the specified
type will occur between the agents, where information is
interchanged using the specified object.
A sample Role Relationship Model is shown in figure 4 for the fruit-market scenario. This shows several 'information' dependencies, where one role makes some information available to another role. For example, a Buyer would inform the Marketplace when it wishes to buy fruit, using a Buy-order object. Note that the 'information' dependency is used here as a placeholder for the appropriate FIPA interaction protocols.
4.4 Deployment Model
The Deployment Model is the simplest model in this methodology. It specifies a many-to-many mapping between agents and roles which assigns roles to agents. This model specifies what agents will exist in the system and what roles they will play.
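For the fruit-market scenario, such a many-to-many mapping might look like the following sketch; the agent names are illustrative assumptions drawn from the intended users described earlier.

```python
# Deployment Model sketch: each agent is assigned one or more roles,
# and a role may be played by several agents (many-to-many).
deployment = {
    "FarmerAgent":      ["Seller"],
    "SupermarketAgent": ["Buyer"],
    "WarehouseAgent":   ["Buyer", "Seller"],  # warehouses both buy and sell
    "MarketAgent":      ["Market"],
}

def agents_playing(role):
    """All agents in the deployment that play the given role."""
    return sorted(a for a, roles in deployment.items() if role in roles)

buyers = agents_playing("Buyer")   # SupermarketAgent and WarehouseAgent
```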
4.5 Implementation Phase
The design-level models and the Domain Concepts Model will form the basis for generating skeleton source code for the implementation phase. The generation of this skeleton implementation will be a two-step process: first the design models will be mapped into the abstract agent specification language, and then the abstract specification will be mapped to a particular implementation language, toolkit or platform.
The Domain Concepts Model will map directly into a set of object class definitions that will be available to all agents. Programmers will be able to use these object classes for communication between agents without concern for how the object instances are mapped into a string-based representation for the particular agent communication language encoding.
The Role Responsibility and Role Relationship Models will be used to generate skeleton code that takes care of inter-agent conversations and behaviour activation. The skeleton code will provide method stubs for each action specified in the Role Responsibility Model, and these stubs will be filled in by programmers. The Role Relationship Model will be used to generate state-machine-based entities that take care of conducting each conversation appropriately and that activate the appropriate action methods when a certain message is received. When an action method is activated, it will be supplied with the instance of an object from the Domain Concepts Model that arrived with the activating message. Programmers will be provided with methods to call when it is necessary to send a message from within application-specific code; these methods will have a formal parameter of the type specified in the Role Relationship Model (drawn from the classes in the Domain Concepts Model).
So far the source-code skeletons for agent roles have been outlined; however, these must be mapped to individual agents for the deployment of the system. This is specified at the design level in the Deployment Model, and at the implementation level this information is used to group roles into agents. How this occurs will depend largely upon the target agent development toolkit or framework.
5 Styx and other methodologies
There are several parallels between Styx and other recently proposed agent-oriented methodologies; however, most of the ideas that are reused are given a significantly different interpretation, and several new ideas are introduced.
Styx draws on three models that exist in the High Level and Intermediate Models methodology; however, in contrast to that methodology, Styx interprets the components of these models as roles rather than as individual agents. The approach of using a Use Case Map at the analysis level was seen as a particularly good way of giving a high-level overview of the system in a single diagram, one which details both the structural and the behavioural aspects.
The Role Responsibility Model is similar to the Internal Agent Model of the High Level and Intermediate Models methodology and to Gaia's Services Model. Representing the responsibilities of a role as a table is a useful way of specifying the actions that carry out the responsibility at a design level, and since these tables resemble finite state machines they can be used to generate skeleton source code.
The Role Relationship Model is inspired by the Dependency Diagram of the High Level and Intermediate Models methodology; however, new ideas have been introduced: relationships are defined using FIPA interaction protocols in preference to ad-hoc dependency types, and information about discourse objects is included. The Role Relationship Model is somewhat more complex than the original Dependency Diagram; however, reusing FIPA interaction protocols and using object classes specified at the analysis level allows a more complete model of agents' social behaviour than the combination of the Dependency Diagram and the Conversation Model of the High Level and Intermediate Models methodology.
The Deployment Model is drawn directly from the Agent
Model of the Gaia methodology. A model of this type is
necessary in a role-based methodology because the final
products of the methodology are agents, not roles, and Gaia's
Agent Model appears to be a particularly concise way of
achieving this.
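The essence of such a model is a mapping from agent types to the roles they play, with an instance multiplicity, in the style of Gaia's Agent Model. The agent and role names below are invented for illustration; the notation in Styx itself may differ.

```python
# Hypothetical sketch of a Deployment Model in the style of Gaia's
# Agent Model: each agent type lists the roles it realises and how many
# instances are deployed. All names are invented for illustration.

deployment_model = {
    # agent type -> (roles played, instance multiplicity)
    "TraderAgent": (["Buyer", "Seller"], "1..n"),
    "MarketAgent": (["Auctioneer"],      "1"),
}

def roles_covered(model):
    """All roles realised by some agent type in the deployment."""
    return {role for roles, _ in model.values() for role in roles}
```

A simple completeness check, comparing `roles_covered` against the roles of the analysis-level models, would reveal any role left unassigned to an agent type.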
6 Future Work
The Styx agent methodology is still in a specification
phase; many of the difficult problems have been left for
future work. A key part of this work will be using Styx to
develop examples of a variety of typical agent-based
systems, ranging from process control systems, to
distributed information systems, to more complex
marketplace applications than the simplistic fruit-market
scenario. This will ensure that Styx is applicable to a wide
range of problem domains, rather than being focused on a
certain class of applications.
The specification of the text-based abstract agent
specification language has not yet been completed. It will be
`bootstrapped' by identifying concepts common to current
agent software development toolkits and frameworks. This
work is perhaps one of the more important areas of Styx, as it
will ensure that Styx is applicable not only to current
frameworks and toolkits but also to future ones. JADE [1],
FIPA-OS [19] and JACK [6] are the current candidate
implementation toolkits for which mappings from the abstract
specification language will be provided. The exact detail of
how the abstract agent specification will be generated from
the design-level models will depend on the specification
language that is developed, and is another item for future
work.
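One way to picture such a mapping is a table from toolkit name to the base class a generated agent would extend. The sketch below is purely illustrative: `jade.core.Agent` is JADE's actual agent base class, but the FIPA-OS and JACK names, the spec format, and the generator are invented placeholders, not the real Styx mapping.

```python
# Hypothetical sketch of mapping an abstract agent specification onto
# candidate toolkits. The spec format and generator are invented;
# jade.core.Agent is JADE's real base class, the other names are
# placeholders only.

abstract_spec = {
    "agent": "SellerAgent",
    "behaviours": ["quote_price", "confirm_sale"],
}

TOOLKIT_BASE = {
    "jade":    "jade.core.Agent",  # real JADE base class
    "fipa-os": "FIPAOSAgent",      # illustrative placeholder
    "jack":    "Agent",            # illustrative placeholder
}

def skeleton_for(spec, toolkit):
    """Emit a Java-like class header extending the toolkit's base class."""
    base = TOOLKIT_BASE[toolkit]
    return f"public class {spec['agent']} extends {base} {{ ... }}"
```

Keeping the specification abstract and pushing toolkit differences into such a mapping table is what would let the same design-level models target JADE, FIPA-OS or JACK without change.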
It is not yet determined whether specifying a single object
class per relationship in the Role Relationship Model will
be appropriate in all cases; for example, a `query'
conversation may require one class of object to specify
the query and another class for its results. This problem
could be overcome by specifying a single object with fields
for both the query and the answer, but that would be more
of a workaround than an elegant solution. This issue will be
resolved once the FIPA 2000 specifications for interaction
protocols have been released, and it may well prove
necessary to allow multiple object classes per relationship
in the Role Relationship Model for certain interaction
types.
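The multiple-class case discussed above can be sketched by letting a relationship name one discourse class for the query and another for the reply. All role and class names here are invented; `fipa-query` is a real FIPA protocol identifier, but this record layout is only an assumption about how the final model might look.

```python
# Hypothetical sketch of a relationship carrying two discourse-object
# classes: one for the query, one for its results. Names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryRelationship:
    initiator: str
    responder: str
    protocol: str = "fipa-query"     # FIPA query interaction protocol
    query_class: str = "PriceQuery"  # object sent with the query
    result_class: str = "PriceQuote" # object returned in the reply

rel = QueryRelationship(initiator="Buyer", responder="Seller")
```

Separating the two classes keeps each discourse object cohesive, avoiding the single combined query-and-answer object the text describes as a workaround.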
7 Conclusions
Styx represents a new approach to agent-oriented software
engineering that is more comprehensive in scope than many
other methodologies. Styx will be supported by a software
tool that automates much of the difficult work that currently
goes into building agent-based systems, ensuring that
software development projects using an agent-based
approach can focus most of their effort on developing
application-specific code rather than grappling with agent
research issues. However, a significant amount of work, both
programming and research, remains to be completed before
Styx is ready for widespread use.
References
F. Bellifemine, A. Poggi, and G. Rimassa. JADE
programmer's guide. http://sharon.cselt.it/projects/jade, June
5, 2000.
G. Booch. Object Oriented Analysis and Design with
Applications. Addison Wesley, 1994.
F. P. Brooks. The Mythical Man-Month: Essays on Software
Engineering. Addison-Wesley, anniversary edition, 1995.
T. Buchheim, G. Hetzel, G. Kindermann, and Levi. A multi-
agent approach for optical inspection technology. In R.
Mizoguchi and J. Slaney, editors, PRICAI 2000, Topics in
Artificial Intelligence, volume 1886 of Lecture Notes in
Artificial Intelligence. Springer, 2000.
R. Buhr and R. Casselman. Use Case Maps for Object-
Oriented Systems. Prentice Hall, 1996.
P. Busetta, R. Rönnquist, A. Hodgson, and Lucas. JACK
intelligent agents: components for intelligent agents in Java. In
P. Davidsson, editor, AgentLink News 2. www.agentlink.org,
2000.
G. Bush, M. Purvis, and S. Cranefield. Experiences in the
development of an agent architecture.
S. Cranefield and M. Purvis. Extending agent messaging to
enable OO information exchange.
S. A. DeLoach and M. Wood. Developing Multiagent Systems
with agentTool. In N. Jennings and Y. Lesperance, editors,
Intelligent Agents VI, volume 1757 of Lecture Notes in
Artificial Intelligence. Springer, 2000.
M. Elammari and W. Lalonde. An agent-oriented
methodology: High-level and intermediate models.
Foundation for Intelligent Physical Agents (FIPA) web site.
Located at http://www.fipa.org/.
E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design
patterns: elements of reusable object-oriented software.
Addison-Wesley, 1995.
C. A. Iglesias, M. Garijo, and J. C. Gonzalez. A survey of
agent-oriented methodologies. In P. Muller, M. P. Singh, and
A. S. Rao, editors, Intelligent Agents V, volume 1555 of
Lecture Notes in Artificial Intelligence, pages 317–330.
Springer, 1998.
N. R. Jennings and M. Wooldridge. Agent-oriented software
engineering. In J. Bradshaw, editor, Handbook of Agent
Technology. AAAI/MIT Press, 2000.
E. A. Kendall. Agent roles and role models: New
abstractions for multiagent system analysis and design. In
Proceedings of the International Workshop on Intelligent
Agents in Information and Process Management, Germany,
September 1998.
H. S. Nwana. Software agents: An overview. Knowledge
Engineering Review, 11(3):1–40, September 1996.
S. Poslad, P. Buckle, and R. Hadingham. The FIPA-OS agent
platform: Open source for open standards. In Proceedings of
the 5th International Conference and Exhibition on the
Practical Application of Intelligent Agents and Multi-Agents,
pages 355–368, 2000.
J. Rumbaugh, I. Jacobson, and G. Booch. The Unified
Modeling Language Reference Manual. Addison-Wesley,
1998.