Program analysis supports debugging, testing, and maintenance of software systems by providing information about the structure of, and relationships among, a program's modules. In general, program analysis is performed on either a control flow graph (CFG) or a dependence graph (DG). In the case of aspect-oriented programming (AOP), however, a CFG or a DG alone is not enough to model the properties of aspect-oriented (AO) programs. Although AOP provides a modular representation of crosscutting concerns, a suitable model is still required to gather information about the structure of AO programs and thereby minimize maintenance effort. In this paper, the Aspect-Oriented Dependence Flow Graph (AODFG) is proposed as an intermediate representation for the structure of aspect-oriented programs. AODFG is formed by merging the CFG and the DG, thereby capturing both the dependencies between join points, advice, aspects, and their associated constructs, and the flow of control from one statement to another. We discuss the performance of AODFG by analysing example AspectJ programs taken from the AspectJ Development Tools (AJDT).
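The paper's construction algorithm is not reproduced here; as a rough sketch of the data model only, a merged graph can label each edge with its kind so that one structure holds both the control-flow and the dependence view (node names and edge kinds below are hypothetical):

```python
# Sketch of an AODFG-like merged graph: control-flow and dependence edges
# coexist in one edge list, distinguished by a "kind" label.
class AODFG:
    def __init__(self):
        self.edges = []  # (source node, target node, kind)

    def add_edge(self, src, dst, kind):
        # Hypothetical edge kinds; a real model would follow the paper.
        assert kind in {"control-flow", "data-dep", "control-dep", "weaving"}
        self.edges.append((src, dst, kind))

    def neighbors(self, node, kind=None):
        # Successors of `node`, optionally filtered to one edge kind.
        return [d for s, d, k in self.edges
                if s == node and (kind is None or k == kind)]

g = AODFG()
g.add_edge("call foo()", "before-advice", "weaving")      # join point -> advice
g.add_edge("before-advice", "call foo()", "control-flow")  # control returns
```

Queries over one structure can then answer both "what does control reach" and "what depends on what" without switching representations.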
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation demands ever more performance from its algorithms, and new massively parallel architectures are well suited to them, offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. Hence, this work maintains coherence between model and code while harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in a GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, responsiveness, and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses the performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, along with the challenges faced during the life cycle of the project and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were achieved through statistical analysis. The paper concludes with an analysis of the results.
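The abstract does not say which statistical test was used; as a minimal hedged sketch, a comparison of mean response times between two technology variants can be done with Welch's t-test (the sample values below are invented for illustration):

```python
# Welch's t statistic for two independent samples of response times.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic (unequal variances assumed)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5

# Hypothetical response times (ms) before and after switching stacks:
before = [220, 240, 210, 250, 230]
after = [180, 190, 175, 200, 185]
t = welch_t(before, after)  # a large |t| suggests a real difference in means
```

In practice one would compare `t` against a t-distribution with the Welch-Satterthwaite degrees of freedom to obtain a p-value.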
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes rapidly, with new servers, services, connections, and ports added frequently, sometimes daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the growing number of vulnerabilities and exploits, coupled with the continual evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. In this paper, a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter called Unified NVA), is proposed. It is derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Can “Feature” be used to Model the Changing Access Control Policies? (IJORCS)
Access control policies (ACPs) regulate access to data and resources in information systems. These ACPs are framed from the functional requirements and the organizational security and privacy policies. Including ACPs in the early phases of software development has been found beneficial, leading to the secure development of information systems. Many approaches are available for including ACPs in the requirements and design phases; they rely on UML artifacts, aspects, and also features for this purpose. However, the earlier modeling approaches are limited in expressing evolving ACPs that arise from organizational policy changes and business process modifications. In this paper, we analyze whether a “Feature”, defined as an increment in program functionality, can be used as a modeling entity to represent evolving access control requirements. We discuss the two prominent approaches that use features in modeling ACPs and present a comparative analysis of the suitability of features in the context of changing ACPs. We conclude with our findings and provide directions for further research.
A Methodology to Manage Victim Components Using CBO Measure (ijseajournal)
Current software industry practice demands the highly productive development of software within time and budget. The traditional approach of developing software from scratch requires a considerable amount of effort. To overcome this drawback, a reuse-driven software development approach is adopted; however, there is a dire need for realizing effective software reuse. This paper presents several measures of reusability and a methodology for reconfiguring the victim components. The CBO (Coupling Between Objects) measure helps in identifying the component to be reconfigured. The proposed strategy is simulated using an HR-portal domain-specific component system.
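The paper's exact CBO computation is not reproduced here; as a rough illustration of the metric it relies on, CBO for a class counts the distinct other classes it is coupled to in either direction (the class names and `dependencies` mapping below are hypothetical):

```python
# Coupling Between Objects (CBO): for each class, the number of other
# classes it references or is referenced by.
def cbo(dependencies):
    """dependencies maps a class name to the set of class names it
    references (via method calls or attribute accesses)."""
    classes = set(dependencies)
    coupled = {c: set() for c in classes}
    for c, refs in dependencies.items():
        for other in (refs & classes) - {c}:
            coupled[c].add(other)   # outgoing coupling
            coupled[other].add(c)   # incoming coupling
    return {c: len(peers) for c, peers in coupled.items()}

# Hypothetical micro-example from an HR-portal-like system:
deps = {"Payroll": {"Employee", "Tax"}, "Employee": set(), "Tax": {"Employee"}}
scores = cbo(deps)  # a high score flags a candidate "victim" for reconfiguration
```

Components whose CBO exceeds a chosen threshold would then be the candidates for reconfiguration.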
ADAPTIVE CONFIGURATION META-MODEL OF A GUIDANCE PROCESS (ijcsit)
The current technology trend leads us to recognize the need for an adaptive guidance process across all software development processes. The new needs generated by the mobility context for software development require these guidance processes to be adapted. To improve the performance of the deployed software development, it is useful to manage the configuration of its evolving aspects. This paper deals with the configuration management of the guidance process, that is, its ability to be adapted to specific development contexts. The proposed adaptive configuration meta-model is worked out on the basis of a Y description for the adaptive guidance process. This description focuses on three dimensions defined by the material/software platform, the adaptation form, and the provided guidance service. Each dimension considers several factors to develop a coherent configuration strategy and automatically provide the appropriate guidance process for the current development context.
Reduced Software Complexity for E-Government Applications with ZEF Framework (TELKOMNIKA Journal)
Dynamic change is unpredictable and constantly growing, and it can happen anytime and anywhere. One thing that is always changing is government policy, and this change affects the software of information systems, causing replacement, modification, and enhancement. There is considerable commonality and variability among software features in the Indonesian Government. To manage this, we present an enhancement of Zuma's E-Government Framework (ZEF) to reduce software complexity. We enhance the ZEF Framework using the SPLE (Software Product Line Engineering) and GORE (Goal-Oriented Requirements Engineering) approaches in order to improve on traditional software development, reducing complexity when change happens continuously. The measurement of software complexity relates to the functionality of the system and can be described with function points, since function points also capture logical software complexity. The preliminary results of this study show reduced software complexity, in terms of information processing size, technical complexity adjustment factors, and function points, in e-government applications.
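The abstract's complexity measure follows the classic function point formula, FP = UFP × TCF with TCF = 0.65 + 0.01 × ΣFi; a minimal sketch, with illustrative average-complexity weights and invented component counts, looks like:

```python
# Classic (Albrecht-style) function point calculation, sketched with
# average-complexity weights only; real counts would be graded per component.
WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def function_points(counts, gsc_ratings):
    """counts: component type -> number of components (average complexity).
    gsc_ratings: the 14 general system characteristic ratings, each 0..5."""
    ufp = sum(WEIGHTS[kind] * n for kind, n in counts.items())
    tcf = 0.65 + 0.01 * sum(gsc_ratings)  # technical complexity adjustment
    return ufp * tcf

# Hypothetical e-government module:
counts = {"EI": 10, "EO": 5, "EQ": 5, "ILF": 3, "EIF": 2}
gsc = [3] * 14  # all 14 characteristics rated "average"
fp = function_points(counts, gsc)
```

Reducing information processing size or the adjustment factors directly lowers the resulting FP value, which is the sense in which the study reports reduced complexity.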
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA... (ijseajournal)
In the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some are half developed, some are mostly developed, some are minimally developed, and some are not developed yet. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget of the software, while rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is therefore one of the most challenging and important activities that help software project managers accept or reject changes during the software development phase. This paper extends our previous work on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows an improvement in estimation accuracy over current change effort estimation models.
A methodology to evaluate object oriented software systems using change requi... (ijseajournal)
It is a well-known fact that software maintenance plays a major role and finds importance in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change requirement traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues addressed relate to change impact algorithms and the inheritance of functionality.
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon of the “tyranny of the dominant decomposition”. Requirements-level aspects are responsible for producing scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, thereby reducing development and maintenance cost and establishing an approximately correct estimate of the effort and completion time of the project. In this paper, we present a validation framework for the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises a high-level and a low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: a viewpoint-based approach (AORE with ArCaDe) and a lexical-analysis approach based on Theme/Doc. The results obtained from the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality within the context of agile software development is extremely important and useful, not only to improve the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also to support communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or reject each iteration within the software life cycle. This article introduces a novel solution called the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a single multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the PQEM method was carried out with two iterations of a web and mobile application in the healthcare environment. A single measure makes it possible to observe the evolution of the quality level reached across the iterations of the product.
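The published PQEM formulas are not reproduced here; a deliberately simplified sketch of the aggregation idea, with invented characteristic names and check counts, might look like:

```python
# Per-characteristic quality coverage (passed checks / total checks),
# aggregated into one value per iteration by a plain mean. The real PQEM
# aggregation may weight characteristics differently.
def quality_coverage(checks):
    """checks: characteristic -> (passed, total) quality checks."""
    return {c: passed / total for c, (passed, total) in checks.items()}

def pqem_value(checks):
    """Aggregate coverage across characteristics into a single measure."""
    cov = quality_coverage(checks)
    return sum(cov.values()) / len(cov)

# Hypothetical iteration of a healthcare app:
iteration_1 = {"reliability": (8, 10), "usability": (6, 10), "security": (9, 10)}
value = pqem_value(iteration_1)  # one number to compare across iterations
```

Plotting this value per iteration is what makes the evolution of quality visible at a glance.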
Performance Evaluation using Blackboard Technique in Software Architecture (IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for the evaluation of non-functional requirements such as performance. Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance is specified in terms of performance models, it may be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance of software systems based on the blackboard technique at the software architecture level. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity, and component diagrams. The UML model is then transformed into an executable model based on timed colored Petri nets (TCPN). Consequently, upon execution of the executable model and analysis of its results, non-functional requirements including performance (such as response time) may be evaluated at the software architecture level.
Software size estimation at the early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size represents one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement software. With the whole industry moving towards the object-oriented paradigm, it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach is used to quantify classes, which are the logical building blocks of the object-oriented paradigm. In this paper, we propose a class-point-based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries based on a multidimensional view of data. Materialized views can significantly reduce the execution time of decision support queries. We perform a case study based on the TPC-H benchmark, which is representative of OLAP systems. We used a greedy-based approach to determine a good set of views to be materialized. After finding the number of views, the class point approach is used to estimate the size of the OLAP system. The results of our approach are validated.
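The calibrated class point weights from the literature are not reproduced here; as a hedged sketch of the counting step only (weights and class types below are illustrative), each class is weighted by its type and complexity level and the weighted sum gives an unadjusted size:

```python
# Simplified class point counting: weight each class by (type, complexity)
# and sum. The weight values here are invented for illustration, not the
# calibrated class point weights.
WEIGHTS = {
    ("PDT", "low"): 3, ("PDT", "avg"): 6, ("PDT", "high"): 10,  # problem domain
    ("DMT", "low"): 5, ("DMT", "avg"): 8, ("DMT", "high"): 13,  # data management
}

def class_points(classes):
    """classes: list of (class type, complexity level) pairs."""
    return sum(WEIGHTS[c] for c in classes)

# e.g. a tiny OLAP design: two materialized-view classes plus one domain class
views = [("DMT", "avg"), ("DMT", "avg"), ("PDT", "low")]
size = class_points(views)
```

In the paper's setting, the number of materialized views chosen by the greedy step determines how many such classes enter the count.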
Evaluation of the software architecture styles from maintainability viewpoint (csandit)
In the process of software architecture design, different decisions are made that have system-wide impact. An important decision at the design stage is the selection of a suitable software architecture style. The lack of investigation into the quantitative impact of architecture styles on software quality attributes is the main problem in using such styles, so the use of architecture styles in design is based on the intuition of software developers. The aim of this research is to quantify the impact of architecture styles on software maintainability. In this study, architecture styles are evaluated based on coupling, complexity, and cohesion metrics and ranked by the analytic hierarchy process from the maintainability viewpoint. The main contribution of this paper is the quantification and ranking of software architecture styles from the perspective of the maintainability quality attribute at the architectural style selection stage.
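The paper's pairwise judgments are not reproduced here; as a minimal sketch of the AHP ranking step it mentions, priority weights can be derived from a pairwise comparison matrix with the geometric-mean approximation (the style names and judgment values below are hypothetical):

```python
# AHP priority vector via row geometric means, normalized to sum to 1.
from math import prod

def ahp_priorities(matrix):
    """matrix[i][j]: how much more maintainable style i is judged to be
    than style j, on the standard AHP 1-9 scale (matrix[j][i] = 1/matrix[i][j])."""
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical judgments for three styles: layered, pipe-and-filter, repository
m = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
weights = ahp_priorities(m)  # highest weight = preferred style
```

A full AHP study would also check the consistency ratio of the judgment matrix before trusting the ranking.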
The objective of this paper is to provide an insight into various agent-oriented methodologies using an enhanced comparison framework based on criteria such as process-related criteria, steps-and-techniques-related criteria, steps and usability criteria, and model-related or “concepts”-related criteria, together with comparisons regarding model-related and supportive-related criteria. The results also incorporate input collected from users of the agent-oriented methodologies through a questionnaire-based survey.
Automatically inferring structure correlated variable set for concurrent atom... (ijseajournal)
The atomicity of correlated variables is quite tedious and error-prone for programmers to infer and specify explicitly in multi-threaded programs. Researchers have studied the automatic discovery of the atomic sets programmers intended, for example by frequent itemset mining and rule-based filtering. However, due to the lack of inspection of the inner structure, some implicitly semantics-independent variables intended by the user are mistakenly classified as correlated. In this paper, we present a novel simplification method for the program dependency graph and a corresponding graph mining approach to detect the set of variables correlated by logical structures in the source code. This approach formulates the automatic inference of correlated variables as mining frequent subgraphs of the simplified data and control flow dependency graph. The presented simplified graph representation of program dependency is not only robust to varieties of coding style, but also essential for recognizing logical correlation. We implemented our method and compared it with previous methods on open source program repositories. The experimental results show that our method has a lower false positive rate than previous methods in the initial development stage. We conclude that our method can provide programmers during development with a sufficiently precise correlated variable set for checking atomicity.
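The paper mines frequent subgraphs of a simplified dependency graph; as a much simpler hedged illustration of the underlying idea, frequent pair counting over per-method variable-access sets can surface variables that are repeatedly touched together (all variable names below are hypothetical):

```python
# Frequent co-access counting: variable pairs accessed together in at
# least `min_support` methods are reported as candidate correlated sets.
from itertools import combinations
from collections import Counter

def frequent_pairs(access_sets, min_support):
    """access_sets: one set of variable names per method/critical region."""
    counts = Counter()
    for vars_in_method in access_sets:
        for pair in combinations(sorted(vars_in_method), 2):
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= min_support}

# Hypothetical per-method access sets of a banking-style module:
accesses = [{"balance", "log"}, {"balance", "limit"},
            {"balance", "limit", "log"}, {"limit", "balance"}]
correlated = frequent_pairs(accesses, min_support=3)
```

The paper's contribution is precisely to go beyond such co-occurrence counting by exploiting graph structure, which filters out pairs that co-occur without any logical dependency.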
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALE (ijseajournal)
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time-consuming for software engineers despite proposals on bad smell detection and refactoring tools. Large Class is a kind of bad smell caused by large scale, and its detection is hard to achieve automatically. In this paper, a Large Class bad smell detection approach based on a class length distribution model and cohesion metrics is proposed. In programs, class lengths conform to certain distributions, and the class length distribution model is generalized to detect programs after grouping. Meanwhile, cohesion metrics are analyzed for bad smell detection. Bad smell detection experiments on open source programs show that the Large Class bad smell can be detected effectively and accurately with this approach, and a refactoring scheme can be proposed to improve the design quality of programs.
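The abstract does not specify which cohesion metric is used; as a hedged sketch of one common choice, an LCOM-style score counts method pairs that share no attribute, minus pairs that share at least one, floored at zero (the class layout below is invented):

```python
# LCOM (lack of cohesion of methods), Chidamber-Kemerer style:
# high values suggest a class doing too many unrelated things.
from itertools import combinations

def lcom(method_attrs):
    """method_attrs: method name -> set of attributes the method uses."""
    share, no_share = 0, 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:
            share += 1
        else:
            no_share += 1
    return max(no_share - share, 0)

# Hypothetical suspect class: four methods, almost no shared attributes.
suspect = {"pay": {"salary"}, "tax": {"rate"}, "log": {"audit"}, "bonus": {"salary"}}
score = lcom(suspect)  # combined with class length, flags a Large Class
```

In the paper's approach, such a cohesion signal is combined with where the class falls in the length distribution before flagging it for refactoring.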
ADAPTIVE CONFIGURATION META-MODEL OF A GUIDANCE PROCESSijcsit
The current technology tend leads us to recognize the need for adaptive guidance process for all process of
software development. The new needs generated by the mobility context for software development led these
guidance processes to be adapted. In order to improve the performance of the deployed software
development, it is useful to manage the configuration of its evolving aspects. This paper deals with the
configuration management of guidance process or its ability to be adapted to specific development
contexts. The proposed adaptive configuration Meta-model is worked out on the basis of a Y description
for adaptive guidance process. This description focuses on three dimensions defined by the
material/software platform, the adaptation form and provided guidance service. Each dimension considers
several factors to develop a coherent configuration strategy and provide automatically the appropriate
guidance process to a current development context.
Reduced Software Complexity for E-Government Applications with ZEF FrameworkTELKOMNIKA JOURNAL
The situation of dynamic change is unpredictable and always growth increasingly. It also can
happen anytime and anywhere. The one kind which is always changing is the government policy.This
condition is suggested take the impact for software for information system. It will cause replacement,
modification, and enhancement of software for information system. There is some commonality and
variability of software features in Indonesian Government. Hence, to manage it, we present enhancement
of Zuma’s E-Government Framework (ZEF) for reduce software complexity.We enhance ZEF Framework
using SPLE and GORE approach in order to improve traditional software development.It can reduce, if
the changing continuously happen.The measurement of software complexity relate to functionality of
system.It can describe with function point, because function point can describe logical software
complexity also. The preliminary result of this study can reduce efficiency of software complexity such as
information processing size, technical complexity adjustment factors and function points in e-government
applications.
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA...ijseajournal
In software development phase software artifacts are not in consistent states such as: some of the class artifacts are fully developed some are half developed, some are major developed, some are minor developed and some are not developed yet. At this stage allowing too many software requirement changes may possibly delay in project delivery and increase development budget of the software. On the other hand rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activity that helps software project managers in accepting or rejecting changes during software development phase. This paper extends our previous works on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows improvement in the estimation accuracy over current change effort estimation models.
A methodology to evaluate object oriented software systems using change requi...ijseajournal
It is a well known fact that software maintenance plays a major role and finds importance in software
development life cycle. As object
-
oriented programming has become the standard, it is very important to
understand th
e problems of maintaining object
-
oriented software systems. This paper aims at evaluating
object
-
oriented software system through change requirement traceability
–
based impact analysis
methodology
for non functional requirements using functional requirem
ents
. The major issues have been
related to change impact algorithms and inheritance of functionality.
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDYijseajournal
Requirements engineering is a discipline of software engineering that is concerned with the
identification and handling of user and system requirements. Aspect-Oriented Requirements
Engineering (AORE) extends the existing requirements engineering approaches to cope with the
issue of tangling and scattering resulted from crosscutting concerns. Crosscutting concerns are
considered as potential aspects and can lead to the phenomena “tyranny of the dominant
decomposition”. Requirements-level aspects are responsible for producing scattered and tangled
descriptions of requirements in the requirements document. Validation of requirements artefacts
is an essential task in software development. This task ensures that requirements are correct and
valid in terms of completeness and consistency, hence, reducing the development cost,
maintenance and establish an approximately correct estimate of effort and completion time of the
project. In this paper, we present a validation framework to validate the aspectual requirements
and the crosscutting relationship of concerns that are resulted from the requirements engineering
phase. The proposed framework comprises a high-level and low-level validation to implement on
software requirements specification (SRS). The high-level validation validates the concerns with
stakeholders, whereas the low-level validation validates the aspectual requirement by
requirements engineers and analysts using a checklist. The approach has been evaluated using
an experimental study on two AORE approaches. The approaches are viewpoint-based called
AORE with ArCaDe and lexical analysis based on Theme/Doc approach. The results obtained
from the study demonstrate that the proposed framework is an effective validation model for
AORE artefacts.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL...
Promoting quality within the context of agile software development is extremely important and useful, not only to improve the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also to support communication between teams. In this context, quality
needs to be visible in a synthetic and intuitive way in order to facilitate the decision of accepting or
rejecting each iteration within the software life cycle. This article introduces a novel solution called
Product Quality Evaluation Method (PQEM) which can be used to evaluate a set of quality characteristics
for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric
approach, the standard ISO/IEC 25010, and the extension made of testing coverage in order to obtain the
quality coverage of each quality characteristic. The outcome of PQEM is a unique multidimensional value,
that represents the quality level reached by each iteration of a product, as an aggregated measure. Even though a single value is not the usual way of expressing quality, we believe it can be useful for understanding the quality level of each iteration at a glance. An illustrative example of the PQEM method
was carried out with two iterations from a web and mobile application, within the healthcare environment.
A single measure makes it possible to observe the evolution of the level of quality reached in the evolution
of the product through the iterations.
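The aggregation idea behind PQEM can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the characteristic names, check counts and weights are assumptions.

```python
# Minimal sketch of aggregating per-characteristic quality coverage into a
# single iteration-level value (all data below is hypothetical).

def quality_coverage(passed, total):
    """Quality coverage of one characteristic: fraction of checks passed."""
    return passed / total if total else 0.0

def pqem_value(characteristics):
    """Aggregate per-characteristic coverages into one weighted measure.

    characteristics: {name: (passed_checks, total_checks, weight)}
    """
    total_weight = sum(w for _, _, w in characteristics.values())
    return sum(
        quality_coverage(p, t) * w for p, t, w in characteristics.values()
    ) / total_weight

# One iteration of a hypothetical product, using ISO/IEC 25010-style names.
iteration = {
    "functional_suitability": (18, 20, 0.4),   # 90% coverage
    "performance_efficiency": (7, 10, 0.3),    # 70% coverage
    "usability": (9, 10, 0.3),                 # 90% coverage
}

print(pqem_value(iteration))  # 0.4*0.9 + 0.3*0.7 + 0.3*0.9 = 0.84
```

Comparing this single value across iterations is what makes the quality evolution visible at a glance.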
Performance Evaluation using Blackboard Technique in Software Architecture
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for the evaluation of non-functional requirements (such as performance). Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance is specified on the basis of performance models, it can be evaluated at the early stages of the software development cycle. Therefore, modelling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance of software systems at the software architecture level, based on the blackboard technique. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity and component diagrams. The UML model is then transformed into an executable model based on timed coloured Petri nets (TCPN). Consequently, upon execution of the executable model and analysis of its results, non-functional requirements including performance (such as response time) may be evaluated at the software architecture level.
Software size estimation at early stages of project development holds great significance to meet
the competitive demands of software industry. Software size represents one of the most
interesting internal attributes which has been used in several effort/cost models as a predictor
of effort and cost needed to design and implement the software. The whole world is focusing
towards object oriented paradigm thus it is essential to use an accurate methodology for
measuring the size of object oriented projects. The class point approach is used to quantify
classes which are the logical building blocks in object oriented paradigm. In this paper, we
propose a class point based approach for software size estimation of On-Line Analytical
Processing (OLAP) systems. OLAP is an approach to swiftly answer decision support queries
based on multidimensional view of data. Materialized views can significantly reduce the
execution time for decision support queries. We perform a case study based on the TPC-H
benchmark which is a representative of OLAP System. We have used a Greedy based approach
to determine a good set of views to be materialized. After finding the number of views, the class
point approach is used to estimate the size of an OLAP System. The results of our approach are
validated.
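The greedy step for choosing which views to materialize can be sketched as below. This is a simplified stand-in for the paper's method: the query and view costs are invented toy data, not the TPC-H figures used in the case study.

```python
def greedy_view_selection(base_cost, view_costs, k):
    """Greedily pick up to k views, each round choosing the view that
    most reduces total query cost given the views chosen so far.

    base_cost:  {query: cost of answering it from base tables}
    view_costs: {view: {query: cost of answering that query from the view}}
    """
    best = dict(base_cost)          # current cheapest cost per query
    chosen = []
    for _ in range(k):
        def savings(view):
            return sum(max(0, best[q] - c) for q, c in view_costs[view].items())
        candidates = [v for v in view_costs if v not in chosen]
        if not candidates:
            break
        view = max(candidates, key=savings)
        if savings(view) <= 0:
            break                   # no remaining view helps
        chosen.append(view)
        for q, c in view_costs[view].items():
            best[q] = min(best[q], c)
    return chosen, sum(best.values())

base = {"q1": 100, "q2": 100, "q3": 100}
views = {
    "vA": {"q1": 10, "q2": 10},     # speeds up q1 and q2 a lot
    "vB": {"q3": 20},
    "vC": {"q1": 50, "q2": 50, "q3": 50},
}
chosen, total = greedy_view_selection(base, views, k=2)
print(chosen, total)  # ['vA', 'vB'] 40
```

Once the number of views is fixed this way, the class point approach is applied to the resulting system to estimate its size.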
Evaluation of the software architecture styles from maintainability viewpoint
In the process of software architecture design, different decisions are made that have systemwide
impact. An important decision of design stage is the selection of a suitable software
architecture style. Lack of investigation on the quantitative impact of architecture styles on
software quality attributes is the main problem in using such styles. So, the use of architecture
styles in designing is based on the intuition of software developers. The aim of this research is
to quantify the impacts of architecture styles on software maintainability. In this study,
architecture styles are evaluated based on coupling, complexity and cohesion metrics and
ranked by analytic hierarchy process from maintainability viewpoint. The main contribution of
this paper is quantification and ranking of software architecture styles from the perspective of
maintainability quality attribute at the stage of architectural style selection.
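The AHP ranking step can be sketched with the common geometric-mean approximation of the priority vector. The pairwise judgments below are invented for illustration; the study's actual comparison matrices are not reproduced here.

```python
import math

def ahp_priorities(matrix):
    """Priority vector (sums to 1) from a reciprocal pairwise-comparison
    matrix, using the geometric mean of each row as an approximation of
    the principal eigenvector."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    s = sum(geo)
    return [g / s for g in geo]

# Hypothetical maintainability judgments for three styles on Saaty's
# 1-9 scale (larger = row style preferred over column style).
styles = ["layered", "pipe-and-filter", "blackboard"]
m = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
p = ahp_priorities(m)
ranking = sorted(zip(styles, p), key=lambda t: -t[1])
print(ranking)  # 'layered' ranks first under these assumed judgments
```

In the paper, the judgments themselves are derived from coupling, complexity and cohesion metrics rather than supplied by hand.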
The objective of this paper is to provide an insight into various agent-oriented methodologies by using an enhanced comparison framework based on criteria such as process-related criteria, steps- and techniques-related criteria, usability criteria, and model- or "concepts"-related criteria, together with comparisons regarding model-related and supportive-feature-related criteria. The results also incorporate inputs collected from users of the agent-oriented methodologies through a questionnaire-based survey.
Automatically inferring structure correlated variable set for concurrent atom...
The atomicity of correlated variables is quite tedious and error-prone for programmers to explicitly infer and specify in multi-threaded programs. Researchers have studied the automatic discovery of the atomic sets programmers intended, for example by frequent itemset mining and rule-based filters. However, due to the lack of inspection of the inner structure, some implicitly semantics-independent variables intended by the user are mistakenly classified as correlated. In this paper, we present a novel simplification method for the program dependency graph and the corresponding graph mining approach to detect the set of variables correlated by logical structures in the source code. This approach formulates the automatic inference of the correlated variables as mining frequent subgraphs of the simplified data and control flow dependency graph. The presented simplified graph representation of program dependency is not only robust to varieties of coding style, but also essential to recognize the logical correlation. We implemented our method and compared it with previous methods on open source program repositories. The experimental results show that our method has a lower false positive rate than previous methods in the initial development stage. It is concluded that our presented method can provide programmers in development with a sufficiently precise correlated variable set for checking atomicity.
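A heavily simplified sketch of the mining idea follows. Each code region's simplified dependency graph is reduced here to a set of variable-pair edges, and only single-edge "subgraphs" are counted for support, whereas the paper mines genuinely structured frequent subgraphs; all variable names are hypothetical.

```python
from collections import Counter

def correlated_variable_sets(region_edges, min_support):
    """region_edges: one set of (var, var) dependency edges per code region.
    Returns groups of variables connected by edges that recur in at least
    min_support regions (frequent single-edge subgraphs, merged transitively)."""
    counts = Counter()
    for edges in region_edges:
        for a, b in set(edges):
            counts[tuple(sorted((a, b)))] += 1
    frequent = [e for e, c in counts.items() if c >= min_support]

    # Union-find to merge frequent edges into correlated variable sets.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in frequent:
        parent[find(a)] = find(b)
    groups = {}
    for a, b in frequent:
        for v in (a, b):
            groups.setdefault(find(v), set()).add(v)
    return sorted(map(frozenset, groups.values()), key=len, reverse=True)

# Three regions all link `lo` and `hi`; `tmp`-`x` appears only once, so it
# is filtered out as incidental rather than logically correlated.
regions = [
    {("lo", "hi"), ("lo", "n")},
    {("lo", "hi"), ("tmp", "x")},
    {("hi", "lo")},
]
print(correlated_variable_sets(regions, min_support=2))
```

The support threshold plays the same filtering role as in frequent itemset mining, which the paper cites as the prior approach.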
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALE
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time
consuming for software engineers despite proposals on bad smell detection and refactoring tools. Large
Class is a kind of bad smells caused by large scale, and the detection is hard to achieve automatically. In
this paper, a Large Class bad smell detection approach based on class length distribution model and
cohesion metrics is proposed. In programs, the lengths of classes conform to certain distributions. The class length distribution model is generalized to detect programs after grouping.
Meanwhile, cohesion metrics are analyzed for bad smell detection. The bad smell detection experiments of
open source programs show that Large Class bad smell can be detected effectively and accurately with this
approach, and refactoring scheme can be proposed for design quality improvements of programs.
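The combination of a length-distribution check with a cohesion check can be sketched as below. This is a plausible reading of the approach, not the paper's detector: the z-score test, the pair-sharing cohesion metric, and both thresholds are assumptions.

```python
from itertools import combinations

def cohesion(methods):
    """Fraction of method pairs that share at least one attribute
    (a simple LCOM-style cohesion proxy; 1.0 for fewer than 2 methods)."""
    pairs = list(combinations(methods.values(), 2))
    if not pairs:
        return 1.0
    return sum(1 for a, b in pairs if a & b) / len(pairs)

def is_large_class(loc, loc_mean, loc_std, coh,
                   z_threshold=2.0, cohesion_threshold=0.3):
    """Flag a class whose length is an outlier of the class-length
    distribution AND whose cohesion is low (thresholds are assumed)."""
    z = (loc - loc_mean) / loc_std
    return z > z_threshold and coh < cohesion_threshold

# A hypothetical 800-line class in a codebase where classes average
# 200 lines (std 150), with methods touching disjoint attributes.
methods = {"parse": {"buf"}, "render": {"canvas"}, "save": {"path"}}
c = cohesion(methods)
print(c, is_large_class(800, 200, 150, c))  # 0.0 True
```

Requiring both signals keeps a long but cohesive class (e.g. a generated parser) from being flagged on length alone.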
MANAGING AND ANALYSING SOFTWARE PRODUCT LINE REQUIREMENTS
Modelling software product line (SPL) features plays a crucial role to a successful development of SPL.
Feature diagram is one of the widely used notations to model SPL variants. However, there is a lack of
precisely defined formal notations for representing and verifying such models. This paper presents an
approach that we adopt to model SPL variants by using UML and subsequently verify them by using first-order logic. UML provides an overall modelling view of the system. First-order logic provides a precise
and rigorous interpretation of the feature diagrams. We model variants and their dependencies by using
propositional connectives and build logical expressions. These expressions are then validated by the Alloy
verification tool. The analysis and verification process is illustrated by using a Computer Aided Dispatch (CAD) system.
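The kind of check the logical expressions enable can be sketched by exhaustively evaluating propositional constraints over feature assignments, in the spirit of Alloy's bounded search. The tiny feature model below is hypothetical, not the paper's CAD model.

```python
from itertools import product

def valid_configs(features, constraints):
    """Enumerate all feature assignments satisfying every propositional
    constraint; each constraint is a predicate over the assignment dict."""
    result = []
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if all(c(cfg) for c in constraints):
            result.append({f for f, v in cfg.items() if v})
    return result

implies = lambda p, q: (not p) or q

# Toy model: root feature CAD is mandatory; GPS and ManualDispatch are
# optional children of CAD; GPS excludes ManualDispatch.
features = ["CAD", "GPS", "ManualDispatch"]
constraints = [
    lambda c: c["CAD"],                                  # mandatory root
    lambda c: implies(c["GPS"], c["CAD"]),               # child requires parent
    lambda c: implies(c["ManualDispatch"], c["CAD"]),
    lambda c: not (c["GPS"] and c["ManualDispatch"]),    # mutual exclusion
]
for cfg in valid_configs(features, constraints):
    print(sorted(cfg))
```

An empty result would indicate an inconsistent feature model, which is exactly the kind of defect such verification is meant to surface before implementation.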
Development of an expert system for reducing medical errors
Recent advances in patient safety have been hampered by the difficulty of developing a uniform classification of patient safety concepts in a systematic way. Therefore, many believe that medical expert systems have great potential to improve health care. A framework for computer-based diagnosis of medical errors arising from primary systems' deficiencies is presented. The results of this research assisted in developing the hierarchical structure of the medical errors expert system, which was written and compiled in CLIPS. It has 225 rules, 52 parameters and 830 conditional paragraphs. The system prompts the user for responses with suggested input formats, and checks the user input for consistency within the given limits. In addition, the system was validated through numerous consultations with experts in the field. The benefits gained from such expert systems are eliminating the fear of dealing with personal mistakes, providing up-to-date information, and helping medical staff as a learning tool.
SCHEDULING AND INSPECTION PLANNING IN SOFTWARE DEVELOPMENT PROJECTS USING MUL...
This paper presents a Multi-objective Hyper-heuristic Evolutionary Algorithm (MHypEA) for the solution
of Scheduling and Inspection Planning in Software Development Projects. Scheduling and Inspection
planning is a vital problem in software engineering whose main objective is to schedule the persons to
various activities in the software development process such as coding, inspection, testing and rework in
such a way that the quality of the software product is maximized while, at the same time, the project makespan and cost are minimized. The problem becomes challenging when the size of the project is huge. The MHypEA is an effective metaheuristic search technique for suggesting scheduling and inspection plans. It incorporates twelve low-level heuristics which are based on different methods of selection, crossover and mutation operations of Evolutionary Algorithms. The mechanism to select a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on randomly generated test problems.
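The adaptive-weight selection of low-level heuristics can be sketched as a roulette wheel over weights that a reinforcement signal updates. This is a simplified illustration of the idea, not the MHypEA implementation; the update factors and heuristic names are assumed.

```python
import random

class AdaptiveHeuristicSelector:
    """Roulette-wheel selection over low-level heuristics whose weights
    are adapted by a reinforcement signal."""

    def __init__(self, heuristics, seed=0):
        self.weights = {h: 1.0 for h in heuristics}
        self.rng = random.Random(seed)

    def select(self):
        """Pick a heuristic with probability proportional to its weight."""
        total = sum(self.weights.values())
        r = self.rng.uniform(0, total)
        acc = 0.0
        for h, w in self.weights.items():
            acc += w
            if r <= acc:
                return h
        return h  # float edge case: fall back to the last heuristic

    def reinforce(self, heuristic, improved):
        """Reward heuristics that improved the solution, penalize others."""
        if improved:
            self.weights[heuristic] *= 1.2
        else:
            self.weights[heuristic] = max(0.1, self.weights[heuristic] * 0.9)

sel = AdaptiveHeuristicSelector(["one_point_crossover", "swap_mutation"])
for _ in range(10):
    sel.reinforce("swap_mutation", improved=True)
    sel.reinforce("one_point_crossover", improved=False)
print(sel.weights)  # swap_mutation now dominates the roulette wheel
```

Over a run, heuristics that keep improving the schedule are selected ever more often, which is the point of hyper-heuristic search.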
Is Fortran still relevant? Comparing Fortran with Java and C++
This paper presents a comparative study to evaluate and compare Fortran with the two most popular programming languages, Java and C++. Fortran has gone through major and minor extensions in the years 2003 and 2008. (1) How much have these extensions made Fortran comparable to Java and C++? (2) What are the differences and similarities in supporting features such as templates, object constructors and destructors, abstract data types and dynamic binding? These are the main questions we try to answer in this study. An object-oriented ray tracing application is implemented in these three languages to compare them. By using only one program we ensured there was only one set of requirements, thus making the comparison homogeneous. Based on our literature survey, this is the first study carried out to compare these languages by applying software metrics to the ray tracing application and comparing these results with the similarities and differences found in practice. By providing binary analysis and profiling of the application, we motivate language implementers and compiler developers to improve Fortran object handling and processing, and hence make it more prolific and general. This study facilitates and encourages the reader to further explore, study and use these languages more effectively and productively, especially Fortran.
Management of time uncertainty in agile
Agile software development represents a major departure from traditional methods of software
engineering. It has had a huge impact on how software is developed worldwide. Agile software development solutions are targeted at enhancing work at the project level. But it may encounter some uncertainties in its
working. One of the key measures of the resilience of a project is its ability to reach completion, on time
and on budget, regardless of the turbulent and uncertain environment it may operate within. Uncertainty of
time is a problem which can lead to other uncertainties too. The main issue with time uncertainty is how much delay will be caused by the uncertain environment; if the project manager knows about this delay in advance, then he can ask the customer for that extra time. So this paper tries to estimate that extra time and calculate it.
An empirical evaluation of impact of refactoring on internal and external mea...
Refactoring is the process of improving the design of existing code by changing its internal structure without affecting its external behaviour, with the main aim of improving the quality of the software product.
Therefore, there is a belief that refactoring improves quality factors such as understandability, flexibility,
and reusability. However, there is limited empirical evidence to support such assumptions.
The objective of this study is to validate/invalidate the claims that refactoring improves software quality.
The impact of selected refactoring techniques was assessed using both external and internal measures. Ten
refactoring techniques were evaluated through experiments to assess external measures: Resource
Utilization, Time Behaviour, Changeability and Analysability which are ISO external quality factors and
five internal measures: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class
Coupling and Lines of Code.
The external measures did not show any improvement in code quality after the refactoring treatment. Among the internal measures, however, the maintainability index indicated better code quality for refactored code than for non-refactored code, while the other internal measures did not indicate any positive effect on refactored code.
The main problems of school course timetabling are time, curriculum, and classrooms. In addition there
are other problems that vary from one institution to another. This paper is intended to solve the problem of
satisfying the teachers’ preferred schedule in a way that regards the importance of the teacher to the
supervising institute, i.e. his score according to some criteria. Genetic algorithm (GA) has been presented
as an elegant method in solving timetable problem (TTP) in order to produce solutions with no conflict. In
this paper, we consider the analytic hierarchy process (AHP) to efficiently obtain a score for each teacher,
and consequently produce a GA-based TTP solution that satisfies most of the teachers’ preferences.
SOCIO-DEMOGRAPHIC DIFFERENCES IN THE PERCEPTIONS OF LEARNING MANAGEMENT SYSTE...
Learner centred design (LCD) focuses on creating an e-learning system that can fulfil individual needs
through personalization, nevertheless there are still many technical challenges. Besides, losing balanced
focus on both of the learners and the instructors does not help to create a successful e-learning system.
User-centred design helps to improve the usability of a system as it integrates requirements and user
interface designs based on users' needs. The findings of this research prove that even when users are provided with the same LMS, not everyone has the same perceptions or tolerance levels of the seven design
factors that may cause frustrations to the users, and not everyone has the same satisfaction level of
navigation experience and interface design. It is important for the LMS developers to understand that the
variations between roles, genders, experiences and ages exist and should not be ignored when designing
the system.
In this paper we propose a logically correct path for automatically implementing any algorithm or model in verified C# code. Our proposal depends on using Event-B as a formal method. It is a suitable solution for those who are inexperienced in programming languages but proficient in mathematical modelling. Our proposal also integrates requirements, code and verification in the system development life cycle. We also suggest using Event-B patterns. Our suggestion is classified into two cases: the algorithm case and the model case. The benefits of our proposal are reducing the proof effort, reusability, increasing the degree of automation and generating high-quality code. In this paper we apply and discuss the three phases of the automatic code generation philosophy on two case studies: the first is a “minimum algorithm” and the second is a model for an ATM.
Software testing is an important activity of the software development process and the most effort-consuming phase of software development. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation contributes to reducing cost and time. Hence test case generation may be treated as an optimization problem. In this paper we have used a genetic algorithm to optimize the test cases that are generated by applying conditional coverage to the source code. Test case data generated automatically using the genetic algorithm is optimized and outperforms the test cases generated by random testing.
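The idea of evolving test data toward conditional coverage can be sketched with a tiny GA. This is an illustrative toy, not the paper's system: the function under test, the GA parameters, and the fitness definition (number of distinct condition outcomes covered by a suite) are all assumptions.

```python
import random

def branch_outcomes(x):
    """Toy function under test: record the outcome of each condition."""
    return {("x > 10", x > 10), ("x % 2 == 0", x % 2 == 0)}

def fitness(suite):
    """Conditional coverage achieved by a test suite (maximum is 4 here:
    two conditions, each with a true and a false outcome)."""
    covered = set()
    for x in suite:
        covered |= branch_outcomes(x)
    return len(covered)

def evolve(pop_size=20, suite_len=3, generations=50, seed=1):
    """Tiny GA: truncation selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(-50, 50) for _ in range(suite_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, suite_len)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                      # mutation
                child[rng.randrange(suite_len)] = rng.randint(-50, 50)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Random testing corresponds to the initial population alone; the selection pressure toward higher coverage is what the GA adds.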
SOFTWARE ENGINEERING FOR TAGGING SOFTWARE
Tagging is integrated into web applications to ease the maintenance of the large amount of information stored in a web application. With no mention of a requirement specification or design document for tagging software, academically or otherwise, integrating tagging software into a web application is a tedious task. In this paper, a framework has been created for the integration of tagging software into a web application. The framework follows the software development life cycle paradigms and is to be used during its different stages. The requirement component of the framework presents a weighted requirement checklist that aids the user in deciding requirements for the tagging software in a web application, from among popular ones. The design component facilitates the developer in understanding the design of existing tagging software, modifying it or developing a new one. The framework also helps in verification and validation of tagging software integrated into a web application.
The analytic hierarchy process (AHP) has been applied in many fields and especially to complex
engineering problems and applications. The AHP is capable of structuring decision problems and finding
mathematically determined judgments built on knowledge and experience. This suggests that AHP should
prove useful in agile software development where complex decisions occur routinely. In this paper, the
AHP is used to rank the refactoring techniques based on the internal code quality attributes. XP
encourages applying refactoring where the code smells bad. However, refactoring may consume more time and effort. So, to maximize the benefits of refactoring in less time and with less effort, AHP has been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
Design patterns for self adaptive systems
Self adaptation has been proposed to overcome the complexity of today's software systems which results
from the uncertainty issue. Aspects of uncertainty include changing systems goals, changing resource
availability and dynamic operating conditions. Feedback control loops have been recognized as vital
elements for engineering self-adaptive systems. However, despite their importance, there is still a lack of a systematic way to design the interactions between the different components comprising one particular feedback control loop, as well as the interactions between components from different control loops. Most existing approaches are either domain specific or too abstract to be useful. In addition, the issue of multiple control loops is often neglected and consequently self-adaptive systems are often designed around a single loop. In this paper we propose a set of design patterns for modeling and designing self-adaptive software systems based on the MAPE-K control loop of the IBM architecture blueprint, which takes into account the multiple control loops issue. A case study is presented to illustrate the applicability of the
proposed design patterns.
Discover why unit testing is such an important practice in software development, and learn about Test-Driven Development, mocking and other code testing practices in .NET.
A review of slicing techniques in software engineering
A program slice is the part of a program that may affect the values computed at some point of interest during its execution. Such a point is known as the slicing criterion, and it is generally identified by a location in a given program coupled with a subset of the program's variables. The process by which program slices are computed is called program slicing. Weiser gave the original definition of the program slice in 1979. Since that first definition, many ideas related to the program slice have been formulated, along with numerous techniques to compute program slices. Meanwhile, a distinction between the static slice and the dynamic slice was also made.
Program slicing is now among the most useful techniques that can fetch the particular elements of a program which are related to a particular computation. Quite a large number of variants of program slicing have been analyzed, along with algorithms to compute the slices. Model-based slicing splits large software architectures into smaller sub-models during the early stages of the SDLC. Software testing is regarded as an activity to evaluate the functionality and features of a system; it verifies whether the system meets its requirements or not. A common practice now is to extract sub-models out of giant models based upon the slicing criterion. The process of model-based slicing is utilized to extract the desired lump out of a slice diagram. This survey focuses on slicing techniques in the fields of numerous programming paradigms such as web applications, object-oriented, and component-based programming. Owing to the efforts of various researchers, this technique has been extended to numerous other platforms, including program debugging, program integration and analysis, software testing and maintenance, reengineering, and reverse engineering. This survey also portrays the role of model-based slicing and the various techniques that are being taken up to compute the slices.
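A static backward slice in its simplest form can be sketched as below. This is a deliberately minimal illustration: it handles straight-line code and data dependences only, ignoring the control dependences that real slicers must track, and the example program is hypothetical.

```python
def backward_slice(stmts, criterion_line, var):
    """Static backward slice of straight-line code with respect to the
    slicing criterion (criterion_line, var), using data dependences only.

    stmts: ordered list of (line, defined_var, {used_vars}).
    Returns the lines whose definitions can affect `var` at the criterion."""
    relevant = {var}
    slice_lines = []
    for line, defined, uses in reversed(stmts):
        if line > criterion_line:
            continue
        if defined in relevant:
            slice_lines.append(line)
            relevant.discard(defined)
            relevant |= uses       # the definition's inputs become relevant
    return sorted(slice_lines)

# Hypothetical straight-line program:
#   1: a = read()     2: b = a + 1
#   3: c = 5          4: d = b * 2
program = [
    (1, "a", set()),
    (2, "b", {"a"}),
    (3, "c", set()),
    (4, "d", {"b"}),
]
print(backward_slice(program, criterion_line=4, var="d"))  # [1, 2, 4]
```

Line 3 is excluded because `c` cannot affect `d`, which is exactly the pruning that makes slices useful for debugging and maintenance.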
Proceedings of the 2015 Industrial and Systems Engineering Research Conference
S. Cetinkaya and J. K. Ryan, eds.
Use of Symbolic Regression for Lean Six Sigma Projects
Daniel Moreno-Sanchez, MSc.
Jacobo Tijerina-Aguilera, MSc.
Universidad de Monterrey
San Pedro Garza Garcia, NL 66238, Mexico
Arlethe Yari Aguilar-Villarreal, MEng.
Universidad Autonoma de Nuevo Leon
San Nicolas de los Garza, NL 66451, Mexico
Abstract
Lean Six Sigma projects and the quality engineering profession have to deal with an extensive selection of tools, most of them requiring specialized training. The increased availability of standard statistical software motivates the
use of advanced data science techniques to identify relationships between potential causes and project metrics. In
these circumstances, Symbolic Regression has received increased attention from researchers and practitioners to
uncover the intrinsic relationships hidden within complex data without requiring specialized training for its
implementation. The objective of this paper is to evaluate the advantages and drawbacks of using computer assisted
Symbolic Regression within the Analyze phase of a Lean Six Sigma project. An application of this approach in a
service industry project is also presented.
Keywords
Symbolic Regression, Data Science, Lean Six Sigma
1. Introduction
Lean Six Sigma (LSS) has become a well-known hybrid methodology for quality and productivity improvement in
organizations. Its wide adoption in several industries has shaped Process Innovation and Operational Excellence
initiatives, enabling LSS to become a main topic in quality practitioner sites of interest [1], recognized Six Sigma
(SS) certification body of knowledge contents [2], and professional society conferences [3].
However, LSS projects and the quality engineering profession have to deal with an extensive selection of tools, most of them requiring specialized training. To assist LSS practitioners it is common to categorize tools based on the
traditional DMAIC model which stands for Define, Measure, Analyze, Improve, and Control phases. Table 1
presents an overview of the main tools that are commonly used in each phase of a LSS project, allowing team
members to progressively develop an understanding between realizing each phase’s intent and how the selected
tools can contribute to that purpose.
This paper focuses on the Analyze phase where tools for statistical model building are most likely to be selected.
The increased availability of standard statistical software motivates the use of advanced data science techniques to
identify relationships between potential causes and project metrics. In these circumstances Symbolic Regression
(SR) has received increased attention from researchers and practitioners even though SR is still in an early stage of
commercial availability.
The objective of this paper is to evaluate the advantages and drawbacks o ...
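To make the Analyze-phase idea concrete, here is a minimal symbolic-regression sketch: it exhaustively enumerates small expression trees over one input and picks the one minimizing squared error. Real SR tools use genetic programming over far richer grammars; the process data below is synthetic, not from the service-industry project.

```python
def evaluate(expr, x):
    """Evaluate an expression tree: 'x', an int constant, or (op, l, r)."""
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def sse(expr, data):
    """Sum of squared errors of an expression over (x, y) samples."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

def symbolic_regression(data):
    """Brute-force search over all expression trees of depth <= 2
    built from {x, 1, 2} and the operators +, -, *."""
    terminals = ["x", 1, 2]
    ops = "+-*"
    depth1 = [(o, a, b) for o in ops for a in terminals for b in terminals]
    pool = terminals + depth1
    depth2 = [(o, a, b) for o in ops for a in pool for b in pool]
    return min(pool + depth2, key=lambda e: sse(e, data))

# Synthetic data from a hidden relationship y = x^2 + x.
data = [(x, x * x + x) for x in range(-3, 4)]
best = symbolic_regression(data)
print(best, sse(best, data))  # a zero-error form such as x*(x+1)
```

The appeal for LSS teams is that the search recovers the functional form of the cause-effect relationship itself, not just coefficients for a pre-chosen model.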
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...
Adaptive guidance model based similarity for software process development pro...
This paper describes a modeling approach SAGM (Similarity for Adaptive Guidance Model) that provides
adaptive and recursive guidance for software process development. This approach, in accordance to
developer needs, allows specific tailored guidance regarding the profile of developers. A profile is partially
or completely defined from a model of developers, through their roles, their qualifications, and through the
relationships between the context of the current activity and the model of the defined activities. This
approach aims to define the generic profile of development context and a similarity measure that evaluates
the similarities between the profiles created from the model of developers and those of the development
team involved in the execution of a software process. This serves to identify the classification of profiles and to deduce the appropriate type of assistance to developers (which can be corrective, constructive or specific).
In the present paper, the applicability and capability of A.I. techniques for effort estimation prediction have been investigated. It is seen that neuro-fuzzy models are very robust, characterized by fast computation, and capable of handling distorted data. Due to the presence of data non-linearity, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network named OHLANFIS has been developed using the MATLAB simulation environment.
Here the initial parameters of the OHLANFIS are identified using the subtractive clustering method. Parameters of the Gaussian membership function are optimally determined using the hybrid learning algorithm. From the analysis it is seen that the effort estimation prediction model developed using the OHLANFIS technique is able to perform well compared to a normal ANFIS model.
Configuration of a Guidance Process for Software Process Modeling
The current technology trend leads us to recognize the need for an adaptive guidance process for all processes of software development. The new needs generated by the mobility context for software development require these guidance processes to be adapted. This paper deals with the configuration management of the guidance process, i.e. its ability to be adapted to specific development contexts. We propose a Y-description for the adaptive guidance process. This description focuses on three dimensions defined by the material/software platform, the adaptation form and the provided guidance service. Each dimension considers several factors to develop a coherent configuration strategy and automatically provide the appropriate guidance process for the current development context.
EVALUATION OF THE SOFTWARE ARCHITECTURE STYLES FROM MAINTAINABILITY VIEWPOINTcscpconf
In the process of software architecture design, different decisions are made that have systemwide
impact. An important decision of design stage is the selection of a suitable software
architecture style. Lack of investigation on the quantitative impact of architecture styles on
software quality attributes is the main problem in using such styles. So, the use of architecture
styles in designing is based on the intuition of software developers. The aim of this research is
to quantify the impacts of architecture styles on software maintainability. In this study,
architecture styles are evaluated based on coupling, complexity and cohesion metrics and
ranked by the analytic hierarchy process from the maintainability viewpoint. The main contribution
of this paper is the quantification and ranking of software architecture styles from the perspective
of the maintainability quality attribute at the stage of architectural style selection.
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED...IJCSEA Journal
Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. Existing literature on software metrics is mainly focused on centralized systems, while work in the area of distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components run in even more heterogeneous networking and execution environments. Traditional coupling measures take into account only “static” couplings. They do not account for “dynamic” couplings due to polymorphism and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing and debugging. This is expected to result in poor predictive accuracy of quality models for distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing and coupling measurement. First, the instrumentation step is performed using a JVM that has been modified to trace method calls; during this step, three trace files are created, namely .prf, .clp and .svp. In the second step, the information in these files is merged. At the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, so that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL...ijdpsjournal
Parallel job scheduling is an important field of research in computer science. Finding the most
suitable processor for user-submitted jobs on high-performance or cluster computing systems plays an
important role in overall system performance. A new scheduling technique called communication-aware
scheduling is devised, capable of handling serial, parallel, mixed and dynamic jobs.
This work focuses on comparing communication-aware scheduling with existing parallel job
scheduling techniques, and the experimental results show that communication-aware scheduling
performs better than those existing techniques.
ARCHITECTURAL FRAMEWORK FOR DEVELOPING COMPONENT BASED GIS SYSTEMijfcstjournal
Component-based software engineering has the sole motive of making the development of
software systems as easy as possible. To achieve this objective, previous systems must be
examined to identify the concerns and limitations that this software engineering approach can
overcome. In this paper, to support the concept of component-based systems, the chosen domain
is GIS. GIS (Geographic Information Systems) are commonly used for the development of
map-based applications, and these systems are widely used across the web and in various
organizations. With the development and deepening of GIS, traditional GIS systems have shown
challenges of isolation, sealing and limited interoperability, thereby hindering the further
development and application of GIS technology. In this paper, a framework for a component-based
GIS system is proposed. The framework has a rich graphical interface, and user data can be easily
retrieved from the connected database and displayed in the browser.
A SIMILARITY MEASURE FOR CATEGORIZING THE DEVELOPERS PROFILE IN A SOFTWARE PR...cscpconf
Software development processes need an integrated environment that fulfils specific
developer needs. In this context, this paper describes the modeling approach SAGM (Similarity
for Adaptive Guidance Model), which provides adaptive, recursive guidance for software
processes, specifically tailored to the profile of developers. A profile is defined from
a model of developers, through their roles, their qualifications, and through the relationships
between the context of the current activity and the model of the activities. This approach
presents a similarity measure that evaluates the similarities between the profiles created from
the model of developers and those of the development team involved in the execution of a
software process. The aim is to classify the profiles and to deduce the appropriate
type of assistance (corrective, constructive or specific) to developers.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control over serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist’s survey of the valley before construction, through all the involved disciplines (fluid dynamics, structural engineering, generation and mains-frequency regulation), to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, especially when your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website that retails various grocery products. The project allows visitors to view the available products and enables registered users to purchase desired products instantly using the Paytm or UPI payment processor (Instant Pay), or to place an order using the Cash on Delivery (Pay Later) option. It also gives administrators and managers easy access to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. The objective of this project is to develop a basic shopping cart website for consumers and to explore the technologies used to build such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
Dependence flow graph for analysis
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.6, November 2014
DEPENDENCE FLOW GRAPH FOR ANALYSIS
OF ASPECT- ORIENTED PROGRAMS
Syarbaini Ahmad1, Abdul Azim A. Ghani2, Fazlida Mohd Sani3
1Faculty of Science & Info Technology, International Islamic University College Selangor
(KUIS), 43000 Kajang, Selangor, Malaysia
2,3Faculty of Computer Science and Information Technology, Universiti Putra Malaysia (UPM),
43400 Serdang, Selangor, Malaysia
Abstract
Program analysis is useful for debugging, testing and maintenance of software systems because it
provides information about the structure and relationships of the program’s modules. In general,
program analysis is performed based on either a control flow graph or a dependence graph. However,
in the case of aspect-oriented programming (AOP), the control flow graph (CFG) or dependence
graph (DG) alone is not enough to model the properties of aspect-oriented (AO) programs. Although
AOP is good for the modular representation of crosscutting concerns, a suitable model for program
analysis is required to gather information on an AO program’s structure for the purpose of
minimizing maintenance effort. In this paper, the Aspect-Oriented Dependence Flow Graph (AODFG)
is proposed as an intermediate representation model for the structure of aspect-oriented programs.
The AODFG is formed by merging the CFG and DG, thereby gathering more information about the
dependencies between join points, advice, aspects and their associated constructs, together with
the flow of control from one statement to another. We discuss the performance of the AODFG by
analysing some examples of AspectJ programs taken from the AspectJ Development Tools (AJDT).
Keywords
dependence flow graph, control flow graph, dependence graph, aspect-oriented, program analysis,
intermediate representation, maintenance.
1. Introduction
Program analysis is a key activity in debugging [13], testing [9] and maintenance [16],
performed to optimize functionality. In software engineering, debugging is the activity of
locating and resolving, or avoiding, errors. Maintenance is the process of enhancing or
optimizing the functionality and usability of the system. The main purpose of analysis in
software engineering is to obtain information about the program by tracing it dynamically or
statically. Such information is very helpful for improving the accuracy of array-subscript and
variable-aliasing analysis, for testing the legality of loop transformations, for detecting
potential run-time errors, and for providing user feedback in interactive human-computer
environments [26]. Analysis can be performed in every phase of software development, such as
requirements, design, testing, and maintenance.
DOI : 10.5121/ijsea.2014.5609
Ideally, program analysis will allow us to obtain in-depth information about the
structure of the program without modifying the general framework. Program analysis through
intermediate representations also paves the way for more intensive optimization approaches and
provides better feedback to the maintainer for enhancing the usability and functionality of the
program. It is very important to depict the relationships among the code components of the
program. So far, research in this area has focused on establishing intermediate representation
tools and their functionality for AO programs, such as call graphs [10, 26, 37] and control flow
and data flow graphs [11, 14, 27, 28].
Aspect orientation is an improvement of the object-oriented (OO) paradigm proposed by Kiczales
et al. [33], with the goal of enhancing software maintainability through new modularization
mechanisms for the encapsulation of crosscutting concerns. It is very useful in software
engineering for reducing the complexity of program development, especially for reverse
engineering and maintenance activities such as program analysis, slicing and refactoring.
Separation of concerns [32] makes it possible to identify, encapsulate and manipulate, in
isolation, only those parts of the software that are relevant to a given concept, object or intention.
AOP presents unique opportunities and problems for program representation schemes. The
crosscutting concerns produce complex structures to the program. To enable the development of
effective analysis for AOP, the program representation scheme must appropriately preserve the
structure associated with the use of features such as pointcuts, join points, advice and
introductions. It is therefore of interest to analyse the structure of an AOP program and to
present it in a proper and informative view.
The approach proposed in this paper is an intermediate representation, used as an analysis tool,
that is based on the dependence flow graph (DFG) to capture the details of code relationships.
The DFG was originally used for procedural programming [13] to show detailed information about
the code structure. It combines two different kinds of representation: the CFG and the DG. The
CFG is used to produce the information about the flow worklist that is used to propagate the
executable flag. It is a model in which each node (or point) corresponds to a program statement,
and each arc (or directed edge) indicates the flow of control from one statement to another [35].
The DG provides a data structure that can be traversed to obtain information about dependencies
among the nodes. A DG is a directed graph normally used to represent the dependencies of several
objects on each other. In OO programs [36], it is a collection of method dependence graphs, one
representing the main() method or a method in a class of the program, with additional arcs
representing direct or indirect dependencies between a call and the called method, as well as
transitive interprocedural data dependencies. In AO programs, the DG is used to represent the
dependencies between join points, advice, aspects and their associated constructs. In our study,
we extend the use of the DFG from procedural programming to aspect-oriented programming.
In this paper, we propose an analysis tool for aspect-oriented programs. The tool is able to
identify which part of an aspect or non-aspect program is affected by a specific aspect or
non-aspect program. The tool can provide a better understanding of how the analysis output helps
software engineers analyse the dependencies among the AO code in the program. The maintainer
will not need to spend extra time understanding the structure and relationships among the
variables in order to modify the program. In particular, we use the DFG, first introduced by
Pingali et al. [13] for procedural programming. This research focuses on the program analysis
activity and proposes a new intermediate representation called the Aspect-Oriented Dependence
Flow Graph (AODFG). The AODFGs of some small AO systems have been generated to verify and
validate its feasibility and correctness. It should be useful for maintenance purposes,
especially corrective and adaptive maintenance.
This paper is organized as follows: Section 2 reviews related research in program
representation, focusing on aspect orientation. Section 3 describes the characterization of CFG
and DG analysis in producing the DFG. Section 4 presents the construction of the AODFG. Section 5
discusses the performance evaluation of the analysis results and, finally, Section 6 contains
the conclusion and future work.
2. Aspect-oriented Programming Paradigm

AOP introduces a new modularization unit, the aspect, which focuses on a specific crosscutting
functionality in the program. By weaving crosscutting concerns into the core classes, it creates
applications that are easier to maintain. AOP does not replace object orientation; it
complements it. AOP behaviour looks as if it should have structure, but that structure is
difficult to capture in code with traditional object-oriented techniques because of the
complexity of the relationships.

A crosscutting concern, the central notion of AOP, is a behaviour that cuts across multiple
points in the object model. As a development methodology, AOP recommends abstracting and
encapsulating crosscutting concerns.

Figure 1 [39] is an example of OO code for which AOP is useful, namely for measuring the amount
of time taken to invoke a particular method.

Figure 1 Class BankAccountDAO

While this code works, there are a few problems with the traditional approach:
4. International Journal of Software Engineering Applications (IJSEA), Vol.5, No.6, November 2014
1. It is extremely difficult to turn metrics on and off, since we have to manually add the code
in the try/finally block to each and every method or constructor we want to benchmark.
2. The profiling code really does not belong sprinkled throughout the application code; it makes
the code bloated and harder to read.
3. If we want to expand this functionality to include a method or failure count, or even to
register these statistics with a more sophisticated reporting mechanism, we have to modify a lot
of different files (again).

This approach to metrics is very difficult to maintain, expand and extend, because it is
dispersed throughout the entire code base. Aspect-oriented programming gives us a way to
encapsulate this type of behaviour. It allows us to add behaviour such as metrics around our
code. For example, AOP provides programmatic control to specify that we want calls to
BankAccountDAO to go through a metrics aspect before executing the actual body of that code.
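For illustration, the manual try/finally timing pattern criticized in points 1 to 3 might look like the following plain-Java sketch (the class and method names are hypothetical stand-ins; the paper's Figure 1 shows the real listing):

```java
// Sketch of the tangled manual-metrics pattern: every method to be
// benchmarked repeats the same try/finally timing boilerplate.
class BankAccountDAO {
    static long insertNanos = 0;  // accumulated time spent in insert()

    void insert() {
        long start = System.nanoTime();
        try {
            // ... actual persistence logic would go here ...
        } finally {
            // metrics code tangled into the business method
            insertNanos += System.nanoTime() - start;
        }
    }

    public static void main(String[] args) {
        new BankAccountDAO().insert();
        System.out.println("time spent in insert(): " + insertNanos + " ns");
    }
}
```

Turning such metrics off means editing every instrumented method by hand, which is exactly the maintainability problem that moving the timing code into an aspect removes.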
The most popular AOP language is AspectJ, an extension of Java that enables developers to work
with crosscutting concerns effectively [38].
AOP makes it possible to work with abstractions that correspond more directly to the different
concerns in software [2]. It is used to encapsulate crosscutting behaviours into modular units
called aspects. These units are composed of advice, which realizes the crosscutting behaviour,
and pointcut descriptors, which designate the points in the program where the advice is
inserted. A join point is a well-defined point in the program flow where horizontal bundling
occurs in the code. Examples of join points are method and constructor calls or executions,
attribute accesses, and exception handling.
A pointcut picks out join points, for instance when a particular method call is made (see
Figure 2 for an example [39]). It selects certain join points in the program flow along with
some context values specific to those points. For example, int_A_a_int() picks a join point when
a call to the method int a(int) of a class A is made.

Advice is executable code that provides the actual implementation of a concern. It is executed
when a given pointcut is reached. The advice’s code can run before, after, or instead of the
join points it applies to. Figure 2 shows the implementation of before, after and around advice
in an example program [39].
Figure 2 Aspect Showcase.java
• The before advice of the Showcase aspect will run before the int_A_a_int() pointcut’s join
points; in this particular case, before all calls to the int A.a(int) method.
• The after advice will run after one of the join points of either the int_A_all_int() or
all_all_c_all() pointcut; in this example, after the methods void intro.B.c(double), String
intro.A.c(String), int intro.A.b(int) or int intro.A.a(int) finish.
• The around advice will run instead of all join points picked by the all_all_c_all() pointcut.
This means that when such a join point is reached, control passes to the advice, which can then
decide what to do.
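The before/after/around execution order can be mimicked in plain Java (no AspectJ weaving; a hand-written wrapper with hypothetical names stands in for what the weaver generates) to make it concrete:

```java
import java.util.function.IntUnaryOperator;

// Plain-Java analogy for advice: a wrapper decides what runs before,
// after, or instead of the join point (here, a call to joinPoint).
// This is a hand-written stand-in for woven AspectJ advice.
class AdviceSketch {
    static StringBuilder log = new StringBuilder();

    static int around(IntUnaryOperator joinPoint, int arg) {
        log.append("before;");                   // before advice
        int result = joinPoint.applyAsInt(arg);  // proceed(): the original body
        log.append("after;");                    // after advice
        return result;
    }

    public static void main(String[] args) {
        int r = around(x -> x + 1, 41);  // "advised" call to an int a(int) method
        System.out.println(r + " " + log);
    }
}
```

An around advice that never calls the original body corresponds to a wrapper that skips `joinPoint.applyAsInt(arg)` and returns its own result, which is exactly the "run instead of the join point" behaviour described above.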
2.1 Modelling of AOP Program Structure

Recently, AOP has been making its way into almost all stages of the software development life
cycle. There has been much research in the areas of requirements engineering [4, 5],
architectural modelling [6, 19], design [3, 7, 12] and testing [8]. In this section, we briefly
explain the idea of analysing the complicated structural relationships of an aspect-oriented
program by representing it with a DFG, which is closely related to the CFG and DG.
In a CFG, nodes represent either assignment statements or conditional expressions that affect
the flow of control of the program, and edges represent the possible transfer of control between
statements. The approaches differ with respect to the handling of branching and the merging of
branches, and the representation of statements that are always executed together [9].
An example CFG is shown in Figure 4(a), derived from the skeleton presented in Figure 3. For
aspect-oriented CFG analysis, Xu [27] proposes an approach for program slicing of AspectJ
software based on a novel data flow and control flow program representation. He describes a
general approach for constructing a static data flow and control flow representation for
AspectJ, for the purposes of dependence analysis, program slicing, and similar inter-procedural
analyses. He further conducted an experimental study on 37 AspectJ programs using AJANA, an
analysis framework based on the abc AspectJ compiler. The results were compared with an analysis
of the woven bytecode and supported the claim that the proposed model is suitable for static
analysis. Cacho et al. [28] use a CFG for AOP in presenting a novel exception handling mechanism
that promotes improved separation between normal and error handling code. They promote better
maintainability of normal and error handling code by separating the handlers and eliminating
annoying exception interface declarations. In their study they develop EJFlow, a small syntactic
addition to AspectJ. A CFG allows us to formulate a simple algorithm to analyse the code based
on abstract interpretation of the control structure. However, a program is not only about
control structure; analysis is also about discovering data dependencies among statements and
their relationships.
The DG arises from the data flow of the program. It shows the relationships among the statements, as in Figure 3(b): each oval represents a statement (S) and each arrow represents a data dependency. Sg arises from Sf, and Sh and Si follow Sg in execution order. For aspect orientation, Zhao and Rinard [1] proposed an AO code representation based on an adaptation of the dependence graph to support the slicing of AO code. They extended their previous study on system dependence graphs (SDG) [35] by providing a solid foundation for further analysis of AOP programs using the dependence graph. Mohapatra [11] then proposed a dependence-based representation to capture dynamic dependences between statements, improving on [1] by producing the Dynamic Aspect Oriented Dependence Graph (DADG) as an intermediate representation. The DADG was used to represent various dynamic dependences between the statements of an AOP program.
The DFG was proposed by Pingali et al. [13] for procedural programming. The original study is based on two crucial requirements. First, the representation must be a data structure that can be rapidly traversed to determine dependence information. Second, the representation must be a program in its own right, with a parallel, local model of execution. They propose the DFG, which has both of these properties. An interesting feature of this representation is that it naturally incorporates the best aspects of many other representations, including call graphs, dependence graphs and control flow graphs, speeding up analysis and optimization [17], since it provides hybrid information that comes from both the CFG and the DG.
In maintenance activities, constructing a global understanding of the program requires abstraction. The control flow graph is very useful here, because it allows us to formulate a simple algorithm to visualize the structure of the program, based on interpretation to find the possible paths that need to be extracted [16]. Dependence analysis is a very useful tool for instruction scheduling, as it determines the ordering relationships between code executions [22]. Algorithms that use the various dependence graphs are more complex, and none of them can find all possible paths without some program transformation. However, the asymptotic complexity of such algorithms can be better than that of algorithms that use control flow graphs.
The DFG is an ideal program representation for obtaining a region of the dependencies and flows of a program. It can be viewed as a data structure that can be traversed efficiently for dependence information, and at the same time it can be viewed as a precisely defined language with a local operational semantics. It can help in reverse engineering of a program from the perspectives of reuse, refactoring, reengineering or slicing. Dependence flow graphs are a synthesis of ideas from data dependence graphs and control flow computation. As in the data dependence graph, the dependence flow graph can be viewed as a data structure in which arcs represent dependencies between operations.
However, unlike data dependence graphs, dependence flow graphs are executable, and their execution semantics, called dependence-driven execution, is a generalization of the data-driven execution semantics of data flow graphs. In data flow graphs, nodes represent functional operators that communicate with each other by exchanging value-carrying tokens along arcs in the graph. Abstractly, this concept represents linked lists in general by using du- or ud-chains as a function from a variable and basic-block position pair. In the following sections, we look at the computation process for generating the AODFG, which uses data flow analysis and dependence analysis for aspect-oriented programs.
3. Theoretical Characterization of DFG

3.1 Du and Ud Chains

It is often convenient to directly link the labels of statements that produce values to the labels of statements that use them. For each use of a variable, the set of all definitions that reach that use is called a use-definition chain, or ud-chain. For each definition, the set of all uses it reaches is called a definition-use chain, or du-chain. The standard definitions of du-chains and ud-chains are given in Definitions 1 and 2.
Definition 1. A definition of variable x is said to reach a use of x if there is a control flow path from the definition to the use that does not pass through any other definition of x. A du-chain for variable x is a node pair (n1, n2) such that n1 defines x, n2 uses x, and the definition of x at n1 reaches the use of x at n2.

Definition 2. A use of variable x is said to reach a definition of x if there is a control flow path from the use to the definition that does not pass through any other definition of x. A ud-chain for variable x is a node pair (n1, n2) such that n1 uses x, n2 defines x, and the use of x at n1 reaches the definition of x at n2.
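As a minimal illustration of Definition 1, du-chains for straight-line code (no branches) can be computed by scanning forward from each definition until the variable is redefined. The statement encoding and variable names below are our own illustrative assumptions, not taken from the paper's figures:

```java
import java.util.*;

// Minimal du-chain computation for straight-line code, following
// Definition 1: a definition at n1 reaches a use at n2 if no other
// definition of the same variable lies between them.
public class DuChains {
    // Each statement records the variable it defines (or null) and the variables it uses.
    record Stmt(String def, List<String> uses) {}

    static List<int[]> duChains(List<Stmt> stmts) {
        List<int[]> chains = new ArrayList<>();
        for (int n1 = 0; n1 < stmts.size(); n1++) {
            String x = stmts.get(n1).def();
            if (x == null) continue;
            for (int n2 = n1 + 1; n2 < stmts.size(); n2++) {
                if (stmts.get(n2).uses().contains(x)) chains.add(new int[]{n1, n2});
                if (x.equals(stmts.get(n2).def())) break; // redefinition kills the chain
            }
        }
        return chains;
    }

    public static void main(String[] args) {
        // S0: a = 1;  S1: b = a + 1;  S2: a = b;  S3: c = a + b;
        List<Stmt> prog = List.of(
            new Stmt("a", List.of()),
            new Stmt("b", List.of("a")),
            new Stmt("a", List.of("b")),
            new Stmt("c", List.of("a", "b")));
        for (int[] ch : duChains(prog))
            System.out.println("du-chain: (S" + ch[0] + ", S" + ch[1] + ")");
    }
}
```

Reversing the roles of n1 and n2 in each reported pair gives the corresponding ud-chains of Definition 2.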
Definition 3. A du-chain for variable x is an edge pair (e1, e2) such that
1. the source of e1 defines x,
2. the destination of e2 uses x, and
3. there is a control flow path from e1 to e2 with no assignment to x.

3.2 Control Flow Graphs

In this section, we present control flow analysis, which is one of the analysis phases used to build the DFG. We use an iterative data flow analyzer that uses dominators to discover loops. Control flow analysis is important for characterizing the flow of a program, so that any unused generality can be removed and the manipulation of the data can be optimized.

Definition 4. The control flow graph Gf = (N, E) of a function f has one node na ∈ N for each statement a in f and two additional nodes nin, nout. We add an edge (na, na') if the statement a' is executed immediately after the statement a. For the first statement a1 in the function, we introduce an edge (nin, na1). Furthermore, we add edges (na', nout) for each node na' associated to a statement a' after which the control flow leaves the function, because of a return statement or the right brace that terminates the function. The control flow graph of an empty function, i.e., a function without any statements, consists of N = {nin, nout} and E = {(nin, nout)}. The node nin is the only entry node and the node nout the only exit node of the control flow graph. Note that each node of Gf (except nin and nout) corresponds to one statement in the function f.
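Definition 4 translates almost directly into code. The sketch below is our own illustration under simplifying assumptions (statements are given in execution order, and a statement written as "return" is the only way control leaves early); it builds the edge set E with the nin and nout nodes:

```java
import java.util.*;

// Sketch of Definition 4: build the edge set E of the control flow graph
// Gf = (N, E) of a function, given its statements in execution order.
// "nin" and "nout" are the extra entry and exit nodes; a statement written
// as "return" jumps straight to nout.
public class CfgBuilder {
    static List<String[]> buildEdges(List<String> stmts) {
        List<String[]> edges = new ArrayList<>();
        if (stmts.isEmpty()) {                         // empty function:
            edges.add(new String[]{"nin", "nout"});    // E = {(nin, nout)}
            return edges;
        }
        edges.add(new String[]{"nin", stmts.get(0)});  // edge (nin, a1)
        for (int i = 0; i < stmts.size(); i++) {
            String a = stmts.get(i);
            if (a.equals("return") || i == stmts.size() - 1)
                edges.add(new String[]{a, "nout"});            // control leaves f
            else
                edges.add(new String[]{a, stmts.get(i + 1)});  // (na, na')
        }
        return edges;
    }

    public static void main(String[] args) {
        for (String[] e : buildEdges(List.of("a1", "a2", "return")))
            System.out.println(e[0] + " -> " + e[1]);
    }
}
```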
We present a simple Java program, shown in Figure 5, that computes Fibonacci numbers [15].

When we analyse the source code statically, the technique used to obtain the CFG starts by discovering control structures such as if, else and if-then-else, together with loops, in the intermediate code. Then, we extract the code line by line and represent it visually as a flowchart to make it clearer to the eye.

Next, we identify the basic blocks, each a straight-line sequence of code with one entry and one exit point. The characteristics used to produce basic blocks are given in Definition 5.
Definition 5. The characteristics of the analysis policy:
Char1: the entry point of the routine x,
Char2: a target branch y, or
Char3: an instruction following a branch y or a return to x.

Such instructions are defined as leaders. Each leader flows to another, in sequence, until the exit. The flow becomes clearer for analysing backward data flow once an entry block is added as a successor and an exit block is added at the end of the branches.

By merging the code in Figure 5(a), the regions of data shown in Figure 5(b) can be formed. The nodes for lines 1 through 4 form a basic block B1. Lines 6 through 11 form another block B6, which has a back edge to line 4 in block B4. Each of the other nodes is a basic block unto itself: the node for line 7 becomes B2, the node for line 8 becomes B3, and the node for line 5 becomes B5.

Figure 5: Fibonacci computing program and control flow graph.
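The leader rules of Definition 5 can be sketched as follows. The instruction encoding and line numbers here are our own illustrative assumptions, not the Fibonacci code of Figure 5:

```java
import java.util.*;

// Sketch of Definition 5's leader rules: an instruction is a leader if it is
// (Char1) the routine entry, (Char2) the target of a branch, or (Char3) the
// instruction following a branch or a return. Each leader opens a basic block.
public class Leaders {
    record Instr(int line, Integer branchTarget, boolean isReturn) {}

    static SortedSet<Integer> leaders(List<Instr> code) {
        SortedSet<Integer> result = new TreeSet<>();
        result.add(code.get(0).line());                       // Char1: entry point
        for (int i = 0; i < code.size(); i++) {
            Instr in = code.get(i);
            if (in.branchTarget() != null) result.add(in.branchTarget()); // Char2
            if ((in.branchTarget() != null || in.isReturn()) && i + 1 < code.size())
                result.add(code.get(i + 1).line());           // Char3
        }
        return result;
    }

    public static void main(String[] args) {
        List<Instr> code = List.of(
            new Instr(1, null, false),
            new Instr(2, 5, false),   // branch to line 5
            new Instr(3, null, false),
            new Instr(4, null, true), // return
            new Instr(5, 2, false));  // back edge to line 2
        System.out.println(leaders(code));
    }
}
```

Splitting the instruction list at each leader then yields the basic blocks described above.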
3.3 Dependence Flow Graphs
Dependence analysis constructs a graph, called the dependence graph, that represents the dependences in the program. In our study, however, we do not present the dependence graph by itself but rather the information from the dependence analysis. The purpose of dependence analysis is to determine the ordering relationships between instructions that must be satisfied for the code to execute correctly.
Compared to control flow analysis, dependence analysis can be applied at any level of the program, because dependence analysis is performed based on statement (S) execution. If S1 precedes S2 (S1 δ S2) in their execution order, we know that S2 depends on S1. There are four types of data dependences [15]:

Definition 6. The character types of dependencies:
Type1: Flow (true) dependence; if S1 δ S2 and the former sets a value that the latter uses.
Type2: Anti dependence; if S1 δ S2, S1 uses some variable's value, and S2 sets it.
Type3: Output dependence; if S1 δ S2 and both statements set the value of some variable.
Type4: Input dependence; if S1 δ S2 and both statements read the value of some variable.

Figure 6 shows an example containing the four types of dependence explained above. Figure 6(a) is a simplified version of the analysed code and Figure 6(b) is the dependence graph of Figure 6(a). The dependence between S3 and S4 is Type1 (flow dependence): the former sets a value that the latter uses. In the reverse order, S3 uses some variable's value (e) and S4 sets it, which is Type2. S3 and S5 both set the value of some variable, as described in Type3. Finally, there is a Type4 dependence between S3 and S5, since both read the value of e.

Figure 6: Example of dependence graph.
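The four dependence types can be checked mechanically from each statement's def and use sets. A small sketch (the statement contents in the comments are our own illustration, only loosely modelled on Figure 6):

```java
import java.util.*;

// Classify the dependence of a later statement S2 on an earlier S1 (S1 δ S2)
// from their def/use sets: flow (S1 defines what S2 uses), anti (S1 uses what
// S2 defines), output (both define the same variable), input (both read it).
public class DepTypes {
    static List<String> classify(Set<String> def1, Set<String> use1,
                                 Set<String> def2, Set<String> use2) {
        List<String> types = new ArrayList<>();
        if (!Collections.disjoint(def1, use2)) types.add("flow");
        if (!Collections.disjoint(use1, def2)) types.add("anti");
        if (!Collections.disjoint(def1, def2)) types.add("output");
        if (!Collections.disjoint(use1, use2)) types.add("input");
        return types;
    }

    public static void main(String[] args) {
        // S3: d = e + 1;   S4: e = d * 2;
        System.out.println(classify(Set.of("d"), Set.of("e"),
                                    Set.of("e"), Set.of("d"))); // [flow, anti]
        // S3: d = e + 1;   S5: d = e - 1;
        System.out.println(classify(Set.of("d"), Set.of("e"),
                                    Set.of("d"), Set.of("e"))); // [output, input]
    }
}
```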
4. AODFG Construction
This section discuss on the extension of the construction of both control flow and data
dependency to come up with AODFG. We assume that AODFG can help maintainer to manage
the maintenance activity with aspect-oriented program. Therefore our approach focuses on
maintainer view to understand the architecture of the program in a short time.
Our approach, in the case of aspect-oriented, shares the same viewpoint with procedural [20] and
OO approach in the sense that it is also a collection of information about the dependencies of the
data and the flow of control represent in a hierarchical manner. But the different between AO and
others is only the AO features that exist in aspect code). As a concept, AO survival is depending
on base code which is OO. Base code which normally includes classes, interfaces, and standard
Java features or constructs, and aspect code which put into practice the crosscutting concerns in
the program by using aspect, advice, etc. [23].
Figure 7 depicts the macro view of the AODFG construction. It shows that the AODFG diagram is generated from control flow analysis and dependence flow analysis; it brings together the information from the control flow and the dependence flow and represents this hybrid information in a single graph.

Figure 7: The macro view of AODFG
4.1 Aspect-oriented Construct in AODFG
We adapt the DFG algorithm to suit aspect-oriented features. The AODFG algorithm consists of the following phases:

i. Analyze the control flow structure of the program with the technique used in the CFG: build the control flow graph Gf = (N, E) of each function f as in Definition 4, with one node per statement plus the entry and exit nodes nin and nout, and an edge (na, na') whenever statement a' is executed immediately after statement a.

ii. Analyze the dependencies among the statements of the program with the technique used in the dependence graph, classifying each dependence as flow (true), anti, output or input dependence, as in Definition 6.
iii. We use AspectJ as the target language and treat advice execution as a method call. The AOP features introduced are the following:

Join point: AspectJ provides a join point object in order to access context information. The method join point is prepared for accessing parameters. Since the parameters of a method call are determined at runtime, the caller of the method call is handled as references to all parameters of the method at the join point.

Pointcut: An advice depends on a pointcut definition. Since a pointcut determines an advice execution, we connect a dependency edge from the pointcut to the advice.

Advice call: An advice call consists of an advice type (before, after or around). A vertex corresponding to a join point shadow is regarded as a caller vertex of the advice.
iv. Construct a graph that contains information about the control flow and data dependencies
in the program.
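Phases i through iv above can be summarized in code. In the sketch below, the node names, the "cf"/"dd" edge labels, and the advice wiring are all our own illustrative assumptions, not the tool's actual data structures; it merges control flow edges and dependence edges into one labelled graph and adds a pointcut-to-advice dependency edge as described in phase iii:

```java
import java.util.*;

// Illustrative AODFG assembly: one graph whose edges are labelled either
// "cf" (control flow, phase i) or "dd" (data dependence, phase ii), plus
// the AO edges of phase iii: a pointcut -> advice dependence edge, and the
// join point shadow treated as the caller of the advice.
public class Aodfg {
    static List<String[]> buildExample() {
        List<String[]> edges = new ArrayList<>(); // each entry: {from, to, label}
        // Phase i: control flow of the base code.
        edges.add(new String[]{"nin", "S1", "cf"});
        edges.add(new String[]{"S1", "S2", "cf"});
        // Phase ii: data dependence between statements.
        edges.add(new String[]{"S1", "S2", "dd"});
        // Phase iii: pointcut M5 guards advice M9 (dependence edge), and the
        // join point shadow at S2 acts as the caller of the advice.
        edges.add(new String[]{"M5", "M9", "dd"});
        edges.add(new String[]{"S2", "M9", "cf"});
        edges.add(new String[]{"M9", "nout", "cf"});
        return edges;
    }

    public static void main(String[] args) {
        // Phase iv: the merged graph carries both kinds of information.
        for (String[] e : buildExample())
            System.out.println(e[0] + " -[" + e[2] + "]-> " + e[1]);
    }
}
```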
From the AODFG algorithm, we developed a tool to visualize the output of analyzing an AspectJ program, as shown in Figure 8.

4.2 AODFG Representation Tool - AOST

In order to test the functionality of our study, we developed the Aspect Oriented Slicing Tool (AOST). AOST is a tool for visualizing the control view and dependence view of a Java AspectJ program. We used C# as the programming language and Graphviz as the graph representation tool. AOST can be used to analyze an AspectJ program line by line; it identifies the methods, aspects and any related statements, and then visualizes the relationships as an AODFG graph.

AOST is divided into three compartments: original code, generated code and DFG. Figure 8 shows the AOST interface. The detailed operation is as follows.

i. Original Code

As shown in Figure 8, the Original Code view shows a complete Java program. We can import a Java class from a location; after selecting the destination using the File menu, the complete class appears in this view. To analyze a part of a program, or a full program that includes more than one class, we can copy the code from each class or aspect manually and paste it into the view, one after another.

ii. Generated Code

The next step, after the original code is viewed, is triggered by pressing the "Generate" button. This button sorts the code and classifies it following the rules of Java syntax. For example, consider the following rule, which describes the structure of a 'while' statement. A rule has a left-hand side (LHS) and a right-hand side (RHS), and consists of terminal and nonterminal symbols. A grammar is a finite nonempty set of rules. An abstraction (or nonterminal symbol) can have more than one RHS, as follows:

while_stmt → while ( logic_expr ) stmt
stmt → single_stmt
     | begin stmt_list end

Then, we identify the program constructs as described in Section 4.1. This is very important in order to differentiate between:
• object-oriented and aspect-oriented code,
• data dependencies and their relationships, and
• the flow of control and its relationships.

The output of this level is analyzed code derived from the original code. The difference in this view is that the code is represented as sorted methods, aspect constructs (aspect, pointcut, join points and advice) and statements.

iii. DFG view

In this level, we represent the constructed graph based on the information from the previous steps. Graphviz works at this level. Graphviz is open source graph visualization software; it is a way of representing structural information as diagrams of abstract graphs for the AODFG. Its various functions are important for showing the data dependencies and the control flow relationships.

Figure 8: AOST Interface
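The hand-off from the DFG view to Graphviz amounts to emitting DOT text, with node shapes distinguishing constructs and edge styles distinguishing control flow from data dependence. The shape and style choices in this sketch are our own illustration, not necessarily those in AOST's legend:

```java
// Illustrative DOT emission for a tiny AODFG fragment; Graphviz renders the
// graph from this text (e.g. via `dot -Tpng`). Shapes and styles here are
// assumed for illustration, not taken from AOST.
public class DotEmit {
    static String emit() {
        StringBuilder dot = new StringBuilder("digraph AODFG {\n");
        dot.append("  M5 [shape=ellipse, label=\"pointcut M5\"];\n");
        dot.append("  M9 [shape=box, label=\"after advice M9\"];\n");
        dot.append("  S17 [shape=diamond, label=\"S17\"];\n");
        dot.append("  M5 -> M9 [style=dashed];  // data dependence edge\n");
        dot.append("  S17 -> M9 [style=solid];  // control flow edge\n");
        dot.append("}\n");
        return dot.toString();
    }

    public static void main(String[] args) {
        System.out.print(emit());
    }
}
```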
To present an example of an AODFG, we used a modified version of an AspectJ program taken from [24]. Figure 9 gives the analysed AspectJ code from the program named ShadowTraker; we take one aspect from the whole program, named PointShadowProtocol. The analysed code is generated with labels: C represents a class or aspect, M a method, and S a statement. The program output is shown in Figure 10. The AODFG contains control flow and data dependence edges between the AO nodes.

Figure 10 illustrates the AODFG for the running example, focusing on the after advice (M9). Since the relevant control is associated with placeholder decision nodes, the algorithm can take into account the data dependencies between such nodes and the nodes that define the data. For example, the selection decision node S17 is data dependent on node M9. At the same time, S17 has a control flow edge to M9_p, a reference to the receiver object of the crosscut call site, because this node represents the target pointcut that guards the execution of the dynamic after advice (M9).

The graph represents a pointcut node (M5), a call node (M9), and the statements, which are differentiated by the type of shape. The bull's eye represents an aspect node. The control flow and data dependence edges are differentiated by edge type. The detailed shape types can be found in the legend of Figure 10.
Figure 9: Point Shadow Protocol Aspect Code
5. Evaluation of AODFG
In order to assess the effectiveness of the AODFG analysis tool, several AspectJ programs were analysed. Our study used ten benchmark AspectJ examples, shown in Tables 1, 2 and 3 [37], from the collection in the AspectJ Development Tools (AJDT) plug-in, with some code modifications. Our concern is the consistency of output between the CFG, DG and DFG. We also look at what can be extracted from the AODFG compared to the CFG or DG. For each program, the tables give the numbers of aspects, LOC, methods and statements, where AO denotes the aspect modules separately. LOC represents the number of lines of code, including class and aspect files. We count a pointcut as an AO module even though it does not contain any body code, since its structural style is the same as a module's. We verified the AODFGs generated by the tool against a manual inspection of the graph and the associated analysed source code for each of the aforementioned programs.
First, we extract the output data from the CFG. As described in the previous section, a CFG shows the relationships in which nodes represent either an assignment statement or a conditional expression that affects the control flow, and edges represent the possible transfer of control between statements. So the output that we need from the execution is the identification of every possible transition along the edges and the flow of control. Table 1 shows the output of the AODFG execution, and the output graph is shown in Figure 11.
Table 1: CFG Analysis data
Figure 11: CFG Output Graph
Then, we use the same programs to extract the output from the DG. The DG shows the relationships among the statements in the program, so the output that we need from the execution of the DG is the relationships among the data in the program. Table 2 shows the representation of the same programs in a DG view, and Figure 12 shows the corresponding DG output from Table 2.
Table 2: DG Analysis

Package/Program   Aspect   Dep. Node   Dep. Edge
eventPooling         1        91          88
Bean Example         1        15          14
Introduction         3        19          25
Aspect.GUI           2        42          41
HashablePoint        1        15          14
Coordination         1         9           8
Spacewar             4       261         251
Observer             2        16          53
telecom              3        42          39
DCM                  1        18          18
Figure 12: DG Output Graph
We repeatedly modified the source code with minimal customization to help the AODFG representation tool in the debugging process. For example, if we found an incorrect value of a variable related to AO, we tried to change it to a suitable one. We assume that a higher LOC makes the relationships in the program more complex. From Table 3, the quantity of AO modules is related to the value of neither methods nor statements; AODFG identifies the AO features in the program based on the AO source code present in it. For example, Introduction, with 234 LOC and 18 methods, has one AO feature, compared to Coordination, with 448 LOC, three methods and three AO features.
Table 3: DFG Analysis

Package/Program   Aspect   LOC   Method (OO)   AO   DFG Edge   DFG Node   Spread
eventPooling         1     108        3         1      102        99         3
Bean Example         1     159       14         1       27        15        12
Introduction         3     234       18         1       47        19        28
Aspect.GUI           2     101        0         8       77        72         5
HashablePoint        1      48        2         3       27        21         6
Coordination         1     448        3         3       17        13         4
Spacewar             4     636        0        40      438       377        61
Observer             2     243       16         2       35        40        64
telecom              3     119        7         8       61        63         2
DCM                  1     211        0         3       26        42        16
Figure 13: Output of DFG Analysis
From the experiment, we can see that the graphs generated by the tool were correct and consistently showed the same output as the CFG and DG. The advantage of the AODFG is that it presents the DG and CFG together in a single graph representation. In other words, we can obtain information about the flow worklist that helps us determine the executable flow, and we can also understand the dependencies among the object and aspect methods in the program.

This shows that representing AO software using an AODFG provides useful support for gaining better knowledge of the internal structure, even in complicated programs, by reducing the effort needed to understand the detailed structure of the program. It is another way of representing the code structure that yields more useful information, namely the dependencies within the program and its flow of control.
6. Conclusion and Future Work

An ideal program representation for aspect-oriented software would have a local execution semantics from which an abstract interpretation can easily be derived. It would also be a sparse representation of program dependencies, in order to yield efficient algorithms. Like a Necker cube, this representation permits two points of view: it can be viewed as a data structure that provides dependence information, and at the same time it can be viewed as a precisely defined language with a local operational semantics. The dependence flow graph is just such a representation.

As future research, this kind of representation can be matured into a tool compatible with maintenance tasks, so that it can be applied to many kinds of complex programs with multiple languages and environments. It would also be useful to plug this representation into a Java IDE, so that developers can easily carry out reverse and forward development activities to understand the program structure when they use AOP in their projects.

References

[1] Zhao, J., Rinard, M. (2003). System Dependence Graph Construction for Aspect-Oriented Programs. Laboratory for Computer Science.
[2] Masuhara, H., Kiczales, G., Dutchyn, C. (2002). Compilation Semantics of Aspect-Oriented
Programs, 17-26.
[3] E. Baniassad and S. Clarke. Aspect-Oriented Analysis and Design: The Theme Approach. Addison-
Wesley, 2005.
[4] I. Jacobson and P.-W. Ng. Aspect-Oriented Software Development with Use Cases (Addison-Wesley
Object Technology Series). Addison-Wesley Professional, 2004.
[5] A. Rashid and R. Chitchyan. Aspect-oriented requirements engineering: a roadmap. In Proceedings of
the 13th International Workshop on Software Architectures and Mobility, pages 35–41. ACM, 2008.
[6] M. Katara and S. Katz. Architectural views of aspects. In Proceedings of the 2nd International
Conference on AOSD, pages 1–10. ACM, 2003.
[7] J.-M. Jézéquel. Model driven design and aspect weaving. Software and Systems Modeling, 2008.
[8] Xu, D., Xu, W. (2006). State-based incremental testing of aspect-oriented programs. Proceedings
of the 5th international conference on Aspect-oriented software development - AOSD ’06, 180. New
York, New York, USA: ACM Press.
[9] Gold, R. (2010). Control flow graphs and code coverage. International Journal of Applied
Mathematics and Computer Science, 20(4), 739-749.
[10] Ishio, T., Kusumoto, S., Inoue, K. (2004). Debugging support for aspect-oriented program based on
program slicing and call graph. 20th IEEE International Conference on Software Maintenance, 2004.
Proceedings., 178-187. Ieee.
[11] Mohapatra, D. P. (2008). Dynamic Slicing of Aspect-Oriented Programs. Informatica, 32, 261-274.
[12] Y. R. Reddy, S. Ghosh, R. B. France, G. Straw, J. M. Bieman, N. McEachen, E. Song, and G. Georg. Directives for composing aspect-oriented design class models. Transactions on Aspect-Oriented Software Development, pages 75-105, 2006.
[13] Pingali, K., Beck, M., Johnson, R., Moudgill, M., Stodghill, P. (1991). Dependence flow graphs: an
algebraic approach to program dependencies. Proceedings of the 18th ACM SIGPLAN-SIGACT
symposium on Principles of programming languages - POPL ’91, 67-78. New York, New York,
USA: ACM Press.
[14] Yin, W., Jiang, L., Yin, Q., Zhou, L., Li, J. (2009). A Control Flow Graph Reconstruction Method
from Binaries Based on XML. 2009 International Forum on Computer Science-Technology and
Applications, 226-229. IEEE.
[15] Muchnick S.S. (1997), Advanced Compiler Design Implementation. Morgan Kaufmann Publishers,
USA.
[16] Beck, M., Johnson, R., Pingali, K. (1991). From Control Flow to Dataflow. Journal of Parallel and
Distributed Computing, 129(12), 118-129.
[17] Johnson, R., Pingali, K. (1993). Dependence-based program analysis. ACM SIGPLAN Notices,
28(6), 78-89.
[18] Hovsepyan, A., Scandariato, R., Baelen, S. V., Berbers, Y., Joosen, W. (2010). From Aspect-Oriented Models to Aspect-Oriented Code? The Maintenance Perspective, 85-96.
[19] Munoz, F., Baudry, B., Delamare, R., Le Traon, Y. (2009). Inquiring the usage of aspect-oriented
programming: An empirical study. 2009 IEEE International Conference on Software Maintenance,
137-146.
[20] Johnson, R. C. (1994). Efficient Program Analysis Using Dependence Flow Graphs. Cornell
University.
[21] Fuentes, L., Sánchez, P. (2007). Towards executable aspect-oriented UML models. Proceedings of
the 10th international workshop on Aspect-oriented modeling - AOM ’07, 28-34, New York, USA:
ACM Press.
[22] Würthinger, T. (2007). Visualization of Program Dependence Graphs. Johannes Kepler University Linz.
[23] Parizi, R. M., Azim, A., Ghani, A. (2008). AJcFgraph - AspectJ Control Flow Graph Builder for
Aspect-Oriented Software. Journal of Computer Science, vol. 3, pp. 170-181. Zhao, J. (2006).
Control-Flow Analysis and Representation for Aspect-Oriented Programs. 2006 Sixth International
Conference on Quality Software (QSIC’06), 38-48. IEEE.
[24] Johnson, R. C. (1994). Efficient Program Analysis Using Dependence Flow Graphs. Cornell
University N.Y USA.
[25] Lin, Y., Zhang, S., Zhao, J. (2009). Incremental Call Graph Reanalysis for AspectJ software. 2009
IEEE International Conference on Software Maintenance, 306-315. IEEE.
[26] Xu, G. (2007). Data-flow and Control-flow Analysis of AspectJ Software for Program Slicing.
Program. Ohio State University.
[27] Cacho, N., Filho, F. C., Garcia, A., Figueiredo, E. (2008). EJFlow: Taming Exceptional Control
Flows in Aspect-Oriented Programming. AOSD’08 (pp. 72-83).
[28] Cao, W. (2008). A Software Function Testing Method Based on Data Flow Graph. 2008 International
Symposium on Information Science and Engineering, 28-31. IEEE.
[29] Würthinger, T. (2007). Visualization of Program Dependence Graphs. Johannes Kepler University
Linz.
[30] G. Carlos, J.M. Murillo, Pablo A. Amaya, Aspect Oriented Analysis: a MDA Based Approach,
(2004), Quercus Software Engineering Group, University of Extremadura Spain.
[31] G. Kiczales, et al., Aspect Oriented Programming, (1997) Proc. Of ECOOP’97, LNCS 1241, Finland,
220-242.
[32] Elisa.Baniassad, Siobhan.Clarke, Theme: An Approach for Aspect-Oriented Analysis and
Design,(2004) Trinity College, Dublin ‘ICSE 2004’.
[33] Norman E. Fenton, Software Metrics: A Rigorous Practical Approach 2nd Edition, 1997, page 280,
PWS publishing Company, Boston UK.
[34] Jianjun Zhao, Slicing aspect-oriented software, In IWPC '02: Proceedings of the 10th International Workshop on Program Comprehension, page 251, Washington, DC, USA, 2002. IEEE Computer Society.
[35] AspectJ Team: AspectJ Development Environment. http://www.eclipse.org/ajdt/
[36] D. Sereni, O. de Moor, Static Analysis of Aspects, in AOSD'03. 2003, ACM: Boston, Massachusetts. p. 30-39.
[37] Hohenstein, U. D., Gleim, Using aspect-orientation to simplify concurrent programming. Proceedings of the tenth international conference on Aspect-oriented software development companion - AOSD '11, 2011.
[38] I. Kiselev, Aspect-Oriented Programming with AspectJ, SAMS, Indianapolis, p. 19.
[39] Chapter 1. What Is Aspect-Oriented Programming?, http://docs.jboss.org/aop/1.1/aspect-framework/userguide/en/html/what.html
Authors
Authors
Syarbaini Ahmad is a software engineering PhD student at Universiti Putra Malaysia. His research concerns aspect-oriented program analysis and slicing, specifically for maintenance purposes. He did his M.S. degree in real-time software engineering at Universiti Teknologi Malaysia.
Dr. Abdul Azim Abdul Ghani (Prof.) received his M.S. degree in Computer Science from the University of Miami, Florida, U.S.A. in 1984 and his Ph.D. in Computer Science from the University of Strathclyde, Scotland, U.K. in 1993. His research areas include software engineering, software metrics and software quality. He is now a professor in the Department of Software Engineering and Information System, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia.
Dr. Nor Fazlida Mohd Sani (Assoc. Prof.) is a lecturer in the Department of Computer Science at Universiti Putra Malaysia. She received her Ph.D. from Universiti Kebangsaan Malaysia in 2007. Her current research interests are authentication security, information security, program understanding, and program analysis. Dr. Nor Fazlida's research has been published in various international journals and conferences, and she frequently serves as a technical editor for journals and on program committees for international conferences.