It is well known that software maintenance plays a major role in the software development life cycle. As object-oriented programming has become the standard, it is important to understand the problems of maintaining object-oriented software systems. This paper evaluates object-oriented software systems through a change-requirement-traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues relate to change impact algorithms and the inheritance of functionality.
Contributors to Reduce Maintainability Cost at the Software Implementation Phase (Waqas Tariq)
Software maintenance is important and difficult to measure. Maintenance is the most expensive phase of software development. One of the most critical processes in software development is reducing software maintainability cost through the quality of the source code at the design step; however, there is a lack of quality models and measures that can help assess the quality attributes of the software maintainability process. Software maintainability suffers from a number of challenges, such as poor source code understanding, low quality of software code, and weak adherence to programming standards during maintenance. This work describes model-based factors for assessing software maintenance and explains the steps followed to obtain and validate them. Such a method can be used to reduce software maintenance cost. The results will enhance the quality of the source code, increase software understandability, reduce maintenance time and cost, and give confidence for software reusability.
Evaluation of the software architecture styles from maintainability viewpoint (csandit)
In the process of software architecture design, different decisions are made that have system-wide impact. An important design-stage decision is the selection of a suitable software architecture style. The lack of investigation into the quantitative impact of architecture styles on software quality attributes is the main problem in using such styles; consequently, the use of architecture styles in design is based on the intuition of software developers. The aim of this research is to quantify the impact of architecture styles on software maintainability. In this study, architecture styles are evaluated based on coupling, complexity, and cohesion metrics, and ranked by the analytic hierarchy process from the maintainability viewpoint. The main contribution of this paper is the quantification and ranking of software architecture styles from the perspective of the maintainability quality attribute at the stage of architectural style selection.
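As a rough illustration of how an analytic-hierarchy-process ranking of this kind works, the sketch below derives priority weights from a pairwise comparison matrix using the geometric-mean approximation. The style names and judgment values are invented for illustration and are not taken from the study.

```python
# Illustrative AHP ranking sketch (hypothetical judgments, not the paper's data).
from math import prod

def ahp_weights(matrix):
    """Approximate the AHP priority vector via the geometric-mean method."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparisons of three styles on maintainability,
# on Saaty's 1-9 scale (values are made up).
judgments = [
    [1.0, 3.0, 5.0],   # layered vs the others
    [1/3, 1.0, 2.0],   # pipe-and-filter
    [1/5, 1/2, 1.0],   # event-driven
]
weights = ahp_weights(judgments)
ranking = sorted(zip(["layered", "pipe-and-filter", "event-driven"], weights),
                 key=lambda p: p[1], reverse=True)
```

With these (invented) judgments the layered style receives the highest maintainability weight; a real study would also check the consistency ratio of the judgment matrix before trusting the ranking.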
A reliability estimation framework from the OO design complexity perspective has been developed in this paper. The proposed framework correlates object-oriented design constructs with complexity, and complexity with reliability. No framework in the literature estimates the software reliability of an OO design by taking complexity into consideration; this framework bridges the gap between object-oriented design constructs, complexity, and reliability. It measures and minimizes the complexity of a software design at an early stage of the software development life cycle, leading to a reliable end product. Reliability and complexity estimation models have been proposed following the framework: the complexity estimation model takes OO design constructs into consideration, and the proposed reliability estimation models take complexity into consideration for estimating the reliability of an OO design.
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors in order to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at various phases of the object-oriented software development life cycle. The aim is to find the best metric suite for software quality improvement through testability support. The ultimate objective is to establish the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Ijartes v2-i1-001: Evaluation of Changeability Indicator in Component Based Sof... (IJARTES)
Maintaining a software system is a major cost concern. The maintenance of a software system depends on how changes are made to it. The maintainability of a system depends on the flow of the software, its design pattern, and CBSS. The maintainability phase of a software system has four parts: analyzing, testing, stability, and the changes made to the system. In some application areas these systems have emerged very rapidly. Many companies purchase software instead of developing it; such companies have no interest in testing the system but want smoothness in the flow of the system during changes.
Changeability is one of the characteristics of maintainability. Software changeability is associated with refactoring, which makes code simpler and easier to maintain (enabling all programmers to improve their code). Factors that affect changeability include coupling between modules, lack of code comments, and the naming of functions and variables. Basically, "changeability" is the ability of a product or software to change the structure of the program; it is the rate at which the product allows modification of its components.
In this paper, changeability-based cost estimation is performed. Initially four components are taken; these components are evaluated based on coupling, cohesion, and interface metrics. Next, some changes are made to the existing components, and the components are evaluated again. On the basis of these two evaluations, a conclusion is drawn about changeability cost.
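The before/after comparison described above can be sketched as follows. The component names, metric values, and the weighting inside the score function are all invented for illustration; the paper's actual metrics and formulas are not reproduced here.

```python
# Hypothetical sketch of a before/after metric comparison for changeability cost.
# Components, values, and weights are invented, not taken from the paper.

def maintain_score(coupling, cohesion, interface_size):
    """Toy score: higher cohesion is better; coupling and large interfaces hurt."""
    return cohesion - 0.5 * coupling - 0.25 * interface_size

# (coupling, cohesion, interface size) per component, before and after a change.
before = {"C1": (4, 0.8, 6), "C2": (7, 0.5, 10)}
after  = {"C1": (5, 0.7, 7), "C2": (7, 0.6, 9)}

# Positive delta = the change degraded the component (higher changeability cost).
cost = {name: maintain_score(*before[name]) - maintain_score(*after[name])
        for name in before}
```

Here C1 got worse after the change (positive cost) while C2 improved slightly (negative cost); the evaluation-then-re-evaluation pattern mirrors the two-step procedure the abstract describes.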
The objective of this paper is to provide an insightful overview of various agent-oriented methodologies using an enhanced comparison framework based on criteria such as process-related criteria, steps- and techniques-related criteria, usability criteria, model- or "concepts"-related criteria, and support-related criteria. The results also incorporate input collected from users of the agent-oriented methodologies through a questionnaire-based survey.
Changeability has a direct relation to software maintainability and plays a major role in providing high-quality, maintainable, and trustworthy software. Changeability is a major factor when we design and develop software and its constituents. Developing programs and their constituent components with good changeability continually improves and simplifies testing and maintenance during and after implementation, and it encourages and supports improvement in software quality at the design stage. The research here highlights the importance of changeability both broadly and as an important aspect of software quality.
An interactive approach to requirements prioritization using quality factors (ijfcstjournal)
As the prevalence of software increases, so do the complexity and the number of requirements associated with a software project. This presents a dilemma for developers, who must clearly identify and prioritize the most important requirements in order to deliver the project within a given amount of resources and time. A number of prioritization methods have been proposed that provide consistent results, but they are very difficult and complex to implement in practical scenarios and lack a proper structure for analyzing the requirements. In this study, users can provide their requirements in two forms: text-based story form and use case form. Moreover, existing prioritization techniques have very little or no interaction with the users. So, in this paper an attempt has been made to make the prioritization process user-interactive by adding a second level of prioritization: after the developer has properly analyzed and ranked the requirements on the basis of quality attributes in the first level, the opinions of distinct users about the requirements priority sequence are taken. The developer then calculates the disagreement value associated with each user sequence in order to find the final priority sequence.
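The abstract does not define how the disagreement value is computed; one plausible realization, sketched below purely for illustration, is the Kendall tau distance between the developer's ranking and each user's ranking (the number of requirement pairs the two orderings rank in opposite order). The requirement names and sequences are hypothetical.

```python
# Hypothetical disagreement measure between priority sequences
# (Kendall tau distance); not necessarily the paper's exact formula.
from itertools import combinations

def disagreement(dev_order, user_order):
    """Count requirement pairs ranked in opposite order by the two sequences."""
    pos = {req: i for i, req in enumerate(user_order)}
    return sum(1 for a, b in combinations(dev_order, 2) if pos[a] > pos[b])

dev = ["R1", "R2", "R3", "R4"]      # developer's first-level ranking
u1  = ["R1", "R3", "R2", "R4"]      # one adjacent swap
u2  = ["R4", "R3", "R2", "R1"]      # complete reversal
d1, d2 = disagreement(dev, u1), disagreement(dev, u2)
```

A user sequence identical to the developer's scores 0, a single swap scores 1, and a full reversal of n requirements scores n*(n-1)/2, so lower values indicate closer agreement.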
ITERATIVE AND INCREMENTAL DEVELOPMENT ANALYSIS STUDY OF VOCATIONAL CAREER INF... (ijseajournal)
The software development process presents various types of models, with corresponding phases that must be followed to deliver quality products and projects. Despite the expertise and skills of systems analysts, designers, and programmers, systems failure is inevitable when a suitable development process model is not followed. This paper focuses on the Iterative and Incremental Development (IID) model and justifies its role in the analysis and design of software systems. The paper adopts a qualitative research approach that justifies and harnesses the relevance of IID in the context of systems analysis and design, using the Vocational Career Information System (VCIS) as a case study. The paper views IID as a change-driven software development process model. The results show system specifications, functional specifications, and design specifications that can be used in implementing the VCIS with the IID model. The paper concludes that in systems analysis and design it is imperative to choose a suitable development process that reflects the engineering mind-set, with heavy emphasis on good analysis and design for quality assurance.
An empirical evaluation of impact of refactoring on internal and external mea... (ijseajournal)
Refactoring is the process of improving the design of existing code by changing its internal structure without affecting its external behaviour, with the main aim of improving the quality of the software product. There is therefore a belief that refactoring improves quality factors such as understandability, flexibility, and reusability; however, there is limited empirical evidence to support such assumptions. The objective of this study is to validate or invalidate the claim that refactoring improves software quality. The impact of selected refactoring techniques was assessed using both external and internal measures. Ten refactoring techniques were evaluated through experiments assessing four external measures (resource utilization, time behaviour, changeability, and analysability, which are ISO external quality factors) and five internal measures (maintainability index, cyclomatic complexity, depth of inheritance, class coupling, and lines of code). The external measures did not show any improvement in code quality after the refactoring treatment. Among the internal measures, the maintainability index indicated better code quality for refactored code than for non-refactored code, while the other internal measures did not indicate any positive effect of refactoring.
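For reference, one widely used form of the maintainability index named above is the Oman/Hagemeister formula as rescaled to 0-100 in Visual Studio; variants exist, and the input numbers below are invented simply to show how refactoring that shrinks and simplifies code raises the index.

```python
# Maintainability index sketch (Visual Studio's 0-100 rescaling of the classic
# formula); the before/after measurements are hypothetical.
from math import log

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Higher values mean code that is easier to maintain."""
    raw = (171 - 5.2 * log(halstead_volume)
               - 0.23 * cyclomatic_complexity
               - 16.2 * log(loc))
    return max(0.0, raw * 100 / 171)

# Hypothetical numbers: the refactored version is smaller and simpler.
mi_before = maintainability_index(halstead_volume=1500, cyclomatic_complexity=12, loc=400)
mi_after  = maintainability_index(halstead_volume=900, cyclomatic_complexity=7, loc=250)
```

Because all three inputs shrink, the index rises, which is the direction of improvement the study reports for refactored code.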
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... (IJCSES Journal)
With the sharp rise in software dependability and failure cost, high quality has been in great demand. However, guaranteeing high quality in software systems that have grown in size and complexity, under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes essential to deliver software that has no serious faults. Object-oriented (OO) products, the de facto standard of software development, can with their unique features contain faults that are hard to find, or changes whose impacts are hard to pinpoint. The earlier faults are identified and fixed, the lower the cost and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is which metrics are related to class FP and what activities are performed. This study therefore brings together the state-of-the-art in prediction of FP that utilizes the CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles, and the results obtained are analysed and presented. They indicate that 29 relevant empirical studies exist and that measures such as complexity, coupling, and size are strongly related to FP.
Software quality engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing techniques related to software quality engineering and thereby designing a framework for rating software quality.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different data sets in 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results provide some implications for building quality models across project types.
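The Spearman coefficients that this review aggregates measure how monotonically a complexity metric tracks a quality attribute. A minimal from-scratch sketch (no ties handled; the WMC and fault-count values are invented, not data from any primary study):

```python
# Spearman rank correlation sketch (untied data); illustrative values only.

def spearman(x, y):
    """rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical WMC complexity values vs fault counts for five classes.
wmc    = [3, 8, 15, 4, 20]
faults = [0, 2, 5, 1, 9]
rho = spearman(wmc, faults)
```

Here faults rise monotonically with WMC, so rho is 1.0; real data sets (and tied values, which need the rank-averaging variant) give intermediate coefficients that a meta-analysis can then pool.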
Safety-driven Software Product Line architectures Design, A Survey Paper (Editor IJCATR)
Software architecture design is an important step of software development. Various design methods are currently available, each focusing on a certain perspective of architecture design. Quality-based methods in particular have received a lot of attention and are well developed for single-system architecture design. However, the use of quality-based design methods is limited in software product lines (SPL) because of the complexity and variabilities of SPL architecture. With increasing attention to software safety, improving safety has become an ever more important issue, especially for safety-critical systems. This study surveys existing research on Software Product Line Architecture (SPLA) design based on quality attributes and gives an overview of the intersection of software product line architecture design and safety-driven design, in order to classify existing work and discover open issues for further research. The study also investigates safety analysis at the architectural level and Safety-based Software Product Line Architecture Design (SSPLAD) approaches. Safety-driven software product line architecture design appears to be a topic still under discussion. The study shows that there is a large number of SPLA design methods; however, the use of safety-based design methods in software product lines is limited, due both to the variability property, which can potentially result in a large number of possible systems, and to the complexity of the safety attribute itself.
Software requirements prioritization is a recognized practice in requirements engineering (RE) that facilitates the management of stakeholders' subjective views as specified in their requirements listing. Since the RE process is naturally collaborative, its intensiveness from both the knowledge and human perspectives opens up the problem of decision making on requirements, which prioritization can facilitate. However, due to the large volume of requirements elicited for an ultra-large-scale system, the prioritization techniques proposed so far suffer setbacks in terms of efficiency, effectiveness, and scalability. This paper employs a more efficient ranking algorithm for requirements prioritization, motivated by the limitations of existing techniques. The major objective is to provide a well-defined, analysis-based ranking procedure suitable for prioritizing software requirements. An empirical evaluation of the proposed technique was made using a typical scenario of the Pharmacy Information System at the Obafemi Awolowo University Teaching Hospital Complex (OAUTHC) as a case study. The results show the computation of the positive ideal solution (PIS) and negative ideal solution (NIS), as well as the closeness coefficient (CC), for 4 requirements across 3 stakeholders. The CC yields the final ranks of the requirements: R4, with 2.09 points, is the most valued requirement, while R1 and R2, with CCs of 1.37 and 1.05, are next in the order of priority. The CC provides the medium through which problems of multiple-criteria decision making can be handled, so as to determine the order of priority of the available alternatives. The paper conveys encouraging evidence for the software engineering community that the approach can resolve redundantly specified requirements, providing the potential for effective and efficient decision making on the differences among prioritized requirements. Prioritizing software requirements with the recommended ranking procedure during software development is thus crucial in order to reduce development cost.
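The PIS/NIS/closeness-coefficient steps described above follow the general TOPSIS pattern. The sketch below shows the standard textbook variant with an invented decision matrix; it is not the paper's exact method (notably, standard TOPSIS yields closeness coefficients in [0, 1], whereas the reported CC values exceed 1, so the paper's variant evidently differs).

```python
# Minimal standard-TOPSIS sketch of the PIS/NIS/CC computation.
# Decision matrix and scores are hypothetical, not the paper's data.
from math import sqrt

def topsis(matrix):
    """Closeness coefficients for alternatives (rows) over benefit criteria (columns)."""
    n_crit = len(matrix[0])
    # Vector-normalize each column.
    norms = [sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    pis = [max(col) for col in zip(*v)]   # positive ideal solution
    nis = [min(col) for col in zip(*v)]   # negative ideal solution
    cc = []
    for row in v:
        d_pos = sqrt(sum((a - b) ** 2 for a, b in zip(row, pis)))
        d_neg = sqrt(sum((a - b) ** 2 for a, b in zip(row, nis)))
        cc.append(d_neg / (d_pos + d_neg))
    return cc

# Four requirements scored by three stakeholders (hypothetical 1-9 scores).
scores = [
    [5, 6, 4],   # R1
    [4, 5, 3],   # R2
    [3, 2, 2],   # R3
    [8, 9, 7],   # R4
]
cc = topsis(scores)
```

With these scores R4 dominates every criterion and so coincides with the PIS (CC = 1), while R3 coincides with the NIS (CC = 0); ranking the CC values in descending order gives the final priority sequence.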
Comparative Analysis of Model Based Testing and Formal Based Testing - A Review (IJERA Editor)
Software testing is one of the most important steps in software development. Testing provides a glimpse of the proper functioning of the system under different conditions, which makes it necessary to choose the best testing method for a software system to be successful and accepted by a large number of people, as the market is highly competitive these days and only error-free systems survive for long. This paper gives a comparative analysis of two major methods of testing widely used in software development: formal-specifications-based software testing and model-based software testing. It brings out how these two methods can provide reliability to a software system, including the major uses, advantages, and disadvantages of both, and gives a detailed comparative analysis of the two. It also brings out the situations in which formal-specifications-based testing is more effective and efficient, and those in which model-based testing is. This comparative analysis will help in deciding on the better testing technique, depending upon the situation and the requirements of the software, for the software to be successful in the long run.
Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives, and they have proven useful for software bug prediction. This study uses publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
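A comparative setup like the one described can be sketched in miniature: train two simple classifiers on labelled module metrics and compare their accuracy on held-out modules. Everything below is invented toy data; real studies use public datasets and full ML libraries rather than the hand-rolled k-NN and majority-vote baseline shown here.

```python
# Toy comparative bug-prediction sketch; data and classifiers are illustrative.

def knn_predict(train, x, k=3):
    """k-nearest-neighbours over (features, label) pairs, squared distance."""
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

def majority_predict(train, x):
    """Baseline: always predict the most common training label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

# (loc, cyclomatic complexity) -> defective (1) or not (0); hypothetical modules.
train = [((120, 3), 0), ((90, 2), 0), ((400, 15), 1), ((350, 12), 1),
         ((60, 1), 0), ((500, 20), 1), ((80, 2), 0)]
test = [((100, 2), 0), ((450, 18), 1)]

results = {}
for name, model in [("3-NN", knn_predict), ("baseline", majority_predict)]:
    results[name] = sum(model(train, x) == y for x, y in test) / len(test)
```

On this tiny sample the k-NN model beats the majority baseline; a real comparison would use cross-validation and measures robust to class imbalance, such as precision, recall, or AUC, rather than raw accuracy.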
Quality aware approach for engineering self-adaptive software systems (csandit)
Self-adaptivity allows software systems to autonomously adjust their behavior at run time to reduce the cost and complexity caused by manual maintenance. In this paper, an approach for building an external adaptation engine for self-adaptive software systems is proposed. In order to improve the quality of self-adaptive software systems, this research addresses two challenges: managing the complexity of the adaptation space efficiently, and handling the run-time uncertainty that hinders the adaptation process. The research utilizes case-based reasoning as an adaptation engine, along with utility functions for realizing the managed system's requirements and handling uncertainty.
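The combination of case-based reasoning and utility functions can be illustrated in miniature: retrieve the stored adaptation case whose observed context is most similar to the current run-time context, and use a utility function to score quality goals. The case base, context features, action names, and weights below are all invented; this is a sketch of the general idea, not the paper's engine.

```python
# Hypothetical CBR-plus-utility sketch; all names and numbers are invented.

def similarity(ctx_a, ctx_b):
    """Inverse-distance similarity over normalized context features."""
    d = sum((a - b) ** 2 for a, b in zip(ctx_a, ctx_b)) ** 0.5
    return 1.0 / (1.0 + d)

def utility(response_time_s, availability, w_rt=0.6, w_av=0.4):
    """Higher is better: fast responses and high availability."""
    return w_rt * (1.0 - min(response_time_s / 2.0, 1.0)) + w_av * availability

# Case base: (observed context (load, error_rate), adaptation action taken).
cases = [
    ((0.2, 0.01), "scale_down"),
    ((0.8, 0.05), "scale_up"),
    ((0.5, 0.30), "restart_component"),
]
current = (0.75, 0.04)                                   # current run-time context
best_case = max(cases, key=lambda c: similarity(c[0], current))
action = best_case[1]                                    # reuse the closest case
```

Retrieval handles the size of the adaptation space (only stored cases are considered), while the utility function gives a scalar target for checking whether the reused adaptation actually meets the managed system's requirements.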
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORK (ijseajournal)
Most currently available students' portal frameworks are tightly coupled. Recent research by the authors of this paper discussed how to distribute the concepts of the traditional students' portal framework and produced a distributed interoperable framework. This paper realizes the distributed interoperable students' portal framework by developing a prototype based on Service Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility testing.
Online study on the future planning of young students in NRW (Michael Di Figlia)
Young academics seek security and a home of their own
A study on the future planning of young students
Cologne, 12 July 2013.
Generation fun, or conservative life planning? Today, young prospective academics in Germany have almost unlimited possibilities for shaping their lives and planning their futures. But how intensively do they engage with topics such as financial security, family planning, and career opportunities? The market research institute DTO Research investigated this question in cooperation with the Akademie für Unternehmensmanagement GmbH and the Solut Financial Consulting AG. In an online survey with a total of 539 participating students from North Rhine-Westphalia, the young academics were asked questions about the areas of career, family, and finances.
Career
Young prospective academics already have a very concrete idea of what they expect from their hoped-for job, from workplace design to working hours. A fixed workplace within a company matters more to the respondents than flexibility measures such as home office or teleworking; only 27 percent prefer a workplace outside the company without a fixed office. As for preferred working hours, 64 percent favour hours set by the company, while only 35 percent wish to determine their working hours themselves.
Family
The area of family shows how concretely the respondents have engaged with planning their family situation. Three quarters of the surveyed students plan to start a family of their own, and 74 percent would like to be parents by the age of 35. More than half of the respondents want two children. The majority of prospective academics plan a temporary career break, whether through parental leave or a personal sabbatical. Most respondents can imagine moving abroad, primarily with their partners or family; only 23 percent rule out a move abroad.
Finances
Most prospective academics could answer the questions on finances very concretely. Home ownership plays a decisive role in the future planning of young students in North Rhine-Westphalia: more than half of all survey participants aspire to own property, which shows that a large proportion of the students have already considered this long-term purchase. However, a third of the respondents are still undecided in this respect and cannot yet say clearly whether home ownership is part of their future plans.
An interactive approach to requirements prioritization using quality factors (ijfcstjournal)
As the prevalence of software increases, so do the complexity and the number of requirements associated with a software project. This presents a dilemma for developers: clearly identifying and prioritizing the most important requirements in order to deliver the project within a given amount of resources and time. A number of prioritization methods have been proposed that provide consistent results, but they are very difficult and complex to implement in practical scenarios, and they lack a proper structure for analyzing the requirements. In this study, users can provide their requirements in two forms: text-based story form and use-case form. Moreover, the existing prioritization techniques have very little or no interaction with the users. So, in this paper an attempt has been made to make the prioritization process user-interactive by adding a second level of prioritization: after the developer has properly analyzed and ranked the requirements on the basis of quality attributes in the first level, the opinions of distinct users about the requirements' priority sequence are collected. The developer then calculates the disagreement value associated with each user sequence in order to find the final priority sequence.
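The disagreement value is not defined in this abstract; one plausible reading is a pairwise-inversion (Kendall-tau-style) distance between the developer's ranking and each user's ranking. A minimal sketch under that assumption (the function name and requirement labels are illustrative):

```python
from itertools import combinations

def disagreement(dev_rank, user_rank):
    """Count requirement pairs ordered differently in the two rankings
    (a Kendall-tau-style distance; the paper's exact formula may differ)."""
    pos = {req: i for i, req in enumerate(user_rank)}
    return sum(
        1
        for a, b in combinations(dev_rank, 2)
        if pos[a] > pos[b]  # developer puts a before b, user puts b first
    )

dev = ["R1", "R2", "R3", "R4"]
print(disagreement(dev, ["R1", "R2", "R3", "R4"]))  # identical order -> 0
print(disagreement(dev, ["R4", "R3", "R2", "R1"]))  # fully reversed -> 6
```

A user sequence with a low disagreement value agrees closely with the developer's first-level ranking.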
ITERATIVE AND INCREMENTAL DEVELOPMENT ANALYSIS STUDY OF VOCATIONAL CAREER INF... (ijseajournal)
The software development process presents various types of models, with corresponding phases that must be followed to deliver quality products and projects. Despite the expertise and skills of systems analysts, designers, and programmers, systems failure is inevitable when a suitable development process model is not followed. This paper focuses on the Iterative and Incremental Development (IID) model and justifies its role in the analysis and design of software systems. The paper adopted a qualitative research approach that justified and harnessed the relevance of IID in the context of systems analysis and design, using the Vocational Career Information System (VCIS) as a case study. The paper viewed the IID as a change-driven software development process model. The results showed system specifications, functional specifications, and design specifications that can be used in implementing the VCIS with the IID model. Thus, the paper concluded that in systems analysis and design it is imperative to choose a suitable development process that reflects the engineering mind-set, with heavy emphasis on good analysis and design for quality assurance.
An empirical evaluation of impact of refactoring on internal and external mea... (ijseajournal)
Refactoring is the process of improving the design of existing code by changing its internal structure without affecting its external behaviour, with the main aim of improving the quality of the software product. There is therefore a belief that refactoring improves quality factors such as understandability, flexibility, and reusability. However, there is limited empirical evidence to support such assumptions. The objective of this study is to validate or invalidate the claim that refactoring improves software quality. The impact of selected refactoring techniques was assessed using both external and internal measures. Ten refactoring techniques were evaluated through experiments against four external measures: resource utilization, time behaviour, changeability, and analysability, which are ISO external quality factors, and five internal measures: Maintainability Index, cyclomatic complexity, depth of inheritance, class coupling, and lines of code. The external measures did not show any improvement in code quality after the refactoring treatment. Among the internal measures, the Maintainability Index indicated better code quality in refactored code than in non-refactored code, while the other internal measures did not indicate any positive effect of refactoring.
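The Maintainability Index named above is a composite of other internal measures. One commonly cited variant (the rescaled 0-100 form used by Visual Studio; the study's exact tooling may differ) can be sketched as:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Rescaled 0-100 Maintainability Index variant; higher means more
    maintainable. Inputs here are illustrative, not the study's data."""
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    return max(0.0, mi * 100 / 171)

# A small, simple routine scores far higher than a large, complex one.
print(round(maintainability_index(100, 2, 30), 1))
print(round(maintainability_index(8000, 25, 900), 1))
```

Since the index bundles Halstead volume, cyclomatic complexity, and size, a refactoring that shrinks and simplifies code can raise the index even when the individual measures move only slightly.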
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... (IJCSES Journal)
With the sharp rise in software dependability and failure cost, high quality has been in great demand. However, guaranteeing high quality in software systems that have grown in size and complexity, coupled with the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software that has no serious faults. Object-oriented (OO) products, the de facto standard of software development, can with their unique features have faults that are hard to find, or changes whose impacts are hard to pinpoint. The earlier faults are identified, found, and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is knowing which metrics are related to class FP and what activities are performed. This study therefore brings together the state of the art in prediction of FP that utilizes the CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles, and the results obtained are analysed and presented. They indicate that 29 relevant empirical studies exist, and that measures such as complexity, coupling, and size were found to be strongly related to FP.
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between the existing techniques related to software quality engineering and thereby designing a framework for rating software quality.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open-source projects. The result provides some implications for building quality models across project types.
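The abstract's "tailored meta-analysis approach" is not spelled out; a standard way to aggregate correlation coefficients across data sets is Fisher's z transformation with sample-size weighting. A sketch under that assumption (the function name and input values are illustrative):

```python
import math

def pooled_correlation(correlations, sample_sizes):
    """Aggregate per-study correlation coefficients via Fisher's z
    transform, weighting each study by n - 3 (a textbook approach;
    the paper's tailored method may differ)."""
    zs = [math.atanh(r) for r in correlations]
    weights = [n - 3 for n in sample_sizes]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)  # back-transform to a correlation

# Three hypothetical studies with different sample sizes.
print(round(pooled_correlation([0.4, 0.6, 0.5], [50, 120, 80]), 3))
```

The pooled value always falls between the smallest and largest study-level coefficients, pulled toward the larger studies.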
Safety-driven Software Product Line architectures Design, A Survey Paper (Editor IJCATR)
Software architecture design is an important step of software development. Currently, various design methods are available, each focusing on a certain perspective of architecture design. Quality-based methods in particular have received a lot of attention and are well developed for single-system architecture design. However, the use of quality-based design methods is limited in software product lines (SPL) because of the complexity and variabilities in SPL architecture. With increasing attention to software safety, improving safety has become a more important issue, especially for safety-critical systems. This study aims at surveying existing research on Software Product Line Architecture (SPLA) design based on quality attributes, and at giving an overview of the intersection of software product line architecture design and safety-driven design, in order to classify existing work and discover open issues for further research. The study also investigates safety analysis at the architectural level and Safety-based Software Product Line Architecture Design (SSPLAD) approaches. Safety-driven software product line architecture design seems to be a "discussion" topic. The study shows that there are a large number of SPLA design methods. However, the use of safety-based design methods is limited in SPL due to the variability property, which can potentially result in a large number of possible systems, and because of the complexity of the safety attribute itself.
Software requirements prioritization is a recognized practice in requirements engineering (RE) that facilitates the management of stakeholders' subjective views as specified in their requirements listing. Since the RE process is naturally collaborative, its intensiveness from both knowledge and human perspectives opens up the problem of decision making on requirements, which can be facilitated by requirements prioritization. However, due to the large volume of requirements elicited when considering an ultra-large-scale system, the prioritization techniques proposed so far suffer setbacks in terms of efficiency, effectiveness, and scalability. This paper employs a more efficient ranking algorithm for requirements prioritization, motivated by the limitations of existing techniques. The major objective is to provide a well-defined ranking procedure, derived through analysis, suitable for prioritizing software requirements. An empirical evaluation of the proposed technique was made using a typical scenario of the Pharmacy Information System at the Obafemi Awolowo University Teaching Hospital Complex (OAUTHC) as a case study. The results showed the computation of the positive ideal solution (PIS) and negative ideal solution (NIS), as well as the closeness coefficient (CC), for 4 requirements across 3 stakeholders. The CC showed the final ranks of the requirements: R4, with 2.09 points, is the most valued requirement, while R1 and R2, with CCs of 1.37 and 1.05 respectively, were next in the order of priority. The CC provides the medium through which problems of multiple-criteria decision making can be handled, so as to determine the order of priority of the available alternatives. The paper conveys encouraging evidence for the software engineering community that the technique is capable of resolving redundantly specified requirements, thereby providing the potential to facilitate effective and efficient decision making in handling the differences among prioritized requirements. Thus, prioritizing software requirements with the recommended ranking procedure during software development is crucial in order to reduce development cost.
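The PIS, NIS, and closeness-coefficient terminology matches the standard TOPSIS multi-criteria method. A minimal sketch of that computation follows; the toy decision matrix and weights are illustrative, not the paper's data, and note that the paper reports CC values above 1, so its exact formulation evidently differs from this textbook version:

```python
import math

def topsis_closeness(matrix, weights):
    """Closeness coefficient of each alternative (row) to the positive
    ideal solution, per the standard TOPSIS procedure; every column is
    assumed to be a benefit criterion."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    pis = [max(col) for col in zip(*weighted)]  # positive ideal solution
    nis = [min(col) for col in zip(*weighted)]  # negative ideal solution
    cc = []
    for row in weighted:
        d_pos = math.dist(row, pis)
        d_neg = math.dist(row, nis)
        cc.append(d_neg / (d_pos + d_neg))
    return cc

# 4 requirements scored by 3 stakeholders (toy values).
scores = [[7, 8, 6], [5, 6, 7], [3, 4, 5], [9, 9, 8]]
ranks = topsis_closeness(scores, [0.4, 0.35, 0.25])
print([round(c, 3) for c in ranks])
```

The requirement closest to the ideal on every criterion gets a coefficient of 1, and the one matching the negative ideal gets 0; the rest fall in between, giving the priority order.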
Comparative Analysis of Model Based Testing and Formal Based Testing - A Review (IJERA Editor)
Software testing is one of the most important steps in software development. Testing provides a glimpse of the proper functioning of the system under different conditions. This makes choosing the best testing method a necessary step for a software system to be successful and accepted by a large number of people, as the market is highly competitive these days and only error-free systems can survive for a long period of time. This paper gives a comparative analysis of two major methods of testing used widely in the software development process: formal-specifications-based software testing and model-based software testing. It brings out how these two methods can provide reliability to a software system, including the major uses, advantages, and disadvantages of both. It also brings out the situations where formal-specifications-based testing is more effective and efficient, and those where model-based testing is. This comparative analysis will help in deciding on a better testing technique, depending upon the situation and the requirements of the software, for the software to be successful in the long run.
Machine learning approaches are good at solving problems for which little information is available. In most cases, software-domain problems can be characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches and classifies modules into defective and non-defective. Machine learning techniques help developers retrieve useful information after the classification and enable them to analyse data from different perspectives, and they have proven useful for software bug prediction. This study used publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results showed that most of the machine learning methods performed well on the software bug data sets.
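As a minimal illustration of the kind of comparison the study describes, the sketch below pits a majority-class baseline against a 1-nearest-neighbour classifier on a tiny hand-made data set of module metrics; both the data and the two classifiers are illustrative, not the study's:

```python
def majority_baseline(train_labels):
    """Predict the most common class for every module."""
    return max(set(train_labels), key=train_labels.count)

def one_nn(train_x, train_y, point):
    """1-nearest-neighbour by squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, point)) for x in train_x]
    return train_y[dists.index(min(dists))]

# Toy module metrics: (lines of code, cyclomatic complexity) -> defective?
train_x = [(120, 3), (800, 25), (90, 2), (950, 30), (110, 4)]
train_y = [0, 1, 0, 1, 0]
test_x, test_y = [(100, 3), (900, 28)], [0, 1]

base = majority_baseline(train_y)
base_acc = sum(base == y for y in test_y) / len(test_y)
knn_acc = sum(one_nn(train_x, train_y, x) == y
              for x, y in zip(test_x, test_y)) / len(test_y)
print(base_acc, knn_acc)  # the learned classifier beats the baseline here
```

Comparative studies like the one summarized run many such classifiers over shared public data sets and compare accuracies (or related measures) module by module.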
Quality aware approach for engineering self-adaptive software systems (csandit)
Self-adaptivity allows software systems to autonomously adjust their behavior at run time to reduce the cost and complexity caused by manual maintenance. In this paper, an approach for building an external adaptation engine for self-adaptive software systems is proposed. In order to improve the quality of self-adaptive software systems, this research addresses two challenges: managing the complexity of the adaptation space efficiently, and handling the run-time uncertainty that hinders the adaptation process. The research utilizes case-based reasoning as an adaptation engine, along with utility functions for realizing the managed system's requirements and handling uncertainty.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORK (ijseajournal)
Most of the currently available students' portal frameworks are tightly coupled. Recent research by the authors of this paper discussed how to distribute the concepts of the traditional students' portal framework and produced a distributed interoperable framework. This paper realizes the distributed interoperable students' portal framework by developing a prototype based on Service Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility testing.
Online study on the future planning of young students in NRW (Michael Di Figlia)
Young academics seek security and a home of their own
A study on the future planning of young students
Cologne, 12 July 2013.
Generation fun, or conservative life planning? Today, young prospective academics in Germany have almost unlimited possibilities to shape their lives and plan their futures. But how intensively do these prospective academics engage with topics such as financial security, family planning, and career opportunities? The market research institute DTO Research, in cooperation with the Akademie für Unternehmensmanagement GmbH and Solut Financial Consulting AG, investigated this question. In an online survey in which a total of 539 students from North Rhine-Westphalia took part, the young academics were asked questions about the life areas of career, family, and finances.
Career
Young prospective academics already have a very concrete idea of what they expect their hoped-for job to look like, from workplace design to working hours. A fixed workplace within a company matters more to the respondents than flexibility measures such as home office or teleworking: only 27 percent of respondents prefer a workplace outside the company without a fixed office. Looking at the students' preferred working hours, 64 percent favour working hours set by the company, while only 35 percent would like to decide their working hours themselves.
Family
The life area of family shows how concretely the respondents have engaged with planning their family situation. Three quarters of the surveyed students plan to start a family of their own; 74 percent want to be parents by the age of 35, and more than half would like two children. The majority of the prospective academics plan a temporary career break, whether through parental leave or a personal sabbatical. Most respondents can imagine moving abroad, primarily with their partners or family; only 23 percent rule out a move abroad.
Finances
Most of the prospective academics were able to answer the questions on finances very concretely. Home ownership plays a decisive role in the future planning of young students in North Rhine-Westphalia: more than half of all survey participants aspire to own property. This result shows that a large share of the students have already given thought to this long-term purchase. However, a third of the respondents are still undecided in this respect and cannot yet say clearly whether home ownership is part of their future plans.
Changeability has a direct relation to software maintainability and plays a major role in providing high-quality, maintainable, and trustworthy software. Changeability is a major factor when we design and develop software and its constituents. Developing programs and their constituent components with good changeability continually improves and simplifies test operations and maintenance during and after implementation, and it encourages and supports improvement in software quality at the design stage. The research here highlights the importance of changeability broadly and as an aspect of software quality.
In this paper a correlation between the major attributes of object-oriented design and changeability has been ascertained, and a changeability evaluation model using multiple linear regression has been proposed for object-oriented design. The validity of the proposed changeability evaluation model is demonstrated by means of experimental tests, and the results show that the model is highly significant.
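The abstract names the technique (multiple linear regression) but not the model's predictors or coefficients. As an illustration only, the sketch below fits a least-squares model via the normal equations on hypothetical design-metric data; all names and values are made up:

```python
def ols_fit(X, y):
    """Least-squares coefficients via the normal equations
    (X^T X) b = X^T y, solved by Gaussian elimination with partial
    pivoting. A column of ones is prepended for the intercept."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build the augmented normal-equation system.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
         + [sum(r[i] * t for r, t in zip(rows, y))] for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    b = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j]
                              for j in range(i + 1, k))) / A[i][i]
    return b

# Hypothetical data: changeability score ~ coupling + cohesion.
X = [(2, 8), (5, 4), (7, 3), (4, 6), (9, 2)]
y = [1.0, 9.0, 13.5, 6.0, 18.0]
b0, b1, b2 = ols_fit(X, y)
print(round(b0, 2), round(b1, 2), round(b2, 2))
```

Validation of such a model would then check the significance of the fitted coefficients against held-out or experimental data, as the paper reports doing.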
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO... (ijiert bestjournal)
There are a number of routing protocols proposed for data transmission in WSN. Initially, single-path routing schemes with a number of variations were proposed. Still, there were drawbacks in single-path routing: it was unable to provide reliability and high throughput, and the security level was not considered while routing. Recently, to remove the drawbacks of single-path routing, a new routing technique called multipath routing has been proposed. In this paper we discuss the different multipath routing protocols and their variants. Initially, multipath routing was proposed for the purpose of guaranteed delivery of packets to the sink in case of link or node failure. Other protocols have been proposed for reliability, energy saving, security, and high throughput. Some multipath routing protocols have discussed load balancing and security during packet transmission.
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
DESQA a Software Quality Assurance Framework (IJERA Editor)
In current software development lifecycles of heterogeneous environments, the pitfall businesses face is that software defect tracking, measurement, and quality assurance do not start early enough in the development process. In fact, the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service Oriented Architecture (SOA). Thus the aim of this study is to develop a new framework for defect tracking and detection and for quality estimation in early stages, particularly the design stage of the SDLC. Part of the objective of this work is to conceptualize, borrow, and customize from known frameworks, such as object-oriented programming, to build a solid framework that uses automated rule-based intelligent mechanisms to detect and classify defects in the software design of SOA. The implementation demonstrated how the framework can predict the quality level of the designed software. The results showed that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes, and the number of SOA design defects. The assessment shows that metrics provide guidelines to indicate the progress a software system has made and the quality of its design; using these guidelines, we can develop more usable and maintainable software systems to fulfill the demand for efficient software applications. Another valuable result of this study is that developers try to keep backwards compatibility when they introduce new functionality: sometimes they perform necessary breaking changes to newly introduced elements only in future versions, giving their clients time to adapt their systems. This is a valuable practice because developers gain more time to assess the quality of their software before releasing it.
Further improvements in this research include investigation of other design attributes and SOA design defects, which can be computed by extending the tests we performed.
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS... (csandit)
When a software system evolves, new requirements may be added, existing functionalities modified, or structural changes introduced. During such evolution, disorder may be introduced, complexity increased, or unintended consequences produced, with ripple effects across the system. JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practice, was selected as a test suite. Six versions were profiled and data collected dynamically, from which two metrics were derived, namely entropy and software maturity index. These metrics were used to investigate degradation as the software transitions from one version to another. The study observed that entropy tends to decrease as the software evolves. It was also found that a software product attains its lowest decrease in entropy at the turning point where its highest maturity index is attained, implying a possible correlation between the point of lowest decrease in entropy and the software maturity index.
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED... (IJCSEA Journal)
Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. Existing literature on software metrics is mainly focused on centralized systems, while work in the area of distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components run in an even more heterogeneous networking and execution environment. Traditional coupling measures take into account only "static" couplings. They do not account for "dynamic" couplings due to polymorphism, and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing, and debugging. This is expected to result in poor predictive accuracy of quality models in distributed object-oriented systems that rely on static coupling measurements. In order to overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. First, the instrumentation process is performed, using an instrumented JVM that has been modified to trace method calls; during this process, three trace files are created, namely .prf, .clp, and .svp. In the second step, the information in these files is merged. At the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, such that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
Agent-assisted methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multi-agent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of agent-assisted software engineering methodology.
EVALUATION OF THE SOFTWARE ARCHITECTURE STYLES FROM MAINTAINABILITY VIEWPOINT (cscpconf)
In the process of software architecture design, different decisions are made that have system-wide impact. An important decision of the design stage is the selection of a suitable software architecture style. The lack of investigation of the quantitative impact of architecture styles on software quality attributes is the main problem in using such styles, so the use of architecture styles in design is based on the intuition of software developers. The aim of this research is to quantify the impact of architecture styles on software maintainability. In this study, architecture styles are evaluated based on coupling, complexity, and cohesion metrics and ranked by the analytic hierarchy process from the maintainability viewpoint. The main contribution of this paper is the quantification and ranking of software architecture styles from the perspective of the maintainability quality attribute at the stage of architectural style selection.
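The analytic hierarchy process mentioned above derives a priority vector from a reciprocal pairwise-comparison matrix; a common approximation uses the normalized geometric means of the rows. A sketch with a hypothetical comparison of three styles (the matrix values are illustrative, not the paper's):

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the geometric mean of each
    row of a reciprocal pairwise-comparison matrix, normalized to sum
    to 1 (the principal-eigenvector method is the exact alternative)."""
    gms = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical maintainability comparison on Saaty's 1-9 scale:
# layered vs pipe-and-filter vs event-driven (reciprocals below diagonal).
m = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_priorities(m)
print([round(x, 3) for x in w])
```

The resulting weights rank the styles; in AHP the same machinery is applied per metric (coupling, complexity, cohesion) and the per-metric weights are combined into an overall maintainability ranking.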
Service oriented configuration management of software architecture (IJNSA Journal)
Software configuration management (SCM) is an important activity in the software engineering life cycle. By controlling the evolution process of products, SCM leads to constancy and stability in software systems. Nowadays, software configuration management is essential during software development, providing the rules to control and manage the evolution of software systems. SCM affects different levels of abstraction, including the architectural level, and configuration of the software architecture improves the configuration of the lower abstraction levels. CM of software architecture is more significant in large-scale software with a long life cycle. Traditional SCM approaches at the architectural level do not provide the necessary support for software configuration management, so systems that use these approaches face problems. These problems arise because constant and repeated changes in the software process are not handled seriously; to overcome this, it is necessary to create an infrastructure. Hence, a service-oriented approach to configuration management is presented in this paper, in which the activities of configuration management are conducted from a service-oriented viewpoint. The approach was also used to control the evolution and the number of versions of different software systems, in order to identify, organize, and control change and rework during the production process. The approach can compose services and create composite services for new, undefined configuration activities.
In the present paper, applicability and
capability of A.I techniques for effort estimation prediction has
been investigated. It is seen that neuro fuzzy models are very
robust, characterized by fast computation, capable of handling
the distorted data. Due to the presence of data non-linearity, it is
an efficient quantitative tool to predict effort estimation. The one
hidden layer network has been developed named as OHLANFIS
using MATLAB simulation environment.
Here the initial parameters of the OHLANFIS are
identified using the subtractive clustering method. Parameters of
the Gaussian membership function are optimally determined
using the hybrid learning algorithm. From the analysis it is seen
that the Effort Estimation prediction model developed using
OHLANFIS technique has been able to perform well over normal
ANFIS Model.
Mvc architecture driven design and agile implementation of a web based softwa...ijseajournal
This paper reports design and implementation of a web based software system for storing and managing
information related to time management and productivity of employees working on a project.
The system
has been designed and implemented w
ith best principles from model view
controller
and agile development.
Such system has practical use for any organization in terms of ease of use, efficiency, and cost savings. The
manuscript describes design of the system as well as its database and user i
nterface. Detailed snapshots of
the working system are provided too.
Different Approaches using Change Impact Analysis of UML Based Design for Sof...zillesubhan
Because of rapidly changing technologies,
requirements for the software systems are
constantly changing. This requires a change in
software design as well, as design should be
traceable to the requirements. There is a strong
need to deal with these changes in a systematic
manner. For this purpose, proper decision
making and change planning is required to
effectively implement the change. Change Impact
Analysis provides its services in this regard, by
allowing us to assess the potential side - effects
of change and also helps us in identifying that
what is needed to be modified to accomplish the
change. A number of impact analysis techniques
have been proposed that perform impact analysis
of UML based software design using a certain
strategy and methodology. In order to explore
the strengths and weaknesses of different
approaches toward impact analysis, this survey
paper includes an evaluation criterion for the
comparison of different impact analysis
techniques and a thorough analysis of these
techniques based on evaluation criteria.
Unified V- Model Approach of Re-Engineering to reinforce Web Application Deve...IOSR Journals
Abstract: The diverse and dynamic nature of elements and techniques used to develop Web Application, due to
the lack of testing technique and effective programming principles which are used for implementing
basic software engineering principles, and undisciplined development processes insure by the high pressure
of a very short time to satisfy market request to develop Web application. This paper represent approaches of
reengineering in web that how reengineering process can be carried out to evolution activities in legacy
system as well we propose the V-model for re-engineering process. This paper presents the need of the
technologies and approaches for building new web-services from existing web-applications. In this
paper we present the processing of V-model for Reengineering in web application which is the extension of Vmodel
used in software domain. In our approach V-model incorporates with the methodology
throughout the phases of web development process to re-engineer the web system.
Keywords:Re-engineering, reverse engineering, forward engineering, V-model, application migration.
Multiagent Based Methodologies have become an
important subject of research in advance Software Engineering.
Several methodologies have been proposed as, a theoretical
approach, to facilitate and support the development of complex
distributed systems. An important question when facing the
construction of Agent Applications is deciding which
methodology to follow. Trying to answer this question, a
framework with several criteria is applied in this paper for the
comparative analysis of existing multiagent system
methodologies. The results of the comparative over two of them,
conclude that those methodologies have not reached a sufficient
maturity level to be used by the software industry. The
framework has also proved its utility for the evaluation of any
kind of Multiagent Based Software Engineering Methodology
ADAPTIVE CONFIGURATION META-MODEL OF A GUIDANCE PROCESSijcsit
The current technology tend leads us to recognize the need for adaptive guidance process for all process of
software development. The new needs generated by the mobility context for software development led these
guidance processes to be adapted. In order to improve the performance of the deployed software
development, it is useful to manage the configuration of its evolving aspects. This paper deals with the
configuration management of guidance process or its ability to be adapted to specific development
contexts. The proposed adaptive configuration Meta-model is worked out on the basis of a Y description
for adaptive guidance process. This description focuses on three dimensions defined by the
material/software platform, the adaptation form and provided guidance service. Each dimension considers
several factors to develop a coherent configuration strategy and provide automatically the appropriate
guidance process to a current development context.
ANALYZABILITY METRIC FOR MAINTAINABILITY OF OBJECT ORIENTED SOFTWARE SYSTEMIAEME Publication
Analyzability is one of the major factors in the prediction of maintainability aspect that can improve the quality of the intended software solution in an appropriate manner. In fact, the right analyzability measure will go a long way in decreasing the inadequacies and deficiencies through identification of the modified parts and failure causes in the software system. The analyzability metric identification is not possible in every software solution because of its non- functional nature in the real world, but in object-oriented software system there is an opportunity to find out the analyzability metric in the form of class inheritance hierarchies. In this research paper, the researcher measured the analyzability factor for the object-oriented software systems and also validated in accordance with the famed weyker’s properties. The proposed analyzability metric here is intended to lead the new developments in object-oriented software maintainability parameter in future and help the new researchers do their research the right way thus eventually achieving the quality aspect of the objectoriented software system and fulfilling the needs of the users.
Similar to A methodology to evaluate object oriented software systems using change requirement traceability based on impact analysis (20)
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Mission to Decommission: Importance of Decommissioning Products to Increase E...
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.3, May 2014
DOI : 10.5121/ijsea.2014.5304
A METHODOLOGY TO EVALUATE OBJECT-ORIENTED SOFTWARE SYSTEMS USING CHANGE REQUIREMENT TRACEABILITY BASED ON IMPACT ANALYSIS
Sunil T. D. and Dr. M. Z. Kurian
Department of Electronics and Communication Engineering,
Sri Siddhartha Institute of Technology, Tumkur, Karnataka, India
ABSTRACT
It is well known that software maintenance plays a major role in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change requirement traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues addressed relate to change impact algorithms and inheritance of functionality.
KEYWORDS
Change Requirement Traceability, Impact Analysis, Object-Oriented Software Systems, Software Maintenance, Change Impact Algorithms, Inheritance of Functionality.
1. INTRODUCTION
There are several standards for traceability, such as ISO 15504 and CMMI, and over the past decades several techniques have been developed for tracing requirements. Traditional techniques include the Trace-based Impact Analysis Methodology (TIAM), which utilizes trace information, and the Work Product Model (WoRM), which defines a requirement change impact metric for determining the severity of change requirements. These methodologies have predictive value for finding classes of similar changes. TIAM is intended for planning rather than for ensuring that changes are thoroughly implemented, and it could potentially be used to evaluate the risk of volatile requirements. In the case of design changes, there are cognitive consequences of the object-oriented approach: novice designers have been found to have problems with class creation and with articulating the declarative and procedural aspects of a solution. Accordingly, this paper introduces traceability patterns, or methods, as a solution to the requirement-to-component mapping problem that can be applied to both traditional and modern development processes. This is achieved by making the structure of the source code conform to the traceability patterns or methods. Over the software life cycle, software undergoes changes at all stages. A software product is successful if changes are identified and managed in all phases of the software life cycle: the requirement specification phase, design phase, implementation phase and maintenance phase.
To obtain a successful software product, a well-established quality threshold must be in place, and it must rise as development proceeds. Software maintenance consumes approximately forty percent of software expenditure and is a non-trivial phase of the software development life cycle, so capturing traceability links between code and elements in other artifacts can be helpful in many tasks: program comprehension, maintenance, requirement tracing, impact analysis and reuse of existing software. A number of traceability patterns or methods have been introduced to trace elements back from source code in reverse engineering. Traceability matrices, keywords, aspect weaving, information retrieval, scenario-based, event-based, process-centered, design-pattern and goal-centric approaches are a few examples of traceability methods. The demand to reengineer legacy systems has increased significantly with the shift toward web-based user interfaces. Traceability patterns or methods are used for many purposes, such as managing evolutionary software changes, impact analysis and software architecture. Object-oriented paradigms such as classes and their relationships, namely associations, aggregations, dependencies and multiplicity, have been studied by many researchers. The objective of this paper is to provide round-trip engineering capability during the traceability process.
Organisation: The literature survey on the related topic is presented in Section 2. Section 3 deals with the types of traceability models. Impact analysis based on change requirement traceability is discussed in Section 4. The research results are presented in Section 5, and Section 6 concludes the paper.
2. LITERATURE SURVEY
There are a number of phases in the life of a software product. The waterfall model, as described by Ghezzi et al. [1], has five major phases: requirements analysis and specification, coding and module testing, integration testing, system testing, and maintenance. This research is concerned only with the final phase, maintenance. The maintenance phase is the longest phase of the life cycle, and maintaining software becomes more difficult as time progresses and the system evolves. Chandra Shrivastava et al. [2] stated that their algorithms calculate the transitive closure of each of the potentially affected classes and methods. The information provided by these algorithms can be greatly improved by the recognition of low-level design patterns, the effects of data type changes, and the effects of the addition and deletion of classes, all of which can be drawn from the LLSA model of an object-oriented system. Chen, Tsai et al. [3] presented an integrated environment for C++ program maintenance that describes three new dependence graphs specific to object-oriented software systems, capturing message, class and declaration dependence, in a model called C++DG. Additionally, several new slicing techniques are presented, and the use of the new dependencies and slicing for code maintenance is described, specifically with respect to the ripple effect and regression testing. Application of the discovered dependencies and program slicing leads to recursive analysis of the ripple effect caused by code modification. As the effects are located, affected classes and methods can be “marked” for testing or re-execution in the testing phase.
Li et al. [4] explained four algorithms that measure the effect of proposed changes to object-oriented systems. The ripple effect is calculated by applying algorithms that
1. calculate the change effects inside a class,
2. calculate the change effects among clients,
3. calculate the change effects among subclasses, and
4. measure the total effect by combining the results of algorithms 1, 2 and 3.
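The four-step ripple-effect calculation above can be sketched as follows. The data model (member names, dependency maps) is a hypothetical illustration, not the authors' actual representation:

```python
# Sketch of the four-step ripple-effect calculation (illustrative data model;
# member names and map structures are assumptions, not taken from [4]).

def intra_class_effect(changed_members, intra_deps):
    """Algorithm 1: transitive closure of affected members inside one class.
    intra_deps maps a member to the members that depend on it."""
    affected = set(changed_members)
    frontier = list(changed_members)
    while frontier:
        m = frontier.pop()
        for dep in intra_deps.get(m, []):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)
    return affected

def client_effect(affected_members, client_calls):
    """Algorithm 2: client classes that call any affected member."""
    return {c for c, calls in client_calls.items() if calls & affected_members}

def subclass_effect(changed_class, subclasses, overrides):
    """Algorithm 3: subclasses inheriting (not overriding) the changed member."""
    return {s for s in subclasses.get(changed_class, []) if not overrides.get(s)}

# Algorithm 4: the total effect is the union of the three partial results.
intra = intra_class_effect({"A.m1"}, {"A.m1": ["A.m2"], "A.m2": ["A.m3"]})
clients = client_effect(intra, {"B": {"A.m2"}, "C": {"X.f"}})
subs = subclass_effect("A", {"A": ["D", "E"]}, {"D": True, "E": False})
total = intra | clients | subs
```

The worklist loop in algorithm 1 is one common way to realize the transitive closure the survey describes; any reachability computation over the dependency graph would serve.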
The authors also presented details of how different types of changes affect the system. Changes are broadly categorized as method or member changes, and then refined into more detail, such as adding a member or changing an attribute. Gallagher [5] described the use of program slicing to select a point in an ANSI C program for observation. The method examines program variables and models the dependencies that exist among variables via assignment statements and parameter passing; it is a visualization of the data collected by the Surgeon's Assistant and is called the Decomposition Slice Display System. According to Hutchins et al. [6], visual impact analysis has improved the recognition of further dependencies such as interference.
Bohner [7] noted that as software engineering practice evolves to meet demands for distributed applications on heterogeneous platforms, software change is increasingly influenced by middleware and components. Interoperability dependency relationships now point to more relevant impacts of software change and necessarily drive the analysis. Changes to software systems that incorporate middleware components, such as Web services, expose these systems and the organizations they serve to unforeseen ripple effects that frequently result in failures. Current software change impact analysis models have not adequately addressed this trend. Moreover, as software systems grow in size and complexity, the dependency webs of information extend beyond most software engineers' ability to comprehend them. His paper examines preliminary research for extending current software change impact analysis to incorporate interoperability dependency relationships for distributed applications, and explores three-dimensional (3D) visualization techniques for more effective navigation of software changes. Pressman [8] explained that as a software system becomes larger and more complex, the numerous corrections, extensions and adaptations tend to become chaotic and unmanageable. The traditional approach of addressing each maintenance task individually is no longer practical; a dedicated management discipline is needed, called Software Configuration Management (SCM), which covers the procedures, rules, policies and methods used to handle software evolution (IEEE, 1998b). SCM has been identified as a major part of well-defined software development and maintenance. It deals with controlling the evolution of complex software systems, supporting version control and administrative aspects such as handling change requests and performing changes in a controlled manner through well-defined processes. Suhaimi Bin Ibrahim [9] illustrates that most Computer Aided Software Engineering (CASE) tools and applications focus on high-level software artifacts and are directly applicable to software development rather than maintenance, while low-level software, e.g. code, is given less priority and is very often left to users to handle. This makes software change impact analysis extremely difficult to manage at both levels. Secondly, although some research on change impact analysis exists, the majority of it confines its solution to a limited space, i.e. code, even though more evolvable software can be achieved at the meta-model level. The ripple effects of a proposed change are given no proper visibility across different levels of work products. If such visibility can be achieved, more concrete estimates can be produced to support change decisions, cost estimation and schedule planning. M. Z. Kurian et al. [10] explained a comparative software maintenance methodology for object-oriented systems, carried out with the main intention of applying impact analysis and ripple-effect analysis to the retesting of affected and changed components. This reduces the cost of testing and assists in identifying change impact in object-oriented maintenance. Since it does not emphasize change requirement analysis or the tracing of object-oriented software systems, other methods must be considered. Ali R. Sharafat et al. [11] observed that estimating the change-proneness of parts of a software system is an active topic in software engineering. Such estimates can be used to predict changes to different classes of a system from one release to the next. They can also be used to estimate, and possibly reduce, the effort required during the development and maintenance phases by balancing the amount of developers' time assigned to each part of a software system. They proposed a novel approach to predict changes in an object-oriented software system. The rationale behind this approach is that, in a well-designed software system, feature enhancement or corrective maintenance should affect a limited amount of existing code; the goal is to quantify this aspect of quality by assessing the probability that each class will change in a future generation. Peter Zielczynski [12] explained an approach, applied to software written in an object-oriented language, for tracing object-oriented code to functional requirements. It addresses the problem of establishing traceability links between the free-text documentation associated with the development and maintenance cycle of a software system and its code. However, vector space models for comparing different models and assessing the relative influence of affecting factors are not considered.
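The per-class change probability that Sharafat et al. assess could be approximated, very naively, by the fraction of past releases in which each class changed. This frequency proxy is an illustration only; the cited work derives its probabilities from design-level information, not from release counts:

```python
# Naive change-proneness proxy: fraction of past releases in which each class
# changed. This is an illustrative simplification, NOT the model of [11].

def change_proneness(history):
    """history: list of sets, each set holding the classes changed in one release."""
    releases = len(history)
    counts = {}
    for release in history:
        for cls in release:
            counts[cls] = counts.get(cls, 0) + 1
    return {cls: n / releases for cls, n in counts.items()}

# Hypothetical four-release history of a small system.
history = [{"Order", "Cart"}, {"Cart"}, {"Order", "User"}, {"Cart"}]
probs = change_proneness(history)
# "Cart" changed in 3 of 4 releases -> probability estimate 0.75
```

A class with a high estimate is a candidate for extra review effort; in a well-designed system most classes should score low.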
In this paper, requirement management through to maintenance is considered: change requirement traceability analysis is performed on the requirements as well as on the object-oriented software system, and a round-trip traceability analysis is carried out.
3. TRACEABILITY MODELS
Requirement traceability refers to the ability to describe and follow the life of a requirement, in both the forward and backward directions. Forward traceability is the ability to trace a requirement to components of a design or implementation. Backward traceability is the ability to trace a requirement to its source, that is, to a person, institution, law, argument, etc. Inter-requirements traceability refers to the relationships between requirements; it is important for requirement analysis and for dealing with requirements change and evolution (Francisco A. et al. [13]). Extra-requirements traceability refers to the relationships between requirements and other artifacts.
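The forward and backward directions described above can be captured by a simple bidirectional link store. The class and the requirement/artifact names below are illustrative assumptions:

```python
# Minimal traceability store supporting forward (requirement -> artifacts) and
# backward (artifact -> requirements) queries. Names are illustrative.
from collections import defaultdict

class TraceabilityMatrix:
    def __init__(self):
        self.forward = defaultdict(set)   # requirement -> design/implementation
        self.backward = defaultdict(set)  # artifact -> requirements

    def link(self, requirement, artifact):
        # Every link is recorded in both directions at once.
        self.forward[requirement].add(artifact)
        self.backward[artifact].add(requirement)

    def trace_forward(self, requirement):
        return self.forward[requirement]

    def trace_backward(self, artifact):
        return self.backward[artifact]

tm = TraceabilityMatrix()
tm.link("REQ-1", "OrderService")
tm.link("REQ-1", "OrderServiceTest")
tm.link("REQ-2", "OrderService")
```

`trace_forward("REQ-1")` answers which components realize the requirement; `trace_backward("OrderService")` answers which requirements are implicated when that component changes, which is exactly the question impact analysis asks.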
4. CHANGE REQUIREMENT TRACEABILITY BASED IMPACT ANALYSIS
A requirement is the result of the elicitation process (Gotel et al. [14]). A requirement can be traced in either direction: to obtain information related to the elicitation process, prior to its inclusion in the requirements specification, or to obtain information related to its use, after the requirement has been elicited and included in the specification. Traceability is accordingly divided into pre-requirements-specification (pre-RS) traceability and post-requirements-specification (post-RS) traceability. Pre-RS traceability refers to those aspects of a requirement's life prior to its inclusion in the requirements specification; post-RS traceability refers to those aspects of a requirement's life that result from its inclusion in the requirements specification. Pre-RS traceability is used when there is a change to a requirement and one needs to reach the requirement's source, or the people supporting it, to validate the change. Post-RS traceability is used to identify the design module to which a requirement was allocated, or the test procedures created to verify the requirement.
Change requirement traceability based impact analysis involves non-functional and informal
tracing. Functional tracing relies on well-established mappings between object model types and
mapping types, which link analysis models, design models, process models, and organizational
models. Non-functional tracing is concerned with the non-functional aspects of software
development; these are usually related to quality aspects and result from relationships to
intangible concepts. Traces that relate requirements to goals, objectives, intentions, and decisions
are examples of non-functional tracing. Non-functional traces are classified into four categories:
reason, context, decision, and technical.
The tracing of non-functional aspects of software development can be performed automatically
only by using a representation of those aspects. Therefore, some model must be used to
functionally capture the non-functional aspects to be traced: an organizational model may be used
to relate policies, goals, and roles to requirements, or a process model may be used to relate requirements
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.3, May 2014
to activities and resources. There is also an informal need for trace definition: the definition of
traces and traceable objects should promote a uniform understanding of them. Differences in
interpretation are a cause of errors, and in the more serious cases one may end up tracing what
did not happen. To account for non-functional traces, the definition of traceable objects should
allow the use of hypermedia objects such as videos, recordings, and images, together with
mechanisms for inspecting these kinds of objects. The relationship between recorded real-world
observations and parts of a conceptual model is called extended traceability, Haumer P. et al. [15],
Smith T.J. [16], Yu W.D. [17]. Sarah Maadawy et al. [18] present a methodology to measure the
complexity of software changes. It studies the attributes that affect the complexity of a change,
and the relationships between requirements, to arrive at a complexity measure that serves as a
precise estimate for the change. However, it does not discuss object-oriented analysis and design
aspects.
In this paper, the change requirement traceability based impact analysis methodology has been
discussed, covering non-functional, informal, and extended traceability; object-oriented analysis
and design aspects are also discussed. The methodology comprises the following phases:
Phase One
A. Validating the new requirements from any of the stakeholders
B. Classifying each requirement as functional or non-functional
C. Tracing the requirement with the help of a traceability matrix
D. Review of the requirements
E. Requirement evaluation
F. Requirement documentation
G. Acceptance testing
Phase Two
A. Stability: Unstable Requirements
B. Completeness: Incomplete Requirements
C. Clarity: Unclear Requirements
D. Validity: Invalid requirements
E. Feasibility: Infeasible requirements
F. Precedent: Unprecedented Requirements
Stability
This represents the system's vulnerability to change. It has been observed that software
maintainability degrades as changes are made, since changes increase the complexity of the
software. System stability is calculated as
S = (NORS + NOCNR + NOCUR + NOCDR) / NORS
Where S = Stability
NORS = Number of original requirements in the system
NOCNR = Cumulative number of new requirements added to the system
NOCUR = Cumulative number of requirement update requests in the system
NOCDR = Cumulative number of requirement deletion requests in the system
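Assuming the formula above, the stability measure can be computed directly; the function name and sample counts below are illustrative assumptions.

```python
def stability(nors, nocnr, nocur, nocdr):
    """Stability S = (NORS + NOCNR + NOCUR + NOCDR) / NORS.

    nors: number of original requirements; nocnr/nocur/nocdr: cumulative
    counts of added/updated/deleted requirements. S == 1.0 means no
    changes; larger values indicate more accumulated change.
    """
    if nors <= 0:
        raise ValueError("NORS must be positive")
    return (nors + nocnr + nocur + nocdr) / nors

# Example: 40 original requirements, 6 added, 3 updated, 1 deleted.
print(stability(40, 6, 3, 1))  # 1.25
```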
Completeness
This represents the completeness of the requirements, calculated as
CMP = NARS - NIR
Where CMP = Completeness of the system
NARS = Number of actual/original requirements in the system
NIR = Number of incomplete requirements in the system
Clarity
This represents the clarity of the system, calculated as
CL = NARS - NIR - UCLR
Where CL = Clarity of the system
NARS = Number of actual/original requirements in the system
NIR = Number of incomplete requirements in the system
UCLR = Number of unclear requirements
Feasibility
This represents the feasibility of the system, calculated as
FR = IFR - UCLR
Where FR = Feasibility requirements of the system
IFR = Number of infeasible requirements in the system
UCLR = Number of unclear requirements
Precedent
This represents the precedent of the system, calculated as
PR = CMP + CL + FR
Where PR = Precedent requirements of the system
CMP = Completeness of the system
CL = Clarity of the system
FR = Feasibility of the system
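The four metrics above can be combined into one small sketch; the helper name and the sample counts are assumptions for illustration, not values from the paper.

```python
def requirement_metrics(nars, nir, uclr, ifr):
    # Hypothetical helper applying the paper's four formulas together.
    cmp_ = nars - nir           # Completeness: CMP = NARS - NIR
    cl = nars - nir - uclr      # Clarity:      CL  = NARS - NIR - UCLR
    fr = ifr - uclr             # Feasibility:  FR  = IFR - UCLR
    pr = cmp_ + cl + fr         # Precedent:    PR  = CMP + CL + FR
    return {"CMP": cmp_, "CL": cl, "FR": fr, "PR": pr}

# Example: 40 actual requirements, 4 incomplete, 2 unclear, 5 infeasible.
print(requirement_metrics(nars=40, nir=4, uclr=2, ifr=5))
# {'CMP': 36, 'CL': 34, 'FR': 3, 'PR': 73}
```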
5. RESULTS
Case Study: Flight Booking System
The aim here is to identify, visualize, and analyze change requirement traceability on an object-
oriented software system. A Flight Booking System case study has been taken as the requirement.
Based on the requirement level, the requirements are split into the following requirement types:
1. Stakeholders need
2. Feature
3. Use Case
4. Supplementary Requirement
5. Test Cases
6. Scenarios
1. The requirements at the top level of the hierarchy (stakeholders' requests) are gathered using
various methods of knowledge elicitation:
• Asking Questions
• Conducting Workshops
• Listening to stories
• Role
• Brainstorming
• Module
• Use cases
• Analysis of existing documents
• Observation, task demonstration
• Analysis of existing systems
2. A business analyst derives the second level of the hierarchy (features) from stakeholders'
requests by cleaning the requirements and translating them from the problem domain to the
solution domain. The features should have all the attributes of a good requirement:
• Testable
• Clear
• Correct
• Understandable
• Feasible
• Independent
• Atomic
• Necessary
• Implementation-free
• Consistent
• Non-redundant
• Complete
To fix requirements that are missing at least one of these attributes, some of the following
transformations can be applied:
• Copy
• Split
• Clarification
• Qualification
• Combination
• Generalization
• Cancellation
• Completion
• Correction
• Unification
• Adding details
3. The third level of the hierarchy contains use cases and supplementary requirements. Use cases
capture functional requirements. Creation of use cases consists of the following steps:
1. Identify actors.
2. Identify use cases.
3. Design the initial use case model.
4. Structure the model.
5. Create use case documents.
4. Supplementary requirements capture mostly non-functional requirements. They may also
capture some generic functional requirements not associated with any specific use case.
Supplementary requirements can be classified as follows:
• Functionality
• Usability
• Reliability
• Performance
• Testability
• Design constraints
• Implementation requirements
• Interface requirements
• Physical requirements
• Documentation requirements
• Licensing and legal requirements
5. Test cases are created to test the requirements from the third level. The following steps are used
to derive test cases from use cases:
• Create scenarios.
• Identify variables for each use case step
• Identify significantly different options for each variable
• Combine options to be tested into test cases
• Assign values to variables
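The "combine options" step above can be sketched as a Cartesian product over each variable's significantly different options. The variable names and values below are hypothetical choices for the Book a Flight use case.

```python
from itertools import product

# Significantly different options per variable of a use case step
# (illustrative values; not taken from the paper's case study tables).
variables = {
    "trip_type": ["one-way", "round-trip"],
    "seat_class": ["economy", "business"],
    "payment": ["card", "voucher"],
}

# Exhaustive combination of options into candidate test cases.
# Real test design would usually prune this, e.g. with pairwise coverage.
test_cases = [dict(zip(variables, combo)) for combo in product(*variables.values())]
print(len(test_cases))  # 8
print(test_cases[0])    # {'trip_type': 'one-way', 'seat_class': 'economy', 'payment': 'card'}
```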
6. To create test cases from supplementary requirements, you can use one of the following
approaches:
• Execute selected functional test cases in different environments
• Add checks to all use cases
• Check and modify a specific use case
• Perform the exercise
• Checklist
• Analysis
• White-box testing
• Automated testing
7. Design diagrams are also derived from the requirements on the third level, especially use cases.
The possible approaches are:
• Design classes that will capture required data and functionality
• Create one sequence diagram for each scenario
• Simultaneously add required methods and attributes to the classes on the class Diagrams
8. Documentation is created from various elements of the hierarchy.
Algorithm for 'Book a Flight'
Step 1: Begin algorithm
Step 2: Enter URL
Step 3: Enter flight data and search flights
Step 4: Select a flight
Step 5: System displays return flights
Step 6: System displays details of flights
Step 7: Confirm the flight
Step 8: Register as a new user
Step 9: Login
Step 10: Provide passenger information
Step 11: Display available seats
Step 12: Select seats
Step 13: Enter billing information
Step 14: Provide confirmation number
Step 15: End algorithm
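A condensed, purely illustrative sketch of these steps as a Python function follows; all names, the data shapes, and the confirmation-number format are hypothetical assumptions, not the paper's implementation.

```python
def book_a_flight(flights, choice, passenger, seat, billing_ok):
    """Hypothetical condensed walk through the algorithm above.

    Returns a confirmation number (Step 14) on success, None otherwise.
    flights: mapping of flight number -> {'seats': set of available seats}.
    """
    if choice not in flights:                  # Steps 3-4: search and select a flight
        return None
    if seat not in flights[choice]["seats"]:   # Steps 11-12: seat availability and selection
        return None
    if not billing_ok:                         # Step 13: billing information must be valid
        return None
    # Step 14: provide a confirmation number (derived deterministically here).
    return f"CNF-{choice}-{seat}-{passenger[:3].upper()}"

flights = {"AI202": {"seats": {"12A", "12B"}}}
print(book_a_flight(flights, "AI202", "Alice", "12A", True))  # CNF-AI202-12A-ALI
```

Expressing the flow as a single function with explicit failure points makes the later impact analysis concrete: a changed requirement (say, seat selection rules) maps directly to one guarded step.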
Use Cases
Figure 1.1 An ACTOR and a use case
Figure 1.2 Use case Initiated by Travellers and User
Structuring the Use Cases
The main purpose of structuring the model is to remove any redundancy, making the use cases
easier to understand and maintain. First, the use cases are analyzed to find any parts of the flows
that contain similar steps. Then the following types of relationships between use cases are applied:
• Include
• Extend
Include Relationship
The included use case should be self-contained and cannot make any assumptions about
which use case is including it.
Figure 1.3 An include relationship between two use cases
Extend Relationship
If some part of a use case is optional or conditional, it can be extracted as a separate use case
connected through an extend relationship, making the model clearer.
Figure 1.4 An extend relationship between two use cases
Figure 1.5 A Context diagram for the Use case book a flight
Traceability Structure
Figure 1.6 shows traceability structure in this case study
Figure 1.6 Traceability structure for case study “Book a Flight”
• Stakeholder Requests (STRQ) will be traced to Features (FEAT) defined in the Vision
document and supplementary Requirements defined in the Supplementary Specification. There
may be a many-to-many relationship between STRQ and FEAT, but usually it is one Stakeholder
Request to many Features. Every approved Request must trace to at least one Feature or
Supplementary Requirement.
• Feature Requirements (FEAT) (defined in the Vision document) will be traced to either a Use
Case or Supplementary Requirement. Every approved feature must trace to at least one Use Case
or Supplementary Requirement. There may be many-to-many relationships
between Features and Use Cases and Supplementary Requirements.
• Use Case Requirements (UC) defined in the Use Case Specifications will be traced back to
Features.
• Supplementary Requirements (SUPL) will be traced back to Features.
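The tracing rules above (every approved STRQ must trace to at least one FEAT or supplementary requirement, and every FEAT to at least one UC or SUPL) can be checked mechanically. The helper and the identifiers below are illustrative assumptions.

```python
# Sketch of a check for the traceability rules above. The trace matrix maps
# each item to its outgoing links (hypothetical identifiers for this case study).
def untraced(items, trace_matrix):
    """Return the items with no outgoing trace links; any approved item
    appearing in the result violates the 'must trace to at least one' rule."""
    return [i for i in items if not trace_matrix.get(i)]

trace = {
    "STRQ-1": ["FEAT-1", "FEAT-2"],      # one Stakeholder Request -> many Features
    "FEAT-1": ["UC-BookAFlight"],        # Feature -> Use Case
    "FEAT-2": ["SUPL-Performance"],      # Feature -> Supplementary Requirement
}
print(untraced(["STRQ-1", "STRQ-2"], trace))  # ['STRQ-2'] -> violates the rule
print(untraced(["FEAT-1", "FEAT-2"], trace))  # []
```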
Object Oriented System Design from Use Cases
Figure 1.7 A class diagram showing classes that implement the functionality of a basic flow of the Book a
flight use case.
Figure 1.8 Class Reservation and related classes
6. CONCLUSION
The change requirement traceability analysis of the 'Book a Flight' case study provides an
object-oriented architecture down to the class diagram. Object-oriented analysis has been carried
out for the case study, arriving at a class diagram through use cases and at an algorithm for the
'Book a Flight' scenario. The proposed analysis depends heavily on a well-defined software
requirements specification and on non-functional traceable requirements. Further, based on a
change in the requirements, the impact can be identified from the class diagram down to the test
case attributes. Requirement traceability makes it possible to identify which test cases must be
changed and to analyze the impact on the object-oriented paradigm.
References
[1] Carlo Ghezzi, Mehdi Jazayeri, Dino Mandrioli, Fundamentals of Software Engineering, Prentice Hall
Publishing, (1991).
[2] Chandra Shrivastava, D. L. Carver, "Using Low-Level Software Architecture for Software
Maintenance of Object-Oriented Systems", Proceedings of the 1995 Software Engineering Forum,
Boca Raton, FL, November pp. 31-40, (1995).
[3] Chen, X., Tsai, W., Huang, H., Poonawala, M., Rayadurgam, S., Wang, Y., Omega - an Integrated
Environment for C++ Program Maintenance, Proceedings of the International Conference on Software
Maintenance, pp. 114-123, (1996).
[4] Li.,L.,Offutt,A.J., Algorithmic Analysis of the Impact of Changes to Object-oriented Software,
Proceedings of the International Conference on Software Maintenance, pp. 171-184, (1996).
[5] Gallagher, K., Visual Impact Analysis, Proceedings of the International Conference on Software
Maintenance, pp. 52-58, (1996).
[6] Hutchins,M., Gallagher,K., Improving Visual Impact Analysis, Proceedings of the International
Conference on Software Maintenance, pp.294-301, (1996).
[7] Bohner.S.A., Software change impacts–an evolving perspective, Proceedings of the International
Conference on Software maintenance, pp 263 – 272, (2002).
[8] Pressman. A Dynamic Analysis Approach Concept Location. Technical report of Software
Engineering, (2004).
[9] Suhaimi Bin Ibrahim A Document-Based Software Traceability to Support Change Impact Analysis
of Object-Oriented Software, University Teknologi Malaysia, Thesis, pp. 45-56, (2006).
[10] M.Z. Kurian and A.S. Manjunath, Requirement traceability and impact analysis methodology to
evaluate software requirements changes, National Conference on Trends in Advanced Computing,
DMCE, Airoli, Navi Mumbai, 28-29, (2007).
[11] Ali R. Sharafat and Ladan Tahvildari, Change Prediction in Object- Oriented Software Systems: A
Probabilistic Approach, Journal of Software, Vol. 3, No. 5, pp.10-38, (2008).
[12] Peter Zielczynski, IBM, Requirements Management Using IBM Rational RequisitePro, (2013).
[13] Francisco A C Pincher, Requirement traceability Technical Report, University of Brasilia, (2000)
[14] Gotel, O.C.Z. and Finkelstein, A.C.W., An analysis of the requirements traceability problem,
Proceedings of ICRE'94, 1st International Conference on Requirements Engineering, Colorado
Springs, CO, IEEE CS Press, (1994).
[15] Haumer P ., Pohl K., Weidenhaupt K and Jarke M . Improving reviews by extended traceability.
Proceedings of 32nd Hawaii International Conference on system sciences volume 3; January 05-08;
Maui, Hawaii, (1999).
[16] Smith, T.J., READS: A requirements engineering tool, Proceedings of RE'93, International
Symposium on Requirements Engineering, January 4-6, San Diego, CA, Los Alamitos, CA, IEEE
Computer Society, (1993).
[17] Yu, W.D., Verifying software requirements - a requirements tracing methodology and its software
tool - RADIX, IEEE Journal on Selected Areas in Communications, 12(2):234-240, (1994).
[18] Sarah Maadawy and Akram Salah, Measuring Change Complexity from Requirements: A Proposed
Methodology, IMACST Volume 3, Number , February (2012).
Authors
Asst. Prof. Sunil T. D. received a Bachelor's degree in Electronics from Bangalore University
and a postgraduate degree in Electronics from Visvesvaraya Technological University at
BMSCE, Bangalore, and is pursuing a Ph.D. in Software Engineering at VTU, Belgaum,
Karnataka, India. He has 12 years of teaching experience in the field of Electronics &
Communication Engineering and has published several papers in peer-reviewed international
journals, as well as several conference papers.
Dr. M.Z. Kurian received his Bachelor's degree from Bangalore University, a postgraduate
degree in Industrial Electronics from Mysore University, and a Ph.D. in Software Engineering
from Dr. MGR University, Chennai, Tamil Nadu, India. He has more than 30 years of teaching
experience in the field of Electronics & Communication Engineering and has published several
papers in peer-reviewed international journals, including IEEE journals, as well as several
conference papers.