- The document discusses an automatic approach to structurally validate software product line variants using graph transformations.
- It proposes translating a feature diagram into a decorated tree model to facilitate analysis, then using graph grammars to validate products according to dependencies in the feature diagram.
- The approach is demonstrated through examples, showing the feasibility of using graph transformations to automatically check the structural validity of software product line configurations.
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computing demands ever-increasing performance from its algorithms. New massively parallel architectures are well suited to these algorithms, as they are known for offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for newcomers. This work aims at improving performance by feeding back to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. Hence, this work keeps model and code coherent while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in a GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
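The feedback loop the abstract describes can be pictured as a small sketch (all names and numbers here are invented for illustration, not the authors' tool): a traceability map links each generated OpenCL kernel back to the model element it came from, so profiling results can be attached to the model.

```python
# Illustrative sketch of profiling feedback through traceability links.
# Kernel names, model-element names, and timings are hypothetical.

# Traceability: which generated kernel came from which model element.
trace = {
    "kernel_matmul": "uml::Action::MultiplyMatrices",
    "kernel_reduce": "uml::Action::SumResults",
}

# Execution data collected by a (hypothetical) profiling tool, in ms.
profile = {"kernel_matmul": 41.7, "kernel_reduce": 3.2}

def annotate_model(trace, profile):
    """Attach execution times to model elements via the traceability links."""
    annotations = {}
    for kernel, element in trace.items():
        if kernel in profile:
            annotations[element] = {"exec_time_ms": profile[kernel]}
    return annotations

annotations = annotate_model(trace, profile)
# The slowest annotated element is the natural refactoring candidate.
hotspot = max(annotations, key=lambda e: annotations[e]["exec_time_ms"])
print(hotspot)  # uml::Action::MultiplyMatrices
```

The annotated model can then be refactored and the code regenerated, closing the loop the abstract describes.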
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, in terms of responsiveness and scalability, is a make-or-break quality for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java. It describes the challenges faced during the project life cycle and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were achieved through statistical analysis. The paper concludes with an analysis of the results.
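The kind of statistical comparison the paper alludes to can be sketched with the standard library alone; the response-time samples below are invented for illustration, and Welch's t statistic stands in for whatever test the authors actually used.

```python
# Comparing response-time samples from two (hypothetical) Java stacks.
import statistics, math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

tech_a = [120, 132, 125, 140, 128]  # response times in ms (invented)
tech_b = [98, 105, 99, 110, 102]

print(round(statistics.mean(tech_a) - statistics.mean(tech_b), 1))  # 26.2
print(round(welch_t(tech_a, tech_b), 2))  # 6.52
```

A large t value suggests the difference in mean response time is unlikely to be noise, which is the basis for claiming one technology is faster.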
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Present-day business network infrastructure changes rapidly, with new servers, services, connections, and ports added frequently, sometimes daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the increasing number of vulnerabilities and exploits, coupled with the frequent evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. In this paper, a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter called Unified NVA), is proposed. It is derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Performance Evaluation using Blackboard Technique in Software Architecture (IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluating non-functional requirements (such as performance). Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance is specified using performance models, it can be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective. We propose an approach for evaluating the performance of software systems based on the blackboard technique at the software architecture level. In this approach, the software architecture using the blackboard technique is first described with UML use case, activity, and component diagrams. The UML model is then transformed into an executable model based on timed colored Petri nets (TCPN). Consequently, by executing the executable model and analyzing its results, non-functional requirements including performance (such as response time) can be evaluated at the software architecture level.
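The idea of reading a response time off a timed Petri net can be illustrated with a toy sketch (this is not the authors' TCPN tool; the net structure and delays are invented): tokens carry timestamps, and each transition adds its firing delay.

```python
# Toy timed Petri net: request -> [parse: 2 ms] -> parsed -> [handle: 5 ms] -> done
transitions = [
    {"name": "parse", "input": "request", "output": "parsed", "delay": 2.0},
    {"name": "handle", "input": "parsed", "output": "done", "delay": 5.0},
]

def run(marking, transitions):
    """Fire enabled transitions until none remain; tokens are timestamps (ms)."""
    marking = {p: list(ts) for p, ts in marking.items()}
    fired = True
    while fired:
        fired = False
        for t in transitions:
            if marking.get(t["input"]):
                token = marking[t["input"]].pop(0)
                marking.setdefault(t["output"], []).append(token + t["delay"])
                fired = True
    return marking

final = run({"request": [0.0]}, transitions)
print(final["done"])  # [7.0] -> a 7 ms response time for one request
```

Real TCPN analysis handles concurrency, colored tokens, and stochastic delays, but the principle of deriving response time from timed firings is the same.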
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon of the "tyranny of the dominant decomposition". Requirements-level aspects are responsible for producing scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, hence reducing development and maintenance cost and establishing an approximately correct estimate of the effort and completion time of the project. In this paper, we present a framework to validate the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises a high-level and a low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: a viewpoint-based approach (AORE with ArCaDe) and a lexical-analysis approach based on Theme/Doc. The results obtained from the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality within the context of agile software development is extremely important and useful, not only to improve the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also to support communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision of accepting or rejecting each iteration within the software life cycle. This article introduces a novel solution called the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration of a software product's life cycle. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a single multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the PQEM method was carried out on two iterations of a web and mobile application in the healthcare domain. A single measure makes it possible to observe the evolution of the quality level reached by the product across its iterations.
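A minimal sketch of the aggregation idea described above, under stated assumptions: the characteristic names follow ISO/IEC 25010, but the check counts and the mean-based aggregation are invented stand-ins for PQEM's actual quality-coverage computation.

```python
# Per-characteristic quality coverage, aggregated into one value per iteration.
def quality_coverage(results):
    """results: characteristic -> (passed checks, total checks)."""
    return {c: passed / total for c, (passed, total) in results.items()}

# Invented results for one iteration.
iteration = {
    "functional suitability": (18, 20),
    "performance efficiency": (7, 10),
    "reliability": (9, 10),
}

coverage = quality_coverage(iteration)
# One possible aggregation: the mean coverage across characteristics.
aggregate = sum(coverage.values()) / len(coverage)
print(round(aggregate, 2))  # 0.83
```

Tracking such a value across iterations gives the single evolving quality measure the abstract argues for.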
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA... (ijseajournal)
In the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some half developed, some mostly developed, some minimally developed, and some not developed yet. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget. On the other hand, rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activities that help software project managers accept or reject changes during the development phase. This paper extends our previous work on developing a software requirement change effort estimation model prototype tool for the software development phase. The tool's achievements are demonstrated through extensive experimental validation using several case studies. The experimental analysis shows improved estimation accuracy over current change effort estimation models.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORK (ijseajournal)
Most of the currently available students' portal frameworks are tightly coupled. Recent research by the authors of this paper discussed how to distribute the concepts of the traditional students' portal framework and arrived at a distributed interoperable framework. This paper realizes the distributed interoperable students' portal framework by developing a prototype based on Service-Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility testing.
JELINSKI-MORANDA SOFTWARE RELIABILITY GROWTH MODEL: A BRIEF LITERATURE AND MO... (ijseajournal)
Analyzing the reliability of software can be done at various phases during the development of engineering software. Software reliability growth models (SRGMs) assess, predict, and control software reliability based on data obtained from the testing phase. This paper gives a literature review of the first and well-known Jelinski and Moranda (J-M) (1972) SRGM. A modification to the Jelinski and Moranda model is also given: the Jelinski and Moranda and the Schick and Wolverton (S-W) (1978) SRGMs are two special cases of our newly suggested general SRGM. Our proposed general SRGM, along with our survey, will open doors for much more useful research in the field of reliability modeling.
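The core of the J-M model is simple enough to compute directly: it assumes N initial faults, each contributing an equal amount φ to the failure rate, so the hazard before the i-th failure is λ_i = φ(N - i + 1). The sketch below uses illustrative parameter values; in practice N and φ are estimated from failure data (e.g., by maximum likelihood).

```python
# Jelinski-Moranda hazard rate with illustrative (not estimated) parameters.
def jm_hazard(N, phi, i):
    """Failure intensity between the (i-1)-th and the i-th failure."""
    return phi * (N - i + 1)

N, phi = 100, 0.005  # hypothetical: 100 initial faults, 0.005/hour per fault
rates = [jm_hazard(N, phi, i) for i in (1, 50, 100)]
print([round(r, 3) for r in rates])  # [0.5, 0.255, 0.005]

# Expected time to the next failure is 1/lambda_i: it grows as faults
# are detected and removed, which is the "reliability growth".
print([round(1 / r, 1) for r in rates])
```

The modified and generalized SRGMs the paper proposes change how this hazard decreases with i, which is why J-M and S-W can appear as special cases.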
Program analysis is useful for the debugging, testing, and maintenance of software systems because it provides information about the structure of a program's modules and the relationships between them. In general, program analysis is performed based on either a control flow graph or a dependence graph. However, in the case of aspect-oriented programming (AOP), a control flow graph (CFG) or a dependence graph (DG) alone is not enough to model the properties of aspect-oriented (AO) programs. Although AOP is good for the modular representation of crosscutting concerns, a suitable model for program analysis is required to gather information on an AO program's structure for the purpose of minimizing maintenance effort. In this paper, the Aspect-Oriented Dependence Flow Graph (AODFG) is proposed as an intermediate representation model for the structure of aspect-oriented programs. AODFG is formed by merging the CFG and the DG, so that more information is gathered about the dependencies between join points, advice, aspects, and their associated constructs, together with the flow of control from one statement to another. We discuss the performance of AODFG by analysing examples of AspectJ programs taken from the AspectJ Development Tools (AJDT).
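The merging step can be sketched abstractly (the node names below are invented stand-ins for statements and advice; the real AODFG construction works on actual AspectJ program elements): both graphs share the same nodes, and each edge records whether it carries control flow, a dependence, or both.

```python
# Control-flow edges and dependence edges over the same (hypothetical) nodes.
control_flow = [("s1", "s2"), ("s2", "advice_before"), ("advice_before", "s3")]
dependences = [("s1", "s3"), ("advice_before", "s3")]

def merge(cfg_edges, dep_edges):
    """Union of both edge sets, each edge labeled with its origin(s)."""
    merged = {}
    for e in cfg_edges:
        merged.setdefault(e, set()).add("control")
    for e in dep_edges:
        merged.setdefault(e, set()).add("dependence")
    return merged

aodfg = merge(control_flow, dependences)
print(sorted(aodfg[("advice_before", "s3")]))  # ['control', 'dependence']
print(len(aodfg))  # 4 distinct edges in the merged graph
```

Keeping both labels on one edge is what lets a single traversal answer questions that would otherwise require consulting the CFG and the DG separately.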
Improving Consistency of UML Diagrams and Its Implementation Using Reverse En... (journalBEEI)
Software development deals with various changes and evolution that cannot be avoided, since development processes are largely incremental and iterative. In Model Driven Engineering, inconsistency between a model and its implementation has a huge impact on the software development process in terms of added cost, time, and effort. The later the inconsistencies are found, the more cost they add to the software project. Thus, this paper describes the development of a tool that improves the consistency between Unified Modeling Language (UML) design models and their C# implementation using a reverse engineering approach. A list of consistency rules is defined to check vertical and horizontal consistency between structural (class diagram) and behavioral (use case diagram and sequence diagram) UML diagrams and the implemented C# source code. The inconsistencies found between the UML diagrams and the source code are presented in a textual description and visualized in a tree view structure.
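One consistency rule of the kind the paper describes can be sketched in a few lines (the class names, the C# snippet, and the regex-based extraction are invented for illustration; the actual tool's rules and parser will differ): every class in the UML class diagram should have a matching declaration in the source.

```python
# Toy vertical-consistency check: UML classes vs. C# class declarations.
import re

uml_classes = {"Order", "Customer", "Invoice"}  # from the class diagram
csharp_source = """
public class Order { }
public class Customer { }
"""

def missing_classes(model_classes, source):
    """Return model classes with no matching class declaration in the code."""
    declared = set(re.findall(r"\bclass\s+(\w+)", source))
    return sorted(model_classes - declared)

print(missing_classes(uml_classes, csharp_source))  # ['Invoice']
```

A real reverse-engineering tool would parse the C# syntax tree rather than use a regex, and would report such mismatches in its textual and tree-view output.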
Selenium - A Trending Automation Testing Tool (ijtsrd)
Selenium is an important testing tool for software quality assurance. The number of websites is increasing rapidly, and it has become essential to test them against various quality factors to make sure they meet the expected quality goals. Several companies spend heavily on testing tools, while Selenium is available completely free for performance testing. The open-source tool is well known for its broad capabilities and reach, and Selenium stands out from the crowd in this respect. Anyone can visit the Selenium website, download the latest version, and use it. It is not only open source but also highly modifiable: testers can make changes based on their needs and requirements. Manav Kundra, "Selenium - A Trending Automation Testing Tool", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31202.pdf Paper URL: https://www.ijtsrd.com/engineering/software-engineering/31202/selenium-%E2%80%93-a-trending-automation-testing-tool/manav-kundra
An Empirical Study of the Improved SPLD Framework using Expert Opinion Technique (IJEACS)
Due to the growing need for high-performance, low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time, and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful products with shorter time-to-market and minimal development and maintenance cost, while maintaining high product quality. This research paper is a validation of our proposed framework, the Improved Software Product Line (ISPL), using the expert opinion technique. An extensive survey based on a set of questionnaires on various aspects and sub-processes of the ISPLD framework was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements over several aspects of contemporary SPL frameworks.
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDY (ijseajournal)
Software product line engineering is a paradigm for developing a family of software products from a repository of reusable assets rather than developing each individual product from scratch. In feature-oriented software product line engineering, the common and variable characteristics of the products are expressed in terms of features. Using the software product line engineering approach, software products are produced en masse by means of two engineering phases: (i) domain engineering and (ii) application engineering. In the domain engineering phase, reusable assets are developed with variation points where variant features may be bound for each of the diverse products. In the application engineering phase, individual and customized products are developed from the reusable assets. Ideally, the reusable assets should be adaptable with little effort to support additional variations (features) that were not planned beforehand, in order to increase the usage context of the SPL as markets expand or when a new usage context of the software product line emerges. This paper presents exploratory research investigating the properties of features in code-assets implemented using an object-oriented programming style. In the exploration, we observed that program elements of disparate features formed unions as well as intersections that may affect the modifiability of the code-assets. The implication of this research for practice is that an unstable product line with a tendency toward emerging variations should aim for techniques that limit the number of intersections between program elements of different features. Similarly, the implication for research is that subsequent investigations using multiple case studies in different software domains and programming styles are needed to improve the understanding of these findings.
Testing and verification of software model through formal semantics a systema... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
‘O’ Model for Component-Based Software Development Process (ijceronline)
Technological advancement has forced users to become more dependent on information technology, and thus on software, which provides the platform for the implementation of information technology. Component-Based Software Engineering (CBSE) has been adopted by the software community to counter the challenges posed by the fast-growing demand for heavy and complex software systems. One of the essential reasons for adopting CBSE for software development is the fast development of complicated software systems within well-defined boundaries of time and budget. CBSE provides the mechanisms for assembling software out of already existing reusable components developed as autonomous pieces. The paper proposes a novel CBSE model, named the O model, with an eye on the available CBSE lifecycles.
E-government maturity models: a comparative study (ijseajournal)
Many maturity models have been used to assess or rank e-government portals. In order to assess the electronic services provided to citizens, an appropriate e-government maturity model should be selected. This paper aims at comparing 25 e-government maturity models to find the similarities and differences between them and also to identify their weaknesses and strengths. Although the maturity models present large similarities between them, our findings show that the features included in those models differ from one maturity model to another. Furthermore, while some maturity models cover certain features and introduce new ones, it seems that others simply ignore them.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORKijseajournal
Most of the currently available students' portal frameworks are tightly-coupled frameworks. A recent
research done by the authors of this paper has discussed how to distribute the concepts of the traditional
students' portal framework and came out with a distributed interoperable framework. This paper realizes
the distributed interoperable students' portal framework by developing a prototype. This prototype is based
on Service Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility
testing.
JELINSKI-MORANDA SOFTWARE RELIABILITY GROWTH MODEL: A BRIEF LITERATURE AND MO...ijseajournal
Analyzing the reliability of a software can be done at various phases during the development of
engineering software. Software reliability growth models (SRGMs) assess, predict, and controlthe software
reliability based on data obtained from testing phase.This paper gives a literaturereview of the first and
wellknownJelinski and Moranda(J-M) (1972)SRGM.Also a modification to Jelinski and Morandamodel is
given, Jelinski and Moranda and Schick and Wolverton (S-W) (1978)SRGMsare two special cases of our
new suggested general SRGM. Our proposed general SRGMalong with our Survey will open doors for
much more useful researches to be done in the field of reliability modeling.
Program analysis is useful for debugging, testing and maintenance of software systems due to information
about the structure and relationship of the program’s modules . In general, program analysis is performed
either based on control flow graph or dependence graph. However, in the case of aspect-oriented
programming (AOP), control flow graph (CFG) or dependence graph (DG) are not enough to model the
properties of Aspect-oriented (AO) programs. With respect to AO programs, although AOP is good for
modular representation and crosscutting concern, suitable model for program analysis is required to
gather information on its structure for the purpose of minimizing maintenance effort. In this paper Aspect
Oriented Dependence Flow Graph (AODFG) as an intermediate representation model is proposed to
represent the structure of aspect-oriented programs. AODFG is formed by merging the CFG and DG, thus
more information about dependencies between the join points, advice, aspects and their associated
construct with the flow of control from one statement to another are gathered. We discussthe performance
of AODFG by analysing some examples of AspectJ program taken from AspectJ Development Tools
(AJDT).
Improving Consistency of UML Diagrams and Its Implementation Using Reverse En...journalBEEI
Software development deals with various changes and evolution that cannot be avoided due to the development processes which are vastly incremental and iterative. In Model Driven Engineering, inconsistency between model and its implementation has huge impact on the software development process in terms of added cost, time and effort. The later the inconsistencies are found, it could add more cost to the software project. Thus, this paper aims to describe the development of a tool that could improve the consistency between Unified Modeling Language (UML) design models and its C# implementation using reverse engineering approach. A list of consistency rules is defined to check vertical and horizontal consistencies between structural (class diagram) and behavioral (use case diagram and sequence diagram) UML diagrams against the implemented C# source code. The inconsistencies found between UML diagrams and source code are presented in a textual description and visualized in a tree view structure.
Selenium - A Trending Automation Testing Toolijtsrd
Selenium is an important testing tool for software quality assurance. In recent days the number of websites is increasing rapidly, and it becomes essential to test websites against various quality factors to make sure they meet the expected quality goals. Several companies spend a lot of money on testing tools, while Selenium is available completely free for performance testing. The open source tool is well known for its unlimited capabilities and unlimited reach; Selenium stands out from the crowd in this respect. Anyone can visit the Selenium website, download the latest version and use it. It is not only open source but also highly modifiable: testers can make changes based upon their needs and requirements. Manav Kundra "Selenium - A Trending Automation Testing Tool" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-4, June 2020, URL: https://www.ijtsrd.com/papers/ijtsrd31202.pdf Paper URL: https://www.ijtsrd.com/engineering/software-engineering/31202/selenium-%E2%80%93-a-trending-automation-testing-tool/manav-kundra
An Empirical Study of the Improved SPLD Framework using Expert Opinion TechniqueIJEACS
Due to the growing need for high-performance and low-cost software applications and the increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest methodologies is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful products with shorter time-to-market and minimal development and maintenance cost while keeping product quality high. This research paper is a validation of our proposed framework, Improved Software Product Line (ISPL), using the Expert Opinion Technique. An extensive survey based on a set of questionnaires on various aspects and sub-processes of the ISPL framework was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements on several aspects of contemporary SPL frameworks.
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDYijseajournal
Software product line engineering is a paradigm for developing a family of software products from a
repository of reusable assets rather than developing each individual product from scratch. In feature-oriented software product line engineering, the common and the variable characteristics of the products
are expressed in terms of features. Using software product line engineering approach, software products
are produced en masse by means of two engineering phases: (i) Domain Engineering and, (ii) Application
Engineering. At the domain engineering phase, reusable assets are developed with variation points where
variant features may be bound for each of the diverse products. At the application engineering phase,
individual and customized products are developed from the reusable assets. Ideally, the reusable assets
should be adaptable with less effort to support additional variations (features) that were not planned
beforehand in order to increase the usage context of SPL as a result of expanding markets or when a new
usage context of software product line emerges. This paper presents an exploration research to investigate
the properties of features, in the code-asset implemented using Object-Oriented Programming Style. In the
exploration, we observed that program elements of disparate features formed unions as well as
intersections that may affect modifiability of the code-assets. The implication of this research to practice is
that an unstable product line with a tendency toward emerging variations should aim for techniques that
limit the number of intersections between program elements of different features. Similarly, the implication
of the observation to research is that there should be subsequent investigations using multiple case studies
in different software domains and programming styles to improve the understanding of the findings.
Testing and verification of software model through formal semantics a systema...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
‘O’ Model for Component-Based Software Development Processijceronline
The technology advancement has forced the user to become more dependent on information technology, and so on software. Software provides the platform for implementation of information technology. Component Based Software Engineering (CBSE) is adopted by software community to counter challenges thrown by fast growing demand of heavy and complex software systems. One of the essential reasons behind adopting CBSE for software development is the fast development of complicated software systems within well-defined boundaries of time and budget. CBSE provides the mechanical facilities by assembling already existing reusable components out of autonomously developed pieces of the software. The paper proposes a novel CBSE model named as O model, keeping an eye on the available CBSE lifecycle.
E government maturity models a comparative studyijseajournal
Many maturity models have been used to assess or rank e-government portals. In order to assess the electronic services provided to citizens, an appropriate e-government maturity model should be selected. This paper aims at comparing 25 e-government maturity models to find the similarities and differences between them and also to identify their weaknesses and strengths. Although the maturity models present large similarities between them, our findings show that the features included in those models differ from one maturity model to another. Furthermore, while some maturity models cover some features and introduce new ones, it seems that others just ignore them.
With interconnectivity between IT Service Providers and their customers and partners growing, fueled by
proliferation of IT Services Outsourcing, with some providers gaining leading positions in marketplace
today, challenges are faced by teams who are tasked to deliver integration projects with much desired
efficiencies both in cost and schedule. Such integrations are growing both in volume and complexity.
Integrations between different autonomous systems such as workflow systems of the providers and their
customers are an important element of this emerging paradigm. In this paper we present an efficient model
to implement such interfaces between autonomous workflow systems with close attention given to major
phases of these projects, from requirement gathering/analysis, to configuration/coding, to
validation/verification, several levels of testing and finally deployment. By deploying a comprehensive
strategy and implementing it in a real corporate environment, a 10%-20% reduction in cost and schedule
year over year was achieved for the past several years, primarily by improving testing techniques and detecting
bugs earlier in the development life-cycle. Some practical considerations are outlined in addition to
detailing the strategy for testing the autonomous system integrations domain.
Transaction handling in com, ejb and .netijseajournal
The technology evolution has shown a very impressive performance in the last years by introducing several
technologies that are based on the concept of component. As time passes, new versions of Component-
Based technologies are released in order to improve services provided by previous ones. One important
issue that regards these technologies is transactional activity. Transactions are important because they
consist in sending different small amounts of information collected properly in a single combined unit
which makes the process simpler, less expensive and also improves the reliability of the whole system,
reducing its chances to go through possible failures. Different Component-Based technologies offer
different ways of handling transactions. In this paper, we will review and discuss how transactions are
handled in three of them: COM, EJB and .NET. It can be expected that .NET offers more efficient
mechanisms due to the fact of being released later than the other two technologies. Nevertheless, COM and
EJB are still present in the market and their services are still widely used. Comparing transaction handling
in these technologies will be helpful to analyze the advantages and disadvantages of each of them. This
comparison and evaluation will be seen in two main perspectives: performance and security.
The analytic hierarchy process (AHP) has been applied in many fields and especially to complex
engineering problems and applications. The AHP is capable of structuring decision problems and finding
mathematically determined judgments built on knowledge and experience. This suggests that AHP should
prove useful in agile software development where complex decisions occur routinely. In this paper, the
AHP is used to rank the refactoring techniques based on the internal code quality attributes. XP
encourages applying refactoring where the code smells bad. However, refactoring may consume more time and effort. So, to maximize the benefits of refactoring in less time and with less effort, AHP has been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
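The AHP step the abstract describes can be sketched in a few lines: derive priority weights from a reciprocal pairwise comparison matrix, here via the common normalized-column approximation of the principal eigenvector. The three refactoring techniques and the judgment values on Saaty's 1-9 scale are invented for illustration, not taken from the paper.

```python
# Minimal AHP sketch: priorities from a pairwise comparison matrix using
# the normalized-column (row-average) approximation.

def ahp_priorities(matrix):
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]  # average across each row

techniques = ["Extract Method", "Move Method", "Rename"]
# Reciprocal judgments (illustrative): row i vs column j.
judgments = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
weights = ahp_priorities(judgments)
ranking = sorted(zip(techniques, weights), key=lambda t: -t[1])
print(ranking[0][0])  # highest-priority technique under these judgments
```

A full AHP application would also compute the consistency ratio of the judgment matrix before trusting the weights.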
Code coverage based test case selection and prioritizationijseajournal
Regression testing is executed to guarantee the desired functionality of existing software after it has undergone a number of amendments or variations. It testifies to the quality of the modified software by revealing regressions or software bugs in both functional and non-functional applications of the system. In fact, maintaining a test suite is enormous work, as it necessitates a large investment of time and money in test cases on a large scale. So, minimizing the test suite becomes an indispensable requisite to lessen the budget of regression testing. Precisely, this research paper aspires to present an innovative approach for the effective selection and prioritization of test cases which in return may procure maximum code coverage.
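The paper's own selection technique is not spelled out in the abstract; a standard baseline for coverage-based prioritization is the greedy "additional coverage" heuristic, sketched here on an invented test suite: repeatedly pick the test that covers the most not-yet-covered statements.

```python
# Greedy "additional coverage" test prioritization (a common baseline,
# not necessarily the paper's method). Test suite invented for illustration.

def prioritize(tests):
    """tests: dict mapping test name -> set of covered statement ids."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # Next test = the one adding the most new coverage.
        name = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[name] - covered:
            # Nothing adds coverage any more; append the rest in sorted order.
            order.extend(sorted(remaining))
            break
        covered |= remaining.pop(name)
        order.append(name)
    return order

suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2, 3, 4, 5},
    "t4": {5},
}
print(prioritize(suite))
```

Running the highest-gain tests first is what lets a truncated suite still reach most of the code, which is the budget argument the abstract makes.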
Systems variability modeling a textual model mixing class and feature conceptsijcsit
System reusability and cost are very important in the software product line design area. Developers’ goal is to increase system reusability and to decrease the cost and effort of building components from scratch for
each software configuration. This can be reached by developing software product line (SPL). To handle
SPL engineering process, several approaches with several techniques were developed. One of these
approaches is called separated approach. It requires separating the commonalities and variability for
system’s components to allow configuration selection based on user-defined features. Textual notation-based approaches have been used for their formal syntax and semantics to represent system features and
implementations. But these approaches are still weak in mixing features (conceptual level) and classes
(physical level) that guarantee smooth and automatic configuration generation for software releases. The
absence of a methodology supporting the mixing process is a real weakness. In this paper, we enhanced
SPL’s reusability by introducing some meta-features, classified according to their functionalities. As a first
consequence, mixing class and feature concepts is supported in a simple way using class interfaces and
inherent features for smooth move from feature model to class model. And as a second consequence, the
mixing process is supported by a textual design and implementation methodology, mixing class and feature
models by combining their concepts in a single language. The supported configuration generation process
is simple, coherent, and complete.
Bio-Inspired Requirements Variability Modeling with use Case ijseajournal
Background. Feature Model (FM) is the most important technique used to manage variability across products in Software Product Lines (SPLs). Often, SPL requirements variability is modeled by using a variable use case model, which is a real challenge in actual approaches: a large gap between their concepts and those of the real world leads to bad quality, poor support for FM, and variability that does not cover all requirements modeling levels. Aims. This paper proposes a bio-inspired use case variability modeling methodology dealing with the above shortages.
Method. The methodology is carried out through variable business domain use case meta modeling,
variable applications family use case meta modeling, and variable specific application use case generating.
Results. This methodology has led to integrated solutions to the above challenges: it decreases the gap
between computing concepts and real world ones. It supports use case variability modeling by introducing
versions and revisions features and related relations. The variability is supported at three meta levels
covering business domain, applications family, and specific application requirements.
Conclusion. A comparative evaluation with the closest recent works, upon some meaningful criteria in the
domain, shows the great conceptual and practical value of the proposed methodology and leads to promising research perspectives.
MANAGING AND ANALYSING SOFTWARE PRODUCT LINE REQUIREMENTSijseajournal
Modelling software product line (SPL) features plays a crucial role to a successful development of SPL.
Feature diagram is one of the widely used notations to model SPL variants. However, there is a lack of
precisely defined formal notations for representing and verifying such models. This paper presents an
approach that we adopt to model SPL variants by using UML and subsequently verify them by using first-order logic. UML provides an overall modelling view of the system. First-order logic provides a precise
and rigorous interpretation of the feature diagrams. We model variants and their dependencies by using
propositional connectives and build logical expressions. These expressions are then validated by the Alloy
verification tool. The analysis and verification process is illustrated by using Computer Aided Dispatch
(CAD) system.
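The encoding the abstract describes, variants and their dependencies expressed with propositional connectives, can be sketched directly. The paper validates the expressions with Alloy; as a lightweight stand-in, the same formulas can be checked by brute-force enumeration here. The feature names and constraints are invented, loosely echoing a dispatch system.

```python
# Propositional encoding of a small, invented feature diagram, checked by
# exhaustive enumeration (an Alloy-free stand-in for the paper's approach).
from itertools import product

features = ["dispatch", "gps", "radio", "manual_entry"]

def valid(c):
    return (
        c["dispatch"]                              # root feature is mandatory
        and (not c["gps"] or c["dispatch"])        # child implies parent
        and (c["radio"] or c["manual_entry"])      # or-group: pick at least one
        and not (c["gps"] and c["manual_entry"])   # "excludes" dependency
    )

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
valid_configs = [c for c in configs if valid(c)]
print(len(valid_configs))  # number of structurally valid products
```

Enumeration only scales to small diagrams; for realistic models this is exactly why the paper hands the formulas to a SAT-backed tool like Alloy.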
A framework to performance analysis of software architectural stylesijfcstjournal
A growing and executable system architecture has a significant role in the successful production of large and distributed systems. Assessing the effect of different decisions in architecture design can decrease the time and cost of software production, especially when these decisions are related to non-functional properties of the system. Performance is a non-functional property which relates to the timing behaviour of the system. In this paper we propose an approach for modelling and analysis of performance at the architecture level. To do this, we follow a general process which needs two formal notations for specifying the architecture and performance models of the system. We show how Stochastic Process Algebra (SPA), in the form of the PEPA language, can be used for performance modelling and analysis of software architectures modelled using a Graph Transformation System (GTS). To enable the architecture model for performance analysis, an equivalent PEPA model is constructed by transformation. The transformed performance model of the architecture has been analysed with the PEPA toolkit for properties like throughput, sensitivity, response time and utilisation rate. The analysis results are explained with regard to a realistic case study.
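To give a flavour of the steady-state measures a PEPA analysis produces, here is the smallest possible example worked by hand: one component alternating between two states with exponential rates. The rates are invented, and a real PEPA model would of course be analysed with the PEPA toolkit rather than this hand calculation.

```python
# Illustrative two-state continuous-time Markov chain: a component that
# alternates between "working" and "idle". Rates invented for illustration.

work_rate = 2.0    # rate of finishing a task (working -> idle)
arrive_rate = 1.0  # rate of receiving a task (idle -> working)

# Steady-state balance:
#   p_working * work_rate = p_idle * arrive_rate,  p_working + p_idle = 1
p_working = arrive_rate / (arrive_rate + work_rate)   # utilisation
throughput = p_working * work_rate                    # tasks per time unit

print(round(p_working, 4), round(throughput, 4))
```

Throughput and utilisation of this kind, computed over the full state space, are exactly the figures the PEPA toolkit reports for larger models.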
Semantic web based software engineering by automated requirements ontology ge...IJwest
This paper presents an approach for the automated generation of a requirements ontology using UML diagrams in service-oriented architecture (SOA). The goal is to facilitate software engineering processes such as software design, software reuse, and service discovery. The proposed method is based on four conceptual layers. The first layer includes requirements obtained from stakeholders; the second designs service-oriented diagrams from the data in the first layer and extracts their XMI code. The third layer includes a requirements ontology and a protocol ontology to describe the behavior of services and the relationships between them semantically. Finally, the fourth layer standardizes the concepts existing in the ontologies of the previous layer. The generated ontology goes beyond a pure domain ontology because it considers the behavior of services as well as their hierarchical relationships. Experimental results conducted on a set of UML4SOA diagrams in different scopes demonstrate the improvement of the proposed approach from different points of view, such as completeness of the requirements ontology, automatic generation, and consideration of SOA.
A Review of Feature Model Position in the Software Product Line and Its Extra...CSCJournals
Software has become a modern asset and competitive product. The product line, long used in the manufacturing and construction industries, has nowadays attracted a lot of attention in the software industry. The main importance of the product line engineering approach lies in the cost and time issues involved in marketing. The feature model is one of the most important methods of documenting variability in a product line; it shows product features and their dependencies. Because of the magnitude and complexity of product lines, building and maintaining feature models is complex and time-consuming work. In this article, the importance and position of the feature model in the product line are discussed, and feature model extraction methods are reviewed and compared.
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ...acijjournal
Refactoring is applied to the software artifacts so as to improve its internal structure, while preserving its
external behavior. Refactoring is an uncertain process and it is difficult to give some units for
measurement. The amount of refactoring that can be applied to the source code depends upon the skills of
the developer. In this research, we have perceived refactoring as a quantified object on an ordinal scale of
measurement. We have proposed a model for determining the degree of refactoring opportunities in the
given source-code. The model is applied on the three projects collected from a company. UML diagrams
are drawn for each project. The values for source-code metrics, that are useful in determining the quality of
code, are calculated for each UML of the projects. Based on the nominal values of metrics, each relevant
UML is represented on an ordinal scale. A machine learning tool, Weka, is used to analyze the dataset, imported in the form of an ARFF file, produced by the three projects.
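The final step the abstract describes, mapping metric values onto an ordinal scale and feeding Weka an ARFF file, can be sketched as follows. The metric names (`wmc`, `cbo`) and the thresholds are invented for illustration; the paper's actual cut-offs are not given in the abstract.

```python
# Sketch: map raw source-code metric values onto a 3-point ordinal scale
# and emit a minimal Weka .arff dataset. Metrics/thresholds are invented.

def to_ordinal(value, thresholds=(10, 20)):
    """Map a raw metric value onto low / medium / high."""
    low, high = thresholds
    return "low" if value < low else "medium" if value < high else "high"

def to_arff(relation, rows):
    lines = [f"@relation {relation}",
             "@attribute wmc {low,medium,high}",
             "@attribute cbo {low,medium,high}",
             "@data"]
    lines += [f"{to_ordinal(wmc)},{to_ordinal(cbo)}" for wmc, cbo in rows]
    return "\n".join(lines)

arff = to_arff("refactoring", [(5, 25), (15, 12)])
print(arff)
```

The resulting text file loads directly into Weka's Explorer, where classifiers or clusterers can then compare the projects as the paper describes.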
Validation and Verification of SYSML Activity Diagrams Using HOARE Logic ijseajournal
SysML diagrams are a significant medium for supporting software lifecycle management. The existing TBFV method is designed for error detection with full automation efficiency, but only for code. To verify the correctness of SysML diagrams, we apply the TBFV method to SysML diagrams. In this paper, we propose a novel technique, called TBFV-M, that makes use of Hoare Logic and testing to verify whether SysML diagrams meet the requirements. This research can improve the correctness of SysML diagrams, which is likely to significantly affect the reliability of the implementation. A case study is conducted to show its feasibility and to illustrate how the proposed method is applied; a discussion of potential challenges to TBFV-M is also presented.
Design and Implementation of Automated Visualization for Input/Output for Pro...ijseajournal
While formal specification is regarded as an effective means to capture accurate requirements and design, validation of the specifications remains a challenge. Specification animation has been proposed to tackle the challenge, but the lack of an effective representation of the input/output data in the animation can
considerably limit the understanding of the animation by clients. In this paper, we put forward a tool supported technique for visualization of the input / output data of processes in SOFL formal specifications. After discussing the motives of our work, we describe how data of each kind of data types available in the
SOFL language can be visualized to facilitate the representation and understanding of input / output data. We also present a supporting tool for the technique and a case study to demonstrate the usability and effectiveness of our proposed technique. Finally, we conclude the paper and point out the future research directions.
Detecting Aspect Intertype Declaration Interference at Aspect Oriented Design...IJERA Editor
Implementing crosscutting concerns requires that aspect-oriented developers be able to introduce some members to core concern modules alongside others. This may lead to a problem of interference among modules, either between classes and aspects or among aspects themselves. Such conflicts may cause a program to crash at runtime. The interference problem has been addressed before, but with complex solutions that become more complicated in proportion to the project size. In this work a relational database approach and relational algebra are used to detect intertype declaration interferences in aspect-oriented design models, in order to capture conflicts at an early stage before they appear as runtime errors. Detection is done in an approach not as complex as the previous ones.
In this paper we present an approach to Model Versioning and Model Repository in the context of Living
Models view. The idea of Living Models is a step forward from Model Based Software Development
(MBSD) in a sense that there is tight coupling between various artifacts of software development process.
These artifacts include System Models, Test Models, Executable artifacts etc. We explore the issues of
storage (import/export) of model elements into repository, inputs of cross link information, version
management and system analysis. The modeling environment in which these issues will be discussed is a
heterogeneous modeling environment, where different models types and different modeling tools are used
in the development process. An overview of the tool architecture is also presented.
The Application of Function Models In Software Design: A Survey Within the So...CSCJournals
Numerous function modelling approaches exist for software design. However, there is little empirical evidence on how these approaches are used in the early stages of software design. This article presents the results of an online survey on the application of function models in the academic and industrial software development community. The results show that more than 90% of the 75 respondents agreed with the statement that software projects that use function modelling techniques have a higher chance of success than other projects. UML is the most widely accepted and used modelling approach among the respondents, but only a handful of UML diagrams appear to be prominently addressed during the early software design stages. Asked for reasons for selecting or rejecting UML models, the majority of respondents mentioned using function models to understand software requirements and communicate these with clients and technical teams, whereas lack of familiarity, the time-consuming nature of some models and data redundancy are widely mentioned reasons for not or seldom using certain models. The study also shows a strong relationship between model usage and respondents’ professions. We conclude that improvements are required to ensure the benefits of the various available models and the links between the models can be fully exploited to support individual designers, to improve communication and collaboration, and to increase project success. A short discussion on the chosen solution direction, a simplified function modelling approach, closes the paper.
REQUIREMENTS VARIABILITY SPECIFICATION FOR DATA INTENSIVE SOFTWARE mathsjournal
Nowadays, the use of the feature modeling technique in software requirements specification has increased the variation support in Data Intensive Software Product Lines (DISPLs) requirements modeling. It is considered the easiest and the most efficient way to express commonalities and variability among different products’ requirements. Several recent works in DISPLs requirements handled data variability by different models which are far from real-world concepts. This led to difficulties in analyzing, designing, implementing, and maintaining this variability. This work proposes a software requirements specification methodology based on concepts closer to nature, inspired from genetics. This bio-inspiration has carried out important results in DISPLs requirements variability specification with feature modeling which were not approached by the conventional approaches. The feature model was enriched with features and relations facilitating the requirements variation management, not yet considered in the current relevant works. The use of the genetics-based methodology seems to be promising in data intensive software requirements variability specification.
an analysis and new methodology for reverse engineering of uml behavioralINFOGAIN PUBLICATION
The emergence of the Unified Modeling Language (UML) as a standard for modeling systems has encouraged the use of automated software tools that facilitate the development process from analysis through coding. Reverse engineering has become a viable method to measure an existing system and reconstruct the necessary model from its original. The reverse engineering of behavioral models consists in extracting high-level models that help understand the behavior of existing software systems. In this paper we present ongoing work on extracting UML diagrams from object-oriented programming languages. We propose an approach for the reverse engineering of UML behavior from the analysis of execution traces produced dynamically by an object-oriented application, using formal and semi-formal techniques for modeling the dynamic behavior of a system. Our methods show that this approach can produce UML behavioral diagrams in reasonable time and suggest that these diagrams are helpful in understanding the behavior of the underlying application.
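The dynamic part of such an approach, collecting execution traces from a running object-oriented program, can be sketched with Python's built-in tracing hook. The class and call sequence below are invented; a real tool would record caller/callee pairs, arguments and timestamps in order to reconstruct sequence diagrams.

```python
# Toy dynamic trace collection: record the sequence of Python-level calls
# made at run time -- the raw material for behavioral model extraction.
import sys

trace = []

def tracer(frame, event, arg):
    if event == "call":                     # one entry per function/method call
        trace.append(frame.f_code.co_name)
    return tracer

class Account:
    def deposit(self, amount):
        self._log(amount)                   # nested call shows up in the trace
    def _log(self, amount):
        pass

sys.settrace(tracer)
Account().deposit(10)
sys.settrace(None)                          # always restore the hook
print(trace)
```

From a list of ordered calls like this, grouping by receiver object yields the lifelines and messages of a UML sequence diagram.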
Modeling and Evaluation of Performance and Reliability of Component-based So...Editor IJCATR
Validation of software systems is very useful at the primary stages of their development cycle. Evaluation of functional
requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluation of non-functional requirements
(such as performance and reliability). Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Also, if the software performance has been specified based on performance models, it may be evaluated at the primary stages of the software development cycle. Therefore, modeling and evaluation of non-functional requirements at the software architecture level, which is designed at the primary stages of the software development cycle and prior to implementation, will be very effective.
We propose an approach for evaluating the performance and reliability of software systems, based on formal models (hierarchical timed
colored petri nets) in software architecture level. In this approach, the software architecture is described by UML use case, activity and
component diagrams, then UML model is transformed to an executable model based on hierarchical timed colored petri nets (HTCPN) by a
proposed algorithm. Consequently, upon execution of the executable model and analysis of its results, non-functional requirements including
performance (such as response time) and reliability may be evaluated in software architecture level.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
International Journal of Software Engineering & Applications (IJSEA), Vol.4, No.2, March 2013
STRUCTURAL VALIDATION OF SOFTWARE PRODUCT LINE VARIANTS: A GRAPH TRANSFORMATIONS BASED APPROACH
Khaled Khalfaoui1, Allaoua Chaoui2, Cherif Foudil3 and Elhillali Kerkouche4
1 Department of Computer Science, University of Jijel, Algeria (khalfaoui_kh@yahoo.fr)
2 Department of Computer Science and its Applications, University of Constantine 2, Constantine, Algeria (a_chaoui2001@yahoo.com)
3 Department of Computer Science, University of Biskra, Algeria (foud_cherif@yahoo.fr)
4 Department of Computer Science, University of Jijel, Algeria (elhillalik@yahoo.fr)
ABSTRACT
A Software Product Line is a set of software products that share a number of core properties but also differ
in others. Differences and commonalities between products are typically described in terms of features. A
software product line is usually modeled with a feature diagram, describing the set of features and
specifying the constraints and relationships between these features. Each product is defined as a set of
features. In this area of research, a key challenge is to ensure correctness and safety of these products.
There is an increasing need for automatic tools that can support feature diagram analysis, particularly
with a large number of features that modern software systems may have. In this paper, we propose an
automatic approach, based on model transformations, to validate products according to the
dependencies defined in the feature diagram. We first introduce the necessary meta-models. Then,
we present the graph grammars used to perform this task automatically with the AToM3 tool.
Finally, we show the feasibility of our proposal by means of running examples.
KEYWORDS
Software Product Lines, Feature Diagram, Variability Modelling, Structural Validation, Graph
Transformations
1. INTRODUCTION
Software product line (SPL) engineering is an approach for developing families of software
systems. The main advantage over traditional approaches is that all products can be developed
and maintained together. This technique has found a broad adoption in several branches of
software production. Feature models [1] are widely used in domain engineering to capture
common and variant features among the different variants. A Feature Diagram is a hierarchically
structured model that defines the features and their relationships. Each product is defined as a
combination of features. One of the major problems is the validation of products from a structural
viewpoint. Generally, this task becomes difficult with a large number of features. To remedy this
DOI : 10.5121/ijsea.2013.4202
problem, the development of automatic verification tools proves necessary. In this research area,
we are interested in the model transformations approach [2].
Model transformations are very useful in the evaluation, validation, manipulation and
processing of diagrams. They are performed by executing graph grammars [3]. A graph grammar
is composed of rules, each having a graph on its left-hand side (LHS) and right-hand side (RHS).
Rules are compared with an input graph called host graph. If a matching is found between the
LHS of a rule and a subgraph in the host graph, then the rule can be applied and the matching
subgraph of the host graph is replaced by the RHS of the rule. Furthermore, each rule may also
have application conditions that must be satisfied, as well as actions to be performed when the
rule is executed. A graph rewriting system iteratively applies rules of grammar in the host graph,
until no rules are applicable.
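The rewriting loop described above can be sketched in a few lines of Python. The `rewrite` function, the tuple-based rule encoding and the toy node-relabelling grammar below are hypothetical illustrations of the general mechanism, not part of any actual graph transformation tool.

```python
# A toy illustration of the rule-based rewriting loop described above.
# Graphs are modelled as plain dicts and rules as (priority, match, apply)
# tuples; all names here are assumptions made for the sketch.

def rewrite(host, rules):
    """Apply rules by ascending priority number until none matches."""
    while True:
        for _, match, apply_rule in sorted(rules, key=lambda r: r[0]):
            m = match(host)            # find a subgraph matching the LHS
            if m is not None:
                apply_rule(host, m)    # replace it with the RHS
                break                  # restart the search from the top
        else:
            return host                # no rule matched: execution ends

# Example: a one-rule grammar that rewrites every "a" node into a "b" node.
host = {"nodes": ["a", "a", "b"]}
rule = (1,
        lambda g: g["nodes"].index("a") if "a" in g["nodes"] else None,
        lambda g, i: g["nodes"].__setitem__(i, "b"))
rewrite(host, [rule])
print(host["nodes"])   # → ['b', 'b', 'b']
```

Note that the loop restarts the search after every successful application, exactly as described for the rewriting system above.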
In this paper, we propose an automatic framework based on this technique to check the validity of
SPL products according to the dependencies defined in the feature diagram (FD). First, we verify
parental relationships by exploring the FD tree in a top-down manner, from the root to the
leaves. Then, we deal with the cross-tree constraints. The analysis results are edited in a text file.
The remainder of this paper is organized as follows: In section 2, we discuss some related works.
Section 3 provides the background of our approach. We recall some basic notions about FD
diagrams and give an overview of graph transformations technique. In Section 4, we present our
proposal. We illustrate our framework through some examples in section 5. Finally, section 6
concludes the paper and gives some perspectives of this work.
2. RELATED WORK
Research in the field of SPL is becoming increasingly important, particularly through its ability to
increase software reuse. The success of this approach is conditioned by the correctness of final
products. With a high degree of variability, automated analyses and verification are crucial. Over
the past few years, a great variety of proposals have been put forward.
Mannion in [4] proposed the adoption of propositional formula for formally representing software
product lines. The principal idea is that variability and commonality are translated into a
propositional formula where the atoms represent features and the formula is valid if and only if a
given configuration is admissible. This idea has been extended by creating a connection between
feature models, grammars and propositional formula by Batory [5]. In [6], context-free grammars
have been used to represent the semantics of cardinality-based feature models. This semantic
interpretation of a feature model corresponds to an interpretation of the sentences recognized as
valid by the context-free grammar. Sun et al. in [7] proposed a formalization of FMs using Z and the
use of Alloy Analyzer for the automated support of the analyses of FMs. Wang et al. in [8] have
proposed an approach to modeling and verifying feature diagrams using semantic Web Ontology
Language (OWL). The authors have deployed OWL reasoning engines to check for the
inconsistencies of feature configurations fully automatically. Specialists in the field assert that
three of the most promising proposals for the automated analysis of feature models are based on
the mapping of feature models into Constraint Satisfaction Problem (CSP) solvers [9],
propositional satisfiability problem (SAT) solvers [10] and Binary Decision Diagrams (BDD)
[11]. The basic idea in CSP solvers is to find states where all constraints are satisfied. Somewhat
similar to CSP solvers, SAT solvers attempt to decide whether a given propositional formula is
satisfiable or not, that is, whether a set of logical values can be assigned to its variables in such a way that
makes the formula true. Also, BDD is a data structure for representing the possible configuration
space of a Boolean function, which can be useful for mapping a feature model configuration
space. For more details see [12], where Benavides presented a comprehensive literature review on
the most important techniques and tools.
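The propositional encoding mentioned above can be made concrete with a brute-force satisfiability check over a tiny fragment of the mobile phone example used later in this paper; the three-feature fragment and the `formula` function are simplifying assumptions for illustration, not a real SAT solver.

```python
# Each feature becomes a Boolean atom and each FD relationship a clause;
# a configuration is admissible iff the formula evaluates to true.
from itertools import product

features = ["Phone", "Calls", "GPS"]

def formula(v):                                  # v: feature name -> bool
    return (v["Phone"]                           # the root is always selected
            and v["Phone"] == v["Calls"]         # Calls is a mandatory child
            and (not v["GPS"] or v["Phone"]))    # GPS is optional: GPS -> Phone

# Enumerate all assignments and keep the admissible ones.
admissible = [dict(zip(features, bits))
              for bits in product([False, True], repeat=len(features))
              if formula(dict(zip(features, bits)))]
print(len(admissible))   # → 2 (GPS may or may not be selected)
```

Real tools replace this exponential enumeration with CSP, SAT or BDD engines, but the mapping from diagram to formula is the same idea.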
Nowadays, graph transformations are widely used for modelling and analysis of complex systems
in the area of software engineering [13]. In [14], the authors have proposed a tool that formally
transforms dynamic behaviours of systems expressed using Unified Modelling Language (UML)
Statechart and collaboration diagrams into their equivalent colored Petri nets models. Zambon et
al. in [15] have proposed an approach for the verification of software written in imperative
programming languages. The treatment was based on model checking of graph transition systems.
They used an explicit representation of program states as graphs, and they specified the
computational engine as graph transformations. In [16], an approach has been proposed to extract
and integrate the parallel changes made to Object-Oriented formal specifications in a
collaborative development environment. This approach allows combining the parallel changes
made while addressing any merging conflicts at the same time. The authors in [17] have proposed
an automatic approach to check UML models using the graph transformations approach. The idea
was to map Class Diagrams, StateCharts and Communication Diagrams into a single Maude
specification. Properties verification is performed using the Linear Temporal Logic (LTL) Model
Checker.
In previous work [18], we have proposed an automatic approach for behavioural analysis of SPL
products. The current paper concerns the structural validation.
3. BACKGROUND
3.1. Feature Modelling
Research on feature modeling has received much attention in the software product line
engineering community. Feature-Oriented Domain Analysis (FODA) [1] is the most popular. The
success of this approach resides in the introduction of feature models, which contain a graphical
tree-like notation that shows the hierarchical organization of features. In the tree, nodes represent
features; edges describe feature relations. A single root node represents the domain concept being
modeled. Figure 1 depicts a simplified feature model of a mobile phone SPL.
Figure 1. Feature diagram of a mobile phone product line
Current feature modeling notations may be divided into three main groups: Basic feature models,
Cardinality-based feature models and Extended feature models. In this work, we are only
interested in basic feature models. The relationships between a parent feature and its child features
(or subfeatures) are:
• Mandatory: If the parent feature is selected, the child feature must be selected.
• Optional: If the parent feature is selected, the child feature may be selected, but not
necessarily.
• XOR: If the parent feature is selected, exactly one of the child features must be
selected.
• OR: If the parent feature is selected, at least one of the OR-child features must be
selected.
In addition, cross-tree constraints are allowed. The most common are:
• Require: The selection of the source feature implies the selection of the destination feature.
• Exclude: Both features cannot be part of the same product.
The success of software product line approach is conditioned by the correctness of final products.
Modeling errors will inevitably affect the following steps. At this level, we must ensure that the
products are valid from a structural point of view. From the mobile phone feature model (Figure 1),
consider the following configurations:
• P1: Mobile-Phone, Calls, Screen, Basic, Media, MP3
• P2: Mobile-Phone, Calls, GPS, Screen, Basic, Media, MP3
We remark that P1 is correct; however, P2 is invalid.
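As a minimal sketch of what such a structural check involves, the Exclude constraint between GPS and Basic from the mobile phone diagram is already enough to separate the two configurations above; `violates_exclude` is a hypothetical helper written for this illustration, not the tool described in this paper.

```python
# A configuration is a set of selected feature names.
def violates_exclude(config, a, b):
    """An Exclude constraint fails when both features are in the product."""
    return a in config and b in config

P1 = {"Mobile-Phone", "Calls", "Screen", "Basic", "Media", "MP3"}
P2 = {"Mobile-Phone", "Calls", "GPS", "Screen", "Basic", "Media", "MP3"}
print(violates_exclude(P1, "GPS", "Basic"))   # → False: P1 is correct
print(violates_exclude(P2, "GPS", "Basic"))   # → True: P2 is invalid
```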
3.2. Model Transformations
By raising the abstraction level from textual programming languages to visual modeling
languages, model transformation techniques have recently attracted much attention. They are successfully
applied in several domains. The translation is performed by executing graph grammars which are
a generalization of Chomsky grammars for graphs [2]. A graph transformation rule (Figure 2) is a
special pair of pattern graphs called left hand side (LHS) and right hand side (RHS). They are
defined such that an instance matching the LHS is substituted with the instance defined in the
RHS when such a rule is applied. Rules are local in the sense that they handle only a small number
of model elements, and therefore the designer does not need to concentrate on the entire
transformation problem.
Figure 2. Graph transformation rule
In the rewriting process, rules are evaluated against an input graph, called the host graph. If a
matching is found between the LHS of a rule and a subgraph of the host graph, then the rule can
be applied. When a rule is applied, the matching subgraph of the host graph is replaced by the
RHS of the rule. Rules can have applicability conditions, as well as actions to be performed when
the rule is applied. Generally, rules are ordered according to a priority assigned by the user and
are checked from the higher priority to the lower priority. After a rule matching and subsequent
application, the graph rewriting system starts again the search. The graph grammar execution
ends when no more matching rules are found.
In the field of graph transformation, the meta-modeling technique is widely used to describe the
different kinds of formalisms needed in the specification and design of systems. To define a meta-
model, we have to provide two syntaxes. On one hand, the abstract formal syntax to denote the
formalism's entities, their attributes, their relationships and the constraints. On the other hand, the
concrete graphical syntax to define graphical appearance of these entities and relationships. The
advantage of this technique is that the generated tool accepts only syntactically correct models
according to the formalism definition.
AToM3 [19] is a visual tool for meta-modeling and model transformations. Its meta-layer allows a
high-level description of models using the Entity-Relationship (ER) formalism extended with the
ability to specify the graphical appearance. Once the meta-model of a given formalism is defined,
AToM3 generates automatically an interactive environment to visually manipulate (create and
edit) models. In the LHS of rules, the attributes of the nodes must be provided with values which
will be compared with the node attributes of the host graph during the matching process. These
attributes can be set to <ANY> or have specific values. In order to specify the mapping between
LHS and RHS, nodes in both LHS and RHS are identified by means of labels (numbers). If a
node label appears in the LHS of a rule, but not in the RHS, then the node is deleted when the rule
is applied. Conversely, if a node label appears in the RHS but not in the LHS, then the node is
created when the rule is applied. If a node is created or modified by a rule, we must specify in the
RHS the appropriate Python code to calculate its attributes' values. In addition, AToM3 allows the
use of global attributes accessible in all of the graph grammar rules.
4. A GRAPH TRANSFORMATION APPROACH FOR STRUCTURAL VALIDATION
OF SPL PRODUCTS
In this section we present our proposal. To facilitate the processing, we first translate the FD
diagram into a decorated tree, noted D-Tree. The purpose is to obtain a model that is easier to
explore. The validation of a given product is then performed by processing this D-Tree model.
The verification result is edited in a text file. So, as seen in Figure 3, we have two graph
transformations to realize.
Figure 3. The general outline of the proposed approach
In the following, we first present the proposed meta-models for FD and D-Tree formalisms. Then,
we introduce the used graph grammars.
4.1 Meta-Modeling
FD meta-model: It is composed of:
• Feature Entity: Each feature has three attributes:
o Its identifier Name.
o A boolean attribute called isRoot used to identify the root feature.
o A boolean attribute called isSelected used to specify selected features.
• OR Relationship, XOR Relationship, Mandatory Relationship, Optional Relationship,
Require Relationship and Exclude Relationship.
We note that none of these relationships has attributes; they differ only in their graphical
appearance, which is adjusted according to the appropriate notation.
D-Tree meta-model: It consists of:
• Node Entity: It has four attributes:
o Name: to identify the node.
o The same two Booleans as above: isRoot and isSelected.
o Count-SelectedChilds: used to specify the number of its selected children.
• GenericLink Relationship: This association represents links between nodes in the D-Tree
model. Graphically, they have the same appearance. To differentiate them, we added an
attribute called RelationType.
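The D-Tree meta-model above can be transcribed almost directly into Python dataclasses. This is an illustrative rendering only, not the ER-based meta-model generated by AToM3; note that the hyphenated attribute Count-SelectedChilds is renamed to a legal Python identifier.

```python
from dataclasses import dataclass

@dataclass
class Node:
    Name: str
    isRoot: bool = False
    isSelected: bool = False
    Count_SelectedChilds: int = 0    # "Count-SelectedChilds" in the paper

@dataclass
class GenericLink:
    parent: Node
    child: Node
    RelationType: str                # "Mandatory", "Optional", "XOR", "OR",
                                     # "Require" or "Exclude"

# All links share one class; only RelationType distinguishes them.
root = Node("Mobile-Phone", isRoot=True, isSelected=True)
calls = Node("Calls", isSelected=True)
link = GenericLink(root, calls, "Mandatory")
print(link.RelationType)             # → Mandatory
```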
4.2 Defining the Graph Grammars
1st GG: FD to D-Tree:
The purpose of this grammar is to translate the FD diagram into an equivalent D-Tree model. In
addition, it calculates the number of children for each feature. To perform this treatment, we
propose a graph grammar with eight rules (Figure 4).
Figure 4. FD to D-Tree graph grammar
The treatment begins with the creation of the D-Tree nodes. Each time the rewriting system
locates an FD feature, it associates it with a new D-Tree node. The attributes Name, isRoot and isSelected
are copied with the same values. To do this, we use a temporary attribute called Translated to
indicate whether each FD feature has been previously treated or not. Then, we move on to the
creation of the D-Tree links. At this level, each FD relationship is transformed into a D-Tree
GenericLink. The attribute RelationType is set according to the FD relationship type. At the same
time, for each node, we count the number of the selected children. It will be used in the next step.
Finally, we clean the created D-Tree model of the FD features.
The execution of rules N°2, N°3, N°4, N°5, N°6 and N°7 is not subject to any condition. This is
because each application of one of these rules deletes the matched FD relationship, so the
treatment cannot be repeated. This allowed us to avoid the use of other temporary attributes and
to get rid of the FD relationships at the same time.
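The overall effect of this first grammar can be imitated in plain Python: copy each FD feature to a D-Tree node, turn each typed relationship into a GenericLink tagged with RelationType, and count the selected children. The dict-based model below is an assumed encoding for illustration; the actual transformation is performed by the eight rewriting rules in AToM3.

```python
# A two-feature XOR fragment of the mobile phone diagram (Screen's children).
fd = {
    "features": {"Screen": {"isSelected": True},
                 "Basic":  {"isSelected": True},
                 "Colour": {"isSelected": False}},
    "relations": [("Screen", "Basic", "XOR"), ("Screen", "Colour", "XOR")],
}

dtree = {"nodes": {}, "links": []}
for name, attrs in fd["features"].items():       # rule 1: copy each feature
    dtree["nodes"][name] = dict(attrs, Count_SelectedChilds=0)
for parent, child, kind in fd["relations"]:      # rules 2-7: copy each link
    dtree["links"].append({"parent": parent, "child": child,
                           "RelationType": kind})
    if dtree["nodes"][child]["isSelected"]:      # count the selected children
        dtree["nodes"][parent]["Count_SelectedChilds"] += 1

print(dtree["nodes"]["Screen"]["Count_SelectedChilds"])   # → 1
```

The count computed here is exactly the Count-SelectedChilds decoration used by the second grammar.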
2nd GG: Validating-SPL-Products
This graph grammar acts on the obtained D-Tree model. Its purpose is to analyze the product in
question by checking all dependencies. The result of this verification is edited in a text file. We
propose to perform this treatment in two steps:
Step 1: In the first step, we deal only with parental relationships. For each node, if it is selected,
we have to check the validity of its children according to their parental relationship. If the latter is
not satisfied, the error is edited in the text file. Once all children are treated, the selected ones are
treated as parents in turn, and the same treatment starts again. So, we explore the D-Tree model
from top to bottom, starting with the root down to the leaves. To do this, we use the following
auxiliary attributes:
• Current-Parent: is used to identify the node currently being treated as parent.
• isTreated-AsParent: is used to indicate whether this node has been treated as parent or
not.
• isTreated-AsChild: is used to indicate whether this child has been visited during treatment
of its parent or not.
• ToTreat-AsParent: is used to indicate whether this node should be treated as parent or
not.
The number of selected children is used only to validate OR and XOR relationships, and only
once. To do this, we use an attribute called NumChilds-Checked.
Step 2: Now, we treat the cross-tree constraints. To check them one by one, we add to the
GenericLink associations an auxiliary attribute called isChecked. It is used to indicate whether a
Require or Exclude link has already been verified or not. Similarly, in case of an anomaly, the
error is edited in the text file. To carry out this process, we propose eleven rules (Figure 5).
The execution of the first nine rules realizes Step 1, while the application of rules N°10 and
N°11 performs Step 2.
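An analogue of the two validation steps can be written as a compact traversal in plain Python. The real verification is performed by the eleven graph grammar rules inside AToM3; the data structures and the `validate` function below are assumptions made for illustration, with error messages modelled on those shown in this paper's result figures.

```python
def validate(nodes, links, constraints):
    """nodes: name -> attrs; links: parental links; constraints: cross-tree."""
    errors = []
    # Step 1: parental relationships, from the root down to the leaves.
    for link in links:
        p, c, kind = link["parent"], link["child"], link["RelationType"]
        if nodes[p]["isSelected"] and kind == "Mandatory" \
                and not nodes[c]["isSelected"]:
            errors.append(f"{c} is mandatory feature. It must be selected.")
    for name, n in nodes.items():
        kinds = {l["RelationType"] for l in links if l["parent"] == name}
        if n["isSelected"] and "XOR" in kinds and n["Count_SelectedChilds"] != 1:
            errors.append(f"For {name} node, exactly one child must be selected.")
        if n["isSelected"] and "OR" in kinds and n["Count_SelectedChilds"] < 1:
            errors.append(f"For {name} node, at least one child must be selected.")
    # Step 2: cross-tree constraints, checked one by one.
    for kind, a, b in constraints:
        sel_a, sel_b = nodes[a]["isSelected"], nodes[b]["isSelected"]
        if kind == "Require" and sel_a and not sel_b:
            errors.append(f"{a} requires {b}.")
        if kind == "Exclude" and sel_a and sel_b:
            errors.append(f"{a} excludes {b}.")
    return errors

# A small invalid configuration: Calls is missing and GPS excludes Basic.
nodes = {"Mobile-Phone": {"isSelected": True, "Count_SelectedChilds": 1},
         "Calls":        {"isSelected": False, "Count_SelectedChilds": 0},
         "GPS":          {"isSelected": True, "Count_SelectedChilds": 0},
         "Basic":        {"isSelected": True, "Count_SelectedChilds": 0}}
links = [{"parent": "Mobile-Phone", "child": "Calls", "RelationType": "Mandatory"},
         {"parent": "Mobile-Phone", "child": "GPS", "RelationType": "Optional"}]
constraints = [("Exclude", "GPS", "Basic")]
for e in validate(nodes, links, constraints):
    print("- Error:", e)     # two errors are reported
```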
5. ILLUSTRATIVE EXAMPLES
To illustrate our framework, let us consider the Mobile Phone example presented previously in
Section 3.
First, we have to create the FD model (Figure 6).
Figure 6. FD diagram
Then, we select the features of the product that we are going to validate. Consider the product:
• P1: Mobile-Phone, GPS, Screen, Basic, Media.
Using this environment, we specify this product by setting the attribute isSelected of these
features to true.
By executing the first graph grammar, we obtain the corresponding D-Tree model (Figure 7).
Figure 7. The generated D-Tree model
To perform the validation of this product automatically, we execute the second graph
grammar on the obtained D-Tree model. The text file containing the results of this analysis is
presented in Figure 8.
- Error: Calls is mandatory feature. It must be selected.
- Error: GPS excludes Basic.
- Error: For Media node, at least one child must be selected.
Figure 8. P1 validation results
It shows that there are three errors:
o The first concerns the node Calls. It is a mandatory feature, but it is not selected.
o The second is the fact that GPS and Basic features are linked by an Exclude
relationship, whereas they are both selected.
o The last error regards the node Media. This feature is selected, but none of its
children is selected. There must be at least one.
Consider now the product:
• P2: Mobile-Phone, Calls, Screen, Basic, Colour, Media, Camera.
By following the same steps, we obtain the text file presented in Figure 9. It shows that P2 is
invalid since:
o The features Basic and Colour are exclusive to each other, but both are selected.
o The feature Camera necessitates the presence of High-Resolution.
- Error: For Screen node, exactly one child must be selected.
- Error: Camera requires High-Resolution.
Figure 9. P2 validation results
Finally, for the product:
• P3: Mobile-Phone, Calls, GPS, Screen, High-Resolution, Media, Camera, MP3.
The generated text file is empty. There are no errors; therefore, this product is valid.
6. CONCLUSION
Software product line engineering is about producing a set of related products that share more
commonalities than variabilities. Feature models are widely used in domain engineering to
capture common and variant features among the different variants. Each legal product is defined
as a combination of features that respects all dependencies defined in the FD model. With a large
number of features, the structural validation of products is extremely difficult. In fact, we have to
verify satisfaction of all relationships and constraints defined in the FD model.
To remedy this problem, we have proposed a novel approach based on graph transformations with
two steps. In the first, we have treated the parental relationships by exploring the FD tree in a
top-down manner, from the root to the leaves. For each selected feature, the relationship with its
children is checked according to the selected ones. The second step is dedicated to verifying the
cross-tree dependencies. Constraints that are not satisfied are detected. To do this, we have
proposed two graph grammars. The first is used to translate the FD diagram into an equivalent
decorated tree in which all relationships between nodes are of the same type. The second
performs the verification by processing the D-Tree model. The analysis results are generated in a text
file. The choice of the graph transformations technique is motivated by the fact that:
• It constitutes the most appropriate solution, which allowed us to explore the FD
diagram easily.
• As this verification is based on local computations, it was found that graph grammar rules
are the most suitable solution.
• It is implemented directly as a fully automatic tool.
• It is extensible.
In our future work, we plan to develop an integrated environment to generate all valid products
according to the dependencies specified in the feature diagram.
REFERENCES
[1] K. Kang, S. Cohen, J. Hess, W. Novak and S. Peterson, (1990) “Feature-oriented domain analysis
(FODA) feasibility study”, Technical Report, CMU/SEI- 90-TR-21.
[2] M. Andries, G. Engels, A. Habel, B. Hoffmann, H. J. Kreowski, S. Kuske, D. Plump, A. Schürr and G.
Taentzer, (1999) “Graph transformation for specification and programming”, Science of Computer
Programming, Vol. 34, No. 1, pp 1-54.
[3] G. Rozenberg, (1999) “Handbook of graph grammars and computing by graph transformation”,
World Scientific, Singapore, Vol. 1.
[4] M. Mannion, (2002) “Using first-order logic for product line model validation”, SPLC 2, Chastek,
G.J. (Ed.), Springer-Verlag, London, pp. 176-187.
[5] D. Batory, (2005) “Feature models, grammars, and propositional formulas”, Lecture notes in
Computer Science, Vol. 3714, pp. 7-20.
[6] K. Czarnecki, S. Helsen, U. Eisenecker, (2004) “Staged configuration using feature models”, In
SPLC 2004, Heidelberg, LNCS, Vol. 3154, pp. 266-283.
[7] J. Sun, H. Zhang, Y.F. Li and H. Wang, (2005) “Formal semantics and verification for feature
modeling”, In Proceedings of the ICECSS05, pp. 303-312.
[8] H. Wang, Y. Li, J. Sun, H. Zhang and J. Pan, (2007) “Verifying feature models using OWL”, Web
Semantics: Science Services and Agents on the World Wide Web, Vol. 5, No. 2, pp. 117-129.
[9] D. Benavides, P. Trinidad and A. Ruiz-Cortes, (2005) “Automated reasoning on feature models”, in
CAiSE 2005, LNCS, Springer, Vol. 3520, pp. 491-503.
[10] E. Bagheri, T.D. Noia, D. Gasevic and A. Ragone, (2012) “Formalizing interactive staged feature
model configuration”, Journal of Software: Evolution and Process, Vol. 24, No. 4, pp. 375-400.
[11] M. Mendonca, A. Wasowski, K. Czarnecki, and D. Cowan, (2008) “Efficient compilation techniques
for large scale feature models”, in Proceedings of GPCE '08, USA, ACM Press, pp.13-22.
[12] D. Benavides, S. Segura, A. Ruiz-Cortés, (2010) “Automated Analysis of Feature Models 20 Years
Later: A Literature Review”, Journal of Information Systems, Vol. 35, No. 6, pp. 615-636.
[13] H. Ehrig, G. Engels, H. J. Kreowski, G. Rozenberg, (2012) “Graph Transformation”, Sixth
International Conference on Graph Transformation, ICGT 2012, LNCS, Springer, Vol. 7562.
[14] E. Kerkouche, A. Chaoui, E. B. Bourennane and O. Labbani, (2010) “On the use of graph
transformation in the modeling and verification of dynamic behavior in UML models”, JSW, Vol. 5,
No. 11, pp. 1279-1291.
[15] E. Zambon and A. Rensink, (2011) “Using Graph Transformations and Graph Abstractions for
Software Verification”, ICGT-DS, Electronic Communications of the EASST, Vol. 38.
[16] F. Taibi, (2012) “Automatic Extraction and Integration of Changes in Shared Software
Specifications”, International Journal of Software Engineering and Its Applications, Vol. 6, No. 1, pp.
29-45.
[17] W. Chama, R. ElMansouri and A. Chaoui, (2012) “Model Checking and Code Generation for UML
Diagrams using Graph Transformation”, International Journal of Software Engineering &
Applications (IJSEA), Vol. 3, No. 6, pp. 39-55.
[18] K. Khalfaoui, A. Chaoui, C. Foudil and E. Kerkouche, (2012) “Formal Specification of Software
Product Lines: A Graph Transformation Based Approach”, JSW, Vol. 7, No. 11, pp. 2518-2532.
[19] J. De Lara and H. Vangheluwe, (2002) “AToM3: a tool for multi-formalism modelling and
meta-modelling”, Lecture Notes in Computer Science, Vol. 2306, pp. 174-188.