The aim of the Software Product Line (SPL) approach is to improve the software development process by producing software products that match the stakeholders’ requirements. One of the important topics in SPLs is the feature model (FM) configuration process. The purpose of configuration here is to select and remove specific features from the FM in order to produce the required software product. At the same time, detecting differences between an application’s requirements and the available capabilities of the implementation platform is a major concern of application requirements engineering. It is possible that implementing the selected features of the FM needs certain software and hardware infrastructure, such as a database, operating system, and hardware, that cannot be made available by stakeholders. We address the FM configuration problem by proposing a method that employs a two-layer FM comprising the application and infrastructure layers. We also demonstrate this method in the context of a case study in the SPL of a sample E-Shop website. The results demonstrate that this method can support both functional and non-functional requirements and can solve the problems arising from lack of attention to implementation requirements in the SPL FM selection phase.
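The configuration check the abstract describes can be sketched as a small consistency test between the two layers: a selection of application-layer features is implementable only if the infrastructure the stakeholders can provide covers everything those features require. All feature names and mappings below are illustrative assumptions, not taken from the paper.

```python
# Illustrative two-layer feature model: each application-layer feature
# maps to the infrastructure-layer features it requires.
APP_REQUIRES_INFRA = {
    "online_payment": {"database", "tls_support"},
    "product_search": {"database"},
    "static_catalog": set(),
}

def missing_infrastructure(selected_features, available_infra):
    """Return infrastructure features required by the selected
    application features but not available on the platform."""
    required = set()
    for feature in selected_features:
        required |= APP_REQUIRES_INFRA.get(feature, set())
    return required - set(available_infra)

# A configuration is valid only when nothing is missing; here the
# platform lacks TLS support, so "online_payment" cannot be realized.
gaps = missing_infrastructure({"online_payment", "static_catalog"},
                              {"database"})
```

In a full FM the infrastructure layer would also carry its own cross-tree constraints; this sketch only shows the gap-detection idea.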
A Review of Feature Model Position in the Software Product Line and Its Extra... (CSCJournals)
Software has become a modern asset and a competitive product. The product line concept, long used in the manufacturing and construction industries, has nowadays attracted a lot of attention in the software industry. The main importance of the product line engineering approach lies in the cost and time-to-market issues involved. The feature model is one of the most important methods of documenting variability in a product line, showing product features and their dependencies. Because of the magnitude and complexity of product lines, building and maintaining feature models is complex and time-consuming work. In this article, the importance and position of the feature model in the product line are discussed, and feature model extraction methods are reviewed and compared.
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation requires more and more performance from its algorithms. New massively parallel architectures are well suited to these algorithms and are known for offering high performance and power efficiency. Unfortunately, as parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and to take advantage of architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by feeding back to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. In order to keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to achieve better performance in the regenerated code. Hence, this work keeps model and code coherent while harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the context of GPUs. The example uses a transformation chain from UML-MARTE models to OpenCL code.
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
The present business network infrastructure is changing rapidly, with new servers, services, connections, and ports added often, at times daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the increasing number of vulnerabilities and exploits, coupled with the recurrent evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. In this paper, a new approach, the Unified Process for Network Vulnerability Assessment (hereafter called unified NVA), is proposed. It is derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance responsiveness and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre-Examination Process Automation System (PEPAS) project, implemented in Java technology, along with the challenges faced during the project life cycle and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were made through statistical analysis. The paper concludes with an analysis of the results.
A comparison of component-based software engineering and model-driven develop... (Nikolay Grozev)
Component-based software engineering (CBSE) and model-driven development (MDD) are two approaches for handling software development complexity. In essence, while CBSE focuses on the construction of systems from existing software modules called components, MDD promotes the use of system models which, after a series of transformations, result in an implementation of the desired system. Even though they are different, MDD and CBSE are not mutually exclusive. However, there has not been any substantial research into their similarities and differences or how they can be combined. In this respect, the main goal of this thesis is to summarize the theoretical background of MDD and CBSE, and to propose and apply a systematic method for their comparison. The method takes into account the different effects that these development paradigms have on a wide range of development aspects. The comparison results are then summarized and analyzed.
The thesis also enriches the theoretical discussion with a practical case study comparing CBSE and MDD with respect to ProCom, a component model designed for the development of component-based embedded systems in the vehicular, automation, and telecommunication domains. The aforementioned comparison method is refined and applied for this purpose. The comparison results are again summarized and analyzed, and proposals for future work on ProCom are made.
FACTORS ON SOFTWARE EFFORT ESTIMATION (ijseajournal)
Software effort estimation is an important process in the system development life cycle, as inaccurate estimates may affect the success of software projects. In the past few decades, various effort prediction models have been proposed by academics and practitioners. Traditional estimation techniques include Lines of Code (LOC), the Function Point Analysis (FPA) method, and Mark II Function Points (Mark II FP), which have proven unsatisfactory for predicting the effort of all types of software. In this study, the author proposes a regression model to predict the effort required to design small and medium-scale application software. To develop the model, the author used 60 completed software projects developed by a software company in Macau, extracted factors from the projects, and applied them to a regression model. The resulting model predicts software effort with an accuracy of MMRE = 8%.
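The reported accuracy measure, MMRE (Mean Magnitude of Relative Error), is the mean of the relative prediction errors over all projects; lower values mean better predictions. The project data below is invented for illustration, not taken from the Macau dataset.

```python
def mmre(actual, predicted):
    """Mean of |actual - predicted| / actual over all projects."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

# Hypothetical effort figures (e.g. person-hours) for three projects.
actual_effort    = [100.0, 250.0, 80.0]
predicted_effort = [ 92.0, 270.0, 84.0]

score = mmre(actual_effort, predicted_effort)  # relative errors 8%, 8%, 5%
```

An MMRE of 8%, as the abstract reports, would mean the model's predictions deviate from actual effort by 8% on average.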
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been a significant effort to analyze, design, and implement information systems to process information and data and to solve various problems. On the one hand, the complexity of contemporary systems and the striking increase in the variety and volume of information have led to a great number of components and elements, and to a more complex structure and organization of information systems. On the other hand, it is necessary to develop systems that meet all of the stakeholders' functional and non-functional requirements. Considering that evaluation and assessment of these requirements prior to the design and implementation phases consume less time and reduce costs, the best time to measure the evaluable behavior of a system is when its software architecture is available. One way to evaluate a software architecture is to create an executable model of it.
The present research performed availability assessment, taking repair, maintenance, and accident time parameters into consideration. Failures of both software and hardware components have been considered in the architecture of software systems. To describe the architecture easily, the authors used the Unified Modeling Language (UML). However, due to the informality of UML, they also utilized Colored Petri Nets (CPN) for assessment. Eventually, the researchers evaluated a CPN-based executable model of the architecture using CPN Tools.
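The core quantity behind such an assessment is steady-state availability, derived per component from mean time between failures (MTBF) and mean time to repair (MTTR), and combined across components that must all be up. This is only a hedged sketch of the underlying idea; the CPN-based executable model in the paper is far richer, and the numbers here are illustrative.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(components):
    """All components must be up, so availabilities multiply.
    `components` is a list of (MTBF, MTTR) pairs."""
    result = 1.0
    for mtbf, mttr in components:
        result *= availability(mtbf, mttr)
    return result

# e.g. a hardware node and a database process in series (made-up figures)
system_avail = series_availability([(1000.0, 10.0), (500.0, 5.0)])
```

A Petri-net model generalizes this closed-form calculation to repair queues, accident times, and dependencies that simple series/parallel formulas cannot capture.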
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for the evaluation of non-functional requirements (such as performance). Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Also, if software performance is specified based on performance models, it may be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance of software systems, based on the blackboard technique, at the software architecture level. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity, and component diagrams. Then the UML model is transformed into an executable model based on timed colored Petri nets (TCPN). Consequently, upon execution of the executable model and analysis of its results, non-functional requirements including performance (such as response time) may be evaluated at the software architecture level.
An Empirical Study of the Improved SPLD Framework using Expert Opinion Technique (IJEACS)
Due to the growing need for high-performance and low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time, and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful, high-quality products with shorter time-to-market and minimal development and maintenance cost. This research paper is a validation of our proposed framework, the Improved Software Product Line (ISPL), using the expert opinion technique. An extensive survey based on a set of questionnaires on various aspects and sub-processes of the ISPLD framework was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements on several aspects of contemporary SPL frameworks.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality within the context of agile software development is extremely important and useful, not only for improving the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also for supporting communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or reject each iteration within the software life cycle. This article introduces a novel solution called the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a unique multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the PQEM method was carried out with two iterations of a web and mobile application in the healthcare environment. A single measure makes it possible to observe the evolution of the quality level reached as the product evolves through its iterations.
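The aggregation idea behind such a per-iteration quality value can be sketched as a weighted average of per-characteristic quality coverage. The weighting scheme and the characteristic names below are assumptions for illustration, not PQEM's exact formula.

```python
def iteration_quality(coverage, weights):
    """Weighted average of quality coverage per ISO/IEC 25010
    characteristic; coverage values are fractions in [0, 1]."""
    total = sum(weights.values())
    return sum(coverage[c] * w for c, w in weights.items()) / total

# Hypothetical quality coverage reached in one iteration, and weights
# reflecting how much the stakeholders care about each characteristic.
coverage = {"reliability": 0.9, "usability": 0.8, "security": 0.7}
weights  = {"reliability": 2.0, "usability": 1.0, "security": 1.0}

q = iteration_quality(coverage, weights)  # single aggregated measure
```

Tracking this single value across iterations gives the kind of evolution-of-quality view the abstract describes, while the per-characteristic coverages remain available when a low value needs explaining.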
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon known as the "tyranny of the dominant decomposition". Requirements-level aspects are responsible for producing scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, hence reducing development and maintenance costs and establishing an approximately correct estimate of the effort and completion time of the project. In this paper, we present a validation framework to validate the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises high-level and low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: the viewpoint-based AORE with ArCaDe and the lexical-analysis-based Theme/Doc approach. The results obtained from the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
Process-Centred Functionality View of Software Configuration Management: A Co... (theijes)
The objective of this paper is to provide insight into various agent-oriented methodologies by using an enhanced comparison framework based on criteria such as process-related criteria, steps-and-techniques-related criteria, usability criteria, model-related ("concepts"-related) criteria, and support-related criteria. The result also incorporates input collected from users of the agent-oriented methodologies through a questionnaire-based survey.
Requirement Engineering Challenges in Development of Software Applications an... (Waqas Tariq)
Requirements engineering acts as the foundation for any software system and is one of its most important tasks. The entire software edifice rests on four pillars of requirements engineering processes: functional and non-functional requirements work as the bricks, while design, implementation, and testing add the stories that build the software tower on top of this foundation. Thus, the base needs to be well built to support the rest of the tower, and requirement engineers face numerous challenges in developing successful software. The paper highlights requirements engineering challenges encountered in the development of software applications and in the selection of the right commercial off-the-shelf (COTS) components. Comprehending stakeholders' needs; incomplete and inconsistent process descriptions; verification and validation of requirements; classification and modeling of extensive data; and selection of a COTS product with minimum requirement modifications are the foremost challenges faced during requirements engineering. Moreover, the paper discusses and critically evaluates challenges highlighted by various researchers. Besides, the paper presents a model that encapsulates seven major challenges that recur during the requirements engineering phase; these challenges have been further categorized into problems. Furthermore, the model has been linked with previous research to elaborate challenges that were not specified earlier. Anticipating requirements engineering challenges could assist requirement engineers in protecting the software tower from destruction.
GENERATING SOFTWARE PRODUCT LINE MODEL BY RESOLVING CODE SMELLS IN THE PRODUC... (ijseajournal)
Software Product Lines (SPLs) refer to software engineering methods, tools, and techniques for creating a collection of similar software systems from a shared set of software assets using a common means of production. This concept is recognized as a successful approach to reuse in software development. Its purpose is to reduce production costs by reusing existing features and managing the variability between the different products with respect to particular constraints. Software product line engineering is the production process in product lines and the development of a family of systems by reusing core assets. It exploits the commonalities between software products and preserves the ability to vary the functionalities and features between these products. The adopted strategy for building an SPL can be top-down or bottom-up. Depending on the selected strategy, it is possible to face an inappropriate implementation in the SPL model or the derived products during this process. The code can contain code smells or code anomalies. Code smells are considered problems in source code that can have an impact on the quality of the derived products of an SPL. The same problem can be present in many products derived from an SPL due to reuse, or in the obtained product line when the bottom-up strategy is selected. A possible solution to this problem is refactoring, which can improve the internal structure of source code without altering its external behavior. This paper proposes an approach for building an SPL from source code using the bottom-up strategy. Its purpose is to reduce code smells in the obtained SPL by refactoring source code. The approach uses reverse engineering to obtain the feature model of the SPL.
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDY (ijseajournal)
Software product line engineering is a paradigm for developing a family of software products from a repository of reusable assets rather than developing each individual product from scratch. In feature-oriented software product line engineering, the common and variable characteristics of the products are expressed in terms of features. Using the software product line engineering approach, software products are produced en masse by means of two engineering phases: (i) domain engineering and (ii) application engineering. In the domain engineering phase, reusable assets are developed with variation points where variant features may be bound for each of the diverse products. In the application engineering phase, individual and customized products are developed from the reusable assets. Ideally, the reusable assets should be adaptable with little effort to support additional variations (features) that were not planned beforehand, in order to increase the usage context of the SPL as markets expand or when a new usage context of the software product line emerges. This paper presents exploratory research investigating the properties of features in code-assets implemented using an object-oriented programming style. In the exploration, we observed that program elements of disparate features formed unions as well as intersections that may affect the modifiability of the code-assets. The implication of this research for practice is that an unstable product line with a tendency for emerging variations should aim for techniques that limit the number of intersections between program elements of different features. Similarly, the implication of the observation for research is that there should be subsequent investigations using multiple case studies in different software domains and programming styles to improve the understanding of the findings.
Modeling and Evaluation of Performance and Reliability of Component-based So... (Editor IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for the evaluation of non-functional requirements (such as performance and reliability). Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Also, if software performance is specified based on performance models, it may be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance and reliability of software systems, based on formal models (hierarchical timed colored Petri nets), at the software architecture level. In this approach, the software architecture is described by UML use case, activity, and component diagrams; then the UML model is transformed into an executable model based on hierarchical timed colored Petri nets (HTCPN) by a proposed algorithm. Consequently, upon execution of the executable model and analysis of its results, non-functional requirements including performance (such as response time) and reliability may be evaluated at the software architecture level.
A Comparative Study of Forward and Reverse Engineering (ijsrd.com)
With software development booming compared to 20 years ago, software developed in the past may or may not have well-supported documentation from its evolution. This can widen the specification gap between the documentation and the legacy code when making further evolutions and updates. Understanding the legacy code and the underlying decisions made during its development is the prime goal, and it is well supported by reverse engineering. In this paper, we compare transformational forward engineering, where a stepwise abstraction is obtained, with the transformational reverse methodology. Because the forward transformation process produces overlap of the decisions, performance is affected; hence the transformational method of reverse engineering, which runs the forward engineering process backwards, is suitable. Besides, the design recognition obtained is domain knowledge which can be used in the future by forward engineers.
Availability Assessment of Software Systems Architecture Using Formal ModelsEditor IJCATR
There has been a significant effort to analyze, design and implement the information systems to process the information and data, and solve various problems. On the one hand, complexity of the contemporary systems, and eye-catching increase in the variety and volume of information has led to great number of the components and elements, and more complex structure and organization of the information systems. On the other hand, it is necessary to develop the systems which meet all of the stakeholders' functional and non-functional requirements. Considering the fact that evaluation and assessment of the aforementioned requirements - prior to the design and implementation phases - will consume less time and reduce costs, the best time to measure the evaluable behavior of the system is when its software architecture is provided. One of the ways to evaluate the architecture of software is creation of an executable model of architecture.
The present research used availability assessment and took repair, maintenance and accident time parameters into consideration. Failures of software and hardware components have been considered in the architecture of software systems. To describe the architecture easily, the authors used Unified Modeling Language (UML). However, due to the informality of UML, they utilized Colored Petri Nets (CPN) for assessment too. Eventually, the researchers evaluated a CPN-based executable model of architecture through CPN-Tools.
Performance Evaluation using Blackboard Technique in Software ArchitectureEditor IJCATR
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluating non-functional requirements (such as performance). Since satisfying non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance is specified by performance models, it can be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance of software systems, based on the blackboard technique, at the software architecture level. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity and component diagrams. The UML model is then transformed into an executable model based on timed colored Petri nets (TCPN). Consequently, by executing the model and analyzing its results, non-functional requirements including performance (such as response time) can be evaluated at the software architecture level.
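As a hedged aside, before building a full timed-Petri-net simulation, mean response time is often first approximated analytically, for instance with an M/M/1 queue per component; all rates below are hypothetical:

```python
# Analytic stand-in for the simulated response-time estimate:
# for a stable M/M/1 queue, mean response time W = 1 / (mu - lambda).
# Arrival and service rates are invented illustration values.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time (seconds) of a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# 8 requests/s arriving at a component that serves 10 requests/s.
print(mm1_response_time(8.0, 10.0))  # 0.5
```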
An Empirical Study of the Improved SPLD Framework using Expert Opinion TechniqueIJEACS
Due to the growing need for high-performance, low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest methodologies is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful, high-quality products with shorter time-to-market and minimal development and maintenance cost. This research paper is a validation of our proposed framework, the Improved Software Product Line (ISPL), using the Expert Opinion Technique. An extensive survey based on a set of questionnaires covering various aspects and sub-processes of the ISPLD Framework was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements over several aspects of contemporary SPL frameworks.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL...ijseajournal
Promoting quality within the context of agile software development is extremely important and useful, not only to improve the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also to support communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or reject each iteration within the software life cycle. This article introduces a novel solution called the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric approach, the standard ISO/IEC 25010, and an extension of testing coverage in order to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a unique multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the PQEM method was carried out with two iterations of a web and mobile application in the healthcare environment. A single measure makes it possible to observe the evolution of the quality level reached across the iterations of the product.
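The aggregation step can be pictured as a weighted average of per-characteristic quality coverage; the characteristic names, coverage values and weights below are invented for illustration and are not the paper's actual PQEM computation:

```python
# Hypothetical sketch of PQEM-style aggregation: each ISO/IEC 25010
# characteristic gets a quality-coverage value in [0, 1], and the
# iteration's quality level is the weighted average of those values.

def quality_level(coverage: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-characteristic coverage into one quality value."""
    total_weight = sum(weights.values())
    return sum(coverage[c] * weights[c] for c in coverage) / total_weight

iteration1 = {"functional_suitability": 0.9, "reliability": 0.8, "usability": 0.7}
weights = {"functional_suitability": 0.5, "reliability": 0.3, "usability": 0.2}
print(round(quality_level(iteration1, weights), 3))  # 0.83
```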
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDYijseajournal
Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon of the "tyranny of the dominant decomposition". Requirements-level aspects are responsible for producing scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, thereby reducing development and maintenance cost and establishing an approximately correct estimate of the effort and completion time of the project. In this paper, we present a validation framework to validate the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises high-level and low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: a viewpoint-based approach, AORE with ArCaDe, and lexical analysis based on the Theme/Doc approach. The results obtained from the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
Process-Centred Functionality View of Software Configuration Management: A Co...theijes
The objective of this paper is to provide insight into various agent-oriented methodologies by using an enhanced comparison framework based on process-related criteria, steps- and techniques-related criteria, usability criteria, model-related or "concepts"-related criteria, and supportive-related criteria. The results also incorporate input collected from users of the agent-oriented methodologies through a questionnaire-based survey.
Requirement Engineering Challenges in Development of Software Applications an...Waqas Tariq
Requirement engineering acts as the foundation for any software and is one of the most important tasks. The entire software edifice rests on the four pillars of requirement engineering processes, with functional and non-functional requirements working as bricks to support it; finally, design, implementation and testing add stories to construct the entire software tower on top of this foundation. Thus, the base needs to be well built to support the rest of the tower. For this purpose, requirement engineers face numerous challenges in developing successful software. This paper highlights the requirement engineering challenges encountered in the development of software applications and in the selection of the right commercial off-the-shelf (COTS) components. Comprehending stakeholders' needs; incomplete and inconsistent process descriptions; verification and validation of requirements; classification and modeling of extensive data; and selection of a COTS product with minimal requirement modifications are the foremost challenges faced during requirement engineering. Moreover, the paper discusses and critically evaluates challenges highlighted by various researchers. It also presents a model that encapsulates seven major challenges that recur during the requirement engineering phase, further categorized into problems. Furthermore, the model is linked with previous research work to elaborate challenges that have not been specified earlier. Anticipating requirement engineering challenges can assist requirement engineers in preventing the software tower from destruction.
GENERATING SOFTWARE PRODUCT LINE MODEL BY RESOLVING CODE SMELLS IN THE PRODUC...ijseajournal
Software Product Lines (SPLs) refer to a set of software engineering methods, tools and techniques for creating a collection of similar software systems from a shared set of software assets using a common means of production. This concept is recognized as a successful approach to reuse in software development. Its purpose is to reduce production costs by reusing existing features and managing the variability between the different products with respect to particular constraints. Software product line engineering is the production process in product lines and the development of a family of systems by reusing core assets. It exploits the commonalities between software products and preserves the ability to vary the functionalities and features between these products. The adopted strategy for building an SPL can be top-down or bottom-up. Depending on the selected strategy, it is possible to face an inappropriate implementation in the SPL model or in the derived products during this process. The code can contain code smells or code anomalies. Code smells are problems in source code which can have an impact on the quality of the derived products of an SPL. The same problem can be present in many products derived from an SPL due to reuse, or in the obtained product line when the bottom-up strategy is selected. A possible solution to this problem is refactoring, which can improve the internal structure of source code without altering its external behavior. This paper proposes an approach for building an SPL from source code using the bottom-up strategy. Its purpose is to reduce code smells in the obtained SPL by refactoring source code. The approach proposes a possible solution using reverse engineering to obtain the feature model of the SPL.
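A minimal sketch of one classic smell the refactoring step would target, the "long method", can be written over Python source with the standard `ast` module; the threshold and sample code are hypothetical, not the paper's tooling:

```python
# Detect a simple "long method" code smell: functions whose source span
# exceeds a (hypothetical) line threshold. Real SPL smell detection is
# richer; this only illustrates the idea.
import ast

def long_methods(source: str, max_lines: int = 20) -> list[str]:
    """Return names of functions whose body spans more than max_lines lines."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            span = node.end_lineno - node.lineno + 1
            if span > max_lines:
                smells.append(node.name)
    return smells

code = "def tiny():\n    return 1\n\ndef big():\n" + "    x = 0\n" * 25
print(long_methods(code))  # ['big']
```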
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDYijseajournal
Software product line engineering is a paradigm for developing a family of software products from a repository of reusable assets rather than developing each individual product from scratch. In feature-oriented software product line engineering, the common and the variable characteristics of the products are expressed in terms of features. Using the software product line engineering approach, software products are produced en masse by means of two engineering phases: (i) domain engineering and (ii) application engineering. In the domain engineering phase, reusable assets are developed with variation points where variant features may be bound for each of the diverse products. In the application engineering phase, individual and customized products are developed from the reusable assets. Ideally, the reusable assets should be adaptable with little effort to support additional variations (features) that were not planned beforehand, in order to increase the usage context of the SPL as a result of expanding markets or when a new usage context of the software product line emerges. This paper presents exploratory research investigating the properties of features in code assets implemented in an object-oriented programming style. In the exploration, we observed that program elements of disparate features formed unions as well as intersections that may affect the modifiability of the code assets. The implication of this research for practice is that an unstable product line with a tendency toward emerging variations should aim for techniques that limit the number of intersections between program elements of different features. Similarly, the implication for research is that there should be subsequent investigations using multiple case studies in different software domains and programming styles to improve the understanding of the findings.
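The intersection observation above can be made concrete by treating each feature's program elements as a set; pairwise intersections then flag code shared between features that may hurt modifiability. Feature names and elements below are hypothetical:

```python
# Sketch: find program elements shared between features. Any non-empty
# pairwise intersection is code that several features depend on.
from itertools import combinations

features = {
    "payment": {"Cart.total", "Order.pay", "Logger.log"},
    "shipping": {"Order.ship", "Address.check", "Logger.log"},
    "catalog": {"Product.list", "Product.search"},
}

def feature_intersections(features: dict[str, set[str]]) -> dict[tuple[str, str], set[str]]:
    """Map each feature pair to the program elements they share (if any)."""
    return {
        (a, b): features[a] & features[b]
        for a, b in combinations(sorted(features), 2)
        if features[a] & features[b]
    }

print(feature_intersections(features))  # {('payment', 'shipping'): {'Logger.log'}}
```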
Modeling and Evaluation of Performance and Reliability of Component-based So...Editor IJCATR
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluating non-functional requirements (such as performance and reliability). Since satisfying non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance is specified by performance models, it can be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance and reliability of software systems, based on formal models (hierarchical timed colored Petri nets), at the software architecture level. In this approach, the software architecture is described by UML use case, activity and component diagrams; the UML model is then transformed into an executable model based on hierarchical timed colored Petri nets (HTCPN) by a proposed algorithm. Consequently, by executing the model and analyzing its results, non-functional requirements including performance (such as response time) and reliability can be evaluated at the software architecture level.
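For the reliability side, if the executable model yields a reliability estimate per component, the simplest composition rule is the product over components used in series; the component values here are made up for illustration:

```python
# Series-system reliability: the system works only if every component
# works, so R_system = product of component reliabilities. The three
# component values are hypothetical.
from math import prod

def series_reliability(component_reliabilities: list[float]) -> float:
    """Reliability of components composed in series."""
    return prod(component_reliabilities)

print(round(series_reliability([0.99, 0.95, 0.98]), 4))  # 0.9217
```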
A Comparative Study of Forward and Reverse Engineeringijsrd.com
With software development booming compared to 20 years ago, software developed in the past may or may not have well-supported documentation across its evolution. This can widen the specification gap between the documentation and the legacy code when further evolutions and updates are made. Understanding the legacy code and the underlying decisions made during development is the prime motive, and it is well supported by Reverse Engineering. In this paper, we compare Transformational Forward Engineering, where a stepwise abstraction is obtained, with the Transformational Reverse Methodology. When the forward transformation process produces overlapping decisions, performance is affected. Hence, the transformational method of Reverse Engineering, which is a backwards Forward Engineering process, is suitable. Moreover, the design recognition obtained is domain knowledge that can be used in the future by forward engineers.
Reduced Software Complexity for E-Government Applications with ZEF FrameworkTELKOMNIKA JOURNAL
Dynamic change is unpredictable and continually increasing, and it can happen anytime and anywhere. One thing that is always changing is government policy. This condition has an impact on information system software, causing replacement, modification, and enhancement of that software. There is commonality and variability among software features in the Indonesian Government. Hence, to manage it, we present an enhancement of Zuma's E-Government Framework (ZEF) to reduce software complexity. We enhance the ZEF Framework using the SPLE and GORE approaches in order to improve on traditional software development, which can reduce complexity when change happens continuously. The measurement of software complexity relates to the functionality of the system and can be described with function points, because function points also capture logical software complexity. The preliminary results of this study show reductions in software complexity measures such as information processing size, technical complexity adjustment factors and function points in e-government applications.
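The function point measure the abstract relies on follows the standard IFPUG-style formula FP = UFP x TCF, with TCF = 0.65 + 0.01 x the sum of 14 technical complexity factors rated 0-5; the unadjusted count and ratings below are invented for illustration:

```python
# Standard function point computation: adjust the unadjusted function
# point count (UFP) by the technical complexity factor (TCF) built from
# 14 general system characteristics rated 0-5. Ratings are hypothetical.

def function_points(ufp: float, factors: list[int]) -> float:
    """FP = UFP * (0.65 + 0.01 * sum of the 14 complexity ratings)."""
    assert len(factors) == 14 and all(0 <= f <= 5 for f in factors)
    tcf = 0.65 + 0.01 * sum(factors)
    return ufp * tcf

ratings = [3, 4, 2, 5, 3, 3, 4, 2, 3, 3, 2, 4, 3, 3]  # sums to 44 -> TCF 1.09
print(round(function_points(120, ratings), 2))  # 130.8
```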
A FRAMEWORK STUDIO FOR COMPONENT REUSABILITYcscpconf
The deployment of a software product requires a considerable amount of time and effort. In order to increase the productivity of software products, reusability strategies have been proposed in the literature. However, effective reuse is still a challenging issue. This paper presents a framework studio for effective component reusability, which provides the selection of components from the framework studio and the generation of source code based on stakeholders' needs. The framework studio is implemented using Swing components integrated into the NetBeans IDE, which helps in faster generation of the source code.
A new model for the selection of web development frameworks: application to P...IJECEIAES
The use of a framework is often essential for medium and large scale developments, but it is also of interest for small developments. PHP has evolved into the scripting language most chosen by developers, which has generated an explosion of PHP frameworks. There is a big debate about which PHP frameworks are best, because the simple fact is that not all frameworks are built for everyone. Indeed, not all frameworks meet the same needs, and several frameworks can be used together in certain situations. Choosing the right framework, however, can sometimes be difficult. In order to make the selection process easier, we propose a pragmatic and complete model to compare and evaluate the main PHP frameworks. This model is based on a set of comparison criteria covering intrinsic durability, industrialized solution, technical adaptability, strategy, technical architecture and speed. Results show that the values of these criteria allow developers to easily and properly choose the framework that best meets their needs.
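The selection model can be pictured as weighted scoring over the six criteria, with the highest total winning; the weights, framework names and scores below are invented for illustration and are not the paper's measured values:

```python
# Hedged sketch of criteria-based framework selection: score each
# candidate on the six criteria, weight, sum, pick the maximum.
# All numbers here are hypothetical.

CRITERIA_WEIGHTS = {
    "intrinsic_durability": 0.25, "industrialized_solution": 0.15,
    "technical_adaptability": 0.20, "strategy": 0.10,
    "technical_architecture": 0.15, "speed": 0.15,
}

def best_framework(scores: dict[str, dict[str, float]]) -> str:
    """Return the candidate with the highest weighted criteria total."""
    def total(name: str) -> float:
        return sum(scores[name][c] * w for c, w in CRITERIA_WEIGHTS.items())
    return max(scores, key=total)

scores = {
    "FrameworkA": {"intrinsic_durability": 4, "industrialized_solution": 3,
                   "technical_adaptability": 5, "strategy": 3,
                   "technical_architecture": 4, "speed": 2},
    "FrameworkB": {"intrinsic_durability": 3, "industrialized_solution": 4,
                   "technical_adaptability": 3, "strategy": 4,
                   "technical_architecture": 3, "speed": 5},
}
print(best_framework(scores))  # FrameworkA
```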
HW/SW Partitioning Approach on Reconfigurable Multimedia System on ChipCSCJournals
Due to the complexity and the high performance requirements of multimedia applications, the design of embedded systems is subject to different types of design constraints such as execution time, time to market, energy consumption, etc. Several joint software/hardware design (co-design) approaches have been proposed to help the designer seek a fit between application and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system on chip, based on dynamic and static steps: the first uses dynamic profiling and the second uses the Design Trotter tools. The validation of our approach is made through 3D image synthesis.
Unified Component Model for Distributed, Real- Time and Embedded Systems Requ...Remedy IT
The objective of this RFP is to solicit proposals for a new component model called the “Unified Component Model” targeting Distributed, Real-Time and Embedded (DRTE) Systems. A component model defines a set of standards for component implementation, naming, interoperability, customization, composition, evolution, and deployment.
The UCM will be a simple, lightweight, middleware-agnostic, and flexible component model. The UCM will allow many different interaction models, including publish-subscribe and request-reply.
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI...ijaia
Predicting the time to build software is a very complex task for software engineering managers. Complex factors can directly interfere with the productivity of the development team, and factors directly related to the complexity of the system to be developed drastically change the time necessary for software factories to complete the work. This work proposes the use of a hybrid system based on artificial neural networks and fuzzy systems to assist in the construction of a rule-based expert system to support the prediction of the hours required for software development according to the complexity of its elements. The set of fuzzy rules obtained by the system helps the management and control of software development by providing a base of interpretable, fuzzy-rule-based estimates. The model was tested on a real database, and its results were promising for building an aid mechanism for predicting software construction effort.
AN OVERVIEW OF EXISTING FRAMEWORKS FOR INTEGRATING FRAGMENTED INFORMATION SYS...ijistjournal
The literature shows that several structured integration frameworks have emerged with the aim of facilitating application integration, but the weaknesses and strengths of these frameworks are not known. This paper reviews these frameworks with a focus on identifying their weaknesses and strengths. To accomplish this, recommended comparison factors were identified and used to compare the frameworks. Findings show that most of these structured frameworks are custom-built around their motives: they focus on integrating applications from different sectors within an organization for the purpose of eliminating communication inefficiencies. There is no framework which guides application integrators on the goals, outcomes and outputs of integration, or on the skills required for the types of applications expected to be integrated. The study recommends further work on integration frameworks, especially on designing an unstructured framework that will support and guide application integrators with consideration of the consumer's surrounding environment.
A VNF modeling approach for verification purposesIJECEIAES
Network Function Virtualization (NFV) architectures are emerging to increase network flexibility. However, this renewed scenario poses new challenges, because virtualized networks need to be carefully verified before being deployed in production environments in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Nowadays, model checking tools, SAT solvers, and theorem provers are available for formal verification of such properties in virtualized networks. Unfortunately, most of those verification tools accept input descriptions written in specification languages that are difficult to use for people not experienced in formal methods. Also, in order to enable the use of formal verification tools in real scenarios, vendors of Virtual Network Functions (VNFs) should provide abstract mathematical models of their functions, coded in the specific input languages of the verification tools. This process is error-prone, time-consuming, and often outside the VNF developers' expertise. This paper presents a framework that we designed for automatically extracting verification models starting from a Java-based representation of a given VNF. It comprises a Java library of classes to define VNFs in a more developer-friendly way, and a tool to translate VNF definitions into formal verification models for different verification tools.
Similar to Feature Model Configuration Based on Two-Layer Modelling in Software Product Lines (20)
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
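The droop control method mentioned above follows the textbook P-f droop law f = f0 - kp(P - P0); the gain and setpoints in this sketch are hypothetical illustration values, not the paper's tuned parameters:

```python
# Numeric sketch of P-f droop control for an islanded microgrid source:
# frequency is lowered in proportion to the active power picked up.
# f0, p0 and kp below are invented, not tuned values.

def droop_frequency(p_out: float, f0: float = 50.0,
                    p0: float = 0.0, kp: float = 0.002) -> float:
    """Frequency (Hz) commanded by a droop-controlled source at output p_out (kW)."""
    return f0 - kp * (p_out - p0)

# As the DG picks up 100 kW of load, its frequency droops from 50 Hz to 49.8 Hz.
print(droop_frequency(100.0))  # 49.8
```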
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
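A toy version of the identification idea can be shown with a linear ARX fit: regress voltage at step k on its own lag and the lagged current (the exogenous input). Real NARX models are nonlinear, and the synthetic "battery" dynamics below are invented for illustration:

```python
# Toy ARX-style system identification: generate data from a known
# first-order model, then recover its coefficients by least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200
current = rng.uniform(-1.0, 1.0, n)      # exogenous input i[k]
voltage = np.zeros(n)
for k in range(1, n):                    # synthetic "battery" dynamics
    voltage[k] = 0.9 * voltage[k - 1] + 0.3 * current[k - 1]

# Regress v[k] on (v[k-1], i[k-1]) to recover the two coefficients.
X = np.column_stack([voltage[:-1], current[:-1]])
coeffs, *_ = np.linalg.lstsq(X, voltage[1:], rcond=None)
print(np.round(coeffs, 3))  # [0.9 0.3]
```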
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current level of research on smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS) and Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding
condition, which must be detected rapidly and properly to prevent any
negative consequences for the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis yields overall weights of
24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid
methods, 14.5% for remote methods, 26.6% for signal processing-based
methods, and 20.8% for computational intelligence-based methods when all
criteria are compared together. Thus, it can be seen from the total weights
that hybrid approaches are the least suitable choice, while signal
processing-based methods are the most appropriate islanding detection
methods to be selected and implemented in power systems with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
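The AHP weighting step elicits pairwise comparisons between criteria and extracts a priority vector from them. A minimal sketch using the common geometric-mean (row) approximation and a hypothetical 3x3 judgment matrix — not the comparisons actually used in the paper:

```python
import numpy as np

def ahp_weights(A):
    """Priority vector from a pairwise-comparison matrix using the
    geometric-mean (row) method, a standard approximation of the
    principal-eigenvector weights in AHP."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

# Hypothetical Saaty-scale judgments over three criteria
# (implementation cost, non-detection zone, response time)
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])
w = ahp_weights(A)
print(np.round(w, 3))   # ~[0.637 0.258 0.105]
```

The resulting weights sum to one; scoring each detection method against each criterion and combining with these weights gives the overall percentages reported in the abstract.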
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
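THD, the figure of merit used in the comparison above, is the RMS of the harmonic content relative to the fundamental. A small sketch estimating it from an FFT, with a hypothetical 50 Hz grid current containing 5% fifth and 3% seventh harmonics:

```python
import numpy as np

def thd(x, fs, f0):
    """Total harmonic distortion: RMS of harmonics 2..N relative to the
    fundamental, read off the magnitude spectrum at multiples of f0."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) / n
    k0 = int(round(f0 * n / fs))                 # FFT bin of the fundamental
    harmonics = [spec[k * k0] for k in range(2, len(spec) // k0)]
    return float(np.sqrt(np.sum(np.square(harmonics))) / spec[k0])

# Hypothetical 50 Hz current with 5% fifth and 3% seventh harmonics
fs, f0 = 10_000, 50
t = np.arange(0, 1.0, 1 / fs)
i_grid = (np.sin(2 * np.pi * f0 * t)
          + 0.05 * np.sin(2 * np.pi * 5 * f0 * t)
          + 0.03 * np.sin(2 * np.pi * 7 * f0 * t))
print(f"THD = {100 * thd(i_grid, fs, f0):.2f}%")   # sqrt(0.05^2 + 0.03^2) ~ 5.83%
```

The one-second window holds an integer number of cycles, so the harmonics land exactly on FFT bins; in practice windowing or interpolation is needed to avoid spectral leakage.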
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
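The classic P&O baseline that the paper modifies reduces to a single decision rule: perturb the operating voltage and keep the direction that increased power. A minimal illustration with a fixed step and a hypothetical parabolic power-voltage curve (no FLC or step-size modification):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: if the last voltage perturbation increased
    power, keep perturbing in the same direction; otherwise reverse."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v            # no power change: hold the operating point
    if (dp > 0) == (dv > 0):
        return v + step     # same direction raised power: continue
    return v - step         # power dropped: reverse direction

# Hypothetical parabolic power-voltage curve peaking at v = 17 V
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)

v_prev, v = 10.0, 10.5
for _ in range(100):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
print(v)   # settles within one step of the 17 V maximum power point
```

The fixed step is exactly what causes the steady-state oscillation around the peak that the abstract mentions; adaptive step sizes (or the FLC variant) shrink the perturbation near the maximum.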
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed; the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are uniformly ultimately bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the status parameters of solar cell systems (solar irradiance, temperature, and humidity) is a critical issue in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and rapidity of the monitoring results obtained with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three regions suggest Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impractical to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well on a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research develops an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experiment produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted values on the test data. The best results are obtained using a two-layer LSTM configuration model, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
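The three scores reported for the LSTM model (R², RMSE, MAE) are straightforward to compute; a small sketch with hypothetical incubator temperature readings, not the study's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R-squared, RMSE and MAE for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse, mae

# Hypothetical incubator temperature readings (C): true vs. predicted
r2, rmse, mae = regression_metrics([36.0, 36.5, 37.0, 37.5],
                                   [36.1, 36.4, 37.1, 37.4])
print(round(r2, 3), round(rmse, 3), round(mae, 3))   # 0.968 0.1 0.1
```

R² compares the residual error against a constant-mean baseline, which is why it is reported alongside the absolute error measures RMSE and MAE.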
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees produce small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions. It is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the three most efficient approaches are a CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 convolutional networks at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using the Contiki/Cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
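For context, the simplest intuitionistic fuzzy number that the polygonal form generalizes is the triangular one, commonly written $\langle (a_1, a_2, a_3);\ (a_1', a_2, a_3') \rangle$ with $a_1' \le a_1$ and $a_3 \le a_3'$. A standard formulation of its membership and non-membership functions (the textbook definition, not the paper's polygonal construction):

```latex
\mu_A(x) =
\begin{cases}
\dfrac{x - a_1}{a_2 - a_1}, & a_1 \le x < a_2,\\[4pt]
1, & x = a_2,\\[4pt]
\dfrac{a_3 - x}{a_3 - a_2}, & a_2 < x \le a_3,\\[4pt]
0, & \text{otherwise,}
\end{cases}
\qquad
\nu_A(x) =
\begin{cases}
\dfrac{a_2 - x}{a_2 - a_1'}, & a_1' \le x < a_2,\\[4pt]
0, & x = a_2,\\[4pt]
\dfrac{x - a_2}{a_3' - a_2}, & a_2 < x \le a_3',\\[4pt]
1, & \text{otherwise,}
\end{cases}
```

with $0 \le \mu_A(x) + \nu_A(x) \le 1$ for all $x$; a polygonal number with $n$ edges replaces each linear segment by a piecewise-linear chain.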
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG corpus. To test the viability of this approach, two types of experiments were carried out: first, binary classification (preictal, ictal, and postictal, each versus interictal), and second, four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7%, and the highest accuracy of 94.02% was reached for classification in the TUH EEG database when comparing the interictal stage to the preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration and maneuverability leads to a dangerous driving style for some drivers. Under these conditions, developing a method to track driver behavior is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features about the environment, the driver's behavior, and the characteristics of the car's operation, collected using OBD-II. The generated data set is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning the driving characteristics to a particular cluster makes it possible to move to the individual indicators of a specific driver and to take individual driving characteristics into account. Applying the method makes it possible to identify potentially dangerous driving styles and thereby prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been performed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object-inherent features. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining intrinsic object qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning & Applications.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the cosmetic products that have reached the minimum quantity and also keep track of the expiry date of each cosmetic product. It should help the employees to find the rack number in which a product is placed. It is also a faster and more efficient way of working.
An Approach to Detecting Writing Styles Based on Clustering Techniquesambekarshweta25
An Approach to Detecting Writing Styles Based on Clustering Techniques
Authors:
-Devkinandan Jagtap
-Shweta Ambekar
-Harshit Singh
-Nakul Sharma (Assistant Professor)
Institution:
VIIT Pune, India
Abstract:
This paper proposes a system to differentiate between human-generated and AI-generated texts using stylometric analysis. The system analyzes text files and classifies writing styles by employing various clustering algorithms, such as k-means, k-means++, hierarchical, and DBSCAN. The effectiveness of these algorithms is measured using silhouette scores. The system successfully identifies distinct writing styles within documents, demonstrating its potential for plagiarism detection.
Introduction:
Stylometry, the study of linguistic and structural features in texts, is used for tasks like plagiarism detection, genre separation, and author verification. This paper leverages stylometric analysis to identify different writing styles and improve plagiarism detection methods.
Methodology:
The system includes data collection, preprocessing, feature extraction, dimensional reduction, machine learning models for clustering, and performance comparison using silhouette scores. Feature extraction focuses on lexical features, vocabulary richness, and readability scores. The study uses a small dataset of texts from various authors and employs algorithms like k-means, k-means++, hierarchical clustering, and DBSCAN for clustering.
Results:
Experiments show that the system effectively identifies writing styles, with silhouette scores indicating reasonable to strong clustering when k=2. As the number of clusters increases, the silhouette scores decrease, indicating a drop in accuracy. K-means and k-means++ perform similarly, while hierarchical clustering is less optimized.
Conclusion and Future Work:
The system works well for distinguishing writing styles with two clusters but becomes less accurate as the number of clusters increases. Future research could focus on adding more parameters and optimizing the methodology to improve accuracy with higher cluster values. This system can enhance existing plagiarism detection tools, especially in academic settings.
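The clustering pipeline sketched in the methodology can be illustrated end to end. The sketch below uses synthetic two-feature stylometric data (not the paper's dataset), a deliberately simplified deterministic k-means initialization, and a hand-rolled silhouette score:

```python
import numpy as np

def kmeans2(X, iters=20):
    """Plain Lloyd's algorithm for k=2, with a deterministic
    initialization (first and last sample) to keep the sketch simple."""
    centers = X[[0, len(X) - 1]].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per sample."""
    scores = []
    for idx in range(len(X)):
        same = X[labels == labels[idx]]
        other = X[labels != labels[idx]]
        a = np.linalg.norm(same - X[idx], axis=1).sum() / max(len(same) - 1, 1)
        b = np.linalg.norm(other - X[idx], axis=1).mean()
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Synthetic stylometric features (avg sentence length, type-token ratio)
# for two clearly distinct writing styles
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([12.0, 0.4], 0.5, (20, 2)),
               rng.normal([25.0, 0.8], 0.5, (20, 2))])
labels = kmeans2(X)
print(round(silhouette(X, labels), 2))
```

With two well-separated styles the silhouette is close to 1, matching the paper's observation that k=2 clusters strongly; adding more clusters splits cohesive groups and drives the score down.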
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA) pavement, however, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of the Bangladesh Chemical Industries Corporation under the Ministry of Industries.
2. Int J Elec & Comp Eng ISSN: 2088-8708
Feature model configuration based on two-layer modeling... (Elham Darmanaki Farahani)
2649
target domain and producing a set of core assets. On the other hand, Application Engineering involves
developing a domain-specific software product through the customization of artifacts that are developed in
the domain engineering phase. The Application Requirements Engineering sub-process encompasses all
activities necessary for developing the application requirements specification. FM configuration is the main
activity of this phase.
Figure 1. The SPLE Framework
An FM that describes a range of products generated from an SPL has a key role in the configuration
process of the SPL. An FM consists of: i) features and sub-features organized in a feature tree, and ii)
optional constraints such as “excludes” or “requires” to describe the products of a product line in terms of the
features that should be excluded (“excludes” constraints) and/or needed (“requires” constraints) by each
product. Each feature in a Feature Model represents a property of a product that will be visible to the product
user. Selecting a set of desirable features based on stakeholders’ needs is a complex process because:
1) There are normally constraints between features that must be considered by stakeholders during the feature selection process.
2) In addition to functional requirements (FRs), stakeholders may have non-functional requirements (NFRs) as well. However, it may not be straightforward to express how the NFRs can be satisfied in terms of features in the FM.
3) Stakeholders may have some restrictions in the implementation of the features in FM due to, for example,
lack of adequate hardware infrastructure.
The need for addressing these problems leads to increased complexity of the FM configuration
process. Therefore, selecting the best set of features while considering the stakeholders’ requirements and
implementation infrastructure is a hard task.
Due to the importance of RE in SPLs, many studies have been conducted. According to [5], this area suffers from a lack of tool support and comparative studies. Another study reported that inadequate coordination and communication, long iteration cycles, and a lack of compliance and flexibility in the RE phase of SPL engineering can increase effort and cause disruption during product development [6]. A further study, motivated by the importance of the RE phase for the quality of final products, discussed security and related verification methods in RE [7].
Additionally, various configuration methods have been developed to assist FM configuration (the most important activity in the RE phase) by automating the selection of features to satisfy FRs, NFRs and constraints [8]-[10]. Some others have focused on the constraint satisfaction problem and proposed a method to build optimal configurations [11]; the main problem with this approach is performance inefficiency. Another technique is based on staged configuration of the FM, which gives more weight to the role of stakeholders in feature selection but cannot solve the NFR satisfaction problem [12]. Among the solutions proposed in this area, the automated planning approach in [13] is the most complete because it covers FR, NFR and constraint satisfaction and also automates the feature selection process, but it does not address the complexity of presenting application and infrastructure features simultaneously in a one-layer FM.
The above issues motivated us to address the following research questions: How can we determine the infrastructure needed to implement the FRs and NFRs selected by stakeholders in the FM? And if any
Int J Elec & Comp Eng, Vol. 9, No. 4, August 2019: 2648-2658
conflict is found between the infrastructures needed versus that available, how should the selected features be
changed to resolve the conflict?
To address these questions, we looked into FM design techniques and found a multi-layer SPL with a reference-model concept in [14], which is a common approach for managing highly complex product families. In [14], multi-level feature trees are proposed that consist of a tree of feature models in which the parent model serves as a reference feature model for its children. Based on the model proposed in [14], we hypothesized that a two-layer form of SPL together with vertical composition would let us first define constraints between the features in the application layer of the SPL and then map those features to the infrastructure needed for their implementation.
We therefore propose a two-layer FM comprising an “Application layer” and an “Infrastructure layer”. The application layer contains the functional and non-functional features of the SPL, while the infrastructure layer deals with the hardware and network requirements that play a major role in implementing any product instance.
In addition to the constraints applied to each feature in the application-layer FM, we can define constraints between the two layers and thereby specify the necessary infrastructure for each set of stakeholders’ requirements.
In the context of FM configuration, the main contributions of this paper are as follows:
- A new method to represent the FM as a two-layer model, with the ability to specify constraints between features at the same level (“inner constraints”) as well as constraints between features in different layers (“intra constraints”),
- A simple way to represent NFRs in the application layer of the FM,
- A demonstration of how stakeholders can be helped to select NFRs from FMs while reflecting the infrastructure necessary for their implementation.
The rest of this paper is organized as follows: section 2 gives an overview of the basic related
concepts; in section 3 we discuss the challenges in current FM configuration methods and in section 4 we
propose a new method that covers all of the problems described in section 3; this is followed with a case
study of the proposed method in section 5. Section 6 systematically compares our approach with related
works, and finally, the paper concludes in section 7.
2. FOUNDATION
In this section we describe the basic concepts used throughout the paper.
2.1. Feature models (FMs)
In software development, an FM is a structured representation of all the products generated by an SPL in terms of their “features”. FMs are widely used especially during the application requirements engineering phase, where the output of this phase can be used to produce other assets such as documents, architecture definitions, or pieces of code.
According to FODA [15], a feature model has a tree-like structure that visually depicts features and their dependencies as constraints. The relationship between a parent feature and its child features in an FM is typically classified as follows:
- Mandatory – child feature is required.
- Optional – child feature is optional.
- Or – at least one of the child-features must be selected.
- Alternative (xor) – one (and only one) of the child-features must be selected.
Also we can define some cross-tree constraints between the features in FM. The most common
constraints of this type are:
- A requires B – The selection of A in a product implies the selection of B.
- A excludes B – A and B cannot be part of the same product.
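As an illustration, the relations and cross-tree constraints above can be checked mechanically. The following sketch uses feature names loosely based on the E-Shop example; the data layout and checker logic are our own simplification, not the paper's notation:

```python
# Hypothetical encoding of a tiny FM: parent -> list of (relation, children).
# Relations: mandatory, optional, or, xor. Constraint lists hold (A, B) pairs.
TREE = {
    "EShop": [("mandatory", ["Catalogue"]),
              ("or", ["BankTransfer", "CreditCard"]),
              ("optional", ["HighSecurity"])],
}
REQUIRES = [("BankTransfer", "HighSecurity")]   # A requires B
EXCLUDES = []                                    # A excludes B

def is_valid(selection):
    """Check a candidate selection (a set of feature names) against the FM."""
    if "EShop" not in selection:                 # root must always be present
        return False
    for parent, groups in TREE.items():
        for rel, children in groups:
            chosen = [c for c in children if c in selection]
            if rel == "mandatory" and len(chosen) != len(children):
                return False
            if rel == "or" and not chosen:
                return False
            if rel == "xor" and len(chosen) != 1:
                return False
    return (all(b in selection for a, b in REQUIRES if a in selection)
            and not any(a in selection and b in selection for a, b in EXCLUDES))

print(is_valid({"EShop", "Catalogue", "CreditCard"}))    # True
print(is_valid({"EShop", "Catalogue", "BankTransfer"}))  # False: HighSecurity missing
```

A real configurator would delegate these checks to a SAT or CSP solver, but the rules themselves are exactly the four parent-child relations plus the two cross-tree constraint kinds listed above.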
In an FM, the main functionalities that are common to all products derived from the SPL are specified as mandatory features. Figure 2 shows a subset of the FM of the example E-Shop website. Here, the “Catalogue” functionality is assumed to be the minimum facility of any E-Shop, so it is set to mandatory in the FM; furthermore, the “Bank transfer” feature requires the “High security” feature.
Figure 2. A subset of E-Shop FM
2.2. Functional and non-functional requirements
As a part of the software development process, requirements engineering involves identification,
representation, documentation, and the management of the set of needs, desired features and preferences of
the stakeholders [16].
In a software system, requirements are categorized into functional and non-functional groups [16].
The term FR refers to the characteristics that specify the functions the system must perform, while NFR
refers to the constraints on how the system must perform those functions. In general, FRs describe the
behavior of the system whereas NFRs elaborate on the performance characteristic of the system.
NFRs are mostly known as system qualities and typically fall into areas such as: efficiency, security
and accessibility. An example of a functional requirement would be: “A system must send an email whenever
a certain condition is met” and a related non-functional requirement for this system may be: “Emails should
be sent with a latency of no greater than 12 hours after the related condition is met.”
Representation of FRs can be achieved through features in an FM. But representing the NFRs in an
FM is not a simple task, although there are proposals for how this can be achieved and presented to
stakeholders [13, 17]. One of the contributions of this paper is a method for representing NFRs in FMs in a
simple way. This will be discussed in the next sections.
3. PROBLEM STATEMENT
This section highlights the major challenge involved in selecting the necessary features from the FM
by stakeholders. As mentioned in section 1, one of the main problems in selecting the desired features is the mismatch between the application requirements and the available infrastructure. This problem occurs because stakeholders can only see the (functional or non-functional) features in the FM and have no information about the infrastructure (network and hardware) required to implement all the selected features.
For example, consider an FM for the SPL of a website where one of the important non-functional requirements is accessibility in the face of a high number of simultaneous online visitors per minute. If a proper hardware and network configuration is not provided, the website could become inaccessible to users during heavy traffic periods. To avoid this, proper and adequate hardware infrastructure should be provided in the relevant parts of the system.
In the FM of this website, we can have a feature named “Accessibility” from which stakeholders select the predicted number of simultaneous visitors per minute. The problem arises when stakeholders pick this number in the FM without any information about the infrastructure required to support it. Stakeholders may select a high number of simultaneous visitors in the FM when, in practice, it is not possible to provide the hardware and network infrastructure required to handle that load.
As a result, the stakeholders’ requirements related to the selected features, especially the NFRs in the FM, cannot necessarily be implemented. To solve this problem, we must find a way to present to the stakeholders, during the application requirements engineering phase, the infrastructure required for each functional or non-functional feature. In the next section we describe a method that solves this major challenge.
4. PROPOSED FM CONFIGURATION METHOD
To solve the problem described in section 3, this section proposes a new method for designing FMs in any SPL. To address the challenges that stakeholders face during feature selection in the application requirements engineering phase, we need a method that can simultaneously show stakeholders the functional and non-functional features and the infrastructure required to support them. In this way the stakeholders can view the properties, operational capabilities and the available infrastructure at the same time. In the following we describe both a new feature modelling method, called the “Two-layer FM”, and a related algorithm describing the steps involved in FM configuration.
4.1. Two-layer FM
This section introduces a new feature modelling method called the “Two-layer FM”, consisting of two layers, each of which is an FM (one for application features and one for infrastructure features). The dependencies between features in the same layer are expressed using “inner constraints”, which take the form of “Requires” or “Excludes” relations. The constraints between features in the application and infrastructure layers are defined via “intra constraints”, which provide a “Uses” relation between a feature in the application layer and one in the infrastructure layer.
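A minimal sketch of this structure might look as follows; all feature names below are invented for the example, and the dictionary encoding of the “Uses” relation is our own simplification:

```python
# Two ordinary feature sets, one per layer; names are illustrative only.
APP_FEATURES = {"Catalogue", "BankTransfer", "HighSecurity", "Accessibility_5M"}
INFRA_FEATURES = {"SSL_Certificate", "LoadBalancer", "Server_4x4CPU"}

# Inner constraints: (kind, A, B) pairs within a single layer.
INNER = [("requires", "BankTransfer", "HighSecurity")]

# Intra constraints ("Uses"): application feature -> infrastructure features.
USES = {
    "HighSecurity": {"SSL_Certificate"},
    "Accessibility_5M": {"LoadBalancer", "Server_4x4CPU"},
}

def missing_infrastructure(app_sel, infra_sel):
    """Return infrastructure features required by the selected application
    features but absent from the selected infrastructure layer."""
    needed = set()
    for feat in app_sel:
        needed |= USES.get(feat, set())
    return needed - infra_sel

print(missing_infrastructure({"Catalogue", "Accessibility_5M"}, {"LoadBalancer"}))
# -> {'Server_4x4CPU'}
```

Any non-empty result signals an intra-constraint conflict: the stakeholders have asked for application features whose implementation equipment they have not ticked.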
4.2. Proposed configuration algorithm
This section describes the new FM configuration algorithm based on our proposed two-layer FM.
The algorithm specifies the steps involved in feature selection leading to the final customized FM.
FMC Algorithm
Input: Two-layer FM
Output: Final Customized FM
(1) Stakeholders select the desired functional and non-functional features from the application layer of the FM.
(2) Stakeholders select the implementation equipment (hardware and network) that can be provided from the infrastructure layer of the FM.
(3) Check whether the inner constraints in both the application and infrastructure layers are satisfied.
If there is any conflict between the selected features and the constraints,
repeat until there is no conflict:
(3-1) Send an error message asking the stakeholders to change the selected features.
(4) Check whether the intra constraints between the application and infrastructure layers are satisfied, using one of the existing methods such as a SAT solver [18] or FMVA [19].
If there is any conflict between a selected feature from the application layer and selected equipment from the infrastructure layer (meaning the intra constraints are not satisfied), two solutions are proposed to the stakeholders (only one can be selected):
(4-1) Stakeholders change the selected features in the application layer based on the equipment ticked as available for implementation, going backward from the infrastructure to the application layer.
(4-2) Stakeholders provide the equipment required to implement the corresponding application-layer feature according to the predefined intra constraints, and then change the selected equipment in the infrastructure layer.
end
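The FMC algorithm above can be sketched as a driver loop. In the sketch below, the constraint checkers are trivial stand-ins for a real SAT-based validator [18] or FMVA [19], the `revise` callback models the stakeholders’ choices in steps 3-1 and 4-1/4-2, and all names are illustrative:

```python
# Hedged sketch of the FMC loop, not the paper's implementation.
def fmc(app_sel, infra_sel, inner_ok, intra_ok, revise):
    """Drive the selection to a configuration satisfying both inner and
    intra constraints; `revise` models stakeholder decisions."""
    # Step 3: loop until inner constraints hold in both layers.
    while not (inner_ok(app_sel) and inner_ok(infra_sel)):
        app_sel, infra_sel = revise("inner", app_sel, infra_sel)   # step 3-1
    # Step 4: resolve intra conflicts via solution 4-1 or 4-2.
    while not intra_ok(app_sel, infra_sel):
        app_sel, infra_sel = revise("intra", app_sel, infra_sel)
    return app_sel, infra_sel

# Toy instantiation: "HighTraffic" needs "BigServer" in the infrastructure.
inner_ok = lambda sel: True                       # no inner conflicts here
intra_ok = lambda a, i: "HighTraffic" not in a or "BigServer" in i

def revise(kind, a, i):
    # Model solution 4-2: stakeholders provide the missing equipment.
    return a, i | {"BigServer"}

final_app, final_infra = fmc({"Catalogue", "HighTraffic"}, set(),
                             inner_ok, intra_ok, revise)
print(final_infra)   # -> {'BigServer'}
```

The two loops mirror the algorithm’s structure: inner conflicts are resolved entirely within a layer before the cross-layer intra constraints are checked.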
5. CASE STUDY AND EVALUATION
To demonstrate the feasibility of our approach, we performed a case study using the presented FM. For this purpose, Figure 3 provides a two-layer FM for the E-Shop website previously depicted in a simple model in Figure 2. Within the case study, we are particularly interested in answering the following two research questions:
Figure 3. Proposed Two-layer FM for E-Shop website
5.1. RQ1 (Effectiveness): Is the method effective for FM configuration?
The main aim of RQ1 is to determine whether our method can generate reliable results for
application engineers, and also, which level of automation is supported by it.
In our proposed approach, the application engineers’ tasks, based on stakeholders’ requirements, are
limited to: i) specifying the functional and non-functional features in the application layer of FM, ii) selecting
the required infrastructure (or the configuration that can be provided) in the infrastructure layer of the FM. The configuration tool can automatically check whether the predefined inner and intra constraints are satisfied, using one of the existing methods such as a SAT solver [18] or FMVA [19], and notify the application engineers of any conflicts found between the non-functional requirements and the selected infrastructure.
In conclusion, we can answer RQ1 positively, because: i) the final result of our approach is correct, and ii) an automatic solution for FM configuration can be generated in which the stakeholders need to perform only a minimal number of manual tasks.
5.2. RQ2 (Scalability): Can the method configure FMs in a reasonable time, based on functional and
non-functional requirements?
The purpose of RQ2 is to evaluate whether our proposed method can be used to generate an FM
configuration in a reasonable length of time when dealing with a large number of feature conditions.
Accessibility and security are two important NFRs in this FM. When we initially design the FM of the E-Shop SPL, we have no information about the context of the final customized website, so we cannot include the maximum number of visitors in the FM. We therefore add accessibility features to the FM, which the stakeholders can subsequently use to record their prediction of the number of visitors per day.
If the stakeholders’ prediction is incorrect, it might lead to service failure at peak times due to a lack of dedicated network bandwidth or an inadequate server hardware configuration.
Therefore, in the application requirements engineering phase it is necessary that the stakeholders have complete knowledge of the infrastructure needed to implement their functional and non-functional requirements. Our proposed two-layer FM allows stakeholders to see, at a glance, the features and their relations at the application and infrastructure levels of their desired products.
This means, for example, that if, based on the predefined set of intra constraints, there is any conflict between the selected features in the application layer of the FM on one side and the hardware and network configuration in the infrastructure layer on the other, a configuration tool based on our method could detect the conflict, display an error message and ask the stakeholders to take one of the following actions: i) change the desired features in the application layer of the FM, or ii) provide the required hardware and network configuration according to the infrastructure layer and then change the selected hardware and network features in the FM accordingly. In this way the intra constraints are satisfied and the implementation of a customized product for the stakeholders becomes possible. Therefore, we can answer RQ2 positively.
As a concrete instance, we can apply our proposed FMC algorithm to the FM configuration of the E-Shop website, as shown in Figure 4.
1) Stakeholders select their desired functional and non-functional features from the application layer: for instance, “3-50 million page views per day” for accessibility, “Catalogue”, “Bank Transfer” and “Credit Card” for payment, and finally “Standard Security”.
2) Stakeholders select implementation equipment from the infrastructure layer comprising 4 servers, each equipped with 4 processors, 6 GB of RAM and a 5 GB hard disk.
3) Check the satisfaction of the inner constraints in both the application and infrastructure layers by the FMVA method. Result: there are no inner-constraint conflicts, so step (3-1) of the algorithm is not taken.
4) Check the satisfaction of the intra constraints between the application and infrastructure layers by the FMVA method. Result: a conflict is detected because the accessibility desired by the stakeholders cannot be provided by the selected hardware configuration. There are two solutions for resolving this conflict:
a. Stakeholders change the desired level of accessibility to “up to 5 million page views per day” in the application layer; this accessibility feature is satisfied by the implementation equipment previously chosen in step 2.
b. Stakeholders provide 9 servers, each equipped with 2 processors, 3 GB of RAM and a 20 GB hard disk.
Based on the solution adopted, the selected features and equipment should be changed and the final
customized FM will be ready.
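The conflict detection in this walkthrough can be illustrated with a toy capacity model. The per-tier hardware thresholds below are invented for the example and are not taken from the paper; only the server counts and per-server specifications come from the case study:

```python
# Assumed minimum aggregate hardware per accessibility tier (illustrative).
TIER_NEEDS = {
    "up_to_5M_views/day":  {"processors": 8,  "disk_gb": 15},
    "3M_to_50M_views/day": {"processors": 18, "disk_gb": 150},
}

def totals(servers, cpus_each, disk_each_gb):
    """Aggregate capacity of a homogeneous server pool."""
    return {"processors": servers * cpus_each, "disk_gb": servers * disk_each_gb}

def satisfies(tier, hw):
    need = TIER_NEEDS[tier]
    return all(hw[k] >= need[k] for k in need)

step2 = totals(4, 4, 5)     # case-study step 2: 16 processors, 20 GB disk
print(satisfies("3M_to_50M_views/day", step2))   # False: conflict detected
# Solution (a): lower the accessibility feature to the smaller tier.
print(satisfies("up_to_5M_views/day", step2))    # True
# Solution (b): provide 9 servers x 2 CPUs with 20 GB disks instead.
print(satisfies("3M_to_50M_views/day", totals(9, 2, 20)))   # True
```

Under these assumed thresholds, the two resolutions of step 4 correspond exactly to relaxing the application-layer feature (a) or upgrading the infrastructure-layer selection (b).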
As can be seen, the proposed “Two-Layer FM” approach is a simple and practical solution for FM customization based on stakeholders’ requirements and the available infrastructure.
Figure 4. Sample selected features by stakeholders in E-Shop FM
6. RELATED WORK
This section presents a systematic comparison between the main contribution of our work and the
previous contributions in this area. To achieve this, we need to define a set of criteria that should be
supported by any FM configuration approach. We have adopted the criteria set defined in [13] and modified
it for our proposed approach. We do not claim that this criteria set is perfect, but it provides the necessary
aspects to compare our work with others’. These criteria include: 1) Managing NFRs, 2) Optimization, 3)
Ensuring FM constraints, 4) Automating configuration process, 5) Providing tooling support, 6) Time
efficiency, and finally, 7) Supporting the definition of the infrastructure needed for implementation of the
desired FM of stakeholders.
6.1. Feature model configuration approaches
The first significant contribution is by Czarnecki et al. [12] who introduced Staged configuration.
They described a stepwise specialization of feature models where the configuration choices made in each
stage are defined by separate feature models. This approach is motivated by the characteristic of a realistic
development process, where different stakeholders make configuration choices in different stages.
In this method the constraints between the features in the FM are not significant and automatic configuration
is not considered. This method could be implemented by a configuration tool but it does not affect the time
required to execute the configuration management process.
Benavides et al. in [11] presented how an FM (with or without considering cardinalities) can be
translated into a Constraint Satisfaction Problem (CSP). In that way, it is possible to use off–the–shelf
constraint satisfaction solvers to automatically accomplish several tasks such as calculating the number of
possible configurations and detecting possible conflicts.
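The idea can be illustrated by encoding a toy FM as a boolean formula and enumerating its models. Here brute-force enumeration stands in for an off-the-shelf CSP/SAT solver, and the feature names are ours, not from [11]:

```python
# Encode a toy E-Shop-like FM as a propositional formula and count products.
from itertools import product

FEATURES = ["Catalogue", "BankTransfer", "CreditCard", "HighSecurity"]

def valid(assign):
    f = dict(zip(FEATURES, assign))
    return (f["Catalogue"]                                     # mandatory
            and (f["BankTransfer"] or f["CreditCard"])         # or-group
            and (not f["BankTransfer"] or f["HighSecurity"]))  # requires

configs = [a for a in product([False, True], repeat=4) if valid(a)]
print(len(configs))   # -> 4: the number of products of this toy line
```

A real solver answers the same questions (model counting, conflict detection) without enumerating the exponential space, which is precisely what makes the CSP translation useful.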
White et al. [9] introduced a Filtered Cartesian Flattening (FCF) method to select optimal feature
sets according to resource constraints. In their approach, the feature selection problem is mapped to a multi-
dimensional, multi-choice knapsack problem (MMKP). By applying existing MMKP approximation
algorithms, they provided partially optimal feature configurations in polynomial time.
Siegmund et al. [20] proposed a technique for showing non-functional properties in FM and applied
CSP to find optimal configuration based on user defined objective functions. In their technique there are
some preprocessing steps to reduce the search space for optimal configuration.
White et al. [21] also formalized stage configuration and proposed a Multi-Step Software
Configuration probLEm solver (MUSCLE) that provides a formal model for multi-step configuration. They
considered non-functional properties such as cost constraints between two configurations and formalized
them as CSP constraints. Their approach is only applicable for multi-stage configuration and focuses on
creating new configurations from existing product configurations.
Mendonca et al. [22] introduced a translation of basic feature models based on propositional logic
and used Binary Decision Diagrams (BDD) as the reasoning system. Their approach focuses on validating
feature models and does not offer a facility for automated configuration. Their solution can be used in a
multi-stage configuration process for validation of the results of every specialization in one FM (called
interactive configuration). An interactive configuration only checks the structural constraints of FMs and
does not consider preferences and non-functional requirements. A tool was implemented to support software
developers in validation.
Guo et al. in [23] addressed the challenge of optimizing feature model configuration with an approach named GAFES, which employs genetic algorithms to optimize feature selection. Machado et al. in [24] introduced SPLConfig, a tool that supports automatic product configuration in SPLs. The main goal of this tool is to derive an optimized feature set that satisfies the customer requirements, and the main contribution of their work is to balance cost against customer satisfaction while taking the customer’s available budget into account. The main shortcoming of this tool is that it cannot support non-functional features as constraints.
Batory in [25] defined a particular tool chain for product specification. The chain starts with a tool
that uses a feature model configuration to specify a product. The model is maintained in a Logic-Truth
Maintenance System (LTMS) and uses a propositional satisfiability (SAT) solver to prevent inconsistent
specifications. The feature-based specification can be mapped onto a grammar from which various
techniques can be used to produce products.
Sultana et al. in [13] employed the HTN planning process [26] for Artificial Intelligence (AI)
planning and described a configuration process based on this method. They also proposed an optimal
configuration framework that supports stakeholders’ constraints over non-functional features.
6.2. Comparing the approaches
Table 1 summarizes the comparison between the previous approaches based on the criteria
identified in section 6. As can be seen, none of the previous approaches (except ours) cover all the criteria.
Below, we describe each criterion in detail.
Table 1. Comparative analysis of related works: “✓”: criterion met, “-”: criterion not met

Approach                 NFR  Optimization  Constraint  Automation  Tool Support  Time Efficiency  Infrastructure Support
Staged Method [12]        -        -             -           -           ✓              -                    -
CSP [11]                  -        ✓             ✓           ✓           ✓              -                    -
FCF [9]                   -        ✓             ✓           -           -              ✓                    -
SPL Conqueror [20]        ✓        ✓             ✓           ✓           ✓              -                    -
MUSCLE [21]               -        ✓             ✓           -           ✓              -                    -
BDD [22]                  -        -             -           -           ✓              ✓                    -
GAFES [23]                -        ✓             ✓           ✓           ✓              -                    -
SPLConfig [24]            -        ✓             -           -           ✓              -                    -
LTMS Based Tool [25]      -        -             ✓           ✓           ✓              ✓                    -
Sultana Framework [13]    ✓        ✓             ✓           ✓           ✓              -                    -
Our Approach              ✓        ✓             ✓           ✓           ✓              ✓                    ✓
Modeling NFRs: Modelling functional features is a default capability of any FM, so we focus on NFRs, which not all FM configuration approaches support. Among the previous works, only SPL Conqueror [20] and the Sultana Framework [13] provide a solution for modelling NFRs; our method also supports NFRs, as explained earlier.
Optimization and time efficiency: Generating optimal FM configurations based on stakeholders’ constraints is a difficult task. Almost all the previous approaches tackled the optimization problem except staged configuration [12] and BDD [22], because their main focus was on stakeholder satisfaction rather than performance efficiency or optimization. On the other hand, the CSP-based approaches [11, 13, 20, 21] provide optimized solutions but require high computation time. Our approach provides optimization and also decreases the time required for FM configuration by making the stakeholders’ tasks clear to them and by preventing rework. The latter is achieved by notifying the stakeholders at an early stage about any inconsistencies between their NFRs (in the application layer) and the available infrastructure (in the infrastructure layer).
Considering stakeholders’ constraints: An FM may impose certain constraints between its features. These constraints need to be considered by FM configuration methods, and only configurations satisfying them may be produced. Table 1 shows that only the staged method [12], BDD [22] and SPLConfig [24] do not address constraints. Our proposed method allows the definition of constraints between features in the application layer as well as between features in the application and infrastructure layers of the FM, which covers the requirements of any SPL product implementation.
Tooling support and automation: Almost all the methods in Table 1, except FCF [9], were implemented or can be implemented as an FM configuration tool, but none supports complete automation. Our method and the Sultana et al. framework [13] have the ability to show all levels of FM configuration to stakeholders in the same view (Figure 3); other tools only provide basic views of the FM configuration.
Infrastructure support: We can state with certainty that none of the previous methods supports the infrastructure configuration of a product or guarantees the implementation of all functional and non-functional requirements of the stakeholders. None of the previous methods mentions the infrastructure needed for product implementation; they only cover the application-layer features required by the stakeholders.
As Table 1 shows, our proposed approach covers all the defined criteria. The most important benefits of our approach are the ease of the FM configuration process and its time efficiency. The reason for this claim is that the FM configuration process can be finalized in one step based on the available infrastructure. In all previous methods, stakeholders select features in a first step and the required infrastructure is only considered in a later step, which reduces the time efficiency, quality and feasibility of implementing the FM. In contrast, in our approach the whole process of selecting features and deciding on the infrastructure needed for implementation is done in a single step.
7. CONCLUSIONS
In this paper, we discussed an open research question in the configuration management of SPLs: how can we guarantee the implementation of the desired functional and especially non-functional features that may need special hardware resources (e.g. processors, memory, hard disk, networking equipment)?
To answer this question, we proposed: i) a new “Two-layer” model comprising the application and infrastructure layers, ii) new “inner” and “intra” constraint types for feature modelling, and iii) an FM configuration algorithm describing the steps involved in feature selection, leading to the final customized FM.
These constitute a complete package to tackle the FM configuration issue in SPLE. Also, we evaluated our
approach using a case study in the SPL of a sample E-Shop website. This was followed by a systematic
comparison of our approach with previous related works based on a set of criteria. The results show that our
approach could help the stakeholders to have complete knowledge about the application and infrastructure
levels of their desired products at a glance and choose the features in the application layer according to the
availability of the hardware resources in the infrastructure layer. In conclusion, the proposed method can be readily evaluated and used in any configuration management (CM) tool for SPLs. Furthermore, our approach prevents the inclusion of non-functional requests from stakeholders that cannot be implemented with the hardware resources provided in the infrastructure layer. Our approach is not yet implemented in any configuration management tool, so in future work we intend to implement it in the context of a new or an existing open-source CM tool for SPLs.
REFERENCES
[1] K. Pohl, et al., “A Framework for Software Product Line Engineering,” Springer, pp. 3-15, 2005.
[2] P. Clements and L. Northrop, “Software Product Lines: Practices and Patterns,” Addison-Wesley, 2001.
[3] I. Sommerville and P. Sawyer, “Viewpoints: Principles, Problems and a Practical Approach to Requirements Engineering,” Annals of Software Engineering, vol. 3, pp. 101-130, 1997.
[4] J. Bosch, “Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach,” Addison-Wesley, 2000.
[5] V. Alves, et al., “Requirements Engineering for Software Product Lines: A Systematic Literature Review,” International Journal of Information and Software Technology, vol. 52, pp. 806-820, 2010.
[6] I. F. D. Silva, et al., “Software Product Line Scoping and Requirements Engineering In a Small and Medium-Sized
Enterprise: An Industrial Case Study,” The Journal of Systems and Software, vol. 88, pp. 189-206, 2013.
[7] S. Besrour and I. Ghani, “Measuring Security in Requirements Engineering,” International Journal of Informatics
and Communication Technology, vol/issue: 1(2), pp. 72-81, 2012.
[8] D. Benavides, et al., “Automated Reasoning on Feature Models,” Proceeding of 17th International Conference on
Advanced Information Systems Engineering, pp. 491-503, 2005.
[9] J. White, et al., “Selecting Highly Optimal Architectural Feature Sets With Filtered Cartesian Flattening,” Journal
Systems & Software, vol. 82, pp. 1268-1284, 2009.
[10] D. Mairiza, et al., “An Investigation into the Notion of Non-Functional Requirements,” Proceeding of ACM
Symposium on Applied Computing, pp. 311-317, 2010.
[11] D. Benavides, et al., “Using Java CSP Solvers in the Automated Analyses of Feature Models,” Generative and
Transformational Techniques in Software Engineering, pp. 399-408, 2006.
[12] K. Czarnecki, et al., “Staged Configuration Using Feature Models,” Proceedings of the Software Product Line
Conference, pp. 162-164, 2004.
[13] S. Sultana, et al., “Automated Planning for Feature Model Configuration based on Functional and Non-Functional
Requirements,” Proceeding of 16th International Software Product Line Conference, vol. 1, pp. 56-65, 2012.
[14] M. Reiser and M. Weber, “Multi-Level Feature Trees: A Pragmatic Approach to Managing Highly Complex
Product Families,” International Journal of Requirements Engineering, vol. 12, pp. 57-75, 2007.
[15] K. C. Kang, et al., “Feature-Oriented Domain Analysis (FODA) Feasibility Study,” Technical Report CMU/SEI-
90-TR-021, SEI, Carnegie Mellon University, 1990.
[16] G. Kotonya and I. Sommerville, “Requirements Engineering Processes and Techniques,” John Wiley & Sons, 1998.
[17] M. Noorian, et al., “Non-functional Properties in Software Product Lines: A Taxonomy for Classification,”
Proceedings of the 24th International Conference on Software Engineering and Knowledge Engineering, 2012.
[18] N. Eén and N. Sörensson, “An Extensible SAT Solver,” Proceedings of the 6th International Conference on Theory and
Applications of Satisfiability Testing, LNCS 2919, pp. 502-518, 2003.
[19] E. D. Farahani and J. Habibi, “Feature Model Constraints Control in Stage Configuration of Software Product
Lines,” International Journal of Software Engineering and Its Applications, vol. 11, pp. 1-12, 2017.
[20] N. Siegmund, et al., “SPL Conqueror: Toward Optimization of Non-Functional Properties in Software Product
Lines,” Software Quality Journal, 2011.
[21] J. White, et al., “Automated Reasoning for Multi-Step Feature Model Configuration Problems,” Proceedings of the
13th International Software Product Line Conference, pp. 11-20, 2009.
[22] M. Mendonça, et al., “S.P.L.O.T.: Software Product Lines Online Tools,” Proceeding of 24th Conference on
Object-Oriented Programming Systems, Languages and Applications (OOPSLA), pp. 761-762, 2009.
[23] J. Guo, et al., “A Genetic Algorithm for Optimized Feature Selection with Resource Constraints in Software
Product Lines,” Journal of Systems and Software, vol. 84, pp. 2208-2221, 2011.
[24] L. Machado, et al., “SPLConfig: Product Configuration in Software Product Line,” Brazilian Congress on Software
(CBSoft), Tools Session, pp. 1-8, 2014.
[25] D. Batory, “Feature Models, Grammars, and Propositional Formulas,” Proceedings of the 9th International Software
Product Line Conference (SPLC 2005), Lecture Notes in Computer Science, vol. 3714, 2005.
[26] S. Sohrabi, et al., “HTN Planning with Preferences,” Proceedings of the 21st International Joint Conference on Artificial
Intelligence, pp. 1790-1797, 2009.
Int J Elec & Comp Eng, Vol. 9, No. 4, August 2019: 2648-2658
BIOGRAPHIES OF AUTHORS
Elham Darmanaki Farahani received her bachelor's and master's degrees in Software Engineering
in 2002 and 2004, respectively, from Azad University, Central and South Tehran Branches.
She is currently a PhD candidate in Software Engineering at Sharif University of Technology.
She has published papers in national and international conferences and journals. Her research
interests include software engineering, software product lines, and configuration management.
Her email address is: efarahani@ce.sharif.edu.
Jafar Habibi received his bachelor's degree in Computer Engineering and his master's degree in Industrial
Engineering, in 1980 and 1388 respectively, from the School of Computer and Tarbiat Modarres University,
and received his PhD in Computer Science in 1999 from the University of Manchester, UK. He is
currently a faculty member of the Department of Computer Engineering at Sharif University of
Technology and director of the Computer Society of Iran. His research fields in computer
engineering are performance evaluation of computer systems, software engineering, peer-to-peer
networks, social networks, and data mining. His email address is: jhabibi@sharif.edu.