This document proposes extending the OpenUP software process to better support the development of autonomous software systems, with a focus on eliciting non-functional requirements (NFRs). It introduces two new artifacts: 1) an NFR Description, to document identified NFRs and resolve any conflicts or ambiguities, and 2) Misuse Cases, to help uncover additional hidden NFRs. A case study on a Brazilian emergency system illustrates the application of the extended OpenUP process with the new artifacts during requirements elicitation.
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI... (ijaia)
Predicting the time needed to build software is a very complex task for software engineering managers. Many factors can directly affect the productivity of the development team, and factors related to the complexity of the system to be developed drastically change the time required to complete work in software factories. This work proposes a hybrid system based on artificial neural networks and fuzzy systems to assist in constructing a rule-based expert system that supports the prediction of the hours needed to develop software according to the complexity of its elements. The set of fuzzy rules obtained by the system aids the management and control of software development by providing a base of interpretable, rule-based estimates. The model was tested on a real database, and its results were promising for building a mechanism to aid the predictability of software construction.
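To make the rule-based output concrete, here is a tiny sketch of evaluating interpretable fuzzy rules for effort (my own illustration, not the paper's neuro-fuzzy network); the membership shapes and consequent hours are hypothetical:

    # Hypothetical rule base: IF complexity IS <level> THEN effort IS <hours>
    rules = {"low": 40.0, "medium": 120.0, "high": 320.0}

    def membership(complexity):
        """Triangular/shoulder memberships for a 0-10 complexity score (hypothetical shapes)."""
        low = max(0.0, (4 - complexity) / 4)
        med = max(0.0, 1 - abs(complexity - 5) / 3)
        high = max(0.0, (complexity - 6) / 4)
        return {"low": low, "medium": med, "high": high}

    def estimate_hours(complexity):
        mu = membership(complexity)
        total = sum(mu.values())
        # Weighted-average defuzzification over the fired rules
        return sum(mu[k] * rules[k] for k in rules) / total

    print(estimate_hours(7.5))  # blends the "medium" and "high" rules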
This document discusses the impact of aspect-oriented programming (AOP) on software maintainability based on a literature review and case studies. It summarizes several case studies that measured maintainability metrics like coupling, cohesion, and separation of concerns in object-oriented (OO) systems versus aspect-oriented (AO) systems. The studies found that AO systems generally had less coupling between components, higher separation of concerns, and were more changeable and maintainable than equivalent OO systems. The document also outlines various software metrics that have been used to measure maintainability attributes in AO systems like cohesion, coupling, size, and changeability.
An Approach of Improve Efficiencies through DevOps Adoption (IRJET Journal)
This document discusses adopting DevOps practices to improve organizational efficiencies. It begins with an abstract discussing how organizations waste resources and how DevOps aims to address this through lean principles and continuous feedback. It then discusses the history and concepts of DevOps, proposing a DevOps adoption model. It outlines factors that affect IT performance and cultural transformation. The document also describes the research design of a study conducted through interviews with DevOps professionals. It identifies four main challenges to DevOps adoption: lack of awareness, lack of support, implementing technologies, and adapting processes. The analysis focuses on the lack of awareness challenge, noting confusion around DevOps definitions and resistance to "buzzwords".
FACTORS ON SOFTWARE EFFORT ESTIMATION (ijseajournal)
Software effort estimation is an important process in the system development life cycle, as inaccurate estimates by project designers can affect the success of software projects. In the past few decades, various effort prediction models have been proposed by academics and practitioners. Traditional estimation techniques include Lines of Code (LOC), Function Point Analysis (FPA), and Mark II Function Points (Mark II FP), which have proven unsatisfactory for predicting the effort of all types of software. In this study, the author proposed a regression model to predict the effort required to design small and medium-scale application software. To develop the model, the author used 60 completed software projects developed by a software company in Macau, extracted factors from the projects, and applied them to a regression model. A prediction of software effort with an accuracy of MMRE = 8% was obtained.
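As a quick illustration of the accuracy metric cited above, a minimal sketch of MMRE (mean magnitude of relative error) in Python; the effort figures are hypothetical:

    def mmre(actual, predicted):
        """Mean Magnitude of Relative Error: average of |actual - predicted| / actual."""
        return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

    # Hypothetical person-hour figures for three projects
    actual = [120.0, 300.0, 80.0]
    predicted = [110.0, 310.0, 85.0]
    print(f"MMRE = {mmre(actual, predicted):.2%}")  # lower is better; 8% would be strong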
IRJET- Analysis of Software Cost Estimation Techniques (IRJET Journal)
This document analyzes and compares different software cost estimation techniques using machine learning algorithms. It uses the COCOMO and function point estimation models on NASA project datasets to test the performance of the ZeroR and M5Rules classifiers. The M5Rules classifier produced more accurate results with lower mean absolute errors and root mean squared errors compared to COCOMO, function points, and the ZeroR classifier. Therefore, the study suggests using M5Rules techniques to build models for more precise software effort estimation.
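For reference, a minimal sketch of the two error measures used to compare the classifiers above; the effort values are hypothetical:

    import math

    def mae(actual, predicted):
        """Mean absolute error."""
        return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

    def rmse(actual, predicted):
        """Root mean squared error; penalizes large misses more than MAE."""
        return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

    actual = [201.0, 79.0, 423.0]     # hypothetical measured effort (person-months)
    predicted = [195.0, 90.0, 410.0]  # hypothetical model output
    print(mae(actual, predicted), rmse(actual, predicted))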
1) The document presents EJACK, an extended version of the JACK development tool to support all phases of the Tropos agent-oriented software engineering methodology.
2) EJACK maps concepts from Tropos like actors, goals, resources, and dependencies to equivalent concepts in JACK like agents, plans, and capabilities.
3) A student e-registration system is used as a case study to demonstrate how EJACK can be used to analyze, design, and implement an application using all phases of Tropos.
This document summarizes a research paper on software architecture reconstruction methods. It discusses how software architectures can drift over time from the original design due to changes and deviations. Architecture reconstruction is used to recover the original architecture by applying reverse engineering techniques. The document reviews different bottom-up, top-down, and hybrid methods for architecture reconstruction, including tools like ARMIN and Rigi. It also defines key terms related to architecture reconstruction and the challenges of architectural aging, erosion, drift, and mismatch.
Software engineering in industrial automation state-of-the-art review (Tiago Oliveira)
This document summarizes recent developments in software engineering for industrial automation systems. It discusses how software is becoming increasingly important and complex in industrial automation, representing 40% of system costs in some cases. The document reviews key areas of software engineering as they relate to industrial automation, including requirements, design, construction, testing, maintenance, and standards/norms. It provides an overview of typical automation system architectures and software functions.
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip (CSCJournals)
Due to the complexity and high performance requirements of multimedia applications, the design of embedded systems is subject to different types of design constraints, such as execution time, time to market, and energy consumption. Several joint software/hardware design (co-design) approaches have been proposed to help the designer find a match between application and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system on chip, based on a dynamic step and a static step: the first uses dynamic profiling and the second uses the Design Trotter tools. The approach is validated through 3D image synthesis.
Dynamically Adapting Software Components for the Grid (Editor IJCATR)
The emergence of dynamic execution environments such as grids forces scientific applications to embrace dynamicity. Dynamic adaptation of grid components in grid computing is a critical issue in designing a framework for dynamic adaptation toward self-adaptable software components for the grid. This paper presents the systematic design of a dynamic adaptation framework together with an effective implementation of the structure of an adaptable component, i.e., incorporating a layered architecture environment with the concept of dynamicity.
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been significant effort to analyze, design, and implement information systems that process information and data and solve various problems. On the one hand, the complexity of contemporary systems and the striking increase in the variety and volume of information have led to a great number of components and elements, and to more complex structure and organization of information systems. On the other hand, it is necessary to develop systems that meet all of the stakeholders' functional and non-functional requirements. Given that evaluating these requirements before the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture becomes available. One way to evaluate a software architecture is to create an executable model of it.
The present research performed availability assessment, taking repair, maintenance, and accident time parameters into consideration. Failures of both software and hardware components are considered in the architecture of software systems. To describe the architecture easily, the authors used the Unified Modeling Language (UML); however, due to the informality of UML, they also utilized Colored Petri Nets (CPN) for assessment. Eventually, the researchers evaluated a CPN-based executable model of the architecture using CPN Tools.
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
This document proposes an approach to evaluate software performance using the blackboard technique at the software architecture level. It begins by describing the blackboard technique, performance modeling in UML, and timed colored Petri nets. It then outlines an algorithm to convert a UML model of a software architecture that uses the blackboard technique into an executable timed colored Petri net model. This allows evaluating non-functional requirements, such as response time, at the architecture level before implementation. As a case study, the method is applied to a hotel reservation system modeled with UML diagrams and implemented using the blackboard technique; performance is then evaluated by analyzing the resulting timed colored Petri net model.
A Methodology To Manage Victim Components Using Cbo Measure (ijseajournal)
This document presents a methodology for managing victim components using the coupling between objects (CBO) measure. It defines several measures of software component reusability, including a weighted component measure and a depth of inheritance tree measure, and calculates them for the components of a human resources (HR) portal application. The document identifies the business tier component as a potential victim component based on its low reuse count. It proposes using the CBO measure to identify highly cohesive components that need reconfiguration to improve reusability; reconfiguring such components would make them less cohesive and easier to reuse in other applications.
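As a rough illustration of the CBO metric the methodology relies on, a minimal sketch that counts, for each class, the distinct other classes it is coupled to; the class model is hypothetical:

    # Hypothetical dependency map: class -> classes it references (fields, calls, parameters)
    deps = {
        "BusinessTier": {"DataAccess", "Logger", "Validator", "SessionMgr"},
        "DataAccess": {"Logger"},
        "Logger": set(),
    }

    def cbo(cls, deps):
        """CBO: number of distinct classes this class is coupled to (outgoing plus incoming)."""
        fan_out = deps.get(cls, set())
        fan_in = {c for c, refs in deps.items() if cls in refs}
        return len(fan_out | fan_in)

    for cls in deps:
        print(cls, cbo(cls, deps))  # high CBO flags candidates for reconfiguration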
‘O’ Model for Component-Based Software Development Process (ijceronline)
Technological advancement has made users increasingly dependent on information technology, and hence on software, which provides the platform for implementing information technology. Component-Based Software Engineering (CBSE) has been adopted by the software community to counter the challenges posed by the fast-growing demand for large and complex software systems. One of the essential reasons for adopting CBSE is the rapid development of complicated software systems within well-defined boundaries of time and budget: CBSE assembles applications from already existing reusable components developed as autonomous pieces of software. The paper proposes a novel CBSE lifecycle model, named the O model, informed by the available CBSE lifecycles.
Harnessing deep learning algorithms to predict software refactoring (TELKOMNIKA JOURNAL)
During software maintenance, software systems need to be modified by adding or changing source code. These changes are required to fix errors or to accommodate new requirements raised by stakeholders or the marketplace. Identifying the targeted piece of code for refactoring is a real challenge for software developers, and the whole refactoring process relies mainly on developers' skills and intuition. In this paper, a deep learning algorithm is used to develop a refactoring prediction model that highlights the classes requiring refactoring. More specifically, the gated recurrent unit algorithm is used with proposed pre-processing steps for refactoring prediction at the class level. The effectiveness of the proposed model is evaluated using a very common dataset of 7 open-source Java projects. The experiments are conducted before and after balancing the dataset to investigate the influence of data sampling on the performance of the prediction model. The experimental analysis reveals promising results in the field of code refactoring prediction.
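As a sketch of the kind of model described above (not the authors' exact architecture), a minimal GRU-based binary classifier in Keras; the input shape, metric window, and random data are hypothetical:

    import numpy as np
    import tensorflow as tf

    # Hypothetical input: for each class, a sequence of 10 releases x 6 code metrics
    X = np.random.rand(500, 10, 6).astype("float32")
    y = np.random.randint(0, 2, size=(500,))  # 1 = class was refactored

    model = tf.keras.Sequential([
        tf.keras.layers.GRU(32, input_shape=(10, 6)),    # gated recurrent unit layer
        tf.keras.layers.Dense(1, activation="sigmoid"),  # refactoring probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)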
The document discusses the objectives, feasibility study, and implementation specifications for an Income Tax Department Management System project. The objectives are to overcome paper-based problems and easily manage records of PAN card holders and employees. A feasibility study assesses the technical, operational, and economic feasibility of the proposed system. The implementation will use ASP.NET on Windows with a SQL Server database. Hardware requirements include a Pentium PC with 512MB RAM and 80GB hard drive.
The document discusses applying project cost management principles like earned value management (EVM) to software maintenance projects. It outlines the types of maintenance tasks, challenges in effort estimation, and proposes using a software maturity index and EVM to estimate maintenance costs and improve project measurement and control. Accurately estimating effort is key to the successful application of EVM for software maintenance projects.
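To make the EVM idea concrete, a small worked sketch with hypothetical figures for a maintenance project:

    # Hypothetical mid-project figures for a maintenance release
    pv = 100_000.0   # Planned Value: budgeted cost of work scheduled so far
    ev = 80_000.0    # Earned Value: budgeted cost of work actually performed
    ac = 90_000.0    # Actual Cost of the work performed
    bac = 250_000.0  # Budget At Completion

    cpi = ev / ac    # Cost Performance Index (<1 means over budget)
    spi = ev / pv    # Schedule Performance Index (<1 means behind schedule)
    eac = bac / cpi  # Estimate At Completion, assuming current cost efficiency holds
    print(f"CPI={cpi:.2f} SPI={spi:.2f} EAC={eac:,.0f}")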
The document discusses challenges and business success related to software reuse. It outlines topics like reuse challenges, technologies, economics, case studies and empirical investigations. Regarding challenges, it notes organizational, technical, domain engineering, and economic aspects. For technologies, it discusses software analysis/visualization, product lines, and architectures. It also examines cost/benefit relationships, metrics, and legal issues regarding reuse. Case studies from HP and Ericsson demonstrate quality, productivity and economic benefits of large-scale reuse programs. Strategies for successful reuse include formal reuse programs with quality control and continuous improvement.
This document discusses software reuse and application frameworks. It covers the benefits of software reuse like accelerated development and increased dependability. Application frameworks provide a reusable architecture for related applications and are implemented by adding components and instantiating abstract classes. Web application frameworks in particular use the model-view-controller pattern to support dynamic websites as a front-end for web applications.
The document describes a course on software engineering taught by Dr. P. Visu at Velammal Engineering College. It includes the course objectives, outcomes, syllabus, and learning resources. The key objectives are to understand software processes, requirements engineering, object-oriented concepts, software design, testing, and project management techniques. The syllabus covers topics like software processes, requirements analysis, object-oriented concepts, software design, testing, and project management over 5 units. Recommended textbooks and online references are also provided.
An Adjacent Analysis of the Parallel Programming Model Perspective: A Survey (IRJET Journal)
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
Performance prediction for software architectures (Mr. Chanuwan)
The document proposes an approach called APPEAR for predicting software performance in component-based systems. APPEAR uses both structural and statistical modeling techniques. It consists of two main parts: (1) calibrating a statistical regression model by measuring performance of existing applications, and (2) using the calibrated model to predict performance of new applications. Both parts are based on a model that describes relevant execution properties in terms of a "signature". The method supports flexible choice of parts modeled structurally versus statistically. It is being validated on two industrial case studies.
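A minimal sketch of the calibrate-then-predict idea (my own simplification, not the APPEAR method itself): fit a regression from signature features of existing applications to measured latency, then apply it to a new application's signature. The features and numbers are hypothetical:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical signatures: [component calls, bytes transferred, cache misses]
    signatures = np.array([[120, 4096, 30], [300, 8192, 55], [80, 2048, 12], [210, 6144, 40]])
    measured_ms = np.array([14.0, 33.1, 8.2, 24.5])  # measured response times

    model = LinearRegression().fit(signatures, measured_ms)  # calibration step
    new_app = np.array([[150, 5000, 25]])                    # signature of a new application
    print("predicted latency (ms):", model.predict(new_app)[0])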
This document provides an overview of component-based software engineering (CBSE). It discusses CBSE processes, component models, composition, and issues related to developing and reusing components. Specifically, it covers CBSE for reuse, which focuses on developing reusable components, and CBSE with reuse, which is the process of developing new applications using existing components. Component identification, validation, and resolving interface incompatibilities during composition are also addressed.
The document discusses a proposed reusability framework for cloud computing. The framework, called the Cloud Computing Reusability Model (CCR), aims to enable reusability in cloud computing through component-based development. The CCR model is validated using CloudSim, and experimental results show that the reusability-based approach can minimize costs and reduce time to market. The document also reviews related work on reusability and cloud computing, and analyzes challenges of the cloud computing platform for software development.
The document discusses software re-engineering and describes:
1) What software re-engineering is, including restructuring software to facilitate future changes without adding new functionality.
2) The advantages of re-engineering over new development, including reduced risk and cost.
3) When re-engineering should be done, such as when changes are confined to part of a system or hardware/software becomes obsolete.
The document presents a Petri net model for hardware/software codesign. Petri nets are used as an intermediate model to allow for formal qualitative and quantitative analysis in order to perform hardware/software partitioning. Quantitative metrics like load balance, communication cost, and mutual exclusion degree are computed from the Petri net model to guide the initial allocation and partitioning process. The approach also estimates hardware area and considers multiple software components in the partitioning method.
Reengineering involves improving existing software or business processes by making them more efficient, effective and adaptable to current business needs. It is an iterative process that involves reverse engineering the existing system, redesigning problematic areas, and forward engineering changes by implementing a redesigned prototype and refining it based on feedback. The goal is to create a system with improved functionality, performance, maintainability and alignment with current business goals and technologies.
Contributors to Reduce Maintainability Cost at the Software Implementation Phase (Waqas Tariq)
This document discusses factors that can reduce software maintenance costs during the implementation phase. It identifies maintenance costs as the highest among the software development phases. The objective is to define criteria to assess software quality characteristics and to assist during implementation. This will help reduce maintenance costs by creating criteria groups that support writing standard code, developing a model to apply the criteria, and increasing understandability. Student groups will study code standardization, write programs, and test software maintenance on those programs to validate the model and the proposed criteria.
Decision Making and Autonomic Computing (IOSR Journals)
Abstract: Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. As widely reported in the literature, an autonomic computing framework can be seen as composed of autonomic components interacting with each other.
An autonomic computing system can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.
General Terms: Autonomic systems, Self-configuration, Self-healing, Self-optimization, Self-protection.
Keywords: Know itself, reconfigure, recover from extraordinary events, expert in self-protection.
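A minimal sketch of the kind of control loop described above (monitor, analyze, plan, execute over shared knowledge, often called MAPE-K), with a hypothetical CPU-load policy:

    import random

    class AutonomicManager:
        """Toy MAPE-K loop: one local control loop over a managed resource."""

        def __init__(self, high=0.8, low=0.2):
            self.knowledge = {"high": high, "low": low}  # K: policy thresholds

        def monitor(self):
            return random.random()  # sensor reading: hypothetical CPU load in [0, 1]

        def analyze(self, load):
            if load > self.knowledge["high"]:
                return "overloaded"
            if load < self.knowledge["low"]:
                return "underused"
            return "ok"

        def plan(self, symptom):
            return {"overloaded": "scale_up", "underused": "scale_down"}.get(symptom)

        def execute(self, action):
            if action:
                print("effector:", action)  # self-adjustment via effector

        def step(self):
            self.execute(self.plan(self.analyze(self.monitor())))

    mgr = AutonomicManager()
    for _ in range(5):
        mgr.step()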
A quick tour of Autonomic Computing provides an overview of autonomic computing concepts and the IBM Autonomic Computing Toolkit. It explains that autonomic computing aims to build self-managing systems that can configure, heal, protect and optimize themselves. The document outlines the key components of an autonomic infrastructure, including managed resources, autonomic managers, sensors, effectors and control loops. It also discusses different levels of autonomic maturity that systems can achieve.
Autonomic Resource Provisioning for Cloud-Based Software (Pooyan Jamshidi)
This document proposes using fuzzy logic and type-2 fuzzy sets to develop an autonomous resource provisioning system for cloud-based software. Current auto-scaling solutions have limitations including requiring deep application knowledge and performance modeling expertise from users. The proposed system would use fuzzy inference to map monitored performance data to scaling actions, eliminating the need for users to specify scaling parameters or policies. It would incorporate uncertainty into the modeling and use expert knowledge from multiple users to develop robust and adaptive provisioning behavior.
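A minimal sketch of the fuzzy idea (ordinary type-1 membership functions here, not the paper's type-2 machinery): map a monitored utilization level to a scaling action through membership functions and simple rules; the thresholds and rule consequents are hypothetical:

    def low(u):  return max(0.0, min(1.0, (0.5 - u) / 0.3))  # 1 below 0.2, 0 above 0.5
    def med(u):  return max(0.0, 1 - abs(u - 0.5) / 0.3)     # peak at 0.5
    def high(u): return max(0.0, min(1.0, (u - 0.5) / 0.3))  # 0 below 0.5, 1 above 0.8

    def scaling_delta(util):
        """Fuzzy rules: low utilization -> remove VMs, high -> add VMs."""
        w_low, w_med, w_high = low(util), med(util), high(util)
        total = w_low + w_med + w_high
        # Defuzzify with a weighted average of the rule consequents (-2, 0, +2 VMs)
        return (w_low * -2 + w_med * 0 + w_high * 2) / total

    print(scaling_delta(0.9))  # positive: scale out
    print(scaling_delta(0.1))  # negative: scale in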
This document proposes an architecture for a framework of autonomic computing applied to hybrid wireless networks. The key points are:
1) Autonomic computing principles of self-configuration, self-optimization, self-healing, and self-protection are applied to enable self-management in hybrid wireless networks.
2) An architecture is proposed with autonomic elements distributed at different layers (e.g. routing, MAC) of the network that can monitor, analyze, plan and execute based on policies.
3) Artificial intelligence techniques like neural networks and fuzzy logic are identified as promising approaches to facilitate autonomic operations for functions like routing and radio resource management.
Overview of the US National Science Foundation Cloud and Autonomic Computing Industry/University Cooperative Research Center testbed activities on the US NSF Chameleon, Cloudlab and XSEDE resources.
The NSF CAC will use its industry/university connections to promote and foster open cloud standards & interoperability testbeds using internal and external resources.
Specific projects have been proposed and approved on two new NSF computer-science-oriented cloud “testbed as a service” resources, Chameleon and CloudLab, which have recently been funded to replace the FutureGrid project.
These testbeds will be open to all researchers who wish to cooperate with us on cloud interoperability, performance, standards or general cloud functionality testing within the context of the approved projects.
Both US domestic and international participants are welcome, as long as you’re willing to work on interoperability topics and share your results.
Opportunities for involvement in the CAC by commercial companies also exist, as described at http://nsfcac.org
This is a short introduction to the city, the venue, and the organisation of ICAC 2011. More information is available on the conference website: www.autonomic-conference.org.
Building Toward an Open and Extensible Autonomous Computing Platform Utilizi... (Phil Cryer)
This document proposes building an open and extensible autonomous computing platform using existing open source technologies. It recommends using a distributed network topology and the Debian operating system standardized across nodes. The system would use Puppet for configuration, Apticron for security updates, Monit for monitoring, and a distributed file system like HDFS to store data across multiple nodes, improving autonomy. Standard x86 server hardware is suggested for flexibility and low cost. The goal is to realize much of autonomous computing's promise today through coordination of established software applications.
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee... (Jehn)
This presentation gives an overview of Autonomic Computing, then shows the state of the art in Requirements Engineering for Autonomic Computing based on 4 papers.
In this deck from the 2015 PBS Works User Group, Dale Talcott from the NASA Advanced Supercomputing (NAS) Division presents: Prologue O/S - Improving the Odds of Job Success.
"When looking to buy a used car, you kick the tires, make sure the radio works, check underneath for leaks, etc. You should be just as careful when deciding which nodes to use to run job scripts. At the NASA Advanced Supercomputing Facility (NAS), our prologue and epilogue have grown almost into an extension of the O/S to make sure resources that are nominally capable of running jobs are, in fact, able to run the jobs. This presentation describes the issues and solutions used by the NAS for this purpose."
Learn more: http://www.pbsworks.com/pbsug/2015/agenda.aspx
Watch the video presentation: https://www.youtube.com/watch?v=eQfowQK8PE4
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Autonomic computing aims to reduce human intervention in computer systems by enabling self-managing capabilities. It was developed by IBM based on the human autonomic nervous system and involves techniques like self-configuration, self-healing, self-optimization, and self-protection. The goals of autonomic computing include reducing IT costs, simplifying system management, enabling server consolidation, and forming the basis for more effective cloud computing and pervasive computing technologies in the future through greater system automation and adaptability.
This document summarizes an academic paper about autonomic computing and self-healing systems. It begins with an introduction to self-adapting systems and their development. It then discusses key characteristics of self-managing systems like self-configuration and self-healing. The document outlines tools used to design autonomic systems, including the MAPE-K loop and variability models. It also describes implementations of self-healing systems using redundancy, managers, and architecture-centric approaches. The document concludes by discussing challenges in self-healing system design around fault isolation, tool integration, and reliance on design-time models.
Autonomic Computing: Vision or Reality (Ivo Neskovic)
Autonomic computing is a new computing paradigm that combines multiple disciplines of computer science with the sole aim of developing self-managing computer systems. Dating from early 2001, it is one of the most recent paradigm shifts and, as such, is still in a research-only phase, yet it is attracting many business investors in the process.
The survey presents, in a clear and appropriately detailed manner, the problem of computer science that autonomic computing tries to solve and the details of the proposed solution, together with some of the immediate and long-term benefits it will provide. Moreover, the survey outlines the basic principles that define a system as autonomic and presents a novel method of designing autonomic systems. Closing the survey are two sections that briefly outline the most prominent research projects on autonomic computing, together with a distilled summary of the major challenges businesses will face in adopting autonomic systems.
The document discusses applying autonomic computing principles to wireless sensor networks (WSNs). It introduces WSNs and their design goals/challenges, which include fault tolerance, power management, efficient routing and data aggregation. Autonomic computing aims to make systems self-configuring, self-healing, self-optimizing and self-protecting. The document argues that autonomic computing is well-suited for addressing WSN challenges by allowing for self-configuration, recovery from failures, optimized resource usage, and protection. It outlines architectures like MANNA that apply autonomic and service-oriented principles to provide a self-managing framework for WSNs.
Introduction
Metadata and Ontology in the Semantic Web
Semantic Web Services
A Layered Structure of the Semantic Grid
Semantic Grid
Autonomic Computing
This seminar discusses autonomic computing technology. Autonomic computing allows IT systems to self-manage by configuring, healing, optimizing and protecting themselves with minimal human intervention similar to the autonomic nervous system. The goal is to increase productivity while reducing complexity. Key aspects discussed include self-configuration, self-optimization, self-healing and self-protection. Challenges include defining system identity and boundaries, interface design, translating business policies to IT, and creating a federated system of autonomic components.
The document discusses autonomic computing and its evolution. It describes autonomic computing as systems that are self-configuring, self-healing, self-protecting and self-optimizing without direct human intervention. These systems aim to manage complexity and adapt to changing conditions automatically. The document also notes that the increasing complexity of computing systems is overwhelming human administrators and that autonomic computing aims to develop systems capable of self-management to address this problem. It describes how computing systems have evolved from manual management to include increasingly automated functions.
This document discusses autonomic computing, which aims to develop self-managing computing systems that can perform tasks automatically with minimal human intervention. It outlines the growing complexity of IT systems that motivates autonomic computing. The conceptual model is inspired by the human autonomic nervous system which automatically regulates vital functions. The architecture uses control loops to monitor systems and keep parameters within desired ranges. Autonomic systems are characterized by self-configuration, self-optimization, self-healing, and self-protection. Research challenges include developing policies to guide autonomous behavior. Benefits are reduced costs and improved stability, availability, and security of systems.
Cloud Computing and the Next-Generation of Enterprise Architecture - Cloud Co... (Stuart Charlton)
Stuart Charlton's presentation at the 2008 Sys-Con Cloud Computing Expo in San Jose, CA
Revised for the 2009 Sys-Con Cloud Computing Expo in New York City
This is the presentation I made for my seminar on the topic of Autonomic Computing, which describes computing systems that can adjust themselves and adapt to various changes autonomically.
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes quickly, with new servers, services, connections, and ports added often, at times daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the increasing number of vulnerabilities and exploits, coupled with the recurrent evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. This paper proposes a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter called unified NVA), derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Quality aware approach for engineering self-adaptive software systems (csandit)
Self-adaptivity allows software systems to autonomously adjust their behavior at run-time to reduce the cost complexities caused by manual maintenance. In this paper, an approach for building an external adaptation engine for self-adaptive software systems is proposed. To improve the quality of self-adaptive software systems, this research addresses two challenges: managing the complexity of the adaptation space efficiently, and handling the run-time uncertainty that hinders the adaptation process. The research utilizes case-based reasoning as an adaptation engine, along with utility functions for realizing the managed system's requirements and handling uncertainty.
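A minimal sketch of the case-based reasoning idea (my own simplification, not the paper's engine): retrieve the stored adaptation case closest to the current context, and use a utility function to decide whether adaptation is needed; the context features, cases, and utility weights are hypothetical:

    # Hypothetical case base: (context vector, adaptation action, observed utility)
    # Context = (request rate, error rate)
    cases = [
        ((900.0, 0.02), "add_replica", 0.9),
        ((100.0, 0.01), "no_op", 0.8),
        ((500.0, 0.20), "restart_component", 0.7),
    ]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def utility(context, weights=(0.001, 10.0)):
        """Hypothetical utility: penalize load and errors (higher is better)."""
        rate, errors = context
        return 1.0 - weights[0] * rate - weights[1] * errors

    def adapt(context):
        # Retrieve the nearest stored case; reuse its action if current utility is low
        nearest = min(cases, key=lambda c: distance(c[0], context))
        return nearest[1] if utility(context) < 0.5 else "no_op"

    print(adapt((850.0, 0.05)))  # likely "add_replica"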
Integrated Analysis of Traditional Requirements Engineering Process with Agil... (zillesubhan)
In the past few years, the agile software development approach has emerged as one of the most attractive software development approaches. A typical CASE environment consists of a number of CASE tools operating on a common hardware and software platform, and there are a number of different classes of users of a CASE environment; some users, such as software developers and managers, wish to use CASE tools to support them in developing application systems and monitoring the progress of a project. The agile approach has quickly caught the attention of a large number of software development firms. However, it pays particular attention to the development side of a software project while neglecting critical aspects of the requirements engineering process. In fact, there is no standard requirements engineering process in this approach, and requirements engineering activities vary from situation to situation. As a result, a large number of problems emerge that can lead software development projects to failure. One major drawback of the agile approach is that it suits small projects with limited team size and thus cannot be adopted for large projects. We claim that the approach can be used for large projects if the traditional requirements engineering approach is combined with the agile manifesto; this combination can also help resolve many of the problems that exist in agile development methodologies. In software development, the most important thing is to know the customer's requirements clearly, and also to capture them through modeling (data modeling, functional modeling, behavior modeling). Using UML, we can build an efficient system starting from scratch toward the desired goal: we start from an abstract model and develop the required system in detail through different UML diagrams, each of which serves a different goal toward implementing the whole project.
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...IOSR Journals
This document describes a study that uses fuzzy clustering and software metrics to predict faults in large industrial software systems. The study uses fuzzy c-means clustering to group software components into faulty and fault-free clusters based on various software metrics. The study applies this method to the open-source JEdit software project, calculating metrics for 274 classes and identifying faults using repository data. The results show 88.49% accuracy in predicting faulty classes, demonstrating that fuzzy clustering can be an effective technique for fault prediction in large software systems.
With the emergence of virtualization and cloud computing technologies, several services are housed on virtualization platforms. Virtualization is the technology that many cloud service providers rely on for efficient management and coordination of the resource pool. As essential services are also housed on the cloud platform, it is necessary to ensure continuous availability by implementing all necessary measures. Windows Active Directory is one such service, developed by Microsoft for Windows domain networks. It is included in Windows Server operating systems as a set of processes and services for the authentication and authorization of users and computers in a Windows domain network, and it is required to run continuously without downtime. As a result, errors or garbage may accumulate, leading to software aging, which in turn may lead to system failure and its associated consequences. In this work, the software aging patterns of the Windows Active Directory service are studied. Software aging of Active Directory needs to be predicted properly so that rejuvenation can be triggered to ensure continuous service delivery. To predict the right time accurately, a model that uses a time series forecasting technique is built.
The adoption of cloud environments for various applications has led to security and privacy concerns over users' data, and protecting user data and privacy on such platforms is an area of active study.
Many cryptographic strategies have been presented to provide secure sharing of resources on cloud platforms. These methods try to achieve a secure authentication strategy realizing features such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets, and revocation, but each one varies in how it realizes these features. Each feature requires a different cryptographic mechanism, which induces computational complexity that hampers the deployment of these models in practical applications. Most of these techniques are designed for a particular application environment and adopt public-key cryptography, which incurs high cost due to its computational complexity.
To address these issues, this work presents secure and efficient privacy preservation for mining data on a public cloud platform by adopting a party- and key-based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) is deployed on the Microsoft Azure cloud platform. Experiments were conducted to evaluate the computational complexity, and the outcome shows that the proposed model achieves significant performance in terms of computation overhead and cost.
Productivity Factors in Software Development for PC PlatformIJERA Editor
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms CRT and ANN, recommended by the Auto Classifier tool in SPSS Modeler, were used for determining the most important variables (attributes) of software development in a PC environment. While their classification accuracies for productive versus non-productive cases are relatively close (72% vs 69%), their rankings of important variables differ: CRT ranks Programming Language as the most important variable and Function Points as the least important, whereas ANN ranks Function Points as the most important, followed by team size and Programming Language.
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to the present: fewer than 50 papers were found on overall cost estimation, fewer than 25 on effort estimation, and only 9 on fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth-generation languages and NASA projects. Most techniques use COCOMO or extend existing models with approaches like fuzzy logic, neural networks, or statistical methods.
Process-Centred Functionality View of Software Configuration Management: A Co...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Analysis and design web portl amazing north sulawesi using aup methodologyStanley Karouw
The document describes the analysis and design of a web portal for promoting tourism in North Sulawesi, Indonesia using the Agile Unified Process (AUP) methodology. It discusses using AUP for the software development lifecycle, which includes inception, elaboration, construction, and transition phases. In the inception phase, project scope and requirements are defined. In elaboration, use cases, architecture, and interfaces are designed. Construction includes coding the application and testing. The goal is to develop a web portal that provides comprehensive information on North Sulawesi's exotic tourism locations using a user-centered approach.
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
ESTIMATING THE EFFORT OF MOBILE APPLICATION DEVELOPMENTcsandit
The rise of the use of mobile technologies in the world, such as smartphones and tablets connected to mobile networks, is changing old habits and creating new ways for society to access information and interact with computer systems. Thus, traditional information systems are undergoing a process of adaptation to this new computing context. It is important to note, however, that the characteristics of this new context are different: there are new features and, consequently, new possibilities, as well as restrictions that did not exist before. The systems developed for this environment therefore have different requirements and characteristics than traditional information systems, and there is a need to reassess the current knowledge about the processes of planning and building systems in this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes, in general, are based on characteristics of the systems, trying to quantify the complexity of implementing them. Hence, the main objective of this paper is to present an effort estimation model for mobile applications, as well as to discuss the applicability of traditional estimation models for the purpose of developing systems in the context of mobile computing.
Testing and verification of software model through formal semantics a systema...eSAT Publishing House
This document summarizes research on automated testing and verification of software models through formal semantics. It discusses various approaches for transforming UML diagrams into other representations to enable verification. The most widely used technique is model-based testing using use case, class, and state diagrams. Formalizing UML diagrams with other formal languages allows verification of properties. Automating test case generation from UML models can improve efficiency and effectiveness of software testing.
This document summarizes several software development process models. It begins by defining what a software process is - a framework for the activities required to build software. It then discusses evolutionary models like prototyping and the spiral model, which use iterative development and user feedback. Concurrent modeling is presented as allowing activities to occur simultaneously. The Unified Process is described as use case driven and iterative. Other models discussed include component-based development, formal methods, and aspect-oriented development. Personal and team software processes are also summarized, focusing on planning, metrics, and continuous improvement.
A survey of predicting software reliability using machine learning methodsIAESIJAI
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medical sector and industrial control, so it is imperative that software always works flawlessly. The information technology sector has witnessed a rapid expansion in recent years, and software companies can no longer rely only on cost advantages to stay competitive in the market: programmers must provide reliable, high-quality software. To support estimating and predicting software reliability using machine learning and deep learning, a brief overview is presented of the important scientific contributions on software reliability and of the highly efficient methods and techniques that researchers have found for predicting it.
Extending OpenUP for Elicitation of Non-Functional
Autonomous Software Requirements
Viviane Priscila S. Santos, Anselmo Leonardo O. Nhane, Humberto Torres Marques-Neto
Department of Computer Science
Pontifical Catholic University of Minas Gerais (PUC MINAS)
Belo Horizonte – Brazil – 30.535-901
vpssantos@sga.pucminas.br, anselmo.nhane@sga.pucminas.br, humberto@pucminas.br
Abstract— In the last decade, several contributions to the development of autonomic computing were presented in the literature. However, there is still a need for further research to consolidate this computing paradigm. This paper proposes an extension of the OpenUP software process to promote the development of applications emphasizing the analysis of non-functional requirements (NFRs) related to the autonomic capabilities: self-configuration, self-optimization, self-healing, and self-protection. The changes were made in the software requirements discipline of OpenUP, specifically by proposing two new artifacts: NFR Description and Misuse Case. These changes were important to discover additional NFRs. In order to illustrate the proposed extension, we present a case study of the Brazilian Emergency System.

Keywords: OpenUP; Autonomic computing; Autonomous system; Non-functional Requirements; Software Process

I. INTRODUCTION

In recent decades, computing power and the number of devices used for processing information have grown exponentially [24]. Computing plays a very important role in the information society. Computing technologies are developed and incorporated quickly into existing computer systems, which increases their complexity [5]. Such complexity hampers the human intervention required for software maintenance. Autonomic computing (AC) is expected to tackle part of these needs, because autonomic systems may be able to evaluate themselves and take appropriate action, thus avoiding the need for human intervention in their operation.

In the 2000s, software processes focused on agility and integration [14], emphasizing collaborative methods, infrastructure environments, value-based methods, architectures, and business systems built by users. Different software processes, such as RUP, XP, and OpenUP, were used for software development and maintenance. However, these processes do not give the needed attention to non-functional software requirements (NFRs) because they normally focus on the handling of functional requirements, which can be easily noticed and tested by the final user [4]. Nevertheless, NFRs are very important for some kinds of software, and the lack of their elicitation may cause problems that would compromise the software quality [3, 6, 7]. Taking this into consideration, this paper proposes an extension of the OpenUP process for developing autonomous software, focusing on the elicitation of NFRs. The extension was done by including new artifacts for the independent elicitation of NFRs in this process, because NFRs are usually described together with functional requirements.

We present a case study of the Brazilian Emergency System to illustrate the usage of the proposed extension. The results show that the new artifacts can help software engineers in the task of eliciting NFRs.

This paper is organized in five sections. Section 2 presents the related work and describes the concepts used in this work. In Section 3, we show our proposal for extending OpenUP for autonomic computing. Section 4 presents an illustrative case study and, finally, Section 5 offers the conclusions.

II. RELATED WORK

A. Autonomic Computing

The concept of Autonomic Computing (AC) was introduced by Paul Horn [5]. It is inspired by the human nervous system and seeks new ways of implementing software able to answer some of the challenges posed by the increasing complexity of current IT systems. AC refers to a system that incorporates mechanisms for self-management based on the following qualitative characteristics: self-configuration, self-optimization, self-healing, and self-protection [10, 12, 13].

MAPE-K (monitor, analyze, plan, execute, knowledge) is a reference model for the autonomic control loop, generally used to provide architectural guidance for autonomic systems [11]. The control loop defines the elements that keep the system in accordance with the autonomic characteristics, which are linked to the NFRs. Designing such software requires a multi-disciplinary systems engineering group [16, 18], since the software process should consider all the autonomic characteristics that affect the software design. In this context, MAPE-K plays a crucial role in satisfying the self-management qualities.
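To make the MAPE-K reference model concrete, the following minimal Python sketch shows a few passes of the monitor, analyze, plan, and execute phases over a shared knowledge base. It is illustrative only: the managed element, the CPU-load metric, and the threshold policy are hypothetical stand-ins, not part of any cited framework.

# Minimal MAPE-K control loop sketch (all names and values are hypothetical).
knowledge = {"cpu_threshold": 0.8, "history": []}  # shared Knowledge base

def monitor(managed_element):
    # Monitor: collect a metric from the managed element.
    reading = managed_element["cpu_load"]
    knowledge["history"].append(reading)
    return reading

def analyze(reading):
    # Analyze: compare the reading against the knowledge base.
    return reading > knowledge["cpu_threshold"]

def plan(symptom_detected):
    # Plan: choose an adaptation action (self-optimization here).
    return "scale_out" if symptom_detected else None

def execute(action, managed_element):
    # Execute: apply the plan back onto the managed element.
    if action == "scale_out":
        managed_element["instances"] += 1
        managed_element["cpu_load"] /= 2  # assume the load is now shared

element = {"cpu_load": 0.95, "instances": 1}
for _ in range(3):  # three iterations of the control loop
    execute(plan(analyze(monitor(element))), element)
print(element)  # {'cpu_load': 0.475, 'instances': 2}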
B. Software Process

The first generic processes, such as the Waterfall and V models, did not always work for all domains. This motivated the customization of generic software processes to work well in a specific domain. For instance, a software process for embedded systems should focus on the various challenges of integrating software and hardware [23]. Moreover, in ubiquitous computing (where any computing device can dynamically build computational models of the environment in which it is inserted and configure its services depending on the needs), the software process should focus on, for example, the limitations imposed by the characteristics of the devices in the environment [27].
According to Dhaminda et al. [22], SOTA ("State Of The Affairs") is a general model for modeling adaptation requirements. This model enables a uniform and comprehensive way of modeling functional and non-functional software requirements. SOTA allows the verification of requirements, the identification of the knowledge needed for self-adaptation, and the identification of the most appropriate self-adaptation patterns [22]. SOTA integrates multidimensional context modeling and tries to identify the general needs of the system and the dynamic components for self-adaptation, focusing on the system architecture. The same happens in the SASSY (Self-Architecting Software System) proposal, a model-driven framework targeted at dynamic settings in which a system's requirements might change [25].

Other research [8] presents an architecture for self-adaptation and proposes a framework of quality metrics, as a methodology for software engineering based on IEEE Std 1061-1998, which makes the qualities of autonomic computing explicit. The paper [15] proposes a Multi-Agent Systems approach and highlights the importance of NFRs for autonomic computing.

As the AC properties are directly connected to NFRs, as shown in Figure 1, their inadequate treatment along generic software processes can be a problem. Generic software processes, like RUP, are focused on continuous iterations with stakeholders to achieve the functional requirements during the software process. The same finding was observed in software processes for embedded systems, which leads us to conclude that non-functional requirements have always been on the margins of software processes. However, the aspects conducive to achieving the AC requirements are strongly conditioned on the NFRs, which require special attention in the software process and in the modeling of autonomic systems, as documented in Figure 1. Current studies focus on models for AC architectures based on the standard MAPE-K control cycle, and those proposals affect the design and analysis of systems.

Figure 1. NFR in the Autonomic Systems Qualities

III. EXTENDING OPENUP FOR THE ELICITATION OF NON-FUNCTIONAL AUTONOMIC SOFTWARE REQUIREMENTS

As mentioned before, this paper proposes some extensions to the OpenUP process in order to tailor it to the development of autonomous systems. These extensions were made in the activities of the requirements discipline.

OpenUP is a software process with minimal content for developing software of high quality. Thus, it does not provide guidance on various topics that projects may have to handle, such as contractual situations, safety- or mission-critical applications, technology-specific guidance, etc. [26]. To meet requirements that are not covered in its content, OpenUP is extensible and can be used as a basis to which process content can be added or modified.

Moreover, OpenUP is an iterative and agile process. It consists of a few disciplines, comprising roles, artifacts, and tasks, which are divided into Requirements, Architecture, Development, Test, Project Management, and Configuration & Change Management [9, 26].

This work extends only the requirements discipline; the extension is reported in subsection A. There were no direct changes to the other disciplines but, taking into account the iterative phases of OpenUP, subsection B discusses how the impacts on them can be handled.

A. Adapting OpenUP for AC

Our proposal is to insert two new artifacts into the requirements discipline of OpenUP. The first one is the NFR Description, which contains a list of NFRs with their detailed descriptions, identifying ambiguities, inconsistencies, and conflicts among these requirements. NFRs defined by different actors can conflict with one another, and the resolution is recorded with them, as can be seen in Figure 2. The main goal of this new artifact is to make feasible a greater focus on NFR elicitation.

To identify ambiguities among the requirements, both NFRs and FRs must be stated consistently and objectively, avoiding double interpretation. Inconsistency is reflected in the development and typically results in a system where user satisfaction is questionable [2]. Ambiguity or inconsistency of NFRs can result in system crashes, such as the famous example of the London Ambulance System [1].
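Figure 2 defines the NFR Description graphically; the paper prescribes no machine-readable format for it. Purely as an illustration, one plausible record structure is sketched below, with all field names being our own assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NFREntry:
    # One row of a hypothetical NFR Description artifact (field names assumed).
    identifier: str    # e.g. "NFR-01"
    description: str   # the detailed, unambiguous statement of the NFR
    source_actor: str  # stakeholder or role that stated the requirement
    ac_property: str   # related autonomic capability, e.g. "self-healing"
    priority: int      # used later to resolve conflicts (1 = highest)
    ambiguities: List[str] = field(default_factory=list)
    conflicts_with: List[str] = field(default_factory=list)  # ids of conflicting NFRs

availability = NFREntry(
    identifier="NFR-01",
    description="The system must remain available during network faults.",
    source_actor="SAMU operator",
    ac_property="self-healing",
    priority=1,
)
print(availability.identifier, availability.ac_property)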
Figure 2. NFR Description Structure

The NFRs obtained from the different actors of the system may conflict with each other. It is necessary that all the functional and non-functional requirements be satisfied, yet it is very difficult to tell when NFRs are satisfied. Hence, the term partial NFR satisfaction is used when there is a solution considered good, even if it is not ideal [17]. Therefore each conflict must be made explicit, including all its dependencies. The decisions to resolve a conflict must be based on the classification of priorities [19] applied to the NFRs.

Figure 3. Steps of requirements elicitation in OpenUP, modified for the elicitation of NFRs

The Misuse Case was also added to the OpenUP software process. It is based on scenarios of negative circumstances, used for situations where an error occurs in the system. Misuse Cases can improve the elicitation process because they may lead to yet unidentified NFRs that are hidden [2].
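Misuse Cases are likewise specified as diagrams, not code. As a rough illustration only, a single negative scenario of the kind described above could be recorded as follows; the scenario and field names are hypothetical.

# Hypothetical, minimal record of a misuse case (illustrative only).
misuse_case = {
    "name": "Traffic blockage on traced route",
    "negative_scenario": "An accident blocks the route; the ambulance is delayed.",
    "threatened_use_case": "Dispatch mobile service unit",
    "uncovered_nfrs": [
        "Real-time integration with the traffic control system",
        "Route re-planning when external conditions change",
    ],
}

# Each misuse case feeds newly discovered NFRs back into the NFR Description.
for nfr in misuse_case["uncovered_nfrs"]:
    print("hidden NFR:", nfr)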
Some NFRs are only identified among the functional requirements, i.e., they are hidden among the FRs beyond those specified by the stakeholders. It is for this purpose that the Misuse Case artifact is proposed.

Figure 3 modifies the basic OpenUP [28] and shows the tasks of requirements elicitation in the extended OpenUP, which includes the two new artifacts, indicated by circles: Misuse Case and NFR Description. It shows the inputs and outputs of requirements elicitation used to identify and refine requirements. Input 1 contains the artifacts Glossary and Vision; these come from the tasks of specifying the system from the viewpoints of the stakeholder and the analyst. This input produces other artifacts (output 1), of which only those proposed in this paper are explained here: besides the pre-existing artifacts of OpenUP, output 1 produces the NFR Description and the Misuse Cases, the new artifacts in which all the specifications described above are followed. This is the first iteration.

The second input, at the level of NFRs, uses the Misuse Cases to modify the NFR Description: the new NFRs discovered through faults in the FRs of the system are inserted into the NFR Description. At this point, all NFRs are defined. Procedures are then applied to find out whether there are ambiguities and conflicts between them, after which the final NFRs are obtained.
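A minimal sketch of this final checking step is given below, assuming each NFR carries a numeric priority as in the record sketch above; the lower-number-wins rule is our own simplification of the priority classification of [19].

# Resolving a conflict between two NFRs by priority (illustrative simplification).
def resolve_conflict(nfr_a, nfr_b):
    # Return the NFR whose priority dominates; 1 is the highest priority.
    return nfr_a if nfr_a["priority"] <= nfr_b["priority"] else nfr_b

climate = {"id": "NFR-07", "name": "Divert route on drastic climate change", "priority": 1}
traffic = {"id": "NFR-08", "name": "Divert route on traffic blockage", "priority": 2}

print(resolve_conflict(climate, traffic)["id"])  # NFR-07: climate wins, as in the case study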
B. Impacts of Changes in OpenUP

The Identify and Refine Requirements tasks are present in three of the iteration patterns in which the requirements appear: the Inception phase iteration, the Elaboration phase iteration, and the Construction phase iteration, as well as in the Transition phase iteration. Thus the changes proposed here can impact these iterative phases and add new goals to them.

In the Inception phase iteration, the proposal affects the iteration pattern (the identification and refinement of requirements) in order to determine a possible autonomic solution: assessing the key features of the autonomic system and realizing all the limitations associated with the non-functional and functional requirements. This impacts the product by minimizing maintenance and future corrections due to these requirements.

In the Elaboration phase iteration, the extended OpenUP process impacts the goals in two fundamental aspects: the definition of the architecture (which must allow modification as the context changes, requiring an additional study of the restrictions and changes that may occur) and the detailing of the non-functional requirements which, according to our proposal, will highlight new non-functional requirements. Another relevant aspect is identifying, according to the needs of the project, professionals from the different areas that are covered by the characteristics of the system under study.

In the Construction phase iteration, OpenUP also recommends the refinement of the requirements that will impact the development, including how to identify the policies that define the behavior of the autonomic system.
In general, the process extension increases the attention given to non-functional requirements, instead of relying only on use cases. The process focuses on the qualities and NFRs based on metrics for autonomic computing. OpenUP then provides guidance on the specification of the autonomic software qualities (self-configuration, self-healing, self-optimization, etc.) that best suit the project under study.

C. Non-Functional Requirements and MAPE-K

It is very important to define correctly the NFRs of the autonomic system, as well as the conflicts, if any, between them, since they can directly influence the system's management policies. Goals are often expressed using event-condition-action (ECA) policies, goal policies, or utility-function policies [21]. As the policies define how and when MAPE-K should act in relation to the Managed Element, the Plan, and the Knowledge, they influence the actions taken by the autonomic system. Furthermore, through the NFRs it is possible to define which properties (self-healing, self-protection, self-configuration, and self-optimization) are added to the AC system, as can be seen in Figure 1.
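As a hedged illustration of the policy styles cited from [21], an ECA policy for an autonomic manager might look like the sketch below; the event names, state fields, and actions are invented for this example.

# Sketch of an event-condition-action (ECA) policy (names are hypothetical).
def eca_policy(event, system_state):
    # Event: something observed by the MAPE-K monitor.
    # Condition: a predicate over the current state.
    # Action: the adaptation to execute.
    if event == "service_fault" and system_state["redundancy"] > 0:
        return "activate_standby_service"   # self-healing behavior
    if event == "load_spike" and system_state["cpu_load"] > 0.8:
        return "add_processing_instance"    # self-optimization behavior
    return "no_action"

print(eca_policy("service_fault", {"redundancy": 2, "cpu_load": 0.5}))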
An example would be an autonomic system that detects two sub-problems at the same time, one of configuration and the other of healing: the priorities of the NFRs determine which decision has the highest priority and should be executed first, if it is not possible to handle both at the same time.

Another point is that, with this information, the analyst can select which MAPE-K implementation is appropriate for the system. For example, a system that has Performance and Maintainability as NFRs can be implemented using the ABLE toolkit, which provides performance monitoring, health monitoring, prediction, and service-level management for contracts [20].
IV. CASE STUDY

A. Brazilian Emergency System

As a case study to illustrate the changes proposed here, we applied the two new artifacts to the set of requirements of the Brazilian Emergency System (SBE). This system aims to integrate the dispatching of SAMU (emergency phone number 192), the emergency medical care unit in Brazil, and the Fire Department (emergency phone number 193), in order to solve problems caused by the disjunction of these care systems. These two units are disjoint emergency entities, and communication problems between them can sometimes be observed. In an emergency situation, a SAMU ambulance and a firefighters' vehicle may be called unnecessarily, causing disputes between service units and the allocation of professionals and vehicles that could be attending other cases.

This paper proposes an autonomous system that selects the best care unit for each type of call. Its operation can be seen in Figure 4, where the numbering shows the sequence of events; most events can occur simultaneously (e.g., boxes 6 and 10).

B. Applying the Extended OpenUP to the SBE

For the requirements elicitation of this system, the artifacts proposed here as an extension of OpenUP were applied so as to define the NFRs correctly. Firstly, we took the system description and produced the traditional artifacts of OpenUP, such as a list of requirements and UML diagrams, in which the FRs and NFRs were specified as usual. After this, the NFR Description was produced, in which two ambiguous NFRs were identified at first, i.e., NFRs that could be misinterpreted by roles such as Architects, Developers, and Testers.

The next step was the building of the Misuse Case, through which it was possible to discover other NFRs hidden amid the FRs. These NFRs could not be identified by most of the processes commonly used for modeling, which focus on FRs [2], and thus the analyst cannot perceive them. Generally, not all NFRs are specified by stakeholders; uncovering them requires the observation of the analyst, which does not always happen due to the short time frame for completing the system. This can result in software that does not completely solve the users' problem [28], leading to the need for corrections when the process is already at an advanced stage or the system is in use. This is very bad because NFR corrections are much more expensive and difficult to make [2].

Some of the NFRs found concern integrations with external systems already in use. This is shown in Figure 5, where the system should decrease the time of the traffic lights on the traced route, but for that it must be integrated with the traffic-light control system. Moreover, there is also an integration with the traffic system that was forgotten at first, whose need was perceived when simulating traffic problems in the functional part of the Misuse Case, where it is necessary to divert the route in case of a traffic blockage. Likewise, drastic changes in the climate should also divert the route of the mobile service unit (MSU), so an integration with the weather system is necessary. These integrations with both systems, traffic and weather, were also required in the case of Figure 6, where the system checks the necessity of sending a helicopter, if one is available. Another NFR noted was the need for real-time integration.

In the second iteration, the NFR Description was redone and all the NFRs uncovered from the Misuse Case were added. In this new list, ambiguities and conflicts were sought. The only conflict found concerned the priority between the external systems for route diversion, i.e., climate versus traffic. In this case the climate was chosen as the priority, since on certain roads rain may cause flooding. Moreover, there were some NFRs that could be misinterpreted, and their descriptions were improved to be more objective.
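The dispatch behavior sketched in Figures 4 through 6 could be approximated by a decision function such as the one below; the call attributes and conditions are invented for illustration and are not taken from the SBE specification.

# Hypothetical sketch of the SBE dispatch decision (conditions invented).
def choose_unit(call):
    # Select the best care unit for a call, as the autonomous SBE proposes.
    if call["severity"] == "critical" and call["helicopter_available"]:
        return "helicopter"
    if call["type"] == "fire":
        return "fire_department"  # emergency number 193
    return "samu_ambulance"       # emergency number 192

call = {"type": "medical", "severity": "critical", "helicopter_available": False}
print(choose_unit(call))  # samu_ambulance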
Figure 4. Operation of the Emergency System

Figure 5. Misuse case part applied to the SBE

Figure 6. Misuse case part applied to the SBE

C. Impact of the AC Properties

These changes were important because, aside from discovering other NFRs that could be disregarded by not receiving adequate attention at the beginning of the project [2], these requirements were also attached to the properties of the autonomic system. This can be seen in Figure 1, largely self-healing and self-configuration, due to the nature of the system. This connection between the AC properties and the NFRs is exemplified below:

• Self-configuration: the SBE has to change settings and route traffic lights according to the external conditions it monitors, among others;
• Self-optimization: the SBE should increase or reduce its performance according to the system's needs;
• Self-healing: the SBE must be able to function even with faults, such as network faults, and repair them, because the system must always be available to users; and
• Self-protection: the system must recognize and replace malicious services with other suitable ones found through service discovery.
Thus, it is clear that when the NFRs receive due attention they positively impact the outcome of the process and may reduce costs and delays [2]. For this reason, the extension proposed here attempts to identify the NFRs at the beginning of the software process, as proposed in [3] and [4].

V. CONCLUSION

There are clearly problems in the elicitation of NFRs in conventional software processes, which may generate system failures. This includes autonomous systems since, as realized during this work, AC is intertwined with NFRs. Bearing this in mind, the work done here provides a new way of eliciting NFRs supported by Misuse Cases. This proposal appears to be adequate, since it focuses on the NFRs from the beginning of the software process. Thus, we conclude that the extension made to OpenUP can yield good results, facilitating the identification and refinement of NFRs through the NFR Description and Misuse Case artifacts, which are present in the process from its beginning. Changes in the test area are still desired, as well as added documentation for choosing the best application through the NFRs specified for autonomous system development. Furthermore, we intend to review the impact of these modifications on all the other process disciplines.

REFERENCES

[1] A. Finkelstein and J. Dowell, "A Comedy of Errors: The London Ambulance Service Case Study," IEEE Computer Society Press, pp. 2-5, 1996.
[2] S. Ullah, M. Iqbal, and A. M. Khan, "A survey on issues in non-functional requirements elicitation," 2011, pp. 333-340, doi: 10.1109/ICCNIT.2011.6020890.
[3] L. M. Cysneiros and J. C. S. P. Leite, "Non-functional requirements: from elicitation to modelling languages," Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), pp. 699-700, 2002.
[4] L. Chung, B. A. Nixon, E. Yu, and J. Mylopoulos, "Non-Functional Requirements in Software Engineering," Boston: Kluwer Academic Publishers, 2000.
[5] P. Horn, "Autonomic computing: IBM's perspective on the state of information technology," 2001. URL: http://researchweb.watson.ibm.com/autonomic.
[6] M. S. Emami, N. Binti Ithnin, and O. Ibrahim, "Software Process Engineering: Strengths, Weaknesses, Opportunities and Threats," Networked Computing (INC), 6th International Conference on, IEEE, 2010.
[7] L. M. Cysneiros and J. C. S. P. Leite, "Using UML to reflect non-functional requirements," Proceedings of the 2001 Conference of the Centre for Advanced Studies on Collaborative Research (CASCON '01), 2001.
[8] P. Lin, A. Mac, and J. Leaney, "Defining Autonomic Computing: A Software Engineering Perspective," Proceedings of the 2005 Australian Software Engineering Conference (ASWEC'05), Mar. 2005, Australia.
[9] OpenUP: Unified Project. Electronic document. URL: http://epf.eclipse.org/wikis/openup/
[10] J. A. McCann and M. C. Huebscher, "Evaluation Issues in Autonomic Computing," Grid and Cooperative Computing - GCC 2004 Workshops, Springer Berlin/Heidelberg, vol. 3252, 2004, pp. 597-608.
[11] M. C. Huebscher and J. A. McCann, "A survey of autonomic computing—degrees, models, and applications," ACM Comput. Surv., vol. 40, Aug. 2008, pp. 7:1-7:28, doi: 10.1145/1380584.1380585.
[12] L. D. Paulson, "Computer System, Heal Thyself," Computer, vol. 35, Aug. 2002, pp. 20-22, doi: 10.1109/MC.2002.1023783.
[13] A. J. Ramirez, D. B. Knoester, B. H. Cheng, and P. K. McKinley, "Plato: a genetic algorithm approach to run-time reconfiguration in autonomic computing systems," Cluster Computing, vol. 14, Sep. 2011, pp. 229-244, doi: 10.1007/s10586-010-0122-y.
[14] B. Boehm et al., "A View of 20th and 21st Century Software Engineering," University of Southern California, University Park Campus, Los Angeles, 2006.
[15] H. Kuang and O. Ormandjieva, "Self-Monitoring of Non-Functional Requirements in Reactive Autonomic Systems Framework: A Multi-Agent Systems Approach," IEEE, Concordia University, Montreal, Canada, 2008.
[16] E. Shahriar et al., "Software Process Engineering: Strengths, Weaknesses, Opportunities and Threats," Malaysia, 2010.
[17] L. M. Cysneiros, "Evaluating the Effectiveness of Using Catalogues to Elicit Non-Functional Requirements," Proceedings of the 10th Workshop in Requirements Engineering, 2007.
[18] J. Cleland-Huang, "Goal-Centric Traceability for Managing Non-Functional Requirements," ICSE, USA, 2005.
[19] D. Kerkow and B. Paech, "Non-Functional Requirements Engineering - Quality is Essential," 10th Anniversary International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'04), 2004.
[20] J. P. Bigus, D. A. Schlosnagle, J. R. Pilgrim, W. N. Mills III, and Y. Diao, "ABLE: A toolkit for building multiagent autonomic systems," IBM Systems Journal, vol. 41, no. 3, pp. 350-371, 2002, doi: 10.1147/sj.413.0350.
[21] J. O. Kephart and W. E. Walsh, "An artificial intelligence perspective on autonomic computing policies," Proceedings of the 5th IEEE International Workshop on Policies for Distributed Systems and Networks, pp. 3-12, 2004.
[22] D. B. Abeywickrama et al., "SOTA: Towards a General Model for Self-Adaptive Systems," IEEE 21st International WETICE, 2012.
[23] D. E. Damkeres, "Aplicação da abordagem GQM para a definição de um processo de engenharia de requisitos de software embarcado" (Application of the GQM approach to the definition of a requirements engineering process for embedded software), Master's thesis, Universidade Católica de Brasília, Brazil, 2008.
[24] World Statistics, http://www.factfish.com/statistic/. Accessed: 07.11.12.
[25] D. Menascé et al., "SASSY (Self-Architecting Software System)," IEEE Software, 2011.
[26] Introduction to OpenUP (Open Unified Process), Eclipse Project. URL: http://www.eclipse.org/epf/general/OpenUP.pdf.
[27] C. Cirilo, "Computação Ubíqua: definição, princípios e tecnologias" (Ubiquitous Computing: definition, principles and technologies), scientific article, Universidade Federal de São Carlos, Brazil.
[28] A. Matoussi and R. Laleau, "A Survey of Non-Functional Requirements in Software Development Process," TR-LACL-2008-7, 2008.
[29] OpenUP/Basic "Managed Requirements," 2012. URL: http://epf.eclipse.org/wikis/openuppt/index.htm