A number of routing protocols have been proposed for data transmission in WSNs. Initially, single-path routing schemes with several variations were proposed, but single-path routing still had drawbacks: it could not provide reliability and high throughput, and the security level was not considered during routing. Recently, to remove these drawbacks, a new routing technique called multipath routing has been proposed. In this paper we discuss different multipath routing protocols and their variants. Multipath routing was initially proposed to guarantee delivery of packets to the sink in case of link or node failure; other protocols have been proposed for reliability, energy saving, security, and high throughput. Some multipath routing protocols also address load balancing and security during packet transmission.
Requirements Analysis and Design in the Context of Various Software Developme... (zillesubhan)
This document provides a comparative analysis of requirements analysis and design phases between traditional and agile software development approaches. It discusses the importance of requirements analysis and outlines the key stages in a traditional software development lifecycle, including requirements analysis, system design, coding, testing, and maintenance. The document also examines requirements engineering processes and sources of requirements. It describes the goals and importance of software design as a key phase for implementing requirements and allowing flexibility for changes.
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust... (IOSR Journals)
This document describes a study that uses fuzzy clustering and software metrics to predict faults in large industrial software systems. The study uses fuzzy c-means clustering to group software components into faulty and fault-free clusters based on various software metrics. The study applies this method to the open-source JEdit software project, calculating metrics for 274 classes and identifying faults using repository data. The results show 88.49% accuracy in predicting faulty classes, demonstrating that fuzzy clustering can be an effective technique for fault prediction in large software systems.
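The fuzzy c-means step this study relies on can be sketched in plain NumPy. This is an illustrative implementation, not the paper's code, and the two-column "metric vectors" below are invented for demonstration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Cluster rows of X into c fuzzy clusters with fuzzifier m."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (dist ** (2.0 / (m - 1)))    # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated groups of hypothetical class metrics (e.g. size, coupling):
X = np.array([[10, 1], [12, 2], [11, 1], [90, 9], [95, 8], [92, 9]], float)
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # hard labels: "fault-free" vs "faulty" cluster
```

The membership matrix `U`, rather than the hard labels, is what makes the clustering "fuzzy": a borderline class can belong partially to both the faulty and fault-free clusters.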
Model-Based Performance Prediction in Software Development: A Survey (Mr. Chanuwan)
This document provides a survey of model-based approaches for predicting software performance early in the development lifecycle. It reviews approaches that use queueing networks, stochastic Petri nets, and other models. The approaches are evaluated based on how integrated the software and performance models are, how early performance analysis can be done in the lifecycle, and the level of automation support. The survey finds that while progress has been made, fully integrated solutions spanning the entire lifecycle are still needed. Promising future work includes approaches with more semantic integration of models and higher degrees of automation.
SECURING SOFTWARE DEVELOPMENT STAGES USING ASPECT-ORIENTATION CONCEPTS (ijseajournal)
The document summarizes research on securing software development stages using aspect-orientation concepts. It proposes a model called the Aspect-Oriented Software Security Development Life Cycle (AOSSDLC) which incorporates security activities into each stage of the software development life cycle. The model aims to efficiently integrate security as a cross-cutting concern using aspect orientation. It is concluded that aspect orientation allows security features to be installed without changing the existing software structure, providing benefits over other approaches.
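The cross-cutting idea can be illustrated in Python with a decorator standing in for an aspect. This is an analogy, not the AOSSDLC model itself; the role name and functions are hypothetical. The access check is woven around existing functions without editing their bodies:

```python
import functools

def require_role(role):
    """Cross-cutting security 'aspect': woven in via a decorator,
    leaving the existing business logic untouched (illustrative sketch)."""
    def aspect(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", []):
                raise PermissionError(f"{fn.__name__} requires role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return aspect

@require_role("admin")               # security installed without changing the function
def delete_account(user, account_id):
    return f"deleted {account_id}"

admin = {"name": "a", "roles": ["admin"]}
guest = {"name": "g", "roles": []}
```

The point mirrors the paper's conclusion: the security concern lives in one place (`require_role`) and is applied declaratively, rather than being scattered through the existing structure.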
The document discusses various prescriptive software process models including the waterfall model, incremental process model, evolutionary process model, and prototyping. The waterfall model proposes a sequential approach from requirements to deployment. The incremental model produces deliverable software increments. Evolutionary models iteratively produce more complete versions. Prototyping builds prototypes to help define requirements through evaluation. Issues with each approach are also outlined.
This document provides an overview of software engineering and a generic process model. It discusses that software should be engineered to meet 21st century challenges. A software engineering process involves communication, planning, modeling, construction, and deployment activities applied iteratively. It also involves umbrella activities like tracking, reviews, and configuration management. Finally, it presents a schematic of a generic process model showing the relationship between framework activities, actions, and tasks.
This document provides an overview of software architectures and architectural structures. It discusses different types of architectural structures, including module structures, component-and-connector structures, and allocation structures. Module structures focus on modules and their relationships, component-and-connector structures examine runtime components and connectors, and allocation structures show how software elements map to environments. The document then examines specific architectural structures like modules, layers, classes, processes, repositories, and deployment. It emphasizes that an architect should focus on a few key structures like logical, process, development, and physical views to validate that the architecture meets requirements.
Quality Attributes and Software Architectures Emerging Through Agile Developm... (Waqas Tariq)
Software architectures play an important role as the intermediate stage through which system requirements are translated into a full-scale working system. What a system does and does not do, and its different concerns and requirements, can be negotiated and expressed clearly through the software architecture. Software architectures exist to provide and enhance quality attributes, while it is the quality attributes and their required level of achievement that can yield numerous candidate architectures for a single software system.
We believe that the agile approach to architecting is problematic because of agilists' beliefs about how to architect a software system, and about how critical quality attributes are to achieving a stable yet flexible architecture. Through this research we clarify these issues and discuss the consequences of agile architecting for the achieved level of quality attributes. We pursue the question of how to architect a system so as to achieve the required level of quality attributes while adopting an agile process.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I... (ijseajournal)
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practice, was selected as a test suite. Six versions of this software were profiled and data collected dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, (3) COCOMO effort, and (4) COCOMO duration, were used to analyze software degradation and maturity level. The obtained results were then used as input to time-series analysis in order to predict the effort and duration that may be needed for the development of future versions. The novel idea is that historical evolution data is used to project, predict, and forecast resource requirements for future developments. The technique presented in this paper will empower software development decision makers with a viable tool for planning and decision making.
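Two of the metrics named above are standard and easy to reproduce. Below is a hedged sketch of basic COCOMO (organic-mode coefficients) and the software maturity index; the 32 KLOC size and module counts are illustrative, not values from the study:

```python
def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode: effort in person-months, duration in months."""
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

def maturity_index(total_modules, added, changed, deleted):
    """Software maturity index: SMI = (M_T - (F_a + F_c + F_d)) / M_T."""
    return (total_modules - (added + changed + deleted)) / total_modules

effort, duration = cocomo_basic(32.0)    # hypothetical 32 KLOC release
smi = maturity_index(274, 12, 30, 5)     # hypothetical module-change counts
```

An SMI approaching 1.0 indicates a stabilizing product; tracking effort, duration, and SMI across the six profiled versions is what produces the time series the paper feeds into its forecast.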
This document discusses software reuse and application frameworks. It covers the benefits of software reuse like accelerated development and increased dependability. Application frameworks provide a reusable architecture for related applications and are implemented by adding components and instantiating abstract classes. Web application frameworks in particular use the model-view-controller pattern to support dynamic websites as a front-end for web applications.
2016 state of industrial internet application development.
Study Highlights
This study, carried out in collaboration with GE Digital, surveyed the existing industrial developer landscape to better understand who industrial developers are, how they allocate their time and resources when developing applications, the challenges they face in the development process, and the technological opportunities available to them. The study, a survey of over 1,200 industrial developers, concludes that the industrial developer community needs focused tools and that these developers would benefit significantly from using PaaS and infrastructures such as Predix. Relevant findings include the following:
This document provides an outline and details of the key topics covered in Unit 1 of a Software Engineering course, including defining framework activities, identifying task sets, and process patterns. The five framework activities are communication, planning, modeling, construction, and deployment. Process patterns describe process-related problems, the environment they occur in, and proven solutions. The document also discusses approaches to software process assessment and improvement like SCAMPI, CBA IPI, SPICE, and ISO 9001:2000.
Multiagent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of multiagent-based software engineering methodology.
The document discusses various types of software testing:
- Development testing includes unit, component, and system testing to discover defects.
- Release testing is done by a separate team to validate the software meets requirements before release.
- User testing involves potential users testing the system in their own environment.
The goals of testing are validation, ensuring that requirements are met, and defect testing, discovering faults. Automated unit testing and test-driven development help improve test coverage and regression testing.
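The automated unit testing and test-driven development mentioned above can be illustrated with Python's built-in `unittest` module. This is a generic, self-contained example (the leap-year function is not from the document); in TDD, the test cases would be written before the function exists:

```python
import unittest

def leap_year(year):
    """Gregorian rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # In test-driven development these cases are written first and fail
    # until leap_year is implemented; afterwards they serve as a regression suite.
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))
    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))
    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(leap_year(2000))

# Run the suite programmatically (avoids unittest.main(), which calls sys.exit):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is automated, it can be re-run after every change, which is exactly what makes regression testing cheap.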
This document proposes techniques for detecting and correcting design defects in object-oriented software. It discusses using design patterns as a reference to detect defects and class slicing to refactor code to meet design specifications. The detection process involves specifying quality goals, static program analysis, metric computation, and comparing the software design to an object-oriented design knowledge base containing design patterns and principles. Identified defects are then suggested for correction, which involves class slicing to modify the software design while preserving behavior. The goal is to develop tools that can automatically detect and correct design defects to improve software quality and reduce costs.
Software Engineering Important Short Question for Exams (MuhammadTalha436)
The document discusses various topics related to software engineering including:
1. The software development life cycle (SDLC) and its phases like requirements, design, implementation, testing, etc.
2. The waterfall model and its phases from modeling to maintenance.
3. The purpose of feasibility studies, data flow diagrams, and entity relationship diagrams.
4. Different types of testing done during the testing phase like unit, integration, system, black box and white box testing.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document provides a selective survey of software reliability models. It discusses both static models used in early development stages and dynamic models used later. For static models, it describes a phase-based model and predictive development life cycle model. For dynamic models, it outlines reliability growth models, including binomial, Poisson, and other classes. It also presents a case study of incorporating code changes as a covariate into reliability modeling during testing of a large telecommunications system. The document concludes by advocating for wider use of statistical software reliability models to improve development and testing processes.
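A representative member of the Poisson class of reliability growth models mentioned above is the Goel-Okumoto NHPP model, in which the expected number of cumulative failures by time t is mu(t) = a(1 - e^(-bt)). The sketch below uses assumed parameter values, not data from the survey's case study:

```python
import math

def goel_okumoto_mu(t, a, b):
    """Expected cumulative failures by time t under Goel-Okumoto NHPP:
    mu(t) = a * (1 - e^(-b*t)), where a is the total expected number of
    faults and b is the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.05                              # assumed, for illustration
found_by_week_10 = goel_okumoto_mu(10, a, b)    # roughly 47 expected failures
```

In practice a and b are estimated from observed failure times (e.g. by maximum likelihood), and mu(t) then drives decisions such as when testing has found "enough" of the expected faults to release.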
This document proposes developing an extended maintainability estimation model for object-oriented software designs that incorporates reliability and portability metrics. It begins by introducing maintainability and discussing how estimating maintainability during design can help reduce maintenance costs. It then reviews related work on maintainability models and metrics. The proposed work section outlines developing a model that calculates maintainability based on reliability and portability factors. It defines the key aspects of reliability and portability and describes a methodology for inheriting these factors into an existing maintainability model called MOOD. The methodology would use a MATLAB GUI to demonstrate how replacing buggy components with reliable, portable ones could lower maintenance costs.
An Empirical Study of SQA Function Effectiveness in CMMI Certified Companies ... (zillesubhan)
The most vital component of any software development process is quality, as it ensures the reliability and effectiveness of new software. Software Quality Assurance (SQA) techniques, as well as a standardized qualitative framework known as Capability Maturity Model Integration (CMMI), are used to ensure this quality. The purposes of both practices are the same, as both work toward the quality of the end product. Despite this, CMMI-certified organizations that have an SQA function still face many issues, which lower the quality of their products. Standards usually emphasize documentation, whereas SQA treats testing as the chief element and uses documentation only for authentication and appraisals. The relationship of the SQA function with CMMI has not received much attention in the literature. This paper is centered on an investigation conducted through data collection from diverse CMMI-certified software development firms to examine the practice of the SQA function.
The document discusses assessing software complexity and security metrics from UML class diagrams for software reengineering. It proposes developing a Software Reverse Engineering Tool (SRET) that can automatically calculate metrics like coupling, cohesion, and security metrics from a UML class diagram generated from source code. This would help analysts and developers evaluate software metrics more quickly and efficiently during reengineering compared to manual methods. The tool would extract metrics based on rules applied to the class diagram to measure things like data access, operation access, and interactions between methods and attributes.
This document proposes a 3 layered filtering approach to help developers and managers efficiently implement changes to agile software projects based on new requirements. The first layer classifies requirement changes. The second layer identifies which architecture layers will be affected. The third layer selects the appropriate agile methodology based on the first two layers. Each layer iterates as new requirements emerge, with layers tightly related to each other. This provides a way to abstract and prioritize different issues related to new requirements, allowing changes to be made with less time and money spent.
MVC Architecture from Maintenance Quality Attributes Perspective (CSCJournals)
This paper provides an explanatory study of MVC (Model-View-Controller) architecture from the perspective of maintenance. It aims to answer a knowledge question about how MVC architecture supports the maintainability quality attributes; this knowledge boosts the potential for utilizing the maintainability of MVC from several sides. To fulfill this purpose, we investigate the main mechanism of MVC, focusing on maintainability quality attributes. Accordingly, we form and discuss the MMERFT maintainability set, which consists of Modifiability, Modularity, Extensibility, Reusability, Flexibility, and Testability. Besides investigating the mechanism of MVC with regard to the MMERFT quality attributes, we explain how MVC supports maintainability by examining measures and approaches such as code complexity via a cyclomatic approach, the re-engineering process, use of components, time needed to detect bugs, number of lines of code, parallel maintenance, automation, massive assignment, and others. This paper is therefore dedicated to providing a concrete view of how MVC gets along with maintainability in general and with its several attributes in particular. This view helps to maximize the opportunity of taking advantage of MVC's maintainability features, which can encourage reconsidering maintenance decisions and the corresponding estimated cost. The study focuses on maintainability because software with high maintainability has the opportunity to evolve and, consequently, a longer life. Our study shows that MVC generally supports maintainability and its attributes, and that it is a recommended choice when maintenance is a priority.
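The separation the paper analyzes can be made concrete with a toy MVC sketch (hypothetical classes, not the paper's code). Modifiability and testability follow directly from the structure: the View can be replaced, and the Model tested, without touching the other parts:

```python
class Model:
    """Holds application state; notifies observers, knows nothing about the UI."""
    def __init__(self):
        self._items, self._observers = [], []
    def subscribe(self, callback):
        self._observers.append(callback)
    def add(self, item):
        self._items.append(item)
        for callback in self._observers:
            callback(list(self._items))

class View:
    """Renders whatever state it receives; contains no business logic."""
    def __init__(self):
        self.rendered = None
    def render(self, items):
        self.rendered = ", ".join(items)

class Controller:
    """Translates user input into model updates."""
    def __init__(self, model):
        self.model = model
    def handle_add(self, raw_text):
        self.model.add(raw_text.strip())

model, view = Model(), View()
model.subscribe(view.render)               # View observes the Model
Controller(model).handle_add("  fix login bug ")
```

Swapping `View.render` for an HTML renderer, or running the Model under a test harness with no View at all, requires no change to the other two classes, which is the maintainability argument in miniature.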
David vernon software_engineering_notes (mitthudwivedi)
This document provides an overview of the Software Engineering 2 course, including its aims, objectives, course contents, and recommended textbooks. The course aims to provide knowledge of techniques for estimating, designing, building, and ensuring quality in software projects. The objectives cover understanding software metrics, estimating project costs and schedules, quality assurance attributes and standards, and software analysis and design techniques. The course content includes topics like software metrics, estimation models, quality assurance, and object-oriented analysis and design. The document also summarizes several software engineering process models and risk management approaches.
This document summarizes a research paper on software architecture reconstruction methods. It discusses how software architectures can drift over time from the original design due to changes and deviations. Architecture reconstruction is used to recover the original architecture by applying reverse engineering techniques. The document reviews different bottom-up, top-down, and hybrid methods for architecture reconstruction, including tools like ARMIN and Rigi. It also defines key terms related to architecture reconstruction and the challenges of architectural aging, erosion, drift, and mismatch.
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to object-oriented design metrics as independent variables. The model is validated experimentally using data from class diagrams, showing it is highly significant.
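A changeability model of this shape can be sketched with ordinary least squares in NumPy. The metric rows and changeability scores below are invented for illustration and are not data from the paper:

```python
import numpy as np

# Hypothetical training data: each row is [coupling, inheritance depth,
# polymorphism count] for one class diagram; y is an observed changeability score.
X = np.array([[2, 1, 0], [5, 2, 3], [8, 3, 5], [3, 1, 1], [7, 2, 4]], float)
y = np.array([0.9, 0.6, 0.3, 0.8, 0.4])

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # multiple linear regression fit

def predict_changeability(metrics):
    """Changeability = b0 + b1*coupling + b2*inheritance + b3*polymorphism."""
    return float(np.r_[1.0, metrics] @ coef)
```

The regression coefficients encode the expected relationships (e.g. higher coupling lowering changeability); with real class-diagram data, their significance would be checked exactly as the paper describes.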
EReeRisk- EFFICIENT RISK IMPACT MEASUREMENT TOOL FOR REENGINEERING PROCESS OF... (ijpla)
EReeRisk (Efficient Reengineering Risk) is a risk-impact measurement tool which automatically identifies and measures the impact of the various risk components involved in the reengineering process of a legacy software system. EReeRisk takes data directly from users of the legacy system and establishes various risk measurement metrics according to the different risk measurement schemes of the ReeRisk framework [1]. Furthermore, EReeRisk presents a variety of statistical quantities that allow project management to decide at what point evolution of a legacy system through reengineering is successful. Its enhanced user interface greatly simplifies the risk assessment procedures and reduces the time they require. The tool can perform the following tasks to support decisions concerning the selection of reengineering as a system evolution strategy.
The document discusses various topics related to software engineering including:
1) How early days of software development have affected modern practices.
2) Definitions of software engineering from different sources.
3) The stages of software design including problem analysis, solution identification, and abstraction description.
4) Object-oriented design principles like information hiding, independent objects, and service-based communication.
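The information-hiding and service-based-communication principles in item 4 can be shown with a minimal class (a generic illustration, not from the document): internal state is reachable only through the object's service interface.

```python
class BankAccount:
    """Information hiding: the balance is internal state, modified only
    through the object's services (deposit/withdraw), never directly."""
    def __init__(self):
        self._balance = 0   # leading underscore: internal by Python convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance   # read-only view of the hidden state

acct = BankAccount()
acct.deposit(100)
acct.withdraw(30)
```

Because every mutation passes through a service, the invariants (non-negative balance, positive deposits) are enforced in one place, and the internal representation can change without affecting collaborating objects.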
DESQA a Software Quality Assurance Framework (IJERA Editor)
In current software development lifecycles in heterogeneous environments, the pitfall businesses face is that software defect tracking, measurement, and quality assurance do not start early enough in the development process. In fact, the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service Oriented Architecture (SOA). The aim of this study is therefore to develop a new framework for defect tracking, defect detection, and quality estimation in the early stages, particularly the design stage, of the SDLC. Part of the objectives of this work is to conceptualize, borrow, and customize from known frameworks, such as object-oriented programming, to build a solid framework using automated rule-based intelligent mechanisms to detect and classify defects in the software design of SOA. The implementation demonstrated how the framework can predict the quality level of the designed software. The results showed that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes, and the number of SOA design defects. Assessment shows that metrics provide guidelines indicating the progress a software system has made and the quality of its design. Using these guidelines, we can develop more usable and maintainable software systems to fulfill the demand for efficient software applications. Another valuable result of this study is that developers try to keep backwards compatibility when they introduce new functionality; sometimes they perform necessary breaking changes to those newly introduced elements in future versions, giving their clients time to adapt their systems. This is a valuable practice for developers because it leaves more time to assess the quality of their software before releasing it.
Other improvements in this research include the investigation of further design attributes and SOA design defects, which can be computed by extending the tests we performed.
VTrace-A Tool for Visualizing Traceability Links among Software Artefacts for... (journalBEEI)
Traceability management plays a key role in tracing the life of a requirement through all the specifications produced during the development phase of a software project. A lack of traceability information not only hinders understanding of the system but also proves to be a bottleneck in its future maintenance. Projects that maintain traceability information during the development stages often fail to upgrade their artefacts or to maintain traceability among the different versions of the artefacts produced during the maintenance phase. As a result, the software artefacts lose their trustworthiness, and engineers mostly work from the source code for impact analysis. The goal of our research is to understand the impact of visualizing traceability links on change management tasks for an evolving system. As part of our research we have implemented a traceability visualization tool, VTrace, that manages software artefacts and also enables the visualization of traceability links. The results of our controlled experiment show that subjects who used the tool were more accurate and faster on change management tasks than subjects who did not.
A methodology to evaluate object oriented software systems using change requi... (ijseajournal)
It is a well-known fact that software maintenance plays a major role and finds importance in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change-requirement-traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues have been related to change impact algorithms and inheritance of functionality.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I...ijseajournal
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practice, was selected as a test suite. Six versions of this software were profiled and data collected dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, (3) COCOMO effort, and (4) COCOMO duration, were used to analyze software degradation and maturity level, and the obtained results were used as input to time series analysis in order to predict the effort and duration that may be needed for the development of future versions. The novel idea is that historical evolution data is used to project, predict, and forecast resource requirements for future developments. The technique presented in this paper will empower software development decision makers with a viable tool for planning and decision making.
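The software maturity index named above has a commonly used definition (IEEE Std 982.1): the fraction of modules that survived unchanged from the previous release. A minimal sketch of that computation; the release labels and module counts below are hypothetical illustrations, not the JHotDraw measurements from the study:

```python
def maturity_index(total_modules, added, changed, deleted):
    """Software Maturity Index: SMI = (M_T - (F_a + F_c + F_d)) / M_T.

    M_T is the number of modules in the current release; F_a, F_c, F_d
    are modules added, changed, and deleted relative to the previous
    release. SMI approaches 1.0 as the product stabilizes.
    """
    return (total_modules - (added + changed + deleted)) / total_modules

# Hypothetical release history (version, M_T, F_a, F_c, F_d)
releases = [
    ("7.1", 520, 40, 65, 10),
    ("7.2", 540, 25, 30, 5),
    ("7.3", 555, 15, 12, 2),
]
for version, m_t, f_a, f_c, f_d in releases:
    print(version, round(maturity_index(m_t, f_a, f_c, f_d), 3))
```

A rising SMI across versions is the signal of maturing, degrading less per release; a sequence of such values is exactly the kind of series a time series model can extrapolate.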
This document discusses software reuse and application frameworks. It covers the benefits of software reuse like accelerated development and increased dependability. Application frameworks provide a reusable architecture for related applications and are implemented by adding components and instantiating abstract classes. Web application frameworks in particular use the model-view-controller pattern to support dynamic websites as a front-end for web applications.
2016 state of industrial internet application development.
Study Highlights
This study, carried out in collaboration with GE Digital, surveyed the existing industrial developer landscape to better understand who industrial developers are, how they allocate their time and resources when developing applications, the challenges faced in the development process, and the technological opportunities available to them. The study, a survey of over 1,200 industrial developers, concludes that there is a need within the industrial developer community for focused tools and that these developers would receive significant benefit from using PaaS and infrastructures such as Predix. Relevant findings include the following:
This document provides an outline and details of the key topics covered in Unit 1 of a Software Engineering course, including defining framework activities, identifying task sets, and process patterns. The five framework activities are communication, planning, modeling, construction, and deployment. Process patterns describe process-related problems, the environment they occur in, and proven solutions. The document also discusses approaches to software process assessment and improvement like SCAMPI, CBA IPI, SPICE, and ISO 9001:2000.
Multiagent Based Methodologies have become an important subject of research in advanced Software Engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of Agent Applications is deciding which methodology to follow. Trying to answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multiagent system methodologies. The results of the comparison over two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of Multiagent Based Software Engineering Methodology.
The document discusses various types of software testing:
- Development testing includes unit, component, and system testing to discover defects.
- Release testing is done by a separate team to validate the software meets requirements before release.
- User testing involves potential users testing the system in their own environment.
The goals of testing are validation, to ensure requirements are met, and defect testing to discover faults. Automated unit testing and test-driven development help improve test coverage and regression testing.
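The automated unit testing mentioned above can be illustrated with Python's built-in unittest module. The Stack class is a hypothetical unit invented for illustration, not code from any of the documents in this list:

```python
import unittest

class Stack:
    """Minimal stack used here as the unit under test."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Under test-driven development these tests would be written first,
    # and Stack implemented only until they pass.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Build and run the suite explicitly so the tests execute even
# outside a dedicated test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because such tests are automated, they can be re-run after every change, which is what makes the regression testing described above cheap.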
This document proposes techniques for detecting and correcting design defects in object-oriented software. It discusses using design patterns as a reference to detect defects and class slicing to refactor code to meet design specifications. The detection process involves specifying quality goals, static program analysis, metric computation, and comparing the software design to an object-oriented design knowledge base containing design patterns and principles. Identified defects are then suggested for correction, which involves class slicing to modify the software design while preserving behavior. The goal is to develop tools that can automatically detect and correct design defects to improve software quality and reduce costs.
Software Engineering Important Short Question for ExamsMuhammadTalha436
The document discusses various topics related to software engineering including:
1. The software development life cycle (SDLC) and its phases like requirements, design, implementation, testing, etc.
2. The waterfall model and its phases from modeling to maintenance.
3. The purpose of feasibility studies, data flow diagrams, and entity relationship diagrams.
4. Different types of testing done during the testing phase like unit, integration, system, black box and white box testing.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document provides a selective survey of software reliability models. It discusses both static models used in early development stages and dynamic models used later. For static models, it describes a phase-based model and predictive development life cycle model. For dynamic models, it outlines reliability growth models, including binomial, Poisson, and other classes. It also presents a case study of incorporating code changes as a covariate into reliability modeling during testing of a large telecommunications system. The document concludes by advocating for wider use of statistical software reliability models to improve development and testing processes.
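One widely cited reliability growth model of the Poisson class mentioned above is the Goel-Okumoto NHPP model, in which the expected cumulative number of failures by time t is m(t) = a(1 − e^(−bt)), with a the total expected failure count and b the per-fault detection rate. A minimal sketch; the parameter values are hypothetical, not fitted to any system from the survey:

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative failures m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure rate, the derivative of m(t)."""
    return a * b * math.exp(-b * t)

# Hypothetical parameters: 120 latent faults, detection rate 0.05 per week
a, b = 120.0, 0.05
for week in (1, 10, 50):
    print(week, round(goel_okumoto(week, a, b), 1))
```

In practice a and b are estimated from observed failure data (e.g. by maximum likelihood), and the fitted curve is used to decide when testing can stop.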
This document proposes developing an extended maintainability estimation model for object-oriented software designs that incorporates reliability and portability metrics. It begins by introducing maintainability and discussing how estimating maintainability during design can help reduce maintenance costs. It then reviews related work on maintainability models and metrics. The proposed work section outlines developing a model that calculates maintainability based on reliability and portability factors. It defines the key aspects of reliability and portability and describes a methodology for inheriting these factors into an existing maintainability model called MOOD. The methodology would use a MATLAB GUI to demonstrate how replacing buggy components with reliable, portable ones could lower maintenance costs.
An Empirical Study of SQA Function Effectiveness in CMMI Certified Companies ...zillesubhan
The most vital component of any software development process is quality, as it ensures the reliability and effectiveness of new software. Software Quality Assurance (SQA) techniques, as well as a standardized qualitative metric known as Capability Maturity Model Integration (CMMI), are used to ensure this quality. The purposes of both practices are the same, as both make efforts towards the end product's quality. In spite of this, CMMI certified organizations that have an SQA function still face a lot of issues, which result in lowering the quality of the products. Standards usually provide documentation, but SQA considers testing the chief element, with documentation only for authentication and appraisals. The relationship of the SQA function with CMMI has not received much attention in the literature. This paper is centered on an investigation conducted through data collection from diverse CMMI certified software development firms to check the practice of the SQA function.
The document discusses assessing software complexity and security metrics from UML class diagrams for software reengineering. It proposes developing a Software Reverse Engineering Tool (SRET) that can automatically calculate metrics like coupling, cohesion, and security metrics from a UML class diagram generated from source code. This would help analysts and developers evaluate software metrics more quickly and efficiently during reengineering compared to manual methods. The tool would extract metrics based on rules applied to the class diagram to measure things like data access, operation access, and interactions between methods and attributes.
This document proposes a 3 layered filtering approach to help developers and managers efficiently implement changes to agile software projects based on new requirements. The first layer classifies requirement changes. The second layer identifies which architecture layers will be affected. The third layer selects the appropriate agile methodology based on the first two layers. Each layer iterates as new requirements emerge, with layers tightly related to each other. This provides a way to abstract and prioritize different issues related to new requirements, allowing changes to be made with less time and money spent.
MVC Architecture from Maintenance Quality Attributes PerspectiveCSCJournals
This paper provides an explanatory study on MVC (Model-View-Controller) architecture from the perspective of maintenance. It aims to answer a knowledge question about how MVC architecture supports the maintainability quality attributes. This knowledge boosts the potential of utilizing the maintainability of MVC from several sides. To fulfill this purpose, we investigate the main mechanism of MVC with focusing on maintainability quality attributes. Accordingly, we form and discuss MMERFT maintainability set that consists of Modifiability, Modularity, Extensibility, Reusability, Flexibility, and Testability. Besides investigating the mechanism of MVC regarding MMERFT quality attributes, we explain how MVC supports maintainability by examining measures and approaches such as: complexity of code by using a cyclomatic approach, re-engineering process, use of components, time needed to detect bugs, number of code lines, parallel maintenance, automation, massive assignment, and others. Therefore, this paper is dedicated to providing a concrete view of how MVC gets along with maintainability aspects in general and its several attributes particularly. This view helps to maximize the opportunity of taking advantage of MVC's maintainability features that can encourage reconsidering the maintenance decisions and the corresponding estimated cost. The study focuses on maintainability since software that has high maintainability will have the opportunity to evolve, and consequently, it will have a longer life. Our study shows that MVC generally supports maintainability and its attributes, and it is a recommended choice when maintenance is a priority.
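The cyclomatic approach mentioned above measures the number of linearly independent paths through a routine; for a single function it equals the number of decision points plus one. A rough keyword-counting sketch (a simplification — real tools such as static analyzers build the actual control-flow graph; the sample function is hypothetical):

```python
import re

# Decision-point keywords/operators counted by this rough approximation.
DECISION_TOKENS = r"\b(if|elif|for|while|case|catch|and|or)\b|\?"

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) as (number of decision points) + 1."""
    return len(re.findall(DECISION_TOKENS, source)) + 1

sample = """
def classify(n):
    if n < 0 and n % 2:
        return "negative odd"
    elif n < 0:
        return "negative even"
    for _ in range(n):
        pass
    return "non-negative"
"""
print(cyclomatic_complexity(sample))  # 4 decision points -> V(G) = 5
```

Lower per-method complexity is one concrete way an architecture like MVC, which splits responsibilities across small controllers and models, can improve the testability and modifiability attributes discussed above.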
David vernon software_engineering_notesmitthudwivedi
This document provides an overview of the Software Engineering 2 course, including its aims, objectives, course contents, and recommended textbooks. The course aims to provide knowledge of techniques for estimating, designing, building, and ensuring quality in software projects. The objectives cover understanding software metrics, estimating project costs and schedules, quality assurance attributes and standards, and software analysis and design techniques. The course content includes topics like software metrics, estimation models, quality assurance, and object-oriented analysis and design. The document also summarizes several software engineering process models and risk management approaches.
This document summarizes a research paper on software architecture reconstruction methods. It discusses how software architectures can drift over time from the original design due to changes and deviations. Architecture reconstruction is used to recover the original architecture by applying reverse engineering techniques. The document reviews different bottom-up, top-down, and hybrid methods for architecture reconstruction, including tools like ARMIN and Rigi. It also defines key terms related to architecture reconstruction and the challenges of architectural aging, erosion, drift, and mismatch.
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to object-oriented design metrics as independent variables. The model is validated experimentally using data from class diagrams, showing it is highly significant.
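The multiple linear regression step described above relates changeability to design metrics, i.e. C = b0 + b1·Coupling + b2·Inheritance + …. A self-contained ordinary least-squares sketch via the normal equations; the choice of predictors, the metric values, and the scores below are hypothetical illustrations, not the paper's data:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y.

    X: rows of the form [1, x1, x2, ...] (leading 1 for the intercept).
    Solved with Gaussian elimination; adequate for small metric sets.
    """
    n = len(X[0])
    # Build the augmented normal-equation matrix [X^T X | X^T y].
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        pivot = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        b[i] = (A[i][n] - sum(A[i][j] * b[j]
                              for j in range(i + 1, n))) / A[i][i]
    return b

# Hypothetical per-class rows: [1, coupling, inheritance_depth]
X = [[1, 2, 1], [1, 4, 2], [1, 6, 1], [1, 8, 3]]
y = [3.0, 6.0, 7.0, 11.0]  # hypothetical changeability scores
print([round(v, 3) for v in fit_linear(X, y)])
```

Validation of such a model then checks whether the fitted coefficients are statistically significant on held-out class-diagram data, as the paper describes.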
EReeRisk- EFFICIENT RISK IMPACT MEASUREMENT TOOL FOR REENGINEERING PROCESS OF...ijpla
EReeRisk (Efficient Reengineering Risk) is a risk impact measurement tool which automatically identifies and measures the impact of the various risk components involved in the reengineering process of legacy software systems. EReeRisk takes data directly from users of the legacy system and establishes various risk measurement metrics according to the different risk measurement schemes of the ReeRisk framework [1]. Furthermore, EReeRisk presents a variety of statistical quantities that help project management decide when evolution of a legacy system through reengineering is likely to be successful. Its enhanced user interface greatly simplifies the risk assessment procedures and reduces the time they require. The tool can perform the following tasks to support decisions concerning the selection of reengineering as a system evolution strategy.
The document discusses various topics related to software engineering including:
1) How early days of software development have affected modern practices.
2) Definitions of software engineering from different sources.
3) The stages of software design including problem analysis, solution identification, and abstraction description.
4) Object-oriented design principles like information hiding, independent objects, and service-based communication.
DESQA a Software Quality Assurance FrameworkIJERA Editor
In current software development lifecycles of heterogeneous environments, the pitfalls businesses have to face are that software defect tracking, measurements and quality assurance do not start early enough in the development process. In fact the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC) which is particularly true for Service Oriented Architecture (SOA). Thus the aim of this study is to develop a new framework for defect tracking and detection and quality estimation for early stages particularly for the design stage of the SDLC. Part of the objectives of this work is to conceptualize, borrow and customize from known frameworks, such as object-oriented programming to build a solid framework using automated rule based intelligent mechanisms to detect and classify defects in software design of SOA. The implementation part demonstrated how the framework can predict the quality level of the designed software. The results showed a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes and the number of SOA Design Defects. Assessment shows that metrics provide guidelines to indicate the progress that a software system has made and the quality of design. Using these guidelines, we can develop more usable and maintainable software systems to fulfill the demand of efficient systems for software applications. Another valuable result coming from this study is that developers are trying to keep backwards compatibility when they introduce new functionality. Sometimes, in the same newly-introduced elements developers perform necessary breaking changes in future versions. In that way they give time to their clients to adapt their systems. This is a very valuable practice for the developers because they have more time to assess the quality of their software before releasing it. 
Other improvements in this research include investigation of other design attributes and SOA Design Defects which can be computed in extending the tests we performed.
VTrace-A Tool for Visualizing Traceability Links among Software Artefacts for...journalBEEI
Traceability Management plays a key role in tracing the life of a requirement through all the specifications produced during the development phase of a software project. A lack of traceability information not only hinders the understanding of the system but also will prove to be a bottleneck in the future maintenance of the system. Projects that maintain traceability information during the development stages somehow fail to upgrade their artefacts or maintain traceability among the different versions of the artefacts that are produced during the maintenance phase. As a result the software artefacts lose the trustworthiness and engineers mostly work from the source code for impact analysis. The goal of our research is on understanding the impact of visualizing traceability links on change management tasks for an evolving system. As part of our research we have implemented a Traceability Visualization Tool-VTrace that manages software artefacts and also enables the visualization of traceability links. The results of our controlled experiment show that subjects who used the tool were more accurate and faster on change management tasks than subjects that didn’t use the tool.
A methodology to evaluate object oriented software systems using change requi...ijseajournal
It is a well-known fact that software maintenance plays a major role and finds importance in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change requirement traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues have been related to change impact algorithms and inheritance of functionality.
Mvc architecture driven design and agile implementation of a web based softwa...ijseajournal
This paper reports the design and implementation of a web-based software system for storing and managing information related to time management and productivity of employees working on a project. The system has been designed and implemented with best principles from model-view-controller and agile development. Such a system has practical use for any organization in terms of ease of use, efficiency, and cost savings. The manuscript describes the design of the system as well as its database and user interface. Detailed snapshots of the working system are provided too.
Improved Strategy for Distributed Processing and Network Application Developm...Editor IJCATR
The complexity of software development abstraction and the new development in multi-core computers have shifted the burden of distributed software performance from network and chip designers to software architects and developers. We need to look at software development strategies that will integrate parallelization of code, concurrency factors, multithreading, distributed resource allocation, and distributed processing. In this paper, a new software development strategy that integrates these factors is further experimented on parallelism. The strategy is multidimensional and aligns distributed conceptualization along a path. This development strategy mandates application developers to reason along usability, simplicity, resource distribution, parallelization of code where necessary, processing time and cost factor realignment, as well as security and concurrency issues in a balanced path from the originating point of the network application to its retirement.
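The parallelization and multithreading factors the strategy calls for can be sketched with Python's standard concurrent.futures; the chunked sum-of-squares task here is a hypothetical stand-in for real distributable work:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Hypothetical per-chunk work unit standing in for a real task."""
    return sum(x * x for x in chunk)

# Partition the input so independent chunks can be processed in parallel.
data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

# Fan the chunks out across a small worker pool, then combine the
# partial results -- the map/reduce shape the strategy describes.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))
total = sum(partials)
print(total)
```

Under CPython's GIL, threads mainly help I/O-bound work; a CPU-bound task like this one would typically swap in ProcessPoolExecutor with the same map/reduce structure.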
Unified V- Model Approach of Re-Engineering to reinforce Web Application Deve...IOSR Journals
The document discusses approaches for reengineering web applications. It proposes using a unified V-model approach to reinforce web application development through reengineering. Specifically, it discusses:
1) Using reverse engineering to analyze existing web applications and recover designs, followed by forward engineering to restructure the applications based on new requirements.
2) Applying the V-model at each phase of the web development process during reengineering to incorporate methodology.
3) The reengineering process involves reverse engineering, transformations to adapt to new technologies/requirements, and forward engineering to implement the new design.
This document discusses elements that contribute to legacy program complexity. It identifies factors such as difficulty understanding old code, high cost of maintenance and replacement, large size, poor design, integration challenges with new technologies, lack of documentation, inflexibility, long processing times, unavailability of original staff, reliability issues, and bugs. The paper explores each of these elements in detail and argues that legacy programs are complex due to a combination of these interrelated factors such as large size, complex designs with many interconnected parts, and difficulty integrating old code and platforms with new technologies.
Integrated Analysis of Traditional Requirements Engineering Process with Agil...zillesubhan
In the past few years, the agile software development approach has emerged as a most attractive software development approach. A typical CASE environment consists of a number of CASE tools operating on a common hardware and software platform, and there are a number of different classes of users of a CASE environment; some users, such as software developers and managers, wish to make use of CASE tools to support them in developing application systems and monitoring the progress of a project. The agile development approach has quickly caught the attention of a large number of software development firms. However, this approach particularly pays attention to the development side of a software project while neglecting critical aspects of the requirements engineering process. In fact, there is no standard requirements engineering process in this approach, and requirements engineering activities vary from situation to situation. As a result, a large number of problems emerge which can lead software development projects to failure. One of the major drawbacks of the agile approach is that it is suitable for small projects with limited team size; hence, it cannot be adopted for large projects. We claim that this approach can be used for large projects if the traditional requirements engineering approach is combined with the agile manifesto. In fact, the combination of the traditional requirements engineering process and the agile manifesto can also help resolve a large number of problems that exist in agile development methodologies. In software development the most important thing is to know the customer's requirements clearly, and also through modeling (data modeling, functional modeling, behavior modeling). Using UML we are able to build an efficient system starting from scratch towards the desired goal. Through UML we start from an abstract model and develop the required system by going into details with different UML diagrams.
Each UML diagram serves a different goal towards implementing a whole project.
This document discusses several software development models and practices. It describes the waterfall model which involves sequential stages of requirement analysis, design, implementation, testing, and maintenance. It also covers prototyping, rapid application development (RAD), and component assembly models which are more iterative in nature. The prototyping model involves creating prototypes to help define requirements, RAD emphasizes reuse and short development cycles, and component assembly focuses on reusing existing software components.
Software testing is a key part of software engineering used to evaluate software quality and identify errors. There are various software testing techniques and methods, but thoroughly investigating complex software is more important than following a specific procedure. Testing complex software cannot discover all errors, but it can help improve quality. Software engineering involves defining requirements, design, development, testing, and maintenance of software using methodologies like agile development.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Adopting DevOps practices: an enhanced unified theory of acceptance and use o...IJECEIAES
DevOps software development approach is widely used in the software engineering discipline. DevOps eliminates the development and operations department barriers. The paper aims to develop a conceptual model for adopting DevOps practices in software development organizations by extending the unified theory of acceptance and use of technology (UTAUT). The research also aims to determine the influencing factors of DevOps practices’ acceptance and adoption in software organizations, determine gaps in the software development literature, and introduce a clear picture of current technology acceptance and adoption research in the software industry. A comprehensive literature review clarifies how users accept and adopt new technologies and what leads to adopting DevOps practices in the software industry as the starting point for developing a conceptual framework for adopting DevOps in software organizations. The literature results have formulated the conceptual framework for adopting DevOps practices. The resulting model is expected to improve understanding of software organizations’ acceptance and adoption of DevOps practices. The research hypotheses must be tested to validate the model. Future work will include surveys and expert interviews for model enhancement and validation. This research fulfills the necessity to study how software organizations accept and adopt DevOps practices by enhancing UTAUT.
This document summarizes several software development process models. It begins by defining what a software process is - a framework for the activities required to build software. It then discusses evolutionary models like prototyping and the spiral model, which use iterative development and user feedback. Concurrent modeling is presented as allowing activities to occur simultaneously. The Unified Process is described as use case driven and iterative. Other models discussed include component-based development, formal methods, and aspect-oriented development. Personal and team software processes are also summarized, focusing on planning, metrics, and continuous improvement.
DESIGN PATTERNS IN THE WORKFLOW IMPLEMENTATION OF MARINE RESEARCH GENERAL INF...AM Publications
This paper proposes the use of design patterns in a marine research general information platform. The development of the platform refers to a design of complicated system architecture. Creation and execution of the research workflow nodes and designing of visualization library suited for marine users play an important role in the whole software architecture. This paper studies the requirements characteristic in marine research fields and has implemented a series of framework to solve these problems based on object-oriented and design patterns techniques. These frameworks make clear the relationship in all directions between modules and layers of software, which communicate through unified abstract interface and reduce the coupling between modules and layers. The building of these frameworks is importantly significant in advancing the reusability of software and strengthening extensibility and maintainability of the system.
How Observability and Explainability Benefit the SDLCCloudZenix LLC
Observability and explainability are crucial for a seamless software development life cycle (SDLC). Observability enables real-time monitoring, troubleshooting, and optimization, ensuring smooth operations. Explainability helps understand AI models' decisions, improving transparency and trust. Read more: https://cloudzenix.com/
Elementary Probability theory Chapter 2.pptxethiouniverse
The document discusses various software process models including waterfall, iterative, incremental, evolutionary (prototyping and spiral), and component-based development models. It describes the key activities and characteristics of each model and discusses when each may be applicable. The waterfall model presents a linear sequential flow while evolutionary models like prototyping and spiral are iterative and incremental to accommodate changing requirements.
An Approach of Improve Efficiencies through DevOps AdoptionIRJET Journal
This document discusses adopting DevOps practices to improve organizational efficiencies. It begins with an abstract discussing how organizations waste resources and how DevOps aims to address this through lean principles and continuous feedback. It then discusses the history and concepts of DevOps, proposing a DevOps adoption model. It outlines factors that affect IT performance and cultural transformation. The document also describes the research design of a study conducted through interviews with DevOps professionals. It identifies four main challenges to DevOps adoption: lack of awareness, lack of support, implementing technologies, and adapting processes. The analysis focuses on the lack of awareness challenge, noting confusion around DevOps definitions and resistance to "buzzwords".
Modern applications are complex with microservices and containers which makes observability challenging. Observability refers to understanding an application's state through logs, events and other data. It is more comprehensive than monitoring alone. To achieve effective observability, organizations should use error tracking, distributed tracing, APM, infrastructure monitoring, log aggregation and incident management tools. They should also implement development best practices like shift-left processes and continuous communication between teams. The goal of observability is to deliver the best user experience and maximize business value.
The impact of user involvement in software development processnooriasukmaningtyas
In the software development process, the user can take part in any phase of the process, depending on what model is being applied. Lack of user involvement can result in a poorly designed solution, or even a solution that conflicts with the user's needs. This review paper presents the impact of user involvement in the software development process. In this study, different software development processes are reviewed, showing where the user usually gets involved in different models such as structural (waterfall, V-model) and incremental (Scrum, extreme programming XP). As each model differs from the others, each of them has a different perspective of where the user should take part and where they should not. This can be an asset that helps project managers and leaders to develop suitable strategies to follow in their projects.
Similar to A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELOPMENT (20)
Since so many years a problem occurs in KSB Pump Va mbori for casting process i.e. cracks occurs in the castings & it is repeated one. Therefore the compan y has given opportunity to me to solve this problem . In case of steel casting there are mainly cracks & also blo w holes induced due to the casting procedure. There are many factors for the casting defects .The factor is unev en material feeding in casting & also due to the mo uld material & also the core material. These cracks finally brea k directly the component of the casting i.e. in cas e of pump the casting component is like Impeller,Volute casing & casing cover. At the time of feeding of steel material in to the casting the material is in liquid us form i.e. it i s hot material & this material is feeding into casting at the time o f feeding it develop different region of heat. At o ne side the temp is high &at other side the temp is low this also pr oduce cracks. To simulate that casting we use the M AGMA SOFTWARE for simulation & validate it using NDT.
A COMPARATIVE STUDY OF DESIGN OF SIMPLE SPUR GEAR TRAIN AND HELICAL GEAR TRAI...ijiert bestjournal
The document describes the design of a simple spur gear train and helical gear train with an idler gear using the AGMA (American Gear Manufacturers Association) standard method. Key steps of the design process include selecting input parameters, creating a preliminary drawing, selecting materials, and performing theoretical calculations to determine dimensions and check for bending and contact stresses based on AGMA equations. A comparative study is carried out to select the optimal gear train design that meets the strength requirements for the given input parameters and load conditions.
COMPARATIVE ANALYSIS OF CONVENTIONAL LEAF SPRING AND COMPOSITE LEAFijiert bestjournal
A leaf spring is a simple form of spring,commonly used for the suspension in wheeled vehicles. It is also one of the oldest forms of spring. Sometimes referred to as a semielliptical l eaf spring (SELS) it takes the form of a slender ar c-shaped length of spring steel of rectangular cross section. The centre of the arc p rovides location for the axle,while tie holes are provided at either end for attaching to the vehicle body. In the present work,a seven-leaf steel spring use d in passenger cars is replaced with a composite mu lti leaf spring made of glass/epoxy composites. The dimensions sand the num ber of leaves for both steel leaf spring and compos ite leaf springs are considered to be the same. The primary objective is to compare their load carrying capacity,stiffness and weight savings of composite leaf spring. Finally,fatigue life of steel and com posite leaf spring is also predicted using life dat a
Brimmed diffuser is collection�acceleration device which shrouds a wind turbine.For a given turbine di ameter,the power augmentation can be achieved by brimmed diffuser,p opularly known as wind lens. The present numerical investigation deals with the effect of low pressure region created by wind l ens and hence to analyze the strong vortices formed by a brim attached to the shroud diffuser at exit. Also in this analysis,a c omparative numerical prediction of mass flow rates through the wind turbine has been carried out with various types of wind lens wh ich in turn helps to optimize the torque augmentati on. It has been numerically proved that there is significant increase in the wa ke formation & vortex strength when brimming effect is added to a diffuser
FINITE ELEMENT ANALYSIS OF CONNECTING ROD OF MG-ALLOY ijiert bestjournal
The automobile engine connecting rod is a high volume production,critical component. It co nnects reciprocating piston to rotating crankshaft,transmitting the thrust of the piston to the crankshaft. Every vehicle that uses an internal combustion engine requires at least one connecting rod depending upon the number of cylinders in the engine. As the purp ose of the connecting rod is to transfer the reciprocating motion of the piston into rotary motion of the crankshaft. Connecting ro ds for automotive applications are typically manufactured by forging from either w rought steel or powdered metal. the material used f or this process is Mg-Alloy and also finite element analysis of connecting rod
REVIEW ON CRITICAL SPEED IMPROVEMENT IN SINGLE CYLINDER ENGINE VALVE TRAINijiert bestjournal
1) The document discusses improving the critical speed of the valve train in a single cylinder engine from 3600 rpm to 5000 rpm. It aims to optimize the valve spring parameters to increase the speed limit without failure of contact between components.
2) An analytical and simulation-based approach is proposed. The valve spring stiffness, pushrod buckling, contact stresses, and natural frequency response will be analyzed. ADAMS multi-body dynamics software will be used to simulate the optimized design.
3) Preliminary results found that with the optimized valve spring configuration, the engine speed could be increased beyond 5000 rpm without failure, unlike with the existing design. Experimental validation of the optimized design will evaluate performance.
ENERGY CONVERSION PHENOMENON IN IMPLEMENTATION OF WATER LIFTING BY USING PEND...ijiert bestjournal
This paper consist of working of reciprocating pump which is driven by a compound pendulum. It provide s the energy required to lift the water from a tank placed approximately several meter below the ground level. Basic application of the mechanism will be for watering the garden which will be operated by means of operation opening and closing of entrance gate. Paper consists of basic concept,design of pump and compound pendulum mecha nism and fabricationed model. The concept can also be implemented in the rural areas,having the problem of electric supply. We aim at making a prototype for providing some me an for pumping of water by the pump which requires less human efforts,conside ring cost effectiveness,easy to operate and portab le mechanism.
The IC engine has seen numerous revolutionary and e volutionary modifications in technology and design over the past few decades. The sole motto behind the modifications wa s to increase the overall efficiency of the IC Engi ne including volumetric and thermal efficiency. Recently few benchmarking techn ologies like the CRDI,MPFI,HCCI,etc. in the Otto cycle and Diesel cycle engines have created an enormous revolution in the automobile industry. In spite of these technologica l and design advances,the efficiencies are not being more than a particular l imit. However,the concept of split cycle engines has dra stically increased the overall performance in all respect. The split cycle concept basically separates the fou r strokes of the conventional cycle. The Scuderi engine one of the best-in-class engine desi gns based on the split cycle concept. The Scuderi engine works on the split cycle and gives higher efficiency than the previous split cycle engines resulting overall high perform ance. It also eliminates the problems faced by previous engines based on the spl it cycle in terms of breathing (volumetric efficien cy) and thermal efficiency. This paper throws light on the greater volumetric,thermal and overall efficiency key points related t o the Scuderi Engines.
EXPERIMENTAL EVALUATION OF TEMPERATURE DISTRIBUTION IN JOURNAL BEARING OPERAT...ijiert bestjournal
The excessive rise of temperature in the journal be aring operating at boundary/mixed lubrication regim es. Journal bearing test set- up is used to measure the temperature along the cir cumference of the bearing specimen for different lo ading conditions. Here in this journal bearing of l/d ratio 1,diameter of jo urnal is 60mm and the bearing length is 60mm,clear ance is .06mm has been designed and tested to access the temperature rise of the bearing. The result shows that as the load o n the bearing is increasing temperature also increasing. Temperature analysis o f journal bearing is also done by the Ansys workben ch software
STUDY OF SOLAR THERMAL CAVITY RECEIVER FOR PARABOLIC CONCENTRATING COLLECTOR ijiert bestjournal
Energy is one of the building blocks of the country . The growth of the country has been fueled by chea p,abundant energy resources. Solar energy is a form of renewable ener gy which is available abundantly and collected unre servedly. The parabolic concentrator reflects the direct incident solar rad iation onto a receiver mounted above the dish at it s focal point. The conversion of concentrated solar radiation to heat takes place in receiver. The heat transfer characteristics of the receiver changes during the rotation of the receiver which affects thermal performance. The working temperature may also influence the ther mal performance and overall efficiency of the system. Thermal as well as optica l losses affect the performance of a solar paraboli c dish-cavity receiver system. The thermal losses of a solar cavity receiver include c onvective and radiative losses to the air in the ca vity and conductive heat loss through the insulation used behind the helical tube surface. Convective and radiative heat losses form the major constituents of the thermal losses. The convection heat loss from cavit y receiver in parabolic dish solar thermal power sy stem can significantly reduce the efficiency and consequently the cost effectiveness of the system. It is important to assess this heat loss and subsequently improve the thermal performance of the receiver.
DESIGN, OPTIMIZATION AND FINITE ELEMENT ANALYSIS OF CRANKSHAFTijiert bestjournal
Crankshaft is a crucial component in an engine asse mbly. Crankshaft is consisting of two web sections and one crankpin,which converts the reciprocating displacement of the pist on to a rotary motion with a four link mechanism. G enerally crankshafts are manufactured using cast iron and forged steel mater ial. In this work to design and finite element anal ysis of crankshaft of 4 cylinder petrol engine of Maruti swift Vxi. of 1200 cubic capacity. The finite element analysis in ABA QUS software by using six materials based on their composition viz. Cast iron,EN30B,SAE4340,Structural steel,C70 Alloy steel and Aluminium based composite material reinforced with silicon carbide & fly ash. The parameter like von misses stress,deformation;maximum and minimum principal stress & strain were obtained from analysis software. The results of Finite element show that t he Aluminium based composite material is best mater ial among all. Compare the result like weight and Stiffness parameter. It is resulted of 65.539 % of weight,with reduction i n deformation.
ELECTRO CHEMICAL MACHINING AND ELECTRICAL DISCHARGE MACHINING PROCESSES MICRO...ijiert bestjournal
Nowadays,necessity of small components is a common trend. These requirements encourage the researcher s to develop very minutest size components to fulfill the demand. The manufact uring of these type of components is a difficult ob ligation and for that various machining methods are develop to manufacture such c omponents. In this article the Electro Chemical mac hining and Electrical Discharge Machining is reviewed. We tried to summar ize the work of various researchers. The study show s that this type of machining processes gives good alternative.
HEAT TRANSFER ENHANCEMENT BY USING NANOFLUID JET IMPINGEMENTijiert bestjournal
This document presents an experimental study on heat transfer enhancement using nanofluid jet impingement. Key findings include:
1) The use of nanofluids (Al2O3-water) can increase heat transfer coefficients by up to 44% compared to using water alone.
2) Heat transfer coefficients are highest near the stagnation point and decrease further from the center.
3) Varying the nozzle-to-plate distance (Z/D ratio) between 2-8 results in maximum heat transfer, with little effect beyond Z/D of 12.
4) Increasing the flow rate leads to higher heat transfer coefficients, up to a 5% increase from 2 lpm to 4 lpm.
MODIFICATION AND OPTIMIZATION IN STEEL SANDWICH PANELS USING ANSYS WORKBENCH ijiert bestjournal
The demand for bigger,faster and lighter moving ve hicles,such as ships,trains,trucks and buses has increased the importance of efficient str uctural arrangements. In principle two approaches exist to develop efficient structures:e ither application of new materials or the use of new structural design. A proven and well-establi shed solution is the use of composite materials and sandwich structures. In this way high strength to weight ratio and minimum weight can be obtained. The sandwich structures have potential to offer a w ide range of attractive design solutions. In addition to the obtained weight reduction,these so lutions can often bring space savings,fire resistance,noise control and improved heating and cooling performance. Laser-welded metallic sandwich panels offer a number of outstand ing properties allowing the designer to develop light and efficient structural configuratio ns for a large variety of applications. These panels have been under active investigations during the last 15 years in the world.
IMPACT ANALYSIS OF ALUMINUM HONEYCOMB SANDWICH PANEL BUMPER BEAM: A REVIEW ijiert bestjournal
Bumper is a energy absorbing protective element whi ch absorb the energy in front collision and protect valuable parts like radiator etc. Bumper is act like protect ive shield generally made of steel material. As eco nomic point of view and to reduce consumption of fuel manufacturin g of light weight vehicle is requirement of current situation. Application of composite material in automobile sec tor is now day common thing. Aluminum honeycomb san dwich panel is basically material from aerospace industri es and known for its high strength to weight ratio. Sandwich structure basically having its properties due to ge ometry. To determine various properties of sandwich structure conducting experiments is expensive,so generally F EA is used .However complex geometry is hurdle so t here are various theories are available for simplification o f model. These theories convert 3D model in to homo genous model .As far as concerning India manufacturing rate of s andwich structure is very less,so generally cost i s more. Greatest giant manufacturer is china we can observe their bu llet train and metro transport facility constructio n. Recently in march 2014 largest selling Indian cars are failed in NCap test in 100% frontal crash test. So requirement of more energy absorbing material with economy cons ideration is important.
Robotic welding requires specialized fixtures to ac curately hold the work piece during the welding operation. Despite the large variety of welding fix tures available today the focus has shifted in maki ng the welding arms more versatile,not the fixture. T he new fixture design reduces cycle time and operat or labor while increasing functionality;and allows co mplex welding operations to be completed on simple two axis welding arms
ADVANCED TRANSIENT THERMAL AND STRUCTURAL ANALYSIS OF DISC BRAKE BY USING ANS...ijiert bestjournal
In these paper structural fields of the solid disc brake during short and emergency braking with four different materials is studied. The distribution of the tempe rature depends on the various factors such as frict ion,surface roughness and speed. The effect of the angular velo city and the contact pressure induces the temperatu re rise of disc brake. The finite element simulation for three -dimensional model was preferred due to the heat fl ux ratio constantly distributed in circumferential direction . Here value of temperature,friction contact power,nodal displacement and deformation for different pressure condition using analysis software with four materi als namely cast iron,cast steel,aluminium and carbon fibre reinforced plastic are taken. Presently the D isc brakes are made up of cast iron and cast steel. With the v alue of simulation result best suitable material fo r the brake drum with higher life span is determined.
REVIEW ON MECHANICAL PROPERTIES OF NON-ASBESTOS COMPOSITE MATERIAL USED IN BR...ijiert bestjournal
Metallic matrix composites are combinations of two or more different metals inter metallic compounds or second phases in which dispersed phases are embe dded within the metallic matrix. They are produced by controlling the morphologies of the constituents to achieve optimum combination of properties. Properties of the composites depend on the properti es of the constituent phases,their relative amount,and dispersed phase geometry including particle siz e,shape and orientation in the matrix. In this pap er,The mechanical properties,behaviour and micro stru ctural evolution of aluminium metal matrix metallic composites fabricated under various process conditi ons were investigated to understand their process- structure�property relations by optimization proces s. Addition of silicon carbide to aluminum has show n an increase in its mechanical properties.
PERFORMANCE EVALUATION OF TRIBOLOGICAL PROPERTIES OF COTTON SEED OIL FOR MULT...ijiert bestjournal
A lubricant is a substance that reduces friction an d wear by providing a protective film between two moving surfaces. Good lubricants possess the proper ties such as low toxicity,high viscosity index,hi gh load carrying capacity,excellent coefficient of fr iction,good anti-wear capability,low emission int o the environment,high ignition temperature. So tribolog y related problems can be minimized by proper selection of lubricant from wear consideration. Tod ay,the depletion of reserves of crude oil,the gro wing prices of crude oil and concern about protecting th e environment against pollution have developed the interest towards environment-friendly lubricants. B ecause of these the purpose of this work is to eval uate the anti-wear characteristics of cottonseed oil and to check the suitability of cottonseed oil as a lu bricant for multi-cylinder engine. Four ball testing machin e is used for anti-wear testing as per ASTM D 4172. The wear preventive characteristic of cottonseed oi l is obtained by measuring wear scar diameter. The present study shows the potential of cotton seed oi l as an alternating lubricant.
Magnetic abrasive finishing is a machining process where the tooling allowance is remove by media wi th both magnetic and abrasive properties,with a magnetic f ield acting as a binder of a grain. Such machining falls into the category of erosion by abrasive suspension and lend itself to the finishing of any type of surface . The possibility of finishing complex surfaces is a spec ial benefit of this machining. Magnetic abrasive fi nishing process is most suitable for obtaining quality fini sh on metallic and non-metallic surfaces. Magnetic abrasive finishing used for complicated product finishing & Roughness and tolerance band achieved that is diffi cult using conventional machine process. The product dimension al requirement easily possible with taking trial wi th MAF parameters.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELOPMENT

Novateur Publication’s
International Journal of Innovation in Engineering, Research and Technology [IJIERT]
ICITDCEME’15 Conference Proceedings
ISSN No - 2394-3696
Mr. Vinayak M. Sale,
Department of Computer Science & Technology,
Shivaji University, Kolhapur, India
Email: csvs13510@gmail.com

Prof. Santaji K. Shinde,
Department of Information Technology,
Bharati Vidyapeeth’s College of Engineering, Kolhapur, India
Email: santaji@rediffmail.com
ABSTRACT
Traceability is used to ensure that the source code of a system is consistent with its requirements, i.e., that only the specified
requirements have been implemented by developers. During software maintenance and evolution, requirement traceability links become
outdated because developers rarely devote effort to updating them, and recovering traceability links later is a painful, monotonous,
and costly task. Traceability supports the software development process in various ways, such as change management, software
maintenance, and the prevention of misunderstandings. In practice, however, traceability links between requirements and code are not
created during software development because doing so requires extra effort, so developers rarely use such links during development.
Many challenges therefore exist in traceability practice today. While many of them can be overcome through organizational policy,
quality tool support for requirements traceability remains an open problem.
KEY WORDS: Traceability, requirement, management
INTRODUCTION
The traceability is very most important for any software project, and if we use it, it could be beneficial from
different perspectives for the development. When we develop a source code for any system that source code can be
traced and become identical with the requirement and analysis because we develop a source code as per the
requirement. A traceability link is the connection between the source code and requirement. Requirement
traceability helps software engineers to trace the requirement from its emergence to its fulfillment [5]. Traceability
may not help us to know how different components of systems are interlinked and dependent on each other in the
same system. We may also fail to find the impact of change on the software and system. A most important goal of
traceability, in absence of original requirements and other artifacts traceability links. Therefore, we should look at
traceability from all the aspects of traceability regarding scope and coverage [1].
Requirements traceability has received considerable attention in the scientific literature over the past decade. It is defined as
“the ability to describe and follow the life of a requirement, in both a forward and backward direction”.
Traceability links between the requirements of a system and its source code help reduce system comprehension
effort. When updating software, developers add, remove, or modify features according to user demand.
During the maintenance and evolution of software, requirement traceability links become outdated because developers rarely
devote effort to updating them; recovering traceability links later is a painful, tedious, and costly task.
In fact, developers usually do not update requirement traceability links together with the source code. Moreover, requirements
and source code use different vocabularies, which decreases their textual similarity [2].
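The vocabulary gap noted above is why traceability recovery tools typically normalize source-code identifiers before comparing them with requirement text. A minimal sketch of such identifier splitting (the example identifiers are invented for illustration):

```python
import re

def split_identifier(identifier):
    """Split a source-code identifier into lowercase word tokens.

    Handles underscores ("trace_link") and camelCase ("TraceabilityLink"),
    so code terms can be compared against natural-language requirements.
    """
    words = []
    for part in re.split(r"_+", identifier):
        # Acronym runs, capitalized words, lowercase words, and digits.
        words.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", part))
    return [w.lower() for w in words if w]

print(split_identifier("recoverTraceabilityLinks"))  # ['recover', 'traceability', 'links']
print(split_identifier("HTTP_request_parser"))       # ['http', 'request', 'parser']
```

After splitting, the tokens of a code unit can be matched against requirement text with any standard text-similarity measure.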
REASONS FOR REQUIREMENTS TRACEABILITY
Traceability serves the needs of many stakeholders (project sponsors, project managers, analysts, designers,
maintainers, and end users) according to their needs, priorities, and goals. Requirements traceability is a characteristic
of a system in which the requirements are clearly linked to their sources and to the artifacts created from them during the system
development life cycle [15].
In the requirements engineering and elicitation phase, it is important to capture the rationales and sources of requirements
in order to support requirements development and confirmation [15].
Modifications in design appear, e.g., if the requirements evolve or if the system is developed incrementally [14].
During the design phase, requirements traceability helps to keep track of which change requests have been implemented before
a system is redesigned. Traceability can also impart information about the justifications, important decisions, and
assumptions related to requirements [15].
Modifications after the delivery of a system occur for various reasons (e.g., a changing environment); such
modifications are called system evolution [11]. Empirical studies show that even experienced software professionals
predict incomplete sets of change impacts [17]. With complete traceability, a more accurate cost and
schedule for a change can be determined, instead of depending solely on an expert engineer or programmer [15].
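The change-impact estimation described above can be pictured as a reachability query over a graph of trace links: every artifact transitively reachable from the changed one belongs to the impact set. A minimal sketch, with invented artifact names:

```python
from collections import deque

def impact_set(links, changed_artifact):
    """Return all artifacts transitively reachable from a changed artifact.

    `links` maps an artifact to the artifacts that depend on it, e.g.
    requirement -> design element -> source files (breadth-first search).
    """
    seen = set()
    queue = deque([changed_artifact])
    while queue:
        node = queue.popleft()
        for dependent in links.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical trace links: a requirement traces to a design element,
# which in turn traces to two source files.
links = {
    "REQ-7": ["DES-3"],
    "DES-3": ["src/auth.py", "src/session.py"],
}
print(sorted(impact_set(links, "REQ-7")))  # ['DES-3', 'src/auth.py', 'src/session.py']
```

With complete links, the size of this set gives a first approximation of the cost of the change; with missing links, artifacts are silently omitted from it.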
A SURVEY OF RELEVANT LITERATURE
This research work is related to the topics of traceability recovery, feature location, and trust models. Traceability
approaches can be divided into three main categories: dynamic, static, and hybrid.
Dynamic traceability approaches [9] require a system to be compilable and executable in order to perform traceability
creation tasks; they also require pre-defined scenarios for executing the software system.
A dynamic approach collects and analyzes execution traces [9] to identify which methods have been executed in
a particular scenario. However, it cannot distinguish between overlapping scenarios, because a single method may be
executed in several of them. Moreover, due to bugs and/or other issues, a legacy system may not be executable, in
which case collecting execution traces is not possible.
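The trace collection step of a dynamic approach can be illustrated with Python's built-in `sys.settrace` hook, which reports a "call" event for every function entered while a scenario runs. This is only a sketch (real tools also record modules, classes, and call order), and the scenario functions below are hypothetical:

```python
import sys

def collect_executed_functions(scenario):
    """Run a scenario callable and record every Python function entered."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "call":
            executed.add(frame.f_code.co_name)
        return None  # no per-line tracing needed, only call events

    sys.settrace(tracer)
    try:
        scenario()
    finally:
        sys.settrace(None)  # always remove the hook
    return executed

# Hypothetical "login" feature: executing it exercises these two functions,
# so both become candidate links for the login requirement.
def check_password():
    return True

def login():
    check_password()

print(collect_executed_functions(login))  # a set containing 'login' and 'check_password'
```

Note that if two scenarios both execute `check_password`, the trace alone cannot say which feature the method belongs to, which is exactly the overlapping-scenario limitation mentioned above.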
Static traceability approaches [8], [14] use source code structure and/or textual information to recover traceability
links between high-level and low-level software artifacts.
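As a minimal sketch of such a static, IR-style recovery (the artifact names and texts below are invented for illustration, and a real system would use a proper IR model such as TF-IDF or LSI), candidate links can be ranked by the textual similarity between a requirement and each source file:

```python
import math
import re
from collections import Counter

def tokens(text):
    # Lowercase word tokens; compound identifiers are split only naively here.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

# Hypothetical artifacts: one requirement and the textual content of two source files.
requirement = "The user shall log in with a password"
sources = {
    "Login.java": "class Login validate user password session",
    "Report.java": "class Report render chart export pdf",
}

# Rank candidate links by similarity; a threshold would filter out weak links.
ranked = sorted(sources, key=lambda f: cosine(tokens(requirement), tokens(sources[f])), reverse=True)
print(ranked[0])  # → Login.java
```

Note that this sketch works purely on text, which is exactly why such static approaches need no executable system.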
Many researchers have preferred mining software repositories [9] to recover traceability links. One such approach
recovers links between software artifacts by taking the version history of the software system into account: it
assumes that if two files co-change, there must be a potential link between them. However, in certain cases two files
co-change without having any semantic relationship. Moreover, some software artifacts, e.g., requirement
specifications, may not be kept under version control, in which case such an approach cannot find links from or to
these documents. Hybrid traceability approaches [4] combine static and dynamic information; studies show that this
combination can perform better than a single IR technique alone.
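The co-change heuristic used by repository-based recovery can be sketched as follows; the commit history here is invented for illustration, and a real implementation would mine an actual version-control repository:

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit history: each commit lists the files it touched.
commits = [
    {"Login.java", "LoginTest.java"},
    {"Login.java", "LoginTest.java", "README.md"},
    {"Report.java"},
    {"Login.java", "LoginTest.java"},
]

# Count how often each file pair co-changes across commits.
co_change = Counter()
for commit in commits:
    for pair in combinations(sorted(commit), 2):
        co_change[pair] += 1

# Pairs above a support threshold become candidate traceability links.
threshold = 2
candidates = [pair for pair, n in co_change.items() if n >= threshold]
print(candidates)  # → [('Login.java', 'LoginTest.java')]
```

The sketch also shows the heuristic's weakness noted above: files that co-change incidentally (e.g., a README touched alongside code) would pass a low enough threshold despite having no semantic relationship.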
The results achieved by static approaches show that they do not require an executable software system; thus, static
traceability approaches can be applied to a system that contains bugs or is not executable.
DIFFICULTIES IN REQUIREMENT TRACEABILITY
Apart from the benefits that traceability offers to the software engineering industry, there are many difficulties in
practice: the cost in time and effort, the difficulty of maintaining traceability through change, the different views
on traceability held by various project stakeholders, organizational problems and politics, and poor tool support.
COST: One of the biggest challenges facing the implementation of traceability is the cost involved. As a system
grows in size and complexity, capturing the requirement traces quickly becomes complex and expensive [10], and the
budget of a project can be overrun as a result. However, traceability can be established early in the development
process, when it is still easy and cheap.
One can save a significant amount of effort by focusing traceability activities on the most important requirements,
although this requires a clear understanding of each requirement in the system, and it may not be an option if full
tracing is required by the customer or by the development process standards used for the project.
On the other hand, the high cost incurred for traceability can save a much larger sum later in the life of a software
project. This does not eliminate the high cost of implementing traceability, but considering the long-term benefits,
the incurred cost is worthwhile because of what it can save.
MANAGING VARIATIONS: Maintaining traceability in changing situations is another challenge. Experts assume
that change is inevitable in the life of any artifact, and a software project is no exception. When a change occurs,
the traceability data must be updated to reflect it [12]. Updating the traceability data requires separate effort,
which can be costly and time-consuming if the change is extensive.
However, no single discipline is universal and applicable to all changes under every circumstance, and dealing with
change and its impact on traceability is not trivial. Some tools can help identify the impact of a change on existing
traceability data, but considerable effort is still required to update it [13]. At the same time, training can help
users understand the importance of discipline in maintaining traceability data.
Keeping an eye on the long-term benefits, developers accept short-term costs to sustain the organization.
TYPES OF TRACEABILITY
Over the years, several other terms related to requirements traceability have been established. According to Winkler
& von Pilgrim [6], the most common ones are pre-requirements specification, post-requirements specification,
forward, backward, horizontal, and vertical traceability. These terms are shown in Fig. 1 and described in detail in
the following.
Fig. 1 Different Types of Traceability
Gotel & Finkelstein [19] introduced the classification into pre-requirements specification (pre-RS) traceability
and post-requirements specification (post-RS) traceability. Pre-RS traceability concerns those aspects of a
requirement's life before its inclusion in the RS, i.e., all traces that occur during the elicitation, discussion, and
agreement of requirements; this includes dealing with informal, conflicting, or overlapping information [6]. Post-RS
traceability concerns those aspects of a requirement's life that result from its inclusion in the RS, i.e., all traces
that occur during the stepwise implementation of the requirements in the design and coding phases; it includes
documenting the traces of the various manual and automatic transformation steps that eventually produce the system
[6].
The SRS literature introduced the terms backward traceability and forward traceability. Backward traceability refers
to the ability to follow a traceability link from a particular artifact back to the sources from which it has been
derived. Forward traceability stands for following traceability links to the artifacts that have been derived from
the artifact under consideration.
Ramesh & Edwards [20] have introduced the terms horizontal traceability and vertical traceability. These terms are
used for the traceability links of an artifact belonging to the same project phase or level of abstraction (horizontal),
and links between artifacts belonging to different ones (vertical) [6].
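Forward and backward traceability can be illustrated with a small link graph; the artifact names and links below are hypothetical. Following links in their stored direction yields forward traces, while traversing the reversed graph yields backward traces:

```python
# Hypothetical trace links stored as directed edges: source artifact -> derived artifacts.
links = {
    "REQ-1": ["DES-1"],
    "DES-1": ["Login.java"],
    "REQ-2": ["DES-2"],
}

def forward(artifact):
    # Forward traceability: everything derived (transitively) from the artifact.
    out, stack = set(), [artifact]
    while stack:
        for child in links.get(stack.pop(), []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

def backward(artifact):
    # Backward traceability: the sources from which the artifact was derived.
    reverse = {}
    for src, targets in links.items():
        for t in targets:
            reverse.setdefault(t, []).append(src)
    out, stack = set(), [artifact]
    while stack:
        for parent in reverse.get(stack.pop(), []):
            if parent not in out:
                out.add(parent)
                stack.append(parent)
    return out

print(sorted(forward("REQ-1")))       # → ['DES-1', 'Login.java']
print(sorted(backward("Login.java"))) # → ['DES-1', 'REQ-1']
```

In this picture, horizontal links would connect artifacts within one level (e.g., design to design) and vertical links would cross levels, but the traversal itself is identical.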
Complementing the above definitions, Winkler & von Pilgrim [6] define the essential traceability concepts as follows:
1. Traceability means the ability to describe and follow the life of a software artifact, in the sense of the
generalized definition presented by [19].
2. A trace is a piece of (implicit or explicit) information that serves as an indication or evidence of what has
existed or happened.
3. Finally, a traceability link is, as already stated, a relation used to interrelate artifacts (e.g., by causality,
content, etc.). Following the notion of a trace, a traceability link is a more concrete (but not the only) form of
information that can be used to describe and follow certain aspects of the life of the related software artifacts
[16].
REPRESENTING TRACEABILITY
Prior to any software evolution task, a developer has to comprehend the project landscape [4]: in particular, the
system architecture, design, implementation, and the relations between the various artifacts, using whatever
documents are available. Program comprehension occurs in a bottom-up manner, a top-down manner, or some combination
thereof [3]. Developers use different types of knowledge during program comprehension, ranging from domain-specific
knowledge to general programming knowledge. Traceability links between source code and sections of the
documentation, e.g., requirements, aid both top-down and bottom-up comprehension [1].
Traceability links between the requirements of a system and its source code are helpful in reducing comprehension
effort. Requirements traceability is defined in [4], [5] as "the ability to describe and follow the life of a
requirement, in both a forwards and backwards direction". This traceability information also helps in software
maintenance and evolution tasks.
In order to use traceability links, it is necessary to represent them in a form that is appropriate for their purpose.
Several different ways exist to represent traceability links (traceability matrices, graphical models, cross
references), all of which are supported by tools. Wieringa [18] distinguishes between three different kinds of
traceability representation, while [7] represent artifacts and the traceability links between them as a graph:
• TRACEABILITY MATRICES: Traceability links are represented in matrix form: the horizontal and vertical
dimensions list the artifacts to be linked, and the entries in the matrix represent links between them [18].
• GRAPHICAL MODELS: Entity Relationship Model (ERM) is also used to represent traceability links. Also,
various UML diagrams support the representation of traceability links embedded in the different development
models [18].
• CROSS REFERENCES: Traceability links between artifacts are represented as links, pointers or annotations in
the text [18].
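A traceability matrix in the sense of [18] can be sketched as a simple mapping from (requirement, artifact) pairs to link indicators; the requirements and files below are hypothetical:

```python
# Hypothetical requirements (rows) and source files (columns);
# an entry in trace_links marks a traceability link between the two artifacts.
requirements = ["REQ-1", "REQ-2"]
files = ["Login.java", "Report.java"]
trace_links = {("REQ-1", "Login.java"), ("REQ-2", "Report.java")}

# Render the matrix: 'X' where a link exists, '.' otherwise.
print(" " * 7 + " ".join(f"{f:>12}" for f in files))
for r in requirements:
    cells = " ".join(f"{'X' if (r, f) in trace_links else '.':>12}" for f in files)
    print(f"{r:<7}{cells}")
```

The same set of pairs could equally back a cross-reference view (annotations in the text) or a graph view as in [7]; the matrix is just one rendering of the underlying link relation.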
CONCLUSION
Traceability plays a vital role in developing software requirements, and it plays an equally vital role in the
maintenance of software. Creating traceability links manually is costly and laborious work. There is still a need to
make the creation of traceability links cheaper; in short, a standard solution should be formed. A requirements
specification for requirements traceability should be formed alongside all such investigations, driving both their
direction and focus.
REFERENCES
[1] Prashant N. Khetade, Vinod V. Nayyar, "Establishing Traceability Links Between the Source Code and Requirement Analysis: A Survey on Traceability," Int'l Conf. on Advances in Engg & Tech (ICAET-2014), IOSR-JCE, e-ISSN: 2278-0661, p-ISSN: 2278-8727, pp. 66-70.
[2] S. Muthamizharasi, J. Selvakumar, M.Rajaram “Advanced Matching Technique for Trustrace To Improve The Accuracy Of Requirement”
Int’l Journal of Innovative Research in Science, Engg, and Tech -(ICETS’14) Volume 3, Special Issue 1, February 2014
[3] N. Ali, Y.-G. Guéhéneuc, and G. Antoniol, "Trustrace: Mining Software Repositories to Improve the Accuracy of Requirement Traceability Links," IEEE Trans. Software Eng., vol. 39, no. 5, pp. 725-741, May 2013.
[4] N. Ali, Y.-G. Guéhéneuc, and G. Antoniol, "Trust-Based Requirements Traceability," Proc. 19th IEEE Int'l Conf. Program Comprehension, S.E. Sim and F. Ricca, eds., pp. 111-120, June 2011.
[5] N. Ali, Y.-G. Guéhéneuc, and G. Antoniol, "Factors Impacting the Inputs of Traceability Recovery Approaches," A. Zisman, J. Cleland-Huang, and O. Gotel, eds., Springer-Verlag, 2011.
[6] Winkler, S., & von Pilgrim, J. A survey of traceability in requirements engineering and model-driven development. Software & Systems Modeling, vol. 9, issue 4, pp. 529-565 (2010).
[7] Schwarz, H., Ebert, J., and Winter, A. Graph-based traceability: a comprehensive approach. Software and Systems Modeling (2009)
[8] J. H. Hayes, G. Antoniol, and Y.-G. Guéhéneuc, "PREREQIR: Recovering Pre-Requirements via Cluster Analysis," Proc. 15th Working Conf. Reverse Eng., pp. 165-174, Oct. 2008.
[9] D. Poshyvanyk, Y.-G. Guéhéneuc, A. Marcus, G. Antoniol, and V. Rajlich, "Feature Location Using Probabilistic Ranking of Methods Based on Execution Scenarios and Information Retrieval," IEEE Trans. Software Eng., vol. 33, no. 6, pp. 420-432, June 2007.
[10] Heindl, Matthias, and Stefan Biffl. A Case Study on Value-Based Requirements Tracing. Proc. of the 10th European Software Engineering Conference. Lisbon, Portugal, 2005: 60-69.
[11] Lehman, M., Ramil, J. Software Evolution – Background, Theory, Practice Information Processing Letters, Vol. 88, Issues 1-2, October
2003, pages 33-44
[12] von Knethen, A. Change-Oriented Requirements Traceability: Support for Evolution of Embedded Systems. Proc. of International Conference on Software Maintenance, October 2002, pages 482-485.
[13] Cleland-Huang, Jane, Carl K. Chang, and Yujia Ge. Supporting Event Based Traceability Through High-Level Recognition of Change Events. Proc. of the 26th Annual International Computer Software and Applications Conference on Prolonging Software Life: Development and Redevelopment. Oxford, England, 2002: 595-602.
[14] G. Antoniol, G. Canfora, G. Casazza, A.D. Lucia, and E. Merlo, “Recovering Traceability Links between Code and Documentation,” IEEE
Trans. Software Eng., vol. 28, no. 10, pp. 970-983, Oct. 2002.
[15] Ramesh, B., Jarke, M. Toward Reference Models for Requirements Traceability IEEE Transactions on Software Engineering, Vol. 27, No.
1, January 2001, pages 58-93
[16] Clarke, Siobhán, et al. Subject Oriented Design: Towards Improved Alignment of Requirements, Design, and Code. Proc. of the 1999
ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications. Dallas, TX: 325-329.
[17] Lindvall, M., Sandahl, K. How well do experienced software developers predict software change? The Journal of Systems and Software 43,
1998, pages 19-27
[18] Wieringa, R. An Introduction to Requirements Traceability. Technical Report IR-389, Faculty of Mathematics and Computer Science (1995)
[19] Gotel, O. & Finkelstein, A. An analysis of the requirements traceability problem. In Proceedings of the First Int’l Conf. on Requirements
Engineering, pp. 94-101 (1994)
[20] Ramesh, B., Edwards, M. Issues in the development of a requirements traceability model. In Proceedings of the IEEE International
Symposium on Requirements Engineering, pp. 256-259 (1993)