This document discusses an efficient tool for building a reusable software component taxonomy. It proposes integrating two existing classification schemes to develop a prototype system. Specifically:
- It proposes an integrated classification scheme, combining existing schemes, to classify and store reusable software components in a repository for efficient retrieval.
- A prototype integrating two existing classification schemes was developed to demonstrate the proposed approach, which aims to address the limitations of current component retrieval methods.
The document describes a proposed system for classifying and retrieving reusable software components from a repository. It begins with an introduction to software reuse and existing component classification schemes, such as free text, enumerated, attribute-value, and faceted classification. The proposed system uses an integrated classification scheme that combines attribute-value and faceted classification. Components are classified using attributes like operating system, language, keywords, inputs, outputs, domain, and version. The system includes algorithms for inserting new components and searching for relevant components matching given attributes. Matching components are returned along with a download option. The goal is to take advantage of different classification schemes to improve search and retrieval of reusable software components.
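A minimal sketch of the attribute-value side of such a scheme (the attribute names and components below are invented for illustration, not taken from the paper): a component matches a query when every attribute in the query equals the component's stored value.

```python
# Toy component repository: each component is a flat attribute-value record.
REPOSITORY = [
    {"name": "csv_parser", "language": "Python", "os": "Linux", "domain": "parsing"},
    {"name": "tcp_server", "language": "C", "os": "Linux", "domain": "networking"},
]

def search(query):
    """Return components whose attributes match every key/value in the query."""
    return [c for c in REPOSITORY
            if all(c.get(k) == v for k, v in query.items())]

def insert(component):
    """Classify (store) a new component in the repository."""
    REPOSITORY.append(component)

matches = search({"language": "Python"})
```

A real system would pair this with facet descriptors and a download step for each match; the sketch only shows the exact-match retrieval logic.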
This document discusses cost estimation models for identifying faulty objects when reusing object-oriented software components. It begins by introducing the benefits of reusable object-oriented components for reducing software development cost and time. However, these existing components sometimes contain "faulty objects" that are incompatible with new software systems and cause errors. The paper focuses on developing approaches to identify these faulty objects within reusable components. It reviews previous work on testing and fault detection techniques, and discusses the need to reengineer existing components in order to estimate costs and identify incompatible objects. The document proposes a reengineering cost model that considers the number of reused objects and their attributes to estimate the effort of identifying faulty components. The goal is to determine whether reusing or redeveloping a component is the more cost-effective choice.
TECHNIQUES FOR COMPONENT REUSABLE APPROACH (cscpconf)
This document discusses techniques for component reuse using a component retrieval approach. It proposes using UML models stored in MDL file format to retrieve relevant software components based on structural information like class names and relationships. A tool called a "smart environment" is described that can search a repository of MDL files and source code based on class diagrams or use case diagrams to find the best matching components for reuse. Weights are assigned to different model elements to return search results in order of closest match. The approach aims to improve on keyword-based searching by matching design specifications.
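The weighted-matching idea can be sketched as follows (the element kinds, weights, and model names are illustrative assumptions, not the tool's actual values): each kind of model element carries a weight, and candidates are ranked by the total weight of elements they share with the query diagram.

```python
# Hypothetical weights per model-element kind (higher = more significant).
WEIGHTS = {"class": 3, "relationship": 1}

def score(query, candidate):
    """Total weight of model elements shared between query and candidate."""
    return sum(w * len(set(query.get(kind, [])) & set(candidate.get(kind, [])))
               for kind, w in WEIGHTS.items())

def rank(query, repository):
    """Candidates ordered by closest match (highest score first)."""
    return sorted(repository, key=lambda c: score(query, c), reverse=True)

query = {"class": ["Order", "Customer"], "relationship": ["Order->Customer"]}
repository = [
    {"name": "billing", "class": ["Order", "Invoice"], "relationship": []},
    {"name": "crm", "class": ["Order", "Customer"], "relationship": ["Order->Customer"]},
]
best = rank(query, repository)[0]
```

The real tool parses MDL files to obtain the element sets; here they are given directly.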
Ontology Based Approach for Semantic Information Retrieval System (IJTET Journal)
Abstract—Information retrieval plays an important role in current search engines, which perform searches based on keywords and return an enormous amount of data from which the user cannot pick out the essential, most important information. This limitation may be overcome by the semantic web, a web architecture that replaces the keyword-based search technique with the conceptual, or semantic, search technique. Natural language processing is commonly used in question-answering systems to accept users' questions, which are converted step by step into query form to retrieve an exact answer. In conceptual search, the engine interprets the meaning of the user's query and the relations among the concepts a document contains with respect to a particular domain, producing specific answers instead of lists of results. In this paper, we propose an ontology-based semantic information retrieval system built on the Jena semantic web framework: the user's input query is parsed with the Stanford Parser, a triplet extraction algorithm is applied, and for each input query a SPARQL query is formed and fired against the knowledge base (ontology), which finds the appropriate RDF triples and retrieves the relevant information using the Jena framework.
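The triple-retrieval step can be illustrated with a toy in-memory knowledge base (the names and predicate below are invented; a real system would use the Jena framework and a full SPARQL engine rather than this hand-rolled matcher):

```python
# RDF-style knowledge base of (subject, predicate, object) triples.
KB = {
    ("Paris", "isCapitalOf", "France"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("France", "locatedIn", "Europe"),
}

def match(pattern):
    """Return triples matching a pattern; None acts as a query variable,
    in the spirit of a SPARQL basic graph pattern like (?x isCapitalOf ?y)."""
    return [t for t in KB
            if all(p is None or p == v for p, v in zip(pattern, t))]

capitals = match((None, "isCapitalOf", None))
```

An extracted triplet from a parsed question supplies the bound positions of the pattern, and the unbound positions become the answer.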
IRJET- Determining Document Relevance using Keyword Extraction (IRJET Journal)
This document describes a system that aims to search for and retrieve relevant documents from a large collection based on a user's query. It does this through three main components: keyword extraction, document searching, and a question answering bot. Keyword extraction is done using the TF-IDF algorithm to identify important words in documents. These keywords are stored in a database along with their TF-IDF weights. When a user submits a query, the system searches for documents containing keywords from the query and returns relevant results. It also includes a feedback mechanism for users to improve search accuracy over time. The goal is to deliver accurate search results quickly from large document collections.
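A minimal sketch of the TF-IDF step on a toy corpus (the documents are invented for illustration): a term's weight grows with its frequency in a document and shrinks when it appears in many documents.

```python
import math

# Toy corpus: document id -> list of tokens.
docs = {
    "d1": "software reuse reduces cost and development time".split(),
    "d2": "bug reports describe software faults".split(),
}

def tf_idf(term, doc_id):
    """Term frequency in the document times inverse document frequency."""
    doc = docs[doc_id]
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs.values() if term in d)
    return tf * math.log(len(docs) / df)

def keywords(doc_id, k=3):
    """Top-k terms of a document by TF-IDF weight."""
    return sorted(set(docs[doc_id]),
                  key=lambda t: tf_idf(t, doc_id), reverse=True)[:k]
```

Note that "software", appearing in every document, gets weight zero and is never extracted as a keyword, which is exactly the filtering effect the system relies on.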
A Novel Optimization towards Higher Reliability in Predictive Modelling towar... (IJECEIAES)
Although the area of software engineering has made remarkable progress in the last decade, the concept of code reusability has received comparatively little attention. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. Our review of existing work finds no standard research approach toward code reusability introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study of a near-real-time challenge and incorporate it into our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
This document summarizes a research paper on developing a feature-based product recommendation system. It begins by introducing recommender systems and their importance for e-commerce. It then describes how the proposed system takes basic product descriptions as input, recognizes features using association rule mining and k-nearest neighbor algorithms, and outputs recommended additional features to improve the product profile. The paper evaluates the system's performance on recommending antivirus software features.
The document describes an automated process for bug triage that uses text classification and data reduction techniques. It proposes using Naive Bayes classifiers to predict the appropriate developers to assign bugs to by applying stopword removal, stemming, keyword selection, and instance selection on bug reports. This reduces the data size and improves quality. It predicts developers based on their history and profiles while tracking bug status. The goal is to more efficiently handle software bugs compared to traditional manual triage processes.
1) The document discusses using data reduction techniques like instance selection and feature selection to reduce the scale and improve the quality of bug data for more effective bug triage.
2) It combines instance selection and feature selection to simultaneously reduce the number of bug reports (instances) and words (features) in bug data.
3) It evaluates the reduced bug data on two large open source projects and finds that combining the techniques can increase the accuracy of bug triage while reducing the data scale.
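A schematic sketch of the two reductions on a toy bag-of-words bug dataset (the literature uses dedicated algorithms for both steps; the frequency filtering and duplicate removal below are simplifications chosen for brevity):

```python
from collections import Counter

def select_features(reports, k):
    """Feature selection: keep only the k most frequent words overall."""
    counts = Counter(w for words, _ in reports for w in words)
    kept = {w for w, _ in counts.most_common(k)}
    return [([w for w in words if w in kept], dev) for words, dev in reports]

def select_instances(reports):
    """Instance selection: drop reports duplicating an earlier word set + developer."""
    seen, kept = set(), []
    for words, dev in reports:
        key = (tuple(sorted(set(words))), dev)
        if key not in seen:
            seen.add(key)
            kept.append((words, dev))
    return kept

reports = [
    (["crash", "on", "start"], "alice"),
    (["crash", "on", "start"], "alice"),
    (["crash", "ui", "glitch"], "bob"),
]
deduped = select_instances(reports)      # drops the duplicate report
reduced = select_features(deduped, k=1)  # keeps only the most frequent word
```

The papers' central question, which order to apply the two steps in, does not matter for this toy data but does for real datasets.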
A DATA EXTRACTION ALGORITHM FROM OPEN SOURCE SOFTWARE PROJECT REPOSITORIES FO... (ijseajournal)
This document presents an algorithm to extract data from open source software project repositories on GitHub for building duration estimation models. The algorithm extracts data on contributors, commits, lines of code added and removed, and active days for each contributor within a release period. This data is then used to build linear regression models to estimate project duration based on the number of commits by contributor type (full-time, part-time, occasional). The algorithm is tested on data extracted from 21 releases of the WordPress project hosted on GitHub.
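The modelling step can be sketched as an ordinary least-squares fit of release duration against commit count (the data below is invented, and the paper's actual models use separate commit counts per contributor type rather than a single regressor):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Toy release history: commits per release vs. release duration in days.
commits = [100, 200, 300, 400]
days = [30, 55, 80, 105]
slope, intercept = fit_line(commits, days)
predicted = slope * 250 + intercept  # estimated duration for 250 commits
```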
This document discusses data reduction techniques for improving bug triage in software projects. It proposes combining instance selection and feature selection to simultaneously reduce the scale of bug data on both the bug dimension and word dimension, while also improving the accuracy of bug triage. Historical bug data is used to build a predictive model to determine the optimal order of applying instance selection and feature selection for a new bug data set. The techniques are empirically evaluated on 600,000 bug reports from the Eclipse and Mozilla open source projects, showing the approach can effectively reduce data scale and improve triage accuracy.
An Efficient Approach for Requirement Traceability Integrated With Software R... (IOSR Journals)
Abstract: Traceability links between the requirements of a system and its source code help reduce system comprehension effort. During software updates and maintenance, traceability links become invalid because developers may modify or remove features of the source code. Hence, to acquire trustworthy links from a system's source code, a supervised link tracing approach is proposed here. In the proposed approach, IR techniques are applied to the source code and requirements document to generate baseline traceability links. Concurrently, software repositories are mined to generate validating traceability links, i.e. Histrace links, which are then treated as experts. A trust model named DynWing ranks the different types of experts, dynamically assigning weights to them during ranking. The top-ranked experts are then fed to a trust model named Trumo, which validates the baseline links against them and finds the trustworthy links within the baseline link set. While validating the links, Trumo can discard or re-rank experts to find the most traceable links. The proposed approach improves the precision and recall values of the traceability links.
Index Terms: Traceability, requirements, features, source code, repositories, experts.
TOWARDS EFFECTIVE BUG TRIAGE WITH SOFTWARE DATA REDUCTION TECHNIQUES (Shakas Technologies)
This document summarizes an approach for data reduction in software bug triage. It combines instance selection and feature selection techniques to simultaneously reduce the number of bug reports (instances) and words (features) in bug datasets. This aims to create smaller, higher quality datasets that improve the accuracy of automatic bug triage while reducing labor costs. It evaluates different instance selection, feature selection, and their combination methods on large bug datasets from Eclipse and Mozilla projects. The results show the proposed data reduction approach can effectively shrink dataset sizes and boost bug triage accuracy.
Performance Evaluation of Query Processing Techniques in Information Retrieval (idescitation)
The first element of the search process is the query. The user query, being on average restricted to two or three keywords, is ambiguous to the search engine. Given the user query, the goal of an Information Retrieval (IR) system is to retrieve information that might be useful or relevant to the user's information need; hence query processing plays an important role in an IR system. Query processing can be divided into four categories: query expansion, query optimization, query classification, and query parsing. In this paper an attempt is made to evaluate the performance of query processing algorithms in each category. The evaluation is based on the dataset specified by the Forum for Information Retrieval [FIRE15], with precision and relative recall as the evaluation criteria, and the analysis rests on the importance of each step in query processing. The experimental results show the significance of each step in query processing, as well as the relevance of web semantics and spelling correction in the user query.
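As a toy illustration of the query-expansion category (the synonym table is a hand-made stand-in for a resource such as WordNet, not part of the evaluated systems):

```python
# Hand-made synonym table standing in for a lexical resource.
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "fix": ["repair", "patch"],
}

def expand(query):
    """Expand each query keyword with its synonyms, keeping the original term."""
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    return terms
```

Expansion gives the engine more terms to match against, which is why it helps short two- or three-keyword queries in particular.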
The document presents an approach for improving requirement traceability by integrating it with software repositories. It proposes a supervised link tracing approach called Trustrace that uses information retrieval (IR) techniques to generate baseline traceability links between requirements and source code. It then mines software repositories like version control systems to generate validating traceability links called "Histrace links" from experts. A trust model called DynWing is used to dynamically assign weights to different expert links in ranking the baseline links. The top ranked expert links are then fed to another trust model called Trumo, which validates the baseline links and finds the most trustable links, improving precision and recall over basic IR techniques.
This document presents an approach to populating a release history database from version control and bug tracking systems. It combines data from CVS version control and Bugzilla bug tracking for the Mozilla project to analyze software evolution. The paper describes related work, outlines the data import process, and evaluates the approach by examining timescales, release history, and coupling for the Mozilla project. It concludes that this approach provides insights into a project's evolutionary processes but that more formal integration with version control could improve the analysis.
Survey on Software Data Reduction Techniques Accomplishing Bug Triage (IRJET Journal)
This document discusses various techniques for software data reduction to improve the accuracy of bug triage. It first provides background on bug triage and the challenges it aims to address like large volumes of low quality bug data. It then surveys literature on related techniques like automated test generation and text mining approaches. The document describes various text mining methods like term-based, phrase-based, concept-based and pattern taxonomy methods. It also covers data reduction techniques and their benefits for bug triage. Different classification techniques for bug identification are explained, including decision trees, nearest neighbor classifier and artificial neural networks.
A Survey on Bug Tracking System for Effective Bug Clearance (IRJET Journal)
This document discusses bug tracking systems and methods for effective bug clearance. It describes how software organizations spend a large amount of resources handling bugs. It then summarizes an approach that uses instance selection and feature selection methods to classify bugs which are then assigned to bug solving experts based on their experience. A history of cleared bugs is also maintained to help resolve similar bugs faster. The goal is to reduce the time and costs involved in clearing bugs.
Developers spend much of their time understanding unfamiliar code during software maintenance tasks. The study found that developers interleaved three activities: searching for relevant code using manual and tool-based searches, following code dependencies, and collecting relevant code by encoding it in Eclipse interfaces. However, developers' searches were often unsuccessful due to misleading cues, and navigation tools caused overhead. Developers lost track of collected code, forcing re-finding. On average, developers spent 35% of time on redundant navigation. This suggests a new model of understanding as searching, relating, and collecting relevant information, and ideas for more effective tools.
Text Document categorization using support vector machine (IRJET Journal)
This document discusses using support vector machines for text document categorization. It begins with an abstract that introduces text categorization and automatic classification of documents into predefined categories based on content. The document then discusses related work on text categorization using machine learning techniques. It presents the system architecture for text categorization, which involves learning, term extraction, and classification processes. The implementation section discusses preprocessing text data, term extraction using TF-IDF weighting, and classification using support vector machines.
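The classification step of a linear SVM can be illustrated with fixed toy weights (the vocabulary, weights, and categories below are invented, not a trained model, and plain term frequency stands in for the TF-IDF weighting the paper uses): a document is assigned by the sign of w.x + b over its term vector.

```python
# Hypothetical vocabulary and learned weights for a two-class linear SVM.
VOCAB = ["goal", "match", "election", "vote"]
W = [1.2, 0.8, -1.0, -1.1]  # invented weights, positive = "sports"
B = 0.1

def vectorize(text):
    """Plain term-frequency vector over VOCAB (TF-IDF in the real system)."""
    words = text.lower().split()
    return [words.count(t) / len(words) for t in VOCAB]

def classify(text):
    """Decision rule of a trained linear SVM: sign of w.x + b."""
    x = vectorize(text)
    score = sum(w * xi for w, xi in zip(W, x)) + B
    return "sports" if score > 0 else "politics"
```

Training, i.e. finding W and B by maximizing the margin, is the part the paper delegates to an SVM library; only the decision rule is shown here.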
Open domain question answering system using semantic role labeling (eSAT Publishing House)
1. The document describes a proposed open domain question answering system that uses semantic role labeling to extract answers from documents retrieved from the web.
2. The system consists of three modules: question processing, document retrieval, and answer extraction. Semantic role labeling is used in the answer extraction module to identify answers based on the question type.
3. An evaluation of the proposed system showed it achieved higher accuracy compared to a baseline system using only pattern matching for answer extraction.
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUES (ijaia)
The purpose of software defect prediction is to improve the quality of a software project by building a predictive model that decides whether a software module is fault-prone. In recent years, much research has applied machine learning techniques to this topic. Our aim was to evaluate the performance of clustering techniques combined with feature selection schemes for the software defect prediction problem. We analysed the National Aeronautics and Space Administration (NASA) dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) self-organizing map (SOM). To evaluate different feature selection algorithms, this article presents a comparative analysis of software defect prediction based on Bat, Cuckoo, Grey Wolf Optimizer (GWO), and particle swarm optimizer (PSO). The results obtained with the proposed clustering models enabled us to build an efficient predictive model with a satisfactory detection rate and an acceptable number of features.
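One of the named algorithms, Farthest First, can be sketched in a few lines (the 2-D points below are toy data; the study runs it on the NASA benchmarks with many software metrics as dimensions): each new cluster centre is the point farthest from all centres chosen so far, then every point is assigned to its nearest centre.

```python
def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def farthest_first(points, k):
    """Pick k centres: start anywhere, then repeatedly take the point
    whose distance to its nearest chosen centre is largest."""
    centres = [points[0]]
    while len(centres) < k:
        centres.append(max(points,
                           key=lambda p: min(dist(p, c) for c in centres)))
    return centres

def assign(points, centres):
    """Label each point with the index of its nearest centre."""
    return [min(range(len(centres)), key=lambda i: dist(p, centres[i]))
            for p in points]

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
centres = farthest_first(pts, 2)
labels = assign(pts, centres)
```

In defect prediction, the resulting clusters are then inspected to label fault-prone groups of modules.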
ANALYSIS OF ENTERPRISE SHARED RESOURCE INVOCATION SCHEME BASED ON HADOOP AND R (ijaia)
The response rate and performance of enterprise resource calls have become an important measure of differences in enterprise user experience. An efficient enterprise shared-resource calling system can significantly improve the office efficiency of enterprise users and the fluency of their resource calls. Hadoop has powerful data integration and analysis capabilities for resource extraction, while R has excellent statistical capabilities and can decompose and display called resources in a personalized way. This article proposes an integration plan for enterprise shared-resource invocation based on Hadoop and R to further improve the efficiency of enterprise users' shared-resource utilization, improve the efficiency of system operation, and bring enterprise users a higher level of user experience. First, we use Hadoop to extract the shared resources required by enterprise users from nearby resource storage rooms and terminal equipment to increase the call rate, and use R's functional attributes to convert the user's search results into linear correlations, displayed in order of correlation strength, so as to improve response speed and experience. This article proposes feasible solutions to the shortcomings of current enterprise shared-resource invocation: public data sets can be used to perform personalized regression analysis on user needs, and the most relevant information can be optimized and integrated.
Characterization of reusable software components for better reuse (eSAT Publishing House)
This document discusses software reuse and component-based development. It defines software reuse as creating software from existing software components rather than building from scratch. Component-based development allows large, abstract enterprise components to be reused to reduce development time. There are different types of software reuse and several benefits including increased reliability, reduced risks, and accelerated development. Component retrieval is discussed as an important part of software reuse, but it remains a difficult problem to find efficient solutions. Overall, the document presents an overview of software reuse and component-based development while noting that more work is still needed to improve component retrieval methods.
The document discusses software reuse and component-based software engineering. It describes how reuse can happen at different levels from full application systems down to individual functions. Reusing code, specifications, and designs can improve reliability, reduce risks and costs, and speed up development time. However, reuse requires an organized component library, confidence in component quality, and documentation to support adaptation. The document also outlines processes for incorporating reuse into development and enhancing the reusability of components.
Presentation on component based software engineering (CBSE) (Chandan Thakur)
The document presents an overview of component based software engineering. It discusses what a component is, the fundamental principles of CBSE, the CBSE development lifecycle, and metrics used in CBSE. Benefits include reduced complexity and development time while difficulties include quality of components and satisfying requirements. CBSE uses pre-built components while traditional SE builds from scratch. Current component technologies discussed are CORBA, COM, EJB, and IDL. Applications of CBSE are in many domains.
The document describes an automated process for bug triage that uses text classification and data reduction techniques. It proposes using Naive Bayes classifiers to predict the appropriate developers to assign bugs to by applying stopword removal, stemming, keyword selection, and instance selection on bug reports. This reduces the data size and improves quality. It predicts developers based on their history and profiles while tracking bug status. The goal is to more efficiently handle software bugs compared to traditional manual triage processes.
1) The document discusses using data reduction techniques like instance selection and feature selection to reduce the scale and improve the quality of bug data for more effective bug triage.
2) It combines instance selection and feature selection to simultaneously reduce the number of bug reports (instances) and words (features) in bug data.
3) It evaluates the reduced bug data on two large open source projects and finds that combining the techniques can increase the accuracy of bug triage while reducing the data scale.
A DATA EXTRACTION ALGORITHM FROM OPEN SOURCE SOFTWARE PROJECT REPOSITORIES FO...ijseajournal
This document presents an algorithm to extract data from open source software project repositories on GitHub for building duration estimation models. The algorithm extracts data on contributors, commits, lines of code added and removed, and active days for each contributor within a release period. This data is then used to build linear regression models to estimate project duration based on the number of commits by contributor type (full-time, part-time, occasional). The algorithm is tested on data extracted from 21 releases of the WordPress project hosted on GitHub.
This document discusses data reduction techniques for improving bug triage in software projects. It proposes combining instance selection and feature selection to simultaneously reduce the scale of bug data on both the bug dimension and word dimension, while also improving the accuracy of bug triage. Historical bug data is used to build a predictive model to determine the optimal order of applying instance selection and feature selection for a new bug data set. The techniques are empirically evaluated on 600,000 bug reports from the Eclipse and Mozilla open source projects, showing the approach can effectively reduce data scale and improve triage accuracy.
An Efficient Approach for Requirement Traceability Integrated With Software R... - IOSR Journals
Abstract: Traceability links between the requirements of a system and its source code help reduce system comprehension effort. During software updates and maintenance, traceability links become invalid because developers may modify or remove features of the source code. Hence, to acquire trustable links from a system's source code, a supervised link-tracing approach is proposed here. In the proposed approach, IR techniques are applied to the source code and requirements document to generate baseline traceability links. Concurrently, software repositories are mined to generate validating traceability links, i.e. Histrace links, which are then treated as experts. A trust model named DynWing ranks the different types of experts, dynamically assigning weights to them during ranking. The top-ranked experts are then fed to a trust model named Trumo, which validates the baseline links against them and finds the trustable links within the baseline link set. While validating the links, Trumo can discard or re-rank experts to find the most traceable links. The proposed approach improves the precision and recall of the traceability links.
Index Terms: Traceability, requirements, features, source code, repositories, experts.
TOWARDS EFFECTIVE BUG TRIAGE WITH SOFTWARE DATA REDUCTION TECHNIQUES - Shakas Technologies
This document summarizes an approach for data reduction in software bug triage. It combines instance selection and feature selection techniques to simultaneously reduce the number of bug reports (instances) and words (features) in bug datasets. This aims to create smaller, higher quality datasets that improve the accuracy of automatic bug triage while reducing labor costs. It evaluates different instance selection, feature selection, and their combination methods on large bug datasets from Eclipse and Mozilla projects. The results show the proposed data reduction approach can effectively shrink dataset sizes and boost bug triage accuracy.
Performance Evaluation of Query Processing Techniques in Information Retrieval - idescitation
The first element of the search process is the query. Because the user query is on average restricted to two or three keywords, it is often ambiguous to the search engine. Given the user query, the goal of an Information Retrieval (IR) system is to retrieve information that might be useful or relevant to the user's information need, so query processing plays an important role in an IR system. Query processing can be divided into four categories: query expansion, query optimization, query classification, and query parsing. In this paper an attempt is made to evaluate the performance of query processing algorithms in each category. The evaluation was based on the dataset specified by the Forum for Information Retrieval [FIRE15]. The criteria used for evaluation are precision and relative recall, and the analysis is based on the importance of each step in query processing. The experimental results show the significance of each step in query processing, as well as the relevance of web semantics and spelling correction in the user query.
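Of the four categories, query expansion is the easiest to illustrate. Below is a toy sketch using a hand-built synonym table; real systems derive expansions from thesauri such as WordNet or from relevance feedback, and the terms here are invented.

```python
# hypothetical synonym table; a real expander would consult a thesaurus
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "fix": ["repair", "patch"],
}

def expand_query(query):
    # append known synonyms to the query terms to reduce ambiguity
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded.extend(s for s in SYNONYMS.get(t, []) if s not in expanded)
    return expanded

print(expand_query("fix car engine"))
```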
The document presents an approach for improving requirement traceability by integrating it with software repositories. It proposes a supervised link tracing approach called Trustrace that uses information retrieval (IR) techniques to generate baseline traceability links between requirements and source code. It then mines software repositories like version control systems to generate validating traceability links called "Histrace links" from experts. A trust model called DynWing is used to dynamically assign weights to different expert links in ranking the baseline links. The top ranked expert links are then fed to another trust model called Trumo, which validates the baseline links and finds the most trustable links, improving precision and recall over basic IR techniques.
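The baseline-link generation step can be sketched with a deliberately simple overlap measure. Real IR-based tracing such as Trustrace uses TF-IDF or LSI similarity rather than raw token overlap; the file names, tokens, and threshold below are invented for illustration.

```python
def jaccard(a, b):
    # token-overlap similarity between a requirement and a source file
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

requirement = ["login", "user", "password", "authenticate"]
sources = {
    "auth.py":   ["authenticate", "user", "password", "hash"],
    "report.py": ["report", "export", "pdf", "user"],
}

THRESHOLD = 0.25  # candidate links below this score are discarded
links = {}
for name, tokens in sources.items():
    score = jaccard(requirement, tokens)
    if score >= THRESHOLD:
        links[name] = round(score, 2)
print(links)
```

These baseline links are exactly what the mined Histrace "experts" would then confirm, re-rank, or discard.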
Towards Effective Bug Triage with Software Data Reduction Techniques - 1crore projects
This document presents an approach to populating a release history database from version control and bug tracking systems. It combines data from CVS version control and Bugzilla bug tracking for the Mozilla project to analyze software evolution. The paper describes related work, outlines the data import process, and evaluates the approach by examining timescales, release history, and coupling for the Mozilla project. It concludes that this approach provides insights into a project's evolutionary processes but that more formal integration with version control could improve the analysis.
Survey on Software Data Reduction Techniques Accomplishing Bug Triage - IRJET Journal
This document discusses various techniques for software data reduction to improve the accuracy of bug triage. It first provides background on bug triage and the challenges it aims to address like large volumes of low quality bug data. It then surveys literature on related techniques like automated test generation and text mining approaches. The document describes various text mining methods like term-based, phrase-based, concept-based and pattern taxonomy methods. It also covers data reduction techniques and their benefits for bug triage. Different classification techniques for bug identification are explained, including decision trees, nearest neighbor classifier and artificial neural networks.
A Survey on Bug Tracking System for Effective Bug Clearance - IRJET Journal
This document discusses bug tracking systems and methods for effective bug clearance. It describes how software organizations spend a large amount of resources handling bugs. It then summarizes an approach that uses instance selection and feature selection methods to classify bugs which are then assigned to bug solving experts based on their experience. A history of cleared bugs is also maintained to help resolve similar bugs faster. The goal is to reduce the time and costs involved in clearing bugs.
Developers spend much of their time understanding unfamiliar code during software maintenance tasks. The study found that developers interleaved three activities: searching for relevant code using manual and tool-based searches, following code dependencies, and collecting relevant code by encoding it in Eclipse interfaces. However, developers' searches were often unsuccessful due to misleading cues, and navigation tools caused overhead. Developers lost track of collected code, forcing re-finding. On average, developers spent 35% of time on redundant navigation. This suggests a new model of understanding as searching, relating, and collecting relevant information, and ideas for more effective tools.
Text Document categorization using support vector machine - IRJET Journal
This document discusses using support vector machines for text document categorization. It begins with an abstract that introduces text categorization and automatic classification of documents into predefined categories based on content. The document then discusses related work on text categorization using machine learning techniques. It presents the system architecture for text categorization, which involves learning, term extraction, and classification processes. The implementation section discusses preprocessing text data, term extraction using TF-IDF weighting, and classification using support vector machines.
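The term-extraction stage with TF-IDF weighting can be sketched as below. A nearest-centroid classifier stands in for the SVM to keep the example short (a full SVM implementation would dwarf the sketch), and the categories and documents are invented.

```python
import math
from collections import Counter

def tfidf(docs):
    # term frequency times smoothed inverse document frequency
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: c * math.log(1 + n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

def centroid(vectors):
    out = Counter()
    for v in vectors:
        out.update(v)
    return {t: w / len(vectors) for t, w in out.items()}

def similarity(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

train = {
    "sports": [["match", "goal", "team"], ["team", "season", "coach"]],
    "tech":   [["cpu", "kernel", "driver"], ["kernel", "memory", "cache"]],
}
all_docs = [d for docs in train.values() for d in docs]
vecs = tfidf(all_docs)
cents = {"sports": centroid(vecs[0:2]), "tech": centroid(vecs[2:4])}

query = tfidf(all_docs + [["kernel", "driver", "bug"]])[-1]
print(max(cents, key=lambda c: similarity(query, cents[c])))
```

In the architecture the abstract describes, these TF-IDF vectors would be fed to the SVM learner instead of being compared to centroids.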
Open domain question answering system using semantic role labeling - eSAT Publishing House
1. The document describes a proposed open domain question answering system that uses semantic role labeling to extract answers from documents retrieved from the web.
2. The system consists of three modules: question processing, document retrieval, and answer extraction. Semantic role labeling is used in the answer extraction module to identify answers based on the question type.
3. An evaluation of the proposed system showed it achieved higher accuracy compared to a baseline system using only pattern matching for answer extraction.
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUES - ijaia
The purpose of software defect prediction is to improve the quality of a software project by building a predictive model that decides whether a software module is fault-prone. In recent years, much research has applied machine learning techniques to this topic. Our aim was to evaluate the performance of clustering techniques combined with feature selection schemes to address the software defect prediction problem. We analysed the National Aeronautics and Space Administration (NASA) dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) self-organizing map (SOM). To evaluate different feature selection algorithms, this article presents a comparative analysis of software defect prediction based on Bat, Cuckoo, Grey Wolf Optimizer (GWO), and particle swarm optimizer (PSO). The results obtained with the proposed clustering models enabled us to build an efficient predictive model with a satisfactory detection rate and an acceptable number of features.
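Of the three clustering algorithms, Farthest First is simple enough to sketch directly. The module metrics below are invented and unnormalized; the greedy center selection, however, is the actual Farthest First traversal.

```python
import math

def farthest_first_centers(points, k):
    # greedy traversal: start anywhere, then repeatedly take the point
    # farthest from the centers chosen so far
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def assign(points, centers):
    # label each point with the index of its nearest center
    return [min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            for p in points]

# hypothetical module metrics: (scaled lines of code, scaled complexity)
modules = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.8), (0.12, 0.22)]
centers = farthest_first_centers(modules, 2)
print(assign(modules, centers))
```

In defect prediction the resulting clusters would then be labeled fault-prone or not, e.g. by inspecting known-defective modules in each cluster.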
ANALYSIS OF ENTERPRISE SHARED RESOURCE INVOCATION SCHEME BASED ON HADOOP AND R - ijaia
The response rate and performance of enterprise resource calls have become an important measure of differences in enterprise user experience. An efficient enterprise shared-resource invocation system can significantly improve the office efficiency of enterprise users and the fluency of their resource calls. Hadoop has powerful data integration and analysis capabilities for resource extraction, while R has excellent statistical capabilities along with personalized decomposition and display capabilities for retrieved data. This article proposes an integration plan for enterprise shared-resource invocation based on Hadoop and R to further improve the efficiency with which enterprise users exploit shared resources, improve the efficiency of system operation, and give enterprise users a better experience. First, Hadoop is used to extract the shared resources that enterprise users require from nearby resource storage rooms and terminal equipment, increasing the call rate; R is then used to convert the user's search results into linear correlations, displayed in order of correlation strength, to improve response speed and experience. This article proposes feasible solutions to the shortcomings of current enterprise shared-resource invocation: public data sets can be used to perform personalized regression analysis on user needs and to optimize and integrate the most relevant information.
Characterization of reusable software components for better reuse - eSAT Publishing House
This document discusses software reuse and component-based development. It defines software reuse as creating software from existing software components rather than building from scratch. Component-based development allows large, abstract enterprise components to be reused to reduce development time. There are different types of software reuse and several benefits including increased reliability, reduced risks, and accelerated development. Component retrieval is discussed as an important part of software reuse, but it remains a difficult problem to find efficient solutions. Overall, the document presents an overview of software reuse and component-based development while noting that more work is still needed to improve component retrieval methods.
The document discusses software reuse and component-based software engineering. It describes how reuse can happen at different levels from full application systems down to individual functions. Reusing code, specifications, and designs can improve reliability, reduce risks and costs, and speed up development time. However, reuse requires an organized component library, confidence in component quality, and documentation to support adaptation. The document also outlines processes for incorporating reuse into development and enhancing the reusability of components.
Presentation on component based software engineering (CBSE) - Chandan Thakur
This document discusses component-based software engineering (CBSE). It covers topics like components and component models, CBSE processes, and component composition. The key points are:
- CBSE relies on reusable software components with well-defined interfaces to improve reuse. Components are more abstract than classes.
- Essentials of CBSE include independent, interface-specified components; standards for integration; and middleware for interoperability.
- CBSE is based on principles like independence, hidden implementations, and replaceability through maintained interfaces.
This document discusses software reuse and application frameworks. It covers the benefits of software reuse like accelerated development and increased dependability. Application frameworks provide a reusable architecture for related applications and are implemented by adding components and instantiating abstract classes. Web application frameworks in particular use the model-view-controller pattern to support dynamic websites as a front-end for web applications.
This document discusses computer-aided software engineering (CASE) tools and their use in supporting the systems development life cycle. It describes the objectives and components of CASE tools, including upper CASE tools for analysis and design, lower CASE tools for implementation, and cross life-cycle tools. The document also discusses CASE repositories for storing design documents and generating code, as well as visual and emerging development tools like object-oriented tools.
A Methodology To Manage Victim Components Using Cbo Measure - ijseajournal
This document presents a methodology for managing victim components using the coupling between objects (CBO) measure. It defines several measures of software component reusability, including a weighted component measure and a depth of inheritance tree measure. These measures are calculated for components in a human resources (HR) portal application. The document identifies the business-tier component as a potential victim component based on its low reuse count. It proposes using the CBO measure to identify highly coupled components that need reconfiguration to improve reusability; reconfiguring such components would make them less coupled and easier to reuse in other applications.
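The CBO measure itself is straightforward to compute once class dependencies are known: it is the number of other classes a class is coupled to. A minimal sketch with an invented dependency map and threshold:

```python
# hypothetical class-dependency map: class -> classes it references
dependencies = {
    "PayrollService": {"Employee", "TaxTable", "Logger", "AuditTrail"},
    "Employee": {"Logger"},
    "Logger": set(),
}

def cbo(cls):
    # CBO for a class = number of distinct classes it is coupled to
    return len(dependencies[cls])

# flag candidate "victim" components whose coupling exceeds a threshold
victims = [c for c in dependencies if cbo(c) > 2]
print(victims)
```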
Approaches and Challenges of Software Reusability: A Review of Research Liter... - IRJET Journal
The document discusses approaches and challenges related to software reusability. It outlines three main approaches to software reuse: component-based software reuse, domain engineering and software product lines, and architecture-based software reuse. It also discusses challenges to software reuse, including issues finding, selecting, and adapting existing components, as well as organizational and cultural challenges. Overall, the document examines the benefits of software reuse for improving productivity and quality, but also acknowledges there are still obstacles to implementing reuse effectively.
1) The document discusses various ways that artificial intelligence can be applied to different phases of the software engineering lifecycle, including requirements specification, design, coding, testing, and estimation.
2) It provides examples of using techniques like natural language processing to clarify requirements, knowledge graphs to manage requirements information, and computational intelligence for requirements prioritization.
3) For design, the document discusses using intelligent agents to recommend patterns and designs to satisfy quality attributes from requirements and assist with assigning responsibilities to components.
IRJET: A Novel Approach to Automatically Categorizing Software Technologies - IRJET Journal
This document proposes an automatic approach called Witt to categorize software technologies based on their descriptions. Witt takes a sentence describing a technology as input and outputs a general category (e.g. integrated development environment) along with qualifying attributes. It applies natural language processing and the Levenshtein distance algorithm to compare string similarities and categorize technologies from large datasets. The system architecture first obtains data on software methodologies and labels. It then applies NLP and Levenshtein distance to find hypernyms and transform them into categories with attributes for classification.
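The Levenshtein comparison at the heart of this matching step can be sketched directly. The candidate category labels and the misspelled input below are invented; the distance function is the classic dynamic-programming edit distance.

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

labels = ["integrated development environment", "version control system",
          "build automation tool"]
query = "integrated developement enviroment"  # misspelled input
print(min(labels, key=lambda s: levenshtein(query, s)))
```

Choosing the label with the smallest edit distance is what lets the categorizer tolerate the spelling variation in technology descriptions.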
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
This document discusses identifying faulty objects in reusable object-oriented software components through reengineering. It presents a cost estimation model for reengineering that considers the number of objects and attributes to estimate effort. The model aims to identify faulty objects that cause incorrect responses. Reengineering is time-consuming so the goal is to minimize effort needed to identify faulty objects in complex, reusable components.
An Empirical Study of the Improved SPLD Framework using Expert Opinion Technique - IJEACS
Due to the growing need for high-performance and low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time, and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest is the software product line (SPL), which uses concepts like reusability and variability to deliver successful products with shorter time-to-market and minimal development and maintenance cost while retaining high quality. This research paper is a validation of our proposed framework, Improved Software Product Line (ISPL), using the Expert Opinion Technique. An extensive survey, based on a set of questionnaires covering various aspects and sub-processes of the ISPL framework, was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements over several aspects of contemporary SPL frameworks.
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ... - acijjournal
Refactoring is applied to software artifacts to improve their internal structure while preserving their external behavior. Refactoring is an uncertain process, and it is difficult to give units for its measurement; the amount of refactoring that can be applied to source code depends on the skills of the developer. In this research, we have perceived refactoring as a quantity on an ordinal scale of measurement. We have proposed a model for determining the degree of refactoring opportunities in given source code. The model is applied to three projects collected from a company. UML diagrams are drawn for each project, and the values of the source-code metrics useful in determining code quality are calculated for each UML diagram. Based on the nominal values of the metrics, each relevant UML diagram is represented on an ordinal scale. A machine learning tool, Weka, is used to analyze the dataset produced by the three projects, imported in the form of an ARFF file.
A MODEL TO COMPARE THE DEGREE OF REFACTORING OPPORTUNITIES OF THREE PROJECTS ... - acijjournal
This document presents a model for quantifying and comparing the degree of refactoring opportunities in three software projects. The model involves drawing UML diagrams for the projects, calculating source code metrics for each UML diagram, representing the diagrams on an ordinal scale based on the metrics, and using a machine learning tool (Weka) to analyze the resulting dataset. The tool uses a Naive Bayesian classifier to generate a confusion matrix for each project, allowing evaluation of the model's performance at classifying refactoring opportunities as low, medium, or high. The model is applied to three projects from a company to test its ability to measure and compare refactoring opportunities in code.
Multiagent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of multiagent-based software engineering methodology.
A FRAMEWORK STUDIO FOR COMPONENT REUSABILITY - cscpconf
The deployment of a software product requires a considerable amount of time and effort. To increase the productivity of software products, reusability strategies have been proposed in the literature; however, effective reuse is still a challenging issue. This paper presents a framework studio for effective component reusability, which provides for the selection of components from the framework studio and the generation of source code based on stakeholders' needs. The framework studio is implemented using Swing components integrated into the NetBeans IDE, which helps in faster generation of the source code.
This document discusses elements that contribute to legacy program complexity. It identifies factors such as difficulty understanding old code, high cost of maintenance and replacement, large size, poor design, integration challenges with new technologies, lack of documentation, inflexibility, long processing times, unavailability of original staff, reliability issues, and bugs. The paper explores each of these elements in detail and argues that legacy programs are complex due to a combination of these interrelated factors such as large size, complex designs with many interconnected parts, and difficulty integrating old code and platforms with new technologies.
Reusability is one of the best ways to increase development productivity and application maintainability. One must first search for well-tested, reusable software components. Application software developed by one programmer can prove useful to others as a component; this shows that code specific to one application's requirements can also be reused in other projects with the same requirements. The main aim of this paper is to propose a way to identify reusable modules: a process that takes source code as input and helps to decide which particular software artefacts should or should not be reused.
This document presents a framework for reusing existing software agents through ontological engineering. The framework includes components like a user interface agent, query processor, mapping agent, transfer agent, wrapper agent, and remote agents containing ontologies. The query processor reformulates the user's query, the mapping agent identifies relevant ontologies, and the transfer agent sends the query to remote agents. The remote agents provide ontologies as output, which are then integrated/merged and presented back to the user interface agent. The goal is to enable reuse of heterogeneous agents across different development environments through a standardized ontology representation.
Software requirement analysis enhancements by prioritizing requirement attributes using rank based Agents.
Ashok Kumar, Professor, Department of Computer Science and Applications, Kurukshetra University, Kurukshetra, India
Vinay Goyal, Assistant Professor, Department of MCA, Panipat Institute of Engineering & Technology, Panipat, India
Abstract: This paper proposes a new technique in the domain of agent-oriented software engineering. Agents work in autonomous environments and can respond to agent triggers. Agents can be very useful in the requirement analysis phase of the software development process, where they can react to requirement triggers and produce aligned notations that identify the best possible design solution from existing designs. Agents help in the design generation process, which includes the use of artificial intelligence. The results produced clearly show improvements over conventional reusability principles and ideas.
1. INTRODUCTION
Agent-oriented software engineering is a new technique that is growing very rapidly. Software development industries have invested huge efforts in this domain, and the results published by many of them are very exciting [1]. The autonomous and reactive nature of agents makes it possible for designers to think in terms of real-life problem-solving scenarios, where the socio-logical [2] characteristics of agents automatically activate timely checks for any problem in the domain and solve it using agents.
Agents are very helpful throughout the software development life cycle. Experiments carried out in the past have shown [2][9][10] improvements in the SDLC, and the conclusion is that agents can be very helpful in cost and effort minimization if tuned properly. Fine-tuning of agents, together with an SDLC process-state plug-in for two-way communication, results in an agent-based software development process in which intelligent agents take decisions for better time and resource utilization. Agents are capable of storing historic data, which helps in decision-making using a heuristic-based approach.
This paper discusses the details of one such experiment, conducted to improve the requirement analysis process with the help of proactive agents. Agents automatically sense the requirement environment and propose their own checklist of important requirements. This is a sort of intelligent assistance with domain heuristics, which leads to coverage of all possible requirement entities of the problem domain.
2. RELATED WORK
Michael Wooldridge, Nicholas R. Jennings and David Kinny describe the analysis process using an agent-oriented approach [1]. They have considered the Gaia notations. The analysis stages of Gaia are:
1) Identify the agents' roles in the system, which typically correspond to identify ro ...
International Journal of Computer Science and Engineering Research and Development (IJCSERD),
ISSN 2248-9363 (Print), ISSN 2248-9371 (Online) Volume 1, Number 2, May-October (2011)
determine the proper scope for components. Some are of the opinion that all of the
resources used on a project, including human expertise, should be reused. Others feel that
reuse should focus on code, since this is much more likely to produce practical results.
In principle, any life-cycle product falls within the scope of reuse. Reuse then
covers life-cycle objects such as concept documents, estimates, requirements, designs, code,
test plans, maintenance plans, and user documentation. In practice, however, the most
emphasized of these is software code.
The quest for software reusability paved the way for maintaining reuse
repositories comprising reusable software artifacts extracted from already developed
systems. The code artifacts have to be classified, organized, and stored in library
systems, which users access to retrieve the artifacts consistent with their
requirements. Most software retrieval systems extract a set of reusable candidates
ranked by similarity to the user's needs. However, users do not want to invest a great
deal of effort in selecting a component, so in most cases the list of retrieved
candidates is not completely analyzed, and even highly ranked components are
discarded. The user assumes that the best-suited components are the ones at the top of
the candidate list, so to select a component he examines only the information
associated with those first few components. If none of them satisfies his
requirements, he may try to refine or rewrite the original query, or abandon the
search. Retrieval systems should therefore exhibit more precision in their answers, by
discarding obviously unwanted components from the set of candidates and by
retrieving only those that satisfy the user's requirements more precisely. Software
reuse has been seen as a growing choice of application programmers for the past two
decades; this is inferred from a survey conducted to discover the needs and attitudes of
developers towards reuse [1]. Most of them expect more from tools for automatic
program generation. Research is ongoing to develop more user-friendly and effective
reuse systems, and a considerable number of tools and mechanisms for supporting reuse
activities in software development have been proposed, most providing assistance
to application developers in retrieving components.
2. RELATED RESEARCH
Over approximately the past two decades, software reuse research has focused on
several different areas: programming language mechanisms to improve
software reusability; software processes and management strategies that
support the reuse of software; strategies for setting up libraries of reusable
code components; and classification and retrieval techniques that help a software
engineer select from the library the component appropriate for his or her purposes.
A classified collection is not useful unless it provides a search-and-retrieval
mechanism. A wide range of solutions to software reuse classification and
retrieval have been proposed and implemented. At different times, based on the
available software reuse systems and on researchers' criteria, classification and
retrieval approaches have been categorized in different, though only slightly
differing, ways.
Ostertag et al. [18] classified the reported approaches into three types: 1) free-text
keywords, 2) faceted index, and 3) semantic-net based. Free-text approaches
basically use information retrieval and indexing technology to automatically extract
keywords from software documentation and index items with keywords. The free-text
keyword approach is simple and automatic, but it is limited by the lack of semantic
information associated with keywords, and is thus not precise. In faceted index
approaches, experts extract keywords from program descriptions and documentation
and arrange them by facets into a classification scheme, which serves as a standard
descriptor for software components. To resolve ambiguities, a thesaurus is derived for
each facet to ensure that keyword matches occur only within the facet context. Faceted
classification and retrieval has proven very effective for retrieving reuse components
from repositories, but the approach is labor-intensive. Semantic-net based approaches
usually need a large knowledge base, a natural language processor, and a semantic
retrieval algorithm to semantically classify and retrieve software reuse components.
The semantic-net based approach is also labor-intensive, and it is rigid within its
narrow application domain.
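As a rough illustration of the free-text keyword approach described above, the sketch below builds an inverted index from component documentation. The component names, documentation strings, and stop-word list are invented for the example.

```python
import re
from collections import defaultdict

# A tiny illustrative stop-word list; a real system would use a larger one.
STOPWORDS = {"a", "an", "the", "of", "to", "and", "is", "for"}

def extract_keywords(doc: str) -> set:
    # Automatically extract candidate keywords from free-text documentation.
    words = re.findall(r"[a-z]+", doc.lower())
    return {w for w in words if w not in STOPWORDS}

def build_index(components: dict) -> dict:
    # Map each keyword to the set of components whose documentation mentions it.
    index = defaultdict(set)
    for name, doc in components.items():
        for kw in extract_keywords(doc):
            index[kw].add(name)
    return index

components = {
    "QuickSort": "Sorts an array of integers in ascending order.",
    "CsvReader": "Reads records from a CSV file into a list.",
}
index = build_index(components)
print(index["sorts"])  # components indexed under the keyword "sorts"
```

The simplicity is apparent, but so is the limitation noted above: "sorts" and "orders" would index different components even though they are semantically close.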
Mili et al. [13] classify search and retrieval approaches into four types:
1) simple keyword and string match; 2) faceted classification and retrieval; 3)
signature matching; and 4) behavior matching. The last two approaches, signature
matching and behavior matching, are cumbersome and inefficient [4]. The first two,
simple keyword and string match and faceted classification and retrieval, are
categorized in the same way as in other researchers' classifications.
Mili et al. [13] designed a software library in which software components are
described by formal specifications: a specification is represented by a pair (S, R), where S
is a set of specifications and R is a relation on S. The approach is classified as a keyword-
based retrieval system, with recall enhanced while retaining sufficient precision: a
match is accepted as long as a specification key can refine a search argument. There
are two retrieval operations: exact retrieval and approximate retrieval. If exact
retrieval yields nothing, approximate retrieval can return programs that need only
minimal modification to satisfy the specification.
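The two retrieval operations can be sketched as follows. The keyword sets and the overlap-based notion of an approximate match are illustrative assumptions, not the formal refinement relation of Mili et al.:

```python
def exact_retrieval(query: set, library: dict) -> list:
    # Return components whose keyword set satisfies the query completely.
    return [name for name, keys in library.items() if query <= keys]

def approximate_retrieval(query: set, library: dict) -> list:
    # Fall back to ranking by keyword overlap, so the user receives
    # candidates that should need only minimal modification.
    ranked = sorted(library.items(),
                    key=lambda item: len(query & item[1]),
                    reverse=True)
    return [name for name, keys in ranked if query & keys]

library = {
    "StackInt": {"stack", "push", "pop", "int"},
    "QueueInt": {"queue", "enqueue", "int"},
}
print(exact_retrieval({"stack", "pop"}, library))            # full match
print(approximate_retrieval({"stack", "dequeue"}, library))  # partial match
```

A caller would try `exact_retrieval` first and, only on an empty result, fall back to `approximate_retrieval`, mirroring the two-step policy described above.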
The faceted classification scheme for software reuse proposed by Prieto-Diaz and
Freeman [4] relies on facets, extracted by experts, to describe features of
components. Features serve as component descriptors, such as the component's
functionality, how to run the component, and implementation details. To determine
similarity between a query and software components, a weighted conceptual graph is used
to measure closeness by the conceptual distance among terms in a facet.
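A minimal sketch of this kind of closeness scoring over facets is given below, assuming a hand-made table of pairwise conceptual distances between terms; the terms, distances, and facet weights are invented for illustration:

```python
# Symmetric conceptual distances between facet terms (0.0 = identical,
# 1.0 = unrelated). A real system derives these from a conceptual graph.
DIST = {
    ("sort", "order"): 0.2,
    ("sort", "search"): 0.8,
}

def conceptual_distance(a: str, b: str) -> float:
    if a == b:
        return 0.0
    # Look the pair up in either order; unknown pairs count as unrelated.
    return DIST.get((a, b), DIST.get((b, a), 1.0))

def closeness(query_facets: dict, comp_facets: dict, weights: dict) -> float:
    # Weighted sum of per-facet conceptual distances; smaller is closer.
    return sum(weights[f] * conceptual_distance(query_facets[f], comp_facets[f])
               for f in query_facets)

q = {"function": "sort", "object": "array"}
c = {"function": "order", "object": "array"}
w = {"function": 0.7, "object": 0.3}
print(closeness(q, c, w))  # 0.7 * 0.2 + 0.3 * 0.0, i.e. about 0.14
```

Components would then be ranked by ascending closeness score, the nearest ones being offered to the user first.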
Vitharana et al. [10] proposed a scheme to classify and describe business
components within a knowledge-based repository for storage and retrieval. The two
important steps in their scheme are: 1) classification and coding of business
components, and 2) a knowledge-based repository for storage and retrieval. The
classification groups similar parts, which are then coded symbolically. They borrowed
the idea of describing features of reusable software artifacts from faceted schemes,
but their classification and coding scheme considers higher-level, business-oriented
features. In it, a business component is described by identifiers (structured
information such as name and industry type), followed by descriptor facets
(unstructured information such as rules and functionality). In their knowledge-based
repository design, a database serves as the repository because it is efficient and effective
for storage, search, and retrieval; eXtensible Markup Language (XML) is used because
it suits an extensible number of descriptor facets.
Girardi and Ibrahim's solution for retrieving software artifacts is based on natural
language processing. Both user queries and software component descriptions are
expressed in natural language. Natural language processing at the lexical, syntactic,
and semantic levels is performed on software descriptions to automatically extract
verbal and nominal phrases and create a frame-based indexing unit for software
components. User queries and component descriptions are semantically formalized
into an internal representation, canonical forms; matching is then reduced to
computing the closeness of the query and the component description, i.e., the distance
between the two canonical forms. The retrieval system looks for closeness. A public-
domain lexicon supplies the lexical information for both query classification and
software classification. To avoid rigidity, the solution employs partial canonical forms
to allow some ambiguity and to infer all possible interpretations of a sentence.
Mingyang et al. proposed a component retrieval model combining knowledge-
intensive case-based reasoning technologies with conversational case-based reasoning
methods. Case-based reasoning (CBR) is a problem-solving method whose main idea
is that, when confronted with a new problem, a similar or approximate problem solved
in the past is analyzed to provide a breakthrough in solving the present one. The
scheme is divided into four phases: retrieve, reuse, revise, and retain, and it focuses on
the retrieve phase. In the retrieve phase, a new case (a new problem description) is
compared to the stored cases, and the most similar one (or ones) is retrieved. Partial
matching is adopted in the retrieve phase: a group of features is matched to return a
best match, with each feature typically carrying its own weight, which distinguishes
this technology from information retrieval and database access methods in general.
Some such methods are knowledge-poor, considering only superficial or syntactic
similarities between a new case and the stored cases, while other systems take both
syntactic and semantic similarity into account by combining case-specific knowledge
with general domain knowledge; the latter approach is referred to as knowledge-
intensive case-based reasoning. Conversational case-based reasoning (CCBR) is an
interactive form of CBR. It uses a mixed-initiative dialog to guide users through a
question-answer sequence that facilitates case retrieval. In the traditional CBR
process, users are expected to provide a well-defined problem description (a new
case), from which the CBR system finds the most appropriate case. But users usually
cannot define their problem clearly and accurately, so instead of letting users guess
how to describe the problem, CCBR calculates the most discriminative questions
automatically and incrementally and displays them to users to extract the information
needed to facilitate the retrieval process.
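One common way to realize the "most discriminative question" idea is to ask about the attribute whose answers split the stored cases most evenly. The sketch below uses entropy over the not-yet-answered attributes; the cases and attribute names are invented, and this is only one plausible scoring choice, not necessarily the one Mingyang et al. use.

```python
import math
from collections import Counter

def entropy(values) -> float:
    # Shannon entropy of the answer distribution for one attribute.
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def most_discriminative(cases: list, answered: set) -> str:
    # Ask about the unanswered attribute that best splits the stored cases.
    attrs = {a for case in cases for a in case} - answered
    return max(attrs, key=lambda a: entropy(c.get(a) for c in cases))

cases = [
    {"domain": "gui", "language": "java"},
    {"domain": "gui", "language": "c"},
    {"domain": "db",  "language": "java"},
    {"domain": "db",  "language": "c"},
]
# With "domain" already answered, the dialog should ask about "language".
print(most_discriminative(cases, answered={"domain"}))
```

Each answer the user gives narrows the candidate set, after which the question scores are recomputed incrementally, matching the conversational loop described above.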
Sugumaran and Storey [4] present a semantics-based solution to component retrieval.
The approach employs a domain ontology to provide semantics both for refining user
queries expressed in natural language and for matching a user query against
components in a reusable repository. It comprises a natural-language interface, a
domain model, and a reusable repository, and proceeds in three major steps: 1) initial
query generation, 2) query refinement, and 3) component retrieval and feedback.
Initial query generation uses heuristic-based natural language processing to extract
keywords from user queries; their processing of user queries and component
descriptions is based on the ROSA system [P10-10]. Query refinement maps the
initial query against the ontology to ensure that correct terms are used in the query.
An ontology here is a data structure that stores information taxonomically; in this
case, it provides a mechanism to represent and store domain-specific knowledge,
which can be used to find the most relevant artifacts. In the component retrieval and
feedback step, the closeness measure proposed by Girardi and Ibrahim [1] is
employed to identify the components most relevant to the user's query.
3. PROPOSED SYSTEM
The proposed system ‘Classification Of Software reusable components’ is required to
implement a classification scheme to build a library and provide an interface for
browsing and retrieving components. Of these the main requirement is to develop a
classification scheme which is used for classifying components. The system should
support three operations viz. Uploading components, downloading components and
search for components.
Uploading a component means storing the component in the repository. A developer
should be able to upload his components into the repository. While uploading, the
system should ensure that components are not duplicated: if two components have
similar properties, they should differ at least in language, developer, version, or the
like.
Downloading a component means the user can download the component onto his system
for reuse. The user should be able to see the file size and, depending on the memory
available to him, download the component.
Searching for components should be implemented so that the user can retrieve the
relevant components by entering a search query; however, the system under development
does not deal with natural-language queries. The system should support an advanced
search facility that lets the user narrow down the search results to the most
appropriate components.
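Since the system avoids natural-language queries, the advanced search can be a plain attribute filter. The attribute names below follow the scheme in Section 3.1, while the sample repository entries are invented:

```python
def advanced_search(repo: list, **criteria) -> list:
    # Keep only components whose attributes match every given criterion,
    # letting the user narrow the results step by step.
    return [comp for comp in repo
            if all(comp.get(attr) == value for attr, value in criteria.items())]

repo = [
    {"name": "MatrixMult", "domain": "math", "language": "C"},
    {"name": "MatrixInv",  "domain": "math", "language": "Java"},
    {"name": "HttpClient", "domain": "net",  "language": "Java"},
]
print(advanced_search(repo, domain="math"))                    # two candidates
print(advanced_search(repo, domain="math", language="Java"))   # narrowed to one
```

Adding criteria one at a time is what makes this "advanced" search iterative: each extra attribute shrinks the candidate set toward the most appropriate component.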
A helpful interface should be developed for this system, one that does not lead to any
ambiguity. The interface should be easy to operate and provide good interaction
between the system and the user.
Figure 1 Use case diagram for the Software Reuse Repository (use cases: Classify
Components, Upload Components, Download Components, Search Components)
International Journal of Computer Science and Engineering Research and Development (IJCSERD),
ISSN 2248-9363 (Print), ISSN 2248-9371 (Online), Volume 1, Number 2, May-October (2011)
The proposed system can be depicted as shown in figure 2. As explained in the functional
requirements, the system (the large rectangle) provides the required functionality for
uploading, downloading, and searching components. The system also maintains the
repository of components and provides interfaces to its users, namely the
administrator and end users.
Figure 2 Proposed system model (the administrator and user interact, through the classification scheme and search engine, with the reuse repository, which returns the relevant components)
3.1 Classification scheme
The intent of developing this classification is to integrate existing schemes. I used
both attribute-value and faceted classification together to achieve this. I
have used the following attributes to identify components:
• Component name
• Domain
• Language
Along with these attributes the following facets are used for classification of the
components.
• File size
• Version
• Time taken
When used in a query, the attributes narrow the search space during
retrieval. The attributes and facets listed here were gathered from existing research.
To be more effective, the system should be deployed and used extensively; the
scheme can then be enhanced based on user feedback.
Here we used both attributes and facets. The system also stores a description of
each component uploaded to the repository. The more of a component's
properties the system stores, the better it can serve users in different ways.
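A component record combining the attribute-value part and the faceted part could be modeled as below. The field names follow the lists above; the types, defaults, and the duplicate key are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Component:
    # Attribute-value part: identifies the component.
    name: str
    domain: str
    language: str
    # Faceted part: classification facets (illustrative types).
    file_size: int = 0        # bytes
    version: str = "1.0"
    time_taken: float = 0.0   # e.g. execution time in seconds
    # Free-text description also stored with each component.
    description: str = ""

    def key(self):
        # Attributes in which two otherwise-similar components
        # must differ for both to be stored (no duplicates).
        return (self.name, self.language, self.version)
```

The `key()` tuple reflects the earlier requirement that similar components must differ at least in language, developer, or version.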
3.2 Algorithms
3.2.1 Uploading Components
Input: Component and its attributes
Output: Status of the component
Step 1: Capture the user's input for the component.
Step 2: Check for attribute similarities with existing components.
Step 3: If no matching component is found, classify the component and store it in the
repository.
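The upload steps can be sketched as follows, assuming a dictionary keyed by the distinguishing attributes stands in for the repository (a storage-layout assumption, not the paper's implementation):

```python
def upload(repo, component):
    """Upload algorithm, Steps 1-3: check for a duplicate by the
    distinguishing attributes; if none is found, store the component.
    `repo` is assumed to be a dict keyed by (name, language, version)."""
    # Step 2: check for attribute similarities with existing components.
    key = (component["name"], component["language"], component["version"])
    if key in repo:
        return "rejected: duplicate component"
    # Step 3: classify (here: key) and store the component.
    repo[key] = component
    return "stored"
```

A real system would also persist the component file and its description alongside the catalog entry.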
3.2.2 Searching Components
Input: Search query and some attributes
Output: List of components
Step 1: From the user-given criteria, form a query that can be executed on the catalog.
Step 2: Execute the query to get the list of components that satisfy the attributes given by
the user.
Step 3: From that list, find the components that match the search query. The search query
is matched against the component name, algorithm, and function.
Step 4: Present all components that are candidates for the user's requirements to the
user.
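The search steps above can be sketched as follows, with components represented as dictionaries; the field names `name`, `algorithm`, and `function` follow Step 3, while everything else is an illustrative assumption:

```python
def search(catalog, attributes, query):
    """Search algorithm, Steps 1-4: filter the catalog by the given
    attribute-value pairs, then match the free-text query against the
    name, algorithm, and function fields of the surviving components."""
    # Steps 1-2: keep components satisfying every given attribute.
    hits = [c for c in catalog
            if all(c.get(k) == v for k, v in attributes.items())]
    # Step 3: match the query against name, algorithm, and function.
    q = query.lower()
    # Step 4: return the candidate components for presentation.
    return [c for c in hits
            if any(q in str(c.get(f, "")).lower()
                   for f in ("name", "algorithm", "function"))]
```

Attribute filtering first (Steps 1-2) keeps the text-matching step cheap, since only the narrowed list is scanned.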
3.2.3 Downloading Components
Input: The chosen component
Output: The component is downloaded
Step 1: From the user-given criteria, form a query that can be executed on the catalog.
Step 2: Execute the query to locate the component in the catalog.
Step 3: Choose an option to save or view the file.
Step 4: Depending on the user's choice, the component is viewed or saved.
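The download steps can be sketched as below, assuming components are stored as files in a repository directory (the paths and file layout are assumptions for illustration):

```python
import os

def download(repo_dir, component_name, dest_dir, view=False):
    """Download algorithm, Steps 1-4: locate the component file,
    report its size (so the user can check free space), then either
    view its contents or save a copy to the user's system."""
    # Steps 1-2: locate the component in the repository.
    src = os.path.join(repo_dir, component_name)
    size = os.path.getsize(src)   # shown to the user before download
    with open(src, "rb") as f:
        data = f.read()
    # Steps 3-4: view the contents or save a copy, per user choice.
    if view:
        return data
    dest = os.path.join(dest_dir, component_name)
    with open(dest, "wb") as f:
        f.write(data)
    return dest
```

Surfacing `size` before the transfer matches the earlier requirement that the user sees the file size and decides based on available storage.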
4. RESULTS
5. CONCLUSION AND FUTURE SCOPE
There have been many attempts to classify reusable components using various
techniques. The proposed classification system takes advantage of the strengths
of each classification scheme while, ideally, avoiding their weaknesses.
The scheme applies the attribute-value and faceted schemes to different parts
of a component.
Future work on this classification scheme will be to refine it and
formalize it for implementation. Although a classification scheme has been developed, it
was not based on an extensive study of all existing components. To be effective, the
scheme should be enhanced to meet the interests of its users.
REFERENCES
[1] William B. Frakes and Thomas P. Pole, "An Empirical Study of Representation
Methods for Reusable Software Components", IEEE Transactions on Software
Engineering, vol. 20, no. 8, Aug. 1994, pp. 617-630.