This document describes a process for generating use case models from textual requirements. The process uses the EA-Miner tool to analyze textual requirements and extract information like functional concerns, RDL sentences, and a syntactically tagged document. This extracted information is used to derive initial candidate use cases, actors, and relationships. The candidate model is then refined by activities like removing undesirable use cases, completing abstraction names, adding new use cases/actors, and defining relationships between use cases. The overall goal is to reduce the time and effort required to produce requirements artifacts from textual specifications.
The document describes an Automatic Database Schema Generator tool that can generate a database schema from natural language textual requirements. It takes textual requirements as input, analyzes the text using natural language processing techniques like tokenization and part-of-speech tagging. It also parses a domain ontology related to the problem domain to help identify entities and attributes. The tool then extracts entities, attributes, and identifies primary and foreign keys to generate a relational database schema that can be used to develop the application database. The tool aims to automate the manual and iterative process of database schema design from requirements.
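As a rough illustration of the first analysis step this summary describes (tokenization and part-of-speech tagging to surface candidate entities), here is a minimal sketch using NLTK. The sample sentence and the nouns-only heuristic are assumptions for illustration, not the tool's actual extraction rules.

    import nltk
    # One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

    def candidate_entities(requirement):
        # Tokenize and POS-tag; nouns become candidate entities/attributes.
        tagged = nltk.pos_tag(nltk.word_tokenize(requirement))
        return [word for word, tag in tagged if tag.startswith("NN")]

    print(candidate_entities("Each student enrolls in one or more courses."))
    # -> ['student', 'courses']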
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been significant effort to analyze, design, and implement information systems that process information and data and solve various problems. On the one hand, the complexity of contemporary systems and the striking increase in the variety and volume of information have led to a great number of components and elements and to more complex structure and organization of information systems. On the other hand, it is necessary to develop systems that meet all of the stakeholders' functional and non-functional requirements. Since evaluating and assessing these requirements prior to the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture is available. One way to evaluate a software architecture is to create an executable model of it.
The present research assessed availability, taking repair, maintenance, and accident time parameters into consideration. Failures of both software and hardware components were considered in the architecture of the software systems. The authors used the Unified Modeling Language (UML) to describe the architecture easily; however, because UML is informal, they also used Colored Petri Nets (CPN) for the assessment. Finally, the researchers evaluated a CPN-based executable model of the architecture with CPN Tools.
Software size estimation at early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size is one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement software. The whole world is moving toward the object-oriented paradigm, so it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach is used to quantify classes, the logical building blocks of the object-oriented paradigm. In this paper, we propose a class point based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries based on a multidimensional view of data. Materialized views can significantly reduce the execution time of decision support queries. We perform a case study based on the TPC-H benchmark, which is representative of OLAP systems. We use a greedy approach to determine a good set of views to materialize. After finding the number of views, the class point approach is used to estimate the size of an OLAP system. The results of our approach are validated.
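For the greedy step the abstract mentions, a minimal sketch of the selection loop follows. It assumes a caller-supplied benefit(view, chosen) function (in the classic greedy formulation for view materialization, the benefit is the total query-cost reduction a view would add given what is already materialized); the names are illustrative, not the paper's implementation.

    def greedy_view_selection(views, benefit, k):
        # Each round, materialize the view with the largest marginal benefit.
        chosen = set()
        for _ in range(k):
            remaining = [v for v in views if v not in chosen]
            if not remaining:
                break
            chosen.add(max(remaining, key=lambda v: benefit(v, chosen)))
        return chosen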
The document discusses the design of an online examination system. It describes the various modules of the system including admin, instructor and student modules. It provides details on the functionality available to each type of user. It also discusses the technologies used to develop the system such as PHP for the backend, and MySQL for the database. UML diagrams including use case, class, sequence, and ER diagrams are presented to model and design different components of the system.
This presentation discusses the following topics:
Importance of Data Models
Basic Building Blocks
Business Rules
Translating Business Rules into Data Models
Evolution of Data Models
Hierarchical Data Model
Network Data Model
Relational Data Model
Entity Relational Model
Object Model
Summary
Followed by a Quiz
This presentation discusses the following topics:
What is XML?
Syntax of XML Document
DTD (Document Type Definition)
XML Schema
XML Query Language
XML Databases
Oracle JDBC
1. The document discusses mapping object-oriented software models to function point analysis. It proposes rules for counting function points based on the analysis phase models in the OOSE (object-oriented software engineering) methodology, including the use case model and analysis object model.
2. A tool called OOFP is proposed to measure function points from the requirements and analysis models in OOSE. The paper focuses on applying the tool and rules to the analysis phase models to identify transactional and data functions for function point counting.
3. A case study applies the proposed rules to example use case and analysis models from a course registration system to demonstrate identifying transaction and data functions for function point analysis.
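To make the counting concrete, here is a small sketch of the standard function point arithmetic this summary refers to, using the usual IFPUG average weights for the five function types; the example counts for a course-registration slice are assumptions for illustration.

    # IFPUG average weights for the five function types.
    WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

    def function_points(counts, tdi):
        # counts: {"EI": n, ...}; tdi: total degree of influence (0..70).
        ufp = sum(WEIGHTS[t] * n for t, n in counts.items())
        vaf = 0.65 + 0.01 * tdi   # value adjustment factor
        return ufp * vaf

    # Hypothetical course-registration counts: 5 inputs, 3 outputs, 2 queries, 2 files.
    print(function_points({"EI": 5, "EO": 3, "EQ": 2, "ILF": 2, "EIF": 0}, tdi=30))
    # -> 59.85 adjusted function points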
Reduced Software Complexity for E-Government Applications with ZEF Framework (TELKOMNIKA JOURNAL)
Dynamic change is unpredictable, keeps growing, and can happen anytime and anywhere; one thing that is always changing is government policy. This affects the software of information systems, causing replacement, modification, and enhancement. There is commonality and variability among software features in the Indonesian government. To manage it, we present an enhancement of Zuma's E-Government Framework (ZEF) that reduces software complexity. We enhance the ZEF Framework using SPLE and GORE approaches in order to improve traditional software development, so that complexity can be reduced even when change happens continuously. The measurement of software complexity relates to the functionality of the system and can be described with function points, because function points also capture logical software complexity. Preliminary results of this study show reduced software complexity measures, such as information processing size, technical complexity adjustment factors, and function points, in e-government applications.
This document summarizes computational analysis methods for determining expectation values commonly used in bioinformatics databases. It discusses tools like BLAST, FASTA, and databases like NCBI that allow querying and analyzing sequences. The expectation value provides the probability that a match could occur by chance, with lower values indicating higher quality matches. These tools and databases facilitate customizable extraction of data from sequences to enable further analysis and knowledge discovery in bioinformatics.
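For reference, the expectation value reported by tools such as BLAST follows the Karlin-Altschul statistics; for (ungapped) local alignments it takes the form

    E = K m n \, e^{-\lambda S}

where m and n are the lengths of the query and the database, S is the raw alignment score, and K and \lambda are parameters of the scoring system. With the normalized bit score S' this becomes E = m n \, 2^{-S'}; lower E-values mean the match is less likely to have arisen by chance, matching the summary above.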
Analysis and Design of Information Systems Financial Reports with Object Orie... (ijceronline)
Micro, Small, and Medium Enterprises (SMEs) are a group of businesses that have proven resistant to a wide range of economic crisis shocks, but in their operations financial management is still not transparent, and business finance is still mixed with personal finance; good financial management is therefore needed. In this research, analysis and design of a financial reporting information system serve as the basis for developing the system. The software development life cycle (SDLC) follows an object-oriented approach, using Unified Modelling Language (UML) notation as the tool. In the object-oriented approach, all system applications are viewed as a collection of objects, which allows organizations and end users to easily understand logical entities. The object-oriented approach also provides the benefit of code reuse and saves time in developing a quality product.
This document presents a framework for automatically generating entity-relationship (ER) diagrams from natural language text input. It involves five main modules: 1) text preprocessing and summary generation, 2) translating the summary to a Semantic Business Vocabulary and Rules (SBVR) format, 3) part-of-speech tagging, 4) extracting ER diagram requirements by identifying entities, relationships, and attributes, and 5) generating an XMI file that can be imported into a UML modeling tool to visualize the generated ER diagram. Keywords are extracted from the input text using term frequency, and sentences are scored and selected for the summary based on important keywords and nouns. The framework aims to reduce the complexity of manually creating ER diagrams by automating their generation from text.
WEB-BASED ONTOLOGY EDITOR ENHANCED BY PROPERTY VALUE EXTRACTION (IJwest)
The document describes a web-based ontology editor that was developed to make ontology building easier for non-experts. The editor uses several approaches, including being web-based to not require installation, limiting technical terms and functions, and extracting property-value pairs from web pages to assist with registering instances. It provides an intuitive graphical view and list views of ontologies, along with sample applications to demonstrate usage. The key contribution discussed is an approach for extracting candidate property-value pairs using bootstrapping and dependency parsing techniques, and having users select the correct values. The accuracy of this extraction method is evaluated.
This document discusses online feature selection (OFS) for data mining applications. It addresses two tasks of OFS: 1) learning with full input, where the learner can access all features to select a subset, and 2) learning with partial input, where only a limited number of features can be accessed for each instance. Novel algorithms are presented for each task, and their performance is analyzed theoretically. Experiments on real-world datasets demonstrate the efficacy of the proposed OFS techniques for applications in computer vision, bioinformatics, and other domains involving high-dimensional sequential data.
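As a rough illustration of the full-input task, a simplified truncation-style online learner is sketched below: after each mistake-driven update it keeps only the B largest-magnitude weights. This is a toy reduction of the idea, not the paper's exact algorithm (published OFS variants also project the weights onto a norm ball before truncating).

    import numpy as np

    def ofs_truncated_perceptron(X, y, B, eta=0.2):
        # Online learner that keeps at most B non-zero weights (labels in {-1, +1}).
        w = np.zeros(X.shape[1])
        for x, label in zip(X, y):
            if label * w.dot(x) <= 0:          # mistake -> perceptron-style update
                w += eta * label * x
                if np.count_nonzero(w) > B:    # truncate to the B strongest features
                    keep = np.argpartition(np.abs(w), -B)[-B:]
                    mask = np.zeros_like(w, dtype=bool)
                    mask[keep] = True
                    w[~mask] = 0.0
        return w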
An efficient approach for web query preprocessing (IAESIJEECS)
The emergence of Web technology has generated a massive amount of raw data by enabling Internet users to post their opinions, comments, and reviews on the web. Extracting useful information from this raw data can be very challenging, and search engines play a critical role in these circumstances. User queries are a central concern for search engines, so a preprocessing step is essential. In this paper, we present a framework for natural language preprocessing for efficient data retrieval, covering some of the processing required for effective retrieval, such as elongated word handling, stop word removal, and stemming. The manuscript starts by building a manually annotated dataset and then takes the reader through the detailed steps of the process. Experiments are conducted on individual stages of this process to examine the accuracy of the system.
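A minimal sketch of the three preprocessing steps named above (elongated word handling, stop word removal, stemming), using NLTK; the regex that collapses letter runs is an assumed heuristic, not the paper's exact rule.

    import re
    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer
    # One-time setup: nltk.download("stopwords")

    stemmer = PorterStemmer()
    stop = set(stopwords.words("english"))

    def preprocess(query):
        # Collapse runs of 3+ repeated letters, drop stop words, then stem.
        words = [re.sub(r"(.)\1{2,}", r"\1\1", w) for w in query.lower().split()]
        return [stemmer.stem(w) for w in words if w not in stop]

    print(preprocess("this product is sooooo amazingly goooood"))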
This document summarizes a research paper on developing a feature-based product recommendation system. It begins by introducing recommender systems and their importance for e-commerce. It then describes how the proposed system takes basic product descriptions as input, recognizes features using association rule mining and k-nearest neighbor algorithms, and outputs recommended additional features to improve the product profile. The paper evaluates the system's performance on recommending antivirus software features.
Feature selection or extraction is the most important task in Opinion Mining and Sentiment Analysis (OSMA) for calculating the polarity score. These scores are used to determine positive, negative, and neutral polarity about products, user reviews, user comments, etc., in social media, for decision making and business intelligence by individuals or organizations. In this paper, we perform an experimental study of the different feature extraction and selection techniques available for the opinion mining task. The study is carried out in four stages. First, data is collected from readily available sources. Second, pre-processing techniques are applied automatically using tools to extract terms and POS (parts-of-speech) tags. Third, different feature selection and extraction techniques are applied to the content. Finally, an empirical study analyzes sentiment polarity with the different features.
Fuzzy Rule Base System for Software Classification (ijcsit)
This document describes a fuzzy rule-based system for classifying Java applications using object-oriented metrics. Key features of the system include automatically extracting OO metrics from source code, a configurable set of fuzzy rules, and classifying software at both the application and class level. The system is designed to address limitations of existing OO metric tools by providing an automated, unified analysis and classification without requiring complex post-processing methods. The document outlines the system design, including subsystems for the fuzzy rules engine and extracting OO metrics, and defines membership functions and fuzzy rules for classification.
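To show the flavor of such a fuzzy rule base, here is a toy classifier over two OO metrics (CBO and WMC); the membership thresholds and rules are invented for illustration and are not the system's actual configuration.

    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def classify(cbo, wmc):
        low_cbo, high_cbo = tri(cbo, -1, 0, 8), tri(cbo, 6, 14, 100)
        low_wmc, high_wmc = tri(wmc, -1, 0, 20), tri(wmc, 15, 40, 200)
        # Mamdani-style rules: rule strength = min of antecedents; pick the strongest.
        rules = {
            "simple":   min(low_cbo, low_wmc),
            "complex":  min(high_cbo, high_wmc),
            "moderate": max(min(low_cbo, high_wmc), min(high_cbo, low_wmc)),
        }
        return max(rules, key=rules.get)

    print(classify(cbo=10, wmc=30))   # -> 'complex'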
A Methodology To Manage Victim Components Using CBO Measure (ijseajournal)
This document presents a methodology for managing victim components using the coupling between objects (CBO) measure. It defines several measures of software component reusability, including a weighted component measure and a depth of inheritance tree measure. These measures are calculated for components in a human resources (HR) portal application. The document identifies the business tier component as a potential victim component based on its low reuse count. It proposes using the CBO measure to identify highly coupled components that need reconfiguration to improve reusability; reconfiguring such components would make them less coupled and easier to reuse in other applications.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORK (ijseajournal)
Most of the currently available students' portal frameworks are tightly coupled. Recent research by the authors of this paper discussed how to distribute the concepts of the traditional students' portal framework and arrived at a distributed, interoperable framework. This paper realizes the distributed interoperable students' portal framework by developing a prototype based on Service Oriented Architecture (SOA). The prototype is validated using web service testing and compatibility testing.
Use Case Modeling in Software Development: A Survey and Taxonomy (Eswar Publications)
Identifying use cases is one of the most important steps in software requirement analysis. This paper makes a literature review of use cases and then presents six taxonomies for them. The first taxonomy is based on the level of functionality of a system in a domain. The second is based on primacy of functionality, and the third relies on essentialness of functionality of the system. The fourth taxonomy is concerned with supporting of functionality. The fifth is based on the boundary of functionality, and the sixth is related to the generalization/specialization relation. The use cases are then evaluated in a case study of a command-and-control police system. Several guidelines are recommended for developing use cases and refining them, based on practical experience obtained from the evaluation.
IRJET- A Review on Part-of-Speech Tagging on Gujarati Language (IRJET Journal)
This document reviews part-of-speech tagging methods for the Gujarati language. It discusses rule-based, stochastic, and hybrid techniques for POS tagging. Rule-based methods use linguistic rules but require extensive manual work. Stochastic methods like Hidden Markov Models, Maximum Entropy Markov Models, and Conditional Random Fields are more automated but can tag ungrammatical sentences. Hybrid methods combine rule-based and stochastic approaches to achieve high accuracy. The document evaluates different POS tagging methods for the challenges of tagging Gujarati text.
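Since the stochastic methods reviewed here (HMMs in particular) all hinge on decoding the most probable tag sequence, a minimal Viterbi sketch follows; the two-tag model and its probabilities are toy values, not trained Gujarati parameters.

    from math import log

    states = ["NOUN", "VERB"]
    start = {"NOUN": 0.7, "VERB": 0.3}
    trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
             "VERB": {"NOUN": 0.8, "VERB": 0.2}}
    emit = {"NOUN": {"dogs": 0.6, "bark": 0.1},
            "VERB": {"dogs": 0.1, "bark": 0.7}}

    def viterbi(words):
        # Track (log-probability, best path) per state, word by word.
        V = {s: (log(start[s]) + log(emit[s].get(words[0], 1e-6)), [s]) for s in states}
        for w in words[1:]:
            V = {s: max((V[p][0] + log(trans[p][s]) + log(emit[s].get(w, 1e-6)),
                         V[p][1] + [s]) for p in states)
                 for s in states}
        return max(V.values())[1]

    print(viterbi(["dogs", "bark"]))   # -> ['NOUN', 'VERB']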
The document describes the design of two software engineering case studies using Rational Rose:
1) A Student Mark Analysis System to allow students and faculty to view marks and generate report cards. Key modules include generating and distributing report cards, updating grades, and viewing grades. UML diagrams like use case, class, sequence, and deployment diagrams are developed.
2) An Online Quiz Management System to organize quiz programs and produce results. The system will be developed using UML components and offers reliability and efficiency.
Both case studies involve analyzing requirements, designing the system using UML diagrams in Rational Rose, and developing the necessary software engineering methodology and documentation for the projects.
This document describes a web application that can automatically generate Entity Relationship (ER) diagrams. It takes entity, attribute, and relationship details as input from the user and outputs an ER diagram. The proposed system has a 3-module architecture: 1) it accepts input from the user, 2) generates the ER diagram automatically based on the input, and 3) stores the output diagram. Experimental results demonstrate how diagrams are generated at different levels of complexity based on filtering of the input details. The automated generation of ER diagrams using this web application makes the process easier for users compared to traditional manual tools.
The document appears to be a lab manual for an Object Oriented Analysis and Design course. It includes instructions for 12 experiments using Rational Rose software to model various systems. The first experiment is on introducing Rational Rose and modeling an ATM system. The manual provides the aim, infrastructure requirements, modular description, and UML diagrams for the ATM system experiment. It also shows the results and concludes the ATM system design was implemented efficiently.
IRJET- Determining Document Relevance using Keyword Extraction (IRJET Journal)
This document describes a system that aims to search for and retrieve relevant documents from a large collection based on a user's query. It does this through three main components: keyword extraction, document searching, and a question answering bot. Keyword extraction is done using the TF-IDF algorithm to identify important words in documents. These keywords are stored in a database along with their TF-IDF weights. When a user submits a query, the system searches for documents containing keywords from the query and returns relevant results. It also includes a feedback mechanism for users to improve search accuracy over time. The goal is to deliver accurate search results quickly from large document collections.
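The TF-IDF scoring at the core of the keyword-extraction component can be sketched in a few lines; the whitespace tokenizer and the smoothing-free IDF are simplifications for illustration.

    import math
    from collections import Counter

    def tfidf_keywords(docs, top_k=5):
        # Score each word by term frequency times inverse document frequency.
        tokenized = [d.lower().split() for d in docs]
        df = Counter(w for doc in tokenized for w in set(doc))
        n = len(tokenized)
        keywords = []
        for doc in tokenized:
            tf = Counter(doc)
            scores = {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
            keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
        return keywords

    print(tfidf_keywords(["the cat sat on the mat",
                          "the dog chased the cat",
                          "birds sing"]))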
A SOFTWARE REQUIREMENT ENGINEERING TECHNIQUE USING OOADA-RE AND CSC FOR IOT B... (ijseajournal)
The Internet of Things (IoT) is one of the most trending technologies, with a wide range of applications. Here we focus on medical and healthcare applications of IoT. Such IoT applications are generally very complex, comprising many different modules, so great care has to be taken during their requirement engineering. Requirement engineering is the process of structuring all the requirements of the users. It is the base phase of software development and greatly affects the remaining phases: if the effort falls short here, it will greatly affect the quality of the end product. In this study we present an approach to improve the requirements engineering phase of IoT application development by using the Object Oriented Analysis and Design Approach (OOADA) along with Constraints Story Card (CSC) templates.
1) UCD-Generator is an application that uses natural language processing to automatically generate use case diagrams from plain English requirements.
2) It analyzes the text using LESSA, which performs tokenization, part-of-speech tagging, and meaning understanding to extract actors, actions, and objects.
3) It then generates use case diagrams in two phases - first extracting information, then drawing the diagrams based on that information. Experiments showed it could accurately generate diagrams for 85% of scenarios.
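The extraction phase described in point 2 (pulling actors, actions, and objects out of sentences) can be approximated with a dependency parse. The sketch below uses spaCy rather than LESSA, the paper's own technique, so treat it as an analogy, not the tool's implementation.

    import spacy

    # Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def actor_action_object(sentence):
        # Pull (subject, verb, object) triples as rough actor/use-case candidates.
        triples = []
        for tok in nlp(sentence):
            if tok.pos_ == "VERB":
                subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
                if subj and obj:
                    triples.append((subj[0].text, tok.lemma_, obj[0].text))
        return triples

    print(actor_action_object("The customer withdraws cash from the ATM."))
    # -> [('customer', 'withdraw', 'cash')]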
IRJET- Towards Efficient Framework for Semantic Query Search Engine in Large-... (IRJET Journal)
The document proposes a new framework for efficient semantic search in large datasets. It aims to improve understanding of short texts by enriching them with concepts and related terms from a probabilistic knowledge base. A deep learning model using stacked autoencoders is designed to learn features from the enriched short texts and encode them into binary codes, allowing similarity searches. Experiments show the new approach captures semantics better than existing methods and enables applications like short text retrieval and classification.
Reverse Engineering for Documenting Software Architectures, a Literature Review (Editor IJCATR)
Recently, much research in software engineering has focused on reverse engineering of software systems, which has become one of the major engineering trends for software evolution. The objective of this survey paper is to provide a literature review of the existing reverse engineering methodologies and approaches for documenting the architecture of software systems. The survey process was based on selecting the most common approaches that form the current state of the art in documenting software architectures. We discuss the limitations of these approaches, highlight the main directions for future research, and describe specific open issues for research.
1) The document discusses various ways that artificial intelligence can be applied to different phases of the software engineering lifecycle, including requirements specification, design, coding, testing, and estimation.
2) It provides examples of using techniques like natural language processing to clarify requirements, knowledge graphs to manage requirements information, and computational intelligence for requirements prioritization.
3) For design, the document discusses using intelligent agents to recommend patterns and designs to satisfy quality attributes from requirements and assist with assigning responsibilities to components.
Local Service Search Engine Management System LSSEMS (Yogesh, IJTSRD)
Local Services Search Engine Management System (LSSEMS) is a web-based application that helps users find servicemen in a local area, such as a maid, tuition teacher, or plumber, and it maintains data about them. The main purpose of LSSEMS is to systematically record, store, and update serviceman records. Kaushik Mishra, Aditya Sharma, Mohak Gund, "Local Service Search Engine Management System (LSSEMS)", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue, International Conference on Advances in Engineering, Science and Technology - 2021, May 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42462.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42462/local-service-search-engine-management-system-lssems/kaushik-mishra
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip (CSCJournals)
Due to the complexity and high performance requirements of multimedia applications, the design of embedded systems is subject to different types of design constraints, such as execution time, time to market, and energy consumption. Several joint software/hardware design (co-design) approaches have been proposed to help the designer find a good match between application and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system on chip, based on a dynamic step and a static step: the first uses dynamic profiling and the second uses the Design Trotter tools. Our approach is validated through 3D image synthesis.
IRJET- A Novel Approach Automatically Categorizing Software Technologies (IRJET Journal)
This document proposes an automatic approach called Witt to categorize software technologies based on their descriptions. Witt takes a sentence describing a technology as input and outputs a general category (e.g. integrated development environment) along with qualifying attributes. It applies natural language processing and the Levenshtein distance algorithm to compare string similarities and categorize technologies from large datasets. The system architecture first obtains data on software methodologies and labels. It then applies NLP and Levenshtein distance to find hypernyms and transform them into categories with attributes for classification.
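Since the categorization hinges on the Levenshtein distance for string similarity, the standard dynamic-programming form is worth showing; this is the textbook algorithm, not Witt's surrounding pipeline, and the sample strings are invented.

    def levenshtein(a: str, b: str) -> int:
        # Classic edit distance via a rolling DP row.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    print(levenshtein("Eclipse IDE", "Eclipse IDEs"))   # -> 1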
An Analysis on Query Optimization in Distributed Database (Editor IJMTER)
The query optimizer is a significant element in today's relational database management systems. This element is responsible for translating a user-submitted query, commonly written in a non-procedural language, into an efficient query evaluation program that can be executed against the database. This research paper describes the architecture and steps of query processing and optimization, along with their time and memory usage. The key goal of this paper is to understand the basic query optimization process and its architecture.
TECHNIQUES FOR COMPONENT REUSABLE APPROACH (cscpconf)
This document discusses techniques for component reuse using a component retrieval approach. It proposes using UML models stored in MDL file format to retrieve relevant software components based on structural information like class names and relationships. A tool called a "smart environment" is described that can search a repository of MDL files and source code based on class diagrams or use case diagrams to find the best matching components for reuse. Weights are assigned to different model elements to return search results in order of closest match. The approach aims to improve on keyword-based searching by matching design specifications.
Named Entity Recognition (NER) Using Automatic Summarization of Resumes (IRJET Journal)
This document discusses using natural language processing techniques like Named Entity Recognition (NER) and BERT to automatically summarize resumes and extract key information to assist in the hiring process. It aims to reduce hiring costs by streamlining the process of reviewing thousands of resumes. The proposed methodology uses spaCy to train an NER model to identify entities like skills and experiences. BERT is also utilized to generate contextualized representations of text that capture both left and right contexts. This allows more accurate prediction of entity types. The system would extract and classify information from resumes to provide summaries of candidate qualifications for quick review by employers.
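A minimal sketch of the NER step with spaCy's stock English model follows; the pretrained model only knows generic labels (PERSON, ORG, DATE, and so on), so resume-specific entities such as skills would require the custom training the document mentions. The sample text is invented.

    import spacy

    # Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Jane Doe worked at Acme Corp in Berlin from 2019 to 2023.")
    for ent in doc.ents:
        # Stock labels only; a SKILL label would need a custom-trained model.
        print(ent.text, ent.label_)   # e.g. 'Jane Doe PERSON', 'Acme Corp ORG'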
Semantic web based software engineering by automated requirements ontology ge... (IJwest)
This paper presents an approach for automated generation of a requirements ontology from UML diagrams in a service-oriented architecture (SOA). Its goal is to facilitate software engineering processes such as software design, software reuse, and service discovery. The proposed method is based on four conceptual layers: the first holds the requirements obtained from stakeholders; the second designs service-oriented diagrams from the data in the first layer and extracts their XMI code; the third contains a requirements ontology and a protocol ontology that semantically describe the behavior of services and the relationships between them; and the fourth standardizes the concepts in the ontologies of the previous layer. The generated ontology goes beyond a pure domain ontology because it captures the behavior of services as well as their hierarchical relationships. Experimental results on a set of UML4SOA diagrams in different scopes demonstrate the improvement of the proposed approach from several points of view, such as completeness of the requirements ontology, automatic generation, and consideration of SOA.
This document describes the development of an employee management system. It discusses:
1) The programming tools used - Microsoft Access for the database and C# with .NET Framework for the application. Access allows constructing relational databases while C# provides an object-oriented interface.
2) The database design, which includes 6 tables - one main employee table and 5 child tables for additional employee details like work history, time records, and contact information. The tables are related through primary and foreign keys.
3) The development process, which first analyzed user needs, designed the database structure, then constructed the graphical user interface in the application to interact with the database according to its structure.
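The parent/child key structure described in point 2 can be illustrated with a tiny relational sketch; Python's sqlite3 stands in for Access here, and the table and column names are invented, not the system's actual design.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE employee (
            emp_id INTEGER PRIMARY KEY,
            name   TEXT NOT NULL
        );
        CREATE TABLE work_history (          -- one of several child tables
            id     INTEGER PRIMARY KEY,
            emp_id INTEGER NOT NULL REFERENCES employee(emp_id),
            role   TEXT
        );
    """)
    con.execute("INSERT INTO employee VALUES (1, 'A. Smith')")
    con.execute("INSERT INTO work_history (emp_id, role) VALUES (1, 'Analyst')")
    print(con.execute("""
        SELECT e.name, w.role FROM employee e
        JOIN work_history w ON w.emp_id = e.emp_id
    """).fetchall())   # -> [('A. Smith', 'Analyst')]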
The document discusses use case modeling and diagrams. It defines a use case as a sequence of actions a system performs that yields an observable result for an actor. Use case diagrams depict the interactions between actors and the services (use cases) provided by the system. They help identify the classes needed for the system and provide a starting point for requirements, analysis, design, testing, and documentation. The example models the use cases for a bank that offers savings, checking, fixed deposit accounts and ATM services.
Automatic generation of business process models from user stories (IJECEIAES)
In this paper, we propose an automated approach to extract business process models from requirements presented as user stories. In agile software development, a user story is a simple description of a software feature, written in natural language from the user's point of view, and acceptance criteria are a list of specifications of how a new software feature is expected to operate. Our approach analyzes the set of acceptance criteria accompanying a user story, first to automatically generate the components of the business model, and then to produce the business model as an activity diagram, a unified modeling language (UML) behavioral diagram. We start by using natural language processing (NLP) techniques to extract the elements necessary to define rules for retrieving artifacts of the business model. These rules are then developed in the Prolog language and imported into Python code. The proposed approach was evaluated on a set of use cases using different performance measures. The results indicate that our method is capable of generating correct and accurate process models.
Model-Driven Architecture for Cloud Applications Development, A survey (Editor IJCATR)
Model Driven Architecture and cloud computing are among the most important paradigms in software service engineering nowadays. As cloud computing continues to gain traction, its dynamic usage introduces more issues and challenges for many systems. The Model Driven Architecture (MDA) approach to development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible, and agile.
This paper aims to survey and analyze the research issues and challenges that have been emerging in cloud computing applications, with a focus on using Model Driven Architecture (MDA) development. We discuss the open research issues and highlight future research problems.
General Methodology for developing UML models from UIijwscjournal
In the recent past, every discipline and every industry had its own methods of developing products, whether software development, mechanics, construction, psychology, and so on. These demarcations work fine as long as the requirements stay within one discipline. However, if a project extends over several disciplines, interfaces have to be created and coordinated between the methods of those disciplines. Performance is an important quality aspect of web services because of their distributed nature, so predicting the performance of web services during the early stages of software development is significant. In industry, a prototype of these applications is developed during the analysis phase of the Software Development Life Cycle (SDLC), whereas performance models are generated from UML models, and methodologies for predicting performance from UML models are available. Deriving UML models from the prototype's user interface would therefore enable performance prediction at that early stage. Hence, this paper presents a methodology for developing a Use Case model and an Activity model from the User Interface, illustrated with a case study on Amazon.com; a rough sketch of such a mapping follows below.
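The paper's concrete mapping rules are not reproduced in this summary, so the sketch below only illustrates the general idea, under the assumption that interactive UI elements (forms, buttons, links) become candidate use cases for the site's user:

```python
# Illustrative only: derive candidate use cases from an inventory of UI
# elements. The element inventory and the mapping rule are assumptions,
# not the paper's actual methodology.
INTERACTIVE_KINDS = {"button", "link", "form"}

def candidate_use_cases(ui_elements: list[dict]) -> list[str]:
    """Each interactive element labelled with an action becomes a candidate use case."""
    return [e["label"] for e in ui_elements if e["kind"] in INTERACTIVE_KINDS]

# A few plausible elements from Amazon.com's home page:
amazon_home = [
    {"kind": "form",   "label": "Search products"},
    {"kind": "button", "label": "Add to cart"},
    {"kind": "link",   "label": "Proceed to checkout"},
    {"kind": "image",  "label": "Logo"},  # non-interactive: ignored
]
print(candidate_use_cases(amazon_home))
# -> ['Search products', 'Add to cart', 'Proceed to checkout']
```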
Similar to Generating requirements analysis models from textual requirements
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and traditional and non-traditional security are explored and explained. Using Mackinder's Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, the study examines China's role in Central Asia. It adheres to an empirical epistemological method, takes care to remain objective, and critically analyzes primary and secondary research documents to elaborate the role of China's geoeconomic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, a success that may be attributed to the effective utilisation of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations, recounting the team's journey into solving complex build-cache problems that affect Gradle builds. By walking through the challenges and solutions encountered along the way, the talk demonstrates what is possible for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on the environment and human health; the need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal and treatment methods of e-waste; mechanism of extraction of precious metals from leaching solution; global scenario of e-waste; e-waste in India; case studies.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is object orientation? What is OO development? OO themes; evidence for the usefulness of OO development; OO modeling history. Modeling as a design technique: modeling, abstraction, the three models. Class Modeling: object and class concepts, link and association concepts, generalization and inheritance, a sample class model, navigation of class models, and UML diagrams.
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data Modeling Concepts, Object-Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, Class-Based Modeling, Creating a Behavioral Model.
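As a toy illustration of the class modeling concepts just listed (classes, links and associations, generalization and inheritance), the sketch below renders a tiny sample class model as plain Python; it is not taken from the course material:

```python
# Classes, an association (Customer HAS Accounts), and generalization
# (SavingsAccount IS-AN Account), expressed as ordinary Python classes.
from dataclasses import dataclass, field

@dataclass
class Account:                      # base class (generalization target)
    number: str
    balance: float = 0.0

@dataclass
class SavingsAccount(Account):      # inheritance
    interest_rate: float = 0.02

@dataclass
class Customer:                     # association: a customer holds accounts
    name: str
    accounts: list[Account] = field(default_factory=list)

alice = Customer("Alice")
alice.accounts.append(SavingsAccount(number="SA-001", balance=500.0))
```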
Batteries: Introduction; types of batteries; discharging and charging of a battery; characteristics of a battery; battery rating; various tests on a battery. Primary battery: silver button cell. Secondary battery: Ni-Cd battery. Modern battery: lithium-ion battery. Maintenance of batteries; choices of batteries for electric vehicle applications.
Fuel Cells: Introduction; importance and classification of fuel cells; description, principle, components, and applications of fuel cells: the H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cells.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
The International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, artificial intelligence, and machine learning. The conference seeks substantial contributions across all key domains of NLP, AI, and machine learning and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network