This document proposes an integrated framework for IDEF method-based simulation model design and development. The framework uses IDEF0 for functional modeling, IDEF3 for process modeling, and IDEF1X for data modeling. A common data model is constructed from these IDEF models and then multiple simulation models are automatically generated from the data model using a database-driven approach. The framework aims to improve knowledge reuse, communication, and model maintainability. It is evaluated using a semiconductor fabrication case study. The case study shows the framework can help improve simulation project processes by leveraging descriptive IDEF models and a relational database.
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ... (acijjournal)
Refactoring is applied to software artifacts to improve their internal structure while preserving their external behavior. Refactoring is an uncertain process, and it is difficult to assign units of measurement to it; the amount of refactoring that can be applied to source code depends on the skills of the developer. In this research, we treat refactoring as a quantity measured on an ordinal scale. We propose a model for determining the degree of refactoring opportunities in given source code. The model is applied to three projects collected from a company. UML diagrams are drawn for each project, and the values of source-code metrics that are useful in determining code quality are calculated for each UML diagram. Based on the nominal values of these metrics, each relevant UML diagram is placed on an ordinal scale. The machine learning tool Weka is used to analyze the dataset produced by the three projects, imported as an ARFF file.
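As a minimal sketch of the pipeline this abstract describes (the metric names, thresholds, and scale are illustrative assumptions, not the paper's actual model), the step from nominal metric values to an ordinal scale and an ARFF file for Weka could look like:

```python
# Hypothetical sketch: map raw source-code metric values (LOC, WMC, CBO are
# assumed example metrics) to an ordinal refactoring-opportunity scale and
# export the dataset in the ARFF format that Weka loads.

def ordinal_level(loc, wmc, cbo):
    """Rate refactoring opportunity as low/medium/high from three metrics."""
    score = (loc > 200) + (wmc > 20) + (cbo > 10)  # exceeded thresholds (0..3)
    return ["low", "medium", "high"][min(score, 2)]

def to_arff(rows, path="projects.arff"):
    """Write (loc, wmc, cbo) rows with their ordinal label as an ARFF file."""
    lines = [
        "@relation refactoring_opportunities",
        "@attribute loc numeric",
        "@attribute wmc numeric",
        "@attribute cbo numeric",
        "@attribute opportunity {low,medium,high}",
        "@data",
    ]
    for loc, wmc, cbo in rows:
        lines.append(f"{loc},{wmc},{cbo},{ordinal_level(loc, wmc, cbo)}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

rows = [(350, 25, 12), (80, 5, 2), (220, 18, 11)]  # fabricated example values
dataset = to_arff(rows)
```

The resulting `projects.arff` can be opened directly in the Weka Explorer for classification or clustering over the ordinal `opportunity` attribute.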
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance responsiveness and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses the performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, the challenges faced during the life cycle of the project, and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time are made through statistical analysis. The paper concludes with an analysis of the results.
Reusability Metrics for Object-Oriented System: An Alternative Approach (Waqas Tariq)
Object-oriented metrics play an important role in ensuring the desired quality and have been widely applied to practical software projects. The benefits of object-oriented software development are increasingly leading to the development of new measurement techniques, and assessing reusability is more and more of a necessity. Reusability is a key element in reducing the cost and improving the quality of software. Generic programming helps us achieve reusability through C++ templates, which support the development of reusable software modules and help identify the effectiveness of this reuse strategy. The advantage of defining metrics for templates is the possibility of measuring the reusability of software components and identifying the most effective reuse strategy. Such metrics are particularly useful when an organization is adopting a new technology for which established practices have yet to be developed. Many researchers have studied reusability metrics [2, 9, 3, 4]. In this paper we propose four new independent metrics, Number of Template Children (NTC), Depth of Template Tree (DTT), Method Template Inheritance Factor (MTIF), and Attribute Template Inheritance Factor (ATIF), to measure the reusability of object-oriented systems.
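Two of the proposed metrics can be sketched over a simple child map of a template-class hierarchy. The sketch below is an editor's illustration, not the paper's definitions or tooling; the hierarchy itself is hypothetical, and the exact counting rules in the paper may differ.

```python
# Illustrative sketch: compute NTC and DTT over a class-template inheritance
# map, where tree[parent] lists the direct template subclasses of `parent`.

def ntc(tree, cls):
    """Number of Template Children: direct subclasses of `cls`."""
    return len(tree.get(cls, []))

def dtt(tree, root):
    """Depth of Template Tree: longest path from `root` down to a leaf."""
    children = tree.get(root, [])
    if not children:
        return 0
    return 1 + max(dtt(tree, child) for child in children)

# Hypothetical hierarchy: Container<T> is specialized by Sequence<T> and
# Associative<T>; Sequence<T> is specialized by Vector<T> and Deque<T>.
hierarchy = {
    "Container": ["Sequence", "Associative"],
    "Sequence": ["Vector", "Deque"],
}

children_of_container = ntc(hierarchy, "Container")  # 2
tree_depth = dtt(hierarchy, "Container")             # 2
```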
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes rapidly, with new servers, services, connections, and ports added frequently, at times daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the increasing number of vulnerabilities and exploits, coupled with the continual evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. In this paper a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter called unified NVA), is proposed. It is derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is a discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon known as the “tyranny of the dominant decomposition”. Requirements-level aspects are responsible for producing scattered and tangled descriptions of requirements in the requirements document. Validation of requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, thereby reducing development and maintenance costs and establishing an approximately correct estimate of the effort and completion time of the project. In this paper, we present a framework to validate the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises high-level and low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements through requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: a viewpoint-based approach (AORE with ArCaDe) and a lexical-analysis-based approach (Theme/Doc). The results of the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality within the context of agile software development is extremely important and useful, not only to improve the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also to support communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or reject each iteration within the software life cycle. This article introduces a novel solution called the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a single multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the PQEM method was carried out with two iterations of a web and mobile application in the healthcare domain. A single measure makes it possible to observe the evolution of the quality level reached across the iterations of the product.
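The aggregation idea can be illustrated with a toy computation. This is an editor's sketch in the spirit of PQEM, not the method's actual formula: the characteristic names come from ISO/IEC 25010, but the coverage ratios, weights, and the weighted mean are all assumed for illustration.

```python
# Hypothetical aggregation: each quality characteristic gets a quality-coverage
# ratio in [0, 1]; a weighted mean collapses them into one value per iteration,
# so iterations can be compared at a glance.

def quality_level(coverage, weights=None):
    """Aggregate per-characteristic coverage ratios into a single score."""
    if weights is None:
        weights = {c: 1.0 for c in coverage}   # unweighted by default
    total = sum(weights[c] for c in coverage)
    return sum(coverage[c] * weights[c] for c in coverage) / total

# Fabricated example data for two iterations of a product.
iteration_1 = {"functional_suitability": 0.9, "reliability": 0.7, "usability": 0.8}
iteration_2 = {"functional_suitability": 0.95, "reliability": 0.85, "usability": 0.8}

q1 = quality_level(iteration_1)
q2 = quality_level(iteration_2)  # q2 > q1: quality improved across iterations
```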
Maintaining the quality of software is a major challenge in the software development process. Software inspections, which use methods like structured walkthroughs and formal code reviews, involve careful examination of every aspect and stage of software development. In Agile software development, refactoring helps to improve software quality; refactoring is a technique to improve the internal structure of software without changing its behaviour. After much study of ways to improve software quality, our research proposes an object-oriented software metric tool called “MetricAnalyzer”. The tool has been tested on different codebases and has proven to be very useful.
THE REMOVAL OF NUMERICAL DRIFT FROM SCIENTIFIC MODELS (IJSEA)
Computer programs often behave differently under different compilers or in different computing
environments. Relative debugging is a collection of techniques by which these differences are analysed.
Differences may arise because of different interpretations of errors in the code, because of bugs in the
compilers or because of numerical drift, and all of these were observed in the present study. Numerical
drift arises when small and acceptable differences in values computed by different systems are
integrated, so that the results drift apart. This is well understood and need not degrade the validity of the
program results. Coding errors and compiler bugs may degrade the results and should be removed. This
paper describes a technique for the comparison of two program runs which removes numerical drift and
therefore exposes coding and compiler errors. The procedure is highly automated and requires very little
intervention by the user. The technique is applied to the Weather Research and Forecasting model, the
most widely used weather and climate modelling code.
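The core classification step can be sketched as follows. This is an assumed, simplified scheme for illustration, not the actual relative-debugging tooling used for the WRF model: in the real technique, the reference run's state would also be copied into the other run after each in-tolerance step, so rounding differences cannot integrate over time.

```python
# Simplified illustration: compare two recorded value traces step by step
# and flag only the steps whose difference exceeds an acceptable-drift
# tolerance. In-tolerance differences are treated as numerical drift and
# ignored; out-of-tolerance differences expose coding or compiler errors.

def drift_free_diff(run_a, run_b, tol=1e-6):
    """Indices where two runs diverge beyond acceptable numerical drift."""
    return [i for i, (x, y) in enumerate(zip(run_a, run_b))
            if abs(x - y) > tol]

run_a = [1.0, 2.0, 3.0, 4.0]        # reference run
run_b = [1.0, 2.0000001, 3.5, 4.0]  # step 1 drifts (tolerated), step 2 differs
errors = drift_free_diff(run_a, run_b)  # [2]
```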
A FRAMEWORK STUDIO FOR COMPONENT REUSABILITY (cscpconf)
The deployment of a software product requires a considerable amount of time and effort. In order to increase the productivity of software products, reusability strategies have been proposed in the literature; however, effective reuse is still a challenging issue. This paper presents a framework studio for effective component reusability, which provides the selection of components from the framework studio and the generation of source code based on stakeholders' needs. The framework studio is implemented using Swing, integrated into the NetBeans IDE, which helps in faster generation of the source code.
A Survey of Functional Verification Techniques (IJSRD)
In this paper, we present a survey of various techniques used in the functional verification of industrial hardware designs. Although the use of formal verification techniques has been increasing over time, there is still a need for an immediate practical solution, resulting in increased interest in hybrid verification techniques. Hybrid techniques combine formal and informal (traditional simulation-based) techniques to take advantage of both worlds. A typical hybrid technique aims to address the verification bottleneck by enhancing state-space coverage.
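The coverage-enhancement idea can be made concrete with a toy model. This is a generic, assumed illustration rather than any technique from the survey: the 2-bit-counter "design" and both exploration strategies are hypothetical.

```python
# Toy illustration of hybrid verification: states visited by random simulation
# are combined with states enumerated by a bounded exhaustive (formal-style)
# search, so their union covers more of the state space than either alone.

import random

def step(state, enable):
    """A 2-bit counter: increments modulo 4 when the enable input is set."""
    return (state + enable) % 4

def simulate(steps, seed=0):
    """Random simulation: states reached along one trace from reset."""
    rng = random.Random(seed)
    state, visited = 0, {0}
    for _ in range(steps):
        state = step(state, rng.randint(0, 1))
        visited.add(state)
    return visited

def formal_bounded(depth):
    """Bounded exhaustive exploration: every input sequence up to `depth`."""
    frontier, visited = {0}, {0}
    for _ in range(depth):
        frontier = {step(s, e) for s in frontier for e in (0, 1)}
        visited |= frontier
    return visited

# The hybrid coverage is the union of what each technique reached.
hybrid_coverage = simulate(3) | formal_bounded(1)
```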
International Journal of Engineering Research and Development (IJERD Editor)
• Electrical, Electronics and Computer Engineering,
• Information Engineering and Technology,
• Mechanical, Industrial and Manufacturing Engineering,
• Automation and Mechatronics Engineering,
• Material and Chemical Engineering,
• Civil and Architecture Engineering,
• Biotechnology and Bio Engineering,
• Environmental Engineering,
• Petroleum and Mining Engineering,
• Marine and Agriculture engineering,
• Aerospace Engineering.
SOFTWARE TESTING: ISSUES AND CHALLENGES OF ARTIFICIAL INTELLIGENCE & MACHINE ... (ijaia)
The history of Artificial Intelligence and Machine Learning dates back to the 1950s. In recent years, there has been an increase in the popularity of applications that implement AI and ML technology. As with traditional development, software testing is a critical component of an efficient AI/ML application. However, the development methodology used in AI/ML varies significantly from traditional development, and owing to these variations, numerous software testing challenges occur. This paper aims to recognize and explain some of the biggest challenges that software testers face in dealing with AI/ML applications. The study has key implications for future research: each of the challenges outlined here is ideal for further investigation and has great potential to shed light on the way to more productive software testing strategies and methodologies for AI/ML applications.
Reusability is one of the best ways to increase development productivity and the maintainability of applications. One must first search for well-tested, reusable software components. Application software developed by one programmer can prove useful to others as a component, which shows that code specific to one application's requirements can also be reused in other projects with similar requirements. The main aim of this paper is to propose an approach for identifying reusable modules: a process that takes source code as input and helps in deciding which software artefacts should be reused.
Learning from Human Repairs Through the Exploitation of Software Repositories (ijseajournal)
ABSTRACT
Software systems drive our phones, cars, banks, cities, and more. This places a heavy load on the software industry, because all these systems must be continuously updated, corrected, and extended as users and consumers express new needs, and high-quality software is now perceived as a "must have" rather than a "should have". Developers have to do their best to ensure that their code has minimal defects.
As technologies evolve rapidly, software suffers from some major problems. First, software maintainers have to deal with the past (past languages, existing systems, old technologies); maintenance is often misunderstood and treated as a punishment. Second, developers have to ensure good software quality. To deal with these problems, approaches have been proposed to support software maintenance and reduce the effort of developers. One commonly adopted approach is the use of rules to ensure source-code quality and ease maintenance activities.
In this paper we propose an approach to ease corrective maintenance tasks and help improve software quality by exploiting the human knowledge available in software repositories. This approach supports the remediation of bad situations, for example by reducing the time spent reading and understanding code and determining how it can be changed.
An Empirical Study of the Improved SPLD Framework using Expert Opinion Technique (IJEACS)
Due to the growing need for high-performance and low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time, and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest methodologies is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful, high-quality products with shorter time-to-market and minimal development and maintenance costs. This research paper is a validation of our proposed framework, Improved Software Product Line (ISPL), using the Expert Opinion Technique. An extensive survey, based on a set of questionnaires covering various aspects and sub-processes of the ISPLD Framework, was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements over several aspects of contemporary SPL frameworks.
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation requires ever more performance from its algorithms, and new massively parallel architectures suit these algorithms well: they are known for offering high performance and power efficiency. Unfortunately, since parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and to take advantage of architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. In order to keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to achieve better performance in the regenerated code. Hence, this work keeps model and code coherent while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the context of GPUs. The example uses a transformation chain from UML-MARTE models to OpenCL code.
Using Model-Driven Engineering for Decision Support Systems Modelling, Implem... (CSCJournals)
Following the principle of "everything is an object", software development engineering has moved towards the principle of "everything is a model", through Model-Driven Engineering (MDE). Its implementation is based on models and their successive transformations, which make it possible to go from the requirements specification all the way to the code's implementation. This engineering approach is used in the development of information systems, including Decision-Support Systems (DSS). Here we use MDE to propose a DSS development approach, using the Multidimensional Canonical Partitioning (MCP) design approach and a design pattern. We also use model transformation in order to obtain not only implementation code, but also data warehouse feeds.
Functional and Information Modeling of Production Using IDEF MethodsMandar Trivedi
Suitable CASE tools have been developed for modeling processes. In building this process, the
IDEF0 standard for functional modeling is used through the BPWin tool. The family of
integrated IDEF methods provides the basic tools of several modern strategies and methodologies of
business process improvement. This paper details the functional and informational model of ″Investment
building of production facilities″ using the graphical language IDEF0, i.e., the CASE BPWin tool. We also present
a context diagram, an information model, and a decomposition diagram of production-investment
building.
A Survey on Design Pattern Detection ApproachesCSCJournals
Design patterns play a key role in software development process. The interest in extracting design pattern instances from object-oriented software has increased tremendously in the last two decades. Design patterns enhance program understanding, help to document the systems and capture design trade-offs.
This paper provides the current state of the art in design patterns detection. The selected approaches cover the whole spectrum of the research in design patterns detection. We noticed diverse accuracy values extracted by different detection approaches. The lessons learned are listed at the end of this paper, which can be used for future research directions and guidelines in the area of design patterns detection.
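The structural matching that many of the surveyed detection approaches perform can be illustrated with a toy heuristic. This is a sketch, not any surveyed tool's algorithm: the class names below are invented, and real detectors rely on much richer graph matching, metrics, and learning. Here, a class is flagged as a Singleton candidate if it holds a class-level `_instance` attribute and an accessor method that returns it.

```python
import ast

SOURCE = '''
class Config:
    _instance = None
    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class Plain:
    pass
'''

def detect_singletons(source):
    """Flag classes matching a crude structural Singleton signature."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            # Class-level slot holding the single instance.
            has_slot = any(isinstance(s, ast.Assign) and
                           any(getattr(t, "id", "") == "_instance"
                               for t in s.targets)
                           for s in node.body)
            # Accessor method by conventional name.
            has_accessor = any(isinstance(s, ast.FunctionDef) and
                               s.name in ("instance", "get_instance")
                               for s in node.body)
            if has_slot and has_accessor:
                found.append(node.name)
    return found
```

Purely name-based signatures like this one are exactly why the survey reports such diverse accuracy values: the heuristic misses unconventional implementations and can flag false positives.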
PHP modernization approach generating KDM models from PHP legacy codejournalBEEI
With the rise of new web technologies such as Web 2.0, jQuery, and Bootstrap, modernizing legacy web systems to benefit from the advantages of the new technologies is increasingly relevant. The migration of a system from one environment to another is a time- and effort-consuming process; it involves a complete rewrite of the application adapted to the target platform. To realize this migration in an automated and standardized way, many approaches have tried to define standardized engineering processes. Architecture-Driven Modernization (ADM) defines an approach to standardize and automate the reengineering process. We defined an ADM approach to represent PHP web applications at the highest level of abstraction models, using software artifacts as the entry point. This paper describes the extraction process, which permits discovering and understanding the legacy system and generating models that represent the system in an abstract way.
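The discovery step of such an extraction process can be sketched with a deliberately simplified stand-in. This is not KDM and not the paper's tooling (the PHP snippet and model shape are invented for illustration): it merely shows the idea of scanning legacy source and recording its structural elements in an abstract model, where real ADM tooling would build full KDM instances.

```python
import re

PHP_SOURCE = '''<?php
class Cart {
    function addItem($id) { /* ... */ }
    function total() { /* ... */ }
}
function render_page() { /* ... */ }
?>'''

def extract_model(src):
    """Toy discovery step: classes and functions into a KDM-like dict."""
    model = {"classes": {}, "functions": []}
    # Capture each class body up to its closing brace at column 0.
    for cls in re.finditer(r'class\s+(\w+)\s*\{(.*?)\n\}', src, re.S):
        methods = re.findall(r'function\s+(\w+)', cls.group(2))
        model["classes"][cls.group(1)] = methods
    # Top-level functions are those not found inside any class body.
    in_classes = {m for ms in model["classes"].values() for m in ms}
    all_funcs = re.findall(r'function\s+(\w+)', src)
    model["functions"] = [f for f in all_funcs if f not in in_classes]
    return model
```

A real extractor would use a PHP parser rather than regular expressions, but the output plays the same role: an abstract representation from which higher-level models can be generated.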
A sustainable procedural method of software design process improvementsnooriasukmaningtyas
In practice, the software process is an intermediate phase for enhancing and improving the design of different types of software products; it helps developers convert the specified requirements into prototypes that implement the design in reality. The objective of this paper is to provide software developers, designers, and software engineers who work in small companies with a standards-based process improvement using a procedural-method technique, including detailed steps for designing small software systems in their companies. The method used in this paper includes: 1) analyzing four different types of design processes commonly used by industry, such as CMMI, the conventional software process in ISO 19759, and generic and engineering design processes; 2) mapping between those four design processes; 3) collecting the dispersed design concepts proposed by those four processes; 4) proposing a sustainable procedural method of software design process improvement; and 5) illustrating the applicability of the proposed approach using a template-based implementation. The primary result of this study is a guideline procedure with detailed steps for software design process improvement, to help and guide developers in small companies in analyzing and designing small-scale software with limited cost and duration. In conclusion, this paper proposes a method to improve the design process for different kinds of software systems using a template-based implementation to reduce the cost, effort, and time needed in the implementation phase in small companies. A template-based implementation helps system and software engineers use the template easily in their small companies, because most of the time those engineers are responsible for analyzing, designing, implementing, and testing their software systems during the whole software life cycle.
A SOFTWARE REQUIREMENT ENGINEERING TECHNIQUE USING OOADA-RE AND CSC FOR IOT B...ijseajournal
The Internet of Things (IoT) is one of the most trending technologies, with a wide range of applications. Here we focus on medical and healthcare applications of IoT. Generally, such IoT applications are very complex, comprising many different modules; thus a lot of care has to be taken during the requirement engineering of IoT applications. Requirement engineering is the process of structuring all the requirements of the users. It is the base phase of software development and greatly affects the rest of the phases: if the effort falls short here, it will greatly affect the quality of the end product. In this study we present an approach to improve the requirements engineering phase of IoT application development by using the Object-Oriented Analysis and Design Approach (OOADA) along with Constraints Story Card (CSC) templates.
Automatic model transformation on multi-platform system development with mode...CSITiaesprime
Several difficulties commonly arise during the software development process. Among them are the lengthy technical process of developing a system, the limited number and technical capabilities of human resources, the possibility of bugs and errors during the testing and implementation phase, dynamic and frequently changing user requirements, and the need for a system that supports multiple platforms. Rapid application development (RAD) is the software development life cycle (SDLC) that emphasizes the production of a prototype in a short amount of time (30-90 days). This study discovered that implementing a model-driven architecture (MDA) approach in the RAD method can accelerate the model design and prototyping stages. The goal is to accelerate the SDLC process. It took roughly five weeks to construct the system by applying all of the RAD stages; this time frame does not include iteration and the cutover procedure. During the prototype test, there were no errors in the create, read, update, and delete (CRUD) procedure. It was demonstrated that automatic transformation in MDA can shorten the RAD phases for designing the model and developing an early prototype, reduce code errors in standard processes like CRUD, and construct a system that supports multiple platforms.
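The CRUD error reduction claimed above comes from generating standard operations mechanically rather than writing them by hand. The model-to-text step can be sketched as below; this is an illustrative reduction under assumed names (the entity model and SQL shape are invented, not the study's transformation), showing how a platform-independent model is turned into boilerplate code automatically.

```python
# Hypothetical platform-independent entity model.
model = {"entity": "Customer", "fields": ["id", "name", "email"]}

def generate_crud(m):
    """Generate parameterized CRUD SQL from an entity model."""
    table, cols = m["entity"].lower(), m["fields"]
    return {
        "create": f"INSERT INTO {table} ({', '.join(cols)}) "
                  f"VALUES ({', '.join('?' for _ in cols)})",
        "read":   f"SELECT {', '.join(cols)} FROM {table} WHERE id = ?",
        "update": f"UPDATE {table} SET "
                  f"{', '.join(c + ' = ?' for c in cols if c != 'id')} "
                  f"WHERE id = ?",
        "delete": f"DELETE FROM {table} WHERE id = ?",
    }

sql = generate_crud(model)
```

Because every entity goes through the same generator, a correct template yields correct CRUD code for all of them, which is why the prototype test found no errors in those procedures.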
MODEL DRIVEN WEB APPLICATION DEVELOPMENT WITH AGILE PRACTICESijseajournal
Model-driven development is an effective method due to benefits such as code transformation, increased
productivity, and reduced possibility of human error. Meanwhile, agile software development
increases software flexibility and customer satisfaction through an iterative method. Can these two
development approaches be combined to develop web applications efficiently? What are the challenges, and
what are the benefits of this approach? In this paper, we answer these two crucial questions: combining
model-driven development and agile software development results in not only fast development and
ease of user interface design but also efficient job tracking. We also define an agile model-based
approach for web applications, and an implementation study has been carried out to support the answers we
give to these two crucial questions.
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA...ijseajournal
During the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some are half developed, some are major developed, some are minor developed, and some are not developed yet. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget of the software; on the other hand, rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activities that helps software project managers accept or reject changes during the software development phase. This paper extends our previous work on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows improvement in estimation accuracy over current change effort estimation models.
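The intuition behind state-aware estimation can be sketched as follows. The weights below are illustrative assumptions, not the paper's calibrated values: the idea is simply that the effort to change a class artifact scales with how far its development has progressed, mirroring the inconsistent artifact states the abstract describes.

```python
# Illustrative multipliers per artifact development state (assumed values).
STATE_WEIGHT = {
    "not_developed": 0.2,    # cheap to change: little exists yet
    "minor_developed": 0.5,
    "half_developed": 1.0,
    "major_developed": 1.5,
    "fully_developed": 2.0,  # costly: rework plus regression testing
}

def change_effort(base_hours, artifact_states):
    """Total estimated effort for a change touching several artifacts."""
    return sum(base_hours * STATE_WEIGHT[s] for s in artifact_states)

# A change touching one finished and one half-built class artifact:
estimate = change_effort(8, ["fully_developed", "half_developed"])
```

An estimate like this gives a project manager a quantitative basis for the accept-or-reject decision, rather than a flat per-change cost that ignores artifact states.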
FUNCTIONAL AND INFORMATIONAL MODEL OF EXPERT SPECIALIZATION USING IDEF STANDARDMandar Trivedi
Suitable CASE tools have been developed for the modeling process. In the course of
building this process, the IDEF0 standard for functional modeling is used, realized through the
BPWin tool. The family of integrated IDEF methods provides the basic tools of several modern
strategies and methodologies of business process improvement, such as BPR, CPI,
IPD, and TQM. This paper gives the functional and informational model of ″Process of expert
specialization of employees in education″ using the graphical language IDEF0, i.e., the CASE
BPWin tool.
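An IDEF0 functional model is built from activity boxes whose arrows play four standard roles, abbreviated ICOM: Inputs enter from the left, Controls from the top, Outputs leave to the right, and Mechanisms attach from below. A minimal data structure capturing that convention might look like this; the example arrow labels are illustrative, loosely following the abstract's domain rather than the paper's actual diagrams.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """An IDEF0 activity box with its four ICOM arrow roles."""
    name: str
    inputs: list = field(default_factory=list)      # left side
    controls: list = field(default_factory=list)    # top
    outputs: list = field(default_factory=list)     # right side
    mechanisms: list = field(default_factory=list)  # bottom

# Illustrative top-level (A0) activity:
a0 = Activity(
    name="Expert specialization of employees in education",
    inputs=["employee records"],
    controls=["education regulations"],
    outputs=["specialized expert"],
    mechanisms=["training staff", "BPWin tool"],
)
```

Decomposition diagrams then break such a box into child activities whose ICOM arrows must balance with the parent's, which is the consistency rule tools like BPWin enforce.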
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that demand for this framework in the job market has been growing, making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities, among other benefits. With React Native, developers can write code once and run it on both iOS and Android devices, saving time and resources and leading to shorter development cycles and hence faster time-to-market for your app.
Let's take the example of a startup that wanted to release its app on both iOS and Android at once. Using React Native, they managed to create an app and bring it to market within a very short period. This gave them an advantage over their competitors because they reached a large user base that generated revenue quickly for them.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Cyaniclab : Software Development Agency Portfolio.pdfCyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc., and I didn't get rich from it, but it did have 63K downloads (powering possibly tens of thousands of websites).
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though at surface level 'java.lang.OutOfMemoryError' appears to be one single error, underneath there are 9 types of OutOfMemoryError. Each type has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
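The distinct types show up as different message fragments appended to the error. As a hedged sketch of the diagnosis idea, the lookup below maps several well-known JVM message fragments to their usual root cause; the session's exact taxonomy of nine types may differ, and the cause strings are condensed summaries, not authoritative guidance.

```python
# Common OutOfMemoryError message fragments and their usual root causes.
OOME_CAUSES = {
    "Java heap space": "live objects exceed -Xmx; leak or undersized heap",
    "GC overhead limit exceeded": "GC runs constantly but reclaims little",
    "Metaspace": "too many loaded classes; raise MaxMetaspaceSize",
    "PermGen space": "pre-Java-8 class metadata area exhausted",
    "unable to create new native thread": "OS thread limit reached",
    "Direct buffer memory": "off-heap NIO buffers exceed MaxDirectMemorySize",
    "Requested array size exceeds VM limit": "array length beyond VM maximum",
}

def diagnose(message):
    """Match an OutOfMemoryError message to a likely cause."""
    for fragment, cause in OOME_CAUSES.items():
        if fragment in message:
            return cause
    return "unrecognized; inspect heap dump and GC logs"

hint = diagnose("java.lang.OutOfMemoryError: Java heap space")
```

The practical point the session makes is exactly this: the fragment after the colon, not the exception class, determines which diagnosis path to take.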
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show a simulation example and the way to compile this solver.
With this solver, the Helmholtz equation can be solved by helmholtzFoam; the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
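For readers unfamiliar with the equation these solvers target, the Helmholtz equation is ∇²u + k²u = f. A 1-D toy version can be solved in a few lines with second-order finite differences; this is only a sketch of the underlying mathematics (pure Python, Dirichlet boundaries, Thomas algorithm), not how the OpenFOAM solvers are implemented.

```python
import math

def solve_helmholtz_1d(k, f, n):
    """Solve u'' + k^2 u = f on (0,1) with u(0) = u(1) = 0 using
    second-order finite differences on n intervals."""
    h = 1.0 / n
    m = n - 1                       # number of interior points
    diag = [k * k - 2.0 / h**2] * m # main diagonal of the FD matrix
    off = 1.0 / h**2                # constant off-diagonal entries
    rhs = [f((i + 1) * h) for i in range(m)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

# Manufactured solution: u = sin(pi x) implies f = (k^2 - pi^2) sin(pi x).
k = 1.0
u = solve_helmholtz_1d(k, lambda x: (k * k - math.pi**2) * math.sin(math.pi * x), 200)
```

The manufactured solution lets us check the solver: the computed values should match sin(πx) to discretization accuracy, which is the same kind of verification one would run on an OpenFOAM case.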
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.