There is not enough sound, solid scientific research expounding the benefits of using automated scripts over manual testing (Samuel R., 2014). The studies available are largely promotional material made for marketing purposes (Udin, 2014). This dissertation is intended to fill that gap. To this end, a comparative analysis of the test results achieved from both automated and manual testing has been conducted. Complementary research inputs, such as data collected through questionnaires, interviews and group discussions, have also been analyzed and synthesized to support the outcome. Unified Functional Tester (UFT) was used to build test artifacts and execute automated scripts. The conclusion shows that using automated scripts can offer considerable returns in terms of enhanced efficiency and accuracy over manual testing, provided that the test is labor-intensive, time-consuming and recurring.
Positive developments but challenges still ahead: a survey study on UX profe... (Journal Papers)
This survey study summarizes previous research on UX professionals' work practices and identifies key issues: (1) UX professionals' knowledge and practices, (2) organizational integration challenges, and (3) involvement in local communities. The study surveys 422 UX professionals in 5 countries about these issues. Results show that professionals have strong UX knowledge and use common methods/tools, but organizational integration challenges remain such as lack of resources and user involvement. Involvement in local communities is still limited despite their presence. Overall progress is seen, but more work is needed to address longstanding challenges.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Despite many advances in the design of complex software, the problem remains of inadequately specified requirements from stakeholders for any real-time application.
Comparative Analysis of Model Based Testing and Formal Based Testing - A Review (IJERA Editor)
Software testing is one of the most important steps in the software development process. Testing provides a glimpse of the proper functioning of the system under different conditions. This makes choosing the best testing method a necessary step for a software system to succeed and be accepted by a large number of people, as the market is highly competitive these days and only error-free systems survive for long. This paper gives a comparative analysis of two major testing methods widely used in software development: Formal Specifications Based Software Testing and Model Based Software Testing. It brings out how these two methods can provide reliability to a software system, including the major uses, advantages, and disadvantages of both, and identifies the situations where formal specifications based testing is more effective and efficient, and those where model based testing is. This comparative analysis will help in deciding on the better testing technique, depending on the situation and the requirements of the software, for the software to be successful in the long run.
Smart Sim Selector: A Software for Simulation Software Selection (CSCJournals)
In a period of continuous change in the global business environment, organizations large and small are finding it increasingly difficult to deal with, and adjust to, the demands of such change. Simulation is a powerful tool, allowing designers to imagine new systems and enabling them to both quantify and observe behavior. The market currently offers a variety of simulation software packages. Some are less expensive than others; some are generic and can be used in a wide variety of application areas while others are more specific; some have powerful modeling features while others provide only basic ones. Modeling approaches and strategies differ between packages. Companies seek advice about the desirable features of software for manufacturing simulation, depending on the purpose of its use. Because of this, the importance of an adequate approach to simulation software selection is apparent. Smart Sim Selector is software developed to support users in selecting simulation software. It consists of a database linked to an interface developed in Visual Basic 6.0. The system queries the database and finds a simulation package suited to the user, based on the requirements that have been specified. This paper provides an insight into the development of Smart Sim Selector, in addition to the reasoning behind the system.
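The requirements-driven selection described above can be sketched as a simple query over package records. This is only an illustration of the idea; the package names, attributes, and matching rules below are invented, and the real tool uses a Visual Basic 6.0 interface over a database rather than Python.

```python
# Hypothetical package catalogue; fields and values are invented for illustration.
packages = [
    {"name": "GenericSim", "domains": {"manufacturing", "logistics"},
     "cost": 500, "features": {"3d", "optimization"}},
    {"name": "FactoryPro", "domains": {"manufacturing"},
     "cost": 2000, "features": {"3d", "optimization", "vr"}},
    {"name": "BasicSim", "domains": {"education"},
     "cost": 100, "features": set()},
]

def select_packages(domain, budget, required_features):
    """Return packages matching the stated requirements, cheapest first."""
    matches = [p for p in packages
               if domain in p["domains"]          # applicable to the domain
               and p["cost"] <= budget            # within budget
               and required_features <= p["features"]]  # has every needed feature
    return sorted(matches, key=lambda p: p["cost"])
```

A user asking for a manufacturing package with 3D support under a 1000-unit budget would get back only the packages satisfying all three constraints, ordered by price.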
Practical Guidelines to Improve Defect Prediction Model – A Review (inventionjournals)
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. Therefore, a lack of awareness and practical guidelines from previous research can lead to invalid predictions and unreliable insights. Through case studies of systems spanning both proprietary and open-source domains, the authors find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
The result of applying a new testing model for improving the quality of softw... (amiraiti)
This paper shows the result of applying a new testing model which provides the know-how for performing the different activities covered in the test process for functional testing. It was noticed that the customer risks experienced while examining the accuracy of software used in different business sectors are not the main focus of the Quality Control team members. Moreover, there are no standard testing techniques used by the team members when creating test conditions and test cases, resulting in a great deal of rework.
This document provides an overview of software testing techniques and best practices covered in a course on the topic. It discusses the purpose of software testing, including verification, error detection, and validation. It then surveys common software testing methodologies like white box testing, black box testing, and unit testing. The document also includes two case studies, one on test automation and one on testing an intranet system. Finally, it provides a template for a software test plan and discusses several papers on software testing methods and techniques.
Instance Space Analysis for Search Based Software Engineering (Aldeida Aleti)
Search-Based Software Engineering is now a mature area with numerous techniques developed to tackle some of the most challenging software engineering problems, from requirements to design, testing, fault localisation, and automated program repair. SBSE techniques have shown promising results, giving us hope that one day it will be possible for the tedious and labour intensive parts of software development to be completely automated, or at least semi-automated. In this talk, I will focus on the problem of objective performance evaluation of SBSE techniques. To this end, I will introduce Instance Space Analysis (ISA), which is an approach to identify features of SBSE problems that explain why a particular instance is difficult for an SBSE technique. ISA can be used to examine the diversity and quality of the benchmark datasets used by most researchers, and analyse the strengths and weaknesses of existing SBSE techniques. The instance space is constructed to reveal areas of hard and easy problems, and enables the strengths and weaknesses of the different SBSE techniques to be identified. I will present on how ISA enabled us to identify the strengths and weaknesses of SBSE techniques in two areas: Search-Based Software Testing and Automated Program Repair. Finally, I will end my talk with future directions of the objective assessment of SBSE techniques.
Software Defect Trend Forecasting In Open Source Projects using A Univariate ... (CSCJournals)
Our objective in this research is to provide a framework that gives project managers, business owners, and developers an effective way to forecast the trend in software defects within a software project in real time. By providing these stakeholders with a mechanism for forecasting defects, they can allocate the necessary resources at the right time to remove defects before they accumulate and ultimately lead to software failure. In our research, we not only show general trends in several open-source projects but also trends in daily, monthly, and yearly activity. Our research shows that this forecasting method can be used up to 6 months out with an MSE of only 0.019. In this paper, we present our technique and methodologies for developing the inputs for the proposed model and the results of testing on seven open source projects. Further, we discuss the prediction models, their performance, and the implementation using the FBProphet framework and the ARIMA model.
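The core idea of extrapolating a defect trend can be shown with a minimal sketch: fit an ordinary least-squares trend line to historical counts and project it forward. This is only an illustration of trend forecasting in general, not the paper's FBProphet or ARIMA setup, and the monthly defect counts below are invented.

```python
# Invented monthly defect counts for a hypothetical project.
counts = [5, 7, 6, 9, 11, 10, 13, 14]

def forecast(series, steps):
    """Fit y = a + b*t to the history and extrapolate `steps` periods ahead."""
    n = len(series)
    mean_t = (n - 1) / 2                      # mean of t = 0..n-1
    mean_y = sum(series) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in enumerate(series))
             / sum((t - mean_t) ** 2 for t in range(n)))
    intercept = mean_y - slope * mean_t
    # project the fitted line onto future periods n, n+1, ...
    return [intercept + slope * (n + i) for i in range(steps)]
```

With an upward-trending history, the six-month projection rises accordingly; a real deployment would validate such forecasts against held-out data, as the paper does with MSE.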
From previous years' research, it is concluded that testing plays a vital role in the development of a software product. As software testing is the primary approach to assuring the quality of software, most development effort is put into testing. But software testing is an expensive process that consumes a lot of time, so testing should start as early as possible in development to control cost and schedule. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in developing a software product. Software testing is a trade-off between budget, time and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance and usability. Hence, software testing faces a collection of challenges.
Analyzing the solutions of DEA through information visualization and data min... (ertekg)
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/analyzing-the-solutions-of-dea-through-information-visualization-and-data-mining-techniques-smartdea-framework/
Data envelopment analysis (DEA) has proven to be a useful tool for assessing the efficiency or productivity of organizations, which is of vital practical importance in managerial decision making. DEA provides a significant amount of information from which analysts and managers derive insights and guidelines to improve their existing performance. Given this, effective and methodical analysis and interpretation of DEA solutions are critical. The main objective of this study is therefore to develop a general decision support system (DSS) framework to analyze the solutions of basic DEA models. The paper formally shows how the solutions of DEA models should be structured so that they can be examined and interpreted effectively by analysts through information visualization and data mining techniques. An innovative and convenient DEA solver, SmartDEA, is designed and developed in accordance with the proposed analysis framework. The developed software provides a DEA solution which is consistent with the framework and is ready to analyze with data mining tools, through a table-based structure. The framework is tested and applied in a real-world project for benchmarking the vendors of a leading Turkish automotive company. The results show the effectiveness and efficacy of the proposed framework.
Software testing defect prediction model: a practical approach (eSAT Journals)
Abstract: Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality and testing, and better plan resources to meet timelines. Applying a statistical software-testing defect prediction model in a real-life setting is extremely difficult because it requires many data variables and metrics, as well as historical defect data, to predict the next releases or new, similar projects. This paper explains our statistical model and how it accurately predicts the defects of upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation and multiple linear regression with 95% confidence intervals (CI). In this multiple linear regression model the R-square value was 0.91 and its standard error was 5.90%. The software testing defect prediction model is now being used to predict defects in various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.
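The modeling step this abstract describes, fitting a multiple linear regression over past releases and using it to predict defects, can be sketched in plain Python by solving the normal equations directly. The three features and the five release records below are invented for illustration; they are not the paper's actual 5 parameters or 20 release points.

```python
# Invented history: (size in KLOC, complexity, churn) -> defects found.
releases = [
    ((12, 4, 30), 10),
    ((25, 7, 45), 22),
    ((18, 5, 60), 15),
    ((33, 9, 80), 31),
    ((27, 6, 50), 24),
]

def fit(data):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal equations."""
    rows = [[1.0, *x] for x, _ in data]   # prepend the intercept column
    ys = [y for _, y in data]
    n = len(rows[0])
    # Build A = X^T X and b = X^T y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict(coef, features):
    """Predicted defect count for a new release with the given feature values."""
    return coef[0] + sum(c * f for c, f in zip(coef[1:], features))
```

In practice one would use a statistics package and report confidence intervals, as the paper does; the sketch only shows the mechanics of the fit.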
The document summarizes exploratory testing techniques. It discusses that exploratory testing simultaneously learns about the product, market, potential failures, weaknesses, and ways to test. Exploratory testing is a way of thinking about testing rather than a specific technique. The document contrasts exploratory testing with scripted testing, noting that exploratory testing emphasizes adaptability and learning while scripted testing emphasizes accountability and decidability. Key challenges of exploratory testing include learning, visibility, control, risk assessment, execution, logistics, determining correct results, reporting, documentation, metrics, and knowing when to stop testing.
Industry-Academia Communication In Empirical Software Engineering (Per Runeson)
This document discusses industry-academia communication in empirical software engineering. It provides context on a conference in 1968 that aimed to improve communication between industry and academia. It notes key differences in time horizons and languages between the two. Industry focuses on short-term market changes and profits, while academia focuses on long-term learning and publications. The document advocates for both sides to learn each other's languages and cultures to improve collaboration and help tear down walls between the two. It provides examples of successful collaboration projects over time that have helped improve practice.
This document discusses selecting optimal software reliability growth models using an integrated entropy-TOPSIS approach. It proposes a hybrid method using Shannon entropy to calculate weights for model selection criteria, and TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) to rank alternative software reliability growth models based on the criteria weights and model performance. The approach is demonstrated on two real software failure datasets. The results can help decision makers select the most suitable reliability growth model for a given software project.
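The two stages described above, Shannon entropy to weight the criteria and TOPSIS to rank the alternatives, can be sketched roughly as follows. The decision matrix values are invented, and the sketch assumes all criteria are benefit-type (higher is better), which may not match the paper's actual setup.

```python
import math

# Rows: candidate reliability growth models; columns: selection criteria.
# All values invented for illustration.
matrix = [
    [0.90, 0.80, 0.70],
    [0.85, 0.95, 0.60],
    [0.70, 0.75, 0.90],
]

def entropy_weights(m):
    """Weight each criterion by its degree of diversification (1 - entropy)."""
    n = len(m)
    weights = []
    for col in zip(*m):
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(n)
        weights.append(1 - e)
    total = sum(weights)
    return [w / total for w in weights]

def topsis(m, w):
    """Closeness of each alternative to the ideal solution (higher is better)."""
    norms = [math.sqrt(sum(v * v for v in col)) for col in zip(*m)]
    vm = [[w[j] * row[j] / norms[j] for j in range(len(row))] for row in m]
    ideal = [max(col) for col in zip(*vm)]   # best value per criterion
    anti = [min(col) for col in zip(*vm)]    # worst value per criterion
    scores = []
    for row in vm:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The model with the highest closeness score would be selected; cost-type criteria would additionally need their ideal and anti-ideal roles swapped.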
Manual software testing interview questions and answers are provided. Key points include:
- The difference between a bug, error, and defect is explained. A bug or defect is a flaw that causes failure, while an error is a human mistake.
- White box testing and the V-model framework are described. White box testing uses internal structure, while the V-model integrates testing into each development phase.
- Stubs and drivers are parts of incremental testing used in bottom-up and top-down approaches. Stubs replace dependent components during testing.
Good unit tests are concise, focused on behavior rather than mechanics, and tell a story of intended usage through descriptive names and scenarios. Poor tests are overly procedural and verbose, lacking clarity. Effective testing requires considering tests as specifications that drive development by clearly expressing required functionality, rather than just verifying code works. Tests should focus on scenarios over individual operations and cut across code to demonstrate intended use.
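As an illustration of tests that tell a story through descriptive names and scenarios, here is a hypothetical behavior-focused pair in Python's unittest; the Account class and its API are invented for this sketch.

```python
import unittest

# A toy class under test, invented for illustration.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Test names read as a specification of intended behavior,
# not as a walkthrough of implementation mechanics.
class WithdrawalSpec(unittest.TestCase):
    def test_withdrawal_reduces_balance_by_the_amount_taken(self):
        account = Account(balance=100)
        account.withdraw(30)
        self.assertEqual(account.balance, 70)

    def test_withdrawing_more_than_the_balance_is_refused(self):
        account = Account(balance=10)
        with self.assertRaises(ValueError):
            account.withdraw(50)
```

Each test covers one scenario end to end, so a failure immediately names the behavior that broke, which is the "tests as specifications" point made above.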
On applications of Soft Computing Assisted Analysis for Software Reliability (AM Publications)
Developing high-quality, reliable software is one of the main challenges in the software industry. Software reliability is a key concern of many users and developers of software, and the demand for it requires robust modeling techniques for software quality prediction. Software reliability models are very useful for estimating the probability of software failure over time. In this study we review the available literature on software reliability. We have also elicited the current trends, existing problems, specific difficulties, future directions and open areas for research.
- The document discusses the relationship between requirement engineering processes and risk management in software development projects.
- It notes that many software projects fail or go over budget due to poor requirement engineering, including a lack of understanding of client requirements and frequent changes.
- The author conducted a survey of 23 software professionals from 9 companies to assess how requirement engineering processes impact risk management.
- The survey found that the vast majority of respondents believed that requirement engineering is important or very important for improving risk management and that it enables better management of requirements and assessment of changing requirements.
This document outlines 11 principles of software testing:
1. Testing is the process of executing software with test cases to reveal defects and evaluate quality. Testers detect defects before software is operational.
2. A good test case has a high probability of revealing undetected defects when the objective is defect detection.
3. Testers must meticulously inspect and interpret test results to avoid overlooking failures or suspecting failures that don't exist.
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... (IJCSES Journal)
With the sharp rise in software dependability and failure cost, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software with no serious faults. Object-oriented (OO) products, the de facto standard of software development, can with their unique features contain faults that are hard to find, or whose change impacts are hard to pinpoint. The earlier faults are identified, found and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is knowing which metrics are related to class FP and what activities are performed. Therefore, this study brings together the state-of-the-art in FP prediction that utilizes CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles, and the results obtained are analysed and presented. They indicate that 29 relevant empirical studies exist, and that measures such as complexity, coupling and size are strongly related to FP.
An Open Modern Software Testing Laboratory Courseware: An Experience Report (Vahid Garousi)
Vahid Garousi, An Open Modern Software Testing Laboratory Courseware: An Experience Report, Proceedings of the 23rd IEEE Conference on Software Engineering Education and Training, Pittsburgh, USA, March 2010
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process, covering hardware, product, technology and methodology factors, and the quality of a cost estimate is measured by its accuracy.

Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models and machine learning models. Each category has its own set of techniques; 11 cost estimation techniques under the 3 categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input to the system.

The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results on software cost estimation methods.
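The clustering-and-ranking idea can be sketched with a tiny 1-D k-means over per-technique error scores: group similar techniques, then rank the groups by mean error. The 11 technique names and their scores are invented, and this plain k-means is only a stand-in for the system's enhanced centroid estimation.

```python
# Invented error scores (lower is better) for 11 hypothetical techniques.
scores = {
    "reg1": 0.10, "reg2": 0.12, "reg3": 0.11,
    "ana1": 0.29, "ana2": 0.30, "ana3": 0.32, "ana4": 0.31,
    "ml1": 0.58, "ml2": 0.60, "ml3": 0.62, "ml4": 0.61,
}

def kmeans_1d(values, k=3, iters=50):
    """Partition 1-D values into k non-overlapping clusters."""
    vs = sorted(values)
    # Spread the initial centroids evenly across the sorted values.
    centroids = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vs:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

clusters = kmeans_1d(list(scores.values()))
# Rank clusters best-first by mean error; techniques inherit their cluster rank.
ranked = sorted(clusters, key=lambda c: sum(c) / len(c))
```

Techniques in the first cluster of `ranked` would be recommended first, which mirrors the system's goal of ranking estimation methods by accuracy.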
A REVIEW OF SECURITY INTEGRATION TECHNIQUE IN AGILE SOFTWARE DEVELOPMENT (ijseajournal)
Agile software development has gained a lot of popularity in the software industry due to its iterative and incremental approach as well as user involvement. Agile has also been criticized for its limited ability to deliver secure software. In this paper, an extensive literature review has been performed in order to highlight the existing security issues in agile software development. The majority of the challenges reported in the literature occurred due to the lack of involvement of a security expert. Improving the security of a software system without damaging the real essence of Agile can be achieved with the continuous involvement of a security engineer throughout the development lifecycle, with defined roles and responsibilities.
Performance Evaluation of Software Quality Model (Editor IJMTER)
With the advent of the Internet revolution and the emergence of knowledge-based systems, quality acquires a wider and more challenging dimension. Quality has evolved and undergone transformation from the inspection era to the quality control regime, then to quality management, and finally to the present TQM approach. At every stage of this transformation, "quality" has been attaining a wider dimension with respect to customer focus and continual improvement, and has been evolving to address customers' increasing demands with respect to the delivery of products and services.
Generation of Search Based Test Data on Acceptability Testing Principle (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Mathematical foundations of Multithreaded programming concepts in Java lang... (AM Publications, India)
The mathematical description of an object-oriented language has already been developed by Srivastav et al. The authors have previously published papers showing that programming languages such as C and Java can be described using simple mathematical sets and relations, and that an object-oriented language like Java and its various aspects, such as object, class and inheritance, can be described using simple mathematical models. In the present study the authors propose a mathematical model of multithreaded programming in the Java language, exploring both single-threaded and multithreaded programs through simple mathematical modeling. The same idea may also be applied to the C# language.
SRAAA – Secured Right Angled and Ant Search Hybrid Routing Protocol for MANETs (AM Publications, India)
This paper is a contribution to the field of security analysis of mobile ad-hoc networks and the security requirements of applications. The limitations of mobile nodes have been studied in order to design a secure routing protocol that thwarts different kinds of attacks. Our approach is based on the Right Angled and Ant Search Hybrid Routing Protocol (RAAA), the most popular hybrid routing protocol. The importance of the proposed solution lies in the fact that it ensures security as needed, providing a comprehensive architecture for a secured Right Angled and Ant Search Hybrid Routing Protocol (SRAAA) based on efficient key management, secure neighbor discovery, secure routing packets, detection of malicious nodes, and prevention of these nodes from destroying the network. To fulfill these objectives, both the efficient key management and secure neighbor mechanisms are designed to run before the protocol begins operating. To validate the proposed solution, we use the NS-2 network simulator to test the performance of the secure protocol and compare it with the conventional zone routing protocol across different factors that affect the network. Our results clearly show that the secure version outperforms the conventional protocol in packet delivery ratio, with a tolerable increase in routing overhead and average delay. Security analysis also demonstrates in detail that the proposed protocol is robust enough to thwart all classes of ad-hoc attacks.
The Analysis of the Reasons and Measurements for Job-hopping of Enterprises i...AM Publications,India
With the development of economy, job hopping tends to be more and more widespread and younger, become an urgent problem to be solved. This paper mainly analyzes job hopping reasons such as Employee factors,enterprise factors and Social factors.And put forward the measures such as Accurate self positioning, Make the reasonable salary system ,Improve the staff-post matching degree,Exit interview,Strengthen the macro control of the government,Establish the restriction mechanism. Hope to help enterprises to meet the demands of staff, retain talent, to provide human resources guarantee for enterprise development
Instance Space Analysis for Search Based Software EngineeringAldeida Aleti
Search-Based Software Engineering is now a mature area with numerous techniques developed to tackle some of the most challenging software engineering problems, from requirements to design, testing, fault localisation, and automated program repair. SBSE techniques have shown promising results, giving us hope that one day it will be possible for the tedious and labour intensive parts of software development to be completely automated, or at least semi-automated. In this talk, I will focus on the problem of objective performance evaluation of SBSE techniques. To this end, I will introduce Instance Space Analysis (ISA), which is an approach to identify features of SBSE problems that explain why a particular instance is difficult for an SBSE technique. ISA can be used to examine the diversity and quality of the benchmark datasets used by most researchers, and analyse the strengths and weaknesses of existing SBSE techniques. The instance space is constructed to reveal areas of hard and easy problems, and enables the strengths and weaknesses of the different SBSE techniques to be identified. I will present on how ISA enabled us to identify the strengths and weaknesses of SBSE techniques in two areas: Search-Based Software Testing and Automated Program Repair. Finally, I will end my talk with future directions of the objective assessment of SBSE techniques.
Software Defect Trend Forecasting In Open Source Projects using A Univariate ...CSCJournals
Our objective in this research is to provide a framework that will allow project managers, business owners, and developers an effective way to forecast the trend in software defects within a software project in real-time. By providing these stakeholders with a mechanism for forecasting defects, they can then provide the necessary resources at the right time in order to remove these defects before they become too much ultimately leading to software failure. In our research, we will not only show general trends in several open-source projects but also show trends in daily, monthly, and yearly activity. Our research shows that we can use this forecasting method up to 6 months out with only an MSE of 0.019. In this paper, we present our technique and methodologies for developing the inputs for the proposed model and the results of testing on seven open source projects. Further, we discuss the prediction models, the performance, and the implementation using the FBProphet framework and the ARIMA model.
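The forecasting idea above can be sketched in miniature. The snippet below is not the paper's FBProphet/ARIMA pipeline; it is a minimal least-squares trend forecast over fabricated monthly defect counts, using the same MSE metric the paper reports, purely to illustrate the shape of a univariate forecasting framework.

```python
# Minimal univariate defect-trend forecast (illustrative only; the paper
# itself uses the richer FBProphet and ARIMA models).

def fit_linear_trend(series):
    """Least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

def forecast(series, horizon):
    """Project the fitted trend `horizon` steps past the end of the series."""
    a, b = fit_linear_trend(series)
    n = len(series)
    return [a + b * (n + h) for h in range(horizon)]

def mse(actual, predicted):
    return sum((x - y) ** 2 for x, y in zip(actual, predicted)) / len(actual)

# Monthly open-defect counts (fabricated for illustration):
history = [10, 12, 15, 14, 18, 21, 22, 25, 27, 30, 31, 34]
next_quarter = forecast(history, 3)  # three-month-ahead projection
```

A real deployment would replace the linear trend with a seasonal model, but the fit/forecast/score structure is the same.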
Prior research concludes that testing plays a vital role in the development of a software product. Since software testing is the principal means of assuring software quality, most development effort is devoted to it. But testing is an expensive and time-consuming process, so it should start as early as possible in development to keep cost and schedule under control. Ideally, testing is performed at every step of the software development life cycle (SDLC), the structured approach used to develop the product. Software testing is a trade-off between budget, time, and quality. Testing has now become a very important activity in terms of exposure, security, performance, and usability, and hence it faces a collection of challenges.
Analyzing the solutions of DEA through information visualization and data min...ertekg
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/analyzing-the-solutions-of-dea-through-information-visualization-and-data-mining-techniques-smartdea-framework/
Data envelopment analysis (DEA) has proven to be a useful tool for assessing the efficiency or productivity of organizations, which is of vital practical importance in managerial decision making. DEA provides a significant amount of information from which analysts and managers can derive insights and guidelines to improve performance. Given this, effective and methodical analysis and interpretation of DEA solutions are critical. The main objective of this study is therefore to develop a general decision support system (DSS) framework for analyzing the solutions of basic DEA models. The paper formally shows how the solutions of DEA models should be structured so that analysts can examine and interpret them effectively through information visualization and data mining techniques. An innovative and convenient DEA solver, SmartDEA, is designed and developed in accordance with the proposed analysis framework. The developed software produces a DEA solution that is consistent with the framework and ready to analyze with data mining tools, through a table-based structure. The framework is tested and applied in a real-world project for benchmarking the vendors of a leading Turkish automotive company. The results show the effectiveness and efficacy of the proposed framework.
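To give a feel for what a DEA efficiency score means: in the degenerate single-input, single-output case, the CCR-DEA efficiency of a unit reduces to its output/input ratio divided by the best ratio observed. The vendor data below is fabricated; real DEA with multiple inputs and outputs requires solving a linear program per unit, which SmartDEA-style tools automate.

```python
# Single-input, single-output special case of DEA efficiency scoring.
# Each unit is (input, output); data is fabricated for illustration.
units = {"vendorA": (10, 8), "vendorB": (5, 5), "vendorC": (8, 4)}

ratios = {name: out / inp for name, (inp, out) in units.items()}
best = max(ratios.values())
# Efficiency of 1.0 marks the frontier unit; others are scored relative to it.
efficiency = {name: r / best for name, r in ratios.items()}
```

Here vendorB defines the efficiency frontier, and the other vendors' scores show how far below it they operate.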
Software testing defect prediction model a practical approacheSAT Journals
Abstract Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict defects in order to save time, improve quality and testing, and better plan resources to meet timelines. Applying a statistical defect prediction model in a real-life setting is extremely difficult because it requires many data variables and metrics, as well as historical defect data, to predict the next releases or new projects of a similar type. This paper explains our statistical model and how it accurately predicts defects for upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation, and multiple linear regression with 95% confidence intervals (CI). For this multiple linear regression model, the R-squared value was 0.91 and the standard error was 5.90%. The model is now being used to predict defects in various testing projects and operational releases, and we have found 90.76% precision between actual and predicted defects.
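The core of such a model is an ordinary least-squares fit of defect counts against release metrics. The sketch below is not the paper's model (its 5 parameters and 20 data points are not published here); it fits two invented predictors via the normal equations to show the mechanics.

```python
# Toy multiple linear regression for defect prediction (illustrative only).

def fit_ols(X, y):
    """Solve (X^T X) beta = X^T y by Gaussian elimination.
    Each row of X starts with a 1 for the intercept term."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for col in range(k):                      # forward elimination w/ pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical rows: [1, test effort (person-days), code churn (KLOC)]
X = [[1, 10, 2], [1, 20, 3], [1, 30, 5], [1, 40, 4], [1, 50, 8], [1, 60, 7]]
y = [12, 19, 31, 33, 50, 52]   # defects per release (fabricated)
beta = fit_ols(X, y)
predicted = sum(b * v for b, v in zip(beta, [1, 70, 9]))  # next release
```

In practice one would also report R-squared and confidence intervals, as the paper does.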
The document summarizes exploratory testing techniques. It discusses that exploratory testing simultaneously learns about the product, market, potential failures, weaknesses, and ways to test. Exploratory testing is a way of thinking about testing rather than a specific technique. The document contrasts exploratory testing with scripted testing, noting that exploratory testing emphasizes adaptability and learning while scripted testing emphasizes accountability and decidability. Key challenges of exploratory testing include learning, visibility, control, risk assessment, execution, logistics, determining correct results, reporting, documentation, metrics, and knowing when to stop testing.
Industry-Academia Communication In Empirical Software EngineeringPer Runeson
This document discusses industry-academia communication in empirical software engineering. It provides context on a conference in 1968 that aimed to improve communication between industry and academia. It notes key differences in time horizons and languages between the two. Industry focuses on short-term market changes and profits, while academia focuses on long-term learning and publications. The document advocates for both sides to learn each other's languages and cultures to improve collaboration and help tear down walls between the two. It provides examples of successful collaboration projects over time that have helped improve practice.
This document discusses selecting optimal software reliability growth models using an integrated entropy-TOPSIS approach. It proposes a hybrid method using Shannon entropy to calculate weights for model selection criteria, and TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) to rank alternative software reliability growth models based on the criteria weights and model performance. The approach is demonstrated on two real software failure datasets. The results can help decision makers select the most suitable reliability growth model for a given software project.
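The entropy-TOPSIS combination described above is straightforward to sketch. In the toy version below, the candidate models, criteria (an error measure to minimize and a goodness-of-fit measure to maximize), and scores are all fabricated; Shannon entropy is computed on the raw column proportions for simplicity, whereas a full implementation would normalize cost criteria first.

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy weights: criteria with more divergence weigh more."""
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)                 # degree of divergence
    s = sum(weights)
    return [w / s for w in weights]

def topsis(matrix, weights, benefit):
    """Rank rows by closeness to the ideal / distance from the anti-ideal."""
    norms = [math.sqrt(sum(v * v for v in col)) for col in zip(*matrix)]
    V = [[w * v / n for v, w, n in zip(row, weights, norms)] for row in matrix]
    best = [max(c) if b else min(c) for c, b in zip(zip(*V), benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(zip(*V), benefit)]
    scores = []
    for row in V:
        d_best = math.sqrt(sum((v - p) ** 2 for v, p in zip(row, best)))
        d_worst = math.sqrt(sum((v - q) ** 2 for v, q in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Rows: candidate reliability growth models; columns: MSE (cost), R^2 (benefit)
matrix = [[0.02, 0.95], [0.05, 0.90], [0.01, 0.88]]
w = entropy_weights(matrix)
ranking = topsis(matrix, w, benefit=[False, True])  # higher score = better
```

The model with the highest closeness score would be the recommended reliability growth model for the dataset.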
Manual software testing interview questions and answers are provided. Key points include:
- The difference between a bug, error, and defect is explained. A bug or defect is a flaw that causes failure, while an error is a human mistake.
- White box testing and the V-model framework are described. White box testing uses internal structure, while the V-model integrates testing into each development phase.
- Stubs and drivers are parts of incremental testing used in bottom-up and top-down approaches. Stubs replace dependent components during testing.
Good unit tests are concise, focused on behavior rather than mechanics, and tell a story of intended usage through descriptive names and scenarios. Poor tests are overly procedural and verbose, lacking clarity. Effective testing requires considering tests as specifications that drive development by clearly expressing required functionality, rather than just verifying code works. Tests should focus on scenarios over individual operations and cut across code to demonstrate intended use.
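The two summaries above, on stubs and on behavior-focused tests, can be combined in one small example. Everything here is invented for illustration: a hand-written stub stands in for a live dependency, and the test's name and body read as a specification of intended behavior rather than a mechanical check.

```python
# A descriptive, behavior-focused unit test using a hand-written stub
# (all names are hypothetical, for illustration only).

class StubRateProvider:
    """Stub replacing a live exchange-rate service during testing."""
    def rate(self, currency):
        return {"EUR": 2.0}[currency]

def convert(amount, currency, provider):
    """Convert an amount using whatever rate provider it is given."""
    return amount * provider.rate(currency)

def test_converting_dollars_to_euros_uses_the_current_rate():
    assert convert(10, "EUR", StubRateProvider()) == 20.0
```

The test name tells the story of the scenario, and the stub keeps the test fast and deterministic, which is exactly the contrast with procedural, verbose tests drawn above.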
On applications of Soft Computing Assisted Analysis for Software ReliabilityAM Publications
Developing high-quality, reliable software is one of the main challenges in the software industry. Software reliability is a key concern of many users and developers of software. The demand for software reliability requires robust modeling techniques for software quality prediction. Software reliability models are very useful for estimating the probability of software failure over time. In this study we review the available literature on software reliability. We also elicit current trends, existing problems, specific difficulties, future directions, and open areas for research.
- The document discusses the relationship between requirement engineering processes and risk management in software development projects.
- It notes that many software projects fail or go over budget due to poor requirement engineering, including a lack of understanding of client requirements and frequent changes.
- The author conducted a survey of 23 software professionals from 9 companies to assess how requirement engineering processes impact risk management.
- The survey found that the vast majority of respondents believed that requirement engineering is important or very important for improving risk management and that it enables better management of requirements and assessment of changing requirements.
This document outlines 11 principles of software testing:
1. Testing is the process of executing software with test cases to reveal defects and evaluate quality. Testers detect defects before software is operational.
2. A good test case has a high probability of revealing undetected defects when the objective is defect detection.
3. Testers must meticulously inspect and interpret test results to avoid overlooking failures or suspecting failures that don't exist.
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...IJCSES Journal
With the sharp rise in software dependability requirements and failure costs, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes essential to deliver software that has no serious faults. Object-oriented (OO) products, the de facto standard of software development, can contain faults whose impacts are hard to find or pinpoint when changes are made. The earlier faults are identified and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is determining which metrics are related to class FP and what activities are performed. This study therefore brings together the state of the art in fault prediction that utilizes the CK and size metrics. We conducted a systematic literature review of relevant published empirical validation articles, and the results are analysed and presented. The review identifies 29 relevant empirical studies and finds that measures such as complexity, coupling, and size are strongly related to FP.
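As a concrete illustration of the CK metrics the review centers on: coupling between objects (CBO) counts the other classes a class is coupled to, in either direction. The dependency map below is hand-made for illustration; a real tool would extract it from source code.

```python
# Illustrative computation of a CK-style coupling metric (CBO) from a
# hypothetical class-dependency map (class -> set of classes it uses).
deps = {
    "Order": {"Customer", "Product", "Invoice"},
    "Customer": {"Address"},
    "Product": set(),
}

def cbo(cls, deps):
    """CBO: classes this class uses, plus classes that use it."""
    uses = deps.get(cls, set())
    used_by = {c for c, ds in deps.items() if cls in ds}
    return len(uses | used_by)
```

Fault-proneness studies of the kind surveyed above would then correlate such per-class metric values with recorded defect counts.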
An Open Modern Software Testing Laboratory Courseware: An Experience ReportVahid Garousi
Vahid Garousi, An Open Modern Software Testing Laboratory Courseware: An Experience Report, Proceedings of the 23rd IEEE Conference on Software Engineering Education and Training, Pittsburgh, USA, March 2010
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each model category has its own set of techniques for the cost estimation process. Eleven cost estimation techniques across these three categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file serves as the main input to the system.
The proposed system is designed to cluster and rank software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
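The cluster-then-rank idea can be sketched with a plain one-dimensional k-means over per-technique error scores. This is not the paper's enhanced non-overlapping algorithm or its centroid-estimation mechanism; the eleven error values are fabricated, and the clusters are simply ranked by centroid so that the lowest-error cluster holds the best techniques.

```python
# Toy 1-D k-means over hypothetical error scores of estimation techniques.
def kmeans_1d(values, centroids, iters=20):
    clusters = {}
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for v in values:
            i = min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return centroids, clusters

# MMRE-style error scores for 11 techniques (fabricated):
errors = [0.12, 0.15, 0.11, 0.45, 0.50, 0.48, 0.80, 0.85, 0.14, 0.47, 0.82]
centroids, clusters = kmeans_1d(errors, [0.1, 0.5, 0.9])
# Rank clusters by centroid: the lower-error cluster contains better methods.
ranked = sorted(range(len(centroids)), key=lambda i: centroids[i])
```

An "optimal centroid estimation" step, as the paper proposes, would replace the fixed initial centroids used here.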
A REVIEW OF SECURITY INTEGRATION TECHNIQUE IN AGILE SOFTWARE DEVELOPMENTijseajournal
Agile software development has gained a lot of popularity in the software industry due to its iterative and incremental approach and user involvement. Agile has also been criticized for its limited ability to deliver secure software. In this paper, an extensive literature review has been performed in order to highlight the existing security issues in agile software development. The majority of challenges reported in the literature occur due to the lack of involvement of a security expert. Improving the security of a software system without damaging the real essence of Agile can be achieved through the continuous involvement of a security engineer throughout the development lifecycle, with defined roles and responsibilities.
Performance Evaluation of Software Quality ModelEditor IJMTER
With the advent of the Internet revolution and the emergence of knowledge-based systems, quality has acquired a wider and more challenging dimension. Quality has evolved and undergone transformation from the inspection era to the quality control regime, then to quality management, and finally to the present TQM approach. At every stage of this transformation, quality has taken on a wider dimension with respect to customer focus and continual improvement, evolving to address customers' increasing demands for the delivery of products and services.
Generation of Search Based Test Data on Acceptability Testing Principleiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Mathematical foundations of Multithreaded programming concepts in Java lang...AM Publications,India
The mathematical description of object-oriented languages has already been developed by Srivastav et al. The authors have previously published papers showing that it is possible to describe programming languages such as C and Java using simple mathematical sets and relations, and that an object-oriented language like Java and its various aspects, such as objects, classes, and inheritance, can be described using simple mathematical models. In the present study the authors propose a mathematical model of multithreaded programming in the Java language, exploring both single-threaded and multithreaded programs through simple mathematical modeling. The same idea may be applied to the C# language as well.
SRAAA – Secured Right Angled and Ant Search Hybrid Routing Protocol for MANETsAM Publications,India
— This paper is a contribution to the field of security analysis of mobile ad-hoc networks and the security requirements of applications. Limitations of mobile nodes have been studied in order to design a secure routing protocol that thwarts different kinds of attacks. Our approach is based on the Right Angled and Ant Search Hybrid Routing Protocol (RAAA), the most popular hybrid routing protocol. The importance of the proposed solution lies in the fact that it ensures security as needed, by providing a comprehensive architecture for the Secured Right Angled and Ant Search Hybrid Routing Protocol (SRAAA) based on efficient key management, secure neighbor discovery, secure routing packets, detection of malicious nodes, and prevention of these nodes from destroying the network. To fulfill these objectives, both the key management and secure neighbor mechanisms are designed to run prior to the operation of the protocol. To validate the proposed solution, we use the NS-2 network simulator to test the performance of the secure protocol and compare it with the conventional zone routing protocol across a number of factors that affect the network. Our results clearly show that the secure version outperforms the conventional protocol in packet delivery ratio, with a tolerable increase in routing overhead and average delay. Security analysis further shows in detail that the proposed protocol is robust enough to thwart all classes of ad-hoc attacks.
The Analysis of the Reasons and Measurements for Job-hopping of Enterprises i...AM Publications,India
With the development of the economy, job-hopping has become increasingly widespread and affects ever-younger employees, making it an urgent problem to solve. This paper analyzes the main reasons for job-hopping, including employee factors, enterprise factors, and social factors, and puts forward countermeasures such as accurate self-positioning, a reasonable salary system, improved staff-post matching, exit interviews, strengthened government macro-control, and the establishment of restriction mechanisms. The aim is to help enterprises meet the demands of their staff, retain talent, and provide a human-resources guarantee for enterprise development.
Green Computing and Sustainable Environment – Introduction of E-documents and...AM Publications,India
This document discusses the environmental impacts of the paper manufacturing process. It begins by describing how paper is made, including the raw materials used and manufacturing steps. It then discusses several environmental impacts:
1) Deforestation from harvesting trees for pulp and concerns over monoculture plantations.
2) Air pollution from mill emissions like hydrogen sulfide, sulfur dioxide, chloroform and other volatile organics.
3) Water pollution from effluents containing biological oxygen demand, suspended solids, acidic compounds, organochlorines such as chlorophenols, and dioxins and furans, which are toxic, persistent, and can bioaccumulate in wildlife and humans. The document analyzes the environmental effects of these different pollutants.
Material Flow Management for Treating Waste from Medical ActivityAM Publications,India
This document proposes and summarizes a method for treating medical waste. The method involves chemically disinfecting infectious medical waste to convert it to non-hazardous household waste. It also includes an optoelectronic sorting module to remove recyclable plastic waste, which would then be used in asphalt production. A project planner was developed using Microsoft Project 2013 to schedule and plan the implementation of this proposed medical waste treatment method. The method aims to safely and effectively treat medical waste while also recycling plastic waste that is generated.
International students exhibited higher levels of intercultural sensitivity than domestic students based on a study of 209 freshmen at an international college in Thailand. There was also a statistically significant difference found based on having international friendships. Specifically, the study found that international students scored higher on a measure of intercultural sensitivity compared to domestic students. Additionally, having international friendships was found to correlate with higher intercultural sensitivity. However, there were no significant differences found based on gender, field of study, foreign language ability, international travel experience, or having studied abroad. In general, the freshmen scored high on intercultural sensitivity based on the measurement scale used.
Research on the Development of Cultural Industry in Shandong ProvinceAM Publications,India
This document discusses research on the development of the cultural industry in Shandong Province, China. It begins by defining cultural industry and outlining the scope. It then analyzes the current situation of cultural industry development in Shandong, including cultural resources, demand for cultural products, and government support. Some issues are identified, such as a lack of high-level talent, low levels of industry development and regional imbalances. The document concludes by proposing recommendations to strengthen resource integration, train cultural talent, broaden industry chains through cultural industry parks, and promote integration of technology and culture to improve competitiveness.
Cloud security is currently a major issue, and many users hesitate to adopt cloud computing because of security concerns. In this paper we discuss the security features of different cloud service models. The security of a given cloud depends mainly on the framework and the programming practices that developers use in their applications.
This paper surveys the main advances in content-delivery techniques that adapt to the learner using a multiagent system, including the models and corresponding methods. It focuses on both data mining and e-learning. A multiagent system (MAS) is composed of multiple interacting computer programs and can be used to solve problems that are complex or seem impossible for an individual program to solve. A MAS comprises various entities that hold different information or have diverging interests; its agents are computer programs that act on behalf of users to solve a computational problem.
A Pilot Study on Current and Future Trends in E-learning, Distance Learning a...AM Publications,India
Due to the tremendous growth of the internet, teaching and learning methodologies have changed entirely. In the old days the education system mainly followed the "gurukul" system. People slowly shifted away from those classical methods and adopted postal coaching in the early 1970s. After the advent of the internet, further changes were made in the teaching-learning process, and a new concept has emerged: education for all, anywhere, anytime. In the present study the authors systematically examine new, innovative teaching-learning methodologies and their importance in the coming days. The paper presents a comparative study of e-learning, distance learning, and online learning methodologies, and also examines how ARM technology can improve teaching-learning methods by reducing hardware cost.
Literature Review: Convey the Data in Massive Parallel ComputingAM Publications,India
In this paper we study several works on direct network architectures, which are strong candidates for use in cost-effective, experimental massively parallel computers and scalable shared-memory multiprocessors. The characteristics of direct networks, as reflected by the communication latency and routing latency metrics, are significant to the performance of such systems. Wormhole routing is the most capable switching method for multiprocessor systems and has been adopted in several new massively parallel computers; it poses unique technical challenges in routing and flow control, particularly in avoiding deadlock. A highly scalable network can combine mesh and hypercube topologies: due to the presence of multiple concurrent meshes and hypercubes, such a network provides strong architectural support for parallel processing. Growth of the network is more efficient in terms of communication; as the interconnection network is scaled up it becomes more reliable and its unreliability is minimized. This is a very desirable characteristic, as the network remains operational despite failures of adjacent nodes or links in a parallel computer architecture. Formulations based on M/M/1 queuing theory are used to optimize network throughput.
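The M/M/1 model mentioned at the end of that abstract has closed-form performance metrics: for arrival rate lam below service rate mu, utilization is rho = lam/mu, the mean number in the system is L = rho/(1-rho), and mean latency is W = 1/(mu-lam), with Little's law tying them together as L = lam*W. The rates below are example values, not from the paper.

```python
# Closed-form M/M/1 queue metrics (stable only when lam < mu).
def mm1(lam, mu):
    assert lam < mu, "queue is unstable when lam >= mu"
    rho = lam / mu            # utilization
    L = rho / (1 - rho)       # mean number of messages in the system
    W = 1 / (mu - lam)        # mean time a message spends in the system
    return rho, L, W

# Example: 8 messages/s arriving at a link that serves 10 messages/s.
rho, L, W = mm1(lam=8.0, mu=10.0)
```

Note how sharply latency grows as rho approaches 1, which is why such formulations matter when sizing interconnection networks.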
Today, the e-commerce companies develop rapidly and compete intensely, and the selection of e-commerce companies’ logistics operation mode plays an important role in maintaining core competitiveness. The article describes the VANCL departure, introducing changes in the logistics business model VANCL company in the development process, and then summarize VANCL mode selection
We are in the age of big data, which involves the collection of large datasets. Managing and processing large datasets is difficult with existing traditional database systems. Hadoop and MapReduce have become among the most powerful and popular tools for big data processing. Hadoop MapReduce is a powerful programming model used for analyzing large datasets, offering parallelization, fault tolerance, load balancing, elasticity, scalability, and efficiency. Combining MapReduce with the cloud yields a framework for the storage, processing, and analysis of massive machine-maintenance data in a cloud computing environment.
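The MapReduce programming model that abstract refers to can be imitated in-process in a few lines: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase folds each group. Word count is the classic example; a real Hadoop job distributes these same three phases across a cluster.

```python
# A minimal in-process imitation of the MapReduce model
# (map -> shuffle/group -> reduce), using word count as the example.
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every record, flattening the emitted pairs."""
    return [kv for rec in records for kv in mapper(rec)]

def shuffle(pairs):
    """Group emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    return {k: reducer(k, vs) for k, vs in groups.items()}

lines = ["big data big tools", "data tools"]
mapped = map_phase(lines, lambda line: [(w, 1) for w in line.split()])
counts = reduce_phase(shuffle(mapped), lambda k, vs: sum(vs))
# counts == {"big": 2, "data": 2, "tools": 2}
```

Because the mapper and reducer are pure functions over independent groups, the framework can run them in parallel and rerun them on failure, which is the source of the fault tolerance and scalability mentioned above.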
Mobile ad hoc networks (MANETs) are wireless networks consisting of free-moving mobile nodes that can move anywhere at any time without any fixed infrastructure or centralized administration. In this category of networks, existing nodes must rely on each other to play the role of routers or switches instead of using central ones. The self-organized nature of such environments makes MANETs vulnerable to many security threats, so providing security is one of the most interesting challenges in these networks. The use of cryptographic solutions is one of the most interesting security issues here, made more critical by the fact that such schemes must be lightweight enough for the resource-constrained platforms found in this environment. This paper presents the position of cryptographic issues in MANETs and introduces security issues in mobile ad hoc networks alongside different classes of public-key cryptosystems.
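One classic public-key building block of the kind such surveys cover is Diffie-Hellman key agreement, which lets two nodes derive a shared key without any central authority, a natural fit for infrastructure-less networks. The parameters below are tiny demonstration values chosen for this sketch and are nowhere near secure.

```python
# Toy Diffie-Hellman key agreement between two ad-hoc nodes
# (tiny demo parameters; real deployments use large, vetted groups).
def dh_public(g, p, secret):
    """Each node publishes g^secret mod p."""
    return pow(g, secret, p)

def dh_shared(peer_public, secret, p):
    """Both nodes compute the same g^(a*b) mod p from the peer's value."""
    return pow(peer_public, secret, p)

p, g = 2087, 5          # small prime modulus and generator (demo only)
a, b = 1234, 5678       # private keys of the two nodes
A, B = dh_public(g, p, a), dh_public(g, p, b)
k1, k2 = dh_shared(B, a, p), dh_shared(A, b, p)  # k1 == k2
```

The lightweight variants the survey alludes to (e.g. elliptic-curve schemes) follow the same exchange pattern with cheaper group operations.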
CT-SVD and Arnold Transform for Secure Color Image WatermarkingAM Publications,India
Watermarking is used to protect the copyright of digital images. In this paper, we propose a novel watermarking technique using the Contourlet Transform (CT) and Singular Value Decomposition (SVD). CT ensures the imperceptibility of the watermark, and SVD ensures its robustness against attacks. The Arnold transform is used to scramble the watermark pixels to ensure watermark security. Watermark extraction is semi-blind, which avoids the need for the original image during extraction. Both the watermark and the cover image are color images. The performance of the system is judged using PSNR and Correlation Coefficient (CC) values. The system shows good robustness against noise, JPEG compression, filtering, and cropping.
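The Arnold transform mentioned above is a simple invertible pixel permutation on an N x N image: pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N), and applying the inverse map recovers the original. The sketch below covers only this scrambling step, not the paper's CT/SVD embedding.

```python
# Arnold cat-map scrambling of an N x N watermark, and its exact inverse.
def arnold(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    # Inverse of the matrix [[1, 1], [1, 2]] mod n is [[2, -1], [-1, 1]].
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out
```

Because the map has a fixed period for each image size, iterating it a secret number of times acts as a key: an attacker who extracts the embedded data still sees only scrambled pixels.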
The World Wide Web plays an important role in providing knowledge sources to the world, helping many applications deliver quality service to consumers. Over the years the web has become overloaded with information, and it has become very hard to extract the relevant information from it. This has given way to the evolution of big data, and the volume of data keeps increasing rapidly day by day. Data mining techniques are used to find the hidden information in big data. In this paper we review big data, its data classification methods, and the ways it can be mined using various mining methods.
Object Tracking System Using Motion Detection and Sound DetectionAM Publications,India
Automatically monitoring activities with cameras, without human intervention, is a big and challenging problem, so an automatic object tracker system is needed. This paper presents a new real-time object tracking system that systematically combines motion detection and sound detection. The system detects both motion and sound in real time, and when a security breach is detected it also issues an alert through an alarm. The proposed method offers excellent real-time performance because it detects moving objects efficiently and accurately from video recorded by a shaking camera with a changing background and noise.
Research on Retail E-Commerce Logistics operation mode of chinaAM Publications,India
This paper discusses four common logistics operation modes of retail e-commerce enterprises: the self-run logistics mode, the third-party logistics mode, the logistics alliance mode, and the large-scale logistics mode. It compares and analyzes the four models in terms of capital investment, capital flow, logistics cost control, and other aspects.
Selenium - A Trending Automation Testing Toolijtsrd
Selenium is an important testing tool for software quality assurance. In recent years the number of websites has increased rapidly, and it has become essential to test websites against various quality factors to make sure they meet the expected quality goals. Several companies spend heavily on testing tools, while Selenium is available completely free for performance testing. This open-source tool is well known for its unlimited capabilities and reach, and Selenium stands out from the crowd in this respect. Anyone can visit the Selenium website, download the latest version, and use it. It is not only open source but also highly modifiable: testers can make changes based on their needs and requirements. Manav Kundra "Selenium - A Trending Automation Testing Tool" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-4, June 2020, URL: https://www.ijtsrd.com/papers/ijtsrd31202.pdf Paper Url: https://www.ijtsrd.com/engineering/software-engineering/31202/selenium-%E2%80%93-a-trending-automation-testing-tool/manav-kundra
This document summarizes research on factors that contribute to the success of software projects. It discusses functional requirements, operational quality, and usability. The introduction provides background on the importance of quality management and identifies requirements, quality, reliability, performance, and user satisfaction as key success factors. The literature review then summarizes several studies that have examined the impact of reliability, response time, balancing functionality and usability, and measuring customer satisfaction on project outcomes. Overall, the document outlines research establishing that addressing requirements, quality, usability, reliability, and customer satisfaction is critical for software project success.
A Software Measurement Using Artificial Neural Network and Support Vector Mac... (ijseajournal)
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, fuzzy logic, etc. This study investigates the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize generalization error. There is a close relationship between SVMs and radial basis function (RBF) classifiers; both have found numerous applications in areas such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with a Gaussian radial basis kernel function are better than those of the RBFN. We also examine and summarize several points on which the SVM is superior to the RBFN.
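The Gaussian radial basis kernel at the core of the SVM approach summarized above can be sketched in a few lines of plain Python; this is a minimal illustration of the kernel formula only, not the paper's implementation, and the gamma value is an arbitrary choice:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical points give the maximum similarity of 1.0;
# similarity decays toward 0 as the points move apart.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0
print(rbf_kernel([1.0, 2.0], [3.0, 4.0]))  # exp(-0.5 * 8) = exp(-4)
```

An SVM with this kernel implicitly compares every pair of training points through such similarity values, which is what connects it to RBF classifiers.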
From the Art of Software Testing to Test-as-a-Service in Cloud Computing (ijseajournal)
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979) initiated research in software testing. Since then, software testing has gone through evolutions that have driven standards and tools. This evolution has accompanied the complexity and variety of software deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better return on investment. Cloud computing requires more significant involvement in software testing to ensure that services work as expected. In addition to testing cloud applications, cloud computing has paved the way for testing in the Test-as-a-Service model. This review aims to understand software testing in the context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of knowledge in software testing, expanded by the cloud computing paradigm.
Ease of Use and Its Effect on User Decision of Adopting New Method of Car Ren... (IJERA Editor)
The effective collaboration of the multidisciplinary fields of software engineering and business will eventually lead to a better understanding of UX and how to apply it in daily life. This paper spotlights how the user experience of using an internet website to commit and complete a business transaction of booking a vehicle can be improved through ease of use.
International journal of computer science and innovation vol 2015-n1-paper2 (sophiabelthome)
This document provides a taxonomy and comparison of various automated software testing tools. It begins by introducing software testing and the importance of classifying testing tools. It then discusses the objectives of the research and limitations. Various testing methodologies and tool types are defined, including functional, management, and load testing tools. Criteria for comparing tools are outlined. Finally, 32 specific automated testing tools are described briefly, including Selenium, Ranorex, Test Complete, QTP, Watir, TOSCA, and others. The document aims to help professionals select the appropriate testing tools to meet their needs.
AUTOMATED PENETRATION TESTING: AN OVERVIEW (cscpconf)
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
This document describes a new framework called the Bee-Hive model for the requirements engineering process. The Bee-Hive model aims to incorporate the advantages of both the waterfall model and iterative development model while reducing their disadvantages. It has four phases: background research, requirement specification, prototyping, and validation. The background research phase involves researching the application domain, scope of evolution, organizational factors, market, and scale of the project. This helps establish the feasibility of the project in different areas before requirements elicitation.
Data Quality Doesn’t Just Happen: And Here’s What Some of the Industry’s Most... (InsightInnovation)
Data quality isn't always the sexiest topic, but it is critical, and one that buyers and suppliers often neglect. The ramifications of ignoring it can cost millions of dollars. Some of the industry's largest buyers and suppliers have found a simple solution, though, and it is available to everyone else too. Come hear about how data quality concerns haven't gone away, and what others are doing to make sure they and their insights are protected.
Is Crowd Testing (relevant) for Software Engineers? (Henry Muccini)
This presentation was given at AST 2014 (tech.brookes.ac.uk/AST2014/), the 9th International Workshop on Automation of Software Test (AST'14), on June 1, 2014, in Hyderabad, India.
It introduces CrowdTesting and provides some initial thoughts on how Crowd Testing and Software Engineering can be combined to get even better results.
BugRaptors has a habit of remaining up to date with the ongoing trends in software testing, and many interesting trends came through the year 2019. Trends like machine learning, cloud-based testing tools, the adoption of newer test automation tools and practices, and DevOps were commonly followed, which in turn has led to an increase in the automation being performed on various web applications.
FAULT DIAGNOSIS USING CLUSTERING. WHAT STATISTICAL TEST TO USE FOR HYPOTHESIS... (JaresJournal)
Predictive maintenance and condition-based monitoring systems have gained significant prominence in recent years to minimize the impact of machine downtime on production and its costs. Predictive maintenance involves using concepts from data mining, statistics, and machine learning to build models capable of early fault detection, diagnosing faults, and predicting the time to failure. Fault diagnosis is one of the core areas, in which the actual failure mode of the machine is identified. In fluctuating environments such as manufacturing, clustering techniques have proved to be more reliable than supervised learning methods. One of the fundamental challenges of clustering is developing a test hypothesis and choosing an appropriate statistical test for hypothesis testing. Most statistical analyses rely on underlying assumptions about the data that most real-world data cannot satisfy. This paper is dedicated to overcoming this challenge by developing a test hypothesis for a fault diagnosis application using a clustering technique and performing a PERMANOVA test for hypothesis testing.
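To illustrate the distribution-free idea behind tests like PERMANOVA, here is a minimal two-sample permutation test in plain Python. It is an illustrative analogue only, not the paper's PERMANOVA procedure, and the sensor readings and iteration count are invented for the example:

```python
import random

def permutation_test(a, b, n_perm=2000, seed=42):
    """Two-sample permutation test on the difference of means.

    Makes no normality assumption: the null distribution is built by
    repeatedly shuffling the pooled observations between the two groups.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if diff >= observed:
            count += 1
    return count / n_perm  # estimated p-value

# Hypothetical vibration readings for a healthy and a faulty machine.
healthy = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0]
faulty = [1.9, 2.1, 2.0, 1.8, 2.2, 2.0]
print(permutation_test(healthy, faulty))
```

Because the null distribution comes from the data itself, the test remains valid even when parametric assumptions fail, which is the motivation the abstract gives for PERMANOVA.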
This document presents a method for prioritizing test cases for event-driven software using a genetic algorithm. It proposes a single abstract model that can test both graphical user interface (GUI) and web applications. Test cases are executed and assigned a fitness value, then stored in a training database. When test cases have equal fitness values, prioritization criteria like fault detection, time, and code coverage are applied using the genetic algorithm to determine the optimal testing order. The approach was experimentally tested on small GUI and web apps and showed a reduction in latency time compared to other techniques. Future work could involve applying the algorithm to larger real-world software.
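As a loose illustration of fitness-based ordering with tie-breaking criteria (the paper's actual genetic algorithm is more involved), consider the following sketch; the test suite, field names, and values are invented for the example:

```python
def prioritize(test_cases):
    """Order test cases by fitness (desc); break ties by fault detection
    (desc), then execution time (asc), then code coverage (desc)."""
    return sorted(
        test_cases,
        key=lambda t: (-t["fitness"], -t["faults"], t["time"], -t["coverage"]),
    )

# Hypothetical suite: t2 and t3 tie on fitness, so t3 wins on faults found.
suite = [
    {"name": "t1", "fitness": 0.6, "faults": 1, "time": 3.0, "coverage": 0.4},
    {"name": "t2", "fitness": 0.9, "faults": 2, "time": 5.0, "coverage": 0.7},
    {"name": "t3", "fitness": 0.9, "faults": 4, "time": 2.0, "coverage": 0.6},
]
print([t["name"] for t in prioritize(suite)])  # ['t3', 't2', 't1']
```

In the paper's approach a genetic algorithm searches for the ordering; the tie-breaking criteria above mirror the prioritization criteria (fault detection, time, coverage) it applies when fitness values are equal.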
This study examines factors affecting adoption and implementation of enterprise resource planning (ERP) systems in discrete manufacturing companies in India. A survey was conducted of 30 manufacturing companies. The results suggest ERP is more likely to be adopted if production characteristics are compatible with ERP capabilities. However, statistical analysis showed the decision-maker's computer knowledge was more important than other factors, including production characteristics, in the decision to adopt ERP. The factors affecting implementation level were less clear. Contrary to expectations, manufacturing method did not appear significant in this sample.
This document summarizes a research paper on developing a software tool called "Smart Sim Selector" to help users select simulation software. It describes the development of the tool, including:
1) Designing a database containing information on various simulation software packages based on over 200 evaluation criteria.
2) Creating an interface in Visual Basic that allows users to specify their requirements and priorities, then queries the database to recommend suitable software.
3) Implementing different techniques (AHP, weighted scoring, TOPSIS) to analyze users' inputs and software attributes to determine the best recommendation.
The tool aims to provide an unbiased approach to simulation software selection and reduce problems companies face in choosing inappropriate packages.
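Of the three techniques mentioned, weighted scoring is the simplest to sketch; the package names, criteria, ratings, and weights below are invented purely for illustration and are not from the Smart Sim Selector database:

```python
def weighted_score(ratings, weights):
    """Weighted-sum score: each criterion rating times its priority weight."""
    return sum(ratings[c] * weights[c] for c in weights)

# Hypothetical user priorities over three evaluation criteria.
weights = {"ease_of_use": 0.5, "cost": 0.3, "support": 0.2}

# Hypothetical 1-10 ratings for two simulation packages.
packages = {
    "SimTool A": {"ease_of_use": 8, "cost": 6, "support": 7},
    "SimTool B": {"ease_of_use": 6, "cost": 9, "support": 8},
}

best = max(packages, key=lambda p: weighted_score(packages[p], weights))
print(best)  # SimTool B (7.3 vs 7.2)
```

AHP and TOPSIS refine this same idea: AHP derives the weights from pairwise comparisons, and TOPSIS ranks alternatives by distance to ideal and anti-ideal solutions.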
Streamline and Accelerate User Acceptance Testing (UAT) with Automation.pdf (RohitBhandari66)
The document discusses how automation can streamline and accelerate the user acceptance testing (UAT) process. It describes how traditional manual UAT is time-consuming, labor-intensive, and error-prone. Automation tools like Opkey can overcome these challenges by enabling test case creation and management, automated test execution, comprehensive defect tracking and reporting, and integration with other tools. This allows organizations to complete UAT more efficiently, with improved test coverage, accuracy, and cost-effectiveness. The conclusion states that Opkey is an effective automation tool for UAT that saves time and money while improving software quality.
This document summarizes an article from the International Journal of Computer Engineering and Technology. The article proposes a reliability improvement predictive approach to software testing using mathematical modeling and the Empirical Bayesian method. It introduces a model predictive control framework for software testing. The key aspects are:
1) It uses the Empirical Bayesian method to estimate reliability and optimize the test allocation scheme online by repeatedly solving an optimal control problem.
2) A case study shows that the proposed approach can achieve better results in improving reliability than random testing.
3) The case study also demonstrates that finding more defects through testing does not necessarily lead to higher reliability, and the proposed approach more effectively directs testing efforts towards the most important bugs.
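As a toy illustration of the Bayesian flavour of such reliability estimation (not the paper's Empirical Bayesian model), here is a Beta-Binomial update of a module's failure probability; the prior and observation counts are invented for the example:

```python
def beta_binomial_posterior_mean(alpha, beta, failures, trials):
    """Posterior mean of the failure probability under a Beta(alpha, beta)
    prior after observing `failures` out of `trials` test executions."""
    return (alpha + failures) / (alpha + beta + trials)

# Weak prior belief of roughly a 10% failure rate.
alpha, beta = 1.0, 9.0

# Observed: 2 failures in 40 test runs of a module (made-up numbers).
estimate = beta_binomial_posterior_mean(alpha, beta, failures=2, trials=40)
print(round(estimate, 3))  # (1 + 2) / (1 + 9 + 40) = 0.06
```

Empirical Bayes methods go one step further and estimate the prior parameters from pooled data across modules before applying an update like this one.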
The Google File System (GFS) was innovatively created by Google engineers and was ready for production in record time. The success of Google is attributed to its efficient search algorithm, and also to the underlying commodity hardware. As Google runs a large number of applications, Google's goal became to build a vast storage network out of inexpensive commodity hardware, so Google created its own file system, named the Google File System (GFS). GFS is one of the largest file systems in operation. Generally, GFS is a scalable distributed file system for large, distributed, data-intensive applications. The design of GFS stresses component failures, huge files, and files mutated by appending data. The entire file system is organized hierarchically in directories and identified by pathnames. The architecture comprises multiple chunk servers, multiple clients, and a single master. Files are divided into chunks, and chunk size is a key design parameter. GFS also uses leases and mutation order in its design to achieve atomicity and consistency. As for fault tolerance, GFS is highly available, with replicas of the chunk servers and the master.
Similar to Benefits of Automated Testing Over Manual Testing (20)
Codeless Generative AI Pipelines (GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
Open Source Contributions to Postgres: The Basics POSETTE 2024 (ElizabethGarrettChri)
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.