Software Testing: Issues and Challenges of Artificial Intelligence & Machine Learning
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.1, January 2021
DOI: 10.5121/ijaia.2021.12107
Kishore Sugali, Chris Sprunger and Venkata N Inukollu
Department of Computer Science, Purdue University, Fort Wayne, USA
ABSTRACT
The history of Artificial Intelligence and Machine Learning dates back to the 1950s. In recent years, there has been an increase in popularity for applications that implement AI and ML technology. As with traditional development, software testing is a critical component of an effective AI/ML application. However, the development methodology used in AI/ML varies significantly from traditional development, and these variations give rise to numerous software testing challenges. This paper aims to identify and explain some of the biggest challenges that software testers face in dealing with AI/ML applications. The study has key implications for future research: each of the challenges outlined in this paper is well suited for further investigation and has great potential to shed light on more productive software testing strategies and methodologies that can be applied to AI/ML applications.
KEYWORDS
Artificial Intelligence (AI), Machine Learning (ML), Software Testing
1. INTRODUCTION
1.1. Overview of Artificial Intelligence and Machine Learning.
Artificial Intelligence (AI), a widely recognized branch of Computer Science, refers to any smart machine that exhibits human-like intelligence. By learning to emulate the perception, logical and rational thought, and decision-making capabilities of human intelligence, AI helps machines imitate intelligent human behavior. AI has significantly increased efficiencies in multiple fields in recent years and is gaining popularity in the use of computers to solve otherwise unresolvable problems and exceed the efficiency of existing computer systems [2]. In [3], Derek Partridge identifies the major classes of association between artificial intelligence (AI) and software engineering (SE); these areas of interaction are software support environments, AI tools and techniques in standard software, and the use of standard software technology in AI systems. Mark Kandel and H. Bunke, in [7], attempt to correlate AI and software engineering at certain levels and address whether AI can be applied directly to SE problems, and whether SE processes are capable of taking advantage of AI techniques.
1.2. How AI Impacts Software Testing
Research demonstrates that software testing consumes enterprise resources while adding no new
functionality to the application. When regression testing discloses a new error introduced by a
code revision, a new regression cycle begins. Many software applications require engineers to
write testing scripts, and their skills must be on par with those of the developers who built the
original application. This additional overhead expense in the quality assurance process grows
with the increasing complexity of software products [3].
In order to improve the quality of AI products and to reduce costs, automated testing engineers
must continually focus on AI ability, performance, and speed. New applications progressively
provide AI functionality, sparing human testers the challenge of comprehensively evaluating the
entire product. Logically, AI will increasingly be needed to certify intelligence-containing
systems, in part because the spectrum of possible inputs and outputs is so wide.
The aim of this paper is to discuss the issues and challenges that AI methods face in the field of
software testing. Different AI techniques, such as classification and clustering algorithms,
currently focus primarily on monotonous data to train models to predict precise results [10]. In
the case of automated software testing procedures, standard machine learning (ML) and, more
specifically, deep learning methods such as neural networks, support vector machines (SVM),
reinforcement learning techniques, and decision networks can be trained with input data
combined with the respective output of the application under test.
2. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2.1. Types of Machine Learning Algorithms
Based on their function, Machine Learning algorithms are divided into several categories. The
following are the key groups [20]; a minimal sketch of the first two categories appears after this list:
● Supervised Learning – ML tries to model relationships and dependencies between the target
prediction output and the input features. The input data is called training data and has a known
label or result. Algorithms include Nearest Neighbor, Naive Bayes, Decision Trees, Linear
Regression, Support Vector Machines (SVM), and Neural Networks.
● Unsupervised Learning – The input data is not labeled and does not have a known result. Mainly
used in pattern detection and descriptive modeling. Algorithms include k-means clustering and
association rules.
● Semi-supervised Learning – In the previous two types, either labels are present for all
observations in the dataset or for none of them. Semi-supervised learning falls in between: the
input data is a mixture of labeled and unlabeled observations.
● Reinforcement Learning – Allows machines and software agents to automatically determine the
ideal behavior within a specific context in order to maximize their performance. Algorithms
include Q-Learning, Temporal Difference (TD), and Deep Adversarial Networks.
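As a rough illustration of the first two categories, the sketch below (our own example, not from the paper) fits a supervised classifier on labeled data and then clusters the same data without using the labels; the dataset and model choices (Iris, a decision tree, k-means via scikit-learn) are assumptions made purely for demonstration.

```python
# Illustrative sketch only: contrasts supervised and unsupervised learning
# on a small toy dataset using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: labels are known, so we fit a classifier and score it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: labels are ignored, the algorithm only groups the data.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("cluster sizes:", [int((km.labels_ == c).sum()) for c in range(3)])
```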
2.2. Role of AI in Software Testing
The application of artificial intelligence methods and techniques to software engineering and
testing is a growing research field that has been documented in several published works. A
variety of AI tools are used to generate test data, study data suitability, optimize and analyze test
coverage, and manage testing. In [8], the Annealing Genetic Algorithm (AGA) and the Restricted
Genetic Algorithm (RGA), two modified versions of the Genetic Algorithm (GA), have been used
for testing. Ant Colony Optimization (ACO) can be used for model-based test data generation:
groups of ants can effectively explore a graph and generate optimal test data to meet a test
coverage requirement [9]. Test sequence generation has also been used for state-based software
testing [9].
Many other AI-based techniques and methodologies have been used for software testing
purposes: requirements-based testing using particle swarm optimization (PSO), partition testing
using PSO, combining PSO and GA in evolutionary and structural testing, test case minimization
using an artificial neural network (ANN), test case minimization using a data-mining info-fuzzy
network, building a Bayesian network (BN) model to predict software fault content and fault
proneness, use of artificial neural networks (ANN) and info-fuzzy networks (IFN) for test case
selection and automated test oracle development, using a neural network to prune test cases,
software test data generation using ACO, and automatic partial-order planning for test case
generation are just a few examples from the available literature.
2.3. Role of AI in Test Case Generation
There has been an increasing interest in the use of AI for application testing, and some research
has been done on how to manage application testing with the help of AI. The various forms of
this technique can be found with a simple query of the ACM library on this subject. Some of
these techniques include model-based test generation and automating the generation of test cases,
so that tests can be regenerated each time the application changes and automated oracles that
model the behavior of the user interface can be built. There have also been cases of generating
tests based on artificial intelligence (AI) planning techniques and genetic modeling. The
implementation of genetic algorithms appears to be an interesting way of approaching the
automation of test case improvement [21]. In [22], an Intelligent Search Agent (ISA) for optimal
test sequence generation and an Intelligent Test Case Optimization Agent (ITOA) for optimal test
case generation were used. Memon [24] has several papers about automatically generating test
cases from the GUI using an AI planner. Similarly, in [23], a genetic algorithm is used to evolve
new test cases that increase test suite coverage while avoiding unworkable sequences.
3. ISSUES AND CHALLENGES OF AI
Owing to the lack of technical expertise and published research results, AI software testing faces
a number of distinct challenges, which are summarized below.
3.1. Identifying Test Data
In the field of AI, the model must be trained and tested before being deployed into the real
environment. Model training and testing is typically carried out by a data scientist or engineer
rather than a software-testing expert. To conduct model testing, however, a logical or structural
understanding of the model is not necessary; model testing is similar to testing conducted at the
unit or component level. It is necessary for a tester to understand how the data used in the model
evaluation process has been obtained, defined, and used. For example, if production data has been
gathered as training data, was this done at a certain time of day? Is it representative of average
use? Is the data set too broad or too narrow? If the sampling is conducted poorly, or if the training
data is not representative, the real-world findings will be wrong [15]. How do we use an
organized method to prepare quality training datasets and coverage-oriented test datasets for
AI-based functional features in today's learning-based AI systems?
As per [14], uncertainty in system outputs, responses, or actions has been observed in AI-based
mobile apps for the same input test data when the context conditions are changed. Similarly,
accuracy problems were encountered when the training data sets and/or test data sets were
amended. Moreover, to achieve adequate AI model coverage, white-box testing for AI software
must pay attention to the design of the training and test data.
In conventional system function testing, we validate whether a system function performs
correctly by generating functional test cases based on the specified function requirements in
terms of inputs and predicted outputs. In AI software testing, we find that functional correctness
is highly reliant on the selection and training of the AI model and on the training data sets
provided. For a particular training data set, different AI models typically produce outcomes of
different quality. For the same AI model, different training and test data sets may generate
different outcomes and lead to different correctness. For AI software engineers, this poses a new
challenge for quality testing and assurance. Many AI functions, for instance, are designed to
accept multiple rich-media inputs (text, image, audio, or video events) and produce multiple
results in terms of rich media and events/actions. This adds more complexity to the planning of
test cases and the generation of test data.
According to [16], researchers built a deep learning system to identify gender in images across a
variety of human faces. The challenging aspects are how images must be adjusted or analyzed for
the models to perform more accurately, and the critical role played by the data used to train them
in making the models perform more (or less) effectively. More than 2,000 unique models were
trained and tested based on a common deep learning architecture, and a great deal of variation
was discovered in the ability of these models to accurately identify gender in adverse image sets.
The models that were trained using more diverse sets of images (in terms of the demographic
composition and the quality and types of images used in each set) were better at identifying
gender in a similarly diverse group of photos than models that were trained on more limited data.
The model that had been trained on images taken from all seven of the individual collections had
the best performance: it accurately identified the correct gender for 87% of the training data
images, while the models trained using only one of the individual data collections achieved
accuracies between 74% and 82%.
At a broader level, these models appeared to have more difficulty identifying women: six of the
eight (including the model built using the most diverse training data available) were more
accurate at identifying men than women. Two of the models, however, were considerably more
accurate at identifying women than men, and, as with their overall accuracy, it is not entirely
obvious or predictable why certain models would be better at identifying men than women, or
vice versa. In view of these outcomes, AI models can be inconsistent in the accuracy of their results.
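One practical way for a tester to surface this kind of inconsistency is to report accuracy per data slice rather than a single overall number. The sketch below is our own hypothetical example (column names and values are invented) of such a slice-based check.

```python
# Illustrative sketch (not from the paper): accuracy computed per data slice,
# e.g. per reported gender, to expose per-group inconsistencies.
import pandas as pd

results = pd.DataFrame({
    "group":     ["men", "men", "men", "women", "women", "women"],
    "true":      [1, 1, 0, 1, 0, 0],
    "predicted": [1, 1, 0, 1, 1, 1],
})

# Accuracy computed separately for each slice of the test data.
per_slice = (
    results.assign(correct=results["true"] == results["predicted"])
           .groupby("group")["correct"]
           .mean()
)
print(per_slice)  # large gaps between slices indicate biased or unstable models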
3.2. Algorithmic Uncertainty
Uncertainty is a challenge in any kind of software testing. Uncertainty emerges from an
incomplete specification or from requirements that are not fully understood. Algorithmic
instability is of particular importance in AI applications [1]. This is due to the randomness
inherent in the algorithms used in the development of AI. A software engineer designing an AI
application does not code to a well-defined algorithm. Instead, the developer works to train a
model with a set of training data. There must be a starting point for the neurons and weights in
that model. While the developer chooses the number of neurons and the initial weights, there is
no magic formula to determine the correct starting point. The choices made to initialize the model
can have a significant impact on how the model evolves during training. The randomness of the
resulting algorithm makes it very challenging for traditional testing strategies, and white-box test
coverage measures become impossible to calculate. A minimal illustration of this initialization
randomness appears below.
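The sketch that follows is our own example (the dataset, the MLPClassifier, and the seed values are assumptions): it shows that two training runs only reproduce one another when the random seed controlling the initial weights is pinned, which is how testers usually make an otherwise non-deterministic training run repeatable.

```python
# Illustrative sketch: the initial weights of a neural network are chosen at
# random, so two training runs differ unless the seed is pinned. Pinning the
# seed makes a test run reproducible, but does not remove the underlying
# algorithmic uncertainty.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

run_a = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=7).fit(X, y)
run_b = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=7).fit(X, y)
run_c = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=8).fit(X, y)

print(run_a.score(X, y) == run_b.score(X, y))  # True: same seed, same model
print(run_a.score(X, y) == run_c.score(X, y))  # usually False: different seed
```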
3.3. Lack of Good Measures of Effectiveness
Black-box testing exercises the functionality of an application without knowledge of its internal
implementation, and test suites for black-box testing are generated from the requirement
specifications. For AI models, applying black-box testing means testing the model without
knowing the features or the algorithm used to create it. The challenge here is to identify a test
oracle that could verify the test output against expected values known in advance [13]. In the
case of AI models, there are no expected values known in advance. Because the models produce
predictions, it is not easy to compare or verify a prediction against an expected value. During the
model development phase, data scientists test model performance by comparing predicted values
with actual values. This is not the case when testing the model, where the expected values are
unknown for any given input. To assess the effectiveness of black-box testing on AI models from
a quality assurance perspective, we have to find techniques to test or perform quality control
checks. While there are some suggested techniques, such as model performance testing,
metamorphic testing, and testing with various data slices, these still seem inadequate for
evaluating the efficacy of a model in an AI environment. A small metamorphic-testing sketch follows.
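As a hedged illustration of metamorphic testing, the sketch below (our own example) checks a deliberately simple metamorphic relation: shuffling the test inputs and un-shuffling the predictions must reproduce the original predictions exactly, even though no oracle tells us what any individual prediction should be. Real metamorphic relations for AI systems are usually richer (for example, label-preserving input transformations).

```python
# Illustrative metamorphic-testing sketch: with no oracle that gives the
# expected label for each input, we check a metamorphic relation instead.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(random_state=0).fit(X, y)   # system under test

source_output = model.predict(X)                 # source test case

rng = np.random.default_rng(1)
perm = rng.permutation(len(X))                   # follow-up test case: shuffled inputs
follow_up_output = np.empty_like(source_output)
follow_up_output[perm] = model.predict(X[perm])  # undo the shuffle

# The metamorphic relation: both prediction vectors must be identical.
assert np.array_equal(source_output, follow_up_output), "metamorphic relation violated"
```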
3.4. Lack of a Testable Specification
Intuitively, black box testing techniques should be the most straightforward way to test an AI
application because they avoid the intricate inner workings of AI algorithms. One of the most
popular black box testing techniques is testing against a requirements specification. However, AI
applications present challenges even when testing to a specification. There is a fundamental
conflict with the very notion of AI requirements. AI applications are intended to produce
generalized behavior, while a requirement is intended to document specific behavior. An AI
requirement specification by its nature is then attempting to specify general behavior. Validation
of an AI application involves testing the quality of the algorithm's prediction or performance
[11]. Measuring the prediction quality in a testable way is very challenging.
In [1], we see examples of typical AI requirements that illustrate the testing challenge. We may
have an application designed to carry out facial recognition, with the requirement of recognizing
a person from a picture. It is one thing to train the model with a collection of photos of a person
from one photo shoot, with the person in slightly different poses, and then to test it on a different
set of photos from the same photo shoot. However, this limited capability would certainly not be
good enough. There are several more variations that the application may be required to handle.
What if we add makeup or change the color or length of the hair? What about aging, weight loss,
or weight gain? It would not be possible to train the model on every variation. What does good
enough really look like?
3.5. Splitting Data Into Training and Testing Sets
Typically, a data scientist will use a framework to automatically split the available data into
mutually exclusive training and testing datasets. According to [15], one popular framework used
to do this is SciKit-Learn, which allows developers to split off the required size of dataset by
random selection. When assessing or retraining different models, it is important to control the
random seed used to split the data; otherwise the results will not be consistent, comparable, or
reproducible. Typically 70 to 80 percent of the data is used for model training, with the remainder
reserved for model evaluation. There are numerous specialized methods available to ensure that
the splitting of training and testing data has been carried out in a representative manner. When
considering the coverage of model testing, coverage should be calculated in terms of data rather
than lines of code. A minimal split example appears below.
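The following sketch shows the split described above using SciKit-Learn, which the syllabus cited in [15] names; the dataset, model, and 80/20 ratio are our own illustrative assumptions. Fixing random_state makes the split, and therefore the evaluation, reproducible across retraining runs; stratify keeps the class balance representative in both sets.

```python
# Minimal sketch of a reproducible train/test split with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# 80% for training, 20% held out for model evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```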
An important consideration is the design of regression testing activities. In traditional software
development, unless very significant changes are made, the risk of functional regression is
typically low. In the case of AI, almost any change to the algorithm, model parameters, or
training data usually requires the model to be rebuilt from scratch, and the risk of regression for
previously tested functionality is very high. This is because 100% of the model could potentially
change, rather than the small percentage of change that the required modifications would suggest.
In summary:
● The way the initial data was obtained is important for understanding whether it is representative.
● It is important that the training data is not used to test the model; otherwise testing will only
appear to pass.
● The data split for training and testing may need to be made reproducible.
● Improper splitting of datasets into training, validation, and testing sets can result in overfitting
and underperformance in production.
For instance, if a trained model expects a certain input feature but that feature is not passed to
the model at inference time, the model will simply fail to render a prediction. At other times,
however, the model will fail quietly. Data leakage is one of the most important issues for a data
scientist to understand. If you do not know how to prevent it, leakage will come up often, and it
will ruin your models in the most subtle and dangerous ways. In particular, leakage causes a
model to look precise before you start making decisions with it, after which the model
becomes very imprecise. A common leakage pattern is sketched below.
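The sketch below is a stylized, hedged example (our own, not from the paper) of one frequent leakage pattern: fitting a preprocessing step on all of the data before splitting, so the test rows influence the transformation applied at training time. With a plain scaler the inflation is small, but with target-dependent preprocessing it can be severe; the pattern, not the numbers, is the point.

```python
# Illustrative sketch of preprocessing leakage versus a leak-free pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Leaky version: the scaler has already "seen" the test rows before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky_score = SVC().fit(X_tr, y_tr).score(X_te, y_te)

# Safer version: split first, then fit the scaler only on the training fold.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clean = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
print(leaky_score, clean.score(X_te, y_te))
```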
3.6. Incomprehensibility
As alluded to in Section 3.2 above, white-box testing techniques are particularly challenging in AI
applications. One of the biggest drivers is the concept of incomprehensibility. A neural network
by its nature is very difficult to understand logically [1]. This makes many traditional methods of
measuring code coverage, such as branch or decision coverage, next to impossible to compute.
Identifying strategies to approximate an equivalent coverage metric for AI applications is an
area ripe for further research.
The way a developer creates a neural network varies greatly from traditional software
development. In traditional software development, the coder must develop and implement an
algorithm for solving a problem and has to understand every detail and every case in order to
generate the code correctly. This is not the case in neural net development, where the goal is to
train a model from a collection of training data so that it generalizes. Instead of designing a
specific algorithm, the coder decides on the number of layers and how many neurons to include
in each layer of the neural net. The neural net is then trained on a set of training data.
Theoretically, the neural net could be transformed into a complex decision tree of if statements
[12]. While a human may be able to read one of the if statements, that does not imply that they
could fully comprehend the larger system of if statements in the decision tree. Consider a neural
net trained to recognize dogs in pictures. The training data must include pictures of several
different types of dogs as well as pictures of other types of animals which are not dogs. After
training, you might test the neural net with a series of pictures that were not part of the training
data: perhaps a picture of a fish, then a husky, then a wolf, then a poodle. It is not too difficult to
grasp how the net identifies the fish as not a dog and the husky as a dog. However, it should also
identify the poodle as a dog and the wolf as not a dog, even though a wolf looks a lot more like a
husky than a poodle does. How would you describe in detail how to recognize the poodle as a
dog and the wolf as not a dog? This is only a simplistic instance of where incomprehensibility
begins to creep in; there are many more tricky cases. One way to probe such a model is sketched
below.
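One hedged, approximate way to peer into an otherwise incomprehensible model is to fit a small "surrogate" decision tree to the model's own predictions, echoing the idea in [12] that a net could in principle be unrolled into a tree of if statements. The sketch below is our own example with synthetic data; it is an approximation of the net's behavior, not an exact explanation.

```python
# Illustrative sketch: a shallow surrogate decision tree trained on the neural
# net's predictions gives an approximate, human-readable view of its behavior.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X, y)

# Train the surrogate on the network's outputs, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))

print("surrogate agreement with net:", surrogate.score(X, net.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```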
3.7. AI in Cybersecurity
Cybersecurity is one of today's greatest problems. Although cybersecurity awareness has
increased, and significant sums of money and effort are being spent to tackle cybercrime, cyber
risks still emerge from possible malicious actions for various reasons, even where frameworks,
software applications, and appropriate measures are in place. Organizations, businesses, and
researchers are therefore turning to Artificial Intelligence in cybersecurity to solve cyber problems
and challenges. The framework "Intelligent Cyber Police agents for Early Warnings in an
integrated security approach" is introduced in [17]; it uses artificial and deep neural networks to
prevent attacks, along with expert systems to support reaction and response measures, within an
integrated security approach.
Despite the advantages of the aforementioned proposal to use AI in the cybersecurity domain, the
following issues and risks arise when using AI:
1. Data privacy is a major challenge for big data. The analysis of huge volumes of data may
cause organizations to be concerned about the privacy of their personal data, and some are
even reluctant to share their data. Given that AI methodologies are designed to learn and
provide predictive analysis, organizations are concerned about the transparency of their
personal data [18].
2. While there are various legal concerns regarding AI, the loss of human control over the
consequences of AI's autonomy is the main concern. Due to the unique and unforeseeable
nature of AI, existing legal frameworks do not necessarily apply [19].
3. While the adaptation of AI algorithms to cybersecurity has made great strides, these
algorithms are not yet fully autonomous and cannot completely replace human decisions;
tasks that need human intervention remain.
The most common use cases for AI in cybersecurity are listed below, where some evidence of
real-world business use has been found:
● AI for Network Threat Identification
● AI Email Monitoring
● AI-based Antivirus Software
● AI-based User Behavior Modeling
● AI for Fighting AI Threats
The use of AI in cybersecurity systems can still be described as nascent. Businesses need to
ensure that their systems are trained with feedback from cybersecurity experts, which would
make the software stronger than conventional cybersecurity systems at detecting real cyber
threats with much greater precision.
Another challenge for companies using purely AI-based cybersecurity detection methods is
reducing the number of false-positive detections. If the program knows what has been tagged as a
false positive, this could theoretically become easier to do. Once a baseline of behavior has been
constructed, the algorithms will flag statistically significant deviations as anomalies and warn
security analysts that further investigation is needed; a minimal version of this idea is sketched
below.
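The sketch below is our own toy example of the baseline-and-deviation idea (the metric, numbers, and threshold are hypothetical): a baseline of normal behavior is summarized and new observations that deviate by a statistically significant margin are flagged for analyst review.

```python
# Illustrative sketch: flag statistically significant deviations from a
# learned baseline of normal behavior (here, hypothetical login counts per hour).
import numpy as np

baseline = np.array([12, 15, 11, 14, 13, 16, 12, 15, 14, 13])  # hypothetical normal traffic
mean, std = baseline.mean(), baseline.std()

new_observations = np.array([14, 13, 41, 15])  # 41 is the injected anomaly

# Flag anything more than 3 standard deviations from the baseline mean.
z_scores = np.abs(new_observations - mean) / std
for value, z in zip(new_observations, z_scores):
    status = "ANOMALY - escalate to analyst" if z > 3 else "normal"
    print(f"{value:>3}  z={z:5.1f}  {status}")
```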
3.8. Adaptive Behavior
Adaptive behavior is yet another challenge in the testing of AI applications. A primary goal in
the development of AI applications is to collect a robust set of clean data from the real world and
to use that data to train a model that generalizes to fit the data. This makes the performance of
the application dependent on future data matching the historic data on which the model was
trained. In many real-world scenarios, however, the data population is dynamic [5]; think of
seasonal shopping patterns or disease rates. Such shifts in the data cannot be tested if they did not
exist in the training data. At best, the historic data can be divided into a training set and a testing
set, where the training set is used to train the model and the testing set is used to measure model
loss. These tests can show that the model has generalized and works well with the data available
during implementation, but what will the result be when a shift appears in future real-world data?
It is actually very common for AI techniques to be utilized in scenarios where the target
environment is expected to be dynamic [1]. Voice recognition, for instance, is a popular field for
AI implementation. Suppose a model was trained using voice samples of native speakers of
English from across the United States. Despite regional variations within the US, this application
can be trained to run very well. Now imagine that people who speak English as a second
language begin to use the same application; less promising outcomes will likely begin to appear
in its output. The dynamism of the target environment therefore requires the AI model to be
recalibrated from time to time to account for changes in the new data. Detecting the ideal time to
recalibrate the model, and even automating the recalibration, is an area ripe for further research;
a simple drift check is sketched below.
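As one hedged possibility for deciding when to recalibrate, the sketch below (our own example; the data, the per-feature test, and the significance threshold are assumptions) compares the distribution of a feature in the training data against live data with a two-sample Kolmogorov-Smirnov test and reports a shift.

```python
# Illustrative sketch: detect distribution shift between training data and
# live production data as a trigger for model recalibration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # data the model saw
live_feature = rng.normal(loc=0.6, scale=1.0, size=5000)       # shifted production data

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"distribution shift detected (KS={stat:.3f}); consider recalibrating the model")
else:
    print("no significant shift detected")
```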
3.9. Communication Failure in Middleware Systems/API Resulting in Disconnected
Data Communication and Visualization.
System integration testing is very important for ensuring the accuracy and completeness of data
flows, since much of the software development effort typically goes into data extraction,
normalization, cleaning, and transformation [15]. Any data errors can cause important and
complex failures in the quality of the system. For example:
∙ An error in the transformation logic for birth dates could lead to a large error rate in an
algorithm that relies heavily on predictions based on a person's age.
∙ A failure to send the correct outcome to the actuators can contribute to system behavior that
does not support its goals.
Ensuring that all inputs to the algorithm, including those from all sensors in the AI environment,
have been checked is a challenging job for integration testers. It is also intricate to verify that the
number of inputs or records loaded into the system matches the anticipated count, that data enters
the system without loss or truncation, and that the transformed data features fulfill expectations;
a minimal check of this kind is sketched below.
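The sketch below is our own hypothetical example of such integration-level data checks (field names, dates, and plausibility bounds are invented): it validates the birth-date-to-age transformation cited above and asserts that no records are lost on the way to the model.

```python
# Illustrative sketch: validate a data transformation and record counts before
# the data reaches the model.
from datetime import date

def birth_date_to_age(birth_date: date, today: date) -> int:
    """Transformation under test: whole years between birth_date and today."""
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )

incoming_records = [
    {"id": 1, "birth_date": date(1980, 6, 15)},
    {"id": 2, "birth_date": date(2001, 12, 1)},
]

today = date(2021, 1, 10)
transformed = [{**r, "age": birth_date_to_age(r["birth_date"], today)} for r in incoming_records]

# Integration checks: no records lost, and the transformed feature is plausible.
assert len(transformed) == len(incoming_records), "records lost during transformation"
assert all(0 <= r["age"] <= 120 for r in transformed), "implausible age produced"
print(transformed)
```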
3.10. Overfitting the Model
In AI applications, overfitting is a classic research problem. The fuel required for any AI system
is a robust collection of good data. One of the first steps is to divide the data into a training data
set that will be used to train the model and a testing data set that will be withheld from training
and used instead to evaluate the AI model. Overfitting occurs when the model learns the training
data too well: the model begins to learn the noise in the training data, and a large difference
between the error rate on the training data and the error rate on the test data is observed [4]. The
goal when training the model is for the model to generalize, and often simpler is better. Consider
as an example an AI application whose purpose is to predict the presence or absence of a specific
disease. The data for training this model may contain several different variables, but we will use
two dimensions for simplicity of visualization. Here, x indicates the presence of the disease and o
indicates its absence.
Figure 1 – Generalized model
Figure 1 shows a simple model that generalizes the data. Note that the model is not perfect: the
red x and the red o represent model loss on the test data. This is acceptable and is to be expected.
The only way to account for the two misses would be to complicate the model curve, which
would result in overfitting.
Figure 2 shows what an overfit model might look like. The equation of the curve is much more
complex and now perfectly fits the training data. The issue arises when we start to introduce test
data that the model has not yet seen. The two blue question marks represent instances of test
data. The model in Figure 2 will predict the top question mark as diseased and the bottom
question mark as not diseased, while the more general model from Figure 1 will predict the
opposite. Either model could be correct, but it is more likely that the model in Figure 1 will be
correct.
Figure 2 – Overfit model
At first glance, the best approach to building the model may seem to be to minimize loss on the
training data. However, this results in overfitting, and the model's performance on new data
degrades. The best model will have an error rate on the training data that is very close to the
error rate on the test data.
A common technique to help avoid overfitting is to subdivide the training data into multiple
subsets called folds. The model is then trained multiple times, once per fold of training data.
This approach provides an opportunity to compare the results from each fold to see what works
best and to strengthen the generalization of the model. Giving the model the opportunity to learn
from multiple different combinations of data decreases the chances of overfitting [6]. A minimal
cross-validation sketch follows.
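The sketch below shows the standard k-fold cross-validation realization of the fold idea described above; the dataset, model, and number of folds are our own illustrative choices. A large gap between fold scores, or between training and validation error, is an early warning of overfitting.

```python
# Minimal k-fold cross-validation sketch with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

model = DecisionTreeClassifier(max_depth=4, random_state=0)
folds = KFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(model, X, y, cv=folds)  # one score per held-out fold
print("per-fold accuracy:", scores.round(3))
print("mean accuracy:", scores.mean().round(3))
```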
4. RELATED WORK
There has been little research done on software testing in AI. In [14], key challenges of AI
software quality validation have been summarized, including how to define quality assurance
standard systems and develop adequate quality test coverage [11], how to connect quality
assurance requirements and testing coverage criteria for AI systems based on big data, and how
to use systematic methods to develop quality test models. [15] presents several new challenges
that AI systems face in predicting system behavior, such as determining the exact pre-conditions
and inputs needed to obtain an expected result, defining expected results and verifying the
accuracy of test outputs, and measuring test coverage. [15] also interprets the various challenges
in test generation at the code, unit, module, or component levels; generating tests from code
makes it very challenging for AI to understand the state of the software, its data, and the
necessary dependencies. Parameters may be complex, and goals as well as output optimizations
may be unclear. The challenges that models face in adjusting themselves to recognize gender in
images more accurately have been described clearly in [16]. [1] focuses on the issues faced in
testing facial recognition in AI systems. Data privacy is another major challenge for AI
methodologies because of predictive analysis; organizations are concerned about the transparency
of their personal data [18]. Overfitting is another challenge that AI models face today [4], due to
the gap between training data and test data selection. [15] defines the issues with AI system
integration testing in terms of transforming, cleaning, extracting, and normalizing data.
5. CONCLUSION
In AI/ML applications, software testing is just as critical as it is in any other form of software
development. Due to the nature of how AI systems work and are built, software testers face many
difficulties with AI applications. These issues range from conventional white-box and black-box
testing methodologies to problems that are very specific to AI systems, such as overfitting and the
proper splitting of data into training and test sets. This paper has enumerated and provided an
overview of ten challenges in the efficient testing of AI/ML applications. For future studies, each
of these challenges is ideal for seeking solutions that alleviate the problem and illuminate the path
to more efficient software testing methods and methodologies that can be applied to AI/ML
applications.
REFERENCES
[1] H.Zhuy et al., “Datamorphic Testing: A Methodology for Testing AI Applications”, Technical
Report OBU-ECM-AFM-2018-02.
[2] Partridge D. “To add AI, or not to add AI? In Proceedings of Expert Systems, the 8th Annual BCS
SGES Technical conference”, pages 3-13, 1988.
[3] Partridge, D., “The relationships of AI to software engineering”, Software Engineering and AI
(Artificial Intelligence), IEE Colloquium on (Digest No. 087), Apr 1992.
[4] S. Salman, X. Liu, “Overfitting Mechanism and Avoidance in Deep Neural Networks”,
arXiv:1901.06566v1 [cs.LG], 19 Jan 2019.
[5] Enrico Coiera, MB BS, PhD, “The Last Mile: Where Artificial Intelligence Meets Reality”, (J Med
Internet Res 2019;21(11):e16323) doi 10.2196/16323.
[6] G. Handelman, et. al., “Peering Into the Black Box of Artificial Intelligence: Evaluation Metrics of
Machine Learning Methods”, doi.org/10.2214/AJR.18.20224, August 2, 2018.
[7] Last, M., Kandel, A., and Bunke, H., 2004, Artificial Intelligence Methods in Software Testing (Series
in Machine Perception & Artificial Intelligence, Vol. 56). World Scientific Publishing Co., Inc.
[8] Xie, X., Xu, B., Nie, C., Shi, L., and Xu, L. 2005. Configuration Strategies for Evolutionary Testing.
In Proceedings of the 29th Annual international Computer Software and Applications Conference -
Volume 02 (July 26 - 28, 2005). COMPSAC. IEEE Computer Society, Washington, DC.
[9] H. Li and C. P. Lam, “Software Test Data Generation using Ant Colony Optimization”, appeared in
Proc. ICCI 2004, 2004.
[10] Manjunatha Kukkuru, “Testing Imperatives for AI Systems”, https://www.infosys.com/insights/ai-
automation/testing-imperative-for-ai-systems.html.
[11] L. Du Bousquet, M. Nakamura, “Improving Testability of Software Systems that Include a Learning
Feature”, Learning, 2018 - pdfs.semanticscholar.org.
[12] R.V. Yampolskiy, “Unexplainability and Incomprehensibility of Artificial Intelligence”, arXiv
preprint arXiv:1907.03869, 2019 - arxiv.org.
[13] A. Kumar, “QA: Blackbox Testing for Machine Learning Models”, 4 Sep 2018,
https://dzone.com/articles/qa-blackbox-testing-for-machine-learning-models.
[14] J. Gao, et. al., “What is AI testing? and why”, 2019 IEEE International Conference on Service-
Oriented System Engineering (SOSE), DOI: 10.1109/SOSE.2019.00015.
[15] Alliance For Qualification, “AI and Software Testing Foundation Syllabus”, 17 September 2019,
https://www.gasq.org/files/content/gasq/downloads/certification/A4Q%20AI%20&%20Software%20
Testing/AI_Software_Testing_Syllabus%20(1.0).pdf.
[16] S. Wojcik, E. Remy, “The challenges of using machine learning to identify gender in images”, 5
September 2019, https://www.pewresearch.org/internet/2019/09/05/the-challenges-of-using-machine-
learning-to-identify-gender-in-images/.
[17] N. Wirkuttis, H. Klein, “Artificial Intelligence in Cybersecurity”, Cyber, Intelligence, and Security |
Volume 1 | No. 1 | January 2017.
[18] Tom Simonite, "Microsoft and Google Want to Let AI Loose on Our Most Private Data", MIT
Technology Review, April 19, 2016, https://www.technologyreview.com/s/601294/microsoft-and-
google-want-to-let-artificial-intelligence-loose-on-our-most-private-data/.
[19] Bernd Stahl, "Ethical and Legal Issues of the Use of Computational Intelligence Techniques in
computer security and Forensics", The 2010 International Joint conference in Neural Networks.
[20] David Fumo, “Types of Machine Learning Algorithms You Should Know”,
https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861.
[21] Benoit Baudry, “From genetic to bacteriological algorithms for mutation-based testing”, Software
Testing, Verification & Reliability, pp 73–96, 2005.
[22] V. Mohan, D. Jeya Mala “IntelligenTester -Test Sequence Optimization framework using Multi-
Agents”, Journal of Computers, June 2008.
[23] Si Huang, Myra Cohen, Atif M. Memon, “Repairing GUI Test Suites Using a Genetic Algorithm”,
in ICST 2010: Proceedings of the 3rd IEEE International Conference on Software Testing,
Verification and Validation, (Washington, DC, USA), 2010.
[24] Memon, A. M., Soffa, M. L. and Pollack, M. E., Coverage criteria for gui testing. ESEC/FSE-9:
Proceedings of the 8th European software engineering conference held jointly with 9th.