Automated Software Testing Cases Generation Framework to Ensure the Efficiency of the Gesture Recognition Systems, by Sheikh Monirul Hasan. This research work proposes a standard, or benchmark, for testing gesture recognition system software. Sheikh Monirul Hasan is the first author of the research paper, and the presentation slides summarize the complete work. It touches on many software engineering concerns and on how to obtain a quality product, especially for a gesture recognition system.
A Novel Framework For Numerical Character Recognition With Zoning Distance Fe... (IJERD Editor)
Advancements in computer technology have led every organization to implement automatic processing systems for its activities. One example is the recognition of handwritten characters, which has always been a challenging task in image processing and pattern recognition. In this paper we propose zone-based features for recognizing handwritten characters. In this zoning approach, a digit image is divided into 8x8 zones and the centre pixel is computed for each zone. This procedure is repeated sequentially for every zone. Finally, features are extracted for classification and recognition.
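The zoning step described can be sketched in plain Python (reading "8x8 zones" as an 8-by-8 grid of cells, and using mean intensity as a simple stand-in for the paper's per-zone value; function and variable names are mine):

```python
def zone_features(img, zones=8):
    """Split a square image into zones x zones cells and return one
    feature per cell (here: the mean intensity, a simple stand-in
    for the paper's per-zone value)."""
    n = len(img)                      # image is an n x n grid of intensities
    step = n // zones                 # pixels per zone along each axis
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            cell = [img[r][c]
                    for r in range(zr * step, (zr + 1) * step)
                    for c in range(zc * step, (zc + 1) * step)]
            feats.append(sum(cell) / len(cell))
    return feats

# A 16x16 "digit" whose left half is ink (1) and right half is blank (0).
img = [[1] * 8 + [0] * 8 for _ in range(16)]
feats = zone_features(img)            # 64 features, one per zone
```

The resulting 64-dimensional vector is what would then be fed to the classifier.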
The document discusses using aspect-oriented programming (AOP) to improve productivity for high performance computing. AOP can reduce program complexity by separating cross-cutting concerns from the core algorithm. This allows developing the serial algorithm and parallel strategies separately. The author proposes measuring productivity using metrics like development time, compilation time, execution time, and concern fences to evaluate if AOP improves productivity for HPC applications.
Avihu Efrat's Viola and Jones face detection slides (wolf)
The document summarizes the Viola-Jones object detection framework. It uses a cascade of classifiers with increasingly more complex features trained with AdaBoost to rapidly detect objects. Integral images allow for very fast feature evaluations. The framework was applied to face detection, achieving very fast average detection speeds of 270 microseconds per sub-window while maintaining low false positive rates.
The Viola-Jones object detection framework uses Haar-like features, integral images, Adaboost, and cascading classifiers to provide competitive object detection rates in real-time. It was proposed in 2001 primarily for face detection. Haar-like features are similar to convolution kernels used to detect patterns in images. An integral image allows feature values to be calculated rapidly. Adaboost selects the best features from over 160,000 candidates. Cascading classifiers discard non-face windows quickly through multiple classifier stages to focus computation on probable faces. The algorithm was demonstrated in Python using OpenCV.
IDA 2015: Efficient model selection for regularized classification by exploit... (George Balikas)
A new method for model selection, proposed as an alternative to standard methods such as k-fold cross-validation and hold-out. It builds on a learning-theory result, an upper bound on classification performance.
In the method, we use unlabeled data and quantification to accelerate the tuning of machine learning model hyper-parameters such as the C value of Support Vector Machines or Logistic Regression. We present classification results with data from Wikipedia and Dmoz with SVMs and Logistic Regression.
The method was presented as a paper at the Intelligent Data Analysis (IDA) 2015 conference.
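For contrast, the k-fold cross-validation baseline that the method is compared against can be sketched in a few lines (function names are mine; this is not the paper's code):

```python
def kfold_splits(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n examples -- the baseline the proposed method aims to speed up."""
    fold = [i * n // k for i in range(k + 1)]   # fold boundaries
    idx = list(range(n))
    for f in range(k):
        test = idx[fold[f]:fold[f + 1]]
        train = idx[:fold[f]] + idx[fold[f + 1]:]
        yield train, test

def cross_val_score(fit, score, X, y, k=5):
    """Average score of a model fitted on each train fold and
    evaluated on the held-out fold."""
    scores = []
    for train, test in kfold_splits(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        scores.append(score(model, [X[i] for i in test],
                            [y[i] for i in test]))
    return sum(scores) / len(scores)
```

Tuning a hyper-parameter such as SVM's C this way requires one full k-fold pass per candidate value, which is the cost the unlabeled-data approach tries to avoid.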
This document summarizes research on deep learning approaches for face recognition. It describes the DeepFace model from Facebook, which used a deep convolutional network trained on 4.4 million faces to achieve state-of-the-art accuracy on the Labeled Faces in the Wild (LFW) dataset. It also summarizes the DeepID2 and DeepID3 models from Chinese University of Hong Kong, which employed joint identification-verification training of convolutional networks and achieved performance comparable or superior to DeepFace on LFW. Evaluation metrics for face verification and identification tasks are also outlined.
1. The document summarizes the robust real-time face detection method proposed by Viola and Jones in 2002, which uses integral images for fast feature computation, AdaBoost for feature selection, and a cascade structure for real-time processing.
2. It describes how integral images allow computing rectangular features in constant time, and how AdaBoost selects the most discriminative features by iteratively assigning higher weights to misclassified examples.
3. Finally, it explains that the cascade structure filters out most negative sub-windows using simple classifiers at the top, focusing computation only on the few potentially positive windows.
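The integral-image trick described above can be sketched as follows (a minimal sketch; function names are mine):

```python
def integral_image(img):
    """ii[r][c] = sum of img over the rectangle of rows [0, r) and
    cols [0, c), built in one pass over the image."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img over rows [top, bottom) and cols [left, right),
    in constant time -- four array lookups, as in Viola-Jones."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])
```

A Haar-like feature is then just the difference of a few such rectangle sums, which is why feature evaluation is so fast.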
This document describes a proposed system for identifying individual faces among a crowd using video footage. The system utilizes a training process involving face detection, feature extraction using HOG, and classification with SVM. Testing involves extracting video frames, detecting faces using LBP features, extracting HOG features, and classifying faces using SVM. The goal is to identify specific suspects from videos recorded with a CCTV camera mounted 2.5 meters high at a 60 degree angle, achieving the highest accuracy and lowest processing time from frame sampling and threshold testing.
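The HOG feature extraction mentioned in this pipeline can be illustrated by its core building block, a per-cell orientation histogram (a simplified sketch: no block normalization or bin interpolation, and all names are mine):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Histogram of gradient orientations for one cell of intensities,
    the building block of HOG features."""
    hist = [0.0] * bins
    for r in range(1, len(cell) - 1):
        for c in range(1, len(cell[0]) - 1):
            gx = cell[r][c + 1] - cell[r][c - 1]   # central differences
            gy = cell[r + 1][c] - cell[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180   # unsigned
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    return hist
```

Concatenating such histograms over all cells of a detected face yields the vector the SVM classifies.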
- The document discusses the speaker's 25 years of experience applying AI techniques to software engineering projects. It covers early work in the 1990s on fault prediction and the challenges of applying machine learning at that time. It then discusses subsequent work in areas like search-based software engineering, natural language processing for requirements engineering, and using simulation and search techniques for testing autonomous vehicle systems. The speaker reflects on both the benefits and challenges of these different AI applications in software engineering.
Enabling Automated Software Testing with Artificial Intelligence (Lionel Briand)
1. The document discusses using artificial intelligence techniques like machine learning and natural language processing to help automate software testing. It focuses on applying these techniques to testing advanced driver assistance systems.
2. A key challenge in software testing is scalability as the input spaces and code bases grow large and complex. Effective automation is needed to address this challenge. The document describes several industrial research projects applying AI to help automate testing of advanced driver assistance systems.
3. One project aims to develop an automated testing technique for emergency braking systems in cars using a physics-based simulation. The goal is to efficiently explore complex test scenarios and identify critical situations like failures to avoid collisions.
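The kind of scenario search such a project performs can be illustrated with a toy braking model (all parameter values are illustrative assumptions, not taken from the document):

```python
def collision_unavoidable(speed_mps, gap_m, decel_mps2=8.0, reaction_s=1.0):
    """Crude physics behind such test scenarios: the car covers
    speed * reaction_s before braking starts, then speed^2 / (2 * decel)
    while braking; a collision is unavoidable if that exceeds the gap.
    (Deceleration and reaction time are illustrative assumptions.)"""
    stopping = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    return stopping > gap_m

# Sweep speeds against a fixed 60 m gap to find the critical region,
# the way a search-based tester might explore the scenario space.
critical = [v for v in range(5, 41, 5) if collision_unavoidable(v, 60.0)]
```

Real projects replace this closed-form model with a physics-based simulator and use search techniques to find the boundary between safe and critical scenarios.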
This document provides an overview of simulation and modeling. It discusses key concepts such as systems, states, activities, and classification of systems. It also covers the system methodology process including planning, modeling, validation, and application. Examples are provided on simulating a coin toss and daily demand for a grocery store. Advantages and disadvantages of simulation are listed. The document appears to be from a textbook on simulation and modeling and provides foundational information on the topic.
IRJET - Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes using Hausdorff distance for object detection as it can better handle noise compared to other methods like Euclidean distance. The document discusses preprocessing images using Gaussian filtering for noise cancellation. It then represents shapes as point sets for feature extraction before using Hausdorff distance to match shapes between reference and test images for object recognition. Encouraging results were obtained when testing on MNIST, COIL and private handwritten digit datasets.
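The Hausdorff distance at the heart of this proposal is short to state in code (a direct pure-Python sketch; the paper's implementation details are not shown here):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: the worst-case
    distance from a point in one set to its nearest neighbour in the other."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))
```

Matching a test shape against each reference shape and taking the smallest distance gives the recognition decision.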
Machine learning techniques can be applied in formal verification in several ways:
1) To enhance current formal verification tools by automating tasks like debugging, specification mining, and theorem proving.
2) To enable the development of new formal verification tools by applying machine learning to problems like SAT solving, model checking, and property checking.
3) Specific applications include using machine learning for debugging and root cause identification, learning specifications from runtime traces, aiding theorem proving by selecting heuristics, and tuning SAT solver parameters and selection.
An Empirical Study on the Adequacy of Testing in Open Source Projects (Pavneet Singh Kochhar)
In this study, we investigate the state-of-the-practice of testing by measuring code coverage in open-source software projects. We examine over 300 large open-source projects written in Java to measure the code coverage of their associated test cases.
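The statement-coverage measurement underlying such a study can be illustrated with a toy line tracer (a sketch of the general technique, not the study's tooling, which targets Java):

```python
import sys

def trace_lines(func, *args):
    """Record which line numbers of *func* execute during one call --
    the core trick behind statement-coverage tools. Line numbers are
    reported relative to the function's first line."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def target(x):
    if x > 0:
        return "pos"
    return "neg"

# Each input exercises one branch; their union is the covered set.
covered = trace_lines(target, 1) | trace_lines(target, -1)
```

Coverage adequacy is then the ratio of covered statements to all statements, aggregated over a project's test suite.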
IRJET- Object Detection using Hausdorff Distance (IRJET Journal)
This document proposes a new object recognition system using Hausdorff distance. The system aims to improve on existing methods like YOLO that struggle with small objects and can capture garbage data. The document outlines preprocessing steps like noise cancellation, representing shapes as point sets, and extracting features. It then describes using Hausdorff distance and shape context to find the best match between input and reference shapes. Testing on datasets showed encouraging results for recognizing handwritten digits.
Implementation of Automated Attendance System using Deep Learning (Md. Mahfujur Rahman)
This paper proposes a real-time automated attendance system using deep learning. The system uses an improved deep convolutional neural network model called VIPLFaceNet for face detection and recognition. A dataset was collected containing images of students and colleagues and preprocessed for the model. The model architecture involves flattening, normalizing, and grayscaling images before classifying them. The system achieved 95% accuracy on a test set of 5 live and 2 spoofed faces, outperforming other models based on evaluation metrics. The paper concludes the proposed approach improves real-time attendance systems through identity authentication.
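The preprocessing steps listed (grayscaling, normalizing, flattening) can be sketched as follows (the grayscale weights and the step ordering are conventional choices, not taken from the paper):

```python
def preprocess(rgb_img):
    """Grayscale, normalize to [0, 1], and flatten an image, mirroring
    the preprocessing steps listed in the summary above."""
    # Conventional luminance weights; the paper may use different ones.
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_img]
    flat = [p / 255.0 for row in gray for p in row]
    return flat

img = [[(255, 255, 255), (0, 0, 0)]]      # one white and one black pixel
vec = preprocess(img)
```

The flattened, normalized vector is what the network then classifies.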
Improve Captcha's Security Using Gaussian Blur Filter (sipij)
Providing security for web servers against unwanted and automated registrations has become a big concern. To prevent these kinds of false registrations, many websites use CAPTCHAs. Among all kinds of CAPTCHAs, OCR-based or visual CAPTCHAs are very common. A visual CAPTCHA is an image containing a sequence of characters. So far, most visual CAPTCHAs resist OCR programs through common implementations such as warping the characters and random placement and rotation of characters. In this paper we applied a Gaussian blur filter, which is an image transformation, to visual CAPTCHAs to reduce their readability by OCR programs. We conclude that this technique makes CAPTCHAs almost unreadable for OCR programs, while their readability by human users remains high.
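The Gaussian blur transformation the paper applies boils down to convolving the image with a normalized Gaussian kernel; a minimal 1-D sketch (names mine):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1. Applying it
    along rows and then columns gives the 2-D blur."""
    if radius is None:
        radius = max(1, int(3 * sigma))      # common 3-sigma cutoff
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """Convolve one row of pixels with the kernel (edges clamped)."""
    r = len(kernel) // 2
    return [sum(kernel[j + r] * row[min(max(i + j, 0), len(row) - 1)]
                for j in range(-r, r + 1))
            for i in range(len(row))]
```

Blurring smears sharp character edges, which is exactly what degrades the edge-based segmentation OCR programs rely on while leaving the overall shapes legible to humans.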
Off-line English Character Recognition: A Comparative Survey (idescitation)
It has been decades since the idea evolved that the human brain can be mimicked by artificial-neuron-like mathematical structures. To date, this endeavor has not reached the threshold of excellence. Neural networks are commonly used to solve pattern-recognition problems; one of these is character recognition, and its solution is among the easier applications of neural networks. This paper presents a detailed comparative literature survey of the research accomplished over the last few decades. The comparative review helps us understand the platform on which we stand today for achieving the highest efficiency in terms of character recognition accuracy as well as computational resource and cost.
Visual diagnostics for more effective machine learning (Benjamin Bengfort)
The model selection process is a search for the best combination of features, algorithm, and hyperparameters that maximize F1, R2, or silhouette scores after cross-validation. This view of machine learning often leads us toward automated processes such as grid searches and random walks. Although this approach allows us to try many combinations, we are often left wondering if we have actually succeeded.
By enhancing model selection with visual diagnostics, data scientists can inject human guidance to steer the search process. Visualizing feature transformations, algorithmic behavior, cross-validation methods, and model performance gives us a peek into the high-dimensional realm in which our models operate. As we continue to tune our models, trying to minimize both bias and variance, these glimpses allow us to be more strategic in our choices. The result is more effective modeling, speedier results, and greater understanding of underlying processes.
Visualization is an integral part of the data science workflow, but visual diagnostics are directly tied to machine learning transformers and models. The Yellowbrick library extends the scikit-learn API providing a Visualizer object, an estimator that learns from data and produces a visualization as a result. In this talk, we will explore feature visualizers, visualizers for classification, clustering, and regression, as well as model analysis visualizers. We'll work through several examples and show how visual diagnostics steer model selection, making machine learning more effective.
AN INTEGRATED APPROACH TO CONTENT BASED IMAGE RETRIEVAL by Madhu (Madhu Rock)
This document summarizes an integrated approach to content-based image retrieval. It discusses extracting both color and texture features from images using color moments and local binary patterns. The system is tested on a database of 1000 images across 10 classes. Results show the integrated approach of using both color and texture features provides more accurate retrievals than using either feature alone. Evaluation metrics like precision, recall and accuracy are calculated to quantitatively analyze the system's performance. Overall, the proposed multi-feature approach is found to improve content-based image retrieval compared to single-feature methods.
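The color-moment features mentioned here are the first three statistical moments of each channel; a pure-Python sketch (taking skewness as the signed cube root of the third central moment, a common convention in CBIR work):

```python
import math

def color_moments(channel):
    """First three color moments of one channel of pixel values:
    mean, standard deviation, and skewness."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((p - mean) ** 2 for p in channel) / n
    std = math.sqrt(var)
    third = sum((p - mean) ** 3 for p in channel) / n
    skew = math.copysign(abs(third) ** (1 / 3), third)
    return mean, std, skew
```

Computing these three values per color channel gives a compact nine-dimensional color descriptor to combine with the texture (LBP) features.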
Long-term Face Tracking in the Wild using Deep Learning (Elaheh Rashedi)
This paper investigates long-term face tracking of a specific person, given his or her face image in a single frame as a query in a video stream. By taking advantage of deep learning models pre-trained on big data, a novel system is developed for accurate video face tracking in unconstrained environments depicting various people and objects moving in and out of the frame. In the proposed system, we present a detection-verification-tracking method (dubbed 'DVT') which accomplishes the long-term face tracking task through the collaboration of face detection, face verification, and (short-term) face tracking. An offline-trained detector based on cascaded convolutional neural networks localizes all faces appearing in the frames, and an offline-trained face verifier based on deep convolutional neural networks and similarity metric learning decides whether any face, and which face, corresponds to the queried person. An online-trained tracker follows the face from frame to frame. When validated on a sitcom episode and a TV show, the DVT method outperforms tracking-learning-detection (TLD) and face-TLD in terms of recall and precision. The proposed system was also tested on many other types of videos and shows very promising results.
Towards a Macrobenchmark Framework for Performance Analysis of Java Applications (Gábor Szárnyas)
This document discusses the need for macrobenchmarks to evaluate the performance and scalability of large model querying systems. It presents the Train Benchmark, which measures the performance of validation queries on randomly generated railway network models of increasing sizes. The benchmark includes loading models, running validation queries to detect errors, transforming models by injecting faults, and revalidating. It aims to provide a realistic and scalable way to assess model querying tools for domains like software engineering, where models can contain billions of elements.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
This document summarizes Martin Pinzger's research on predicting buggy methods using software repository mining. The key points are:
1. Pinzger and colleagues conducted experiments on 21 Java projects to predict buggy methods using source code and change metrics. Change metrics like authors and method histories performed best with up to 96% accuracy.
2. Predicting buggy methods at a finer granularity than files can save manual inspection and testing effort. Accuracy decreases as fewer methods are predicted but change metrics maintain higher precision.
3. Case studies on two classes show that method-level prediction achieves over 82% precision, compared to only 17-42% at the file level. This demonstrates the benefit of finer-grained prediction.
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ... (Hadi Santoso)
Face detection is a very important biometric application in the field of image analysis and computer vision. The basic face detection method is the AdaBoost algorithm with cascading Haar-like feature classifiers, based on the framework proposed by Viola and Jones. Real-time multiple-face detection, for instance on CCTVs with high resolution, is a computation-intensive procedure. If the procedure is performed sequentially, optimal real-time performance will not be achieved. In this paper we propose an architectural design for a parallel, multiple-face detection technique based on Viola and Jones' framework. To do this systematically, we look at the problem from four points of view, namely: data processing taxonomy, parallel memory architecture, the model of parallel programming, and the design of the parallel program. We also build a prototype of the proposed parallel technique and conduct a series of experiments to investigate the gained acceleration.
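The data-parallel decomposition studied here, splitting the detector's sub-window scan across workers, can be sketched with a thread pool (a toy sketch, not the paper's architecture; window size, stride, and the dummy classifier are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def sub_windows(width, height, win=24, stride=12):
    """All (x, y) sub-window origins scanned by a Viola-Jones style
    detector over a frame (window size and stride are illustrative)."""
    return [(x, y)
            for y in range(0, height - win + 1, stride)
            for x in range(0, width - win + 1, stride)]

def detect(classify, width, height, workers=4):
    """Classify every sub-window in parallel: the independent windows
    make this an embarrassingly data-parallel workload."""
    wins = sub_windows(width, height)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = list(pool.map(classify, wins))
    return [w for w, hit in zip(wins, flags) if hit]

# Dummy "classifier": flag windows in the top-left quadrant of a 96x96 frame.
hits = detect(lambda w: w[0] < 48 and w[1] < 48, 96, 96)
```

Because each sub-window is classified independently, the parallel result matches the sequential scan exactly; the design questions are about memory layout and work distribution, not correctness.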
This is the second progress report of the project "Classification and Detection of liquid samples in hospitals and chemical labs using computer vision". The group has created a dataset of over 2000 annotated images across 12 classes. They trained 3 object detection models - YOLOv4_pro, YOLOv5n, YOLOv5s on this dataset and compared their validation mAP scores. The group has also tested these models on different size images and videos. Their next steps are to write a research paper, develop a real-time application, and test model speed on videos.
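The mAP scores being compared rest on intersection-over-union matching between predicted and ground-truth boxes; a minimal sketch (box format assumed as (x1, y1, x2, y2)):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the overlap measure a detection must clear to count as a true positive
    when computing mAP."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

mAP then averages precision over recall levels and classes, with each prediction counted correct only if its IoU with a ground-truth box exceeds a threshold (commonly 0.5).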
Towards a Macrobenchmark Framework for Performance Analysis of Java ApplicationsGábor Szárnyas
This document discusses the need for macrobenchmarks to evaluate the performance and scalability of large model querying systems. It presents the Train Benchmark, which measures the performance of validation queries on randomly generated railway network models of increasing sizes. The benchmark includes loading models, running validation queries to detect errors, transforming models by injecting faults, and revalidating. It aims to provide a realistic and scalable way to assess model querying tools for domains like software engineering, where models can contain billions of elements.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
This document summarizes Martin Pinzger's research on predicting buggy methods using software repository mining. The key points are:
1. Pinzger and colleagues conducted experiments on 21 Java projects to predict buggy methods using source code and change metrics. Change metrics like authors and method histories performed best with up to 96% accuracy.
2. Predicting buggy methods at a finer granularity than files can save manual inspection and testing effort. Accuracy decreases as fewer methods are predicted but change metrics maintain higher precision.
3. Case studies on two classes show that method-level prediction achieves over 82% precision compared to only 17-42% at the file level. This demonstrates the benefit of finer-
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ...Hadi Santoso
Face detection is a very important biometric application in the field of image
analysis and computer vision. The basic face detection method is AdaBoost
algorithm with a cascading Haar-like feature classifiers based on the
framework proposed by Viola and Jones. Real-time multiple-face detection,
for instance on CCTVs with high resolution, is a computation-intensive
procedure. If the procedure is performed sequentially, an optimal real-time
performance will not be achieved. In this paper we propose an architectural
design for a parallel and multiple-face detection technique based on Viola
and Jones' framework. To do this systematically, we look at the problem
from 4 points of view, namely: data processing taxonomy, parallel memory
architecture, the model of parallel programming, as well as the design of
parallel program. We also build a prototype of the proposed parallel
technique and conduct a series of experiments to investigate the gained
acceleration.
This is the second progress report of the project "Classification and Detection of liquid samples in hospitals and chemical labs using computer vision". The group has created a dataset of over 2000 annotated images across 12 classes. They trained 3 object detection models - YOLOv4_pro, YOLOv5n, YOLOv5s on this dataset and compared their validation mAP scores. The group has also tested these models on different size images and videos. Their next steps are to write a research paper, develop a real-time application, and test model speed on videos.
1. 22nd International Conference on Computer and Information Technology (ICCIT)
Automated Software Testing Cases Generation Framework to Ensure the Efficiency of the Gesture Recognition Systems
Presented by: Sheikh Monirul Hasan
Authors:
1. Sheikh Monirul Hasan
2. Md. Saiful Islam
3. Md. Ashaduzzaman
4. Dr. Muhammad Aminur Rahaman
Dept. of Computer Science and Engineering
Green University of Bangladesh
2. Contents
• Introduction
• Objectives
• Problem Domain
• Related Research
• Motivation
• Proposed Methodology
• Experimental Result Analysis
• Research Contribution
• Conclusion
3. Introduction
What is software testing?
• Software testing is an activity to detect bugs and release a quality product that meets the customer's specification.
Software testing standards
• ISO/IEC 29119-2:2013 – Testing Processes
• ISO/IEC 29119-4:2015 – Testing Techniques
4. Objective
• To generate test cases for gesture recognition systems automatically
• To create an automated testing framework
• To improve software quality by detecting the defects of existing systems
• To minimize training and testing time
• To minimize training and testing cost
• To build a standard system for testing any gesture recognition system
5. Problem Domain
• The existing systems prepare input images for training and testing manually, which
  consumes more time
  increases the cost of system testing and training
• Gesture recognition systems have no recognized testing standard
• Limited image samples are used in training
• Very few test cases are used in testing
6. Related Research
Title: Real-Time Computer Vision-Based Bengali Sign Language Recognition
Reference: [1] Muhammad Aminur Rahaman, Mahmood Jasim, Md. Haider Ali and Md. Hasanuzzaman, "Real-Time Computer Vision-Based Bengali Sign Language Recognition", Department of Computer Science & Engineering, University of Dhaka, 17th Int'l Conf. on Computer and Information Technology, 22-23 December 2014, Daffodil International University, Dhaka, Bangladesh.
7. Related Research
Result:
• Average accuracy is 96.17%
• Computational cost is 93.55 milliseconds/frame
Limitations:
• The system cannot properly segment the hand area if objects other than the hand have skin-like colors.
• The system cannot properly distinguish some signs, such as "র" with "ল" and "ফ" with "ঝ", because of the similarity of their binary images.
• The performer faces difficulties performing a few alphabets, such as "o, খ, জ, ঠ, র", due to the camera position.
8. Related Research
Title: Real-Time Bengali and Chinese Numeral Signs Recognition Using Contour Matching
Reference: [2] M. A. Rahaman, M. Jasim, T. Zhang, M. H. Ali, and M. Hasanuzzaman, "Real-time bengali and chinese numeral signs recognition using contour matching," in 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2015, pp. 1215–1220.
Methodology:
10. Related Research
Title: A Real-Time Hand-Signs Segmentation and Classification System Using Fuzzy Rule Based RGB Model and Grid-Pattern Analysis
Reference: [3] M. Rahaman, M. Jasim, M. Ali, T. Zhang, and M. Hasanuzzaman, "A real-time hand-signs segmentation and classification system using fuzzy rule based rgb model and grid-pattern analysis," Frontiers of Computer Science, vol. 12, 11 2018.
11. Related Research
Result:
• The system achieves a mean accuracy of 98.40% for E1, 97.1% for E2, 96.60% for E3, 96.20% for E4, 95.80% for E5, 95.30% for E6, and 96.57% on average.
Limitation:
• The system may fail to segment the hand-signs if any skin-colored objects with motion similar to the hand-signs are present in the ROI.
12. Related Research
Title: Bangla Language Modeling Algorithm for Automatic Recognition of Hand-Sign-Spelled Bangla Sign Language
Reference: [4] M. Rahaman, M. Jasim, M. Ali, and M. Hasanuzzaman, "Bangla language modeling algorithm for automatic recognition of hand-sign-spelled bangla sign language," Front. Comput. Sci., p. 0, 2018.
13. Related Research
Result:
• The system achieves a mean accuracy of 93.50% for words, 95.50% for composite numerals, and 90.50% for sentence recognition in BdSL.
Limitation:
• The system sometimes fails to distinguish two similar binary signs, such as 'র' and 'ল', even though their color images are distinguishable.
14. Related Research
Title: Hand Sign to Bangla Speech: A Deep Learning in Vision Based System for Recognizing Hand Sign Digits and Generating Bangla Speech
Reference: [5] S. Ahmed, M. Islam, J. Hassan, M. U. Ahmed, B. J. Ferdosi, S. Saha, M. Shopon et al., "Hand sign to bangla speech: A deep learning in vision based system for recognizing hand sign digits and generating bangla speech," arXiv preprint arXiv:1901.05613, 2019.
Methodology:
15. Related Research
Result:
• Notably, without image augmentation the validation loss was reduced from 0.37 to 0.31,
• while the accuracy increased from 89% to 92%.
Limitation:
• Sign-language-to-voice output can remove a barrier to communication for speech-impaired people.
16. Related Research
Title: Elaborating Software Test Processes and Strategies
Methodology:
• The study builds a reference model for practical application of the test strategy defined in ISO/IEC 29119.
• Focuses on how a system composes its testing strategy:
• human resources, test tools, test case selection, testing methods, and the role of management in the test process, to name a few of the major components.
Limitation:
• Other software testing standards are still needed.
Reference: [6] Jussi Kasurinen, "Elaborating Software Test Processes and Strategies", 2010 Third International Conference on Software Testing, Verification and Validation, November 2010.
17. Motivation
• To create an automated testing framework
• To minimize the time spent generating training and testing samples
• To minimize the cost of training and testing
• To create a large number of test cases automatically from a limited number of sample images
• To build a standard for testing any gesture recognition system
18. Common Testing Parameters for Proposed System
TABLE I: Common Parameters of Testing Considered in Each Existing System to Generate Test Cases (c = considered, × = not considered).

System                                   Rotation  Contrast  Scale  Backgrounds  Noise
System-1 (Rahaman et al. [1])               ×         c        ×         ×         c
System-2 (Rahaman et al. [2])               c         c        c         ×         ×
System-3 (Rahaman et al. [3])               c         c        c         c         c
System-4 (Rahaman et al. [4])               c         c        c         c         c
System-5 (Shahjalal Ahmed et al. [5])       c         c        c         ×         ×
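The coverage in Table I can be encoded directly. The following is a small illustrative sketch (not part of the paper) that flags which systems already consider all five parameters; the boolean tuples mirror the table's rows:

```python
PARAMS = ("rotation", "contrast", "scale", "background", "noise")

# True = the system considers the parameter (a "c" in Table I), False = "×".
coverage = {
    "System-1": (False, True, False, False, True),
    "System-2": (True, True, True, False, False),
    "System-3": (True, True, True, True, True),
    "System-4": (True, True, True, True, True),
    "System-5": (True, True, True, False, False),
}

# Systems whose test generation already covers every parameter.
full = [name for name, flags in coverage.items() if all(flags)]
print(full)  # ['System-3', 'System-4']
```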
19. Why We Chose These Parameters
• We analyzed several existing systems
• Every system uses nearly the same features for image processing, training, and testing
• We selected the features common to all of these systems
• Our proposed model uses these five parameters to test the existing systems
20. Proposed Methodology
Fig. 6. The architecture of the proposed automated testing framework for gesture recognition systems. [Figure: an input of n sample images feeds five test case generators (Rotation, Contrast, Scale, Background, Noise); their outputs R, C, S, B, N are combined into T = R + C + S + B + N test cases for the testing phase, which reports the number of defects.]
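The flow in the architecture above can be sketched in code. This is a minimal illustration, not the authors' implementation: each generator is a stand-in placeholder for the real transform, and all function names are assumptions for the sketch:

```python
import numpy as np

def rotated(img, n):
    # Stand-in for rotations in the range -45°..+45° (here: 90° multiples only).
    return [np.rot90(img, k % 4) for k in range(n)]

def contrasted(img, n):
    # Stand-in for contrast factors in the range -100..+100.
    return [np.clip(img * (0.5 + k / n), 0, 255) for k in range(n)]

def scaled(img, n):
    # Stand-in for x/y scaling (here: crude subsampling).
    return [img[::1 + k % 2, ::1 + k % 2] for k in range(n)]

def backgrounded(img, n):
    # Stand-in for compositing the gesture onto n different backgrounds.
    return [np.where(img > 0, img, float(k)) for k in range(n)]

def noised(img, n):
    # Stand-in for n different noise filters (here: additive Gaussian noise).
    rng = np.random.default_rng(0)
    return [np.clip(img + rng.normal(0, 5, img.shape), 0, 255) for _ in range(n)]

def generate_test_cases(images, per_transform=100):
    """Expand each input image into T = R + C + S + B + N test cases."""
    cases = []
    for img in images:
        for gen in (rotated, contrasted, scaled, backgrounded, noised):
            cases.extend(gen(img, per_transform))
    return cases

sample = np.full((8, 8), 128.0)          # one dummy 8x8 gesture image
cases = generate_test_cases([sample])
print(len(cases))  # 5 * 100 = 500 test cases from a single sample
```

This mirrors the paper's claim that a limited set of samples can be multiplied into a much larger test suite automatically.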
21. Different Test Cases
TABLE II: Comparative Analysis of Generated Test Cases Between Our Proposed System and Other Existing Systems.

Test samples created by our proposed framework (per letter: 100 each for Rotation, Contrast, Scale, Background, and Noise, so q = 500):

Method                                   Total no. of samples (p)   Total per letter (q)   Total test images (p*q)
System-1 (Rahaman et al. [1])                      36                      500                    18000
System-2 (Rahaman et al. [2])                      10                      500                     5000
System-3 (Rahaman et al. [3])                      46                      500                    23000
System-4 (Rahaman et al. [4])                      52                      500                    26000
System-5 (Shahjalal Ahmed et al. [5])              10                      500                     5000

Existing methods:

Method                                   Per-letter sample images (j)   Sample data (k)   Total test images (j*k)
System-1 (Rahaman et al. [1])                        100                      36                  3600
System-2 (Rahaman et al. [2])                        100                      10                  1000
System-3 (Rahaman et al. [3])                        100                      46                  4600
System-4 (Rahaman et al. [4])                        100                      52                  5200
System-5 (Shahjalal Ahmed et al. [5])                100                      10                  1000
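The totals in Table II are simple products, so the coverage gain is easy to check. A quick sketch using the table's own figures (the tuple layout is ours, not the paper's):

```python
# Per system: (letters p, generated per letter q, existing per-letter j, existing letters k)
table2 = {
    "System-1": (36, 500, 100, 36),
    "System-2": (10, 500, 100, 10),
    "System-3": (46, 500, 100, 46),
    "System-4": (52, 500, 100, 52),
    "System-5": (10, 500, 100, 10),
}

for name, (p, q, j, k) in table2.items():
    proposed, existing = p * q, j * k
    print(f"{name}: proposed {proposed} vs existing {existing} ({proposed // existing}x)")
```

Because q = 500 against j = 100 samples per letter, the proposed framework yields five times as many test images for every system in the table.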
22. Algorithm
Data: a set of images I, an existing testing model µ
Result: accuracy Tac for the set of images in I tested on model µ
Procedure:
R, B, S, C, N ← ∅
for each image i in I do
    R ← R ∪ GetRotatedSamples(i)
    B ← B ∪ GetDifferentBackgroundSamples(i)
    S ← S ∪ GetScalingSamples(i)
    C ← C ∪ GetContrastSamples(i)
    N ← N ∪ GetNoisedSamples(i)
end
Algorithm 6: Algorithm to Implement the Proposed Testing Framework.
23. Algorithm Cont…
for each sample in R do
    Calculate accuracy Rac using Model µ
end
for each sample in B do
    Calculate accuracy Bac using Model µ
end
for each sample in S do
    Calculate accuracy Sac using Model µ
end
for each sample in C do
    Calculate accuracy Cac using Model µ
end
for each sample in N do
    Calculate accuracy Nac using Model µ
end
Tac = (Rac + Bac + Sac + Cac + Nac) / 5
Return Tac
24. Test Case-1: Different Rotation
Example: Using rotation we can get different types of images.
(a) Rotation 0° (b) Rotation -10° (c) Rotation -20° (d) Rotation -30° (e) Rotation -45°
(f) Rotation 0° (g) Rotation +10° (h) Rotation +20° (i) Rotation +30° (j) Rotation +45°
Fig. 4. Example of Test Images Generated by the Proposed System for Different Rotation.
25. Test Case-2: Different Contrast
Example: Using contrast we can get different types of images.
(a) Contrast 0 (b) Contrast -20 (c) Contrast -40 (d) Contrast -60 (e) Contrast -80 (f) Contrast -100
(g) Contrast 0 (h) Contrast +20 (i) Contrast +40 (j) Contrast +60 (k) Contrast +80 (l) Contrast +100
Fig. 6. Example of Testing Images Generated by the Proposed System with Different Contrast Factors.
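One common way to realize contrast factors like those shown above is a linear adjustment about the mid-grey point. This is a sketch under that assumption; the paper does not specify its exact contrast formula:

```python
import numpy as np

def adjust_contrast(img, factor):
    """Linear contrast change: factor in [-100, +100], 0 = unchanged.
    Pixels are pulled toward (factor < 0) or pushed away from (factor > 0)
    the mid-grey value 128, then clipped to the valid 8-bit range."""
    scale = (100 + factor) / 100.0
    return np.clip((img - 128.0) * scale + 128.0, 0, 255)

img = np.array([[0.0, 64.0], [192.0, 255.0]])
variants = [adjust_contrast(img, f) for f in range(-100, 101, 20)]
print(len(variants))  # 11 variants: contrast -100..+100 in steps of 20
print(adjust_contrast(img, -100).tolist())  # factor -100 collapses everything to mid-grey 128
```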
26. Test Case-3: Different Scale
Example: Using scaling we can get different types of images.
(a) Scaling 0 (b) Scaling x -5 (c) Scaling x -25 (d) Scaling xy -5 (e) Scaling xy -25 (f) Scaling y -5 (g) Scaling y -25
(h) Scaling 0 (i) Scaling x +5 (j) Scaling x +25 (k) Scaling xy +5 (l) Scaling xy +25 (m) Scaling y +5 (n) Scaling y +25
Fig. 5. Example Testing Images Generated by the Proposed System with Different Scaling Factors.
27. Test Case-4: Different Background
Example: Using backgrounds we can get different types of images.
(a) Background 1 (b) Background 2 (c) Background 3 (d) Background 4 (e) Background 5
(f) Background 6 (g) Background 7 (h) Background 8 (i) Background 9 (j) Background 10
Fig. 7. Example of Testing Images Generated by the Proposed System with Different Backgrounds.
28. Test Case-5: Different Noise
Example: From one image we can get many images.
(a) Noise-1 (b) Noise-2 (c) Noise-3 (d) Noise-4 (e) Noise-5
(f) Noise-6 (g) Noise-7 (h) Noise-8 (i) Noise-9 (j) Noise-10
Fig. 8. Example Testing Images Generated by the Proposed System with Different Noise Filters.
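Noise variants like Noise-1 through Noise-10 could be produced, for example, with additive Gaussian noise at increasing strengths. This is a sketch; the slides only say "different noise filters", so the choice of Gaussian noise here is an assumption:

```python
import numpy as np

def noisy_variants(img, n=10, seed=0):
    """Generate n noisy copies with increasing additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    out = []
    for level in range(1, n + 1):
        noise = rng.normal(0.0, 2.0 * level, img.shape)  # sigma grows with the variant index
        out.append(np.clip(img + noise, 0, 255))         # keep pixels in the 8-bit range
    return out

img = np.full((4, 4), 128.0)   # dummy mid-grey gesture image
variants = noisy_variants(img)
print(len(variants))  # Noise-1 .. Noise-10
```

Salt-and-pepper or speckle noise would be drop-in alternatives for individual variants.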
29. Experimental Result Analysis
TABLE III: Comparative Analysis of Experimental Results for Each System Using Test Cases Generated by Our Proposed Testing Framework and Existing Methods.

                                         Recognition accuracy using proposed test cases (%)             Existing method
Method                                   Rotation  Contrast  Scale  Backgrounds  Noise    Mean          mean accuracy (%)
System-1 (Rahaman et al. [1])             60.45     90.58    65.05     80.56     90.30   77.388              96.46
System-2 (Rahaman et al. [2])             96.45     90.66    94.85     30.24     50.80   67.038              95.85
System-3 (Rahaman et al. [3])             92.50     94.90    95.23     91.73     90.89   93.05               95.67
System-4 (Rahaman et al. [4])             92.10     93.34    96.13     92.56     92.60   93.346              95.83
System-5 (Shahjalal Ahmed et al. [5])     80.90     85.30    90.45     75.60     65.80   79.61               92.00
30. Research Contribution
• We proposed an automated software testing case generation framework for gesture recognition systems
• Generating different kinds of test cases
• Creating large numbers of training and testing images
• Identifying the total number of bugs in the existing systems
• Reporting the overall accuracy
• Helping to reduce processing time and computational cost
• Minimizing the cost of training and testing
31. Conclusion
• We have proposed and implemented an automated software testing framework, especially for gesture recognition systems.
• Through this framework, we can easily test any gesture recognition system.
• The testing tool works with five parameters: the rotation, contrast, scaling, background, and noise algorithms.
32. Limitations of the Work
• The framework is only used for gesture recognition system testing
• Many software testing standards exist, but the proposed framework uses only two common standards:
  ISO/IEC 29119-2:2013 – test processes
  ISO/IEC 29119-4:2015 – test techniques
• Sometimes the proposed framework evaluates the performance of existing gesture recognition tools incorrectly
33. Future Works
• The number of generated test cases can be increased
• Other kinds of testing processes, such as unit and integration testing, will be added
• Opportunity to test other kinds of image processing systems
• Testing of AI application systems
• Other software testing standards will also be used
34. Application
This system can be used:
• To generate automatic test cases to test any gesture recognition system
• As a standard or benchmark for testing gesture recognition tools
• To identify defects in any gesture recognition system
35. References
[1] M. A. Rahaman, M. Jasim, M. H. Ali, and M. Hasanuzzaman, “Realtime computer vision-based bengali sign language recognition,” in 2014 17th
International Conference on Computer and Information Technology (ICCIT). IEEE, 2014, pp. 192–197.
[2] M. A. Rahaman, M. Jasim, T. Zhang, M. H. Ali, and M. Hasanuzzaman, “Real-time bengali and chinese numeral signs recognition using contour
matching,” in 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2015, pp. 1215–1220.
[3] M. Rahaman, M. Jasim, M. Ali, T. Zhang, and M. Hasanuzzaman, “A real-time hand-signs segmentation and classification system using fuzzy
rule based rgb model and grid-pattern analysis,” Frontiers of Computer Science, vol. 12, 11 2018.
[4] M. Rahaman, M. Jasim, M. Ali, and M. Hasanuzzaman, "Bangla language modeling algorithm for automatic recognition of hand-sign-spelled bangla sign language," Front. Comput. Sci., p. 0, 2018.
[5] S. Ahmed, M. Islam, J. Hassan, M. U. Ahmed, B. J. Ferdosi, S. Saha, M. Shopon et al., “Hand sign to bangla speech: A deep learning in vision
based system for recognizing hand sign digits and generating bangla speech,” arXiv preprint arXiv:1901.05613, 2019.
[6] Jussi Kasurinen, "Elaborating Software Test Processes and Strategies", 2010 Third International Conference on Software Testing, Verification and Validation, November 2010.
[7] J. Gao, H.-S. Tsao, and Y. Wu, Testing and quality assurance for component-based software. Artech House, 2003.
[8] G. Davis, “Managing the test process [software testing],” in Proceedings International Conference on Software Methods and Tools. SMT 2000,
Nov 2000, pp.119–126.
[9] A. Akoum and N. Al Mawla, “Hand gesture recognition approach for asl language using hand extraction algorithm,” Journal of Software
Engineering and Applications, vol. 8, no. 08, p. 419, 2015.