International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low-Power VLSI Design, etc.
Face Recognition: From Scratch To Hatch / Eduard Tyantov (Mail.ru Group), Ontico
HighLoad++ 2017
Nairobi + Casablanca Hall, November 7, 15:00
Abstract:
http://www.highload.ru/2017/abstracts/3044.html
We developed a face detection and recognition technology for Mail.ru products that achieves strong results on well-known benchmarks. The technology is currently used in the Mobile Cloud@Mail.ru to cluster photos by person, as well as in the company's internal services.
...
MIP and Unsupervised Clustering for the Detection of Brain Tumour Cells (AM Publications)
Image processing is widely used in biomedical applications. It can be used to analyze different MRI brain images in order to detect abnormalities; the objective is to extract meaningful information from the imaged signals. Image segmentation is the process of partitioning an image into different parts, often based on the characteristics of the pixels. In our paper, segmentation of the tumour tissue is carried out using k-means and fuzzy c-means clustering. The tumour can be located quickly, with execution taking only a few seconds. The input brain image is taken from the available database, and the presence of a tumour in the input image can be detected.
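The k-means step described above can be sketched on raw pixel intensities. The following is a minimal NumPy-only toy version, with a synthetic bright blob standing in for a real MRI slice (the actual paper's features and data are not reproduced here):

```python
import numpy as np

def kmeans_segment(image, k=2, iters=10):
    """Cluster pixel intensities into k groups and return a label map."""
    pixels = image.reshape(-1, 1).astype(float)
    # Spread initial centroids across the intensity range (deterministic init)
    centroids = np.quantile(pixels, np.linspace(0, 1, k)).reshape(-1, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # Recompute each centroid as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape), centroids.ravel()

# Synthetic "scan": dark background with a bright blob as the region of interest
img = np.zeros((32, 32))
img[10:20, 10:20] = 200.0
labels, centers = kmeans_segment(img, k=2)
```

A fuzzy c-means variant would replace the hard `argmin` assignment with per-pixel membership weights; the overall structure stays the same.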
An IoT-Based Smart Manifold Attendance System (IJERD Journal)
ABSTRACT:- Attendance is an age-old procedure employed across disciplines in educational institutions. While attendance systems have evolved from manual techniques to biometrics, the burden of taking attendance remains. In fingerprint-based attendance monitoring, rough or scratched fingers lead to misreads; with face recognition, students must queue and wait until each face is recognised in turn. Our proposed system employs "manifold attendance", i.e., passive attendance, where the attendance of multiple people is captured at once. We have eliminated the need for queues and paper-and-pen attendance: with a single click, attendance is not only captured but also monitored, without any human intervention. In the proposed system, database creation and face detection use bounding boxes, while face recognition employs histogram equalization and a matching technique.
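The histogram equalization step used above to normalise face images before matching can be sketched as follows (NumPy-only, 8-bit grayscale assumed; this is the textbook formula, not necessarily the exact variant the paper uses):

```python
import numpy as np

def equalize_hist(image):
    """Map 8-bit intensities through the normalised cumulative histogram."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first non-zero CDF value
    total = image.size
    # Classic equalisation: stretch the CDF to span the full 0..255 range
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255).astype(np.uint8)
    return lut[image]

# Low-contrast "face crop": values bunched between 100 and 110
img = (np.arange(64, dtype=np.uint8).reshape(8, 8) % 11 + 100).astype(np.uint8)
eq = equalize_hist(img)
```

After equalisation the bunched intensities are spread over the whole dynamic range, which makes the subsequent matching step less sensitive to lighting differences between captures.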
Medical Image Fusion Using Discrete Wavelet Transform (IJERA Editor)
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
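The DWT-based fusion described above can be sketched with a hand-rolled one-level Haar transform: average the approximation band (the pixel-averaging rule) and keep the larger-magnitude detail coefficients. This is a simplified illustration with assumed rules, not the paper's exact scheme:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Invert the one-level Haar transform above."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w))
    d = np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the approximation band; keep larger-magnitude detail coefficients."""
    b1, b2 = haar2d(img1), haar2d(img2)
    ll = (b1[0] + b2[0]) / 2.0                             # pixel-averaging rule
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)  # max-magnitude rule
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2d(ll, *details)

# Two pre-registered toy "modalities" of the same scene
img_a = np.arange(64.0).reshape(8, 8)
img_b = img_a[::-1].copy()
fused = fuse(img_a, img_b)
```

Fusing an image with itself reconstructs it exactly, which is a useful sanity check on the transform pair; real pipelines would also compute PSNR/MSE against the sources, as the paper does.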
Implementing Tumor Detection and Area Calculation in MRI Image of Human Brain... (IJERA Editor)
This paper is based on research on human brain tumours and uses MRI imaging to capture the image. In this proposed work, the brain tumour area is calculated to determine the stage, or level of seriousness, of the tumour. Image processing techniques are used for the tumour area calculation and neural network algorithms for the tumour position calculation. As a further advancement, classification of the tumour based on a few parameters is also expected. The proposed work is divided into the following modules: Module 1: image pre-processing; Module 2: feature extraction and segmentation using the k-means and fuzzy c-means algorithms; Module 3: tumour area calculation and stage detection; Module 4: classification and position calculation of the tumour using a neural network.
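The area-calculation module above reduces to counting segmented pixels and scaling by the physical pixel size. A minimal sketch (the 0.5 mm spacing is an assumed example value, not one from the paper):

```python
import numpy as np

def tumor_area_mm2(mask, pixel_spacing_mm=0.5):
    """Area of a binary tumour mask: pixel count times physical pixel area."""
    return int(np.count_nonzero(mask)) * pixel_spacing_mm ** 2

# Hypothetical 4x4-pixel segmented region from the k-means/fuzzy c-means step
mask = np.zeros((16, 16), dtype=bool)
mask[4:8, 4:8] = True
area = tumor_area_mm2(mask)   # 16 pixels * (0.5 mm)^2
```

Stage thresholds would then be compared against `area`; the spacing must come from the scanner's DICOM metadata in practice.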
Currently, magnetic resonance imaging (MRI) is used extensively to obtain high-contrast medical images because it is safe enough to be applied repeatedly. To extract important information from an MRI image, efficient image segmentation or edge detection is required. Edges are important contour features in a medical image, since they are the boundaries where distinct intensity changes or discontinuities occur. In practice, however, it is difficult to design an edge detector capable of finding all the true edges in an image, because of noise and the subjectivity involved in judging edges. Many traditional algorithms have been proposed for edge detection, such as Canny, Sobel, Prewitt, Roberts, zero-cross, and Laplacian of Gaussian (LoG). Many studies have also shown the potential of Artificial Neural Networks (ANNs) for edge detection. Although many edge-detection algorithms exist for medical images, their computational cost and subjective image quality can still be improved. The objective of this paper is therefore to develop a fast ANN-based edge detection algorithm for MRI medical images. First, we develop features based on horizontal, vertical, and diagonal differences. Then, a Canny edge detector is used to generate the training output. Finally, optimized parameters are obtained, including the number of hidden layers and the output threshold. The edge-detected image is analysed both subjectively (image quality) and computationally. Results show that the proposed algorithm provides better image quality while running around three times faster than traditional algorithms such as the Sobel and Canny edge detectors.
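The horizontal, vertical, and diagonal difference features described above can be computed per pixel as follows. This is a plausible NumPy sketch; the paper's exact neighbourhood and normalisation may differ:

```python
import numpy as np

def difference_features(img):
    """Absolute horizontal, vertical, and diagonal neighbour differences per pixel."""
    p = np.pad(img.astype(float), 1, mode="edge")   # replicate borders
    h = np.abs(p[1:-1, 2:] - p[1:-1, :-2])          # left vs right neighbour
    v = np.abs(p[2:, 1:-1] - p[:-2, 1:-1])          # top vs bottom neighbour
    d = np.abs(p[2:, 2:] - p[:-2, :-2])             # diagonal neighbours
    return np.stack([h, v, d], axis=-1)             # one 3-vector per pixel

# A vertical step edge: the horizontal difference fires along the boundary
img = np.zeros((6, 6))
img[:, 3:] = 1.0
feats = difference_features(img)
```

These per-pixel feature vectors would then be fed to the ANN, with the Canny output of the same image as the training target.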
One-Sample Face Recognition Using HMM Model of Fiducial Areas (CSCJournals)
In most real-world applications, multiple image samples per individual are not easy to collect for direct implementation of recognition or verification systems. There is therefore a need to perform these tasks even when only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images, and a hidden Markov model (HMM) for training, recognition, and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) with a false acceptance rate (FAR) of 0.02%.
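Classification with one HMM per subject reduces to scoring the observation sequence under each model with the forward algorithm and picking the highest likelihood. A minimal discrete-observation sketch (toy parameters; the paper's models are trained on DWT features, not shown here):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM (pi, A, B)."""
    alpha = pi * B[:, obs[0]]             # initial step: start prob times emission
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                  # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate through transitions, then emit
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Two toy "subject" models over a binary observation alphabet
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # model 1 favours symbol 0 in state 0
B2 = np.array([[0.1, 0.9], [0.8, 0.2]])   # model 2 is the mirror image
obs = [0, 0, 1, 0, 0]
scores = [forward_loglik(obs, pi, A, B1), forward_loglik(obs, pi, A, B2)]
best = int(np.argmax(scores))             # index of the best-matching "subject"
```

Verification works the same way, but compares the claimed subject's score against a threshold rather than against other models.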
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm (ijcisjournal)
This paper describes a digital pen based on an IMU sensor for gesture and handwritten-digit trajectory recognition applications. The project enables human-PC interaction. Handwriting recognition is mainly used in security and authentication applications. Using the embedded pen, the user can make a hand gesture or write a digit or an alphabetical character. The embedded pen contains an inertial sensor, a microcontroller, and a Zigbee wireless transmitter module for creating handwriting and gesture trajectories. The proposed trajectory recognition algorithm comprises sensing-signal acquisition, pre-processing, feature generation, feature extraction, and classification. The user's hand motion is measured by the sensor, and the sensed data are transmitted wirelessly to a PC for recognition. The process first extracts time-domain and frequency-domain features from the pre-processed signal, then performs linear discriminant analysis to represent the features in a reduced dimension. The dimensionally reduced features are processed with two classifiers: a Support Vector Machine (SVM) and k-Nearest Neighbours (kNN). With this algorithm, the SVM classifier achieves a recognition rate of 98.5% and the kNN classifier 95.5%.
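The final classification stage can be sketched with a minimal k-nearest-neighbour classifier in plain NumPy. Here synthetic 2-D vectors stand in for the LDA-reduced trajectory features; the class labels and data are illustrative only:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

# Synthetic LDA-reduced features: two well-separated gesture classes
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],    # class 0
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])   # class 1
train_y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(train_x, train_y, np.array([0.15, 0.1]))
```

An SVM would learn a separating boundary instead of voting over neighbours; on well-separated features like these, both give the same answer.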
Detection and classification of brain tumours are very important because they provide anatomical information about normal and abnormal tissue, which helps in early treatment planning and patient follow-up. A number of techniques exist for medical image classification. In this paper, we propose an image classification technique using a Probabilistic Neural Network (PNN), with a Genetic Algorithm (GA) and a K-Nearest Neighbour (K-NN) classifier for feature selection. The search capabilities of genetic algorithms are exploited to select appropriate features from the input data and obtain an optimal classification. The method is implemented to classify and label brain MRI images into seven tumour types. Many texture features (from the Gray Level Co-occurrence Matrix, GLCM) can be extracted from an image, so choosing the best features to avoid poor generalization and over-specialization is of paramount importance before classifying the image and comparing results with the PNN algorithm.
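The PNN mentioned above is essentially a Parzen-window density estimate per class with a Gaussian kernel: each class is scored by the summed kernel responses of its training samples. A minimal sketch on toy vectors (the paper's GLCM features and seven tumour classes are replaced by synthetic data):

```python
import numpy as np

def pnn_classify(train_x, train_y, query, sigma=0.5):
    """Score each class by its mean Gaussian-kernel response; return the best class."""
    scores = {}
    for cls in np.unique(train_y):
        pts = train_x[train_y == cls]
        sq = np.sum((pts - query) ** 2, axis=1)
        # Class-conditional density estimate at the query point
        scores[int(cls)] = float(np.mean(np.exp(-sq / (2 * sigma ** 2))))
    return max(scores, key=scores.get)

# Toy feature vectors standing in for selected GLCM features
train_x = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0
                    [2.0, 2.0], [2.1, 1.9]])   # class 1
train_y = np.array([0, 0, 1, 1])
label = pnn_classify(train_x, train_y, np.array([0.1, 0.1]))
```

In the paper's pipeline, the GA would choose which feature dimensions enter `train_x` before this classification step.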
Recognition of Numerals Using Neural Network (IOSR Journals)
Abstract: Recognition of handwritten numerals is a challenging problem that researchers have studied for a long time, especially in recent years. The goal of optical digit recognition is to classify optical patterns in a digital image that correspond to numerals. Many fields deal with numbers, for example cheques in banks or numbers on car licence plates, and so the subject of digit recognition arises. A system for recognizing isolated digits is one approach to such applications: in other words, letting the computer understand numerals written manually by users and interpret them in its own processing. Image processing is simply the processing of a given image: the input is an image from any source, and the output may be an image or a set of parameters related to that image. The main objective of our system is to recognize isolated digits as they appear in different applications. For example, different users have their own handwriting styles, and the main challenge is to let the computer system understand these different styles and recognize them as standard writing. The process involves three phases: pre-processing, training, and recognition. The pre-processing stage performs noise removal, binarization, labelling, rescaling, and segmentation. The training stage adopts a feed-forward network with back-propagation. The recognition stage recognizes input images of numerals. Keywords: Artificial Neural Network, segmentation, geometrical feature extraction, Otsu's method, feed-forward back-propagation algorithm.
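The binarization step cites Otsu's method, which picks the threshold maximising the between-class variance of the intensity histogram. A compact NumPy sketch on a synthetic bimodal image:

```python
import numpy as np

def otsu_threshold(img):
    """Return the 8-bit threshold maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                  # weight of class 0 (pixels <= t)
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal "digit" image: ink strokes (~200) on paper background (~30)
img = np.full((10, 10), 30, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
binary = img > t
```

The resulting binary mask then feeds the labelling, rescaling, and segmentation stages described above.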
Recognition of Epilepsy from Non-Seizure Electroencephalogram using combinati... (Atrija Singh)
IC3: International IEEE Conference on Contemporary Computing
Noida, India
Presented on 10th August 2017.
Topic: Recognition of Epilepsy from Non-Seizure Electroencephalogram using a combination of Linear SVM and Time-Domain Attributes.
Shot Boundary Detection in Video Sequences Using Motion Activities (CSCJournals)
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize content-based video retrieval, the video information should be organized to reflect the structure of the video, and segmenting the video into shots is an important first step. This paper presents a new method of shot boundary detection based on motion activity in the video sequence. The proposed algorithm is tested on various video types, and the experimental results show that it is effective and reliably detects shot boundaries.
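A frame-difference notion of motion activity already yields a workable shot-boundary detector. A minimal sketch on synthetic frames (the metric and threshold here are illustrative, not the paper's):

```python
import numpy as np

def shot_boundaries(frames, threshold=50.0):
    """Flag a cut where the mean absolute frame difference spikes above threshold."""
    cuts = []
    for i in range(1, len(frames)):
        activity = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if activity > threshold:
            cuts.append(i)
    return cuts

# Two synthetic shots: dark frames, then a hard cut to bright frames
shot_a = [np.full((8, 8), 20, dtype=np.uint8) for _ in range(3)]
shot_b = [np.full((8, 8), 220, dtype=np.uint8) for _ in range(3)]
cuts = shot_boundaries(shot_a + shot_b)
```

Gradual transitions (fades, dissolves) need a windowed or adaptive threshold rather than a fixed one, which is where motion-activity-based methods improve on plain differencing.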
To be honest, this work was done to build my self-confidence, based on my interest. Being an electronics student, it gives me enough courage to explore more Machine Learning and Artificial Intelligence topics.
Thank you for viewing, and please leave a like to boost my confidence.
To add on, this is my first work on SlideShare.
Happy learning!
Adaptive Modified Backpropagation Algorithm Based on Differential Errors (IJCSEA Journal)
A new, efficient modified back-propagation algorithm with an adaptive learning rate is proposed to increase convergence speed and minimize error. The method eliminates the initial trial-and-error fixing of the learning rate and replaces it with an adaptive learning rate. In each iteration, the adaptive learning rates for the output and hidden layers are determined by calculating the differential linear and nonlinear errors of the output layer and hidden layer separately. In this method, each layer has a different learning rate in each iteration. The performance of the proposed algorithm is verified by simulation results.
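The idea of a per-iteration adaptive learning rate can be illustrated on simple gradient descent where the step size adapts to the current error. The update rule below is illustrative only; the paper derives its per-layer rates from the differential linear and nonlinear errors of each layer:

```python
def adaptive_descent(grad, x0, base_lr=0.5, iters=50):
    """Gradient descent whose step size shrinks as the gradient magnitude grows."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        lr = base_lr / (1.0 + abs(g))   # adaptive rate: cautious when error is large
        x -= lr * g
    return x

# Minimise f(x) = (x - 3)^2, gradient 2(x - 3); minimum at x = 3
x_star = adaptive_descent(lambda x: 2 * (x - 3.0), x0=10.0)
```

With a fixed rate of 0.5 this problem would need hand-tuning to avoid overshoot; the adaptive rate removes that tuning step, which mirrors the paper's motivation.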
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms (IJRES Journal)
This paper presents simple yet effective methods for hand gesture recognition. Gesture recognition is mainly concerned with analysing the functionality of human intelligence. The main aim of gesture detection and recognition is to design an efficient system that can recognize particular human gestures and use them to transfer information or control devices. Hand gestures provide a vivid modality, complementary to speech, for expressing one's thoughts. The information associated with hand-gesture detection in a conversation includes its extent or degree, discourse structure, and spatial and temporal structure. Based on these points, the paper discusses various models of gesture detection and recognition.
Using deep neural networks in classifying electromyography signals for hand g... (IAES IJAI)
Electromyography (EMG) signals are used for various applications, especially in smart prostheses. Recognizing various gestures (hand movements) in EMG systems introduces challenges, including the effect of noise on EMG signals and the difficulty of identifying the exact movement from the collected EMG data, among others. In this paper, three neural network models are trained on an open EMG dataset to classify and recognize seven different gestures from the collected EMG data. The three implemented models are a four-layer deep neural network (DNN), an eight-layer DNN, and a five-layer convolutional neural network (CNN). In addition, five optimizers are tested for each model: Adam, Adamax, Nadam, Adagrad, and AdaDelta. The four-layer model achieves a respectable recognition accuracy of 95%.
Recognition of new gestures using myo armband for myoelectric prosthetic appl... (IJECE IAES)
Myoelectric prostheses are a viable solution for people with amputations. The challenge in implementing a usable myoelectric prosthesis lies in accurately recognizing different hand gestures. Current myoelectric devices usually implement very few hand gestures; to approximate real hand functionality, a myoelectric prosthesis should implement a large number of hand and finger gestures. However, increasing the number of gestures can decrease recognition accuracy. In this work, a Myo armband device is used to recognize fourteen gestures (the five built-in Myo armband gestures plus nine new ones). The data were collected from three able-bodied subjects for 7 seconds per gesture. The proposed method uses a pattern recognition technique based on a Multi-Layer Perceptron Neural Network (MLPNN). The results show an average accuracy of 90.5% in recognizing the proposed fourteen gestures.
Comparative analysis of machine learning algorithms on myoelectric signal fro... (IAES IJAI)
Control strategies for smart hand prostheses based on myoelectric signals have not, in recent years, given patients the sensation of biological control over the prosthetic hand's fingers. In this work, therefore, hyperparameter optimization of machine learning algorithms and hand gesture recognition techniques are applied to myoelectric signals from the residual muscle contractions of amputees, corresponding to intact forearm limb movements, to improve their biological control. Myoelectric signals are extracted using the MYO armband to recognize ten gestures from ten volunteers (healthy and with transradial amputation), after which noise is removed from the myoelectric signals using a notch filter (NF). The proposed classification system involves two groups of machine learning algorithms: (1) the decision tree (DT), tri-layered neural network (TLNN), k-nearest-neighbour (KNN), support vector machine (SVM), and ensemble boosted tree (EBT) classifiers; (2) the optimized machine learning classifiers, i.e., OKNN, OSVM, and OEBT, with optical diffraction tomography (ODT) and the ommatidia detecting algorithm (ODA). The experimental comparison of classifiers shows that the best-performing algorithm is OEBT, closely followed by OKNN, with accuracies of 97.8% and 97.1% for the intact forearm limb and 91.9% and 91.4%, respectively, for transradial amputation.
Medical Image Fusion Using Discrete Wavelet TransformIJERA Editor
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
Implementing Tumor Detection and Area Calculation in Mri Image of Human Brain...IJERA Editor
This paper is based on the research on Human Brain Tumor which uses the MRI imaging technique to capture the image. In this proposed work Brain Tumor area is calculated to define the Stage or level of seriousness of the tumor. Image Processing techniques are used for the brain tumor area calculation and Neural Network algorithms for the tumor position calculation. Also in the further advancement the classification of the tumor based on few parameters is also expected. Proposed work is divided in to following Modules: Module 1: Image Pre-Processing Module 2: Feature Extraction, Segmentation using K-Means Algorithm and Fuzzy C-Means Algorithm Module 3: Tumor Area calculation & Stage detection Module 4: Classification and position calculation of tumor using Neural Network
Currently, magnetic resonance imaging (MRI) has been utilized extensively to obtain high contrast medical image due to its safety which can be applied repetitively. To extract important information from an MRI medical images, an efficient image segmentation or edge detection is required. Edges are represented as important contour features in the medical image since they are the boundaries where distinct intensity changes or discontinuities occur. However, in practices, it is found rather difficult to design an edge detector that is capable of finding all the true edges in an image as there is always noise, and the subjectivity of sensitiveness in detecting the edges. Many traditional algorithms have been proposed to detect the edge, such as Canny, Sobel, Prewitt, Roberts, Zerocross, and Laplacian of Gaussian (LoG). Moreover, many researches have shown the potential of using Artificial Neural Network (ANN) for edge detection. Although many algorithms have been conducted on edge detection for medical images, however higher computational cost and subjective image quality could be further improved. Therefore, the objective of this paper is to develop a fast ANN based edge detection algorithm for MRI medical images. First, we developed features based on horizontal, vertical, and diagonal difference. Then, Canny edge detector will be used as the training output. Finally, optimized parameters will be obtained, including number of hidden layers and output threshold. The edge detection image will be analysed its quality subjectively and computational. Results showed that the proposed algorithm provided better image quality while it has faster processing time around three times time compared to other traditional algorithms, such as Sobel and Canny edge detector.
One-Sample Face Recognition Using HMM Model of Fiducial AreasCSCJournals
In most real world applications, multiple image samples of individuals are not easy to collate for direct implementation of recognition or verification systems. Therefore there is a need to perform these tasks even if only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses two dimensional discrete wavelet transform (2D DWT) to extract features from images and hidden Markov model (HMM) was used for training, recognition and classification. It was tested with a subset of the AT&T database and up to 90% correct classification (Hit) and false acceptance rate (FAR) of 0.02% was achieved.
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithmijcisjournal
This paper describes a digital pen based on IMU sensor for gesture and handwritten digit gesture
trajectory recognition applications. This project allows human and Pc interaction. Handwriting
Recognition is mainly used for applications in the field of security and authentication. By using embedded
pen the user can make hand gesture or write a digit and also an alphabetical character. The embedded pen
contains an inertial sensor, microcontroller and a module having Zigbee wireless transmitter for creating
handwriting and trajectories using gestures. The propound trajectory recognition algorithm constitute the
sensing signal attainment, pre-processing techniques, feature origination, feature extraction, classification
technique. The user hand motion is measured using the sensor and the sensing information is wirelessly
imparted to PC for recognition. In this process initially excerpt the time domain and frequency domain
features from pre-processed signal, later it performs linear discriminant analysis in order to represent
features with reduced dimension. The dimensionally reduced features are processed with two classifiers –
State Vector Machine (SVM) and k-Nearest Neighbour (kNN). Through this algorithm with SVM classifier
provides recognition rate is 98.5% and with kNN classifier recognition rate is 95.5% .
Detection and classification of brain tumor are very important because it provides anatomical information of normal and abnormal tissues which helps in early treatment planning and patient's case follow-up. There is a number of techniques for medical image classification. We used PNN (Probabilistic Neural Network Algorithm) for image classification technique based on Genetic Algorithm (GA) and K-Nearest Neighbor (K-NN) classifier for feature selection is proposed in this paper. The searching capabilities of genetic algorithms are explored for appropriate selection of features from input data and to obtain an optimal classification. The method is implemented to classify and label brain MRI images into seven tumor types. A number of texture features (Gray Level Co-occurrence Matrix (GLCM)) can be extracted from an image, so choosing the best features to avoid poor generalization and over specialization is of paramount importance then the classification of the image and compare results based on the PNN algorithm.
Recognition of Numerals Using Neural NetworkIOSR Journals
Abstract : Recognition of handwritten numerals is a challenging problem researchers had been research into this area for so long especially in the recent years. The goal of optical digit recognition is to classify optical patterns contained in a digital image corresponding to numerals. In our study there are many fields concern with numbers, for example, checks in banks or recognizing numbers in car plates, the subject of digit recognition appears. A system for recognizing isolated digits may be as an approach for dealing with such application. In other words, to let the computer understand the English numbers that is written manually by users and views them according to the computer process. Image processing is simply the processing of the given image. The input is just an image that may be from any source, and the output may be an image or a set of parameters that are related to that particular image. The main objective for our system was to recognize isolated digits exist in different applications. For example, different users had their own handwriting styles where here the main challenge falls to let computer system understand these different handwriting styles and recognize them as standard writing. The process involves three phases namely pre-processing, training and recognition. Pre-processing stage performs noise removal, binarization, labelling, rescaling and segmentation operations. Training stage adopts back propagation with feed forward technique. Recognition stage recognizes input images of numerals. Keywords: - Artificial Neural network, segmentation, geometrical feature extraction, OTSU’s method, feed forward back propagation algorithm.
Recognition of Epilepsy from Non Seizure Electroencephalogram using combinati...Atrija Singh
IC3: International IEEE Conference on Contemporary Computing
Noida India
Presented on 10th August 2017.
Topic : Recognition of Epilepsy from Non Seizure Electroencephalogram using combination of Linear SVM and Time Domain Attributes.
Shot Boundary Detection In Videos Sequences Using Motion ActivitiesCSCJournals
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize the content based video retrieval, the video information should be organized to elaborate the structure of the video. The segmentation video into shot is an important step to make. This paper presents a new method of shot boundaries detection based on motion activities in video sequence. The proposed algorithm is tested on the various video types and the experimental results show that our algorithm is effective and reliably detects shot boundaries.
To be honest, this work is done for the purpose of building self confidence in me, based on my interest. Being Electronics student it gives enough courage to explore more on Machine Learning and Artificial Intelligence topics.
Thankyou for viewing and please leave a like to elevate my Confidence.
To add-on, this my first work on Slideshare.
Happy Learning
Adaptive modified backpropagation algorithm based on differential errorsIJCSEA Journal
A new efficient modified back propagation algorithm with adaptive learning rate is proposed to increase the convergence speed and to minimize the error. The method eliminates initial fixing of learning rate through trial and error and replaces by adaptive learning rate. In each iteration, adaptive learning rate for output and hidden layer are determined by calculating differential linear and nonlinear errors of output layer and hidden layer separately. In this method, each layer has different learning rate in each iteration. The performance of the proposed algorithm is verified by the simulation results.
Gesture Recognition Review: A Survey of Various Gesture Recognition AlgorithmsIJRES Journal
This paper presents simple as well as effective methods to realize hand gesture recognition. Gesture recognition is mainly concerned with analysing the functionality of human intelligence. The main aim of gesture detection and recognition is to design an efficient system that is able to recognize particular human gestures and use these detected gestures to transfer information or to control devices. Hand gestures provide a vivid complementary modality to speech for expressing one's thoughts. The information associated with hand gesture detection in a conversation includes extent or degree, discourse structure, and spatial and temporal structure. Based on the above points, the paper discusses various models of gesture detection and recognition.
Using deep neural networks in classifying electromyography signals for hand g...IAESIJAI
Electromyography (EMG) signals are used for various applications, especially in smart prostheses. Recognizing various gestures (hand movements) in EMG systems introduces challenges. These challenges include the noise effect on EMG signals and the difficulty in identifying the exact movement from the collected EMG data, amongst others. In this paper, three neural network models are trained using an open EMG dataset to classify and recognize seven different gestures based on the collected EMG data. The three implemented models are: a four-layer deep neural network (DNN), an eight-layer DNN, and a five-layer convolutional neural network (CNN). In addition, five optimizers are tested for each model, namely Adam, Adamax, Nadam, Adagrad, and AdaDelta. It has been found that the four-layer model achieves a respectable recognition accuracy of 95%.
Recognition of new gestures using myo armband for myoelectric prosthetic appl...IJECEIAES
Myoelectric prostheses are a viable solution for people with amputations. The challenge in implementing a usable myoelectric prosthesis lies in accurately recognizing different hand gestures. Current myoelectric devices usually implement very few hand gestures. In order to approximate real hand functionality, a myoelectric prosthesis should implement a large number of hand and finger gestures. However, an increasing number of gestures can lead to a decrease in recognition accuracy. In this work, a Myo armband device is used to recognize fourteen gestures (the five built-in gestures of the Myo armband in addition to nine new gestures). The data in this research were collected from three able-bodied subjects for a period of 7 seconds per gesture. The proposed method uses a pattern recognition technique based on a Multi-Layer Perceptron Neural Network (MLPNN). The results show an average accuracy of 90.5% in recognizing the proposed fourteen gestures.
Comparative analysis of machine learning algorithms on myoelectric signal fro...IAESIJAI
Control strategies for smart hand prostheses based on myoelectric signals have not, in recent years, provided patients with the sensation of biological control of the prosthetic hand's fingers. Therefore, in the current work, hyperparameter optimization of machine learning algorithms and hand gesture recognition techniques were applied to myoelectric signals based on the residual muscle contractions of amputees corresponding to intact forearm limb movement, to improve their biological control. In this paper, myoelectric signals are extracted using the MYO armband to recognize ten gestures from ten volunteers (healthy and with transradial amputation) on the forearm; thereafter the noise in the myoelectric signals is removed using a notch filter (NF). The proposed classification system involved two groups of machine learning algorithms: (1) the decision tree (DT), tri-layered neural network (TLNN), k-nearest-neighbor (KNN), support vector machine (SVM) and ensemble boosted tree (EBT) classifiers; (2) the optimized machine learning classifiers, i.e., OKNN, OSVM, and OEBT with optical diffraction tomography (ODT) and the ommatidia detecting algorithm (ODA). The experimental comparison of the classifiers pointed out that the best-performing algorithm is OEBT, closely followed by OKNN, achieving accuracies of 97.8% and 97.1% for the intact forearm limb and 91.9% and 91.4% for transradial amputation, respectively.
Estimation of Arm Joint Angles from Surface Electromyography signals using Ar...IOSR Journals
Abstract: The Vicon system is implemented in almost every motion analysis system. It has many applications, such as robotics, gaming, virtual reality and animated movies, in which motion and orientation play an important role. In this paper we propose a method to estimate arm joint angles from surface Electromyography (s-EMG) signals using an Artificial Neural Network (ANN). The neural network is trained with EMG data from wrist flexion and extension actions as input and joint angle values from the Vicon system as target. The results shown in this paper illustrate the neural network's performance in estimating the joint angle values during offline testing.
Index Terms: Vicon system, Joint angle, Surface EMG, Artificial Neural Network, Virtual reality, Robotics.
Comparative Analysis of Hand Gesture Recognition TechniquesIJERA Editor
During the past few years, human hand gestures for interaction with computing devices have continued to be an active area of research. In this paper a survey of hand gesture recognition is provided. Hand gesture recognition consists of three stages: pre-processing, feature extraction or matching, and classification or recognition. Each stage involves different methods and techniques. This paper gives a short description of the different methods used for hand gesture recognition in existing systems, with a comparative analysis of all methods along with their benefits and drawbacks.
The main idea of the current work is to use a wireless Electroencephalography (EEG) headset as a remote control for the mouse cursor of a personal computer. The proposed system uses EEG signals as a communication link between brains and computers. Signal records obtained from the PhysioNet EEG dataset were analyzed using Coiflet wavelets, and many features were extracted using different amplitude estimators for the wavelet coefficients. The extracted features were input into machine learning algorithms to generate the decision rules required for our application. The suggested real-time implementation of the system was tested and very good performance was achieved. This system could be helpful for disabled people, as they can control computer applications via the imagination of fist and foot movements in addition to closing their eyes for a short period of time.
Correlation Analysis of Electromyogram SignalsIJMTST Journal
An inability to adapt myoelectric interfaces to a user's unique style of hand motion, and to the motion style of the opposite limb, is an important factor inhibiting the practical application of myoelectric interfaces. This is mainly attributed to individual differences in the exhibited electromyogram (EMG) signals generated by the muscles of different limbs. In this project the myoelectric interface easily adapts to the signal from the users and maintains good movement recognition performance. At the initial stage the myoelectric signal is extracted from the user using the data acquisition system. A new set of features describing the user's movements is extracted, and the user's features are classified using SVM classification. The given signal is then compared with the database signal, with an accuracy of 90.910% across all the EMG signals.
Modelling and Control of a Robotic Arm Using Artificial Neural NetworkIOSR Journals
Abstract: Often it can be seen that men with a lost arm face severe difficulties doing daily chores. Artificial Intelligence could be effectively used to provide some respite to those people. Neural networks and their applications have been an active research topic in the rehabilitation robotics/machine learning community in the recent past, as they can be used to predict posture/gesture guided by signals from the human brain. In this paper, a method is proposed to estimate force from Surface Electromyography (s-EMG) signals generated by specific hand movements and then to design and control a robotic arm using an Artificial Neural Network (ANN) to replicate the human arm. Here the force prediction is a regression process. A hand model has been successfully moved using a servo motor programmed based on the results obtained from sample data. The results shown in this paper illustrate how the robotic arm performs.
Index Terms: Surface EMG, Artificial Neural Network, Robotic arm, Regression.
Short-term hand gestures recognition based on electromyography signalsIAESIJAI
Electromyography pattern recognition to predict limb movements can significantly enhance the control of the prosthesis. However, this technique has not yet been widely used in clinical practice. Improvements in the myoelectric pattern recognition (MPR) system can improve the functionality of the prosthesis. This study proposes new sets of time domain features to enhance the MPR control system. Three groups of features are evaluated: time domain with auto regression (TD-AR), frequency domain (FD), and time-frequency domain (TFD). The electromyography (EMG) signals are obtained from the Ninapro database-5 (DB5), a publicly available dataset for hand prosthetics. The long-term signals of DB5 are divided into short-term signals to perform short-term signal recognition. The three feature sets are extracted from the short-term signals. The results showed that the performance of the proposed TD-AR features outperformed that of the FD and TFD feature sets. The TD-AR-based discrimination of 40 gestures achieved a precision of 88.8% and a sensitivity of 82.6%. The integration of short-term identification with reliable features can improve classification accuracy even for a large number of gestures. A comparison with the latest works shows the reliability of the proposed work.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis' slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Final Thesis Presentation
1.
2. Electromyography Based Intelligence Gesture
Classification Empowered With Support Vector
Machine
Presented by: Sajid Rasheed (Roll No. 20)
Thesis Supervisor: Dr. Muhammad Adnan Khan
Department of Computer Science & Information Technology
MINHAJ UNIVERSITY LAHORE HAMDARD CHOWK
TOWNSHIP LAHORE
3. Outline
Introduction
Electromyography (EMG)
EMG Images
Literature Review
Problem statement
Contribution
Objectives
Datasets
Input / Output Variables
Proposed Methodology
- Proposed System Model
- Support Vector Machine (SVM)
- Math Behind SVM
- Performance Evaluating Parameters
Results Analysis
- Training Results
- Testing and Validation Results
- Accuracy of Proposed Model
- Comparison of Proposed Model with Previous Models
Conclusion
- Conclusion
- Future Work
4. Introduction
With the growing presence of computerized systems in our day-to-day lives, the importance of Human-Computer Interface (HCI) systems has increased. HCI defines optimal usage of the storage, connectivity, and display capabilities used in the information flow. The development of simple applications that understand movements of the user's body and transform them into machine commands has been of considerable importance during recent years [1].
Specific biological signals can be used for neuronal communication with devices, and can be obtained from a particular organ, body, or cell network such as the nervous system.
Reference [1]
5. Continued…
Examples of these approaches are the Electromyogram (EMG), Electroencephalogram (EEG), and Electrooculogram (EOG). These methods are particularly important for people with physical disabilities. There have been several attempts to use gesture-based EMG signals for HCI development.
There is currently ongoing work on EMG signal processing and controllers in a variety of fields, including the development of a graphical interface with continuous EMG signal classification to help disabled people use word processing and other personal computer applications.
An EMG tool can be designed for the identification of gestures based on the analysis of signals from different muscle groups in motion.
6. Electromyography (EMG)
Electromyography (EMG) is the study of electrical signals in the muscles, called myoelectric activity, which are acquired from the surface of the skin via sensors [2].
Electromyography is a medical technique that measures the health status of the muscles and of the nerve cells that regulate them, the motor neurons. EMG signals acquired from muscles permit sophisticated methods for identification, decomposition, sorting, and classification. Different EMG signal analysis methodologies and techniques for achieving this objective include quick and accurate ways to grasp the signal and the nature of electromyographic signals.
Reference [2]
8. Literature Review
• Lobov et al. [5]
Worked on the classification of hand gestures and implemented it in a dynamic gaming environment.
The proposed model classified seven hand movements using an Artificial Neural Network (ANN).
For data acquisition, an EMG Thalmic bracelet containing eight sensors was used.
The proposed model achieved accuracy of up to 91.5% using the ANN.
• Alejandro et al. [6]
An automated hand and wrist gesture identification system based on supervised machine learning techniques.
The proposed model used an open-access collection of 36 subjects that included recordings of EMG signals.
Six hand gestures were classified using a Convolutional Neural Network (CNN) and a random forest model, obtaining accuracies of up to 94.77% and 95.39%, respectively.
Reference [5, 6]
9. Continued…
• Benalcázar et al. [7]
Proposed a real-time hand classification method for the classification of hand movements.
The discussed model used raw data from the surface of eight EMG channels to measure the movement of the forearm.
The model used a K-Nearest Neighbors (KNN) classifier to identify five hand gestures without any feature extraction method.
The proposed model showed a best classification accuracy of 89.5% in recognizing hand gestures.
• Bian et al. [8]
Discussed four classification systems used to identify hand gestures relying on pattern recognition of surface electromyographic (sEMG) signals.
The results indicate that both the accuracy and the preparation time of the model are best for the support vector machine, with a classification accuracy as high as about 92.25 percent.
Reference [7, 8]
10. Problem Statement
For many decades, Electromyography (EMG) has been used to find problems in muscles and the nerve cells that control them, and it has also been used in computer science applications to control input devices such as a mouse, joystick, and household devices. However, the use of EMG to control computing devices is still a matter of discussion.
A lot of work has been done by researchers on EMG signals with classification methodologies such as fuzzy logic, artificial neural networks, Support Vector Machines (SVM), k-nearest neighbors, etc., to recognize movements of the hand and other body parts.
But the classification of hand gestures is still a huge problem, and it cannot yet be used commercially for controlling devices based on hand movements.
11. Contributions
Accuracy, efficiency, and generalizability are the major challenges of existing gesture classification systems. To overcome these limitations, an Intelligence Gesture Classification System Empowered with Support Vector Machine (IGCS-SVM) is proposed to recognize hand movements.
The proposed model extracts features from the surface EMG through eight EMG sensors; a support vector machine is then used to classify the extracted features to recognize hand gestures. The system communicates with controlled devices through the Internet of Things (IoT).
SVM is a supervised data classification learning methodology that provides accurate results, better than others. That is why the researcher has taken up this task using the support vector machine technique to solve the classification problem.
12. Objectives
To enable quantification of the gestures' fidelity in a dynamic gaming environment.
To reduce the miss rate and mean square error of the intelligence gesture classification system.
To improve the accuracy of the intelligence gesture classification system empowered with support vector machine.
13. Datasets
The proposed model acquires its dataset from the internet; it is publicly available on the website of the UCI Machine Learning Repository [8] for classifying hand gestures. The EMG data were acquired using a MYO Thalmic bracelet worn by the user on his/her forearm.
For the collection of data, 36 subjects participated, wearing Thalmic bracelets and performing seven basic gestures. The dataset contains ten attributes: one attribute is the time, recorded in milliseconds; eight attributes contain the eight EMG channels recording the movement of the gestures; and one attribute is the class, which contains eight gesture labels. Reference [8]
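The ten-attribute layout described above can be sketched as a tiny record parser. This is an illustrative sketch only: the tab-separated, ten-column layout (time in milliseconds, eight channel readings, then the class label) is assumed from the slide's description, not taken from the actual repository files.

```python
def parse_emg_record(line):
    """Split one assumed record into (time_ms, channel_values, gesture_class)."""
    fields = line.strip().split("\t")
    time_ms = float(fields[0])                   # attribute 1: time in milliseconds
    channels = [float(v) for v in fields[1:9]]   # attributes 2-9: eight EMG channels
    gesture = int(fields[9])                     # attribute 10: gesture class label
    return time_ms, channels, gesture

# Hypothetical record: time, eight channel readings, class 2 (fist).
record = "1.0\t-1e-05\t2e-05\t0.0\t3e-05\t-2e-05\t0.0\t1e-05\t0.0\t2"
t, ch, g = parse_emg_record(record)
print(t, len(ch), g)  # 1.0 8 2
```

Any real loader would of course have to follow the actual file format documented in the repository.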
14. Input / Output Variables
Sr. No. Input / Output Variable Name
Input 1 Time (ms)
Input 2 Channel I
Input 3 Channel II
Input 4 Channel III
Input 5 Channel IV
Input 6 Channel V
Input 7 Channel VI
Input 8 Channel VII
Input 9 Channel VIII
Output 1 Class
15. Detail of Output Variable
Class Label of Gestures
0 unmarked data
1 hand at rest
2 hand clenched in a fist
3 wrist flexion
4 wrist extension
5 radial deviations
6 ulnar deviations
7 extended palm
18. Feature Extraction
• Mean Absolute Value
The mean absolute value of an electromyography signal is determined by averaging the absolute value of the signal. It is an estimate of the mean absolute value of the signal x_j over a segment j of length W samples:

MAV = \frac{1}{W} \sum_{j=1}^{W} |x_j|

• Root Mean Square Value
The root mean square value of the surface electromyography signal can be calculated as:

RMS = \sqrt{\frac{1}{W} \sum_{j=1}^{W} x_j^2}

where x_j represents the electromyography signal samples and W represents the length of the segment.
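The two features above reduce to a few lines of plain Python; the window values below are made-up illustrative samples, not EMG data.

```python
import math

def mav(window):
    """Mean Absolute Value: average of |x_j| over a segment of W samples."""
    return sum(abs(x) for x in window) / len(window)

def rms(window):
    """Root Mean Square: square root of the mean of the squared samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

segment = [0.5, -0.5, 1.0, -1.0]
print(mav(segment))  # 0.75
print(rms(segment))  # about 0.7906
```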
19. Support Vector Machine Classifier
A Support Vector Machine (SVM) is a supervised machine learning technique that helps in solving big data classification problems; it provides a classification learning model and algorithm.
The purpose of the SVM is to determine the ideal hyperplane that divides two classes of points in space. The hyperplane must satisfy the criterion of having the maximal possible distance from both classes.
20. Mathematical Model
• As we know, the equation of a line is
y_2 = a y_1 + c    (1)
where a is the slope of the line and c is the intercept; therefore
a y_1 - y_2 + c = 0
• Let y = (y_1, y_2) and z = (a, -1); then the above equation can be written as
z \cdot y + c = 0    (2)
This equation is derived from 2-dimensional vectors, but in fact it also works for any number of dimensions; equation (2) is also known as the hyperplane equation.
• The unit vector in the direction of y = (y_1, y_2) is written as \hat{y} and is defined as
\hat{y} = \left( \frac{y_1}{\lVert y \rVert}, \frac{y_2}{\lVert y \rVert} \right)    (3)
21. Mathematical Model
• The length of a vector y is calculated as
\lVert y \rVert = \sqrt{y_1^2 + y_2^2 + \dots + y_n^2}
• The dot product of two n-dimensional vectors can be computed as
z \cdot y = \sum_{i=1}^{n} z_i y_i    (4)
Let
f = x \,(z \cdot y + c)
If sign(f) > 0 the point is correctly classified, and if sign(f) < 0 it is incorrectly classified.
• Given a dataset D, we compute f on the training dataset:
f_i = x_i \,(z \cdot y_i + c)
Then F, which is called the functional margin of the dataset, is
F = \min_{i = 1, 2, \dots, n} f_i
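The per-sample margin f_i = x_i (z · y_i + c) and the functional margin F can be sketched directly in code; the weight vector z, bias c, and the toy points below are illustrative values, not learned parameters.

```python
def dot(z, y):
    """Dot product of two equal-length vectors, as in Eq. (4)."""
    return sum(zi * yi for zi, yi in zip(z, y))

def functional_margin(z, c, points, labels):
    """F = min_i x_i (z . y_i + c): the smallest per-sample margin."""
    return min(x * (dot(z, y) + c) for y, x in zip(points, labels))

# Toy 2-D data: two points per class, labels x_i in {+1, -1}.
points = [(2.0, 1.0), (3.0, 2.0), (-1.0, -2.0), (-2.0, -1.0)]
labels = [1, 1, -1, -1]
print(functional_margin((1.0, 1.0), 0.0, points, labels))  # 3.0
```

A positive F means every sample lies on the correct side of the illustrative hyperplane.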
22. Mathematical Model
When comparing hyperplanes, the hyperplane with the largest F is preferentially selected, where F is called the geometric margin of the dataset.
Our objective is to find an optimal hyperplane, which means we need to find the values of z and c of the optimal hyperplane.
The SVM optimization problem is a case of constrained optimization, so Lagrange multipliers are used to solve it.
• The Lagrangian function is
\mathcal{L}(z, c, \lambda) = \frac{1}{2}\, z \cdot z - \sum_{i=1}^{n} \lambda_i \left[ x_i (z \cdot y_i + c) - 1 \right]
With respect to z:
\nabla_z \mathcal{L}(z, c, \lambda) = z - \sum_{i=1}^{n} \lambda_i x_i y_i = 0    (5)
With respect to c:
\nabla_c \mathcal{L}(z, c, \lambda) = \sum_{i=1}^{n} \lambda_i x_i = 0    (6)
23. Mathematical Model
From the two equations (5) and (6) we get
z = \sum_{i=1}^{n} \lambda_i x_i y_i \quad \text{and} \quad \sum_{i=1}^{n} \lambda_i x_i = 0    (7)
Equation (7) only gives the optimal value of z in terms of \lambda, so the value of \lambda must be found; the value of c also needs both z and \lambda.
• After substituting the value of z into the Lagrangian function \mathcal{L}, we obtain the dual optimization problem:
\max_{\lambda} \; \sum_{i=1}^{n} \lambda_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{k=1}^{n} \lambda_i \lambda_k x_i x_k \,(y_i \cdot y_k)    (8)
subject to the constraints \lambda_i \geq 0, \; i = 1, \dots, n, and \sum_{i=1}^{n} \lambda_i x_i = 0.
24. Mathematical Model
Because the constraints contain inequalities, we extend the Lagrange multiplier method to the Karush-Kuhn-Tucker (KKT) conditions. The complementary slackness condition of KKT states that, at the optimal point,
\lambda_i \left[ x_i (z \cdot y_i + c) - 1 \right] = 0    (9)
\lambda_i is positive only for the points closest to the hyperplane; at all other points \lambda_i is equal to 0. So, for those closest points,
x_i (z \cdot y_i + c) - 1 = 0    (10)
• These points are called support vectors: the points closest to the hyperplane. Consistent with equation (5) above,
z = \sum_{i=1}^{n} \lambda_i x_i y_i    (11)
25. Mathematical Model
• To calculate the value of c we start from
x_i (z \cdot y_i + c) - 1 = 0    (12)
• Multiplying equation (12) by x_i on both sides gives
x_i^2 (z \cdot y_i + c) - x_i = 0
where x_i^2 = 1, so
z \cdot y_i + c - x_i = 0
c = x_i - z \cdot y_i    (13)
Then, averaging over the support vectors,
c = \frac{1}{V} \sum_{i=1}^{V} (x_i - z \cdot y_i)    (14)
where V is the number of support vectors. Once we have the hyperplane, we can use it to make predictions.
26. Mathematical Model
• The hypothesis function is
h(y_i) = \begin{cases} +1 & \text{if } z \cdot y_i + c \geq 0 \\ -1 & \text{if } z \cdot y_i + c < 0 \end{cases}    (15)
A point on or above the hyperplane is categorized as class +1 (gesture successfully classified), and a point below the hyperplane is categorized as class -1 (gesture not classified).
So, basically, the goal of the SVM algorithm is to find a hyperplane that separates the data accurately; we need to find the best one, which is often referred to as the optimal hyperplane.
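Equation (15) amounts to taking the sign of z · y + c. A minimal sketch, assuming an illustrative hyperplane z = (1, -1) with c = 0 (these are not values learned by the thesis model):

```python
def dot(z, y):
    """Dot product of two equal-length vectors."""
    return sum(zi * yi for zi, yi in zip(z, y))

def hypothesis(z, c, y):
    """Eq. (15): +1 if z . y + c >= 0, else -1."""
    return 1 if dot(z, y) + c >= 0 else -1

# Two toy points on opposite sides of the hyperplane z . y = 0.
print(hypothesis((1.0, -1.0), 0.0, (2.0, 1.0)))  # 1
print(hypothesis((1.0, -1.0), 0.0, (1.0, 2.0)))  # -1
```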
27. Performance Evaluating Parameters
The objective/quantitative method includes performance evaluation metrics that give statistical results. The quantitative way of assessment includes:
Accuracy
Miss rate
28. Continued…
• Accuracy can be defined as the percentage of correctly classified instances.
o Accuracy = (correctly predicted classes / total testing classes) × 100%.

Acc = \frac{TP + TN}{TP + FP + TN + FN}

where TP, FN, FP and TN represent the number of true positives, false negatives, false positives and true negatives, respectively.
• Miss rate can be defined as the percentage of wrongly classified instances.
o Miss Rate = (wrongly predicted classes / total testing classes) × 100%.

Miss\ Rate = \frac{FP + FN}{TP + FP + TN + FN}
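The two formulas above reduce to a few lines of Python; the confusion counts below are made-up illustrative numbers, not results from the thesis.

```python
def accuracy(tp, fp, tn, fn):
    """Acc = (TP + TN) / (TP + FP + TN + FN)."""
    return (tp + tn) / (tp + fp + tn + fn)

def miss_rate(tp, fp, tn, fn):
    """Miss rate = (FP + FN) / (TP + FP + TN + FN)."""
    return (fp + fn) / (tp + fp + tn + fn)

# Illustrative confusion counts: 90 TP, 4 FP, 5 TN, 1 FN.
print(accuracy(90, 4, 5, 1))   # 0.95
print(miss_rate(90, 4, 5, 1))  # 0.05
```

Note that accuracy and miss rate are complements: they always sum to 1.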
29. Results Analysis
Training accuracy of the proposed IGCS-SVM model in terms of the number of observations as well as the Positive Predictive Value and False Discovery Rate.
The training phase of the proposed model contained 80% of the whole dataset, 8669 samples, to predict seven classes of hand gestures.
Training Results of Proposed Model
32. Testing and Validation Phase
Testing and validation accuracy of the proposed IGCS-SVM model in terms of the number of observations as well as the Positive Predictive Value and False Discovery Rate.
The testing and validation phase of the proposed model contained 20% of the whole dataset, 2168 samples, to predict seven classes of hand gestures.
35. Training And Validation Accuracy of Proposed Model
Accuracy Miss rate
Training 99.2% 0.8%
Validation 99.9% 0.1%
36. Comparison of Proposed IGCS-SVM With Previous Work
Model Accuracy Miss Rate
Benalcazar et al. (2017) [10] 86% 14%
Chawathe (2019) [9] 89% 11%
Lobov et al. (2018) [5] 91.5% 8.5%
Alejandro et al. (2020) [6], CNN Model 94.77% 5.23%
Alejandro et al. (2020) [6], Random Forest Model 95.39% 4.61%
Proposed IGCS-SVM Model 99.9% 0.1%
Reference [5, 6, 9, 10]
37. Conclusion
In this thesis, an IGCS-SVM model is proposed for an intelligent gesture classification system based on electromyography (EMG) signals.
The proposed model collects data from eight EMG sensors and then analyzes it to classify gestures; a support vector machine classifies the hand gestures in this model.
EMG signals are acquired from different muscle locations through the Myo armband (a Thalmic Labs bracelet), and the support vector machine then classifies the acquired signals. The proposed model communicates with computing devices through IoT.
The presented IGCS-SVM model achieved a gesture classification accuracy of 99.9% using SVM.
Computational results show that the support vector machine is a good choice for classifying hand gestures.
The simulation findings show that the suggested methodology produced better outcomes than the previous approaches of Lobov et al. (2018) [5], Alejandro et al. (2020) [6], Chawathe (2019) [9] and Benalcazar et al. (2017) [10].
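As a rough illustration of the classification stage, the sketch below trains scikit-learn's SVC on synthetic 8-channel features; the synthetic clusters are a stand-in for the Myo-armband EMG data, and the model's actual feature extraction is not reproduced here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_channels = 7, 8  # seven gestures, eight EMG sensor channels

# Synthetic, well-separated clusters: one mean vector per gesture class.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, n_channels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 200)

# 80/20 split, mirroring the training/testing partition in the slides.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

The RBF kernel is a common default for multi-class SVM classification; on real EMG data, accuracy depends heavily on the extracted features, not only on the classifier.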
38. Future Work
The present research opens up opportunities for future researchers in the area of human-computer interaction by demonstrating the efficiency of the proposed Intelligent Gesture Classification System empowered with a Support Vector Machine (IGCS-SVM).
In the future, a real-time application will be built using this technique. Furthermore, we will use new classification and feature-extraction algorithms to build models that enhance the performance of the real-time application.
39. References
[1]. Ahsan, M. R., Ibrahimy, M. I., & Khalifa, O. O. (2009). EMG signal classification for human computer interaction: a review. European Journal of Scientific Research, 33(3), 480-501.
[2]. Reaz, M. B. I., Hussain, M. S., & Mohd-Yasin, F. (2006). Techniques of EMG signal analysis: detection, processing, classification and applications. Biological Procedures Online, 8(1), 11-35.
[3]. https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcQAApwTeIx8t4NV9Yz5kA71grP2wepYR1-p8g&usqp=CAU
[4]. https://www.mdpi.com/sensors/sensors-18-00183/article_deploy/html/images/sensors-18-00183-g001.png
[5]. Lobov, S., Krilova, N., Kastalskiy, I., Kazantsev, V., & Makarov, V. A. (2018). Latent factors limiting the performance of
sEMG-interfaces. Sensors, 18(4), 1122.
[6]. Alejandro Mora Rubio, J. A. A. G., Reinel Tabares-Soto, Simón Orozco-Arias, Cristian Felipe Jiménez Varón, Jorge Iván Padilla Buriticá (2020). Identification of Hand Movements from Electromyographic Signals Using Machine Learning. doi: 10.20944/preprints202002.0443.v1
[7]. Benalcázar, M. E., Jaramillo, A. G., Zea, A., Páez, A., & Andaluz, V. H. (2017). Hand gesture recognition using
machine learning and the Myo armband. Paper presented at the 2017 25th European Signal Processing Conference
(EUSIPCO).
40. Continued…
[8]. https://archive.ics.uci.edu/ml/datasets/EMG+data+for+gestures
[9]. Chawathe, S. S. (2019). Hand Gestures from Low-Cost Surface-Electromyographs. IEEE National Aerospace and
Electronics Conference (NAECON).
[10]. Benalcázar, M. E., Motoche, C., Zea, J. A., Jaramillo, A. G., Anchundia, C. E., Zambrano, P., . . . Pérez, M. (2017). Real-
time hand gesture recognition using the Myo armband and muscle activity detection. Paper presented at the 2017
IEEE Second Ecuador Technical Chapters Meeting (ETCM).