In the present paper, electroencephalogram (EEG) data were used for human identification, with sample entropy and graph entropy computed as extracted features. Two classifier types were used: K-Nearest Neighbors (K-NN) and Support Vector Machine (SVM). Python and MATLAB were used in this study; the EEG data were taken from the UCI repository, and MATLAB was used when thirteen channels were applied for feature extraction. The experimental results show that Python classifies the EEG-UCI data better than the MATLAB environment, with K-NN and SVM accuracies of 85.2% and 91.5%, respectively.
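To make the feature-extraction step concrete, here is a minimal pure-Python sketch of sample entropy as commonly defined, SampEn(m, r) = -ln(A/B); the defaults m = 2 and r = 0.2·std are conventional choices, not values taken from the paper:

```python
import math

def sample_entropy(signal, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; r is a fraction of the standard deviation."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    tol = r * std

    def count_matches(length):
        # number of template pairs (i < j) within Chebyshev distance tol
        templates = [signal[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular signal (e.g. a strict alternation) yields a value near zero, while an irregular one yields a much larger value, which is what makes the measure usable as a discriminative feature for a K-NN or SVM classifier.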
This document discusses using artificial neural networks for satellite image classification. It begins with an introduction to remote sensing and the importance of image classification. It then discusses how artificial neural networks can be used for image classification, specifically using the backpropagation algorithm. The backpropagation algorithm allows training of a neural network to classify images into different land use and land cover classes. An example is provided showing an original satellite image and the classified image output from the neural network. The conclusion states that artificial neural networks are well-suited for analyzing remote sensing data and developing image classification algorithms due to their ability to extract information from incomplete inputs.
Computational model for artificial learning using formal concept analysis (Aboul Ella Hassanien)
The document presents a computational model for artificial learning using formal concept analysis. It proposes using formal concept analysis to describe the classification process and derive classification rules. The model was tested on several datasets and showed improved accuracy over support vector machines and classification and regression trees on most datasets based on various performance metrics. ROC curves were also generated to evaluate model performance. The proposed model aims to better understand and model the classification learning processes involved in human intelligence.
This document summarizes a semi-supervised clustering approach for classifying P300 signals for brain-computer interface (BCI) speller systems. It involves using k-means clustering on wavelet features extracted from EEG data, with some data points labeled to initialize the clusters. An ensemble of support vector machines is then trained on the clustered data points to classify new unlabeled P300 signals. The document outlines the P300 speller paradigm used to collect the EEG data, pre-processing steps like filtering and wavelet transformation, the seeded k-means semi-supervised clustering method, and using an ensemble SVM classifier trained on the clustered data for classification.
This document discusses a hybrid approach using genetic algorithms and fuzzy logic to improve anomaly and intrusion detection. It begins with an overview of intrusion detection systems and discusses different types of database anomalies. It then describes techniques for intrusion detection including clustering, genetic algorithms, and fuzzy c-means clustering. The document presents the advantages of using a genetic algorithm for intrusion detection systems. It provides results of experiments measuring fit value and time using the hybrid genetic algorithm and fuzzy approach. The experiments showed this approach can accurately detect different attacks. The conclusion is that the hybrid genetic algorithm and fuzzy method was effective at anomaly-based intrusion detection within a network.
1. The document discusses data hiding techniques for images, specifically uniform embedding. It reviews existing methods like LSB substitution and proposes developing a new technique to select pixels for embedding, reduce embedded text size, and increase confidentiality.
2. It surveys related work on minimizing distortion in steganography, a modified matrix encoding technique for low distortion, and designing adaptive steganographic schemes.
3. The objectives are to develop a new pixel selection technique for embedding, reduce embedded text size, and increase resistance to extraction through high confidentiality. The significance is providing a solution to digital image steganography problems and focusing on choosing pixels to embed text under conditions.
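For reference, the LSB-substitution baseline that the proposal reviews can be sketched in a few lines of Python; the flat byte-level pixel representation and MSB-first bit order here are illustrative assumptions:

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite only the lowest bit
    return stego

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes of hidden data from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Because only the lowest bit of each carrier pixel changes, no pixel value moves by more than 1, which is the property adaptive pixel-selection schemes try to preserve while improving confidentiality.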
IRJET - Object Detection using Deep Learning with OpenCV and Python (IRJET Journal)
This document summarizes research on object detection techniques using deep learning. It discusses using the YOLO algorithm to identify objects in images using a single neural network that predicts bounding boxes and class probabilities. The document reviews prior research on algorithms like R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN and RetinaNet. It then describes the YOLO loss function and methodology for finding bounding boxes of objects in an image. The document concludes that YOLO is well-suited for real-time object detection applications due to its advantages over other algorithms.
A Parallel Architecture for Multiple-Face Detection Technique Using AdaBoost ... (Hadi Santoso)
Face detection is a very important biometric application in the field of image analysis and computer vision. The basic face detection method is the AdaBoost algorithm with cascading Haar-like feature classifiers, based on the framework proposed by Viola and Jones. Real-time multiple-face detection, for instance on high-resolution CCTV, is a computation-intensive procedure; if the procedure is performed sequentially, optimal real-time performance will not be achieved. In this paper we propose an architectural design for a parallel, multiple-face detection technique based on Viola and Jones' framework. To do this systematically, we look at the problem from four points of view, namely: data processing taxonomy, parallel memory architecture, the parallel programming model, and the design of the parallel program. We also build a prototype of the proposed parallel technique and conduct a series of experiments to investigate the gained acceleration.
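The boosting idea underlying the Viola-Jones cascade can be illustrated with a toy AdaBoost over one-dimensional decision stumps; a real detector boosts over Haar-like image features in exactly the same way. This is a didactic sketch, not the parallel implementation the paper proposes:

```python
import math

def train_stump(xs, ys, weights):
    """Best threshold stump (err, threshold, polarity) under weighted 0-1 loss."""
    best = None
    for thr in sorted(set(xs)):
        for polarity in (1, -1):
            preds = [polarity if x >= thr else -polarity for x in xs]
            err = sum(w for p, y, w in zip(preds, ys, weights) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, polarity)
    return best

def adaboost(xs, ys, rounds=5):
    """Boost decision stumps on 1-D data with labels in {-1, +1}."""
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, weights)
        err = max(err, 1e-10)                      # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)    # stump vote weight
        ensemble.append((alpha, thr, pol))
        # re-weight: boost the misclassified samples
        weights = [
            w * math.exp(-alpha * y * (pol if x >= thr else -pol))
            for x, y, w in zip(xs, ys, weights)
        ]
        s = sum(weights)
        weights = [w / s for w in weights]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted stump votes."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

The cascade structure then chains such boosted classifiers so that easy negatives are rejected early, which is what makes the sequential version a bottleneck worth parallelising.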
Brain-Computer Interfaces (BCIs) are communication systems that use brain signals as commands to a device. Despite BCIs being the only means by which severely paralysed people can interact with the world, most effort is focused on improving and testing algorithms offline, without worrying about their validation in real-life conditions. The Cybathlon's BCI race offers a unique opportunity to apply theory in real-life conditions and fill this gap. We present here a neural network architecture for the four-way classification paradigm of the BCI race that is able to run in real time. The procedure for finding the architecture, and the combination of mental commands best suiting this architecture for personalised use, is also described. Using spectral power features and a network with one convolutional layer plus one fully connected layer, we achieve performance similar to that reported in the literature for four-way classification, and show that by following our method we can obtain similar accuracies online and offline, closing this well-known gap in BCI performance.
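The spectral power features mentioned above can be sketched as a naive DFT band-power computation; the sampling rate and band edges below are placeholders, not the paper's settings:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of signal in the [f_lo, f_hi] Hz band via a naive DFT
    (O(n^2), fine for short EEG windows; real pipelines use an FFT)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power
```

A feature vector for the classifier would typically stack such band powers (e.g. mu and beta bands) across channels for each time window.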
Algorithmic Analysis to Video Object Tracking and Background Segmentation and... (Editor IJCATR)
This document discusses algorithms for video object tracking and background segmentation. It analyzes the drawbacks of existing algorithms like partial least squares analysis, hidden Markov models, and CAMSHIFT that are used for object tracking. These include high computational complexity, sensitivity to changes in lighting/scene, and inability to handle occlusion. The document proposes using a Gaussian mixture model (GMM) and level set method for background segmentation, where each pixel is modeled as a mixture of Gaussians. It also proposes using wavelet transforms, specifically the Haar wavelet packet transform, for object tracking. The key advantages of these approaches are their ability to handle multi-modal backgrounds, occlusion, and noise in video signals.
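The per-pixel Gaussian modeling behind the proposed GMM background segmentation can be illustrated with a single-Gaussian simplification (the full method keeps a mixture per pixel); the learning rate and deviation threshold here are illustrative:

```python
import math

def update_background(mean, var, pixel, alpha=0.05, k=2.5):
    """Classify one pixel against a single-Gaussian background model,
    then update the model with a running mean and variance.
    (The paper uses a mixture of Gaussians per pixel; one Gaussian
    keeps the sketch short.)"""
    is_foreground = abs(pixel - mean) > k * math.sqrt(var)
    new_mean = (1 - alpha) * mean + alpha * pixel        # exponential running mean
    diff = pixel - new_mean
    new_var = (1 - alpha) * var + alpha * diff * diff    # exponential running variance
    return is_foreground, new_mean, new_var
```

A pixel far outside the model's confidence band is flagged as foreground, while the slow update lets the model absorb gradual lighting changes, which is the weakness of static-background methods that the GMM approach addresses.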
Objects detection and tracking using fast principal component pursuit and Kalm... (IJECEIAES)
The detection and tracking of moving objects has attracted a lot of attention because of its many computer vision applications. This paper proposes a new algorithm based on several methods for identifying, detecting, and tracking an object, in order to develop an effective and efficient system for several applications. The algorithm has three main parts: the first for background modeling and foreground extraction, the second for smoothing, filtering, and detecting moving objects within the video frame, and the last for tracking and prediction of detected objects. In this work, a new algorithm to detect moving objects from video data is designed around Fast Principal Component Pursuit (FPCP). An optimal filter that performs well in reducing noise, the median filter, is then applied. A Fast Region-based Convolutional Neural Network (Fast R-CNN) is used to add smoothness to the spatial identification of objects and their areas, and the detected object is then tracked by a Kalman filter. Experimental results show that the algorithm adapts to different situations and outperforms many existing algorithms.
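The Kalman tracking step mentioned above reduces, in one dimension, to a predict/update cycle like the following sketch; a real tracker keeps a 4-D state (position and velocity in x and y) with a full covariance matrix, and the noise parameters below are illustrative:

```python
def kalman_step(x, v, p, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    State is (position x, velocity v); p is a scalar position variance;
    q and r are process and measurement noise variances."""
    # predict: propagate the state forward one frame
    x_pred = x + v * dt
    p_pred = p + q
    # update: blend prediction with the measurement z
    gain = p_pred / (p_pred + r)
    innovation = z - x_pred
    x_new = x_pred + gain * innovation
    v_new = v + gain * innovation / dt
    p_new = (1 - gain) * p_pred
    return x_new, v_new, p_new
```

Fed the detector's bounding-box centers as measurements, the filter smooths jitter and can predict the object's position through short detection gaps.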
ON THE IMAGE QUALITY AND ENCODING TIMES OF LSB, MSB AND COMBINED LSB-MSB (ijcsit)
The Least Significant Bit (LSB) algorithm and the Most Significant Bit (MSB) algorithm are steganography algorithms, each with its own demerits. This work therefore proposed a hybrid approach and compared its efficiency with the LSB and MSB algorithms. The LSB and MSB techniques were combined in the proposed algorithm: two bits of each cover-image pixel (the least significant bit and the most significant bit) were replaced with the secret message. Comparisons were made between the proposed algorithm, LSB, and MSB based on Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and encoding time after embedding in digital images. The combined technique produced a stego-image with less distortion in image quality than the MSB technique, independent of the nature of the hidden data; the LSB algorithm, however, produced the best stego-image quality. Larger cover images further improved the combined algorithm's quality, and the combined algorithm had a shorter image and text encoding time. Therefore, a trade-off exists between encoding time and stego-image quality, as demonstrated in this work.
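The MSE and PSNR metrics used for the comparison are standard and easy to reproduce; a quick sketch, assuming 8-bit grayscale pixels flattened to lists:

```python
import math

def mse(original, stego):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)

def psnr(original, stego, max_val=255):
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    e = mse(original, stego)
    return float("inf") if e == 0 else 10 * math.log10(max_val ** 2 / e)
```

Flipping an LSB changes a pixel by at most 1 while flipping an MSB changes it by 128, which is why pure LSB embedding scores a much higher PSNR than MSB embedding, exactly the ordering the paper reports.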
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH (cscpconf)
Data collection is an essential but manpower-intensive procedure in ecological research. The author developed an algorithm that incorporates two important computer vision techniques to automate data cataloging for butterfly measurements: Optical Character Recognition is used for character recognition, and contour detection is used for image processing. Proper pre-processing is first done on the images to improve accuracy. Although there are limitations to Tesseract's detection of certain fonts, overall it can successfully identify words in basic fonts. Contour detection is an advanced technique that can be utilized to measure an image; shapes and mathematical calculations are crucial in determining the precise location of the points on which to draw the body and forewing lines of the butterfly. Overall, 92% accuracy was achieved by the program for the set of butterflies measured.
This chapter introduces the theoretical foundations of knowledge mining and intelligent agents. It discusses key concepts like knowledge, intelligent agents, and the fundamental tasks of knowledge discovery in databases. The chapter also provides an overview of several well-developed intelligent agent methodologies, including ant colony optimization, particle swarm optimization, and evolutionary algorithms that can be used for knowledge mining.
Segmentation and recognition of handwritten digit numeral string using a mult... (ijfcstjournal)
In this paper, a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models has been modeled using a multilayer perceptron neural network. This paper also presents a new technique to remove slope and slant from a handwritten numeral string, to normalize the size of the text images, and to classify them with supervised learning methods. Experimental results on a database of 102 numeral string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
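The connected component labeling (CCL) step used to isolate individual numerals can be sketched as a breadth-first flood fill over a binary image; 4-connectivity is assumed here, though CCL is often run with 8-connectivity:

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling of a binary grid (1 = ink, 0 = background).
    Returns the component count and a grid of labels (0 for background)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1                       # new, unlabeled ink region
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:                       # flood-fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels
```

Each labeled region's bounding box can then be cropped, size-normalized, and fed to the MLP classifier as an isolated digit.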
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND... (IAEME Publication)
Multiple human tracking based on object detection has been a challenge due to its complexity: errors in object detection propagate into tracking errors. In this paper, we propose a tracking method that minimizes the error produced by the object detector. We use RetinaNet as the object detector and the Hungarian algorithm for tracking. The cost matrix for the Hungarian algorithm is calculated using RetinaNet features, bounding-box center distances, and the intersection over union of bounding boxes. We interpolate the missing detections in the last step. The proposed method yields 43.2 MOTA on the MOT16 benchmark.
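The IoU term of the cost matrix, and the assignment it feeds, can be illustrated as follows; note the sketch substitutes a greedy matcher for the Hungarian algorithm the paper actually uses, and the 0.3 threshold is an assumption:

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedy stand-in for the Hungarian assignment: pair each track with
    the highest-IoU unused detection above the threshold."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, threshold
        for di, d in enumerate(detections):
            overlap = iou(t, d)
            if di not in used and overlap > best_iou:
                best, best_iou = di, overlap
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs
```

A full cost matrix would additionally mix in the appearance-feature similarity and center distances the abstract mentions before solving the assignment optimally.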
The document discusses the Mechanical Engineering department at IIT Kanpur. It describes some of the research areas and projects being conducted, including developing train safety systems like wheel impact load detection and derailment detection devices. It also discusses research on measuring wheel technology, onboard diagnosis systems, bogie design, and developing stability control for cars. The department has strong programs in areas like computational mechanics, materials science, robotics, and is growing its nuclear engineering program.
This document presents a method for image upscaling using a fuzzy ARTMAP neural network. It begins with an introduction to image upscaling and interpolation techniques. It then provides background on ARTMAP neural networks and fuzzy logic. The proposed method uses a linear interpolation algorithm trained with an ARTMAP network. Results show the method performs better than nearest neighbor interpolation in terms of peak signal-to-noise ratio, mean squared error, and structural similarity, though not as high as bicubic interpolation. Overall, the fuzzy ARTMAP network provides an effective way to perform image upscaling with fewer artifacts than traditional methods.
A NOVEL BACKGROUND SUBTRACTION ALGORITHM FOR PERSON TRACKING BASED ON K-NN (csandit)
Object tracking can be defined as the process of detecting an object of interest in a video scene and keeping track of its motion, orientation, occlusion, etc. in order to extract useful information. It is a challenging problem and an important task, and many researchers are being drawn to the field of computer vision, specifically to object tracking in video surveillance. The main purpose of this paper is to give the reader an account of the present state of the art in object tracking, together with the steps involved in background subtraction and its techniques. In the related literature we found three main methods of object tracking: the first is optical flow; the second is background subtraction, which is divided into two types presented in this paper; and the last is temporal differencing. We present a novel approach to background subtraction that compares the current frame with a previously constructed background model, so that each pixel of the image can be classified as a foreground or background element; the tracking step then represents our object of interest, a person, by its centroid. The tracking step is divided into two different methods, the surface method and the K-NN method, both of which are explained in the paper. Our proposed method is implemented and evaluated using the CAVIAR database.
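The core of the approach described above, comparing the current frame against a stored background model and then representing the detected person by a centroid, can be sketched as follows (the difference threshold and grayscale-grid representation are illustrative):

```python
def foreground_centroid(background, frame, threshold=30):
    """Pixels differing from the background model beyond the threshold are
    classified as foreground; returns the binary mask and the centroid
    (mean x, mean y) of the foreground pixels, or None if there are none."""
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    ys, xs = [], []
    for y in range(h):
        for x in range(w):
            if abs(frame[y][x] - background[y][x]) > threshold:
                mask[y][x] = 1
                ys.append(y)
                xs.append(x)
    if not xs:
        return mask, None
    return mask, (sum(xs) / len(xs), sum(ys) / len(ys))
```

Tracking then amounts to associating the centroid found in each frame with the one from the previous frame, e.g. by nearest neighbor.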
Application of Artificial Neural Networking for Determining the Plane of Vibr... (IOSRJMCE)
In this paper, a new artificial neural network approach using the feed-forward back-propagation method and the Levenberg-Marquardt backpropagation training function has been developed in Java, whereby, by directly feeding in the RMS and phase values of vibration, the unbalance plane can be detected with minimum error. In a machine fault simulator, RMS and phase values of vibration are collected from four accelerometers placed in the X and Y directions of the left and right bearings. These data are then fed into the neural network for training. In the testing phase of the neural network, the plane of vibration has been determined using the different training algorithms available in MATLAB. Their predictions have been compared with the actual values, the errors for the different training algorithms have been calculated, and a conclusion has been drawn as to the best training function available for this research work.
Text Recognition using Convolutional Neural Network: A Review (IRJET Journal)
This document reviews a system for text recognition using convolutional neural networks. The system uses an artificial neural network and nearest neighbor concepts to develop an optical character recognition (OCR) engine. The OCR engine takes images as input and converts them to soft copies through various processing stages, including preprocessing, segmentation, character recognition, and error detection and correction. It aims to improve on existing OCR engines by reducing errors. The system is intended to be implemented as an Android app to allow offline conversion of printed texts to soft copies. It reviews the methodology and various components of the proposed system, including the neural network architecture and training approach.
IRJET-Artificial Neural Networks to Determine Source of Acoustic Emission and... (IRJET Journal)
This document summarizes research using neural networks to analyze acoustic emission data collected from sensors on a concrete wall during controlled cracking tests. The data is divided into 3 types: Type I from cracking tests, Type II from pulsing a sensor, and Type III from ambient noise. A neural network is trained on Type I data to learn correlations between sensor recordings. It is then tested on the other data types and shown to distinguish Type I from the others, indicating it can identify cracking events. While preliminary results are promising, more research is needed to validate the technique for acoustic emission source identification.
Face Recognition Based Intelligent Door Control System (ijtsrd)
This paper presents an intelligent door control system based on face detection and recognition. The system removes the need for keys, security cards, passwords, or patterns to open the door. The main objective is to develop a simple and fast recognition system for personal identification and face recognition to provide security. A face is a complex multidimensional structure and needs good computing techniques for recognition. The system is composed of two main parts: face recognition and automatic door access control. The face must be detected before it can be recognized; in the face detection step, the Viola-Jones face detection algorithm is applied to detect the human face. Face recognition is implemented using Principal Component Analysis (PCA) and a neural network, with the Image Processing Toolbox in MATLAB 2013a used for the recognition process in this research. A PIC microcontroller, programmed in the MikroC language, is used for the automatic door access control system. The door is opened automatically for a known person according to the result of verification in MATLAB; for an unknown person, the door remains closed. San San Naing, Thiri Oo Kywe, Ni Ni San Hlaing, "Face Recognition Based Intelligent Door Control System", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23893.pdf
Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/23893/face-recognition-based-intelligent-door-control-system/san-san-naing
Recognition of Epilepsy from Non Seizure Electroencephalogram using combinati... (Atrija Singh)
IC3: International IEEE Conference on Contemporary Computing
Noida India
Presented on 10th August 2017.
Topic : Recognition of Epilepsy from Non Seizure Electroencephalogram using combination of Linear SVM and Time Domain Attributes.
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif... (CSCJournals)
In today's era of digitization and fast internet, many videos are uploaded to websites, so a mechanism is required to access them accurately and efficiently. Semantic concept detection achieves this task accurately and is used in many applications such as multimedia annotation, video summarization, indexing, and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from a key-frame or shot of a video and their high-level interpretation as semantics, automatically assigning labels to videos from a predefined vocabulary. This task is considered a supervised machine learning problem, for which the Support Vector Machine (SVM) emerged as the default classifier choice; recently, however, deep Convolutional Neural Networks (CNNs) have shown exceptional performance in this area, though a CNN requires a large dataset for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features, namely color moments, HSV histogram, wavelet transform, grey-level co-occurrence matrix, and edge orientation histogram, are selected as the low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue; the two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using Mean Average Precision on the multi-label dataset, and the performance of the proposed hybrid SVM-CNN framework is comparable to existing approaches.
IRJET - Content based Image Classification (IRJET Journal)
The document discusses content based image classification, which involves grouping large numbers of digital images uploaded daily into categories based on their visual content. It describes how content based image classification systems work by extracting features from images like shape, color, and texture to classify them. The document also outlines some challenges in content based image classification and potential areas of future research like using deep learning approaches.
NEURAL NETWORKS FOR HIGH PERFORMANCE TIME-DELAY ESTIMATION AND ACOUSTIC SOURC... (csandit)
Time-delay estimation is an essential building block of many signal processing applications. This paper follows up on earlier work on acoustic source localization and time-delay estimation using pattern recognition techniques in adverse environments such as reverberant rooms or underwater. It presents unprecedented high-performance results obtained with supervised training of neural networks, which challenge the state of the art, and compares their performance to that of well-known methods such as Generalized Cross-Correlation and Adaptive Eigenvalue Decomposition.
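For comparison with the neural approach, the classical cross-correlation baseline estimates the delay as the lag maximizing the correlation between the two sensor signals; a brute-force sketch (Generalized Cross-Correlation additionally applies a frequency-domain weighting, which is omitted here):

```python
def estimate_delay(x, y):
    """Delay (in samples) of y relative to x, as the lag that maximises
    the time-domain cross-correlation. Brute force over all lags."""
    n = len(x)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-n + 1, n):
        corr = sum(
            x[i] * y[i + lag]
            for i in range(n)
            if 0 <= i + lag < n
        )
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag
```

The time-delay between two microphones, together with their spacing and the speed of sound, then gives the direction of arrival used for source localization.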
IRJET- Spot Me - A Smart Attendance System based on Face Recognition (IRJET Journal)
Overview of Machine Learning and Deep Learning Methods in Brain Computer Inte... (IJCSES Journal)
Research in the field of Brain-Computer Interfaces has been adopting various Machine Learning and Deep Learning techniques in recent times. With the advent of modern BCIs, the devices involved can now detect brain signals more accurately. This paper gives an overview of all the steps involved in applying Machine Learning and Deep Learning methods, from data acquisition to the application of algorithms. It aims to study the techniques currently employed to extract data and features from brain signals, the different algorithms employed to draw insights from the extracted features, and how these can be used in various BCI applications. Through this study, I aim to put forward the current Machine Learning and Deep Learning trends in the field of BCI.
Brain-Computer Interfaces are communication
systems that use brain signals as commands to a device. Despite
being the only means by which severely paralysed people can
interact with the world most effort is focused on improving and
testing algorithms offline, not worrying about their validation in
real life conditions. The Cybathlon’s BCI-race offers a unique
opportunity to apply theory in real life conditions and fills
the gap. We present here a Neural Network architecture for
the 4-way classification paradigm of the BCI-race able to run
in real-time. The procedure to find the architecture and best
combination of mental commands best suiting this architecture
for personalised used are also described. Using spectral power
features and one layer convolutional plus one fully connected
layer network we achieve a performance similar to that in
literature for 4-way classification and prove that following our
method we can obtain similar accuracies online and offline
closing this well-known gap in BCI performances
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...Editor IJCATR
This document discusses algorithms for video object tracking and background segmentation. It analyzes the drawbacks of existing algorithms like partial least squares analysis, hidden Markov models, and CAMSHIFT that are used for object tracking. These include high computational complexity, sensitivity to changes in lighting/scene, and inability to handle occlusion. The document proposes using a Gaussian mixture model (GMM) and level set method for background segmentation, where each pixel is modeled as a mixture of Gaussians. It also proposes using wavelet transforms, specifically the Haar wavelet packet transform, for object tracking. The key advantages of these approaches are their ability to handle multi-modal backgrounds, occlusion, and noise in video signals.
Objects detection and tracking using fast principle component purist and kalm...IJECEIAES
The detection and tracking of moving objects has attracted a lot of attention because of the vast range of computer vision applications. This paper proposes a new algorithm based on several methods for identifying, detecting, and tracking an object, in order to develop an effective and efficient system for several applications. The algorithm has three main parts: the first for background modeling and foreground extraction, the second for smoothing, filtering, and detecting moving objects within the video frame, and the last for tracking and prediction of the detected objects. In this proposed work, a new algorithm to detect moving objects from video data is designed around Fast Principal Component Pursuit (FPCP). We then use an optimal filter that performs well in reducing noise, the median filter. A Fast Region-based Convolutional Neural Network (Fast-RCNN) is used to add smoothness to the spatial identification of objects and their areas. The detected object is then tracked by a Kalman filter. Experimental results show that our algorithm adapts to different situations and outperforms many existing algorithms.
ON THE IMAGE QUALITY AND ENCODING TIMES OF LSB, MSB AND COMBINED LSB-MSBijcsit
The Least Significant Bit (LSB) algorithm and the Most Significant Bit (MSB) algorithm are steganography algorithms, each with its own demerits. This work therefore proposed a hybrid approach and compared its efficiency with the LSB and MSB algorithms. The LSB and MSB techniques were combined in the proposed algorithm: two bits of the cover image (the least significant bit and the most significant bit) were replaced with bits of a secret message. Comparisons were made between the proposed algorithm, LSB, and MSB after embedding in digital images, based on Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and encoding time. The combined
technique produced a stego-image with less distortion in image quality than the MSB technique, independent of the nature of the hidden data; however, the LSB algorithm produced the best stego-image quality. Larger cover images further improved the combined algorithm's quality, and the combined algorithm had a shorter encoding time for images and text. Therefore, a trade-off exists between encoding time and stego-image quality, as demonstrated in this work.
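The combined embedding described above can be sketched as follows for a single 8-bit pixel; this is a minimal illustration of replacing the least and most significant bits with message bits (function names are illustrative, not from the paper):

```python
def embed_bits(pixel, msg_bit_lsb, msg_bit_msb):
    """Hide two message bits in one 8-bit pixel: one in the LSB
    (bit 0) and one in the MSB (bit 7)."""
    pixel &= 0b01111110            # clear bit 0 and bit 7
    pixel |= msg_bit_lsb           # write message bit into the LSB
    pixel |= (msg_bit_msb << 7)    # write message bit into the MSB
    return pixel

def extract_bits(pixel):
    """Recover the (LSB, MSB) message bits from a stego pixel."""
    return pixel & 1, (pixel >> 7) & 1
```

Flipping the MSB changes the pixel value by 128, which is why the MSB component dominates the visible distortion, while the LSB flip changes it by at most 1.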
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for image processing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy was achieved by the program for the set of butterflies measured.
This chapter introduces the theoretical foundations of knowledge mining and intelligent agents. It discusses key concepts like knowledge, intelligent agents, and the fundamental tasks of knowledge discovery in databases. The chapter also provides an overview of several well-developed intelligent agent methodologies, including ant colony optimization, particle swarm optimization, and evolutionary algorithms that can be used for knowledge mining.
Segmentation and recognition of handwritten digit numeral string using a mult...ijfcstjournal
In this paper, the use of Multi-Layer Perceptron (MLP) Neural Network model is proposed for recognizing
unconstrained offline handwritten Numeral strings. The Numeral strings are segmented and isolated
numerals are obtained using a connected component labeling (CCL) algorithm approach. The structural
part of the models has been modeled using a Multilayer Perceptron Neural Network. This paper also
presents a new technique to remove slope and slant from handwritten numeral string and to normalize the
size of text images and classify them with supervised learning methods. Experimental results on a database of
102 numeral-string patterns written by 3 different people show that a recognition rate of 99.7% is obtained
on independent digits contained in the numeral strings, including both skewed and slanted data.
MULTIPLE HUMAN TRACKING USING RETINANET FEATURES, SIAMESE NEURAL NETWORK, AND...IAEME Publication
Multiple human tracking based on object detection has been a challenge due to its
complexity. Errors in object detection would be propagated to tracking errors. In this
paper, we propose a tracking method that minimizes the error produced by the object
detector. We use RetinaNet as the object detector and the Hungarian algorithm for tracking.
The cost matrix for the Hungarian algorithm is calculated using RetinaNet features,
bounding-box center distances, and intersections over union of bounding boxes. We
interpolate the missing detections in the last step. The proposed method yields 43.2
MOTA on the MOT16 benchmark.
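A hedged sketch of the geometric part of such a cost matrix is given below: each entry mixes center distance with (1 - IoU), and a brute-force minimum-cost assignment stands in for the Hungarian algorithm (the paper's cost also uses RetinaNet appearance features, which are omitted here; all names are illustrative):

```python
import itertools

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def center_dist(a, b):
    """Euclidean distance between box centers."""
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((cax - cbx) ** 2 + (cay - cby) ** 2) ** 0.5

def match(tracks, dets):
    """Minimum-cost one-to-one matching of tracks to detections.
    Brute force over permutations stands in for the Hungarian
    algorithm (acceptable only for tiny examples)."""
    cost = [[center_dist(t, d) + (1.0 - iou(t, d)) for d in dets]
            for t in tracks]
    n = len(tracks)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)
```

In practice the brute-force search would be replaced by a proper O(n^3) Hungarian solver, since the permutation count grows factorially.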
The document discusses the Mechanical Engineering department at IIT Kanpur. It describes some of the research areas and projects being conducted, including developing train safety systems like wheel impact load detection and derailment detection devices. It also discusses research on measuring wheel technology, onboard diagnosis systems, bogie design, and developing stability control for cars. The department has strong programs in areas like computational mechanics, materials science, robotics, and is growing its nuclear engineering program.
This document presents a method for image upscaling using a fuzzy ARTMAP neural network. It begins with an introduction to image upscaling and interpolation techniques. It then provides background on ARTMAP neural networks and fuzzy logic. The proposed method uses a linear interpolation algorithm trained with an ARTMAP network. Results show the method performs better than nearest neighbor interpolation in terms of peak signal-to-noise ratio, mean squared error, and structural similarity, though not as high as bicubic interpolation. Overall, the fuzzy ARTMAP network provides an effective way to perform image upscaling with fewer artifacts than traditional methods.
A NOVEL BACKGROUND SUBTRACTION ALGORITHM FOR PERSON TRACKING BASED ON K-NN csandit
Object tracking can be defined as the process of detecting an object of interest in a video scene and keeping track of its motion, orientation, occlusion, etc. in order to extract useful
information. It is a challenging problem and an important task, and many researchers are being drawn to the field of computer vision, specifically to object tracking in video surveillance. The main purpose of this paper is to give the reader an account of the present state of the art in object tracking, together with the steps involved in background subtraction and its techniques. In the related literature we found three main methods of object tracking: the first is optical flow; the second is background subtraction, which is divided into two types presented in this paper; and the last is temporal
differencing. We present a novel approach to background subtraction that compares the current frame with a background model set beforehand, so that each pixel of the image can be classified as a foreground or a background element; the tracking step then represents our object of interest, a person, by its centroid. The tracking step is divided into two different methods, the surface method and the K-NN method, both explained in the paper. Our proposed method is implemented and evaluated using the CAVIAR database.
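The frame-versus-background classification and centroid step described above can be sketched as follows (a minimal illustration with a static background model and a fixed threshold; names and values are illustrative, not from the paper):

```python
def segment_and_centroid(frame, background, thresh=25):
    """Classify each pixel as foreground if it differs from the
    background model by more than `thresh`, then return the binary
    mask and the centroid (row, col) of the foreground pixels."""
    fg = [[abs(f - b) > thresh for f, b in zip(fr, br)]
          for fr, br in zip(frame, background)]
    pts = [(r, c) for r, row in enumerate(fg)
                  for c, on in enumerate(row) if on]
    if not pts:
        return fg, None          # no foreground: nothing to track
    cy = sum(r for r, _ in pts) / len(pts)
    cx = sum(c for _, c in pts) / len(pts)
    return fg, (cy, cx)
```

The centroid returned here is the point a tracker would follow from frame to frame; real systems would additionally update the background model over time.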
Application of Artificial Neural Networking for Determining the Plane of Vibr...IOSRJMCE
In this paper a new approach to artificial neural networks using the feed-forward back-propagation method and the Levenberg-Marquardt back-propagation training function has been developed in Java, whereby, by directly feeding in the RMS and phase values of vibration, the unbalance plane can be detected with minimum error. In a machine fault simulator, RMS and phase values of vibration are collected from four accelerometers placed in the X and Y directions of the left and right bearings. These data are then fed into the neural network for training. In the testing phase, the plane of vibration has been determined using the different training algorithms available in MATLAB. Their predicted values have been compared with the actual values, errors for the different training algorithms are calculated, and a conclusion has been drawn about the best training function available for this research work.
Text Recognition using Convolutional Neural Network: A ReviewIRJET Journal
This document reviews a system for text recognition using convolutional neural networks. The system uses an artificial neural network and nearest neighbor concepts to develop an optical character recognition (OCR) engine. The OCR engine takes images as input and converts them to soft copies through various processing stages, including preprocessing, segmentation, character recognition, and error detection and correction. It aims to improve on existing OCR engines by reducing errors. The system is intended to be implemented as an Android app to allow offline conversion of printed texts to soft copies. It reviews the methodology and various components of the proposed system, including the neural network architecture and training approach.
IRJET-Artificial Neural Networks to Determine Source of Acoustic Emission and...IRJET Journal
This document summarizes research using neural networks to analyze acoustic emission data collected from sensors on a concrete wall during controlled cracking tests. The data is divided into 3 types: Type I from cracking tests, Type II from pulsing a sensor, and Type III from ambient noise. A neural network is trained on Type I data to learn correlations between sensor recordings. It is then tested on the other data types and shown to distinguish Type I from the others, indicating it can identify cracking events. While preliminary results are promising, more research is needed to validate the technique for acoustic emission source identification.
Face Recognition Based Intelligent Door Control Systemijtsrd
This paper presents an intelligent door control system based on face detection and recognition. The system avoids the need for keys, security cards, passwords, or patterns to open the door. The main objective is to develop a simple and fast recognition system for personal identification and face recognition to provide security. The face is a complex multidimensional structure and needs good computing techniques for recognition. The system is composed of two main parts: face recognition and automatic door access control. The face must be detected before the person can be recognized; in the face detection step, the Viola-Jones face detection algorithm is applied to detect the human face. Face recognition is implemented using Principal Component Analysis (PCA) and a neural network. The Image Processing Toolbox in MATLAB 2013a is used for the recognition process in this research. A PIC microcontroller programmed in the MikroC language is used for the automatic door access control system. The door opens automatically for a known person according to the result of verification in MATLAB; for an unknown person, the door remains closed. San San Naing | Thiri Oo Kywe | Ni Ni San Hlaing, "Face Recognition Based Intelligent Door Control System", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23893.pdf
Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/23893/face-recognition-based-intelligent-door-control-system/san-san-naing
Recognition of Epilepsy from Non Seizure Electroencephalogram using combinati...Atrija Singh
IC3: International IEEE Conference on Contemporary Computing
Noida India
Presented on 10th August 2017.
Topic : Recognition of Epilepsy from Non Seizure Electroencephalogram using combination of Linear SVM and Time Domain Attributes.
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif...CSCJournals
In today's era of digitization and fast internet, many videos are uploaded to websites, so a mechanism is required to access these videos accurately and efficiently. Semantic concept detection achieves this task accurately and is used in many applications such as multimedia annotation, video summarization, indexing, and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from a key-frame or shot of a video and their high-level interpretation as semantics: it automatically assigns labels to videos from a predefined vocabulary, a task considered a supervised machine learning problem. The support vector machine (SVM) emerged as the default classifier choice for this task, but recently deep convolutional neural networks (CNNs) have shown exceptional performance in this area; a CNN requires a large dataset for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features such as color moments, HSV histogram, wavelet transform, grey-level co-occurrence matrix, and edge orientation histogram are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue. The two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using Mean Average Precision on the multi-label dataset. The performance of the proposed framework using the hybrid model of SVM and CNN is comparable to existing approaches.
IRJET - Content based Image ClassificationIRJET Journal
The document discusses content based image classification, which involves grouping large numbers of digital images uploaded daily into categories based on their visual content. It describes how content based image classification systems work by extracting features from images like shape, color, and texture to classify them. The document also outlines some challenges in content based image classification and potential areas of future research like using deep learning approaches.
NEURAL NETWORKS FOR HIGH PERFORMANCE TIME-DELAY ESTIMATION AND ACOUSTIC SOURC...csandit
Time-delay estimation is an essential building block of many signal processing applications. This paper follows up on earlier work on acoustic source localization and time-delay estimation
using pattern recognition techniques in adverse environments such as reverberant rooms or underwater; it presents unprecedented high-performance results obtained with supervised training of neural networks, which challenge the state of the art, and compares their performance to that of well-known methods such as the Generalized Cross-Correlation or Adaptive Eigenvalue Decomposition.
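For reference, the plain cross-correlation baseline that the Generalized Cross-Correlation refines can be sketched in a few lines (a brute-force illustration, not the paper's weighted or neural approach; the function name is illustrative):

```python
def estimate_delay(x, y):
    """Brute-force cross-correlation: return the integer d that
    maximizes sum(x[i] * y[i + d]), i.e. the number of samples by
    which y is delayed relative to x."""
    n = len(x)

    def corr(d):
        # correlation at lag d, restricted to valid overlapping samples
        return sum(x[i] * y[i + d] for i in range(n) if 0 <= i + d < n)

    return max(range(-n + 1, n), key=corr)
```

GCC applies a frequency-domain weighting (e.g. the PHAT transform) before locating this peak, which is what makes it robust in reverberant conditions; the sketch above keeps only the unweighted peak search.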
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET Journal
Overview of Machine Learning and Deep Learning Methods in Brain Computer Inte...IJCSES Journal
Research under the field of Brain Computer Interfaces is adapting various Machine Learning and Deep
Learning techniques in recent times. With the advent of modern BCI, the data generated by various devices
is now capable of detecting brain signals more accurately. This paper gives an overview of all the steps
involved in the process of applying Machine Learning as well as Deep Learning methods from Data
Acquisition to application of algorithms. It aims to study techniques currently employed to extract data,
features from brain data, different algorithms employed to draw insights from the extracted features, and
how it can be used in various BCI applications. By this study, I aim to put forward current Machine
Learning and Deep Learning Trends in the field of BCI.
IRJET- Anomaly Detection System in CCTV Derived VideosIRJET Journal
This document describes a proposed system for anomaly detection in CCTV videos using deep learning techniques. The system has two main components: 1) feature extraction using convolutional neural networks to learn representations of normal behavior from training videos, and 2) an anomaly detection classifier to identify abnormal events in new videos based on the learned features. Several related works incorporating techniques like k-means clustering, decision trees, and neural networks for video-based anomaly detection are also reviewed. The methodology section outlines the overall framework, including preprocessing steps and separate training and testing phases to extract normal features and then detect anomalies.
ANIMAL SPECIES RECOGNITION SYSTEM USING DEEP LEARNINGIRJET Journal
The document describes an animal species recognition system using deep learning. The system uses a convolutional neural network trained on the ImageNet dataset to extract features from animal images. It then classifies the animals and identifies their species with high accuracy, even with limited training samples. The system is implemented in an app called Imagenet of Animals to allow users to easily identify animal species from pictures. It achieves accurate recognition by leveraging transfer learning from large pre-trained models like GoogleNet Inception v4.
IRJET- Design an Approach for Prediction of Human Activity Recognition us...IRJET Journal
The document proposes a framework for human activity recognition using smartphones. It involves collecting data from a smartphone's accelerometer and gyroscope sensors worn on the waist during various activities of daily living. The data is preprocessed and classified using machine learning algorithms like Naive Bayes, logistic regression, and SVM. The proposed framework first loads and preprocesses the sensor data, then generates features before splitting the data into training and test sets. Various classifiers are applied and evaluated to select the best performing one for activity recognition. The authors conclude that implementing tri-axial acceleration from sensors provides different accuracy for different algorithms, with SVM achieving maximum accuracy in previous work.
Automatic Selection of Open Source Multimedia Softwares Using Error Back-Prop...IJERA Editor
Open source opens a new era by providing software licenses to users free of cost, an advantage over paid licensed software. For multimedia applications many versions of software are available, and it is a problem for users to select software compatible with their own systems. Most of the time, surfing for software returns a huge list in response, and selecting the particular software that suits the system from such a big list is the biggest challenge users face. This work focuses on the existing open source software that is widely used, and designs an automatic system for selecting particular open source software according to the compatibility of the user's own system. In this work, an error back-propagation based neural network is designed in MATLAB for automatic selection of open source software. The system provides the open source software name after taking information from the user. A regression coefficient of 0.93877 is obtained; the results are up to the mark and can be utilized for fast and effective software search.
IRJET- A Survey on Medical Image Interpretation for Predicting PneumoniaIRJET Journal
This document summarizes research on using machine learning and deep learning techniques to interpret medical images and predict pneumonia. It first discusses how medical image analysis is an active field for machine learning. It then reviews several related studies on using convolutional neural networks (CNNs) and transfer learning to classify chest x-rays and detect pneumonia. Specifically, it examines research on developing CNN models for pneumonia classification and using pre-trained CNN architectures like VGG16, VGG19, and ResNet with transfer learning. The document concludes that computer-aided diagnosis systems using deep learning can provide accurate predictions to assist radiologists in pneumonia diagnosis from chest x-rays.
The document describes a proposed real-time sign language detection system using machine learning. The system would use images captured by a webcam to detect gestures in sign language and translate them to text in real-time. The proposed system would be built using the Single Shot Detection algorithm and TensorFlow Object Detection API. A dataset of images of 5 different signs would be created and labelled using LabelImg software. 13 images per sign would be used to train the model and 2 images per sign to test it. The system aims to help deaf people communicate without requiring an expensive human interpreter.
Image Recognition Expert System based on deep learningPRATHAMESH REGE
The document summarizes literature on image recognition expert systems and deep learning. It discusses two papers:
1. The Low-Power Image Recognition Challenge which established a benchmark for comparing low-power image recognition solutions based on both accuracy and energy efficiency using datasets like ILSVRC.
2. The role of knowledge-based systems and expert systems in automatic interpretation of aerial images. It discusses techniques like semantic networks, frames and logical inference used to solve ill-defined problems with limited information. Frameworks like the blackboard model, ACRONYM and SIGMA are discussed.
IRJET- Recognition of Handwritten Characters based on Deep Learning with Tens...IRJET Journal
This paper proposes a convolutional neural network model to recognize handwritten digits using the MNIST dataset. The model is built using TensorFlow and consists of convolutional, pooling and fully connected layers. The model is trained on 60,000 images and tested on 10,000 images, achieving 98% accuracy on the training set and classifying digits with low error of 0.03% on the test set. Previous methods for handwritten digit recognition are discussed and the CNN approach is shown to provide superior performance with faster training times compared to other models.
A Comparative Study of Machine Learning Algorithms for EEG Signal Classificationsipij
In this paper, different machine learning algorithms such as Linear Discriminant Analysis, Support vector
machine (SVM), Multi-layer perceptron, Random forest, K-nearest neighbour, and Autoencoder with SVM
have been compared. This comparison was conducted to seek a robust method that would produce good
classification accuracy. To this end, a robust method of classifying raw Electroencephalography (EEG)
signals associated with imagined movement of the right hand and relaxation state, namely Autoencoder
with SVM has been proposed. The EEG dataset used in this research was created by the University of
Tubingen, Germany. The best classification accuracy achieved was 70.4% with SVM through feature
engineering. However, our proposed method of autoencoder in combination with SVM produced a similar
accuracy of 65% without using any feature engineering technique. This research shows that this system of
classification of motor movements can be used in a Brain-Computer Interface system (BCI) to mentally
control a robotic device or an exoskeleton.
End-to-end deep auto-encoder for segmenting a moving object with limited tra...IJECEIAES
The document proposes two end-to-end deep auto-encoder approaches for segmenting moving objects from surveillance videos when limited training data is available. The first approach uses transfer learning with a pre-trained VGG-16 model as the encoder and its transposed architecture as the decoder. The second approach uses a multi-depth auto-encoder with convolutional and upsampling layers. Both approaches apply data augmentation techniques like PCA and traditional methods to increase the training data size. The models are trained and evaluated on the CDnet2014 dataset, achieving better performance than other models trained with limited data.
A feature selection method based on auto-encoder for internet of things intru...IJECEIAES
The evolution in gadgets, where various devices such as sensors, cameras, smartphones, and others have become connected to the internet, has led to the emergence of the internet of things (IoT). As with any network, security is the main issue facing IoT. Several studies have addressed the intrusion detection task in IoT, the majority of them utilizing different statistical and bio-inspired feature selection techniques. Deep learning is a family of techniques that has demonstrated remarkable performance in classification, and its emergence has led to new neural network architectures designed for the feature selection task. This study proposes a deep learning architecture known as the auto-encoder (AE) for the task of feature selection in IoT intrusion detection. A benchmark dataset of IoT intrusions has been considered in the experiments. The proposed AE has been used for the feature selection task along with a simple neural network (NN) architecture for the classification task. Experimental results showed that the proposed AE achieved an accuracy of 99.97% with a false alarm rate (FAR) of 1.0. The comparison against the state of the art proves the efficacy of the AE.
Tifinagh handwritten character recognition using optimized convolutional neu...IJECEIAES
Tifinagh handwritten character recognition has been a challenging problem due to the similarity and variability of its alphabets. This paper proposes an optimized convolutional neural network (CNN) architecture for handwritten character recognition. The suggested model of CNN has a multi-layer feedforward neural network that gets features and properties directly from the input data images. It is based on the newest deep learning open-source Keras Python library. The novelty of the model is to optimize the optical character recognition (OCR) system in order to obtain best performance results in terms of accuracy and execution time. The new optical character recognition system is tested on a customized dataset generated from the amazigh handwritten character database. Experimental results show a good accuracy of the system (99.27%) with an optimal execution time of the classification compared to the previous works.
Abstract: The processing power of computing devices has increased with number of available cores. This paper presents an approach
towards clustering of categorical data on multi-core platform. K-modes algorithm is used for clustering of categorical data which
uses simple dissimilarity measure for distance computation. The multi-core approach aims to achieve speedup in processing. Open
Multi Processing (OpenMP) is used to achieve parallelism in k-modes algorithm. OpenMP is a shared memory API that uses
thread approach using the fork-join model. The dataset used for experiment is Congressional Voting Dataset collected from UCI
repository. The dataset contains members' votes in categorical format, provided as CSV. The experiment is performed for
an increasing number of clusters and increasing dataset sizes.
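The simple matching dissimilarity used by k-modes, and the record-to-mode assignment step that OpenMP would parallelize across threads, can be sketched serially as follows (a minimal illustration; names are illustrative, not from the paper):

```python
def matching_dissimilarity(a, b):
    """Number of attributes on which two categorical records disagree
    (the simple matching measure used by k-modes)."""
    return sum(x != y for x, y in zip(a, b))

def assign_to_modes(records, modes):
    """One k-modes assignment step: each record joins its nearest mode.
    This per-record loop is independent, which is exactly what makes
    it a natural candidate for an OpenMP parallel-for in C/C++."""
    return [min(range(len(modes)),
                key=lambda k: matching_dissimilarity(r, modes[k]))
            for r in records]
```

A full k-modes iteration would follow this assignment with a mode update (per-cluster attribute-wise majority vote) and repeat until the assignments stop changing.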
IRJET - A Novel Approach for Software Defect Prediction based on Dimensio...IRJET Journal
This document presents a novel approach for software defect prediction using dimensionality reduction techniques. The proposed approach uses an artificial neural network to extract features from initial change measures, and then trains a classifier on the extracted features. This is compared to other dimensionality reduction techniques like principal component analysis, linear discriminant analysis, and kernel principal component analysis. Five open source datasets from NASA are used to evaluate the different techniques based on accuracy, F1 score, and area under the receiver operating characteristic curve. The results show that the artificial neural network approach outperforms the other dimensionality reduction techniques, and kernel principal component analysis performs best among those techniques. The document also discusses related work on using machine learning for software defect prediction.
Performance Comparison between Pytorch and Mindsporeijdms
Deep learning has been well used in many fields. However, there is a large amount of data when training neural networks, which makes many deep learning frameworks appear to serve deep learning practitioners, providing services that are more convenient to use and perform better. MindSpore and PyTorch are both deep learning frameworks. MindSpore is owned by HUAWEI, while PyTorch is owned by Facebook. Some people think that HUAWEI's MindSpore has better performance than FaceBook's PyTorch, which makes deep learning practitioners confused about the choice between the two. In this paper, we perform analytical and experimental analysis to reveal the comparison of training speed of MIndSpore and PyTorch on a single GPU. To ensure that our survey is as comprehensive as possible, we carefully selected neural networks in 2 main domains, which cover computer vision and natural language processing (NLP). The contribution of this work is twofold. First, we conduct detailed benchmarking experiments on MindSpore and PyTorch to analyze the reasons for their performance differences. This work provides guidance for end users to choose between these two frameworks.
Using K-Nearest Neighbors and Support Vector Machine Classifiers in Personal Identification based on EEG Signals
Shaymaa Adnan Abdulrahman
Department of Computer Engineering, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
PhD Student at Ain Shams University, Egypt
Shaymaaa416@gmail.com

Mohamed Roushdy
Faculty of Computers & Information Technology, Future University in Egypt, New Cairo, Egypt
Mohamed.Roushdy@fue.edu.eg

Abdel-Badeeh M. Salem
Department of Computer Science, Ain Shams University, Cairo, Egypt
absalem@cis.asu.edu.eg
Abstract—In the present paper, electroencephalogram (EEG) data have been used for human identification by computing sample entropy and graph entropy as extracted features. Two classifier types were used: K-Nearest Neighbors (K-NN) and Support Vector Machine (SVM). Both Python and MATLAB software were used in this study, and the EEG data were collected from the UCI repository. MATLAB was used when thirteen channels were applied for feature extraction. The experimental results show that Python classifies the EEG-UCI data better than the MATLAB environment, where the accuracies of KNN and SVM were 85.2% and 91.5%, respectively.
Keywords—Biometrics, K-Nearest Neighbors, Support Vector Machines, Electroencephalogram Signals, Python, Human identification
I. INTRODUCTION
Human identification can be considered the first and most important step in a validation process for use in a security system. The word "identification" is sometimes confused with two other security-related terms, authorization and authentication. In fact, identification can be defined as the detection or recognition of a certain person; authentication is the verification of an individual's claimed identity; and authorization is the official approval [1]. Human identification based on a special behavioral or physiological feature of a person is referred to as biometrics. Research on biometrics based on EEG signals has been increasing. The reason for this increase is that the EEG transmits information from the human brain; this information is individual, so it can be used as a biometric [2]. EEG signals are among the most reliable measures and are hard to reproduce; in addition, they cannot be stolen or obtained under threat. A large number of investigations have been carried out regarding identification through the use of various programming environments such as MATLAB, Python, Java, C#, etc. Previous studies in this area apply different methods for feature extraction and different methods for the classification process, especially in the Python environment.
The contribution of the present study is a comparison between the Python environment and MATLAB software when two different methods (KNN and SVM) are used as classifiers for identifying a person. Previous work carried out the overall process in MATLAB. In contrast, for implementing and evaluating the suggested technique, two distinct programming environments were utilized: preprocessing and feature extraction in MATLAB, and classification and evaluation in Python with the Scikit-learn toolbox, which gives access to built-in machine learning functions [12].
This paper is structured as follows: Section 2: Background; Section 3: Overview of data analysis with the Python environment; Section 4: Proposed work; Section 5: Results and discussion; Section 6: Conclusion.
II. BACKGROUND
Hu (2018) [3] suggested using EEG to recognize gender. Four different approaches were used for feature extraction (fuzzy entropy, sample entropy, approximate entropy, and spectral entropy), while different types of classifier were applied to EEG signals collected from twenty-eight subjects. Shaymaa Adnan Abdulrahman et al. (2019) [1] used electroencephalography for human identification; sample entropy and horizontal visibility graphs were used for feature extraction, and the accuracy with horizontal visibility graphs was much better than with sample entropy when the Machine Learning Repository (UCI) was used as
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 18, No. 5, May 2020
29 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
the dataset. Galbally et al. (2012) [4] investigated twenty-two image quality measures (such as occlusion, focus, motion, and pupil dilation); sample entropy was implemented for feature extraction, and the best features were chosen via sequential floating feature selection to feed a quadratic discriminant classifier. A Smart-Box for face recognition was implemented by Akhil Goel et al. (2018) [5]. The Smart-Box consists of three adversarial perturbation modules (attack generation, detection, and mitigation), using images of a single individual from the Yale Face Database; the accuracy was 45% and identification was more than 90% when the DeepFool algorithm was applied. A novel BCI P300 dataset collected using EEG signals for identification was suggested by Santosh Thakur et al. (2020) [6]; a nonparametric weighted feature extraction approach was implemented for feature extraction, and a KNN classifier with a rule set was applied for identification, as shown in Table 1.
TABLE I. PREVIOUS WORKS FOR HUMAN IDENTIFICATION
Table 1 shows some of the different methods used in the classification process. The datasets were taken from different types of biometrics (face, EEG, iris); some of these data were taken directly from users, using electrodes to measure brain signals. Across the previous works studied, the reported accuracy ranged between 92% and 97%.
III. OVERVIEW OF DATA ANALYSIS WITH THE PYTHON ENVIRONMENT
Python is a programming language that can be used for mathematical and statistical calculations and for many operations such as classification, preprocessing, data extraction, prediction, etc. [7]. Python is considered one of the simplest and most powerful programming languages; it has bridged the gap between languages such as C and shell programs. One of the advantages of Python is that it is fast to learn. The Python interpreter is easily extended with new data types and functions implemented in C, and Python is also suitable as an extension language for highly customizable C applications such as window managers or editors (https://www.python.org/). Python is available for a variety of operating systems, like Amoeba, UNIX, Apple Macintosh OS, and MS-DOS. Moreover, it offers many high-level libraries (for the detailed benefits of Python, see python.org/about/), like SciPy (scipy.org), which allow the user to run MATLAB-like code after a little alteration. For signals and neuroimaging data such as EEG, ECG, EMG, etc., there is a set of toolboxes dealing with this type of data (http://nipy.sourceforge.net/). We mention some of these below:
1) Scikit-learn toolbox: a Python module containing a large number of machine learning algorithms. This package focuses on delivering machine learning to non-specialists through a high-level general-purpose language [7]. Emphasis has been placed on performance, documentation, ease of use, and consistency of the API. It has minimal dependencies and is distributed under the simplified BSD license, which encourages its use in academic as well as commercial settings (scikit-learn.sourceforge.net). The scikit-learn package can be described as a rich environment providing state-of-the-art implementations of numerous machine learning algorithms while keeping an easy-to-use, closely integrated interface. Scikit-learn differs from other machine learning toolboxes in Python for many reasons:
• It incorporates compiled code for efficiency, in contrast to MDP [Zito et al., 2008] [8] and PyBrain [Schaul et al., 2010] [9].
• It is distributed under the Berkeley Software Distribution (BSD) license.
• It focuses on imperative programming.
• It can evaluate the efficiency of an estimator or select parameters, with the optional use of cross-validation, which distributes the computation to numerous cores.
• It depends only on NumPy and SciPy, which facilitates easy distribution.
Authors | Feature extraction | Classifier | Dataset | Biometric type | Accuracy
[Hu, Jianfeng, 2018] | sample entropy; fuzzy entropy; approximate entropy; spectral entropy | KNN; RF; DT; QDA | 28 subjects, using electrodes | EEG | 0.99; 0.949; 0.961; 0.966
[Shaymaa Adnan Abdulrahman et al., 2018] | Sample entropy + horizontal visibility graphs | KNN | UCI database | EEG | 92.6%; 97.4%
[Galbally et al., 2012] | Sample entropy | - | UCI dataset | Iris | -
[Akhil Goel et al., 2018] | - | - | Yale Face Database | Face recognition | -
[Santosh Thakur et al., 2020] | Nonparametric weighted feature extraction | KNN | 20 subjects, using electrodes | EEG | 92.46%
Scikit-learn includes a large group of statistical learning approaches, both unsupervised and supervised. In addition, its application to neuroimaging information has provided versatile tools for studying the human brain. A wide variety of tools can be found inside Scikit-learn [13].
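As a minimal sketch of the scikit-learn workflow described above, the snippet below evaluates the two classifier types used later in this paper (K-NN with k = 1, and SVM) with 5-fold cross-validation. The data here is synthetic stand-in data, not the EEG features used in the study, so the printed accuracies are illustrative only.

```python
# Compare K-NN (k=1) and SVM with 5-fold cross-validation on synthetic
# stand-in data (13 columns, mimicking one feature per EEG channel).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 13))            # synthetic feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary target

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=1)),
                  ("SVM", SVC())]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(name, round(scores.mean(), 3))
```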
2) PyEEG toolbox: an open-source Python module for EEG that is used for feature extraction. This toolbox can also be applied to the analysis of other physiological signals that can be treated as time series, such as MEG signals, which represent the magnetic fields induced by neural electrical activity [14]. The PyEEG framework focuses only on extracting features from EEG/MEG segments; it contains no functions for importing data of different formats or for exporting features to a classifier, as shown in Figure 1. PyEEG offers numerous parameters and different algorithms for a given characteristic, such as Relative Intensity Ratio, Power Spectral Intensity (PSI), Petrosian Fractal Dimension, and Higuchi Fractal Dimension (https://www.hindawi.com/). Numerical values of a feature obtained using PyEEG can differ from those obtained with other toolboxes; therefore, programmers may need to use non-default parameter values to meet their requirements.
Figure 1: Framework of PyEEG
3) Wyrm toolbox: an open-source BCI toolbox in Python. This toolbox can be applied to a wide range of neuroscience problems; it analyzes and visualizes neurophysiological data in real-time settings (https://github.com/bbci/wyrm), such as online Brain-Computer Interface (BCI) applications. As shown in Figure 2, rows contain a number of channels (F7, F8, C3, C4, ...), while columns refer to real time [10].
Figure 2: Visualization of the Data object and attributes of the Wyrm toolbox
4) MNE-Python toolbox: an open-source software package used for data preprocessing, statistical analysis, estimation of functional connectivity between distributed brain areas, and source localization. One advantage of MNE is that it can access preprocessed data easily and quickly; this feature helps users quickly reproduce the approaches and methods of other researchers (http://martinos.org/mne). A distinctive feature of MNE-Python is that it provides modules with graphical user interfaces (GUIs), valuable for inspecting and exploring data [11], as shown in Figure 3.
Figure 3: GUI applications provided by MNE-Python
5) PyMVPA toolbox: also open-source software built in the Python environment, used for applying classification-based analysis techniques to fMRI datasets. This toolbox is usually used to access machine learning packages written in many different computing environments and programming languages through its interfaces [15]. The functions inside this toolbox help researchers easily conduct noise perturbation analyses. PyMVPA is free open-source software available from www.pymvpa.org.
IV. PROPOSED WORK
A. Data set
The dataset utilized in the present study was obtained from the UCI repository [16][17]. The dataset includes twelve input feature vectors and a single target vector. The input vectors were acquired by applying wavelet packet analysis to the original signal in the 7-13 Hz frequency band. The target vector is the planning or relaxed state. The training data include 91 samples and
the testing data include 91 samples as well, for a total of 182 samples. Because the dataset utilized in [18][19] was split into 50% test data and 50% training data, a similar number of testing and training samples was used in our study. When running this experiment, thirteen channels (AF8, C1, C2, C3, C4, CP1, CP5, CP6, FC5, FT7, P8, PO8, PZ) were used for feature extraction. Figure 4 displays the proposed research methodology of this work.
Figure 4: Diagram of the proposed approach
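The 50%/50% split described above can be sketched as follows. The data here is synthetic stand-in data (182 samples of 12 features each); the paper's actual vectors come from wavelet packet analysis of EEG in the 7-13 Hz band.

```python
# Sketch of a 50%/50% split: 182 samples -> 91 training and 91 testing.
# X and y below are synthetic placeholders, not the UCI EEG data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(182, 12))       # 12 input feature vectors per sample
y = rng.integers(0, 2, size=182)     # target: planning vs. relaxed state

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
print(X_train.shape, X_test.shape)   # (91, 12) (91, 12)
```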
B. Feature extraction
Sample entropy is one way of extracting information from raw data. It was proposed in 2000 by Richman and Moorman to improve the approximate entropy approach, and it is a nonlinear dynamic parameter for measuring the complexity of a sequence [20]. Sample entropy takes three input parameters: r, the similarity criterion; m, the embedding dimension; and n, the time series length [19]. In the present paper two parameter settings were applied, Se1: m = 2, r = 0.15 and Se2: m = 2, r = 0.20, extracted from every epoch of the EEG signals. These values were chosen in order to compare this experiment with previous studies [19]. We applied sample entropy to 13 channels for feature extraction. Sample entropy is the negative natural logarithm of the conditional likelihood that two sequences which are similar for m points remain similar at the following point, where r is the similarity criterion and m is the data segment's length [21]. The mathematical expression
of sample entropy may be represented as in Equation (1), according to [19]:

$$\mathrm{SampEn}(m, r, N) = -\ln\left(\frac{A^{m}(r)}{B^{m}(r)}\right) \qquad (1)$$

where $B^{m}(r)$ is the probability that two sequences match for m points, while $A^{m}(r)$ stands for the likelihood that two sequences match for m+1 points. Given N data points from the time series x(n) = x(1), x(2), x(3), ..., x(N), the m-dimensional template vectors are $X_m(i) = [x(i), x(i+1), \ldots, x(i+m-1)]$ for $1 \le i \le N - m + 1$.
There are many techniques for calculating graph entropy, depending on edges or vertices. This work defines graph entropy [22] with the formula of Shannon's entropy (Clarke 1968), given in Equation (4):

$$\mathrm{GraphEntropy} = -\sum_{k} p(k)\,\log_2 p(k) \qquad (4)$$

where p(k) is the probability of the k-th value (e.g., vertex degree) in the graph.
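A minimal, self-contained sketch of the two feature extractors of Equations (1) and (4) is given below. The helper names `sample_entropy` and `graph_entropy` are our own, not the authors' code, and the tolerance is taken as r times the standard deviation of the series, a common convention that the paper does not state explicitly.

```python
# Illustrative implementations of Eq. (1) (sample entropy) and Eq. (4)
# (Shannon-style graph entropy). Helper names are hypothetical.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r, N) = -ln(A/B), with tolerance r * std(x) (assumed)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def match_count(mm):
        num = n - m  # same number of templates for both lengths
        t = np.array([x[i:i + mm] for i in range(num)])
        count = 0
        for i in range(num - 1):
            # Chebyshev distance from template i to every later template
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = match_count(m)       # pairs matching for m points
    a = match_count(m + 1)   # pairs matching for m + 1 points
    return -np.log(a / b)

def graph_entropy(degrees):
    """Shannon entropy of a distribution, e.g. vertex degrees of a graph."""
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))
```

As a sanity check, a perfectly regular (constant) series yields a sample entropy of 0, and a uniform two-valued degree distribution yields a graph entropy of 1 bit.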
C. Classification Tool
C.1 KNN Classification
k-NN is one of the techniques used in supervised learning. The basic principle of this method is to classify data by calculating the k closest neighbors to a
[Figure 4 content: Data-set → 13 channels (AF8, C1, C2, C3, C4, CP1, CP5, CP6, FC5, FT7, P8, PO8, PZ) → MATLAB environment: feature extraction (Sample-Entropy, Graph-Entropy) → Python environment: classification (K-NN, SVM) → Accuracy]
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 18, No. 5, May 2020
32 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
data point and then assigning that point to the most common class among those k neighbors. Euclidean distance was applied as the distance metric:
Euclidean distance d(X1, X2) = sqrt( Σ_{i=1}^{n} (x1i − x2i)^2 ) .......(5)
where X1 = (x11, x12, ..., x1n) and X2 = (x21, x22, ..., x2n).
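As an illustration only (not the scikit-learn implementation used later), the k-NN rule with the Euclidean distance of equation (5) can be sketched as:

```python
import numpy as np

def knn_predict(Xtrain, ytrain, x, k=1):
    """Classify x by majority vote among its k nearest training points."""
    Xtrain = np.asarray(Xtrain, dtype=float)
    ytrain = np.asarray(ytrain)
    # Euclidean distance from x to every training sample (equation 5)
    d = np.sqrt(((Xtrain - np.asarray(x, dtype=float)) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]           # indices of the k closest neighbours
    labels, counts = np.unique(ytrain[nearest], return_counts=True)
    return labels[np.argmax(counts)]      # most common class among them
```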
The KNN method was trained through the KNeighborsClassifier module of the Python library scikit-learn (sklearn.neighbors) [23], with the following settings: the number of neighbors k is 1, and 5-fold cross-validation was utilized for calculating the accuracy. The Python-based implementation code to find the optimal parameter values for the K-NN model is presented in Figure 5, while the Python-based implementation of the K-NN classification model is illustrated in Figure 6 [29].
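The 5-fold cross-validation described above can be sketched independently of Figure 5; the synthetic 13-feature data set below is only a stand-in for the real EEG feature matrix:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary data with 13 features, standing in for the 13-channel EEG features
X, y = make_classification(n_samples=200, n_features=13, random_state=1)

# k = 1 neighbour, accuracy estimated with 5-fold cross-validation
scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=5)
mean_accuracy = scores.mean()
```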
K-NN (OPTIMAL PARAMETER) [29]
# Configure the matplotlib library for drawing in-line graphs on the web page
get_ipython().magic('matplotlib inline')
# Import the needed packages
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
# Load the EEG Eye State data set from the CSV file into pandas
eeg = pd.read_csv('EEG-EyeState.csv')
# Print a few records from the EEG Eye State data set
print(eeg.head())
# Extract the independent (predictor) variables (X) and the dependent
# (predicted) variable (y) from the EEG Eye State data set
y = eeg.iloc[:, 12]
X = eeg.iloc[:, 0:12]
# Compute summary statistics and round them to 2 digits
eeg.describe().round(2).transpose()
# Compute correlations amongst all of the variables (independent as well as dependent)
eeg.corr().round(2)
# Plot the scatter matrix of all the independent variables, coloured by the dependent one
sm = pd.plotting.scatter_matrix(eeg, c=y, figsize=[20, 20], s=150, marker='D')
# Split the full data set into a training data set (50%) and a testing data set (50%)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.5,
                                                random_state=21, stratify=y)
# The number of neighbours is searched from 1 to 9 inclusive
param_grid = {'n_neighbors': np.arange(1, 10)}
Figure 5: optimum k parameter using Python
KNN Classification [29]
# Create the K-NN classifier
knn = KNeighborsClassifier()
# Perform a grid search with the K-NN approach to find the number of neighbours
# for which the approach gives the highest accuracy, with cross-validation = 12
knn_cv = GridSearchCV(knn, param_grid, cv=12)
# Build the K-NN classification model
knn_cv.fit(Xtrain, Ytrain)
# Print the optimal number of neighbours and the best score
# achieved by the K-NN model on the training set
print(knn_cv.best_params_)
print(knn_cv.best_score_)
# Perform prediction (i.e. classification) on the testing data set
Ypred = knn_cv.predict(Xtest)
# Calculate the prediction score on the testing data set
knn_cv.score(Xtest, Ytest)
# Compute the confusion matrix of actual vs. predicted classes on the testing data set
print(confusion_matrix(Ytest, Ypred))
# Print the classification report of actual vs. predicted classes on the testing data set
print(classification_report(Ytest, Ypred))
Figure 6 : pseudo-code of K-NN Classification
C.2 SVM Classification
The second technique used in this study is SVM, one of the most widely used methods in the field of supervised learning. SVMs perform classification by detecting the hyperplane that maximizes the margin between two linearly separable classes in a binary problem. The support vector machine technique was proposed by Cortes and Vapnik [24] and works on the basis of structural risk minimization. An SVM classifier with an RBF kernel was applied to two different binary classification problems in this study. Equation (6) represents the SVM classifier. The classifiers were implemented using the open-source scikit-learn library for Python [25][28].
f(x) = sign( w · x + b ) .......(6)
where w is the normal to the separating hyperplane and b is the bias. For a set of data points (x_i, y_i) in which y_i ∈ {+1, −1}, the margin between the two classes is given by 2/||w||. The optimum margin is computed by minimizing (1/2)||w||^2 subject to y_i(w · x_i + b) ≥ 1, a constrained optimization problem that is additionally addressed by reducing it to a quadratic programming optimization problem.
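A minimal end-to-end sketch of the RBF-kernel SVM classifier used in this study, again on a synthetic stand-in for the EEG features (the real inputs are the Sample-Entropy and Graph-Entropy values described above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary problem with 13 features in place of the EEG feature matrix
X, y = make_classification(n_samples=400, n_features=13, random_state=0)
Xtrain, Xtest, ytrain, ytest = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# RBF-kernel SVM, as in the study
clf = SVC(kernel='rbf').fit(Xtrain, ytrain)
accuracy = clf.score(Xtest, ytest)
```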
A Python scikit-learn toolbox-based implementation of the SVM classification model is illustrated in Figure 7 [30].
Pseudo-code (SVM scikit-learn-Python) [30]
import numpy as np
import pylab as pl
from sklearn import svm, datasets
# Load the EEG data set
EEG = datasets.load_EEG()
X = EEG.data[:, :2]
Y = EEG.target
h = .02  # the step size in the mesh
# Create the SVM instances and fit the data. The data is not scaled
# because the support vectors need to be plotted
C = 1  # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, Y)
rbf_svc = svm.SVC(kernel='rbf', C=C).fit(X, Y)
poly_svc = svm.SVC(kernel='poly', C=C).fit(X, Y)
lin_svc = svm.LinearSVC(C=C).fit(X, Y)
Xmin, Xmax = X[:, 0].min() - 1, X[:, 0].max() + 1
Ymin, Ymax = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(Xmin, Xmax, h),
                     np.arange(Ymin, Ymax, h))
titles = ('SVC with linear kernel', 'SVC with RBF kernel',
          'SVC with polynomial kernel', 'LinearSVC')
for i, clf in enumerate((svc, rbf_svc, poly_svc, lin_svc)):
    # Plot the decision boundary by assigning a colour to every point
    # in the mesh [Xmin, Xmax] x [Ymin, Ymax]
    pl.subplot(2, 2, i + 1)
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    pl.contourf(xx, yy, Z, cmap=pl.cm.Paired)
    pl.axis('off')
    # Plot the training points as well
    pl.scatter(X[:, 0], X[:, 1], c=Y, cmap=pl.cm.Paired)
    pl.title(titles[i])
pl.show()
Figure 7: pseudo-code of SVM Classification
V. RESULTS AND DISCUSSION
Python and MATLAB are common, friendly programming languages that share similar characteristics and features. Despite these similarities, each of MATLAB and Python has its own advantages [27]. Table 2 illustrates these features [31].
Table 2 comparison between Python and MATLAB software [31]

MATLAB:
- Simple computing environment
- Friendly to learn
- Fast debugging
- Easy and fast code
- Can abstract out a lot of implementation details
- Simple for matrix operations
- Not good at managing free packages

Python:
- Open source
- Specific libraries need to be added for mathematics
- Requires function calls when using matrices, such as:
  >>> X = numpy.array(...)
  >>> W = numpy.array(...)
  >>> X.dot(W)
  or >>> X @ W (Python 3.5+)
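The matrix-operation point in Table 2 can be made concrete: in Python, matrix multiplication is either a NumPy method call or the @ operator, and both give the same result.

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])
W = np.array([[5, 6], [7, 8]])

product = X.dot(W)   # function-call form
same = X @ W         # operator form (Python 3.5+)
```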
Through the comparison between MATLAB and Python in the table above, we found that implementing machine learning algorithms in MATLAB is easy and simple compared with Python/NumPy (a library for working with array structures and vector equations using linear algebra) or the scikit-learn toolbox for basic machine learning tasks. MATLAB can be better than Python for introductory courses in computational neuroscience, while Python may be considered ideal for low-resource institutions (for the benefits of
Python, read www.python.org/about/ for more detailed information). Packages like SciPy (www.scipy.org/) allow the user to run MATLAB code after a few modifications. The Python scikit-learn toolbox was used for training and testing. The data samples were split into two groups: 50% of the data was utilized for training while 50% was used for testing. The MATLAB environment was used for Sample-Entropy and Graph-Entropy feature extraction, while the Python software applied K-NN and SVM for classification. Our results demonstrate that scikit-learn, which was used in the classification process, is better than MATLAB according to accuracy. Table 3 compares the results obtained in this experiment with the previous study.
Table 3: summary of all results

Platform | Data-set | Number of channels | Feature extraction | Classifier | Accuracy
MATLAB software (previous work) [Shaymaa Adnan Abdulrahman et al. 2020] [19] | UCI | 13 (AF8, C1, C2, C3, C4, CP1, CP5, CP6, FC5, FT7, P8, PO8, PZ) | Sample entropy + Graph entropy | K-NN | 83.7%
MATLAB software (previous work) [19] | UCI | 13 (same channels) | Sample entropy + Graph entropy | SVM | 90.8%
Python software (proposed work) | UCI | 13 (AF8, C1, C2, C3, C4, CP1, CP5, CP6, FC5, FT7, P8, PO8, PZ) | Sample entropy + Graph entropy | K-NN | 85.2%
Python software (proposed work) | UCI | 13 (same channels) | Sample entropy + Graph entropy | SVM | 91.5%
Table 3 illustrates the use of K-NN and SVM as classifiers. The approach suggested in this experiment is the use of Python for the classification process and MATLAB for the feature extraction operation. The accuracy was 85.2% with the K-NN classifier and 91.5% with the SVM classifier. This result is the best in comparison with the previous work referred to in this research, especially since the same feature extraction algorithm was used on the same data-set.
VI. CONCLUSION
In the present study, a detailed analysis of 13-channel electroencephalography signals for classification is carried out and compared with the results provided in [19] on an identical data-set. Sample entropy and graph entropy were used for feature extraction; this step was executed in the MATLAB environment. Graph-Entropy and Sample-Entropy have higher robustness to noise compared with log energy entropy, spectral entropy, and Kraskov entropy, and are highly independent of the data series length. Sample-Entropy does not count self-matches, thereby eliminating that bias; a lower Sample-Entropy value indicates higher self-similarity in the time series. To evaluate our approach, K-NN and SVM were implemented using a second programming language, Python, specifically the scikit-learn toolbox. The results show that the Python software classifies better than the MATLAB environment with K-NN and SVM, where the accuracies were 85.2% and 91.5% respectively. We found that simple algorithms can give better results than complex algorithms. The reason is that in some programming languages the functions inside the built-in toolboxes are constructed in a way that gives better results in many respects, such as implementation time, accuracy, and maintenance. In addition, the specifications of the device on which the experiments are conducted, such as processor speed, type, and version, also affect the results. In the future, the same two programming languages could be applied to human identification using another type of biometric (iris, voice, face, etc.) to show whether they give better accuracy.
REFERENCES
[1]Shaymaa Adnan Abdulrahman, WaelKhalifa, Mohamed
Roushdy, Abdel-Badeeh M. Salem " A survey of biometrics
using electroencephalogram EEG " International Journal
"Information Content and Processing", Volume 6, Number 1,
2019
[2] J. Klonovs, C. Kjeldgaard Petersen, H. Olesen, A.
Hammershø, “Development of a Mobile EEG-based
Biometric Authentication System."IEEEVahicular magazine,
pp. 81-89, Volume: 8, Issue:1,
February 2013.
[3] Hu, Jianfeng," An approach to EEG-based gender
recognition using entropy measurement methods, Knowledge-
Based Systems, vol 140 ,pp 134-141 , 2018
[4] J. Galbally, J. Ortiz-Lopez, J. Fierrez, and J. Ortega-
Garcia, “Iris liveness detection based on quality related
features,” in IAPR Int. Conference on Biometrics (ICB), 2012,
pp. 271–276.
[5] Goel, Akhil and Singh, Anirudh and Agarwal, Akshay and
Vatsa, Mayank and Singh, Richa ," Smartbox: Benchmarking
adversarial detection and mitigation algorithms for face
recognition" IEEE 9th International Conference on Biometrics
Theory, Applications and Systems (BTAS) pp 1-7 , 2018
[6] Santosh Thakur, Ramesh Dharavath , Damodar Reddy
Edla , " Spark and Rule-KNN based scalable machine learning
framework for EEG deceit identification " Biomedical Signal
Processing and Control, 2020
[7]Fabian Pedregosa , Ga¨elVaroquaux , Alexandre Gramfort
, Vincent Michel
Bertrand Thirion , " Scikit-learn: Machine Learning in Python
" Journal of Machine Learning Research , 2011
[8] T. Zito, N.Wilbert, L.Wiskott, and P. Berkes. Modular
toolkit for data processing (MDP): A Python data processing
framework. Frontiers in Neuroinformatics, 2, 2008.
[9] T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F.
Sehnke, T. R¨uckstieß, and J. Schmidhuber.
PyBrain. The Journal of Machine Learning Research, 11:743–
746, 2010.
[10] Venthur, Bastian and Dähne, Sven and Höhne, Johannes and Heller, Hendrik and Blankertz, Benjamin, "Wyrm: A brain-computer interface toolbox in Python", Neuroinformatics, vol 13, no 4, pp 471-486, 2015
[11] Gramfort, Alexandre and Luessi, Martin and Larson, Eric and Engemann, Denis A and Strohmeier, Daniel and Brodbeck, Christian and Parkkonen, Lauri and Hämäläinen, Matti S, "MNE software for processing MEG and EEG data", Neuroimage, vol 86, pp 446-460, 2014
[12] Pedregosa, F., et al.: Scikit-learn: machine learning in
python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
[13] Buitinck, Lars and Louppe, Gilles and Blondel, Mathieu
and Pedregosa, Fabian and Mueller, Andreas and Grisel,
Olivier and Niculae, Vlad and Prettenhofer, Peter and
Gramfort, Alexandre and Grobler, Jaques and others, " API
design for machine learning software: experiences from the
scikit-learn project " arXiv preprint arXiv:1309.0238 , 2013.
[14] Bao, Forrest Sheng and Liu, Xin and Zhang, Christina, "
PyEEG: an open source python module for EEG/MEG feature
extraction "Computational intelligence and neuroscience,
2011
[15] Watson, David M and Hymers, Mark and Hartley, Tom
and Andrews, Timothy J, Patterns of neural response in scene-
selective regions of the human brain are affected by low-level
manipulations of spatial frequency, NeuroImage ,VOL 124
,PP 107-117 , 2016
[16] Rajaguru, Harikumar, and Sunil Kumar Prabhakar. 2018.
“Factor Analysis and Weighted KNN Classifier for Epilepsy
Classification from EEG Signals.” Proceedings of the 2nd
International Conference on Electronics, Communication and
Aerospace Technology, ICECA 2018 (Iceca): 332–35.
[17] C. Blake and C. Merz, “UCI repository of machine
learning databases,” Department of Information and Computer
Sciences, University of California, Irvine,2015
[18] Sivachitra.M, Vijayachitra.S " Planning and Relaxed state
EEG signal classification using Complex Valued Neural
Classifier for Brain Computer Interface "IEEE 2015
[19]Shaymaa Adnan Abdulrahman, Mohamed Roushdy,
Abdel-Badeeh M. Salem "support vector machine approach
for human identification based on EEG signals , journal of
mechanics of continua and mathematical sciences , ISSN
(Online) : 2454 -7190 Vol.-15, No.-2, pp 270-280 ISSN
(Print) 0973-8975, February 2020
[20] Richman, J S, and J R Moorman. 2000. “Physiological
Time-Series Analysis Using Approximate Entropy and
Sample Entropy.” American journal of physiology. Heart and
circulatory physiology 278(6): H2039-49.
http://www.ncbi.nlm.nih.gov/pubmed/10843903.
[21] Das, Kaushik, and RajkishurMudoi. 2018. “Analysis of
EEG Signals Using Empirical Mode Decomposition and
Support Vector Machine.” IEEE International Conference on
Power, Control, Signals and Instrumentation Engineering,
ICPCSI 2017: 358–62
[22] Matthias Dehmer, Abbe Mowshowitz " A history of
graph entropy measures" Information Sciences 181, 2011, 57-
78
[23]Pedregosa et al. (2011). Scikit-learn: Machine Learning in
Python, JMLR 12, pp. 2825-2830.
[24] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, Boston: Kluwer Academic Publishers, pp. 273-297, 1995
[25] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, no. Oct, pp. 2825-2830, 2011.
[26] Shaymaa Adnan Abdulrahman, Mohamed Roushdy, Abdel-Badeeh M. Salem "Human Identification based on
electroencephalography Signals using Sample Entropy and
Horizontal Visibility Graphs "WSEAS TRANSACTIONS on
SIGNAL PROCESSING , ISSN: 2224-3488 Volume 15,
2019
[27] Stein, Joshua S and Holmgren, William F and Forbess,
Jessica and Hansen, Clifford W , " PVLIB: Open source
photovoltaic performance modeling functions for Matlab and
Python "43rd photovoltaic specialists conference (pvsc),
pp3425-3430 , 2016
[28] Shaymaa Adnan Abdulrahman, WaelKhalifa, Mohamed
Roushdy, Abdel-Badeeh M. Salem" Comparative Study for 8
Computational Intelligence Algorithms for Human
Identification" Computer Science Review Journal,Vol 36
,2020
[29] Syed Hasan Adil, MansoorEbrahim, Kamran Raza, Syed
SaadAzhar Ali. "Prediction of Eye State Using KNN
Algorithm", 2018 International Conference on Intelligent and
Advanced System (ICIAS), 2018
[30]Fabio Nelli. "Chapter 8 Machine Learning with scikit-
learn", Springer Science and Business Media LLC, 2018
[31] http://www.sebastianraschka.com