This paper proposes a system to detect Indian sign language gestures using a Kinect sensor and convert them to text and speech output. The system works by capturing skeletal images of the user's body with the Kinect and extracting the hand gesture. Image processing techniques like segmentation and filtering are used to isolate the hand and detect fingers. Hidden Markov models match the gesture to a database to determine the sign's meaning. An accuracy of 94.5% was achieved on a test set of 50 gestures. The system provides a direct interface for deaf people to communicate via their sign language without requiring complicated setups or technologies. Future work will focus on reducing noise and errors in finger detection.
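The matching step described above can be sketched directly: each sign gets its own discrete HMM, and a quantized observation sequence is scored against every model with the forward algorithm, the best-scoring sign winning. This is a minimal illustration with invented two-state models over a two-symbol alphabet, not the paper's trained parameters:

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under one HMM."""
    n = len(start)
    # alpha[i] = P(obs[0..t], state = i), kept in plain probability space
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * trans[j][i] for j in range(n)) * emit[i][o]
                 for i in range(n)]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

def classify_gesture(obs, models):
    """Return the sign whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Toy models (start, transition, emission); symbol 0 = hand low, 1 = hand high.
models = {
    "hello":     ([0.9, 0.1], [[0.7, 0.3], [0.3, 0.7]], [[0.9, 0.1], [0.2, 0.8]]),
    "thank_you": ([0.1, 0.9], [[0.7, 0.3], [0.3, 0.7]], [[0.1, 0.9], [0.8, 0.2]]),
}
```

In a real system the observation symbols would come from the segmented finger/hand features, and the per-sign model parameters would be estimated from training recordings.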
This document provides an overview of deep learning including definitions, architectures, types of deep learning networks, and applications. It defines deep learning as a branch of machine learning that uses neural networks with multiple hidden layers to perform feature extraction and transformation without being explicitly programmed. The main architectures discussed are deep neural networks, deep belief networks, and recurrent neural networks. The types of deep learning networks covered include feedforward neural networks, recurrent neural networks, convolutional neural networks, restricted Boltzmann machines, and autoencoders. Finally, the document discusses several applications of deep learning across industries such as self-driving cars, natural language processing, virtual assistants, and healthcare.
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY (ijitcs)
Brain Computer Interfaces (BCI) provide the opportunity to control external devices using the brain's ElectroEncephaloGram (EEG) signals. In this paper we propose two software frameworks to control a 5-degree-of-freedom robotic prosthetic hand. Results are first presented in which the Emotiv Cognitive Suite (the first framework), combined with an embedded software system (an open-source Arduino board), controls the hand through character inputs associated with the suite's trained actions. This system provides evidence that brain signals are a viable approach to controlling the chosen prosthetic. Results are then presented for the second framework, which allows the training and classification of EEG signals for motor imagery tasks. When analysing the system, clear visual representations of performance and accuracy are given using a confusion matrix, an accuracy measurement, and a feedback bar indicating signal strength. Experiments with various acquisition datasets were carried out and critically evaluated. Finally, depending on the classification of the brain signal, a Python script outputs the driving command to the Arduino to control the prosthetic. The proposed architecture yields good overall results for the design and implementation of an economical BCI and prosthesis.
This document describes a software system that aims to help deaf and dumb people communicate using hand gesture recognition and text-to-speech conversion. The system has three main modules: 1) text-to-voice conversion, 2) text-to-image matching, and 3) image recognition of hand gestures to provide audio or image outputs. The system uses algorithms like localization, pixel analysis, and skin color detection to analyze hand gestures from images. Evaluation results demonstrate the system's ability to correctly convert text inputs to audio or matched images and recognize stored gesture images to output audio or images. Future work to implement the system on mobile devices using sensors is also discussed.
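The skin-colour detection step mentioned above can be sketched as a per-pixel threshold rule. The rule below is a widely cited RGB heuristic (Kovac et al.), used here purely for illustration; the paper's exact thresholds are not specified:

```python
def is_skin(r, g, b):
    """Classify one RGB pixel as skin/non-skin with a simple threshold rule."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary hand-candidate mask over a 2-D grid of (r, g, b) pixels."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

The resulting mask is what the localization and pixel-analysis stages would then operate on to isolate the hand region.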
Two Level Data Security Using Steganography and 2-D Cellular Automata (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
In today's world, the quest to get rich at all costs without working for one's money has led some youths into crimes such as robbery and kidnapping. Because of this, and because vehicles are now very expensive to buy, people need to safeguard their vehicles against such criminals to avoid losing their precious assets. Tracking is a technology used by many companies and individuals to track a vehicle, an individual or an asset in many ways, such as GPS, which operates using satellites and ground-based stations, or our approach, which depends on cellular mobile towers. A vehicle tracking system can be used to monitor and locate a vehicle, prevent theft or recover a stolen vehicle, monitor vehicle routes to ensure strict compliance with already defined routes, monitor driver behaviour, predict bus arrival, and manage fleets. The Internet of Things has made it possible for devices to communicate among themselves and exchange information, helping to acquire and analyse information faster than before; this has helped especially in vehicle monitoring, ensuring that vehicle owners feel safe about their investments without fearing their loss. In this paper, we propose a vehicle monitoring system based on IoT technology that uses 4G/LTE to obtain the coordinates, speed and overall condition of the vehicle, and to process and send them to a remote server where they are analysed and used to locate the vehicle and monitor its other configured parameters. This is realized using a Raspberry Pi, 4G/LTE, GPS, an accelerometer and other sensors, which communicate among themselves to gather the environmental parameters that are processed and sent to a remote server, where they are analysed and represented on a map to locate the vehicle and monitor the other set parameters.
4G/LTE provides fast internet connectivity, which overcomes the delay usually experienced in sending the acquired signals for processing. The true vehicle position is represented using the Google geolocation service, and the actual position is triangulated in real time.
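On the server side, successive GPS fixes have to be turned into distances (for speed checks, geofencing, or route-compliance alerts). A standard haversine sketch; the function and variable names are illustrative, not from the paper:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

Dividing this distance by the interval between fixes gives an average speed that can be compared against the accelerometer readings the system also collects.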
This document summarizes a research paper on offline handwritten signature verification using an Associative Memory Network (AMN). The paper proposes an algorithm to train an AMN using genuine signature samples and test it on 12 forged signature samples. Key findings include:
1) The AMN algorithm detected forgeries with 92.3% accuracy, which is comparable to other methods.
2) Parallelizing the AMN algorithm using OpenMP reduced the average computation time from 9.85 seconds to 2.98 seconds.
3) The AMN correctly rejected the forged signatures but also incorrectly rejected the original signature, because the mismatch threshold was set at 25%.
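Finding 3 turns on a fixed 25% mismatch threshold. A minimal sketch of that decision rule over binarized signature feature vectors (an illustrative data layout, not the paper's AMN internals):

```python
def mismatch_ratio(stored, probe):
    """Fraction of positions where two equal-length binary patterns differ."""
    diffs = sum(1 for a, b in zip(stored, probe) if a != b)
    return diffs / len(stored)

def verify(stored, probe, threshold=0.25):
    """Accept the probe signature only if mismatch stays under the threshold."""
    return mismatch_ratio(stored, probe) < threshold
```

A genuine signature whose features drift by 25% or more is rejected under this rule, which is exactly the false-rejection behaviour the paper reports.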
This document summarizes research using Echo State Networks (ESN) to model and classify electroencephalography (EEG) signals recorded during mental tasks in brain-computer interfaces (BCI). ESN were trained to forecast EEG signals one step ahead in time using data from 14 participants performing four mental tasks. Separate ESN models for each task act as experts in modeling EEG for that task. Novel EEG data is classified by selecting the label of the model with the lowest forecasting error. Offline experiments show ESN can model EEG with errors as low as 3% and classify two tasks with up to 95% accuracy and four tasks with up to 65% accuracy at two-second intervals.
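The "committee of experts" decision rule described above can be sketched in a few lines: each task has its own one-step-ahead forecaster, and a novel EEG window gets the label of the model that predicts it best. The linear predictors below are toy stand-ins for trained ESNs:

```python
def forecast_error(signal, predict):
    """Mean absolute one-step-ahead forecasting error over a window."""
    errs = [abs(predict(signal[t]) - signal[t + 1])
            for t in range(len(signal) - 1)]
    return sum(errs) / len(errs)

def classify_window(signal, experts):
    """Label of the expert model with the lowest forecasting error."""
    return min(experts, key=lambda task: forecast_error(signal, experts[task]))

# Toy experts: "rest" expects a flat signal, "imagery" a doubling one.
experts = {"rest": lambda x: x, "imagery": lambda x: 2 * x}
```

With real ESNs the predictors would be reservoir states read out linearly, but the argmin-over-errors classification step is the same.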
Biometrics was developed with the aim of improving the overall security level in all contexts of society. A biometric system is a set of techniques for analysing certain of an individual's biometric features, storing the resulting patterns, and then using them to identify or verify a person's identity. The palmprint contains not only principal lines and wrinkles but also rich texture and minutiae points, so palmprint identification can achieve high accuracy thanks to the rich information available in the palmprint. Various palmprint identification methods, such as coding-based methods and principal-curve methods, have been proposed in past decades. In addition to these methods, subspace-based methods can also perform well for palmprint identification. Combining the left and right palmprint images for multibiometrics is easy to implement and obtains better results.
Multimodal biometrics can provide higher identification accuracy than single or unimodal biometrics, so it is more suitable for real-world personal identification applications that need high-standard security. A one-time password is included for higher security and accuracy.
One-time passwords generally expire after a single use. They are generated for use within a certain time period, after which they become useless. These passwords serve as a secondary security measure alongside the primary palmprint recognition.
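A time-windowed one-time password of the kind described can be sketched as an HMAC over a shared secret and the current 30-second window, in the spirit of TOTP; the secret, window length and digit count below are illustrative, not the paper's:

```python
import hashlib
import hmac

def otp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """OTP for the time window containing `at`; changes every `step` seconds."""
    window = int(at // step).to_bytes(8, "big")
    digest = hmac.new(secret, window, hashlib.sha1).digest()
    code = int.from_bytes(digest[-4:], "big") % (10 ** digits)
    return str(code).zfill(digits)

def otp_valid(secret: bytes, code: str, at: float, step: int = 30) -> bool:
    """A code is only accepted inside its own time window."""
    return hmac.compare_digest(otp(secret, at, step), code)
```

Because the code is derived from the window index, it expires on its own once the window rolls over, matching the "useless after a certain time period" behaviour described above.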
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED ON CROSSING NUMBER ... (IJCSEIT Journal)
This article focuses on developing and implementing fingerprint feature extraction and matching algorithms as part of a fingerprint recognition system. First, a simple algorithm to extract fingerprint features is developed and tested on a PC; second, this algorithm is implemented on FPGA devices. The main contribution is a developed and modified fingerprint feature extraction algorithm that uses the crossing-number method on pixels with representation value '0'. The new algorithm needs no ROI segmentation and no trigonometric calculation, and its parameters are obtained using an Angle Calculation Block that avoids floating-point calculation. Since the method works on local features, typically involving 60-100 minutiae points, the template is small in size. Performance is evaluated using FAR, FRR and EER. The result is a minutiae extraction algorithm adaptable to hardware implementation, with an EER of 14.05%, better than the reference algorithm's 20.39%. The computational time is 18 seconds, less than a similar method that takes 60-90 seconds just for the pre-processing step. The first step of implementing the algorithm in an embedded hardware environment, on an FPGA device, by developing an IP core without using any soft processor, is also presented.
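The crossing-number (CN) rule this abstract builds on is compact enough to sketch: on a thinned binary ridge image, CN is half the number of 0/1 transitions around a pixel's 8 neighbours, with CN == 1 marking a ridge ending and CN == 3 a bifurcation. (An illustrative software sketch, not the FPGA design; the paper applies the rule to 0-valued ridge pixels, while ridges are 1-valued here for readability.)

```python
# 8-neighbourhood in circular order around (row, col).
NBRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(img, r, c):
    """Half the number of 0/1 transitions around pixel (r, c)."""
    ring = [img[r + dr][c + dc] for dr, dc in NBRS]
    return sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2

def minutiae(img):
    """(row, col, type) for every interior ridge pixel with CN 1 or 3."""
    out = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if img[r][c] == 1:
                cn = crossing_number(img, r, c)
                if cn == 1:
                    out.append((r, c, "ending"))
                elif cn == 3:
                    out.append((r, c, "bifurcation"))
    return out
```

A T-shaped skeleton, for instance, yields three ridge endings and one bifurcation at the junction.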
IRJET - Automated Detection of Gender from Face Images (IRJET Journal)
1) The document describes a system to automatically detect gender from face images using convolutional neural networks and Python. The system was developed to help address problems like security, fraud, and criminal identification.
2) The system uses a CNN classifier trained on the UTKFace dataset of facial images. The CNN model contains convolutional, activation, max pooling, flatten, dense and dropout layers to analyze image features and predict the gender of an unknown input face image.
3) The goal of the system is to identify gender from images faster than traditional criminal identification methods in order to help solve crimes and security issues more efficiently.
IRJET - Spot Me - A Smart Attendance System based on Face Recognition (IRJET Journal)
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm (ijcis journal)
This paper describes an IMU-sensor-based digital pen for gesture and handwritten-digit trajectory recognition applications, enabling human-PC interaction. Handwriting recognition is mainly used in security and authentication applications. Using the embedded pen, the user can make a hand gesture or write a digit or an alphabetical character. The embedded pen contains an inertial sensor, a microcontroller and a ZigBee wireless transmitter module for creating handwriting and gesture trajectories. The proposed trajectory recognition algorithm comprises sensing-signal acquisition, pre-processing, feature generation, feature extraction and classification. The user's hand motion is measured with the sensor and the sensing information is transmitted wirelessly to a PC for recognition. The process first extracts time-domain and frequency-domain features from the pre-processed signal, then performs linear discriminant analysis to represent the features with reduced dimension. The dimensionally reduced features are processed with two classifiers, Support Vector Machine (SVM) and k-Nearest Neighbour (kNN). With this algorithm the SVM classifier achieves a recognition rate of 98.5% and the kNN classifier 95.5%.
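The final classification stage compares a dimensionality-reduced feature vector against labelled training vectors. A minimal k-nearest-neighbour sketch of that step (the SVM branch is omitted, and the 2-D feature values are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority label among the k training vectors nearest to the query."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy reduced features for two digit gestures.
train = [((0.0, 0.0), "0"), ((0.2, 0.1), "0"), ((0.1, 0.2), "0"),
         ((1.0, 1.0), "1"), ((0.9, 1.1), "1"), ((1.1, 0.9), "1")]
```

In the actual pipeline the query vector would be the LDA-reduced output of the time- and frequency-domain feature extractor.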
An Approach for Text Detection and Reading of Product Labels for Blind Persons (Vivek Chamorshikar)
This document summarizes a research paper that proposes a camera-based assistive reading system to help blind individuals read text on hand-held objects. The system uses a motion-based method to detect the object of interest in the camera view. It then applies text localization and recognition algorithms to extract and identify text from the object. The system architecture includes components for scene capture, data processing, and audio output of recognized text. The proposed system aims to address challenges blind users face in positioning objects for cameras and automatically extracting text information from complex backgrounds. It is projected to be implemented in phases over a scheduled timeline.
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
This document provides a summary of a minor project report on image recognition submitted in partial fulfillment of the requirements for a Bachelor of Technology degree in Computer Science and Engineering. The report was submitted by Bhaskar Tripathi and Joel Jose in October 2018 under the supervision of Dr. P. Mohamed Fathimal, Assistant Professor in the Department of Computer Science and Engineering at SRM Institute of Science and Technology. The report includes acknowledgements, a table of contents, and chapters on the introduction, project details, tools and technologies used, proposed system architecture, modules and functionality.
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION (IJCIRAS Journal)
The field of Artificial Intelligence is very popular today, especially neural networks, which work well in areas such as speech recognition and natural language processing. This research article briefly describes how deep learning models work and what techniques are used in text recognition. It also describes the great progress that has been made in the analysis of forensic documents, the recognition of license plates, and in the medical, banking, health and legal industries. The recognition of handwritten characters is one of the research areas in the field of artificial intelligence; individual character recognition achieves higher accuracy than complete word recognition. A new method for categorizing Freeman chain codes is presented, using four-connectivity and eight-connectivity with a deep learning approach.
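A Freeman chain code records a character contour as a string of direction symbols between successive boundary pixels (8 symbols for eight-connectivity, 4 for four-connectivity); that string is the representation the categorization method operates on. A minimal eight-connectivity sketch:

```python
# Direction 0 = east, counting counter-clockwise; (dcol, drow) per symbol.
DIRS8 = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(points):
    """Freeman 8-direction code for an ordered list of (col, row) pixels."""
    code = []
    for (c0, r0), (c1, r1) in zip(points, points[1:]):
        code.append(DIRS8.index((c1 - c0, r1 - r0)))
    return code
```

A unit square traced clockwise in image coordinates, for example, yields the code [0, 6, 4, 2]; a deep model would consume such sequences rather than raw pixels.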
IRJET - Image to Text Conversion using Tesseract (IRJET Journal)
This document discusses using Tesseract OCR engine to convert images containing text into editable text files. It begins with an abstract describing how digital images often contain text data that users need to access and edit digitally. Tesseract is an open-source OCR tool that uses neural networks like LSTM to recognize text in images with high accuracy and convert it into editable text. It then reviews existing OCR methods before describing Tesseract's image processing and recognition steps in more detail. The document also notes that the converted text could then be used to create audio files for visually impaired users to hear the text content.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes a semi-supervised clustering approach for classifying P300 signals for brain-computer interface (BCI) speller systems. It involves using k-means clustering on wavelet features extracted from EEG data, with some data points labeled to initialize the clusters. An ensemble of support vector machines is then trained on the clustered data points to classify new unlabeled P300 signals. The document outlines the P300 speller paradigm used to collect the EEG data, pre-processing steps like filtering and wavelet transformation, the seeded k-means semi-supervised clustering method, and using an ensemble SVM classifier trained on the clustered data for classification.
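The seeded k-means step described above can be sketched compactly: the few labelled points fix the initial centroids, and standard k-means iterations then refine them on all points. The 1-D "wavelet feature" values below are invented for illustration:

```python
def seeded_kmeans(points, seeds, iters=10):
    """seeds: {label: [labelled feature values]} -> {label: final centroid}."""
    mean = lambda xs: sum(xs) / len(xs)
    centroids = {lab: mean(pts) for lab, pts in seeds.items()}
    for _ in range(iters):
        clusters = {lab: [] for lab in centroids}
        for x in points:
            lab = min(centroids, key=lambda l: abs(x - centroids[l]))
            clusters[lab].append(x)
        centroids = {lab: mean(pts) if pts else centroids[lab]
                     for lab, pts in clusters.items()}
    return centroids

def assign(x, centroids):
    """Cluster label for a new, unlabelled feature value."""
    return min(centroids, key=lambda l: abs(x - centroids[l]))
```

In the paper's pipeline the cluster assignments then become the training labels for the SVM ensemble that classifies fresh P300 epochs.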
This document provides an overview of expert systems and applications of artificial intelligence. It discusses how expert systems use knowledge and reasoning to solve complex problems, and how they are widely used today in fields like science, engineering, business, and medicine. The document also explores several current uses of AI technologies, including using expert systems to optimize power system stabilizers, for network intrusion protection, improving medical diagnosis and treatment, and enhancing computer games.
Neural Network Based Numerical Digit Recognition Using NNT in MATLAB (ijcses)
Artificial neural networks are models inspired by the human nervous system that are capable of learning. One important application of artificial neural networks is character recognition, which finds use in many areas, such as banking, security products, hospitals and robotics. This paper is based on a system that recognizes an English numeral given by the user, having already been trained on the features of the numbers to be recognized using the Neural Network Toolbox (NNT). The system has a neural network at its core, which is first trained on a database: training extracts the features of the English numerals and stores them in the database. In the next phase, the system recognizes the number given by the user: its features are extracted, compared with the feature database, and the recognized number is displayed.
Study on Different Human Emotions Using Back Propagation Method (ijiert bestjournal)
With fast-evolving technology, cognitive science plays a vital role in our day-to-day life. Cognitive science can be summed up as the study of the mind based on scientific methods; it is the sum of interdisciplinary fields such as philosophy, psychology, linguistics, artificial intelligence, robotics and neuroscience. In this paper, I focused on the facial expressions or emotions of human beings, as they have an important role in interpersonal relations: without verbal communication, one can guess the mood of a person from expressions. The method uses a back-propagation neural network for implementation, an information-processing system developed as a generalization of the mathematical model of human cognition.
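Back-propagation itself can be shown in miniature: one hidden layer of sigmoid units, trained by propagating the output error back through the weights. The network size, learning rate and toy target (a logical AND, standing in for expression features mapped to an emotion label) are all illustrative, not the paper's:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # toy target: AND

def train(data=DATA, epochs=3000, lr=1.0, seed=1):
    """Train a 2-2-1 sigmoid network by back-propagation; return a predictor."""
    random.seed(seed)
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    for _ in range(epochs):
        for (x0, x1), t in data:
            # forward pass
            h = [sigmoid(w1[j][0] * x0 + w1[j][1] * x1 + b1[j]) for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            # backward pass: push the output error back to every weight
            dy = (y - t) * y * (1 - y)
            for j in range(2):
                dh = dy * w2[j] * h[j] * (1 - h[j])   # uses pre-update w2[j]
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh * x0
                w1[j][1] -= lr * dh * x1
                b1[j] -= lr * dh
            b2 -= lr * dy
    def predict(x0, x1):
        h = [sigmoid(w1[j][0] * x0 + w1[j][1] * x1 + b1[j]) for j in range(2)]
        return sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return predict
```

For real expression data the two inputs would be replaced by a feature vector and the single output by one unit per emotion class, but the weight-update rule is unchanged.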
The document describes a hand gesture recognition system for a paint tool using machine learning. Key points:
- The system uses a webcam and hand gestures to control a paint program, providing a more natural user interface than traditional pointing devices.
- A machine learning approach using Haar-like classifiers to detect hands achieved 96% accuracy, higher than glove-based or computer vision methods.
- The system detects different gestures to draw lines, circles, and select colors on the paint screen in real-time. Hand detection and gesture recognition are performed using OpenCV and a Python platform.
- A literature review found machine learning provided the best balance of high accuracy, low cost, and ease of use compared to other hand-tracking approaches such as glove-based and computer vision methods.
Novel Approach to Use Hu Moments with Image Processing Techniques for Real Ti... (CSCJournals)
Sign language is the fundamental communication method among people who suffer from speech and hearing defects, yet the rest of the world does not have a clear idea of sign language. The "Sign Language Communicator" (SLC) is designed to solve the language barrier between sign language users and the rest of the world. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation; the system will also be very useful to sign language learners, as they can use it to practice. During the research, available human-computer interaction techniques in posture recognition were tested and evaluated, and a series of image processing techniques with Hu-moment classification was identified as the best approach. To improve the accuracy of the system, a new approach, height-to-width ratio filtration, was implemented along with Hu moments. The system is able to recognize selected sign language signs with an accuracy of 84% without a controlled background, with small light adjustments.
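Hu moments are shape descriptors built from normalized central image moments, invariant to translation, scale and rotation; classification compares the probe posture's moments to stored sign templates. A sketch of the first invariant plus the height-to-width ratio filter the paper adds (pure-Python, for small binary masks; illustrative only):

```python
def raw_moment(img, p, q):
    """Raw moment M_pq of a binary image given as img[y][x]."""
    return sum((x ** p) * (y ** q) * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def central_moment(img, p, q):
    m00 = raw_moment(img, 0, 0)
    xb, yb = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    return sum(((x - xb) ** p) * ((y - yb) ** q) * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def hu1(img):
    """First Hu invariant: eta20 + eta02 (translation/scale invariant)."""
    mu00 = central_moment(img, 0, 0)
    return (central_moment(img, 2, 0) + central_moment(img, 0, 2)) / mu00 ** 2

def height_to_width(img):
    """Bounding-box height/width ratio used as the extra filtration step."""
    on = [(x, y) for y in range(len(img)) for x in range(len(img[0])) if img[y][x]]
    xs, ys = [x for x, _ in on], [y for _, y in on]
    return (max(ys) - min(ys) + 1) / (max(xs) - min(xs) + 1)
```

Two signs with similar Hu moments but different silhouette proportions are separated by the ratio filter, which is the accuracy gain the abstract reports.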
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE (vivatechijri)
A criminal identification system allows the user to identify a criminal based on their biometrics. With advancements in security technology, CCTV cameras have been installed in many public and private areas to provide surveillance. The CCTV footage is crucial for understanding the criminal activities that take place and for detecting suspects. Additionally, when a criminal is identified, it is difficult to locate and track him with just his image if he is on the run. Currently this procedure consists of finding such people in CCTV surveillance footage manually, which is time-consuming; it is also tedious, as the resolution of such CCTV cameras is quite low. As a solution to these issues, the proposed system goes through real-time surveillance footage and detects and recognizes criminals based on reference datasets of criminals. The use of facial recognition for identifying criminals proves to be beneficial: once the best match is found, a real-time cropped image of the recognized criminal is saved, which authorized officials can access for locating and tracking criminals or for further investigative use.
Measuring memetic algorithm performance on image fingerprints dataset - TELKOMNIKA JOURNAL
Personal identification has become one of the most important concerns in our society with regard to access control, crime and forensic identification, banking, and computer systems. The fingerprint is the most widely used biometric feature because of its uniqueness, universality, and stability, serving as a security feature for forensic recognition, building access, automatic teller machine (ATM) authentication, and payment. Fingerprint recognition can be grouped into two forms, verification and identification: verification compares fingerprint data one to one, while identification matches an input fingerprint against data saved in a database. In this paper, we measure the performance of a memetic algorithm on an image fingerprint dataset. Before running the algorithm, we divide the fingerprints into four groups according to their characteristics, make 15 specimens of data, perform four partial tests, and finally measure the total computation time.
Hand gesture recognition has received great attention in recent years because of its manifold applications and its ability to let people interact with machines efficiently in human-computer interaction. This paper presents a survey on hand gesture recognition. Hand gestures provide a distinct modality, complementary to speech, for expressing information; the hand allows freer expression than other body parts as a medium of non-verbal communication. Hand gesture detection is therefore highly significant in designing a competent human-computer interaction method. This paper focuses on different hand gesture approaches, technologies, and applications.
Many efforts are being made toward developing an intelligent and natural interface between computer systems and users, and with today's technologies this has become possible through a variety of media such as visualization, audio, and paint. Gesture has become an important part of human communication for conveying information. In this paper we propose a method for hand gesture recognition that includes hand segmentation, hand tracking, and an edge traversal algorithm. The hardware is limited to a computer and a webcam, and the system consists of four modules: hand tracking and segmentation, feature extraction, neural training, and testing. The objective is to explore the utility of a neural-network-based approach to hand gesture recognition and to create a system that easily identifies gestures and uses them for device control and conveying information, in place of normal input devices such as the mouse and keyboard.
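The hand segmentation module above could be realized in several ways; one common choice is a rule-based skin mask. The sketch below uses the classic Kovac RGB thresholds, which are an assumption on my part rather than values from the paper.

```python
import numpy as np

def skin_mask(rgb):
    """Rule-based skin segmentation (Kovac et al. heuristic) as one
    plausible realization of a hand segmentation module.
    rgb: H x W x 3 uint8 image; returns a boolean mask."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    # skin pixels: warm, bright, red-dominant colors
    return ((r > 95) & (g > 40) & (b > 20)
            & (mx - mn > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```

The resulting mask would then feed the tracking and feature extraction stages; fixed RGB rules are lighting-sensitive, which is one reason later entries in this list prefer learned segmentation.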
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ... - IJCSEIT Journal
In this article, the focus is on developing and implementing fingerprint feature extraction and matching algorithms as part of a fingerprint recognition system. First, a simple algorithm to extract fingerprint features is developed and tested on a PC; second, the algorithm is implemented on FPGA devices. The main contribution is a developed and modified fingerprint feature extraction algorithm using the crossing number method on pixels with representation value '0'. The new algorithm needs no ROI segmentation and no trigonometric calculation, and in particular its parameters are obtained using an Angle Calculation Block that avoids floating-point computation. Because this is a local-feature method that typically involves 60-100 minutiae points, the template is small. Performance is evaluated in terms of FAR, FRR, and EER. The result is a minutiae extraction algorithm adaptable to hardware implementation with an EER of 14.05%, better than the reference algorithm's 20.39%; its computation time of 18 seconds is less than that of a similar method, which takes 60-90 seconds for the pre-processing step alone. The first step of implementing the algorithm in an embedded FPGA environment, by developing an IP core without any soft processor, is also presented.
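The crossing number method named above is standard enough to sketch. The paper works on pixels of value '0'; for readability this illustrative version marks ridges as 1 on a thinned skeleton, which is the usual textbook convention and not necessarily the paper's.

```python
import numpy as np

# 8-neighbour offsets in circular order (P1..P8)
_NEIGHB = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(skel, y, x):
    """CN = 0.5 * sum |P_i - P_{i+1}| over the 8-neighbourhood
    (with P9 = P1) of a pixel on a thinned 0/1 ridge map."""
    p = [skel[y + dy, x + dx] for dy, dx in _NEIGHB]
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2

def minutiae(skel):
    """Ridge endings (CN == 1) and bifurcations (CN == 3)."""
    ends, bifs = [], []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if skel[y, x]:
                cn = crossing_number(skel, y, x)
                if cn == 1:
                    ends.append((y, x))
                elif cn == 3:
                    bifs.append((y, x))
    return ends, bifs
```

Each detected minutia would then be stored with its type and coordinates, which is why the resulting template stays small (60-100 points).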
IRJET- Automated Detection of Gender from Face Images - IRJET Journal
1) The document describes a system to automatically detect gender from face images using convolutional neural networks and Python. The system was developed to help address problems like security, fraud, and criminal identification.
2) The system uses a CNN classifier trained on the UTKFace dataset of facial images. The CNN model contains convolutional, activation, max pooling, flatten, dense and dropout layers to analyze image features and predict the gender of an unknown input face image.
3) The goal of the system is to identify gender from images faster than traditional criminal identification methods in order to help solve crimes and security issues more efficiently.
IRJET- Spot Me - A Smart Attendance System based on Face Recognition - IRJET Journal
The article discusses international issues. It mentions that globalization has increased economic interdependence between nations while also raising tensions over immigration and trade. Solutions will require cooperation and compromise and a recognition that isolationism is not a viable strategy in an interconnected world.
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm - ijcisjournal
This paper describes a digital pen based on an IMU sensor for gesture and handwritten digit trajectory recognition, enabling human-PC interaction. Handwriting recognition is mainly used in security and authentication applications. Using the embedded pen, the user can make a hand gesture or write a digit or an alphabetic character. The pen contains an inertial sensor, a microcontroller, and a Zigbee wireless transmitter module for creating handwriting and gesture trajectories. The proposed trajectory recognition algorithm consists of sensing-signal acquisition, pre-processing, feature generation, feature extraction, and classification. The user's hand motion is measured by the sensor, and the sensed data is transmitted wirelessly to a PC for recognition. The process first extracts time-domain and frequency-domain features from the pre-processed signal, then performs linear discriminant analysis to represent the features with reduced dimension. The dimensionally reduced features are processed with two classifiers, a Support Vector Machine (SVM) and k-Nearest Neighbour (kNN). With the SVM classifier the algorithm achieves a recognition rate of 98.5%, and with the kNN classifier 95.5%.
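The kNN stage of the pipeline above is compact enough to sketch from scratch. This is a minimal illustration over already LDA-reduced feature vectors; the data here is made up, not the paper's.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Minimal k-Nearest-Neighbour classifier: Euclidean distance
    over reduced feature vectors, majority vote among the k closest."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

The SVM branch would replace the vote with a learned maximum-margin boundary; the paper reports that this trades a little simplicity for roughly three points of accuracy.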
An approach for text detection and reading of product label for blind persons - Vivek Chamorshikar
This document summarizes a research paper that proposes a camera-based assistive reading system to help blind individuals read text on hand-held objects. The system uses a motion-based method to detect the object of interest in the camera view. It then applies text localization and recognition algorithms to extract and identify text from the object. The system architecture includes components for scene capture, data processing, and audio output of recognized text. The proposed system aims to address challenges blind users face in positioning objects for cameras and automatically extracting text information from complex backgrounds. It is projected to be implemented in phases over a scheduled timeline.
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
This document provides a summary of a minor project report on image recognition submitted in partial fulfillment of the requirements for a Bachelor of Technology degree in Computer Science and Engineering. The report was submitted by Bhaskar Tripathi and Joel Jose in October 2018 under the supervision of Dr. P. Mohamed Fathimal, Assistant Professor in the Department of Computer Science and Engineering at SRM Institute of Science and Technology. The report includes acknowledgements, a table of contents, and chapters on the introduction, project details, tools and technologies used, proposed system architecture, modules and functionality.
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION - IJCIRAS Journal
The field of Artificial Intelligence is very popular today, especially neural networks, which work well in areas such as speech recognition and natural language processing. This research article briefly describes how deep learning models work and which techniques are used in text recognition. It also describes the great progress made in medicine, forensic document analysis, license plate recognition, banking, health, and the legal industry. Handwritten character recognition is one of the research areas in artificial intelligence, and individual character recognition achieves higher accuracy than whole-word recognition. A new method for categorizing Freeman chain codes is presented, using four-connectivity and eight-connectivity events with a deep learning approach.
IRJET- Image to Text Conversion using Tesseract - IRJET Journal
This document discusses using Tesseract OCR engine to convert images containing text into editable text files. It begins with an abstract describing how digital images often contain text data that users need to access and edit digitally. Tesseract is an open-source OCR tool that uses neural networks like LSTM to recognize text in images with high accuracy and convert it into editable text. It then reviews existing OCR methods before describing Tesseract's image processing and recognition steps in more detail. The document also notes that the converted text could then be used to create audio files for visually impaired users to hear the text content.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes a semi-supervised clustering approach for classifying P300 signals for brain-computer interface (BCI) speller systems. It involves using k-means clustering on wavelet features extracted from EEG data, with some data points labeled to initialize the clusters. An ensemble of support vector machines is then trained on the clustered data points to classify new unlabeled P300 signals. The document outlines the P300 speller paradigm used to collect the EEG data, pre-processing steps like filtering and wavelet transformation, the seeded k-means semi-supervised clustering method, and using an ensemble SVM classifier trained on the clustered data for classification.
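The seeded k-means step described above can be sketched in a few lines. This is a toy illustration with synthetic 2-D points standing in for the wavelet features; labels follow the insertion order of the seed dictionary.

```python
import numpy as np

def seeded_kmeans(X, seeds, n_iter=10):
    """Seeded k-means: initialise each cluster centre from the
    labelled seed points, then run standard Lloyd iterations
    over all data (labelled and unlabelled alike).

    seeds: dict {label: array of seed rows}."""
    centers = np.array([np.mean(pts, axis=0) for pts in seeds.values()])
    labels = None
    for _ in range(n_iter):
        # assign every point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centres from current assignments
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In the paper's pipeline the cluster assignments produced this way become pseudo-labels for training the SVM ensemble on the remaining unlabelled P300 epochs.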
This document provides an overview of expert systems and applications of artificial intelligence. It discusses how expert systems use knowledge and reasoning to solve complex problems, and how they are widely used today in fields like science, engineering, business, and medicine. The document also explores several current uses of AI technologies, including using expert systems to optimize power system stabilizers, for network intrusion protection, improving medical diagnosis and treatment, and enhancing computer games.
Neural network based numerical digit recognition using NNT in MATLAB - ijcses
Artificial neural networks are models inspired by the human nervous system that are capable of learning. One important application of artificial neural networks is character recognition, which finds use in many areas such as banking, security products, hospitals, and robotics. This paper is based on a system that recognizes an English numeral given by the user, having been trained on the features of the numbers to be recognized using the Neural Network Toolbox (NNT). The system has a neural network at its core, which is first trained on a database: training extracts the features of the English numerals and stores them in the database. In the next phase the system recognizes the number given by the user; its features are extracted, compared against the feature database, and the recognized number is displayed.
Study on Different Human Emotions Using Back Propagation Method - ijiert bestjournal
With fast-evolving technology, cognitive science plays a vital role in our day-to-day life. Cognitive science can be summed up as the study of the mind based on scientific methods; it spans interdisciplinary fields such as philosophy, psychology, linguistics, artificial intelligence, robotics, and neuroscience. This paper focuses on the facial expressions and emotions of human beings, which play an important role in interpersonal relations: without verbal communication, one can gauge a person's mood from their expressions. The method is implemented with a back-propagation neural network, an information processing system developed as a generalization of the mathematical model of human cognition.
The document describes a hand gesture recognition system for a paint tool using machine learning. Key points:
- The system uses a webcam and hand gestures to control a paint program, providing a more natural user interface than traditional pointing devices.
- A machine learning approach using Haar-like classifiers to detect hands achieved 96% accuracy, higher than glove-based or computer vision methods.
- The system detects different gestures to draw lines, circles, and select colors on the paint screen in real-time. Hand detection and gesture recognition are performed using OpenCV and a Python platform.
- A literature review found machine learning provided the best balance of high accuracy, low cost, and ease of use compared to other hand detection approaches.
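The Haar-like classifiers mentioned above rest on the integral-image trick, which makes any rectangular pixel sum O(1). The sketch below is a minimal illustration of one two-rectangle "edge" feature, not the project's detector.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.astype(np.int64).cumsum(0).cumsum(1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) from the integral image
    (end indices exclusive)."""
    s = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        s -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        s -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s

def two_rect_haar(ii, y, x, h, w):
    """Edge-type Haar feature: left half minus right half
    of an h x w window."""
    half = w // 2
    return (box_sum(ii, y, x, y + h, x + half)
            - box_sum(ii, y, x + half, y + h, x + w))
```

A cascade classifier thresholds thousands of such features at many scales; the constant-time box sums are what make that affordable in real time.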
At the present time, hand gesture recognition systems can serve as a more natural and usable approach to human-computer interaction, providing a new way of interacting with virtual environments. In this paper, a face and hand gesture recognition system able to control a computer media player is presented. Hand gestures and the human face are the key elements for interacting with the smart system: face recognition is used for viewer verification, and hand gesture recognition for controlling the media player, for instance volume up/down and next track. In the proposed technique, the hand gesture and face locations are first extracted from the main image by a combination of skin and cascade detectors and sent to the recognition stage, where a threshold condition is checked and the extracted face and gesture are recognized. In the results stage, the proposed technique is applied to a video dataset and a high precision ratio is obtained. Additionally, the recommended hand gesture recognition method is applied to a static American Sign Language (ASL) database, achieving a correctness rate of nearly 99.40%. The planned method could also be used in gesture-based computer games and virtual reality.
Abstract: The main communication method used by deaf people is sign language, but contrary to common belief there is no universal sign language: every country, or even regional group, uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs from voice or text recognition, and sign language can be translated into text or sound based on image, video, and sensor input. Sign language is not a simple spelling-out of spoken language, so recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. This paper proposes an algorithm and method for an application that helps recognize various user-defined signs. Palm images of the right and left hand are loaded at runtime: the images are first captured and stored in a directory, then template matching is used to find areas of an image that match (are similar to) a template image (patch). The goal is to detect the highest-matching area, which requires two primary components: A) the source image (I), in which we try to find a match, and B) the template image (T), the patch that is compared against the source. In the proposed system, user-defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
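The template-matching step above can be sketched with zero-mean normalised cross-correlation, the score behind OpenCV's TM_CCOEFF_NORMED. This is a slow, illustrative numpy version rather than the paper's implementation; the best match is where the score map peaks.

```python
import numpy as np

def match_template(image, templ):
    """Slide templ over image, scoring each window with zero-mean
    normalised cross-correlation; returns (score_map, best_xy)."""
    ih, iw = image.shape
    th, tw = templ.shape
    t = templ - templ.mean()
    tnorm = np.sqrt((t ** 2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            wnorm = np.sqrt((w ** 2).sum())
            if wnorm > 0 and tnorm > 0:
                scores[y, x] = (w * t).sum() / (wnorm * tnorm)
    y, x = np.unravel_index(scores.argmax(), scores.shape)
    return scores, (x, y)
```

An exact copy of the template scores 1.0; flat windows are left at the -1.0 floor, so the arg-max picks the highest-matching area as the paper requires.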
Deep convolutional neural network for hand sign language recognition using mo... - journalBEEI
Computer-vision-based image processing systems have received much attention from science and technology experts. Research on image processing is needed in the development of human-computer interaction, such as hand or gesture recognition for deaf people and people with hearing impairments. In this research we collect hand gesture data and use a simple deep neural network architecture, which we call model E, to recognize the actual gesture made. The dataset, collected from kaggle.com, is in the form of ASL (American Sign Language) images. We compare accuracy with an existing model, AlexNet, to see how robust our model is, and find that adjusting the kernel size and the number of epochs for each model also changes the results. After comparison with AlexNet, our model E performs better, with 96.82% accuracy.
This document is a final report on gesture recognition submitted by three students. It contains an abstract, introduction, background information on gesture recognition including American Sign Language and object recognition techniques. It discusses digital image processing and neural networks. It outlines the approach, modules, flowcharts, results and conclusions of the project, which developed a method to recognize static hand gestures using a perceptron neural network trained on orientation histograms of the input images. Source code and applications are also discussed.
Sign language (SL) is commonly considered the primary gesture-based language for deaf and mute people and serves as their medium of communication. Image-based and sensor-based methods are the two important approaches to sign language recognition. Because of the difficulty of wearing complex devices such as gloves, armbands, and helmets in sensor-based approaches, much research by companies and researchers has focused on image-based approaches. Sign language is used by these people to communicate with others, but understanding it is a difficult task for those who do not sign. To address these difficulties, a real-time translator for sign language using deep learning (DL) is introduced, which reduces the limitations and drawbacks of other methods to a great extent. With the help of this real-time translator, communication is better and faster, without delay. Jeni Moni | Anju J Prakash "Real Time Translator for Sign Language" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-5, August 2020, URL: https://www.ijtsrd.com/papers/ijtsrd32915.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/32915/real-time-translator-for-sign-language/jeni-moni
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel... - IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
The document is a research paper that studies using a neural network model for fingerprint recognition. It discusses how fingerprint recognition is an important technique for security and restricting intruders. The paper proposes using an artificial neural network with backpropagation training to recognize fingerprints. It describes collecting fingerprint images, classifying them, enhancing the images, and training the neural network to match images and recognize fingerprints with high accuracy. The methodology, implementation, and results of using a backpropagation neural network for fingerprint recognition are analyzed.
As we know, fingerprints are unique to every living being, yet the prints themselves are quite difficult to find. Forensics teams usually use fine powder and duct tape to lift a subject's prints; since the powder is exceptionally messy, particles can cause loss of information before the print is matched against the system. The proposed system consists of an embedded device containing an ultra light to illuminate the fingerprint details. The fingerprint is then detected and analyzed, checked against the database, and the matching result is returned. A matching algorithm is used for the analysis and matching of the fingerprint.
The document describes a project to develop a real-time sign language detection system using computer vision and deep learning techniques. The researchers collected over 500 images of 5 different signs and trained a convolutional neural network model using transfer learning with a pre-trained SSD MobileNet V2 model. The model takes input from a webcam video stream and classifies each frame in real-time to detect the sign language. Some key applications of this system include improving communication for deaf individuals and teaching sign language. The researchers achieved reliable detection results under controlled lighting conditions and aim to expand the dataset and model capabilities in future work.
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le... - IRJET Journal
This document discusses a system that uses a Kinect sensor to recognize theft gestures using machine learning. The system tracks a person's skeleton and compares their gestures to a dictionary of known theft and normal gestures. If a match for a theft gesture is found, an alarm and SMS notification are generated. The system was implemented using Processing and a logistic regression machine learning algorithm to classify poses as abnormal or normal based on joint angle features extracted from Kinect skeleton data. The system aims to automatically detect theft in environments like banks and stores to improve security.
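The logistic-regression pose classifier described above can be sketched from scratch. The "joint angle" feature below is a made-up 1-D stand-in for the Kinect skeleton features, not real data from the system.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Tiny batch-gradient-descent logistic regression for a binary
    abnormal/normal pose decision over joint-angle features."""
    X = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)      # cross-entropy gradient
    return w

def predict(w, X):
    X = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

In the real system, a positive ("abnormal") prediction on the current frame's joint angles would trigger the alarm and SMS notification.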
This document describes a system to help deaf and mute people communicate through sign language and voice recognition. The system uses algorithms like support vector machines and hidden Markov models to recognize hand gestures and speech. It can translate sign language into text and voice into sign language representations. The system aims to reduce communication barriers for deaf/mute communities by converting between sign language, text, and voice. It outlines the implementation process which includes steps like skin color detection, hand location detection, finger region detection, and pattern matching to recognize gestures from video input.
This document discusses finger tracking techniques. It begins with an introduction to finger tracking and its uses in technology. It then discusses different types of finger tracking, including those that use interfaces like gloves and those that track fingers without interfaces. The document outlines an algorithm for finger tracking and describes test sequences used to evaluate the algorithm. It concludes by discussing applications of finger tracking and its future potential to replace devices like mice.
This document proposes an e-learning application called ELGR that uses gesture recognition to control a computer interface. Specifically, it aims to recognize finger movements and patterns to perform mouse operations like clicking, dragging, etc. The application would use color tracking rather than complex RGB-to-YCbCr conversion to identify gestures in real time. The document reviews literature on gesture recognition techniques, discusses relevant concepts in image processing and computer vision, and outlines the proposed seven-step algorithm for ELGR to provide a more natural user experience for e-learning.
Sign Language Identification based on Hand Gestures - IRJET Journal
This document presents a study on sign language identification based on hand gestures. The researchers aim to develop a system that can recognize American Sign Language gestures from video sequences. They use two different models - a Convolutional Neural Network (CNN) to analyze the spatial features of video frames, and a Recurrent Neural Network (RNN) to analyze the temporal features across frames. The document discusses the methodology used, including data collection from videos, pre-processing of frames, feature extraction using CNN models, and gesture classification. It also provides a literature review on previous studies related to sign language recognition and communication systems for deaf people.
Gesture recognition using artificial neural network, a technology for identify... - NidhinRaj Saikripa
This paper presents a technology for identifying any type of body motion, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language as well; the technology is aimed at speech-impaired individuals.
Development of Sign Signal Translation System Based on Altera's FPGA DE2 Board - Waqas Tariq
The main aim of this paper is to build a system capable of detecting and recognizing the hand gesture in an image captured by a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented for this purpose: image processing smooths the image to ease the subsequent steps in translating the hand sign signal, and the algorithm translates the numerical hand sign, with the result displayed on the seven-segment display. Altera's Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. Using SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board; then, under Nios II EDS, the hand sign translation algorithm is coded in the C programming language. Being able to recognize hand sign signals from images can help humans control a robot and other applications requiring only a simple set of instructions, provided a CMOS sensor is included in the system.
Basic Gesture Based Communication for Deaf and Dumb is an Application which converts Input Gesture to Corresponding text. It is observed that people having Speech or Listening Disability face many communication problem while interacting with other people. Also it is not easy for people without such disability to understand what the opposite person wants to say with the help of the gesture he or she may be showing. In order to overcome this barrier we made an attempt of creating an application which will detect these gesture and provide a textual output enabling a smoother process of communication. There is a lot of research being done on Gesture Recognition. This Project will help the users ie the deaf and dumb people to communicate with other people without having any barriers due their disability.
Hand Gesture Recognition using OpenCV and Pythonijtsrd
Hand gesture recognition system has developed excessively in the recent years, reason being its ability to cooperate with machine successfully. Gestures are considered as the most natural way for communication among human and PCs in virtual framework. We often use hand gestures to convey something as it is non verbal communication which is free of expression. In our system, we used background subtraction to extract hand region. In this application, our PCs camera records a live video, from which a preview is taken with the assistance of its functionalities or activities. Surya Narayan Sharma | Dr. A Rengarajan "Hand Gesture Recognition using OpenCV and Python" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-2 , February 2021, URL: https://www.ijtsrd.com/papers/ijtsrd38413.pdf Paper Url: https://www.ijtsrd.com/computer-science/other/38413/hand-gesture-recognition-using-opencv-and-python/surya-narayan-sharma
Similar to Kinect Sensor based Indian Sign Language Detection with Voice Extraction (20)
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Kinect Sensor based Indian Sign Language Detection with Voice Extraction
Abstract
We are progressing towards new discoveries and inventions in the field of science and technology, but unfortunately very few inventions have addressed the problems faced by physically challenged people, who find it difficult to communicate with common people because sign language, their primary medium of communication, is mostly not understood by them. Studies show that many research works have attempted to eliminate this communication barrier, but those works rely on microcontrollers or other complicated techniques. Our study advances this process by using the Kinect sensor, a highly sensitive motion-sensing device with many other applications. Our workflow runs from capturing an image of the body, to conversion into a skeletal image, through image processing and feature extraction on the detected image, to an output consisting of the sign's meaning and its voice. The experimental results of our proposed algorithm are very promising, with an accuracy of 94.5%.
Keywords: Hidden Markov Model (HMM), Image Processing, Kinect Sensor, Skeletal Image
1. Introduction
For a very long time now, we have been experiencing a better life due to the existence of various electronic systems and sensing elements in almost every field. Physically challenged people find it easier to communicate with each other and with common people using different sets of hand gestures and body movements. We hereby provide an aid that lets them express themselves efficiently in front of common people, wherein their sign language is automatically converted into text and speech. Their hand and body gestures are taken as inputs by the sensor, making the signs easier to understand.
This is a machine-to-human interaction system that uses a Kinect sensor and Matlab to process the input data.
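The overall chain of this system, as described in this paper (capture, segmentation, hand cropping, matching against a gesture database, then text and voice output), can be outlined in code. The following is an illustrative Python sketch with simplified data types, not the authors' Matlab implementation; every function name and the gesture database are hypothetical placeholders.

```python
# Illustrative outline of the recognition chain (Python, not the authors' Matlab).
# Stage names and the gesture database are hypothetical placeholders.

def crop_hand(segmented_body):
    """Crop the hand region out of the segmented body image."""
    return segmented_body["hand"]  # placeholder: real code locates the hand joints

def match_gesture(hand_image, database):
    """Compare the cropped hand against templates in the gesture database."""
    for meaning, template in database.items():
        if template == hand_image:
            return meaning
    return None

def recognize(segmented_body, database):
    """Full chain: crop the hand, match it, return text for speech output."""
    hand = crop_hand(segmented_body)
    meaning = match_gesture(hand, database)
    return meaning if meaning is not None else "unrecognized gesture"

# Hypothetical example: the gesture for the number four.
db = {"four": "4-finger-pattern"}
print(recognize({"hand": "4-finger-pattern"}, db))  # prints: four
```

In the real system the template comparison is not an equality test but the feature-extraction and HMM matching described in the following sections.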
Numerous researches have been done to date, but this paper provides a direct and flexible system for deaf and mute people. It extracts voice from the human gestures of sign language and also generates images and text corresponding to the gestures given to the system. The very first step is to present the gesture as input to the Kinect sensor; the sensor senses the data and a 3-D image is created. This data is then transferred to Matlab, where it is processed through image processing and feature extraction using different segmentation techniques and the Hidden Markov Model (HMM) algorithm. From the complete segmented body, only the image of the hand is cropped; the gesture of that hand is then compared with the images available in the database, and if they match, speech and text are obtained as output, making it easy for common people to understand. By this, disabled people will be confident enough to express their views anywhere and everywhere despite being physically challenged.

Authors: Shubham Juneja (M.Tech first-year student, Material Science Programme, Indian Institute of Technology Kanpur, Uttar Pradesh, India; junejashubh@gmail.com), Chhaya Chandra (B.E., Electrical and Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India; chhaya.chandra02@gmail.com), P. D. Mahapatra (B.E., Electrical and Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India; pushpadas27495@gmail.com), Siddhi Sathe (B.E. final-year student, Department of Electrical & Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India; sathesiddhi1996@gmail.com), Nilesh B. Bahadure (Assoc. Professor, Department of Electronics & Telecommunication Engineering, MIT College of Railway Engineering & Research, Solapur, Maharashtra, India; nbahadure@gmail.com), and Sankalp Verma (Assoc. Professor, Department of Electrical & Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India; sankalpverma99@gmail.com).

International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018, ISSN 1947-5500, https://sites.google.com/site/ijcsis/
2. Literature Review
Sign language detection is considered an efficient way for physically challenged people to communicate. Many researchers have studied and investigated different algorithms to make the process easier.
Gunasekaran and Manikandan [1] have worked on a sign-language detection technique using a PIC microcontroller. The authors state that their method is better because it solves the real-time problems faced by disabled people. Their work involves extraction of voice as soon as any sign language is detected.
Kiratey Patil et al. [2] worked on detection of American Sign Language. Their work is based on reading American Sign Language, converting it into English, and flashing the output on an LCD. In this way their work may close the communication gap between common and disabled people.
Tavari et al. [3] worked on recognition of Indian sign languages from hand gestures. They proposed an idea of recognizing images formed by different hand-movement gestures, using a web camera in their work. For identifying signs and translating text to voice, an Artificial Neural Network has been used.
Simon Lang [4] worked on sign language detection. He proposed a system that uses a Kinect sensor instead of a web camera. Of nine signs performed by many people, a 97% detection rate was seen for eight signs. The important body parts are sensed easily by the Kinect sensor, and a Markov model is used for continuous detection.
The SignWriting system proposed by Cayley et al. [5] aims to help deaf people acquire written literacy in sign language using a stylus-and-screen device. In another paper they have provided databases for enhancing such studies, so that sequences of characters can be stored and retrieved to represent the sign language and then edited. They are researching further in order to enhance their sensing algorithm.
According to Singha and Das [6], several Indian sign language alphabets have been recognized through skin filtering, hand cropping, feature extraction, and classification using Eigen-value weighted Euclidean distance. Of the 26 alphabets, only the dynamic alphabets 'H' and 'J' were not taken into account; they will be considered in their future studies.
According to Xiujuan Chai et al. [7], 3-D motion tracking of the hand and body using the Kinect sensor is more effective and clear. This makes sign language detection easier.
In our earlier work [8] the efficiency was 92%, but now the efficiency has increased to 94.5%. We have used a very simple algorithm here rather than using fuzzy c-means (FCM).
From the above studies, it is observed that some methods are proposed only for hand gesture recognition and some only for feature extraction. It is also understood from the survey that no precise idea of feature extraction in a simple way has been presented. This problem is addressed in our study: we have proposed an algorithm and have also used the HMM technique. Gestures are identified easily, the information is matched with our preset database, and voice is extracted. This process enables common people to understand sign language easily.
3. Methodology
Our proposed algorithm follows these steps:
1. After a body is detected in front of the Microsoft Xbox Kinect sensor shown in Fig. 1, the sensor locates the joints of the body by pointing them out, and hence we get a skeletal image.
Fig. 1 Kinect Sensor
2. The segmented image of the body is then formed from the skeletal image. The area of the hands where the signs are captured is cropped out of the whole segmented image, and the cropped image is converted into dots and dashes. The length of a dash is 4 units, and the spacing between two lines of dashes is also 4 units.
3. Through observation we found that wherever the length of a dash is greater than 4 units, it corresponds to the image of the cropped hand.
4. In our proposed algorithm we use the concept of loops to detect the black points, i.e., the spaces between the dashes. This detection of black points determines the position of the dashes by successive subtraction of points in repeated iterations.
5. The basic rule behind this work is that, after successive subtraction of points, if the value equals 4 there is no data, and if the value is greater than 4 it belongs to the actual image of the cropped hand.
6. A problem now arises: how do we detect the location of the fingers? The algorithm for this is based on forming matrices of the black points detected earlier. In an iteration, the matrix coordinates of a finger are four times the number of lines of dashes.
7. To highlight the fingers, we plot star points at the same coordinates as the fingers. For more precise feature extraction, the image processing is followed by filtration of the image; that is, the star points plotted on the figure are checked against the following conditions:
• whether they fall on a straight line, either horizontal or vertical;
• whether they fall on a constant slope;
• whether they fall within a 4-unit coordinate difference.
8. The filtration then eliminates the identical points, which we call garbage points. We now have the filtered image, but feature extraction continues in order to identify each finger, i.e., whether it is the index finger, middle finger, ring finger, little finger, or thumb.
Table 1 below shows the ranges of coordinates in which the fingers are detected.
Table 1: Range of coordinates of fingers

S No.  Finger  Range of coordinates
1      Index   80-88
2      Middle  60-68
3      Ring    44-52
4      Little  32-40
5      Thumb   104-112
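The gap test of steps 4-5 and the finger ranges of Table 1 can be sketched as follows. This is an illustrative Python version, not the Matlab code; it assumes the black-point positions arrive as a sorted list of coordinates, and uses the 4-unit spacing and coordinate ranges stated above.

```python
# Illustrative sketch of steps 4-5 and Table 1 (Python, not the authors' Matlab).
# Black points are the gaps between dashes; successive subtraction of their
# positions yields 4 where there is no data and more than 4 over the hand.

FINGER_RANGES = {            # coordinate ranges taken from Table 1
    "little": (32, 40),
    "ring":   (44, 52),
    "middle": (60, 68),
    "index":  (80, 88),
    "thumb":  (104, 112),
}

def hand_segments(black_points):
    """Return the intervals where successive differences exceed 4 units (hand data)."""
    segments = []
    for a, b in zip(black_points, black_points[1:]):
        if b - a > 4:        # gap larger than the 4-unit dash spacing
            segments.append((a, b))
    return segments

def classify_finger(coordinate):
    """Map a detected star-point coordinate to a finger via Table 1."""
    for finger, (lo, hi) in FINGER_RANGES.items():
        if lo <= coordinate <= hi:
            return finger
    return None              # a garbage point, outside every finger range

print(hand_segments([0, 4, 8, 20, 24]))  # [(8, 20)] - hand data between 8 and 20
print(classify_finger(84))               # index
```

A coordinate that falls in none of the ranges is treated here as a garbage point, mirroring the elimination in step 8.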
The flow chart of the above algorithm is shown in Fig. 2.
Fig. 2 Flow chart of the proposed algorithm
4. Hidden Markov Model
The HMM [5], [9] is an algorithm in which the actual stages of the work going on inside the system are not visible; only the final output after the whole processing is visible.
The HMM works on probability: it models the input data with hidden variables, relates them to the various observations, and processes those variables through a Markov process. The HMM involves a four-stage process:
• Filtering: computing the distribution over the hidden states at the current step, given the observations received so far.
• Smoothing: the same computation as filtering, but for a state somewhere within the past of the sequence, wherever needed.
• Most likely explanation: different from the above two stages; it is used when the HMM is applied to a different class of problems, to find the overall most probable sequence of hidden states.
• Statistical significance: used to obtain statistical data and evaluate the probability of the observed outcome.
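The filtering stage above can be illustrated with the standard forward recursion for an HMM. The toy Python example below uses a two-state hand model with invented transition and emission probabilities (not the paper's trained gesture model) and computes the belief over hidden states after each observation.

```python
# Toy forward-algorithm (filtering) sketch for a two-state HMM.
# All probabilities are invented for illustration; the paper's actual
# gesture model is matched against its own preset database.

def forward_filter(observations, start, trans, emit):
    """Return P(hidden state | observations so far), normalised at each step."""
    states = list(start)
    # initialise with the first observation
    belief = {s: start[s] * emit[s][observations[0]] for s in states}
    total = sum(belief.values())
    belief = {s: p / total for s, p in belief.items()}
    for obs in observations[1:]:
        # predict: propagate the belief through the transition model
        predicted = {s: sum(belief[r] * trans[r][s] for r in states) for s in states}
        # update: weight by the emission probability of the new observation
        belief = {s: predicted[s] * emit[s][obs] for s in states}
        total = sum(belief.values())
        belief = {s: p / total for s, p in belief.items()}
    return belief

start = {"open": 0.5, "closed": 0.5}
trans = {"open":   {"open": 0.8, "closed": 0.2},
         "closed": {"open": 0.3, "closed": 0.7}}
emit = {"open":   {"spread": 0.9, "fist": 0.1},
        "closed": {"spread": 0.2, "fist": 0.8}}

belief = forward_filter(["spread", "spread", "fist"], start, trans, emit)
```

After observing two "spread" frames and then a "fist" frame, the belief shifts towards the "closed" state, which is exactly the filtering behaviour described above.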
5. Result
Finally, after the whole image of the fingers, that is, a complete hand, has been detected, an ANDing operation is carried out in Matlab for the final output.
The detected image of the sign is searched for in the database for its meaning; as soon as a match is found, the search is complete and we get the final output along with an image of the meaning of the sign and its voice.
As an example, if the number 4 is to be detected by the Kinect sensor, the person gestures 4 using his hands. The Kinect captures the skeletal image of the body as shown in Fig. 3.
Fig. 3 Skeletal image
After the skeletal image, the image is converted into a depth image as in Fig. 4.
Fig. 4 Depth image
Then the image is converted into its segmented image
as shown in Fig. 5.
Fig. 5 Segmented image
The image of the hand is cropped from the segmented
image as shown in Fig. 6.
Fig. 6 Cropped image
The cropped image is then converted into a figure
with dots and dashes as shown in Fig. 7.
Fig. 7 Image with dots and dashes
The filtration of the figure after star-marking it to detect the fingers is done in 3 steps, as shown in Fig. 8.
Fig. 8 Filtration of figure after star marking
The detailed information of the detected fingers is shown in the command window, as in Fig. 9.
Fig. 9 Detailed information of fingers detected
After the filtration is done, the database is searched for a match, and hence we get an output in the form of an image, as shown in Fig. 10(a), and voice, which is plotted in the form of a histogram as shown in Fig. 10(b).
Fig. 10(a) Output in the form of image
Fig. 10(b) Voice in the form of histogram
The overall output for various inputs given to the system is shown in Fig. 11.
Fig. 11 [a] Cropped images, [b] Image of the meaning of signs, [c] Histogram image of voice output
A detailed analysis of the various outputs with respect to the given inputs is tabulated in Table 2.

Table 2: Detailed Analysis (total no. of attempts per set = 50)

S No.  No. of correct attempts  No. of wrong attempts  Accuracy (%)
1      50                       0                      100
2      50                       0                      100
3      49                       1                      96
4      48                       2                      92
5      49                       1                      96
6      49                       1                      96
7      47                       3                      88
8      47                       3                      88

Accuracy = (No. of correct attempts − No. of wrong attempts) / Total no. of attempts
Averaging over the eight sets, we get a total accuracy of 94.5%.
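The accuracy formula and the 94.5% figure can be verified from the per-set counts in Table 2 with a few lines of code (an illustrative Python check, using the correct/wrong counts from the table):

```python
# Reproduce the per-set accuracy figures of Table 2 and their 94.5% average.
# Each set has 50 attempts; accuracy = (correct - wrong) / total.

trials = [(50, 0), (50, 0), (49, 1), (48, 2),
          (49, 1), (49, 1), (47, 3), (47, 3)]  # (correct, wrong) per table row
TOTAL = 50

accuracies = [100 * (c - w) / TOTAL for c, w in trials]
overall = sum(accuracies) / len(accuracies)
print(accuracies)  # [100.0, 100.0, 96.0, 92.0, 96.0, 96.0, 88.0, 88.0]
print(overall)     # 94.5
```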
6. Conclusions and Future work
With reference to all the earlier studies, our work provides better accessibility, with a simpler algorithm and more precise output.
Since the programming is done for detection of the left hand, the coordinates are taken accordingly. Our algorithm gives all the relevant information about the coordinates of each and every finger detected.
This system is very flexible and user-friendly: the user can be of any age, gender, size, or colour, and the results will be the same. However, the intensity of light and the distance of the body from the sensor affect the efficiency. For the devices to work effectively, it is suggested to keep the Kinect sensor at a height of about 62 cm from the ground, with the body to be detected at a distance of about 90 cm.
In our earlier work [8], the exact locations of the fingers and detailed information about them were not determined; we have overcome these problems here.
The star points that we marked earlier are not completely filtered; very few points still remain. Sometimes misinterpretation of detected fingers also occurs, but only in about one case out of ten.
Acknowledgments
We would like to extend our gratitude and sincere thanks to all the people who have taken a great deal of interest in our work and helped us with valuable suggestions.
References
[1] K. Gunasekaran and R. Manikandan, “Sign
language to speech translation system using pic
microcontroller”, International Journal of
Engineering and Technology, vol. 05, no. 02, pp.
1024–1028, April 2013.
[2] K. Patil, G. Pendharkar, and G. N. Gaikwad,
“American sign language detection”, International
Journal of Scientific and Research Publications, vol.
04, pp. 01–06, November 2014.
[3] N. V. Tavari, A. V. Deorankar, and P. N. Chatur,
“Hand gesture recognition of Indian sign language to
aid physically impaired people”, International Journal
of Engineering Research and Applications, pp. 60–
66, April 2014.
[4] S. Lang, “Sign language recognition with
Kinect”, Master’s thesis, Freie University Berlin,
2011.
[5] C.-Y. Kao and C.-S. Fahn, “A human-machine
interaction technique: hand gesture recognition based
on Hidden Markov Models with trajectory of hand
motion”, Elsevier Advanced in Control Engineering
and Information Science, vol. 15, pp. 3739–3743,
2011.
[6] J. Singha and K. Das, “Indian sign language
recognition using Eigen- value weighted Euclidian
distance based classification technique”, International
Journal of Advanced Computer Science and
Applications, vol. 04, no. 02, pp. 188–195, 2013.
[7] X. Chai, G. Li, Y. Lin, Z. Xu, Y. Tang, X. Chen,
and M. Zhou, “Sign language recognition and
translation with Kinect”, in IEEE International
Conference on Automatic Face and Gesture
Recognition, April 2013.
[8] N. B. Bahadure, S. Verma, S. Juneja, C. Chandra, D. K. Mishra, and P. D. Mahapatra, “Sign language detection with voice extraction in Matlab using Kinect sensor”, International Journal of Computer Science and Information Security, vol. 14, no. 12, pp. 858–863, December 2016.
[9] T. Starner and A. Pentland, “Visual recognition of
American sign language using Hidden Markov
models”, in Proceedings of International workshop
face and gesture recognition, 1995, pp. 189–194.
[10] S. R. Ganapathy, B. Aravind, B. Keerthana, and
M. Sivagami, “Conversation of sign language to
speech with human gestures”, in Elsevier 2nd
International Symposium on Big Data and Cloud
Computing, vol. 50, 2015, pp. 10–15.
[11] M. Boulares and M. Jemni, “3d motion
trajectory analysis approach to improve sign
language 3d-based content recognition”, in Elsevier
Proceedings of International Neural Network Society
Winter Conference, vol. 13, 2012, pp. 133–143.
[12] D. Vinson, R. L. Thompson, R. Skinner, and G.
Vigliocco, “A faster path between meaning and
form? Iconicity facilitates sign recognition and
production in British sign language”, Elsevier Journal
of Memory and Language, vol. 82, pp. 56–85, March
2015.
[13] K. Cormier, A. Schembri, D. Winson, and E.
Orfanidou, “A faster path between meaning and
form? Iconicity facilitates sign recognition and
production in British sign language”, Elsevier Journal
of Cognition, vol. 124, pp. 50–65, May 2012.
[14] H. Anupreethi and S. Vijayakumar, “Msp430
based sign language recognizer for dumb patient”, in
Elsevier International Conference on Modeling
Optimization and Computing, vol. 38, 2012, pp.
1374–1380.
[15] C. Guimaraes, J. F. Guardez, and S. Fernanades,
“Sign language writing acquisition- technology for
writing system”, in IEEE Hawaii International
Conference on System Science, 2014, pp. 120–129.
[16] C. Guimaraes, J. F. Guardezi, S. Fernanades, and
L. E. Oliveira, “Deaf culture and sign language
writing system- a database for a new approach to
writing system recognition technology”, in IEEE
Hawaii International Conference on System Science,
2014, pp. 3368–3377.
[17] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro
- Fuzzy and Soft Computing. Eastern Economy
Edition Prentice Hall of India, 2014.
[18] S. C. Pandey and P. K. Misra, “Modified
memory convergence with fuzzy PSO”, in
Proceedings of the World Congress on Engineering,
vol. 01, 2 - 4 July 2007.
[19] M. S. Chafi, M.-R. Akbarzadeh-T, M.
Moavenian, and M. Ziejewski, “Agent based soft
computing approach for component fault detection
and isolation of CNC x - axis drive system”, in
ASME International Mechanical Engineering
Congress and Exposition, Seattle, Washington, USA,
November 2007, pp. 1–10.
[20] S. C. Pandey and P. K. Misra, “Memory
convergence and optimization with fuzzy PSO and
ACS”, Journal of Computer Science, vol. 04, no. 02,
pp. 139–147, February 2008.
[21] N. B. Bahadure, A. K. Ray, and H. P. Thethi,
“Performance analysis of image segmentation using
watershed algorithm, fuzzy c – means of clustering
algorithm and Simulink design”, in IEEE
International Conference on Computing for
Sustainable Global Development, New Delhi, India,
March 2016, pp. 30–34.
[22] H.-D. Yang, “Sign language recognition with
Kinect sensor based on conditional random fields”,
Sensors Journal, vol. 15, pp. 135–147, December
2014.