This document summarizes research on sign language recognition systems. It discusses previous work on image-based sign language recognition using approaches like colored gloves, geometric feature extraction, and orientation histograms. It then describes the proposed system, an Android application that uses hand gesture recognition with real-time text and speech conversion. Key steps include gesture extraction using background subtraction and blob detection, gesture matching, and text-to-speech conversion. The system allows users to define their own sign language database to facilitate communication across different sign languages.
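As a rough illustration of the gesture-extraction step, the following Python/OpenCV sketch combines background subtraction with largest-blob detection; the subtractor parameters and morphology kernel are assumptions, not values from the app.

import cv2

# Assumed parameters; the application's actual settings are not specified.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # foreground mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)        # largest blob = hand
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()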
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti... (CSCJournals)
Sign language is the fundamental communication method among people with speech and hearing impairments, yet the rest of the world has little knowledge of it. "Sign Language Communicator" (SLC) is designed to bridge the language barrier between sign language users and everyone else. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation. The system is also useful to sign language learners, who can use it to practice. During the research, available human-computer interaction techniques for posture recognition were tested and evaluated, and a series of image processing techniques with Hu-moment classification was identified as the best approach. To improve accuracy, a new height-to-width ratio filtration step was implemented alongside the Hu moments. The system recognizes selected sign language signs with 84% accuracy against an uncontrolled background, with small lighting adjustments.
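A minimal sketch of Hu-moment matching with the height-to-width ratio filter described above; the ratio tolerance and distance metric are assumptions, since the paper's values are not given here.

import cv2
import numpy as np

def features(binary_img):
    """Height-to-width ratio and log-scaled Hu moments of the largest contour."""
    cnts, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log scale for comparability
    return h / w, hu

def classify(binary_img, templates, ratio_tol=0.2):
    """templates: dict label -> (ratio, hu). ratio_tol is an assumed tolerance."""
    ratio, hu = features(binary_img)
    best, best_d = None, np.inf
    for label, (t_ratio, t_hu) in templates.items():
        if abs(ratio - t_ratio) > ratio_tol:   # ratio filter rejects candidates early
            continue
        d = np.linalg.norm(hu - t_hu)
        if d < best_d:
            best, best_d = label, d
    return best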
A Real-Time Letter Recognition Model for Arabic Sign Language Using Kinect an... (INFOGAIN PUBLICATION)
The objective of this research is to develop a supervised machine learning hand-gesture model to recognize Arabic Sign Language (ArSL) using two sensors: Microsoft's Kinect and a Leap Motion Controller. The proposed model relies on supervised learning to predict a hand pose from two depth images and defines a classifier algorithm that dynamically transforms gestural interactions, based on the 3D positions of hand-joint directions, into their corresponding letters, so that live gesturing can be compared and letters displayed in real time. This research is motivated by the need to increase the opportunity for the Arabic hearing-impaired to communicate with ease using ArSL, and it is the first step towards building a full communication system for the Arabic hearing-impaired that can improve the interpretation of detected letters using fewer calculations. To evaluate the model, participants were asked to gesture the 28 letters of the Arabic alphabet multiple times each to create an ArSL letter dataset built from the depth images retrieved by these devices. Participants were later asked to gesture letters to validate the classifier algorithm developed. The results indicated that using both devices was essential: the model detected and recognized 22 of the 28 Arabic letters with 100% accuracy.
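The abstract does not name the classifier, so the following is only a hypothetical stand-in: a k-nearest-neighbour model over concatenated hand-joint direction features, with the file names invented for illustration.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# X: one row per sample, concatenating unit direction vectors of hand joints
# from the Leap Motion with depth features from the Kinect; y: letter labels.
# File names are hypothetical placeholders.
X_train = np.load("arsl_features.npy")
y_train = np.load("arsl_labels.npy")

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

def recognize(frame_features):
    """Map a live frame's joint-direction feature vector to a letter."""
    return clf.predict(frame_features.reshape(1, -1))[0]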
Design and Development of a 2D-Convolution CNN model for Recognition of Handw... (CSCJournals)
Owing to the innumerable appearances caused by different writers, their writing styles, technical environment differences, and noise, handwritten character recognition has always been one of the most challenging tasks in pattern recognition. The emergence of deep learning has provided a new direction to break the limits of decades-old traditional methods. Many scripts are used around the world by millions of people, and handwritten character recognition studies of several of them are found in the literature. Different hand-crafted feature sets have been used in these studies: feature-based approaches derive important properties from the test patterns and employ them in a more sophisticated classification model. Feature extraction using Zernike moments and polar harmonic transformation techniques was also performed, achieving moderate classification accuracy. The problems faced with these techniques led us to a CNN-based recognition approach, which learns the feature vector from the training character image samples without any hand-crafted feature design. This paper presents a deep learning paradigm using a Convolutional Neural Network (CNN) implemented for handwritten Gurumukhi and Devanagari character recognition (HGDCR). In the present experiment, a 34-layer CNN was trained on a 35-class self-generated handwritten Gurumukhi dataset and a 60-class (50 alphabet and 10 digit) handwritten Devanagari dataset on a GPU (Graphics Processing Unit) machine. The experiment yielded an average recognition accuracy of more than 92% on the handwritten Gurumukhi dataset and 97.25% on the handwritten Devanagari dataset. Training and classification with this network design also ran about 10 times faster than on a moderately fast CPU. The experimental results demonstrate the advantage of the framework.
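A compact stand-in for the kind of network described (the authors' 34-layer architecture is not reproduced here); this Keras sketch assumes 32x32 grayscale inputs and the 35 Gurumukhi classes.

import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative architecture only; input size and layer choices are assumptions.
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(35, activation="softmax"),   # 35 Gurumukhi classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)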
A Comprehensive Study On Handwritten Character Recognition System (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Gesture Acquisition and Recognition of Sign Language (IRJET Journal)
The document discusses sign language recognition techniques. It begins with an introduction to sign languages and issues faced by deaf communities in communication. It then reviews recent work in sign language recognition, covering approaches used like hand tracking, feature extraction, and classification methods. Finally, it discusses existing challenges and future research opportunities in sign language recognition.
This paper introduces a new concept for the establishment of a human-robot symbiotic relationship. The system is based on knowledge-based image processing methodologies for model-based vision and intelligent task scheduling for an autonomous social robot. The paper aims to develop an automatic translation of static gestures of alphabets and signs in American Sign Language (ASL), using a neural network with the backpropagation algorithm. The system deals with images of bare hands to achieve the recognition task. For each individual sign, 10 sample images were considered, so 300 samples were processed in total. To compare them against the training set of signs, the sample images are converted into feature vectors. Experimental results reveal that the system can recognize selected ASL signs with an accuracy of 92.00%. Finally, the system was implemented by issuing ASL hand gesture commands to a robot car named "Moto-robo".
Character recognition of Kannada text in scene images using neural (IAEME Publication)
The document summarizes a proposed method for recognizing Kannada characters in low-resolution scene images using neural networks. It involves extracting zone-wise horizontal and vertical profile features from character images. During training, features are extracted from samples and used to train a neural network; during testing, features are extracted from test images and recognized using the trained neural network classifier. The method achieved an average recognition accuracy of 92% on 490 Kannada character images captured with mobile phones under varying conditions.
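A sketch of zone-wise horizontal and vertical profile features; the 4x4 zone grid and the averaging are assumptions about details the summary does not give.

import numpy as np

def zone_profiles(binary_img, grid=(4, 4)):
    """binary_img: 2-D array with 1 = character pixel, 0 = background.
    Returns one horizontal- and one vertical-profile value per zone."""
    feats = []
    for band in np.array_split(binary_img, grid[0], axis=0):
        for zone in np.array_split(band, grid[1], axis=1):
            feats.append(zone.sum(axis=1).mean())   # mean horizontal profile
            feats.append(zone.sum(axis=0).mean())   # mean vertical profile
    return np.asarray(feats)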
This document is a project report submitted by Mohammad Saiful Islam for a CMPUT 551 course on December 21st, 2010 regarding Bengali handwritten digit recognition using support vector machines. The report discusses building a dataset of Bengali digits written by the author, preprocessing and feature extraction steps, and using a multiclass support vector machine with different kernels for classification. The author hypothesizes that SVM will perform well, RBF kernels will improve performance over linear and polynomial kernels, and using raw pixel values can achieve good accuracy, though testing on different writers may reduce performance. Experiments are planned to test these hypotheses using the collected dataset.
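A sketch of the report's planned setup, raw pixel values fed to a multiclass SVM with different kernels; the file names, image size, and hyperparameters are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical file names; 28x28 images assumed.
X = np.load("bengali_digits.npy").reshape(-1, 28 * 28) / 255.0
y = np.load("bengali_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Compare the three kernels the report hypothesizes about.
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, C=10, gamma="scale")
    clf.fit(X_tr, y_tr)
    print(kernel, accuracy_score(y_te, clf.predict(X_te)))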
Recognition of Facial Emotions Based on Sparse Coding (IJERA Editor)
This paper deals with the recognition of natural emotions from human faces, a fascinating subject with a wide range of potential applications such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on lab-controlled data, which is not representative of the environments faced in real applications. To robustly recognize facial emotions in natural situations, this paper proposes an approach called Extreme Sparse Learning (ESL), which can jointly learn a dictionary (set of basis functions) and a non-linear classification model. The proposed approach combines the discriminative power of the Extreme Learning Machine (ELM) with the reconstruction property of sparse representation to enable accurate classification given noisy signals and imperfect data recorded in natural settings. Moreover, this work presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
Handwritten character recognition is one of the most challenging and ongoing areas of research in the field of pattern recognition. HCR research is mature for languages like Chinese and Japanese, but the problem is much more complex for Indian languages, and more complicated still for South Indian languages due to their large character sets and the presence of vowel modifiers and compound characters. This paper provides an overview of important contributions and advances in offline as well as online handwritten character recognition of Malayalam script.
Speech Recognition using HMM & GMM Models: A Review on Techniques and Approaches (ijsrd.com)
Many modes of communication are used between humans and computers, and gesture is considered one of the most natural in a virtual reality system. Gesture is a typical method of non-verbal communication for human beings, and we naturally use various gestures to express our intentions in everyday life. Gesture recognizers are supposed to capture and analyze the information transmitted by the hands of a person communicating in sign language. This is a prerequisite for automatic sign-to-spoken-language translation, which has the potential to support the integration of deaf people into society. This paper presents part of a literature review of ongoing research and findings on different techniques and approaches to gesture recognition using Hidden Markov Models in a vision-based approach.
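A sketch of the usual HMM recipe such reviews survey: train one model per gesture on observation sequences and classify by maximum log-likelihood. The feature dimensionality, state count, and the use of hmmlearn are assumptions.

import numpy as np
from hmmlearn import hmm

def train_models(sequences_by_label, n_states=5):
    """sequences_by_label: dict label -> list of (T_i, d) feature arrays."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.concatenate(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)            # Baum-Welch over all sequences of this label
        models[label] = m
    return models

def classify(models, seq):
    """Pick the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))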
A gesture recognition system for the Colombian sign language based on convolu... (journalBEEI)
Sign languages (or signed languages) use visual techniques, primarily with the hands, to transmit information and enable communication for deaf people. Such a language is traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To address this problem we propose an autonomous model based on convolutional networks to translate Colombian Sign Language (CSL) into plain Spanish text. The scheme uses characteristic images of each static sign of the language, drawn from a base of 24,000 images (1,000 images per category, 24 categories), to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any viewing angle. The performance evaluation showed that the system recognizes all 24 signs used, with an 88% recognition rate.
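A sketch of how a NASNet-type network could be fine-tuned for the 24 static CSL signs; the input resolution, frozen backbone, and training schedule are assumptions, not the authors' settings.

import tensorflow as tf

# ImageNet-pretrained NASNet backbone; head replaced for 24 sign categories.
base = tf.keras.applications.NASNetMobile(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze backbone initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(24, activation="softmax"),   # 24 sign categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)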
Handwriting Recognition Using Deep Learning and Computer Vision (Naiyan Noor)
This document presents a method for handwriting recognition using deep learning and computer vision. It discusses preprocessing images by removing noise and converting to grayscale. Thresholding is used to separate darker text pixels from lighter background pixels, and the image is then segmented into individual lines and words. Python tools such as TensorFlow, Spyder, and Jupyter Notebook are used. The goal is to build a system that can recognize text in images and display it to users. Future work may include recognizing cursive text and additional languages.
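A sketch of the preprocessing chain described above (denoise, grayscale, threshold dark text from a light background); Otsu's method is assumed, since the summary does not name the thresholding technique.

import cv2

img = cv2.imread("page.png")                          # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.fastNlMeansDenoising(gray, None, 10)       # remove noise
# Invert so text pixels become white; Otsu picks the threshold automatically.
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# binary is now ready for line and word segmentation.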
The presentation describes an algorithm for recognizing Devanagari characters; Devanagari is the script in which Hindi is written. The algorithm automatically segments characters from an image of Devanagari text and then recognizes them. To extract the individual characters, the image is segmented several times using vertical and horizontal projections: lines are first separated from the document by taking the horizontal projection, and each line is then split into words by taking its vertical projection. A further step particular to Devanagari is required: the header line is removed by examining the horizontal projection of each word, after which the characters can be extracted by the vertical projection of the word without the header line.
The algorithm uses a Kohonen neural network for the recognition task. After the characters are separated from the image, each character matrix is downsampled to a fixed size so that recognition is size-independent. The matrix is then fed as input neurons to the Kohonen network, and the winning neuron identifies the recognized character. This mapping is stored during the network's training phase: random weights are first assigned from input neurons to output neurons, and then for each training sample the winning neuron is found as the one producing the maximum output. The weights of the winning neuron are then adjusted so that it responds to that pattern more strongly the next time.
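A sketch of the projection-based segmentation just described: the header line is taken as the row of maximum horizontal projection, erased, and characters are cut at gaps in the vertical projection. The width of the erased header band is an assumption.

import numpy as np

def runs(projection):
    """(start, end) pairs of consecutive indices where projection > 0."""
    spans, start = [], None
    for i, v in enumerate(projection > 0):
        if v and start is None:
            start = i
        elif not v and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(projection)))
    return spans

def segment_word(binary_word):
    """binary_word: 2-D array, 1 = ink. Remove header line, cut characters."""
    h_proj = binary_word.sum(axis=1)
    header = int(np.argmax(h_proj))                  # header-line row
    headless = binary_word.copy()
    headless[max(0, header - 1):header + 2, :] = 0   # erase a 3-row header band
    v_proj = headless.sum(axis=0)
    return [headless[:, a:b] for a, b in runs(v_proj)]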
This document summarizes a research paper on developing a speech and gesture recognition system for human-computer interaction using a self-organizing Markov map approach. The system consists of modules for gesture recognition, speech recognition and controlling a wheelchair robot. Gesture recognition involves extracting features from images of hand and head gestures. Speech recognition involves spectral coding of voice signals. A self-organizing Markov map is used to provide flexibility and robustness against noise. The system recognizes symbolic gestures and voice commands to control the movement of a wheelchair robot.
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION (ijnlc)
Sign language is a visual-gestural language used by deaf-dumb people for communication. As most people are unfamiliar with sign language, the hearing-impaired find it difficult to communicate with them. The communication gap between the hearing and the deaf can be bridged by means of human-computer interaction. The objective of this paper is to convert Dravidian (Tamil) sign language into text. The proposed method recognizes 12 vowels, 18 consonants, and the special character "Aytham" of the Tamil language by a vision-based approach. In this work, static images of the hand signs are obtained using a web/digital camera. The hand region is segmented by a threshold applied to the hue channel of the input image. The region of interest (from wrist to fingers) is then segmented using the reversed horizontal projection profile, and a Discrete Cosine Transform signature is extracted from the boundary of the hand sign. These features are invariant to translation, scale, and rotation. A sparse representation classifier is employed to recognize the 31 hand signs. The proposed method attained a maximum recognition accuracy of 71% against a uniform background.
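A sketch of the boundary-signature feature described above: hue-threshold the hand, trace its boundary, form the centroid-distance signature, and keep the leading DCT coefficients. The skin hue band and coefficient count are assumptions.

import cv2
import numpy as np
from scipy.fftpack import dct

def dct_signature(bgr_img, n_coeffs=32):
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 255, 255))  # assumed skin hue band
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
    boundary = max(cnts, key=cv2.contourArea).squeeze()   # (N, 2) boundary points
    centroid = boundary.mean(axis=0)
    signature = np.linalg.norm(boundary - centroid, axis=1)
    signature = signature / signature.max()               # normalize for scale
    return dct(signature, norm="ortho")[:n_coeffs]        # compact descriptor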
Handwritten Character Recognition: A Comprehensive Review on Geometrical Anal... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A Deep Neural Framework for Continuous Sign Language Recognition by Iterative... (ijtsrd)
Sign Language (SL) is a medium of communication for people with speech and hearing disabilities: a gesture-based language in which each distinct action of the hands means something. It is the only means of conversation for deaf people, yet it is very difficult for others to understand, so sign language recognition has become an important task; a real-time translator provides a medium for signers to communicate with the world. Previous methods employ sensor gloves, hat-mounted cameras, armbands, etc., which are awkward to wear and behave noisily. To alleviate this problem, a real-time gesture recognition system using Deep Learning (DL) is proposed, enabling improvements in gesture recognition performance. Jeni Moni | Anju J Prakash, "A Deep Neural Framework for Continuous Sign Language Recognition by Iterative Training: Survey", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30032.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/30032/a-deep-neural-framework-for-continuous-sign-language-recognition-by-iterative-training-survey/jeni-moni
Appearance based static hand gesture alphabet recognition (IAEME Publication)
This document describes a system for recognizing static hand gestures in American Sign Language (ASL). It discusses:
1) Using image processing techniques such as segmentation, morphological filtering, and contour tracing to extract features from images of hand gestures; features include the centroid, shape, and number of fingers (see the sketch after this list).
2) A classification approach using rule-based recognition over the extracted features to identify the gesture as a particular letter of the ASL alphabet.
3) The implementation of the system in MATLAB, including steps for image capture, preprocessing, feature extraction, and classification. Experimental results demonstrating recognition of the letter "A" are shown.
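A sketch (in Python/OpenCV rather than the paper's MATLAB) of the step-1 features: the centroid from image moments and a rough finger count from convexity defects; the defect-depth threshold is an assumption.

import cv2

def hand_features(binary_img):
    """binary_img: uint8 mask, 255 = hand. Returns (centroid, finger_count)."""
    cnts, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)
    m = cv2.moments(c)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    hull = cv2.convexHull(c, returnPoints=False)
    defects = cv2.convexityDefects(c, hull)
    valleys = 0
    if defects is not None:
        for s, e, f, depth in defects[:, 0]:
            if depth > 10000:          # assumed threshold for deep finger valleys
                valleys += 1
    fingers = valleys + 1 if valleys else 0
    return centroid, fingers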
Hand and wrist localization approach: sign language recognition (Sana Fakhfakh)
This document proposes a new method for hand detection and wrist localization to achieve automatic recognition of Arabic sign language gestures without constraints on clothing or background. The method involves:
1) Using marker-controlled watershed segmentation to localize the hand region.
2) Rotating the hand region vertically, dividing it into sections, and detecting the wrist position as the first line with the minimum of white pixels in the hand region and the maximum of black pixels in the background region, focusing the search on the lower sections to avoid detecting fingers (a sketch of this step follows the list).
3) Extracting shape-based features such as geometric moments and Zernike moments from the localized hand region to recognize Arabic digit sign gestures for sign language interaction.
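A sketch of the wrist search in step 2: with the hand mask rotated fingers-up, the wrist is taken as the narrowest row in the lower part of the mask. The fraction of rows searched is an assumption.

import numpy as np

def wrist_row(hand_mask, lower_fraction=0.4):
    """hand_mask: 2-D array, 1 = hand pixel, hand oriented fingers-up."""
    h = hand_mask.shape[0]
    start = int(h * (1 - lower_fraction))    # search only the lower sections
    widths = hand_mask[start:].sum(axis=1)   # white pixels per row
    return start + int(np.argmin(widths))    # narrowest row taken as the wrist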
Hand Written Character Recognition Using Neural Networks (Chiranjeevi Adi)
This document discusses a project to develop a handwritten character recognition system using a neural network. It will take handwritten English characters as input and recognize the patterns using a trained neural network. The system aims to recognize individual characters as well as classify them into groups. It will first preprocess, segment, extract features from, and then classify the input characters using the neural network. The document reviews several existing approaches to handwritten character recognition and the use of gradient and edge-based feature extraction with neural networks. It defines the objectives and methods for the proposed system, which will involve preprocessing, segmentation, feature extraction, and classification/recognition steps. Finally, it outlines the hardware and software requirements to implement the system as a MATLAB application.
Artificial Neural Network For Recognition Of Handwritten Devanagari Character (IOSR Journals)
1) The document discusses recognizing handwritten Devanagari characters using artificial neural networks and zone-based feature extraction.
2) It proposes extracting features from images by dividing them into zones and calculating average pixel distances to the image and zone centroids.
3) This zone-based feature vector is then input to a feedforward neural network for character recognition.
Automatic Isolated word sign language recognition (Sana Fakhfakh)
This paper suggests a new system to help the deaf and hearing-impaired community improve their connection with the hearing world and communicate freely; the most important aim is to give users a more natural way of communicating. To this end, we present a new process based on two levels: a static level aiming to extract the key head/hand points, and a dynamic level accumulating the key-point trajectory matrix. Our proposed approach also takes the signer-independence constraint into account. The SIGNUM database is used in the classification stage, and our system achieves a 94.3% recognition rate. Furthermore, processing time is reduced when the redundant-frame removal step is applied. The obtained results show the superiority of our system over state-of-the-art methods in terms of recognition rate and execution time.
IRJET- Sign Language Interpreter using Image Processing and Machine Learning (IRJET Journal)
This document describes a system to translate sign language gestures to text or audio using image processing and machine learning techniques. The system takes an image of a sign language gesture as input using a webcam. It then performs preprocessing steps like skin detection and edge detection. Features are extracted from the preprocessed image using a Histogram of Oriented Gradients algorithm. These features are fed into a Support Vector Machine classifier that has been trained on a dataset of 6000 images of English alphabet signs. The system is able to recognize the signs with 88% accuracy and translate them to text or audio output, aiding communication for deaf individuals.
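A sketch of the described HOG-plus-SVM pipeline; the image size, HOG cell layout, and SVM hyperparameters are assumptions, and scikit-image's hog is used for brevity.

import cv2
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(bgr_img):
    """Resize, grayscale, and compute a Histogram of Oriented Gradients vector."""
    gray = cv2.cvtColor(cv2.resize(bgr_img, (64, 64)), cv2.COLOR_BGR2GRAY)
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# X: HOG vectors for the 6000 training images of alphabet signs; y: labels.
# clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X, y)
# letter = clf.predict(hog_features(frame).reshape(1, -1))[0]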
Iaetsd appearance based American sign language recognition (Iaetsd Iaetsd)
This document summarizes a research paper on developing an automatic recognition system for static gestures of the American Sign Language (ASL) alphabet using image processing and neural networks. The system extracts features from images of bare hands using three different methods: edge detection, orientation histograms, and a modified Scale-Invariant Feature Transform (SIFT) algorithm. A neural network then classifies the feature vectors to recognize the ASL letters. Testing showed the modified SIFT approach achieved the highest recognition accuracy of 98.99%, outperforming the other methods. The system provides a signer-independent way to recognize ASL alphabets from images of bare hands without devices.
This document summarizes research conducted on competitors in the social action/anti-bullying advertising market. It describes the operations of three competitors: The Diana Award, an organization that recognizes young people making extraordinary impacts in their communities through various anti-bullying programs and campaigns; Bullies Out, an anti-bullying organization that provides help, support and information and works directly to address bullying; and BeatBully.org, the organization developing an anti-bullying advertising campaign that is researching these competitors to inform their own campaign.
This document discusses the use of fuzzy queries to retrieve information from databases. Fuzzy queries allow for imprecise or vague terms to be used in queries, similar to natural language. The document first provides background on limitations of traditional database queries. It then discusses how fuzzy set theory and membership functions can be applied to queries and data to handle uncertain terms. The proposed approach applies fuzzy queries to a relational database, defining linguistic variables and membership functions. This allows information to be retrieved based on fuzzy criteria and improves the ability to query databases using human-like terms. Benefits of fuzzy queries include more natural interaction and accounting for real-world data imperfections.
Software keyloggers are a fast growing class of invasive software often used to harvest confidential
information. One of the main reasons for this rapid growth is the possibility for unprivileged programs
running in user space to eavesdrop and record all thekeystrokes typed by the users of a system. The ability
to run in unprivileged mode facilitates their implementation and distribution, but,at the same time, allows
one to understand and model their behavior in detail. Leveraging this characteristic, we propose a new
detection technique that simulates carefully crafted keystroke sequences in input and observes the behavior
of the keylogger in output to unambiguously identify it among all the running processes. We have
prototyped our technique as an unprivileged application, hence matching the same ease of deployment of a
keylogger executing in unprivileged mode. We have successfully evaluated the underlying technique
against the most common free keyloggers. This confirms the viability of our approach in practical
scenarios. We have also devised potential evasion techniques that may be adopted to circumvent our
approach and proposed a heuristic to strengthen the effectiveness of our solution against more elaborated
attacks. Extensive experimental results confirm that our technique is robust to both false positives and
false negatives in realistic settings.
This study aims to enlighten the researchers about the details of process mining. As process mining is a new research area, it includes process modelling and process analysis, as well as business intelligence and data mining. Also it is used as a tool that gives information about procedures. In this paper classification of process mining techniques, different process mining algorithms, challenges and area of application have been explained.Therefore, it was concluded that process mining can be a useful technique with faster results and ability to check conformance and compliance.
As Diabetes Mellitus combined with other ailments will become a deadly combination, hence there
is an urgent need to break the link between diabetes and its related complications. For this purpose image
processing based analysis can potentially be helpful for earlier detection, education and treatment. Medical
image analysis of Diabetic patients with its related complications such as DR, CVD & Diabetic
Myonecrosis (i.e. on Retinal Images, Coronary angiographs, Electron micrographs, MRI etc) is to be the
aprioristic because of its more prevalence. Thus the main work of this paper is on literature review about
Diabetes and Imaging such as the Prevalence, Classification, Causes and Medical Imaging & Survey of
Image processing methods applied on Diabetic Related Causes.
Keywords — Image, segmentation, retinopathy, Myonecrosis,
Osteoarthritis (OA) is the most common form of arthritis seen in aged or older populations. It is caused
because of a degeneration of articular cartilage, which functions as shock absorption cushion in knee joint. OA
also leads sliding of bones together, cause swelling, pain, eventually and loss of motion. Nowadays, magnetic
resonance imaging (MRI) technique is widely used in the progression of osteoarthritis diagnosis due to the ability
to display the contrast between bone and cartilage. Usually, analysis of MRI image is done manually by a
physician which is very unpredictable, subjective and time consuming. Hence, there is need to develop automated
system to reduce the processing time. In this paper, a new automatic knee OA detection system based on feature
extraction and artificial neural network is developed. The different features viz GLCM texture, statistical, shape
etc. is extracted by using different image processing algorithms. This detection system consists of 4 stages, which
are pre-processing with ROI cropping, segmentation, feature extraction, and classification by neural network. This
technique results 98.5% of classification accuracy at training stage and 92% at testing stage.
Keywords — Artificial Neural Network (ANN), Gray Level Co-occurrence Matrix (GLCM),Knee
Joint, Magnetic Resonance Imaging (MRI), Osteoarthritis(OA).
This document summarizes a research paper on designing a fuel monitoring and control system for fuel stations. The system uses a PIC16F877A microcontroller to control the fuel pumping time based on the requested amount, environmental temperature, and other factors. It compensates the fuel volume based on temperature changes to provide accurate amounts to customers and dealers. The hardware components include a temperature sensor, keypad, LCD display, and relays to control the fuel pump. The software calculates the total pumping time based on temperature, request amount, and other variables. It was found that the electronic control system provided flexibility and accuracy over traditional flow sensor-based systems. The system could dispense various fluids by controlling the pumping time through software.
The aim is to develop an application that visualizes a computer keyboard using image processing. The virtual keyboard should be accessible and functional, and must provide input to the computer. With the help of a camera, an image of the keyboard is captured. Typing is captured by the camera as the user types on a keyboard simply drawn on cardboard or paper; the camera tracks finger movement while typing. This effectively provides a virtual keyboard.
As technology advances, more and more systems are introduced that look after the user's comfort. A few years ago, hard switches were used as keys. Traditional QWERTY keyboards are bulky and offer very little in terms of enhancements. Nowadays, soft-touch keypads are much more popular in the market; these keypads give an elegant look and a better feel. Current keyboards are static, and their interactivity and usability would increase if they were made dynamic and adaptable. Various on-screen virtual keyboards are available, but it is difficult to accommodate a full-sized keyboard on the screen, as it obstructs the view of the document being typed. A virtual keyboard has no physical appearance. Although other forms of virtual keyboards exist, they rely on specialized devices such as 3D cameras, which makes their practical implementation infeasible. The virtual keyboard we propose uses only a standard web camera, with no additional hardware, making the technology more beneficial and user-friendly.
The Internet of Things (IoT) involves internet-enabled devices transferring useable data, for example a sensor in a room to monitor and control the temperature. It is estimated that by 2020 there will be about 50 billion internet-enabled devices. The Internet of Things is presently being used in the fields of automobiles, agriculture, security surveillance, building management, smart homes, and health care. The IoT is expected to use low-cost computing devices with low energy consumption and limited impact on the environment [1].
This paper aims to describe a way of providing security to IT companies, scouting units, business organizations and volunteer groups. Among person identification methods, face recognition is known to be one of the most natural, since the face is the modality humans use to identify people in everyday life. Face detection differentiates faces from non-faces and is therefore essential for accurate security. The other strategy involves face recognition for marking the employees. A Raspberry Pi module is used for face detection and recognition, with the camera connected to the Raspberry Pi. An employee database is collected, which includes the employees' names, their images and ID numbers [2].
An RFID reader module is installed at the entrance of the organization, and all employees are provided with RFID cards. The Raspberry Pi system holds the employee database, and if an employee's details match the database, the employee can enter the company. With the help of this system, time is saved and recording employees is convenient, and the employees' details are sent to the corresponding head of the organization using IoT technology.
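A minimal sketch of the face-detection step on such a camera feed is shown below, using OpenCV's bundled Haar cascade; this is a common stand-in for the detection stage, not necessarily the pipeline the authors implemented.

```python
# Hypothetical sketch of Haar-cascade face detection on a Raspberry Pi camera feed.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Each detected face would next be matched against the employee database.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```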
This research paper presents a new technique called nomogram-based synthesis for synthesizing complex planar mechanisms without needing to solve nonlinear equations or use optimization techniques. The author applies this technique to synthesize a 6 bar-2 slider planar mechanism. A nomogram is constructed using four performance measures: time ratio, normalized stroke, minimum transmission angle, and maximum transmission angle. A five-step procedure utilizes the nomogram to synthesize the mechanism for desired time ratio and stroke values while maintaining transmission angle within recommended ranges. As an example, the technique is used to synthesize a mechanism with a time ratio of 2 and normalized stroke of 1.5, obtaining transmission angles between 108.5-112 degrees.
Content based image retrieval is the retrieval of images with respect to visual appearance, such as texture, shape and color. The methods, components and algorithms adopted in content based image retrieval are commonly derived from areas such as pattern recognition, signal processing and computer vision. The shape and color features are extracted by means of the wavelet transform and the color histogram. A new content based retrieval approach is proposed in this research paper, with algorithms designed around shape, shade and texture feature extraction. The discrete wavelet transform is implemented in order to compute the Euclidean distance, and clusters are calculated with the help of a modified K-Means clustering technique; the analysis is then made between the query image and the database images. MATLAB software is used to execute the queries. The K-Means abstraction is realized by performing fragmentation and grid-means modules, feature extraction and K-nearest neighbor clustering to construct the content based image retrieval system. The obtained results are computed and compared with the other algorithms for the retrieval of quality image features.
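To make the matching step concrete, here is a minimal sketch of ranking database images by Euclidean distance between color histograms (OpenCV in Python rather than the paper's MATLAB; file names are placeholders).

```python
# Hypothetical sketch: rank database images by color-histogram distance to a query.
import cv2
import numpy as np

def color_histogram(path: str, bins: int = 32) -> np.ndarray:
    """Normalized 3D BGR color histogram, flattened into a feature vector."""
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

# Rank database images by distance to the query (paths are placeholders).
query = color_histogram("query.jpg")
ranked = sorted(["db1.jpg", "db2.jpg"],
                key=lambda p: euclidean(query, color_histogram(p)))
```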
With the rise of transportation system design, many data logger designs have been developed for safety. In our country there are many accidents on highways, caused both by drivers' faults and by road construction. To reduce these conditions, safety systems such as an obstacle detection system, a vehicle declination alarm system, a temperature and smoke level display unit, and signboard warnings on road sides should be used for both driver and passengers. In addition, a data logger system for the whole vehicle must be fitted for safety. With a PIC microcontroller implemented as an embedded device, this logger design was constructed with many sensors and a C# service-based database. Using Arduino boards, a vehicle detection sensing circuit, a checkpoint radio signal sensing circuit for dangerous road sectors, and a Hall-effect magnetic wheel revolution sensing circuit were designed to be connected to the main PIC microcontroller and a personal computer. Real-time results were displayed on a C# graphical user interface, and the vehicle data log could easily be exported to a Microsoft Excel report.
Keywords — Arduino, alarm and alert system, C# service-based database, PC based control system, vehicle data logger.
With recent technological development and population growth, the usage of vehicles is rapidly increasing, and at the same time the occurrence of accidents has also increased, putting human life at risk. No one can prevent an accident, but lives can be saved by expediting the ambulance to the hospital in time. The objective of this scheme is to minimize the delay caused by traffic congestion and to provide a smooth flow for emergency vehicles. The concept is to turn the traffic signal green along the ambulance's path automatically, so that the ambulance can reach the spot in time and a human life can be saved. The main server locates the ambulance through mail and controls the traffic lights according to the ambulance's location, so that it arrives at the hospital safely. The scheme is fully automated: it locates the emergency vehicle, controls the traffic lights, and provides the shortest path to reach the hospital in time.
In real world applications, most optimization problems involve more than one objective to be optimized. The objectives in most engineering problems are often conflicting, e.g., maximize performance, minimize cost, maximize reliability. In such cases, one extreme solution would not satisfy all objective functions, and the optimal solution for one objective will not necessarily be the best solution for the other objective(s). Therefore different solutions produce trade-offs between objectives, and a set of solutions is required to represent the optimal solutions of all objectives. Multi-objective formulations are realistic models for many complex engineering optimization problems, and customized genetic algorithms have been demonstrated to be particularly effective at determining excellent solutions to these problems. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview is presented describing various multi-objective genetic algorithms developed to handle different problems with multiple objectives.
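The notion of non-dominated solutions at the heart of these algorithms can be stated in a few lines of code; the following is a minimal sketch (minimization assumed, toy data) of the Pareto-dominance test and front extraction.

```python
# Hypothetical sketch of the dominance test underlying multi-objective GAs:
# a solution is Pareto-optimal if no other solution dominates it.
def dominates(a, b):
    """True if objective vector `a` dominates `b` (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Example: trade-off between cost (minimize) and failure rate (minimize).
points = [(1.0, 0.9), (2.0, 0.3), (1.5, 0.5), (3.0, 0.4)]
print(pareto_front(points))  # (3.0, 0.4) is dominated by (2.0, 0.3)
```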
This document discusses e-commerce in India, including its prospects, challenges, and factors for growth. It begins by defining e-commerce and outlining its benefits for producers, distributors, retailers, and customers. It then examines India's prospects in e-commerce and the services it provides different groups. Major challenges for e-commerce in India include security issues, customer acquisition costs, product delivery times, and lack of awareness. Essential factors for e-commerce growth are improving customer convenience, adopting multi-channel investments, establishing trust through transparency, utilizing location-based services, and offering multiple payment options. The document concludes that e-commerce offers benefits like cost-effectiveness but still faces challenges that must be addressed for further growth.
Free Space Optics (FSO) is a medium with high bandwidth and a very high data rate. Demand for large data-speed capacity has been increasing exponentially due to the massive spread of the internet, and with the growing transmission rates and demand in the field of optical communication, electronic regeneration has become more expensive. With the introduction of optical power amplifiers, the need to convert optical signals to electronic form is reduced. Combinations of hybrid amplifiers have been studied and have emerged in FSO systems, and their performances have been compared on the basis of transmission distance.
The document describes a project to develop a real-time sign language detection system using computer vision and deep learning techniques. The researchers collected over 500 images of 5 different signs and trained a convolutional neural network model using transfer learning with a pre-trained SSD MobileNet V2 model. The model takes input from a webcam video stream and classifies each frame in real-time to detect the sign language. Some key applications of this system include improving communication for deaf individuals and teaching sign language. The researchers achieved reliable detection results under controlled lighting conditions and aim to expand the dataset and model capabilities in future work.
The document describes a hand gesture recognition system for a paint tool using machine learning. Key points:
- The system uses a webcam and hand gestures to control a paint program, providing a more natural user interface than traditional pointing devices.
- A machine learning approach using Haar-like classifiers to detect hands achieved 96% accuracy, higher than glove-based or computer vision methods.
- The system detects different gestures to draw lines, circles, and select colors on the paint screen in real-time. Hand detection and gesture recognition are performed using OpenCV and a Python platform.
- A literature review found machine learning provided the best balance of high accuracy, low cost, and ease of use compared to other hand-detection methods.
Sign language (SL) is commonly considered the primary gesture based language for deaf and dumb people; it is their medium of communication. Image based and sensor based methods are the two important sign language recognition approaches. Because of the difficulty of wearing complex devices like hand gloves, armbands, helmets, etc. in sensor based approaches, much research by companies and researchers has focused on image based approaches. Sign language is used by these people to communicate with other people, but understanding it is a difficult task for non-signers. To address these difficulties, a real time translator for sign language using deep learning (DL) is introduced. It reduces the limitations and drawbacks of other methods to a great extent; with the help of this real time translator, communication will be better and faster without causing any delay. Jeni Moni | Anju J Prakash, "Real Time Translator for Sign Language", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-5, August 2020. URL: https://www.ijtsrd.com/papers/ijtsrd32915.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/32915/real-time-translator-for-sign-language/jeni-moni
This document describes a system to help deaf and mute people communicate through sign language and voice recognition. The system uses algorithms like support vector machines and hidden Markov models to recognize hand gestures and speech. It can translate sign language into text and voice into sign language representations. The system aims to reduce communication barriers for deaf/mute communities by converting between sign language, text, and voice. It outlines the implementation process which includes steps like skin color detection, hand location detection, finger region detection, and pattern matching to recognize gestures from video input.
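As an illustration of the skin-color detection step such systems start from, here is a minimal sketch using a commonly cited fixed YCrCb threshold range; the exact thresholds and morphology are assumptions, not values from the document.

```python
# Hypothetical sketch of skin-color detection in YCrCb space.
import cv2
import numpy as np

def skin_mask(bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels in a BGR frame."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # assumed Cr/Cb skin range
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes small false-positive specks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```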
Hand gesture recognition methods have received great attention in the last few years because of their manifold applications and their ability to let humans interact with machines efficiently in human-computer interaction. This paper mainly focuses on a survey of hand gesture recognition. Hand gestures provide a distinct modality, complementary to speech, for expressing one's ideas. The hand gesture is a method of non-verbal communication for human beings, allowing freer expression than other body parts. Hand gesture detection is of great significance in designing an efficient human-computer interaction method. This paper focuses on different hand gesture approaches, technologies and applications.
Vision Based Approach to Sign Language RecognitionIJAAS Team
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements, to help deaf, dumb and hard-of-hearing people. Hand gesture recognition is quite a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking and recognising. The algorithm is invariant to rotations, translations and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
Abstract: The main communication method used by deaf people is sign language, but contrary to common belief, there is no single universal sign language: every country, or even regional group, uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on image, video and sensor input. Sign language is not a simple spelling-out of a spoken language, so recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. Here we propose an algorithm and method for an application that helps in recognising various user defined signs. The palm images of the right and left hand are loaded at runtime. First these images are captured and stored in a directory. Then a technique called template matching is used for finding areas of an image that match (are similar to) a template image (patch); the goal is to detect the highest matching area. Two primary components are needed: A) Source image (I): the image in which we try to find a match. B) Template image (T): the patch image which is compared against the source image. In the proposed system, user defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
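A minimal sketch of that template-matching step with OpenCV is shown below; the file names and the 0.8 acceptance threshold are illustrative assumptions.

```python
# Hypothetical sketch of template matching: slide template T over source I.
import cv2

source = cv2.imread("palm_frame.png", cv2.IMREAD_GRAYSCALE)   # source image I
template = cv2.imread("sign_patch.png", cv2.IMREAD_GRAYSCALE)  # template T

# Score every location; TM_CCOEFF_NORMED gives values in [-1, 1].
scores = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.8:  # assumed acceptance threshold, not from the paper
    x, y = max_loc
    h, w = template.shape
    print(f"best match at top-left ({x}, {y}), size {w}x{h}, score {max_val:.2f}")
```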
Design of a Communication System using Sign Language aid for Differently Able...IRJET Journal
This document describes a proposed system to design a communication system using sign language to aid differently abled people. The system aims to use image processing and artificial intelligence techniques to recognize characters in sign language from video input and convert them to text and speech output. It discusses technologies like blob detection, skin color recognition and template matching that would be used for sign recognition. The system is intended to help deaf and mute people communicate by translating their sign language to a format understandable by others.
Basic Gesture Based Communication for Deaf and Dumb is an application which converts an input gesture to the corresponding text. People with a speech or hearing disability face many communication problems while interacting with other people, and it is not easy for people without such a disability to understand what the other person wants to say with the gestures he or she is showing. To overcome this barrier, we attempted to create an application which detects these gestures and provides textual output, enabling a smoother process of communication. There is a lot of research being done on gesture recognition. This project helps the users, i.e. deaf and dumb people, to communicate with other people without any barriers due to their disability.
General Purpose Image Tampering Detection using Convolutional Neural Network ...sipij
Digital image tampering detection has been an active area of research in recent times due to the ease with which digital images can be modified to convey false or misleading information. To address this problem, several studies have proposed forensic algorithms for digital image tampering detection. While these approaches have shown remarkable improvement, most of them focus on detecting only a specific type of image tampering, so a new forensic method must be designed for each new manipulation approach that is developed. Consequently, there is a need to develop methods capable of detecting multiple tampering operations. In this paper, we propose a novel general purpose image tampering detection scheme based on CNNs and the Local Optimal Oriented Pattern (LOOP), which is capable of detecting five types of image tampering in both binary and multiclass scenarios. Unlike existing deep learning techniques, which use constrained pre-processing layers to suppress the effect of image content in order to capture tampering traces, our method uses LOOP features, which can effectively subdue the effect of image content, allowing the proposed CNNs to capture the features needed to distinguish among different types of image tampering. Through a number of detailed experiments, our results demonstrate that the proposed general purpose method achieves high detection accuracies in individual and multiclass image tampering detection, and a comparative analysis with the existing state of the art reveals that the proposed model is more robust than most existing methods.
Sign Language Detection using Action RecognitionIRJET Journal
This document presents a sign language detection system using action recognition. It aims to enhance current systems' performance in terms of response time and accuracy. The proposed system uses machine learning algorithms like LSTM neural networks trained on data sets to classify sign language gestures in real-time video. It segments hand regions, extracts features, and recognizes signs with 98% accuracy for 26 gestures. The system is intended to help deaf individuals communicate through translating signs to text in real-world applications.
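As a rough illustration of the kind of LSTM sequence classifier such a system uses over per-frame features, here is a minimal Keras sketch; the sequence length (30 frames), feature size (126) and 26 output classes are assumed dimensions, not the project's actual configuration.

```python
# Hypothetical sketch of an LSTM gesture-sequence classifier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 126)),          # 30 frames of features
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"), # one unit per gesture class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```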
This document presents a study on sign language recognition using computer vision techniques. It aims to develop a system that can identify characters and numbers in Indian Sign Language (ISL) using convolutional neural networks. ISL uses both hands to communicate unlike American Sign Language which uses a single hand. The system creates a dataset of ISL gestures and trains a CNN model on it. It then tests the ability of the trained model to accurately predict numbers from 1 to 10 and letters from A to Z when presented with new sign language inputs. The model achieves over 90% accuracy on test data, providing an effective way to translate ISL signs and bridge communication between deaf/mute and non-signing individuals.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
Sign Language Recognition using MediapipeIRJET Journal
This document summarizes a student research project that aims to develop a sign language recognition system using the Mediapipe framework. The system takes video input of signed letters from the American Sign Language alphabet and outputs the recognized letters in text format. The document provides background on sign language and gesture recognition, describes the Mediapipe framework and implementation methodology using KNN classification, and presents preliminary results of the system detecting hand positions and recognizing letters in real-time. The overall goal is to reduce communication barriers for deaf individuals by translating sign language to written text.
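A minimal sketch of the MediaPipe hand-landmark extraction that would feed such a KNN letter classifier is shown below; the classifier itself is omitted, and the confidence settings are assumptions.

```python
# Hypothetical sketch of MediaPipe hand-landmark extraction from a webcam.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # 21 (x, y, z) landmarks flatten into the 63-value feature vector
        # that a KNN letter classifier would consume.
        features = [c for p in lm for c in (p.x, p.y, p.z)]
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```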
A Study on Face Expression Observation Systemsijtsrd
Human expressions can convey a great deal of information. We cannot learn every language in the world, but we can decipher the majority of human expressions. The state of a user's conduct in various settings and scenarios can be inferred from their facial expressions. Through various human-computer interface and programming approaches, facial expression can be digitized. Face detection, feature extraction, and determination of the kind of expression are all parts of the facial expression perception process. Both verbal and non-verbal forms of communication are possible, and through their emotions people can communicate non-verbally. We have given a broad summary of the various facial expression perception processes in the literature in this article. Jyoti | Neeraj Chawaria | Ekta, "A Study on Face Expression Observation Systems", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-5, August 2022. URL: https://www.ijtsrd.com/papers/ijtsrd50450.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-graphics/50450/a-study-on-face-expression-observation-systems/jyoti
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNINGIRJET Journal
1. The document describes a study on developing a real-time sign language recognition system using machine learning. The system captures hand gestures using a webcam and identifies the region of interest to predict the sign.
2. Convolutional neural networks are used to train the model to classify signs. Related works that also use CNNs and other machine learning techniques for sign language recognition from images are discussed.
3. The proposed system aims to make communication easier for deaf and mute people by automatically translating signs to text in real-time without requiring an expert translator.
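A compact Keras sketch of the CNN classifier described in points 1-2 above follows; the 64x64 grayscale input and 26 classes are illustrative assumptions rather than the study's reported architecture.

```python
# Hypothetical sketch of a small CNN sign classifier over cropped ROI images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),        # assumed grayscale ROI size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"), # one unit per sign class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```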
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...IJERA Editor
The development of multimedia technologies, together with the Internet and the democratization of broadband access, has made e-learning possible for learners in virtual, geographically distributed classes. The quality and quantity of asynchronous and synchronous communication are the key elements for e-learning success, and proper supervision is important to reduce the feeling of isolation, which is among the main causes of dropout and high stalling rates in e-learning. Research in this domain aims to bring solutions based on real-time images for the capture and recognition of hand gestures. These gestures are analysed by the system and transformed into an indicator of participation, which is displayed on the tutor's performance dashboard as a curve over time. If a learner becomes isolated, the participation indicator turns red and the tutor is informed of learners having difficulty participating during the learning session.
These days there is an increased number of heart diseases, including an increased risk of heart attacks, and this number will continue to grow if no proper solution is found. Our proposed system uses sensors to detect a person's heart rate using heartbeat sensing, even when the person is at home. The sensor is interfaced to a microcontroller that reads the heart rate and transmits it over the internet. The user may set both high and low heart-beat limits; after setting these limits, the system starts monitoring, and as soon as the patient's heartbeat goes outside them, the system sends an alert to the controller, which transmits it over the internet and alerts the doctors as well as concerned users. Whenever a user logs on for monitoring, the system also displays the patient's live heart rate. Thus, the concerned people may monitor the heart rate, receive an immediate heart attack alert from anywhere, and the person can be saved in time. Internet of Things (IoT) technology developments allow humans to control a variety of high-tech equipment in daily life; one example is the ease of checking health using a phone, tablet or laptop. We mainly focus on safety measures for both driver and vehicle by using three types of sensors: a heartbeat sensor, a traffic light sensor and a level sensor. The heartbeat sensor monitors the driver's heart rate constantly and helps prevent accidents through IoT-based control.
ABSTRACT: The success of the cloud computing paradigm is due to its on-demand, self-service, and pay-by-use nature. Public key encryption with keyword search applies only to circumstances where keyword ciphertext can be retrieved by a specific user, and it supports only single-keyword matching. In existing searchable encryption schemes, either the communication mode is one-to-one, or only single-keyword search is supported. This paper proposes a searchable encryption scheme that is based on attributes and supports multi-keyword search. Searchable encryption is a primitive which not only protects the data privacy of data owners but also enables data users to search over the encrypted data. Most existing searchable encryption schemes are in the single-user setting; there are only a few schemes in the multiple-data-user setting, i.e., encrypted data sharing. Among these schemes, most of the early techniques depend on a trusted third party with interactive search protocols or need cumbersome key management. To remedy these defects, the most recent approaches borrow ideas from attribute-based encryption to enable attribute-based keyword search (ABKS).
This document reviews the behavior of reinforced concrete deep beams. Deep beams are defined as having a shear span to depth ratio of less than 5. The response of deep beams differs from regular beams due to the influence of shear deformations and stresses. Failure modes include flexure, flexural-shear, and diagonal cracking. Previous studies investigated factors affecting shear strength such as concrete strength, reinforcement, and loading conditions. Equations have been proposed to predict shear strength based on test results.
Subcutaneous administration of toluene to rabbits for 6 weeks resulted in significant increases in liver enzyme levels and histopathological changes in the liver tissue. Liver sections from toluene-treated rabbits showed congested central veins, flattening and vacuolation of hepatocytes, and disarrangement of hepatic architecture. In contrast, liver sections from control rabbits appeared normal. Toluene exposure is known to cause oxidative stress and damage cell membranes in the liver through its metabolism.
This document summarizes a research paper that proposes a system to analyze crop phenology (growth stages) using IoT to support parallel agriculture management. The system would use sensors to collect data on soil moisture, temperature, humidity and other parameters. This data would be input to a database. Then, a multiple linear regression model trained on past data would predict the optimal crop and expected yield based on the tested sensor data and parameters. This system aims to help farmers select crops and fertilization practices tailored to their specific fields' conditions.
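As a toy illustration of the multiple-linear-regression step this proposal relies on, consider the following sketch; the feature set (soil moisture, temperature, humidity) matches the sensors named above, but all numbers are made up for illustration.

```python
# Hypothetical sketch of yield prediction via multiple linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: soil moisture (%), temperature (degC), humidity (%) -- toy past data.
X = np.array([[32, 24, 60], [28, 27, 55], [40, 22, 70], [35, 25, 65]])
y = np.array([4.1, 3.6, 4.8, 4.3])  # expected yield (t/ha), illustrative only

model = LinearRegression().fit(X, y)
print(model.predict(np.array([[34, 23, 62]])))  # yield estimate for new readings
```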
This document summarizes a study that determined the liberation size of gold ore from the Iperindo-Ilesha deposit in Nigeria and assessed its amenability to froth flotation. Samples of the ore were collected and subjected to sieve analysis to determine particle size fractions. Chemical analysis found that the actual and economic liberation sizes were 45μm and 250μm, respectively. Froth flotation experiments at 45μm particle size and varying collector dosages achieved a maximum gold recovery of 78.93% at 0.3 mol/dm3 collector dosage, with concentrate grade of 115 ppm Au. These parameters will be used for further processing to extract gold from this deposit.
This document presents a proposal for an IOT-based intelligent baby care system with a web application for remote baby monitoring. The system uses sensors to automatically swing a cradle when a baby cries, sound alarms if the baby cries for too long or the mattress is wet, and sends alerts to a web page for parents to monitor the baby's status from anywhere via internet connection. The proposed system aims to help working parents manage childcare remotely using sensors, a Raspberry Pi, web camera, and cloud server to detect the baby's activities and notify parents through a web application on their phone.
This document discusses various sources of water pollution and new techniques being developed for water purification. It begins by outlining how water pollution occurs from industrial wastes like mining and manufacturing, agricultural runoff containing pesticides, and domestic waste. It then examines some specific pollutants in more depth from these sources. New techniques under research for water purification are also mentioned, with the goal of developing more affordable methods. The document aims to analyze the impact of pollutants on water and introduce promising new purification techniques.
This document summarizes a research paper on using big data methodologies with IoT and its applications. It discusses how big data analytics is being used across various fields like engineering, data management, and more. It also discusses how IoT enables the collection of massive amounts of data from sensors and devices. Machine learning techniques are used to analyze this big data from IoT and enable communication between devices. The document provides examples of domains where big data and IoT are being applied, such as healthcare, energy, transportation, and others. It analyzes the similarities and differences in how big data techniques are used across these IoT domains.
The document describes a proposed smart library automation and monitoring system using RFID technology. The system uses RFID tags attached to books and student ID cards. An RFID scanner reads the tags to automate processes like tracking student entry and exit, book check-in/check-out, and inventory management. This allows transactions to occur without manual intervention. The system also includes an Android app for students to search books and check availability. The goals are to streamline library operations, prevent unauthorized access, and help locate misplaced books. Raspberry Pi hardware and a MySQL database are part of the proposed implementation.
This document discusses congestion control techniques for vehicular ad hoc networks (VANETs). It first provides background on VANETs, noting their use of vehicle-to-vehicle communication to share information. Congestion can occur when there is a sudden increase in data from nodes in the network. The document then reviews different existing congestion control schemes, which vary in how they adjust source sending rates and handle transient congestion. It proposes a priority-based congestion control technique using dual queues, one for transit packets and one for locally generated packets. This approach aims to route packets along less congested paths when congestion is detected based on buffer occupancy.
This document summarizes a research paper that proposes applying principles of Vedic mathematics to optimize the design of multipliers, squarers, and cubers. It begins by providing background on multipliers and their importance in electronic systems. It then reviews related work applying Vedic mathematics to multiplier design. The document outlines the methodology for performing multiplication, squaring, and cubing according to Vedic mathematics principles. It presents simulation and synthesis results comparing the proposed Vedic designs to traditional array-based designs, finding improvements in speed, power, and area. The document concludes that Vedic mathematics provides an effective approach for optimizing the design of these fundamental arithmetic components.
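For readers unfamiliar with the Vedic "vertically and crosswise" (Urdhva Tiryagbhyam) pattern these multipliers are built on, a tiny worked sketch for two-digit operands follows; this shows the digit-product pattern only, not the paper's hardware design.

```python
# Toy illustration of the Urdhva Tiryagbhyam digit-product pattern.
def urdhva_2x2(a1, a0, b1, b0):
    """Multiply (a1 a0) x (b1 b0) digit-wise: vertical, crosswise, vertical."""
    p0 = a0 * b0                # vertical (units column)
    p1 = a1 * b0 + a0 * b1      # crosswise (tens column)
    p2 = a1 * b1                # vertical (hundreds column)
    # Weighting the partial products by place value propagates the carries.
    return p2 * 100 + p1 * 10 + p0

assert urdhva_2x2(2, 3, 1, 4) == 23 * 14  # 322
```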
Cloud computing is one of the emerging techniques used to process big data, where a large collection or volume of data is known as big data. Processing big data (MRI images and DICOM images) normally takes more time compared with other data. The main tasks involved in handling big data can be solved using the concepts of Hadoop, and enhancing Hadoop helps the user to process large sets of images or data. The Advanced Hadoop Distributed File System (AHDF) and MapReduce are the two main functions used to enhance Hadoop: HDFS is the Hadoop file storage system, used for storing and retrieving data, while MapReduce is the combination of two functions, namely map and reduce. Map is the process of splitting the inputs, and reduce is the process of integrating the outputs of the map step. Recently, medical fields have experienced problems such as machine failure and fault tolerance while processing results for scanned data. A unique optimized time scheduling algorithm, called the Advanced Dynamic Handover Reduce Function (ADHRF) algorithm, is therefore introduced in the reduce function. Enhancing Hadoop and the cloud with the introduction of ADHRF helps to overcome the processing risks and to obtain optimized results with less waiting time and a reduced error percentage in the output image.
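To make the map/reduce split-then-integrate idea concrete, here is a toy, plain-Python illustration (not the Hadoop API, and unrelated to the ADHRF algorithm itself).

```python
# Toy illustration of the map/reduce pattern described above.
from collections import Counter
from functools import reduce

records = ["mri brain", "mri knee", "dicom brain"]

# Map: split each input record into (key, 1) pairs.
mapped = [(word, 1) for rec in records for word in rec.split()]

# Reduce: integrate the mapped pairs by key.
def combine(acc: Counter, pair) -> Counter:
    acc[pair[0]] += pair[1]
    return acc

counts = reduce(combine, mapped, Counter())
print(counts)  # Counter({'mri': 2, 'brain': 2, 'knee': 1, 'dicom': 1})
```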
Text mining has become one of the trending fields incorporated into several research areas, such as computational linguistics, information retrieval (IR) and data mining. Natural language processing (NLP) methods are utilized to extract knowledge from text written by people. Text mining parses unstructured data to deliver meaningful information patterns in the shortest time. Social networking sites are a great source of communication, as most people use these sites in their daily lives to stay connected with each other. It has become common practice not to write sentences with correct grammar and spelling, and this practice may introduce various kinds of ambiguities, such as lexical, syntactic and semantic; because of this kind of unclear data, it is hard to extract the real information. Accordingly, we are conducting a study with the aim of surveying the different text mining techniques applied to textual content on social media sites. This review describes how studies of social media have used text analysis and text mining techniques to identify the key topics in the data. The study concentrates on examining text mining studies related to Facebook and Twitter, the two dominant social media platforms in the world. The results of this survey can serve as baselines for future text mining research.
Colorectal cancer (CRC) has the potential to spread within the peritoneal cavity, and this transcoelomic dissemination is termed "peritoneal metastases" (PM). The aim of this article was to summarise the current evidence regarding CRC patients at high risk of PM. Colorectal cancer is the second most common cause of cancer death in the UK. Prompt investigation of suspicious symptoms is important, but there is increasing evidence that screening for the disease can produce significant reductions in mortality. High quality surgery is of paramount importance in achieving good outcomes, particularly in rectal cancer, but adjuvant radiotherapy and chemotherapy have important parts to play. The treatment of advanced disease is still essentially palliative, although surgery for limited hepatic metastases may be curative in a small proportion of patients.
This document summarizes a research paper on the thermal performance of air conditioners using nanofluids compared to base fluids. Key points:
- Nanofluids, which are liquids containing nanoparticles, can improve heat transfer in heat pipes and cooling systems due to their higher thermal conductivity compared to base fluids.
- The document reviews how factors like nanofluid type, nanoparticle size and concentration affect thermal efficiency and heat transfer limits. It also examines using nanofluids to enhance heat exchange in transmission fluids.
- An experimental setup is described to study heat transfer and friction factors of water-based Al2O3 nanofluids in a horizontal tube under constant heat flux. Temperature, pressure and flow rate are measured
Nowadays, the pedal-powered grinding machine is used only for grinding; it requires a lot of effort and is limited to a single application. Another problem with the existing model is that it consumes more time and has lower efficiency. Our aim is to design a human-powered machine which can be used for many purposes, such as pumping, grinding, washing and cutting; it can lift water to a height of 8 meters and produce 4 amperes of electricity in the most effective way. The system is also useful for health-conscious workout purposes. The purpose of this technical study is to increase the performance and output capacity of the pedal-powered grinding machine.
This document summarizes a research paper that proposes using distributed control of multiple energy storage units (ESUs) to manage voltage and loading in electric distribution networks with renewable energy sources like solar and wind. The distributed control approach coordinates the ESUs to store excess power generated during peak periods and discharge it during peak load periods. Each ESU can provide both active and reactive power to support voltage and manage power flows. The distributed control strategy uses a consensus algorithm to divide the required active power reduction equally among ESUs based on their available capacity. Simulation results are presented to analyze the coordinated control of ESU active and reactive power outputs over time.
The steady increase in non-linear loads on the power supply network, such as AC variable speed drives, DC variable speed drives, UPS, inverters and SMPS, raises issues of power quality and reliability. In this context, attention has been focused on harmonics. Harmonics overload the power system network, cause reliability problems in equipment and systems, and waste energy. Passive and active harmonic filters are used to mitigate harmonic problems, and the use of both is justified. The difficulty for practicing engineers is to select and deploy the correct harmonic filters; this paper explains which solutions are suitable when choosing active and passive harmonic filters, and the mistakes that need to be avoided.
This paper is aimed at analyzing a few important power system equipment failures generally occurring in industrial power distribution systems. Many such general problems, if not resolved, may lead to huge production stoppages and unforeseen equipment damage. We can improve the reliability of the power system simply by applying a problem solving tool to every case study: finding the root cause of the problem, validating the root cause, and eliminating it through corrective measures. This problem solving approach should be practiced every day to improve power system reliability. This paper will throw light on these cases and serve as a guide for practicing electrical engineers to find a solution for every problem they come across in their day-to-day maintenance activities.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Batteries: introduction; types of batteries; discharging and charging of a battery; characteristics of a battery; battery rating; various tests on a battery; primary battery: silver button cell; secondary battery: Ni-Cd battery; modern battery: lithium-ion battery; maintenance of batteries; choice of batteries for electric vehicle applications.
Fuel cells: introduction; importance and classification of fuel cells; description, principle, components and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
International Journal of Engineering and Techniques - Volume 1 Issue 5, September-October 2015 | ISSN: 2395-1303 | http://www.ijetjournal.org
Image Based Sign Language Recognition on Android
Prutha Gandhi, Dhanashri Dalvi, Pallavi Gaikwad, Shubham Khode
(Computer Department, Modern Education Society's College of Engineering, Pune)
Abstract:
A mediator is required for communication between a deaf person and a second person, and the mediator must know the sign language used by the deaf person. This is not always possible, since there are multiple sign languages for multiple spoken languages. It is also difficult for a deaf person to understand what a second person speaks; the deaf person must track the lip movements of the second person in order to know what he is saying, but lip movements do not give good efficiency and accuracy, since facial expressions and speech might not match. To overcome the above problems we have proposed a system, an Android application for recognizing sign language using hand gestures, with the facility for the user to define and upload their own sign language into the system. The features of this system are the real-time conversion of gestures to text and speech. For two-way communication between the deaf person and the second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching and conversion of text to speech and vice-versa. The system is not only useful for the deaf community but can also be used by common people who migrate to different regions and do not know the local language.
Keywords — Gesture extraction and detection, pattern matching, text generation, text to speech and speech to text conversion.
I. INTRODUCTION
A deaf person is very much dependent on sign language to communicate with other people, so a person interacting with a deaf person needs to know sign language in order to understand and communicate effectively. Since many people are not familiar with sign language, it is very difficult for a deaf person to interact with society.
The previously implemented system had a predefined database with a limited scope. We therefore provide an application that allows users to define their own database, i.e. to define and upload their own sign language into the system. This feature will help deaf people communicate with people from different countries or regions. Our application includes the following phases (a high-level sketch follows the list):
• Digitization and image capture
• Compression (coding)
• Segmentation
o Edge and feature detection
• Scene Analysis
o Color
o Motion
o Object recognition
• Pattern matching
• Text generation
• Text to speech conversion
• Speech to text conversion
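The following is a minimal end-to-end sketch of these phases, assuming OpenCV for capture and segmentation and pyttsx3 for the speech stage; all function bodies are placeholders for the stages named above, not the implementation described in this paper.

```python
# Hedged pipeline sketch: capture -> segment -> match -> speak.
import cv2
import pyttsx3

def extract_gesture(frame):
    """Segmentation stage: isolate the hand region (placeholder Otsu threshold)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def match_gesture(mask, database):
    """Pattern-matching stage: look the segmented gesture up in the user DB."""
    return database.get(mask.sum() % len(database), "unknown")  # toy lookup only

engine = pyttsx3.init()
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    text = match_gesture(extract_gesture(frame), {0: "hello", 1: "thanks"})
    engine.say(text)        # text-to-speech stage
    engine.runAndWait()
cap.release()
```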
II. RELATED WORK
In [1] M. Mohandes, M. Deriche, and J. Liu designed an ArSLR (Arabic Sign Language Recognition) system for alphabet recognition, isolated word recognition, and continuous signer recognition using image based and sensor based approaches. The image based approach mostly depends on coloured gloves and knuckles; this approach works efficiently for determining geometric features and body/facial expressions. The sensor based approach utilizes glove specs
(power, cyber, and data gloves), mostly statistical features, and 3D position information.
In [2] Ravikiran J, Kavi Mahesh, Suhas Mahishi, Dheeraj R, Sudheender S, and Nitin V Pujari designed a highly accurate image processing algorithm for recognizing American Sign Language. Their implementation does not require the use of any gloves or markers. The system detects the number of open fingers using the concept of boundary tracing combined with finger-tip detection.
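The sketch below is a hedged stand-in for such an open-finger count, using OpenCV contours plus convexity defects, a common technique in this spirit rather than necessarily the cited authors' algorithm; the depth threshold is an assumption.

```python
# Hypothetical open-finger count from a binary hand mask via convexity defects.
import cv2
import numpy as np

def count_open_fingers(mask: np.ndarray) -> int:
    """Estimate the number of open fingers from a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep defects correspond to the valleys between extended fingers.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)  # assumed depth threshold (px)
    return deep + 1 if deep else 0
```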
In [3] Son Lam Phung, Abdesselam Bouzerdoum and Douglas Chai analysed three important issues related to pixel-wise skin segmentation: colour representation, colour quantization, and the classification algorithm. They found that the Bayesian classifier with the histogram technique and the multilayer perceptron classifier have higher classification rates than the other tested classifiers.
In [4] Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R designed an application for deaf and dumb persons. The application integrates already existing methods; its processing includes gesture extraction, gesture matching and conversion to speech. They used histogram matching, bounding box computation, skin colour segmentation and region growing methods for gesture extraction. For gesture matching they used feature point matching and correlation based matching techniques. Integrating all these methods, their paper provides the following four approaches:
Approach A: Skin colour segmentation with feature point matching using SIFT
Approach B: Region growing with feature point matching using SIFT
Approach C: Skin colour segmentation with correlation matching
Approach D: Region growing with correlation matching
The application also includes gesture to text conversion.
In [5] William T. Freeman and Michal Roth proposed the orientation histogram as a feature vector for gesture classification and interpolation. This method is simple and fast to compute, and is robust to changes in scene illumination. They distinguish two categories of gestures: static gestures, where a particular hand configuration and pose is represented by a single image, and dynamic gestures, where a moving gesture is represented by a sequence of images. Their pattern recognition system converts an image or a sequence of images into a feature vector, which is then compared with the feature vectors of a training set of gestures; they used a Euclidean distance metric and a video digitizer. In the run phase, the computer compares the feature vector of the present image with those in the training set and picks the category of the nearest vector, or interpolates between vectors. The methods are image-based, simple, and fast.
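As an illustration of this approach, below is a minimal sketch (not the authors' code) of an orientation-histogram feature with nearest-neighbour matching; the bin count and the helper names are our own illustrative assumptions.

```python
# Sketch of an orientation-histogram gesture feature in the spirit of [5].
import numpy as np
import cv2

def orientation_histogram(gray, bins=36):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = np.sqrt(gx * gx + gy * gy)
    angle = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-9)  # normalize for illumination robustness

def classify(test_gray, training_set):
    """training_set: list of (label, histogram); return the nearest label."""
    h = orientation_histogram(test_gray)
    return min(training_set, key=lambda item: np.linalg.norm(h - item[1]))[0]
```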
In [6] V. Nayakwadi and N. B. Pokale provide a survey of various recent gesture recognition approaches, with particular emphasis on hand gestures. Static hand posture methods are reviewed, along with the different tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering.
Vision-based approaches: In vision-based methods the system requires only cameras to capture the images needed for natural interaction between human and computer. These approaches are simple, but they raise many challenges, such as complex backgrounds, lighting variation, and other skin-coloured objects appearing alongside the hand.
Instrumented glove approaches: Marked gloves, or coloured markers, are gloves worn on the human hand with certain colours that direct the process of tracking the hand and locating the palm and fingers, providing the ability to extract the geometric features necessary to form the hand shape. Gesture recognition techniques: Most research uses an ANN as the classifier in the gesture recognition process.
Histogram-based features: A method for recognizing gestures based on pattern recognition using orientation histograms.
Fuzzy clustering algorithm: In fuzzy clustering, the partitioning of sample data into groups in a fuzzy way is the main difference from other clustering algorithms: a single data pattern might belong to several data groups.
Hidden Markov Model (HMM): An HMM is a stochastic process with a finite number of states of a Markov chain and a number of random functions, one per state. The HMM topology is represented by one initial state, a set of output symbols, and a set of state transitions. HMMs contain a lot of mathematical structure and have proved their efficiency for modelling spatiotemporal information; sign language recognition is one of the most common applications of HMMs.
In [7] Massimo Piccardi reviews background subtraction, a widely used approach for detecting moving objects from static cameras. The paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. The methods reviewed include parametric and non-parametric background density estimates as well as spatial correlation approaches. Several methods for performing background subtraction have been proposed; all of them try to effectively estimate the background model from the temporal sequence of frames. The methods reviewed are: running Gaussian average, temporal median filter, mixture of Gaussians, kernel density estimation (KDE), sequential KD approximation, co-occurrence of image variations, and eigen-backgrounds.
III. RESEARCH WORK
For extracting and processing an image, an acquisition process is performed first. Generally, the image acquisition process involves pre-processing such as scaling. A scale-space representation can be used: a scale space is a representation of an image at multiple resolution levels. A Difference of Gaussians can then be applied to the image. The Difference
of Gaussians is a feature enhancement algorithm
that involves the subtraction of one blurred version
of an original image from another, less blurred
version of the original. In the simple case of
grayscale images, the blurred images are obtained
by convolving the original grayscale images with
Gaussian kernels having differing standard
deviations. Blurring an image using a Gaussian
kernel suppresses only high-frequency spatial
information. Subtracting one image from the other
preserves spatial information that lies between the
range of frequencies that are preserved in the two
blurred images. Thus, the difference of Gaussians is
a band-pass filter that discards all but a handful of
spatial frequencies that are present in the original
grayscale image.
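A minimal sketch of the Difference of Gaussians described above, using OpenCV; the two standard deviations are illustrative values, not parameters taken from this paper.

```python
# Difference of Gaussians: subtract a more-blurred image from a less-blurred one.
import cv2

def difference_of_gaussians(gray, sigma_small=1.0, sigma_large=2.0):
    """Band-pass filter a grayscale image by subtracting two Gaussian blurs."""
    small = cv2.GaussianBlur(gray, (0, 0), sigma_small)  # kernel size derived from sigma
    large = cv2.GaussianBlur(gray, (0, 0), sigma_large)
    return cv2.subtract(small, large)  # saturating subtraction keeps uint8 output valid
```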
There are many ways to handle image translation; one of them is template matching, which compares pixel intensities using the SAD (sum of absolute differences) measure. A pixel in the search image with coordinates (xs, ys) has intensity Is(xs, ys), and a pixel in the template with coordinates (xt, yt) has intensity It(xt, yt). The absolute difference in the pixel intensities is defined as
Diff(xs, ys, xt, yt) = | Is(xs, ys) - It(xt, yt) |,
and SAD sums this difference over all pixels of the template.
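The SAD measure can be sketched directly from this definition; the following unoptimized example slides the template over the search image and returns the position with the smallest sum (function and variable names are ours, for illustration only).

```python
# Brute-force SAD template matching over a grayscale search image.
import numpy as np

def sad_match(search, template):
    """Return ((y, x), score) of the template position with minimum SAD."""
    H, W = search.shape
    h, w = template.shape
    t = template.astype(np.int32)
    best, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = search[y:y + h, x:x + w].astype(np.int32)
            sad = np.abs(window - t).sum()  # sum of absolute differences
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, best
```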
Image processing also includes image
segmentation. Segmentation procedures partition an
image into its constituent parts or objects. In
general, autonomous segmentation is one of the
most difficult tasks in digital image processing. A
rugged segmentation procedure brings the process towards a successful solution of imaging problems that require objects to be identified individually.
Then the representation and description of the image is provided. A knowledge base is used to store information about the image that can later be utilized for object recognition.
The algorithms for image extraction and detection include background subtraction and the blob detection algorithm.
Blob detection is an algorithm used to determine whether a group of connected pixels are related to each other. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene.
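As a hedged illustration of grouping connected pixels into objects, the following sketch uses OpenCV's connected-components labelling; the binarization threshold and minimum area are arbitrary placeholders, not values from this paper.

```python
# Count distinct objects in a grayscale image via connected-components labelling.
import cv2

def count_blobs(gray, min_area=50):
    """Binarize, label connected pixel groups, and count the sufficiently large ones."""
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; filter the rest by area to drop noise specks.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)
```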
Background subtraction is a technique in the fields of image processing and computer vision wherein an image's foreground is extracted for further processing (object recognition etc.). The background subtraction method, however, has some disadvantages:
Background subtraction can be a powerful ally when it comes to segmenting objects in a scene. The method, however, has some built-in limitations that are exposed especially when processing video of outdoor scenes. First of all, the method requires the background to be empty when learning the background model for each pixel. This can be a challenge in a natural scene where moving objects may always be present. One solution is to median-filter all training samples for each pixel. This eliminates pixels where an object is moving through the scene, so the resulting model of the pixel will be a true background pixel. An extension is to first order all training pixels (as done in the median filter) and then calculate the average of the pixels closest to the median. This provides both a mean and a variance per pixel. Such approaches assume that each pixel is covered by objects less than half the time in the training period.
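The median-based training just described can be sketched as follows, assuming a list of grayscale frames captured during the training period; the window size k is an arbitrary placeholder.

```python
# Per-pixel background model: temporal median plus mean/variance of the
# k ordered samples nearest the median, as described above.
import numpy as np

def train_background(frames, k=5):
    """frames: list of equal-sized grayscale arrays. Returns (mean, variance) maps."""
    stack = np.stack(frames).astype(np.float32)  # shape: (T, H, W)
    ordered = np.sort(stack, axis=0)             # order training samples per pixel
    T = ordered.shape[0]
    lo = (T - k) // 2                            # k samples centred on the median
    middle = ordered[lo:lo + k]
    return middle.mean(axis=0), middle.var(axis=0)
```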
Another problem is that when processing outdoor video, a pixel may cover more than one background. This will result in poor segmentation of that pixel during background subtraction. A further problem in outdoor video is shadows due to strong sunlight: shadow pixels can easily appear different from the learnt background model and hence be incorrectly classified as object pixels.
IV. ALGORITHMS
A. Background Subtraction
It is also known as foreground detection [7], and is a technique in the fields of image processing and computer vision wherein an image's foreground is extracted for further processing (object recognition etc.). Generally, an image's regions of interest are the objects (humans, cars, text etc.) in its foreground. After the image pre-processing stage (which may include image denoising, and post-processing such as morphology), object localization is required, and it may make use of this technique. Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale of the approach is to detect the moving objects from the difference between the current frame and a reference frame, often called the "background image" or "background model". Background subtraction is mostly done when the image in question is part of a video stream. It provides important cues for numerous applications in computer vision, and includes the following steps:
Step 1: Motion detection. This is done using segmentation, where the moving objects are segmented from the background: take an image as the background and take the frames obtained at time t, denoted by I(t), to compare with the background image, denoted by B.
Step 2: The objects can be segmented out simply by using the image subtraction technique: for each pixel in I(t), take the pixel value denoted by P[I(t)] and subtract from it the corresponding pixel at the same position in the background image, denoted by P[B]. As an equation, this is written as:
P[F(t)] = P[I(t)] - P[B]
Step 3: The background is assumed to be the frame at time t. The difference image shows some intensity at the pixel locations which have changed between the two frames. This approach will only work for cases where all foreground pixels are moving and all background pixels are static.
Step 4: A threshold is put on this difference image to improve the subtraction:
|P[F(t)] - P[F(t+1)]| > Threshold
This means that the difference image's pixel intensities are thresholded, or filtered, on the basis of the value of Threshold. The accuracy of this approach depends on the speed of movement in the scene: faster movements may require higher thresholds.
Threshold: The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity Ii,j is less than some fixed constant T (that is, Ii,j < T), or with a white pixel if the image intensity is greater than that constant. For example, in an image of a dark tree against snow, this results in the tree becoming completely black and the snow becoming completely white.
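Steps 1-4 can be condensed into a short sketch that subtracts a background frame from the current frame and thresholds the absolute difference; the threshold value here is an arbitrary assumption.

```python
# Basic background subtraction: difference the current frame against a
# reference background and binarize the result, as in Steps 1-4 above.
import cv2

def foreground_mask(frame_gray, background_gray, threshold=25):
    """Per-pixel |P[I(t)] - P[B]| > Threshold, returned as a 0/255 mask."""
    diff = cv2.absdiff(frame_gray, background_gray)  # absolute difference image
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask
```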
Multiband thresholding: Colour images can also be thresholded. One approach is to designate a separate threshold for each of the RGB components of the image and then combine them with an AND operation. This reflects the way the camera works and how the data is stored in the computer, but it does not correspond to the way people perceive colour. Therefore, the HSL and HSV colour models are more often used; since hue is a circular quantity, it requires circular thresholding.
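A hedged sketch of multiband thresholding in the HSV model using OpenCV's inRange; the bounds given are illustrative guesses, not values from this paper, and a hue range that wraps around red would need two masks OR-ed together.

```python
# Multiband thresholding in HSV: one lower/upper bound per channel, ANDed by inRange.
import cv2
import numpy as np

def hsv_mask(bgr_image, lower=(0, 40, 60), upper=(20, 255, 255)):
    """Return a binary mask of pixels whose H, S and V all lie within the bounds."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)  # OpenCV hue range: 0-179
    return cv2.inRange(hsv, np.array(lower), np.array(upper))
```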
B. Blob Detection Algorithm
Blob detection is an algorithm used to determine whether a group of connected pixels are related to each other [8]. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene. In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or colour, compared to surrounding regions. A blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered similar to each other. To find coloured blobs, the colour image should first be converted from RGB to HSV format so that the colours are easier to separate. Images taken by the camera at different times are then checked for correspondences, displacements or changes. The image is filtered with a Gaussian at different scales, which is done by just repeatedly filtering with the same Gaussian. A common class of blob detectors is based on the Laplacian of the Gaussian (LoG). Given an input image f(x, y), the image is convolved with a Gaussian kernel g(x, y, t) at a certain scale t to give a scale-space representation L(x, y; t) = g(x, y, t) * f(x, y). Then the result of applying the Laplacian operator, ∇²L = Lxx + Lyy, is computed, which usually yields strong positive responses for dark blobs of extent √(2t) and strong negative responses for bright blobs of similar size. To automatically capture blobs of different (unknown) sizes in the image domain, a multi-scale approach is therefore necessary: the image filtered at one scale is subtracted from the image filtered at the previous scale.
Template matching,a
basic method of template matching uses a
convolution mask (template), tailored to a specific
feature of the search image, which we want to
detect. The convolution output will be highest at
places where the imagestructure matches the mask
structure, where large image values get multiplied
by large mask values. Implementation:1. Pick a
part of the search image to use as a template: Let
the search image be S(x, y), where (x, y) represent
the coordinates of each pixel in the search image.
Let the template be T(x t, y t
represent the coordinates of each pixel in
template.2. Then simply move the center (or the
origin) of the template T(x t, y
point in the search image and calculate the sum of
products between the coefficients in S(x, y) and T(x
t, y t) over the whole area spanned by the template.
As all possible positions of the template with
respect to the search image are considered, the
position with the highest score is the best position.
This method is also referred to
Filtering' and the template is called a filter mask.
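This sliding sum of products is essentially what OpenCV's matchTemplate computes; the sketch below uses the normalized cross-correlation variant, a design choice of ours for robustness to brightness changes rather than something prescribed by the paper.

```python
# Template matching via (normalized) cross-correlation.
import cv2

def best_match(search_gray, template_gray):
    """Return the top-left corner of the best template position and its score."""
    result = cv2.matchTemplate(search_gray, template_gray, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)  # highest score = best position
    return max_loc, max_val
```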
V. CONCLUSION
Sign languages are one of the main communication methods used by deaf people, but, as opposed to common thought, there is no universal sign language: every country or even regional group uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on images, videos and sensor input. The latter is the ultimate goal of this research, but sign language is not a simple spelling of spoken language, so recognizing isolated signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. The system will provide output in the form of text equivalent to the recognized sign language hand configuration. This system will ease and encourage the interaction of common people with handicapped people, since common people would no longer be required to learn the various sign languages in order to
communicate with them. This system could be
applied at various tasks, be it commercial or non-
commercial, where there is involvement of
handicapped people. Handicapped people can
benefit from this system in their day to day life
whenever they need to easily convey their message
through their sign language to common people.
ACKNOWLEDGMENT
It gives us great pleasure in presenting the
preliminary project report on ‘IMAGE BASED
SIGN LANGUAGE RECOGNITION ON
ANDROID’. We would like to take this opportunity
to thank our internal guide Prof. S. S. Raskar for
giving us all the help and guidance we needed. We
are really grateful to them for their kind support.
Their valuable suggestions were very helpful. We
are also grateful to Prof. N. F. Shaikh, Head of the Computer Engineering Department, Modern Education Society's College of Engineering, for her indispensable support and suggestions.
REFERENCES
1. M. Mohandes, M. Deriche, and J. Liu, "Image-Based and Sensor-Based Approaches to Arabic Sign Language Recognition", IEEE Transactions on Human-Machine Systems, Vol. 44, No. 4, August 2014.
2. Ravikiran J, Kavi Mahesh, Suhas Mahishi, Dheeraj R, Sudheender S, and Nitin V Pujari, "Finger Detection for Sign Language Recognition", Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009, Hong Kong.
3. Son Lam Phung, Abdesselam Bouzerdoum, and Douglas Chai, "Skin Segmentation Using Color Pixel Classification: Analysis and Comparison", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, January 2005.
4. Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R, "SignPro - An Application Suite for Deaf and Dumb", IJCSET, Vol. 2, Issue 5, pp. 1203-1206, May 2012.
5. William T. Freeman and Michal Roth, "Orientation Histograms for Hand Gesture Recognition", IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, June 1995.
6. V. Nayakwadi and N. B. Pokale, "Natural Hand Gestures Recognition System for Intelligent HCI", International Journal of Computer Applications Technology and Research, 2013.
7. Massimo Piccardi, "Background subtraction techniques: a review", IEEE International Conference on Systems, Man and Cybernetics, 2004.
8. Anne Kaspers, "Blob Detection".