International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 4, No.4, August 2014 
A SIGN LANGUAGE RECOGNITION APPROACH FOR 
HUMAN-ROBOT SYMBIOSIS 
Afzal Hossian1, Shahrin Chowdhury2, and Asma-ull-Hosna3 
1IIT (Institute of Information Technology), University of Dhaka, Bangladesh 
2Chalmers University of Technology, Gothenburg, Sweden 
3Sookmyung Women’s University, Seoul, South-Korea 
ABSTRACT 
This paper introduces a new concept for the establishment of a human-robot symbiotic relationship. The 
system is based on knowledge-based image processing methodologies for model-based vision and 
intelligent task scheduling for an autonomous social robot. The paper aims at the automatic translation 
of static gestures of alphabets and signs in American Sign Language (ASL), using a neural network with 
the backpropagation algorithm. The system deals with images of bare hands to achieve the recognition 
task. For each individual sign, 10 sample images have been considered, so in total 300 samples have 
been processed. To compare the training set of signs with the sample images, both are converted into 
feature vectors. Experimental results reveal that the system can recognize the selected ASL signs with an 
accuracy of 92.00%. Finally, the system has been implemented by issuing ASL hand gesture commands to a 
robot car named “Moto-robo”. 
KEYWORDS 
American Sign Language, Histogram equalization, Human-robot symbiosis, Moto-Robo, Skin colour 
segmentation. 
1. INTRODUCTION 
The American Heritage Dictionary defines the word “symbiosis” as follows: “A 
close, prolonged association between two or more different organisms of different species that may, 
but does not necessarily, benefit each member” [1]. In recent times this biological term has been 
used to describe similar relations among a wider collection of entities. The main purpose of this 
research is to establish a symbiotic relationship between robots and human beings for their 
coexistence and cooperative work, and to consolidate their relationship for the benefit of each other. 
Image understanding concerns finding interpretations of images, that is, explanations of the 
meaning of their contents. In order to establish a human-robot symbiotic society, different kinds of 
objects are interpreted using visual, geometrical and knowledge-based approaches. When robots 
work cooperatively with human beings, they need to share and exchange ideas and thoughts. Human hand 
gesture is therefore attracting tremendous interest in the advancement of the human-robot interface, 
since it provides a natural and efficient way of conveying expressions. 
Sign language is the fundamental communication method for people with hearing impairments. 
For an ordinary person to communicate with deaf people, a translator is 
usually needed to translate sign language into natural language and vice versa [2]. As a primary 
component of many sign languages, and in particular American Sign Language (ASL), hand 
gestures and the finger-spelling alphabet play an important role in deaf learning and 
communication. Sign language can therefore be considered a collection of gestures, 
movements, postures, and facial expressions corresponding to letters and words in natural 
languages. 
DOI : 10.5121/ijcseit.2014.4402 
American Sign Language (ASL) is considered a complete language, which includes signs 
using the hands and other gestures, supported by facial expressions and body postures [2]. ASL 
follows a different grammar pattern compared to other natural languages. Nearly 6,000 
gestures of common words are represented in ASL using finger spelling. The 26 individual alphabets 
are signified by 26 different single-hand gestures. These 26 alphabets of ASL are 
presented in Fig. 1. 
Charayaphan and Marble [3] investigated a way of using image processing to understand ASL; 
their suggested system correctly recognized 27 out of 31 ASL symbols. Fels and Hinton [4] 
developed a system using a VPL DataGlove Mark II along with a Polhemus tracker as 
input devices, applying a neural network method to categorize hand gestures. Starner and 
Pentland [5] applied a view-based approach with two-dimensional features of a single camera 
as input to HMMs. Using HMMs and a 262-sign vocabulary, Grobel and Assan [6] achieved 
91.3% accuracy for recognizing isolated signs; they collected sample features from video 
recordings of users wearing colour gloves. Bowden and Sarhadi [7] developed a non-linear 
model of shape and motion for tracking finger-spelt American Sign Language, an approach 
similar to HMMs in which ASL signs are projected into shape space to estimate the models 
and track them. 
Figure 1. Alphabets of American Sign Language (A–Z; J and Z are dynamic signs) 
This system is capable of visually detecting all static signs of American Sign Language 
(ASL): alphabets and numerical digits, as well as general words such as like, love, and not agreed. 
Users can interact using their fingers only; there is no need for gloves or any other devices. 
Still, variation in hand shapes and operational habits leads to recognition difficulties. We 
therefore investigated signer-independent sign language recognition to improve the robustness 
and practicability of the system. Since the system is based on the Affine transformation, our 
method relies on representing the gesture as a feature vector that is translation, scale and 
rotation invariant. 
2. SYSTEM DESIGN 
The ASL recognition system has two phases: the feature extraction phase and the classification 
phase, as shown in Fig. 2. 
The image samples are resized and then converted from RGB to YIQ colour model. Afterwards 
the images are segmented to detect and digitize the sign image. 
In the classification stage, a 3-layer, feed-forward backpropagation neural network is constructed. 
It consists of 40×30 neurons in the input layer, 768 (70% of the input) neurons in the hidden layer, 
and 30 neurons in the output layer (the total number of ASL image classes for the network). 
Figure 2. System overview (feature extraction stage: resize the image, convert RGB to YIQ, skin 
colour segmentation, connected component analysis; classification stage: neural network, gesture 
command generation, robot action) 
2.1 Features to analyse images 
Normalization of sample images, equalization of histogram, image filtering, and skin colour 
segmentation are highlighted in this phase. 
2.1.1 Normalization of sample images 
Images are resized to 160 by 120 pixels. The nearest-neighbour interpolation method is used to 
determine the values of the pixels in the output image, and a low-pass filter is applied in order to 
reduce aliasing. 
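A minimal sketch of nearest-neighbour resizing for a greyscale image stored as a list of rows (pure Python; the function name is illustrative, and the paper's low-pass prefiltering step is omitted here):

```python
def resize_nearest(image, out_w=160, out_h=120):
    """Resize a 2-D image (list of rows) with nearest-neighbour interpolation."""
    in_h, in_w = len(image), len(image[0])
    out = []
    for y in range(out_h):
        src_y = min(int(y * in_h / out_h), in_h - 1)      # nearest source row
        row = []
        for x in range(out_w):
            src_x = min(int(x * in_w / out_w), in_w - 1)  # nearest source column
            row.append(image[src_y][src_x])
        out.append(row)
    return out
```

Each output pixel simply copies the closest input pixel, which is why a low-pass filter is needed beforehand to reduce aliasing.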
2.1.2. Equalization of Histogram 
Histogram equalization is used to improve the lighting conditions and the contrast of the image, 
since the contrast of the hand images depends on the lighting conditions. Let the histogram 

h(r_i) = p_i / n 

of a digital hand image consist of the colour bins in the range [0, C − 1], where r_i is the i-th 
colour bin, p_i is the number of pixels in the image with that colour bin, and n is the total number 
of pixels in the image. The cumulative sum of the bins over the interval [0, 1] yields the mapping 
s = T(r), with 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1, which transforms each pixel value r of the original 
image into an equalized level s [8]. 
The histogram equalization process is illustrated in Fig. 3. 
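A minimal sketch of this equalization for a flat list of grey levels (pure Python; the function name and the rescaling of T(r) back to integer levels are illustrative):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization: map each level r to s = T(r), the cumulative
    sum of h(r_i) = p_i / n up to r, rescaled to [0, levels - 1]."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1                 # p_i: pixel count per colour bin
    cdf, running = [], 0.0
    for h in hist:
        running += h / n             # T(r): cumulative sum of p_i / n
        cdf.append(running)
    return [round(cdf[p] * (levels - 1)) for p in pixels]
```

A uniform histogram maps onto evenly spread output levels, which is what stretches the contrast of a dimly lit hand image.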
2.1.3. Image Filtering 
The Prewitt filter offers the advantage of suppressing the noise collected from various 
sources without erasing image details, as a low-pass filter would. 
2.1.4. Skin colour segmentation 
Skin colour segmentation is based on visual information about human skin colours from the 
image sequences in the YIQ colour space. The image samples are converted from the RGB to the 
YIQ colour model, and the YIQ space is searched to identify the skin colour values that have 
dominance over the image. 
In the following matrix, the luminance channel and the two chrominance channels are represented by 
Y and (I, Q) respectively; YIQ describes a colour by its luminance, hue and saturation, and is 
produced from RGB by the linear transformation [8]: 

[ Y ]   [ 0.299  0.587  0.114 ] [ R ] 
[ I ] = [ 0.596 −0.274 −0.322 ] [ G ]    (1) 
[ Q ]   [ 0.211 −0.523  0.312 ] [ B ] 

Here the red, green, and blue component values are denoted by R, G and B, each in the range 
[0, 255]. 
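As a hedged illustration, Eq. (1) can be applied per pixel; the printed coefficients are garbled in this copy, so the sketch below uses the standard NTSC RGB-to-YIQ coefficients, which Eq. (1) presumably intends:

```python
def rgb_to_yiq(r, g, b):
    """Convert one RGB pixel (components in [0, 255]) to YIQ using the
    standard NTSC transform (assumed to be the matrix of Eq. (1))."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chrominance I
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chrominance Q
    return y, i, q
```

For a grey pixel (R = G = B) the chrominance channels vanish, since each of the I and Q rows sums to zero.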
Since human skin colours are clustered in the colour space but differ from person to person and 
across races, the skin pixels are thresholded empirically in order to detect the hand parts in an 
image [9],[10]. 
The threshold values are calculated empirically using the following condition: 

(60 < Y < 200) and (20 < I < 50)    (2) 
The detection of hand region boundaries by such a YIQ segmentation process is illustrated in Fig. 
4. 
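A sketch of the thresholding rule of Eq. (2) applied per pixel (pure Python; the image-as-rows-of-(Y, I, Q)-tuples representation is an assumption for illustration):

```python
def is_skin(y, i):
    """Empirical skin test from Eq. (2): 60 < Y < 200 and 20 < I < 50."""
    return 60 < y < 200 and 20 < i < 50

def skin_mask(yiq_image):
    """Binary mask (1 = skin) for an image given as rows of (Y, I, Q) tuples."""
    return [[1 if is_skin(y, i) else 0 for (y, i, q) in row] for row in yiq_image]
```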
The exact location of the hand is then determined as the largest connected region of 
skin-coloured pixels in the image. The region-growing algorithm is applied to detect the connected 
components of the unevenly segmented image. In this experiment, 8-pixel neighbourhood connectivity 
is employed. In order to remove false regions arising from isolated blocks, smaller connected 
regions are assigned the values of the background pixels. 
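The connected-component step can be sketched as breadth-first region growing over the 8-pixel neighbourhood; keeping only the largest region and resetting the rest to background mirrors the false-region removal described above (pure-Python sketch, names ours):

```python
from collections import deque

def largest_skin_region(mask):
    """Keep only the largest 8-connected region of 1s in a binary mask;
    smaller regions are reset to the background value 0."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] == 1 and not seen[sy][sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                       # region growing (BFS)
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):          # 8-pixel neighbourhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] == 1 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```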
2.2 Classification phase 
The classification phase includes neural network training for the recognition of the binary image 
patterns of the hand. Neural network results are never perfect, and choosing an architecture is a 
difficult decision best settled by practice; we therefore examined different architectures and 
decided according to their results. 
Figure 3. Histogram equalization 
Figure 4. Skin colour segmentation
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 4, No.4, August 2014 
Therefore, after several experiments, it has been decided that the proposed system should be 
based on supervised learning in which the learning rule is provided with the set of examples (the 
training set). When the parameters, weights and biases of the network are initialized, the network 
is ready for training. The multi-layer perceptron, as shown in Fig. 5, with backpropagation 
algorithm has been employed for this research. 
Figure 5. Network architecture of BPNN 
The number of epochs was 10,000 and the goal was 0.0001. The back-propagation training 
algorithm is given below. 
Step 1 Initial Phase 
To ensure a uniform distribution, the initial weights and thresholds of the network are set to 
random numbers generated from the range 

(−2.4 / F_i , +2.4 / F_i), 

where F_i is the total number of inputs of neuron i in the network. 
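The initialization rule can be sketched as follows (illustrative names):

```python
import random

def init_weight(fan_in):
    """Draw one initial weight or threshold uniformly from
    (-2.4 / F_i, +2.4 / F_i), where F_i is the neuron's number of inputs."""
    bound = 2.4 / fan_in
    return random.uniform(-bound, bound)
```

Scaling the range by the fan-in keeps the initial weighted sums small, so the sigmoid units start in their sensitive (non-saturated) region.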
Step 2 Active Phase 
The back-propagation neural network is activated by applying the inputs x_1(t), x_2(t), ..., x_n(t) 
and the desired outputs y_d,1(t), y_d,2(t), ..., y_d,n(t). 
(a) The actual outputs of the neurons in the hidden layer are calculated as: 

y_j(t) = sigmoid[ Σ_{i=1}^{n} x_i(t) · w_ij(t) − θ_j ],    (3) 

where n is the number of inputs of neuron j in the hidden layer, and sigmoid is the sigmoid 
activation function. 
(b) The actual outputs of the neurons in the output layer are calculated as: 

y_k(t) = sigmoid[ Σ_{j=1}^{m} y_j(t) · w_jk(t) − θ_k ],    (4) 

where m is the number of inputs of neuron k in the output layer. 
Step 3 Training of Weight 
The following equations are used to propagate the errors associated with the output neurons and 
to update the weights in the back-propagation network: 

w_jk(p+1) = w_jk(p) + Δw_jk(p),    (5) 
Δw_jk(p) = α · y_j(p) · δ_k(p),    (6) 
w_ij(p+1) = w_ij(p) + Δw_ij(p),    (7) 
Δw_ij(p) = α · x_i(p) · δ_j(p),    (8) 

where the error of output neuron k is 

e_k(p) = y_d,k(p) − y_k(p),    (9) 

the error gradient for neuron k in the output layer is 

δ_k(p) = y_k(p) [1 − y_k(p)] e_k(p),    (10) 

and the error gradient for neuron j in the hidden layer is 

δ_j(p) = y_j(p) [1 − y_j(p)] Σ_{k=1}^{l} δ_k(p) w_jk(p).    (11) 
Step 4 Iteration 
Increase iteration t by one, go back to Step 2 and repeat the process until the error value reduces 
to the desired level. A complete flow chart of our proposed network is shown in Fig. 6. 
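Assuming the reading of Steps 1–4 above (initialization in (−2.4/F_i, +2.4/F_i), forward pass per Eqs. (3)–(4), updates per Eqs. (5)–(11)), a minimal Python sketch of the training loop might look as follows; the class and variable names are ours, not the paper's, and the thresholds are kept fixed for brevity:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """Minimal 1-hidden-layer backpropagation network following Eqs. (3)-(11)."""
    def __init__(self, n_in, n_hid, n_out, alpha=0.5, seed=1):
        rng = random.Random(seed)
        def init(fan_in):                       # uniform in (-2.4/F_i, +2.4/F_i)
            b = 2.4 / fan_in
            return rng.uniform(-b, b)
        self.alpha = alpha
        self.w_ij = [[init(n_in) for _ in range(n_hid)] for _ in range(n_in)]
        self.w_jk = [[init(n_hid) for _ in range(n_out)] for _ in range(n_hid)]
        self.th_j = [init(n_in) for _ in range(n_hid)]
        self.th_k = [init(n_hid) for _ in range(n_out)]

    def forward(self, x):
        y_j = [sigmoid(sum(x[i] * self.w_ij[i][j] for i in range(len(x)))
                       - self.th_j[j]) for j in range(len(self.th_j))]    # Eq. (3)
        y_k = [sigmoid(sum(y_j[j] * self.w_jk[j][k] for j in range(len(y_j)))
                       - self.th_k[k]) for k in range(len(self.th_k))]    # Eq. (4)
        return y_j, y_k

    def train_step(self, x, d):
        y_j, y_k = self.forward(x)
        e_k = [d[k] - y_k[k] for k in range(len(y_k))]                    # Eq. (9)
        dl_k = [y_k[k] * (1 - y_k[k]) * e_k[k] for k in range(len(y_k))]  # Eq. (10)
        dl_j = [y_j[j] * (1 - y_j[j]) *
                sum(dl_k[k] * self.w_jk[j][k] for k in range(len(dl_k)))
                for j in range(len(y_j))]                                 # Eq. (11)
        for j in range(len(y_j)):
            for k in range(len(dl_k)):
                self.w_jk[j][k] += self.alpha * y_j[j] * dl_k[k]          # Eqs. (5)-(6)
        for i in range(len(x)):
            for j in range(len(dl_j)):
                self.w_ij[i][j] += self.alpha * x[i] * dl_j[j]            # Eqs. (7)-(8)
        return sum(e * e for e in e_k)    # squared error for this pattern
```

Repeated calls to `train_step` over the training set, until the squared error falls below the goal, correspond to the iteration of Step 4.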
3. EXPERIMENTAL RESULTS AND PERFORMANCE 
The performance and effectiveness of the system have been evaluated using different hand 
expressions and by issuing commands to a robot named “Moto-Robo”. The experiments were run on a 
Pentium IV 1.2 GHz PC with 512 MB RAM, and Visual C++ was used as the programming language to 
implement the algorithm. 
3.1. Interfacing the robot 
The communication link between the computer and the robot has been established by means of the 
parallel port, a 25-pin D-shaped female (DB25) connector on the back of the computer. The pin 
configuration of the DB25 connector is shown in Fig. 7. The lines of the DB25 connector are 
divided into three groups: status lines, data lines and control lines. As the names suggest, data 
is transferred over the data lines, the control lines are used to control the peripheral, and the 
peripheral returns status signals to the computer through the status lines. 
Programmers access the parallel port through a few library functions. 
Figure 6. Flow chart of BPNN (initialization → activation → weight training → iteration until 
termination) 

Figure 7. Pin configuration of DB25 port (status, data, and control registers) 
Visual C++ provides two functions to access IO mapped peripherals, ‘inp’ for reading data from 
the port and ‘outp’ for writing data into the port. 
3.2 Analysis of Experiments 
First, to determine the recognition ability of the hand gesture recognition system, signs were 
classified for both the training and the testing data sets; the quantity of inputs influences the 
neural network, and resemblances between a few of the signs create some problems in performance. 

In this experiment, binary images are used for recognition with the training and testing data 
sets. Ten samples of each sign were taken from 10 different volunteers. For each sign, 5 of the 
10 samples were used for training, while the remaining 5 were used for testing. Various 
orientations and distances were considered while collecting the sample images with a digital 
camera. In this way, we obtained a data set with cases of different sizes and orientations, so we 
could examine the capabilities of our feature extraction scheme. 
Performance evaluation of the system depends on its capability of correctly categorizing samples 
related to their classes. The ratio of correctly categorizing samples and total amount of sample is 
denoted as recognition ratio, i.e. 
Recognition rate = (Number of correctly categorized signs / Total number of signs) × 100%    (12) 
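Eq. (12) amounts to a one-liner; a sketch in Python (the sample counts in the usage below are hypothetical, not the paper's):

```python
def recognition_rate(correct, total):
    """Eq. (12): percentage of correctly categorized signs."""
    return correct / total * 100.0
```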
In the backpropagation learning algorithm, the modification of the weights over a number of epochs 
results in a continuous decrease of the error. The training curve is shown in Fig. 8. 
Figure 8. Error versus iteration for training the BPNN 

Figure 9. Program interface for robot control 
Table 1 Command to control Moto-Robo 
3.3 Implementation 
A remote-controlled car (Moto-Robo), connected to the PC through the parallel port, has been 
controlled by means of commands issued through the hand gestures of the user. The car performs 
several movements, such as Forward, Backward, Turn right, Turn left, Turn light on and Turn light 
off, corresponding to the signs F, B, R, L, W, and E respectively. Some of the ASL signs 
employed for controlling the robot are listed in Table 1. 
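The sign-to-action dispatch of Table 1 can be sketched as a lookup table (the action names follow the list above; the dictionary representation and function name are ours, and the actual byte codes written to the parallel-port data register are not given here):

```python
# Mapping from recognized ASL letters to Moto-Robo actions (per the paper's
# list); the values are descriptive placeholders, not the real port codes.
COMMANDS = {
    "F": "forward",
    "B": "backward",
    "R": "turn_right",
    "L": "turn_left",
    "W": "light_on",
    "E": "light_off",
}

def command_for(sign):
    """Return the robot action for a recognized sign, or None if unmapped."""
    return COMMANDS.get(sign)
```

In the actual system, each action would be emitted to the robot by writing the corresponding byte to the data lines with the port-output library function.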
The system was tested with 300 untrained images (ten images for each sign), previously unseen in 
the testing phase. In order to present the outputs in a suitable way, a GUI has been created. An 
example is shown in Fig. 9, where one of the actions of the robot resulting from the hand gesture 
recognition process is shown. 
4. CONCLUSION 
This research presents the development of a system for the recognition of American Sign 
Language. Based on the recognition of different hand gestures, a real-time robot interaction 
system has been implemented, with a set of input data prepared from each image pattern in the 
training set. Images for the different signs were captured by a digital camera, without the need 
for any gloves. Deviations in position, direction, size and gesture proved to be easily handled 
by the developed system, because the feature extraction method uses the Affine transformation to 
make the system translation, scale and rotation invariant. The recognition rates for the training 
data and the testing data are 92.0% and 80% respectively. 

The work presented in this research deals with static signs of ASL only; the adaptation of 
dynamic signs is an interesting direction for future work. The existing system is limited to 
images with a non-skin-coloured background, and overcoming this limitation would make the system 
more applicable in real life. Besides hand images, other types of input, for example eye 
tracking, facial expressions and head gestures, could also be considered as samples for the 
network to analyse. The goal is to create a symbiotic environment that gives robots the 
opportunity to exchange ideas with human beings, which will bring benefits to both and have a 
positive impact on society. 
REFERENCES 
[1] M. A. Bhuiyan and H. Ueno, (2003) “Image Understanding for Human-robot Symbiosis”, 6th ICCIT, 
Dhaka, Bangladesh, Vol. 2, pp. 782-787. 
[2] International Bibliography of Sign Language, (2005), [online]. Available: 
http://www.signlang.unihamburg.de/bibweb/FJournals.html 
[3] C. Charayaphan and A. Marble, (1992) “Image processing system for interpreting motion in 
American sign language”, Journal of Biomedical Engineering, Vol. 14, pp. 419-425. 
[4] S. Fels and G. Hinton, (1993) “GloveTalk: a neural network interface between a DataGlove and a 
speech synthesizer”, IEEE Transactions on Neural Networks, Vol. 4, pp. 2-8. 
[5] T. Starner and A. Pentland, (1995) “Visual recognition of American sign language using hidden 
Markov models”, International Workshop on Automatic Face and Gesture Recognition, Zurich, 
Switzerland, pp. 189-194. 
[6] K. Grobel and M. Assan, (1996) “Isolated sign language recognition using hidden Markov models”, 
Proceedings of the International Conference on Systems, Man and Cybernetics, pp. 162-167. 
[7] R. Bowden and M. Sarhadi, (2002) “A non-linear model of shape and motion for tracking finger spelt 
American sign language”, Image and Vision Computing, Vol. 9-10, pp. 597-607. 
[8] R. C. Gonzalez and R. E. Woods, (2003) “Digital Image Processing”, 2nd Edition, Pearson Education 
Inc., Delhi. 
[9] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2003) “Face Detection and Facial 
Feature Localization for Human-machine Interface”, NII Journal, Vol. 5, No. 1, pp. 26-39. 
[10] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2004) “On Tracking of Eye for Human-Robot 
Interface”, International Journal of Robotics and Automation, Vol. 19, No. 1, pp. 42-54. 

EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXINGIJCSEA Journal
 
Illumination Invariant Hand Gesture Classification against Complex Background...
Illumination Invariant Hand Gesture Classification against Complex Background...Illumination Invariant Hand Gesture Classification against Complex Background...
Illumination Invariant Hand Gesture Classification against Complex Background...IJCSIS Research Publications
 

Similar to ttA sign language recognition approach for (20)

A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSISA SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
 
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSISA SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...
 
Translation of sign language using generic fourier descriptor and nearest nei...
Translation of sign language using generic fourier descriptor and nearest nei...Translation of sign language using generic fourier descriptor and nearest nei...
Translation of sign language using generic fourier descriptor and nearest nei...
 
Real time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVMReal time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVM
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...
 
A New Algorithm for Human Face Detection Using Skin Color Tone
A New Algorithm for Human Face Detection Using Skin Color ToneA New Algorithm for Human Face Detection Using Skin Color Tone
A New Algorithm for Human Face Detection Using Skin Color Tone
 
Explicit Content Image Detection
Explicit Content Image DetectionExplicit Content Image Detection
Explicit Content Image Detection
 
A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...
A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...
A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...
 
Hand Gesture Recognition using Neural Network
Hand Gesture Recognition using Neural NetworkHand Gesture Recognition using Neural Network
Hand Gesture Recognition using Neural Network
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
 
Ay4102371374
Ay4102371374Ay4102371374
Ay4102371374
 
B017310612
B017310612B017310612
B017310612
 
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNINGSIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
 
A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...
 
EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXING
EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXINGEFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXING
EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXING
 
Illumination Invariant Hand Gesture Classification against Complex Background...
Illumination Invariant Hand Gesture Classification against Complex Background...Illumination Invariant Hand Gesture Classification against Complex Background...
Illumination Invariant Hand Gesture Classification against Complex Background...
 

Recently uploaded

New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 

Recently uploaded (20)

New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 

ttA sign language recognition approach for

In this research, the main purpose is to establish a symbiotic relationship between robots and human beings so that they can coexist, work cooperatively, and consolidate their relationship for the benefit of each other. Image understanding concerns the problem of finding interpretations of images: interpretations that explain the meaning of their contents. In order to establish a human-robot symbiotic society, different kinds of objects are interpreted using visual, geometrical and knowledge-based approaches. When robots work cooperatively with human beings, it is necessary for the two to share and exchange ideas and thoughts. Human hand gesture is therefore generating tremendous interest in the advancement of human-robot interfaces, since it provides a natural and efficient means of expression. Sign language is the fundamental communication method for people who suffer from hearing defects. For an ordinary person to communicate with deaf people, a translator is usually needed to translate sign language into natural language and vice versa [2].

DOI : 10.5121/ijcseit.2014.4402

As a primary
component of many sign languages, and of American Sign Language (ASL) in particular, hand gestures and the finger-spelling alphabet play an important role in deaf learning and communication. Sign language can therefore be considered a collection of gestures, movements, postures, and facial expressions corresponding to letters and words in natural languages.

American Sign Language (ASL) is considered a complete language, comprising hand signs and other gestures supported by facial expressions and postures of the body [2]. ASL follows a grammar pattern different from that of ordinary spoken languages. Nearly 6,000 common words are represented in ASL by finger spelling, and the 26 letters of the alphabet are signified by 26 different single-hand gestures. These 26 alphabet signs are presented in Fig. 1.

Charayaphan and Marble [3] investigated a way of using image processing to understand ASL; their suggested system correctly recognizes 27 out of 31 ASL symbols. Fels and Hinton [4] developed a system using a VPL DataGlove Mark II together with a Polhemus tracker as input devices, applying a neural network method to categorize hand gestures. Starner and Pentland [5] applied a view-based approach using two-dimensional features from a single camera as the input of HMMs. Grobel and Assan [6] achieved 91.3% accuracy on recognizing isolated signs using HMMs over a 262-sign vocabulary, collecting sample features from video recordings of users wearing coloured gloves. Bowden and Sarhadi [7] developed a non-linear model of shape and motion for tracking finger-spelt American Sign Language, an approach similar to HMMs in which signs are projected into shape space to estimate the models and to track them.
Figure 1. Alphabets of American Sign Language (hand signs for A-Z, with J and Z being dynamic signs)

This system is capable of visually detecting all static signs of American Sign Language (ASL): alphabets and numerical digits, as well as general words, for example: like, love, not agreed
etc. can also be represented using ASL. Fortunately, users can interact using their fingers only; there is no need for additional gloves or any other devices. Still, variation in hand shape and operational habit leads to recognition difficulties. Therefore, we realized the necessity of investigating signer-independent sign language recognition to improve the system's robustness and practicability. Since the system is based on the Affine transformation, our method relies on representing the gesture as a feature vector that is translation, scale and rotation invariant.

2. SYSTEM DESIGN

The ASL recognition system has two phases: the feature extraction phase and the classification phase, as shown in Fig. 2. The image samples are resized and then converted from the RGB to the YIQ colour model. Afterwards the images are segmented to detect and digitize the sign image. In the classification stage, a 3-layer, feed-forward backpropagation neural network is constructed.

Figure 2. System overview (feature extraction stage: resize image, convert RGB to YIQ, skin colour segmentation, connected-component analysis; classification stage: neural network, gesture command generation, robot action)

It consists
of 40×30 neurons in the input layer, 768 neurons in the hidden layer, and 30 neurons in the output layer (one for each ASL image class in the classification network).

2.1 Features to analyse images

Normalization of sample images, histogram equalization, image filtering, and skin colour segmentation are covered in this phase.

2.1.1 Normalization of sample images

Images are resized to 160 by 120 pixels. A low-pass filter is used in order to reduce aliasing, and the nearest-neighbour interpolation method is applied to find the values of the pixels in the output image.

2.1.2 Equalization of Histogram

Histogram equalization is used to improve the lighting conditions and the contrast of the images, since the contrast of the hand images depends on the lighting conditions. Let the histogram of a digital hand image consist of the colour bins in the range [0, C−1], where r_i is the i-th colour bin, p_i is the number of pixels in the image with that colour bin, and n is the total number of pixels in the image, so that h(r_i) = p_i / n. Scaling constants are calculated from the cumulative sum of the bins over the interval [0, 1] of r [8]. For each pixel value r in the original image, the mapping s = T(r), with 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1, transforms r to the equalized level s. The histogram equalization process is illustrated in Fig. 3.

2.1.3 Image Filtering

The Prewitt filter provides the advantage of suppressing the noise collected from various sources without erasing some of the image details, as a low-pass filter may.

2.1.4 Skin colour segmentation

Skin colour segmentation is based on visual information of the human skin colours from the image sequences in the YIQ colour space. The image samples are converted from the RGB to the YIQ colour model.
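The mapping s = T(r) of Sec. 2.1.2 can be sketched in a few lines of Python. This is a minimal illustration operating on a flat list of grey levels, not the authors' implementation:

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization: map each level r through the cumulative
    distribution s = T(r) = sum of h(r_i) over all bins up to r."""
    n = len(pixels)
    # histogram counts p_i, so that h(r_i) = p_i / n
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative sum of the normalized bins, in [0, 1]
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / n)
    # scale back to the discrete level range [0, levels-1]
    return [round(cdf[p] * (levels - 1)) for p in pixels]

print(equalize_histogram([10, 10, 10, 200, 200, 250]))
```

Pixels sharing a bin map to the same output level, and the brightest occupied bin always maps to the top of the range, which is what spreads a low-contrast hand image across the full grey scale.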
To identify the colours that have dominance over the image, the amount of skin colour is checked by searching in the YIQ space. The YIQ colour model describes the three attributes of luminance, hue and saturation [8]. The Y channel represents luminance and the two chrominance channels are (I, Q); YIQ is produced by a linear transformation of RGB:

  [Y]   [ 0.299   0.587   0.114 ] [R]
  [I] = [ 0.596  −0.274  −0.322 ] [G]     (1)
  [Q]   [ 0.212  −0.523   0.311 ] [B]

Here the red, green, and blue component values are denoted by R, G, and B, each in the range [0, 255].
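As a sketch, the conversion and the empirical skin threshold of Eq. (2) might look as follows in Python. The matrix coefficients below are those of the standard NTSC RGB-to-YIQ transform, which Eq. (1) follows:

```python
def rgb_to_yiq(r, g, b):
    """RGB -> YIQ using the NTSC transform (cf. Eq. (1)); inputs in [0, 255]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

def is_skin(r, g, b):
    """Empirical skin threshold of Eq. (2): 60 <= Y <= 200 and 20 <= I <= 50."""
    y, i, _ = rgb_to_yiq(r, g, b)
    return 60 <= y <= 200 and 20 <= i <= 50
```

Note that only the Y and I channels enter the threshold; the Q channel is computed but unused, as in Eq. (2).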
Since human skin colours are clustered in colour space but differ from person to person and between races, the skin pixels are thresholded empirically in order to detect the hand parts in an image [9],[10]. The threshold is given by the following condition:

  60 ≤ Y ≤ 200 and 20 ≤ I ≤ 50     (2)

The detection of the hand region boundaries by this YIQ segmentation process is illustrated in Fig. 4. The exact location of the hand is then determined from the largest connected region of skin-coloured pixels in the image. For the detection of connected components in the unevenly segmented image, the region-growing algorithm is applied; in this experiment, 8-pixel neighbourhood connectivity is employed. In order to remove the false regions among the isolated blocks, the smaller connected regions are assigned the values of the background pixels.

2.2 Classification phase

The classification phase includes neural network training for the recognition of the binary image patterns of the hand. Neural network results are never perfect, and in practice experimentation often represents the best solution; deciding on an architecture in this field is very difficult, so we examined different architectures and decided according to their results.

Figure 3. Histogram equalization
Figure 4. Skin colour segmentation
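The region-growing step over the binary skin mask can be sketched as follows. This is a minimal Python version using breadth-first search with 8-neighbour connectivity; the paper does not specify its exact implementation:

```python
from collections import deque

def largest_skin_region(mask):
    """Label connected components of a binary skin mask by region growing
    (BFS, 8-pixel neighbourhood), keep the largest region, and clear the
    smaller blobs to the background value, as described in Sec. 2.1.4."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # grow a new region from this unvisited skin pixel
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    keep = set(best)
    return [[1 if (y, x) in keep else 0 for x in range(w)] for y in range(h)]
```

The surviving region gives the hand's location; everything else is treated as background.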
Therefore, after several experiments, it was decided that the proposed system should be based on supervised learning, in which the learning rule is provided with a set of examples (the training set). Once the parameters, weights and biases of the network are initialized, the network is ready for training. A multi-layer perceptron, as shown in Fig. 5, with the backpropagation algorithm has been employed for this research.

Figure 5. Network architecture of the BPNN

The number of epochs was 10,000 and the error goal was 0.0001. The backpropagation training algorithm is given below.

Step 1: Initialization
To ensure a uniform distribution, the weights and thresholds are initialized with random numbers drawn from the range

  (−2.4 / F_i, +2.4 / F_i),

where F_i is the total number of inputs of neuron i in the network.

Step 2: Activation
The backpropagation neural network is activated by applying the inputs x_1(t), x_2(t), ..., x_n(t) and the desired outputs y_{d,1}(t), y_{d,2}(t), ..., y_{d,n}(t).

(a) The actual outputs of the neurons in the hidden layer are calculated as

  y_j(t) = sigmoid( Σ_{i=1}^{n} x_i(t) · w_ij(t) − θ_j ),     (3)

where n is the number of inputs of neuron j in the hidden layer, and sigmoid is the sigmoid activation function.

(b) The actual outputs of the neurons in the output layer are calculated as

  y_k(t) = sigmoid( Σ_{j=1}^{m} y_j(t) · w_jk(t) − θ_k ),     (4)

where m is the number of inputs of neuron k in the output layer.

Step 3: Weight training
The following equations are used to propagate the errors associated with the output neurons and to update the weights in the backpropagation network:
  w_jk(p+1) = w_jk(p) + Δw_jk(p),     (5)
  Δw_jk(p) = α · y_j(p) · δ_k(p),     (6)
  w_ij(p+1) = w_ij(p) + Δw_ij(p),     (7)
  Δw_ij(p) = α · x_i(p) · δ_j(p),     (8)

where the output error is

  e_k(p) = y_{d,k}(p) − y_k(p),     (9)

the error gradient for neuron k in the output layer is

  δ_k(p) = y_k(p) · [1 − y_k(p)] · e_k(p),     (10)

and the error gradient for neuron j in the hidden layer is

  δ_j(p) = y_j(p) · [1 − y_j(p)] · Σ_{k=1}^{l} δ_k(p) · w_jk(p).     (11)

Step 4: Iteration
Increase iteration t by one, go back to Step 2 and repeat the process until the error value reduces to the desired level. A complete flow chart of our proposed network is shown in Fig. 6.

3. EXPERIMENTAL RESULTS AND PERFORMANCE

The performance and effectiveness of the system have been evaluated using different hand expressions to issue commands to a robot named "Moto-Robo". The computer configuration for this experiment was a Pentium IV 1.2 GHz PC with 512 MB RAM. Visual C++ was used as the programming language to implement the algorithm.

3.1 Interfacing the robot

The communication link between the computer and the robot has been established by means of the parallel communication port. The parallel port is a 25-pin D-shaped female (DB25) connector at the back of the computer. The pin configuration of the DB25 connector is shown in Fig. 7. The lines of the DB25 connector are divided into three groups: Status lines, Data lines and Control lines. As the names suggest, data is transferred over the Data lines, the Control lines are used to control the peripheral, and the peripheral returns status signals to the computer through the Status lines. Programmers access the parallel port through library functions.
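Equations (3)-(11) amount to the following single training iteration, sketched here in Python with the thresholds θ held fixed for brevity. The variable names mirror the equations; this is an illustration, not the authors' Visual C++ code:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, d, w_ij, th_j, w_jk, th_k, alpha=0.1):
    """One backpropagation iteration over a single pattern x with desired
    output d, following Eqs. (3)-(11); returns the squared output error."""
    # Eq. (3): hidden activations y_j = sigmoid(sum_i x_i w_ij - theta_j)
    yj = [sigmoid(sum(x[i] * w_ij[i][j] for i in range(len(x))) - th_j[j])
          for j in range(len(th_j))]
    # Eq. (4): output activations y_k = sigmoid(sum_j y_j w_jk - theta_k)
    yk = [sigmoid(sum(yj[j] * w_jk[j][k] for j in range(len(yj))) - th_k[k])
          for k in range(len(th_k))]
    # Eqs. (9)-(10): output error e_k and gradient delta_k
    dk = [yk[k] * (1 - yk[k]) * (d[k] - yk[k]) for k in range(len(yk))]
    # Eq. (11): hidden-layer gradient delta_j
    dj = [yj[j] * (1 - yj[j]) * sum(dk[k] * w_jk[j][k] for k in range(len(dk)))
          for j in range(len(yj))]
    # Eqs. (5)-(6): output-layer weight updates
    for j in range(len(yj)):
        for k in range(len(yk)):
            w_jk[j][k] += alpha * yj[j] * dk[k]
    # Eqs. (7)-(8): hidden-layer weight updates
    for i in range(len(x)):
        for j in range(len(yj)):
            w_ij[i][j] += alpha * x[i] * dj[j]
    return sum((d[k] - yk[k]) ** 2 for k in range(len(yk)))
```

Repeating this step (Step 4 of the algorithm) drives the squared error down toward the 0.0001 goal used in the paper.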
Figure 6. Flow chart of the BPNN (initialization, activation, weight training and iteration until termination)
Figure 7. Pin configuration of the DB25 port (Status, Data and Control registers)

Visual C++ provides two functions to access IO-mapped peripherals: 'inp' for reading data from a port and 'outp' for writing data to a port.

3.2 Analysis of Experiments

First, to determine the recognition ability of the hand gesture recognition system, signs were classified for both the training and the testing data sets, since the quantity of inputs influences the neural network. A few of the signs have resemblances between them, which leads to some problems in performance. In this experiment, binary images are used with the training and testing data sets. For each sign, 10 samples were taken from 10 different volunteers; 5 out of the 10 samples were used for training, while the remaining 5 were used for testing. Various orientations and distances were considered while collecting the sample images using
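A hedged sketch of how recognized signs might be turned into writes on the port's data lines: the command bytes and the injected `outp` callable below are hypothetical, since the paper does not give the actual bit patterns, and 0x378 is only the conventional LPT1 base address (in the authors' Visual C++ setting the callable would be the real 'outp' function):

```python
# Hypothetical command bytes for the DB25 data lines; the actual bit
# patterns used for Moto-Robo are not given in the paper.
COMMANDS = {"F": 0x01, "B": 0x02, "R": 0x04, "L": 0x08, "W": 0x10, "E": 0x20}

def issue_command(sign, outp, port=0x378):
    """Translate a recognized ASL sign into a byte written to the parallel
    port. `outp` is injected so the hardware call can be stubbed in tests."""
    if sign not in COMMANDS:
        raise ValueError(f"no command bound to sign {sign!r}")
    outp(port, COMMANDS[sign])
    return COMMANDS[sign]
```

Injecting the port-writing function keeps the gesture-to-command mapping testable without the hardware attached.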
digital camera. This way, we were able to obtain a data set whose cases had different sizes and orientations, so we could examine the capabilities of our feature extraction scheme. Performance evaluation of the system depends on its capability of correctly categorizing samples with respect to their classes. The ratio of correctly categorized samples to the total number of samples is denoted the recognition rate, i.e.

Recognition rate = (Number of correctly categorized signs / Total number of signs) × 100%. (12)

In the backpropagation learning algorithm, the weight modifications applied over a number of iterations result in a continuous decrease of the error. The training curve is shown in Fig. 8.

Figure 8. Error versus iteration for training the BPNN (the error falls from about 1.4 to near 0 over 2500 iterations)

Figure 9. Program interface for robot control
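Eq. (12) is a straightforward ratio; a minimal sketch (the function name and the sample counts in the usage comments are illustrative, derived from the rates the paper reports):

```cpp
#include <cassert>

// Recognition rate as defined in Eq. (12):
// rate = (correctly categorized signs / total signs) * 100%.
double recognitionRate(int correct, int total) {
    return 100.0 * correct / total;
}
```

For example, a 92.0% training rate corresponds to 276 of 300 samples classified correctly, and an 80% testing rate to 120 of 150.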
Table 1. Commands to control Moto-Robo

3.3. Implementation

A remote-control car (Moto-Robo), connected to the PC through the parallel port, has been controlled by commands issued through the hand gestures of the user. The car supports several movements, such as Forward, Backward, Turn right, Turn left, Turn light on and Turn light off, corresponding to the signs F, B, R, L, W and E, respectively. Some of the ASL signs employed for controlling the robot are listed in Table 1. The system was tested with 300 untrained images (ten images for each sign), previously unseen images used in the testing phase. In order to present the results in a suitable way, a GUI has been created; an example is shown in Fig. 9, where one of the actions of the robot resulting from the hand gesture recognition process is shown.

4. CONCLUSION

This research presents the development of a system for the recognition of American Sign Language. On recognition of different hand gestures, a real-time robot interaction system has been implemented. Each image pattern in the training set was converted into a set of input data for accomplishing the recognition task. Without the need for any gloves, images of the different signs were captured with a digital camera. The developed system proved to adapt easily to deviations in position, direction, size and gesture, because the feature extraction method uses an affine transformation to make the system translation, scaling and rotation invariant. The recognition rates for training data and testing data are 92.0% and 80%, respectively. The work presented in this research deals with static signs of ASL only; the adaptation of dynamic signs is an interesting direction for future work.
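The sign-to-command assignment of Section 3.3 (F, B, R, L, W, E) can be sketched as a simple dispatch. The returned strings are illustrative action labels, not actual control codes from the paper.

```cpp
#include <cassert>
#include <string>

// Map a recognized ASL letter to a Moto-Robo action, following the
// assignment described in Section 3.3: F=Forward, B=Backward, R=Turn right,
// L=Turn left, W=Light on, E=Light off.
std::string commandForSign(char sign) {
    switch (sign) {
        case 'F': return "Forward";
        case 'B': return "Backward";
        case 'R': return "Turn right";
        case 'L': return "Turn left";
        case 'W': return "Turn light on";
        case 'E': return "Turn light off";
        default:  return "Unknown";  // unrecognized sign: take no action
    }
}
```

In the full system this dispatch would sit between the BPNN classifier's output and the parallel-port write, so an unrecognized sign safely maps to no action.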
A limitation of the existing system is that it only deals with images that have a non-skin-colour background; overcoming this limitation would make the system more usable in real life. Besides hand images, other types of input, for example eye tracking, facial expressions and head gestures, could also be considered as sample images for the network to analyse. The goal is to create a symbiotic environment that gives robots the opportunity to exchange ideas with human beings, which will bring benefits to both and have a positive impact on society.
REFERENCES

[1] M. A. Bhuiyan and H. Ueno, (2003) "Image Understanding for Human-robot Symbiosis", 6th ICCIT, Dhaka, Bangladesh, Vol. 2, pp. 782-787.
[2] International Bibliography of Sign Language, (2005), [online]. Available: http://www.signlang.unihamburg.de/bibweb/FJournals.html
[3] C. Charayaphan and A. Marble, (1992) "Image processing system for interpreting motion in American sign language", Journal of Biomedical Engineering, Vol. 14, pp. 419-425.
[4] S. Fels and G. Hinton, (1993) "GloveTalk: a neural network interface between a DataGlove and a speech synthesizer", IEEE Transactions on Neural Networks, Vol. 4, pp. 2-8.
[5] T. Starner and A. Pentland, (1995) "Visual recognition of American sign language using hidden Markov models", International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, pp. 189-194.
[6] K. Grobel and M. Assan, (1996) "Isolated sign language recognition using hidden Markov models", Proceedings of the International Conference on Systems, Man and Cybernetics, pp. 162-167.
[7] R. Bowden and M. Sarhadi, (2002) "A non-linear model of shape and motion for tracking finger spelt American sign language", Image and Vision Computing, Vol. 9-10, pp. 597-607.
[8] R. C. Gonzalez and R. E. Woods, (2003) "Digital Image Processing", Pearson Education Inc., 2nd Edition, Delhi.
[9] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2003) "Face Detection and Facial Feature Localization for Human-machine Interface", NII Journal, Vol. 5, No. 1, pp. 26-39.
[10] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2004) "On Tracking of Eye for Human-Robot Interface", International Journal of Robotics and Automation, Vol. 19, No. 1, pp. 42-54.