Kinect Sensor based Indian Sign Language Detection with Voice Extraction

Shubham Juneja (1), Chhaya Chandra (2), P. D. Mahapatra (3), Siddhi Sathe (4), Nilesh B. Bahadure (5) and Sankalp Verma (6)

(1) Material Science Program, M.Tech First Year Student, Indian Institute of Technology, Kanpur, Uttar Pradesh, India, junejashubh@gmail.com
(2) B.E., Electrical and Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India, chhaya.chandra02@gmail.com
(3) B.E., Electrical and Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India, pushpadas27495@gmail.com
(4) Department of Electrical & Electronics Engineering, B.E. Final Year Student, Bhilai Institute of Technology, Raipur, Chhattisgarh, India, sathesiddhi1996@gmail.com
(5) Associate Professor, Department of Electronics & Telecommunication Engineering, MIT College of Railway Engineering & Research, Solapur, Maharashtra, India, nbahadure@gmail.com
(6) Associate Professor, Department of Electrical & Electronics Engineering, Bhilai Institute of Technology, Raipur, Chhattisgarh, India, sankalpverma99@gmail.com

International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018, ISSN 1947-5500
Abstract
We are progressing towards new discoveries and inventions in the field of science and technology, but unfortunately very few inventions have addressed the problems faced by physically challenged people, who find it difficult to communicate with others because sign language, their prime medium of communication, is mostly not understood by common people. Many research works have been done to eliminate this kind of communication barrier, but those works rely on microcontrollers or other complicated techniques. Our study advances this process by using the Kinect sensor, a highly sensitive motion-sensing device with many other applications. Our workflow runs from capturing an image of the body, to conversion into a skeletal image, and from image processing to feature extraction of the detected image, finally producing an output along with its meaning and voice. The experimental results of our proposed algorithm are very promising, with an accuracy of 94.5%.
Keywords: Hidden Markov Model (HMM), Image
Processing, Kinect Sensor, Skeletal Image
1. Introduction
For a long time now, we have been experiencing a better life due to the existence of various electronic systems and sensing elements in almost every field. Physically challenged people find it easier to communicate with each other and with common people using different sets of hand gestures and body movements. We hereby provide an aid that lets them express themselves efficiently in front of common people, wherein their sign language is automatically converted into text and speech. Their hand and body gestures are taken as inputs by the sensor, making them easier to understand.
This is a machine-to-human interaction system that uses a Kinect sensor and Matlab to process the input data.
There has been a multitude of research done to date, but this paper provides a direct and flexible system for deaf and mute people. It extracts voice from sign-language gestures and also generates images and text depending on the input gestures given to the system. The first step is to give the gesture as input to the Kinect sensor, which senses the data and creates a 3-D image. This data is then transferred to Matlab, where it is processed through image processing and feature extraction using different segmentations and the Hidden Markov Model (HMM) algorithm. From the complete segmented body, only the image of the hand is cropped; the gesture of that hand is then compared with the images available in the database, and if they match, speech and text are obtained as output, making it easier for common people to understand.
In this way, disabled people will be confident enough to express their views anywhere and everywhere despite being physically challenged. A minimal sketch of this pipeline appears below.
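To make the flow above concrete, here is a minimal Matlab sketch of the described pipeline, under heavy assumptions: the skeletal and depth capture and the hand cropping have already been done, the cropped hand is stored as an image file, and a folder database/ holds labelled sign templates with matching audio recordings. The file names, the use of 2-D correlation for matching, and all variable names are our illustrative choices, not the authors' actual code.

    % Pipeline sketch (illustrative assumptions only; see the note above).
    hand = imbinarize(rgb2gray(imread('cropped_hand.png')));   % hypothetical cropped hand
    signs = dir(fullfile('database', '*.png'));                % hypothetical sign templates
    best = '';
    bestScore = -Inf;
    for k = 1:numel(signs)
        tmpl = imbinarize(rgb2gray(imread(fullfile('database', signs(k).name))));
        % Resize the template to the crop size and score the match by 2-D correlation.
        score = corr2(double(hand), double(imresize(tmpl, size(hand))));
        if score > bestScore
            bestScore = score;
            best = signs(k).name;
        end
    end
    fprintf('Detected sign: %s\n', best);                      % text output
    [y, fs] = audioread(fullfile('database', strrep(best, '.png', '.wav')));
    sound(y, fs);                                              % voice output

Note that imbinarize, imresize, and corr2 are Image Processing Toolbox functions.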
2. Literature Review
Sign language detection is considered an efficient way for physically challenged people to communicate. Many researchers have studied and investigated different algorithms to make the process easier.
Gunasekaran and Manikandan [1] have worked on a technique using a PIC microcontroller for the detection of sign languages. The authors stated that their method is better as it solves the real-time problems faced by disabled people. Their work involves extraction of voice as soon as any sign language is detected.
Kiratey Patil et al. [2] worked on the detection of American Sign Language. Their work is based on capturing American Sign Language, converting it into English, and flashing the output on an LCD. In this way their work may close the communication gap between common and disabled people.
Tavari et al. [3] worked on the recognition of Indian sign language from hand gestures. They proposed an idea of recognizing images formed by different hand movement gestures, using a web camera in their work. For identifying signs and translating text to voice, an Artificial Neural Network has been used.
Simon Lang [4] worked on sign language detection. He proposed a system that uses a Kinect sensor instead of a web camera. Out of nine signs performed by many people, a 97% detection rate was observed for eight signs. The important body parts are sensed easily by the Kinect sensor, and a Markov model is used continuously for detection.
The SignWriting system proposed by Guimaraes et al. [15], [16] aims at helping deaf people by using a stylus-and-screen device for written literacy in sign language. In a companion paper they provided databases so that sequences of characters can be stored and retrieved to represent the sign language and then edited. They are continuing to research improvements to their sensing algorithm.
According to Singha and Das [6], several Indian sign language gestures have been recognized through skin filtering, hand cropping, feature extraction, and classification using Eigenvalue-weighted Euclidean distance. Out of 26 alphabets, only the dynamic letters 'H' and 'J' were not taken into account; these will be considered in their future studies.
According to Xiujuan Chai et al. [7], tracking 3-D motion of the hand and body using the Kinect sensor is more effective and clear, which makes sign language detection easier.
In our earlier work [8] the efficiency was 92%; it has now increased to 94.5%. We have used a very simple algorithm here rather than using fuzzy c-means (FCM) clustering.
From the above studies, it is observed that some methods are proposed only for hand gesture recognition and others only for feature extraction. The survey also shows that no precise idea of feature extraction in the easiest way has been presented. This problem is solved in our study: we have proposed an algorithm and used the HMM technique as well. Gestures are identified easily, the information is matched against our preset database, and voice is extracted. This process enables common people to understand sign language easily.
3. Methodology
Our proposed algorithm follows these steps:
1. After the detection of the body in front of the Microsoft Xbox Kinect sensor, shown in Fig. 1, the sensor locates the joints of the body by pointing them out, and hence we get a skeletal image.
Fig. 1 Kinect Sensor
2. Then the segmented image of the body is formed from the skeletal image. The area of the hands, where the signs are captured, is cropped out of the whole segmented image. The cropped image is then converted into dots and dashes. The length of a dash is 4 units, and the spacing between two lines of dashes is also 4 units.
3. Through observation, we found that wherever the length of a dash is greater than 4 units, it corresponds to the image of the cropped hand.
4. In our proposed algorithm we use loops to detect the black points, i.e., the spaces between the dashes. This detection of black points determines the positions of the dashes by successive subtraction of points over the iterations.
5. The basic idea behind this work is that, after successive subtraction of points, a difference equal to 4 means there is no data, while a difference greater than 4 means there is actual image data of the cropped hand.
6. A further problem is how to detect the locations of the fingers. The algorithm for this is based on forming matrices of the black points detected earlier. In an iteration, the matrix coordinates of a finger are four times the number of lines of dashes.
7. To highlight the fingers, we plot star points at the same coordinates as the fingers, and for more precise feature extraction the image processing is followed by filtration of the image: the star points plotted on the figure are checked against the following conditions.
• whether they fall on a straight line, either horizontal or vertical;
• whether they fall on a constant slope;
• whether they fall within a 4-unit coordinate difference.
8. The identical points, which we call garbage points, are then eliminated. This yields the filtered image, but feature extraction continues in order to identify each finger: index, middle, ring, little, or thumb.
Table 1 shows the range of coordinates in which each finger is detected.
Table 1: Range of coordinates of fingers

S No.   Finger   Range of coordinates
1       Index    80-88
2       Middle   60-68
3       Ring     44-52
4       Little   32-40
5       Thumb    104-112
The flow chart of the above algorithm is shown in Fig. 2, and a code sketch of the dash and finger detection steps follows it.

Fig. 2 Flow chart of the proposed algorithm
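To illustrate steps 4-6 and the Table 1 lookup, the following Matlab sketch runs the successive-subtraction idea over a synthetic binary crop. The synthetic image, variable names, and output format are our assumptions for illustration; the authors' actual code is not published in the paper.

    % Dash detection by successive subtraction (illustrative sketch).
    bw = false(120, 60);            % synthetic binary crop: true = dash pixels
    bw(60:68, 15:40) = true;        % a fake hand region in the "middle finger" rows

    fingers = {'Little', 'Ring', 'Middle', 'Index', 'Thumb'};
    ranges  = [32 40; 44 52; 60 68; 80 88; 104 112];   % Table 1 coordinate ranges

    for r = 1:size(bw, 1)
        blackPts = find(~bw(r, :));     % black points = spaces between dashes
        gaps = diff(blackPts);          % successive subtraction of points
        idx  = find(gaps > 4);          % gap of 4 = no data; > 4 = hand data (step 5)
        if isempty(idx), continue; end
        hit = find(r >= ranges(:, 1) & r <= ranges(:, 2), 1);   % Table 1 lookup
        if ~isempty(hit)
            fprintf('Row %3d: %s finger, dash starting after column %d\n', ...
                    r, fingers{hit}, blackPts(idx(1)));
        end
    end

Running this prints one line per row of the synthetic region, each classified as the middle finger, mirroring how rows 60-68 map to 'Middle' in Table 1.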
4. Hidden Markov Model
HMM [5], [9] is an algorithm in which the actual stages of the work going on inside the system are not visible; only the final output after the whole processing is visible.
HMM works on probability: it models the input data with hidden variables, relates them to the various observations, and then processes those variables through a Markov process. HMM inference involves a four-stage process (a sketch of the filtering stage follows the list):
• Filtering: computing the distribution of the hidden state at the latest time step, given the statistical parameters and the observations received so far.
• Smoothing: the same computation as filtering, but for a state in the middle of the sequence, using observations from before and after it wherever needed.
• Most likely explanation: different from the above two stages; it is used when the HMM is applied to a different class of problems, to find the overall most probable state sequence.
• Statistical significance: used to obtain statistical data and evaluate the probability of a possible outcome.
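As a concrete illustration of the filtering stage, the following Matlab sketch runs the standard forward recursion on a toy two-state model. The transition matrix, emission matrix, and observation sequence are invented for illustration; the paper does not publish its trained HMM parameters.

    % Toy HMM filtering via the normalized forward recursion.
    A   = [0.7 0.3; 0.4 0.6];    % state transition probabilities (rows sum to 1)
    B   = [0.9 0.1; 0.2 0.8];    % P(observed symbol | hidden state)
    pi0 = [0.5; 0.5];            % initial state distribution
    obs = [1 2 2 1];             % observed symbol indices

    alpha = pi0 .* B(:, obs(1));            % incorporate the first observation
    alpha = alpha / sum(alpha);
    for t = 2:numel(obs)
        alpha = (A' * alpha) .* B(:, obs(t));   % predict, then weight by evidence
        alpha = alpha / sum(alpha);             % normalized filtered distribution
    end
    disp(alpha')    % P(hidden state at the final time | all observations so far)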
5. Result
Finally, after the detection of the whole image of the fingers, that is, the complete hand, an ANDing operation is carried out in Matlab for the final output.
The detected image of the sign is searched for in the database for its meaning, and as soon as a match is found the search is complete and we get the final output: the image of the meaning of the sign together with its voice. One possible reading of the ANDing match is sketched below.
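The paper does not spell out the ANDing operation, so the following fragment shows one plausible reading of it: a pixel-wise logical AND between the detected binary hand mask and a database template, normalized to a match score. The toy masks and the Jaccard-style normalization are our assumptions.

    % Toy "ANDing" match between a detected mask and a database template.
    hand = false(8); hand(2:6, 2:6) = true;    % toy detected hand mask
    tmpl = false(8); tmpl(3:7, 2:6) = true;    % toy database template
    overlap = nnz(hand & tmpl);                % logical AND counts shared pixels
    score   = overlap / nnz(hand | tmpl);      % normalize by the union (our choice)
    fprintf('Match score: %.2f\n', score);     % prints 0.67 for these masks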
As an example, if the number 4 is to be detected by the Kinect sensor, the person gestures 4 using his hand. The Kinect captures the skeletal image of the body as shown in Fig. 3.
Fig. 3 Skeletal image
From the skeletal image, a depth image is obtained, as shown in Fig. 4.
Fig. 4 Depth image
Then the image is converted into its segmented image
as shown in Fig. 5.
Fig. 5 Segmented image
The image of the hand is cropped from the segmented
image as shown in Fig. 6.
Fig. 6 Cropped image
The cropped image is then converted into a figure
with dots and dashes as shown in Fig. 7.
Fig. 7 Image with dots and dashes
The filtration of the figure after star-marking it to detect the fingers is done in three steps, as shown in Fig. 8.
Fig. 8 Filtration of figure after star marking
The detailed information about the detected fingers is displayed in the command window, as shown in Fig. 9.
Fig. 9 Detailed information of fingers detected
After the filtration is done, the database is searched for a match, and hence we get an output in the form of an image, as shown in Fig. 10(a), and a voice signal, plotted in the form of a histogram, as shown in Fig. 10(b).
Fig. 10(a) Output in the form of image
Fig. 10(b) Voice in the form of histogram
The overall output for various inputs given to the system is shown in Fig. 11.
Fig. 11 [a] Cropped images, [b] images of the meaning of signs, [c] histogram images of voice output
A detailed analysis of the various outputs with respect to the given inputs is tabulated in Table 2.

Table 2: Detailed Analysis

S No.   No. of correct attempts   No. of wrong attempts   Accuracy (%)
1       50                        0                       100
2       50                        0                       100
3       49                        1                       96
4       48                        2                       92
5       49                        1                       96
6       49                        1                       96
7       47                        3                       88
8       47                        3                       88

Total no. of attempts per sign = 50
Accuracy = (No. of correct attempts - No. of wrong attempts) / Total no. of attempts

Across all eight signs this gives 389 correct and 11 wrong attempts out of 400, i.e., (389 - 11)/400 = 94.5%, so the total accuracy is 94.5%. This calculation is reproduced in the short snippet below.
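The following three lines of Matlab reproduce the 94.5% figure from Table 2 using the stated formula; the per-sign counts are copied directly from the table.

    correct = [50 50 49 48 49 49 47 47];                      % Table 2, column 2
    wrong   = [ 0  0  1  2  1  1  3  3];                      % Table 2, column 3
    acc = (sum(correct) - sum(wrong)) / sum(correct + wrong) * 100   % acc = 94.5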
6. Conclusions and Future work
With reference to all the earlier studies, our work provides better accessibility with a simpler algorithm and more precise output.
Since the programming is done for detection of the left hand, the coordinates are taken accordingly. Our algorithm gives all the relevant information about the coordinates of each and every detected finger.
This system is very flexible and user-friendly: the user can be of any age, gender, size, or color, and the results will be the same. However, the intensity of light and the distance of the body from the sensor affect the efficiency. For the device to work effectively, it is suggested to keep the Kinect sensor at a height of about 62 cm from the ground, with the body to be detected at a distance of about 90 cm.
In our earlier work [8], the exact locations of the fingers and the detailed information about them were not determined, so we have overcome these problems here.
The star points marked earlier are not completely filtered out; a few points still remain. Sometimes a detected finger is also misinterpreted, but the probability of this is about one in ten.
Acknowledgments
We would like to extend our gratitude and sincere thanks to all the people who have taken a great deal of interest in our work and helped us with valuable suggestions.
References
[1] K. Gunasekaran and R. Manikandan, “Sign
language to speech translation system using pic
microcontroller”, International Journal of
Engineering and Technology, vol. 05, no. 02, pp.
1024–1028, April 2013.
[2] K. Patil, G. Pendharkar, and G. N. Gaikwad,
“American sign language detection”, International
Journal of Scientific and Research Publications, vol.
04, pp. 01–06, November 2014.
[3] N. V. Tavari, A. V. Deorankar, and P. N. Chatur,
“Hand gesture recognition of Indian sign language to
aid physically impaired people”, International Journal
of Engineering Research and Applications, pp. 60–
66, April 2014.
[4] S. Lang, “Sign language recognition with Kinect”, Master's thesis, Freie Universität Berlin, 2011.
[5] C.-Y. Kao and C.-S. Fahn, “A human-machine
interaction technique: hand gesture recognition based
on Hidden Markov Models with trajectory of hand
motion”, Elsevier Advanced in Control Engineering
and Information Science, vol. 15, pp. 3739–3743,
2011.
[6] J. Singha and K. Das, “Indian sign language recognition using Eigenvalue weighted Euclidean distance based classification technique”, International Journal of Advanced Computer Science and Applications, vol. 04, no. 02, pp. 188–195, 2013.
[7] X. Chai, G. Li, Y. Lin, Z. Xu, Y. Tang, X. Chen,
and M. Zhou, “Sign language recognition and
translation with Kinect”, in IEEE International
Conference on Automatic Face and Gesture
Recognition, April 2013.
[8] N. B. Bahadure, S. Verma, S. Juneja, C. Chandra, D. K. Mishra, and P. D. Mahapatra, “Sign language detection with voice extraction in Matlab using Kinect sensor”, International Journal of Computer Science and Information Security, vol. 14, no. 12, pp. 858–863, December 2016.
[9] T. Starner and A. Pentland, “Visual recognition of
American sign language using Hidden Markov
models”, in Proceedings of International workshop
face and gesture recognition, 1995, pp. 189–194.
[10] S. R. Ganapathy, B. Aravind, B. Keerthana, and
M. Sivagami, “Conversation of sign language to
speech with human gestures”, in Elsevier 2nd
International Symposium on Big Data and Cloud
Computing, vol. 50, 2015, pp. 10–15.
[11] M. Boulares and M. Jemni, “3d motion
trajectory analysis approach to improve sign
language 3d-based content recognition”, in Elsevier
Proceedings of International Neural Network Society
Winter Conference, vol. 13, 2012, pp. 133–143.
[12] D. Vinson, R. L. Thompson, R. Skinner, and G.
Vigliocco, “A faster path between meaning and
form? Iconicity facilitates sign recognition and
production in British sign language”, Elsevier Journal
of Memory and Language, vol. 82, pp. 56–85, March
2015.
[13] K. Cormier, A. Schembri, D. Winson, and E.
Orfanidou, “A faster path between meaning and
form? Iconicity facilitates sign recognition and
production in British sign language”, Elsevier Journal
of Cognition, vol. 124, pp. 50–65, May 2012.
[14] H. Anupreethi and S. Vijayakumar, “Msp430
based sign language recognizer for dumb patient”, in
Elsevier International Conference on Modeling
Optimization and Computing, vol. 38, 2012, pp.
1374–1380.
[15] C. Guimaraes, J. F. Guardez, and S. Fernanades,
“Sign language writing acquisition- technology for
writing system”, in IEEE Hawaii International
Conference on System Science, 2014, pp. 120–129.
[16] C. Guimaraes, J. F. Guardezi, S. Fernanades, and
L. E. Oliveira, “Deaf culture and sign language
writing system- a database for a new approach to
writing system recognition technology”, in IEEE
Hawaii International Conference on System Science,
2014, pp. 3368–3377.
[17] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro
- Fuzzy and Soft Computing. Eastern Economy
Edition Prentice Hall of India, 2014.
[18] S. C. Pandey and P. K. Misra, “Modified
memory convergence with fuzzy PSO”, in
Proceedings of the World Congress on Engineering,
vol. 01, 2 - 4 July 2007.
[19] M. S. Chafi, M.-R. Akbarzadeh-T, M.
Moavenian, and M. Ziejewski, “Agent based soft
computing approach for component fault detection
and isolation of CNC x - axis drive system”, in
ASME International Mechanical Engineering
Congress and Exposition, Seattle, Washington, USA,
November 2007, pp. 1–10.
[20] S. C. Pandey and P. K. Misra, “Memory
convergence and optimization with fuzzy PSO and
ACS”, Journal of Computer Science, vol. 04, no. 02,
pp. 139–147, February 2008.
[21] N. B. Bahadure, A. K. Ray, and H. P. Thethi,
“Performance analysis of image segmentation using
watershed algorithm, fuzzy c – means of clustering
algorithm and Simulink design”, in IEEE
International Conference on Computing for
Sustainable Global Development, New Delhi, India,
March 2016, pp. 30–34.
[22] H.-D. Yang, “Sign language recognition with
Kinect sensor based on conditional random fields”,
Sensors Journal, vol. 15, pp. 135–147, December
2014.
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 16, No. 4, April 2018
141 https://sites.google.com/site/ijcsis/
ISSN 1947-5500

More Related Content

What's hot

ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...IJCSEIT Journal
 
IRJET- Automated Detection of Gender from Face Images
IRJET-  	  Automated Detection of Gender from Face ImagesIRJET-  	  Automated Detection of Gender from Face Images
IRJET- Automated Detection of Gender from Face ImagesIRJET Journal
 
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET Journal
 
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm
Analysis of Inertial Sensor Data Using Trajectory Recognition AlgorithmAnalysis of Inertial Sensor Data Using Trajectory Recognition Algorithm
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithmijcisjournal
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsVivek Chamorshikar
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
Image recognition
Image recognitionImage recognition
Image recognitionJoel Jose
 
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITIONA SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITIONIJCIRAS Journal
 
IRJET- Image to Text Conversion using Tesseract
IRJET-  	  Image to Text Conversion using TesseractIRJET-  	  Image to Text Conversion using Tesseract
IRJET- Image to Text Conversion using TesseractIRJET Journal
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabijcses
 
Study on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation MethodStudy on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation Methodijiert bestjournal
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET Journal
 
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...CSCJournals
 
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCECRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCEvivatechijri
 
Measuring memetic algorithm performance on image fingerprints dataset
Measuring memetic algorithm performance on image fingerprints datasetMeasuring memetic algorithm performance on image fingerprints dataset
Measuring memetic algorithm performance on image fingerprints datasetTELKOMNIKA JOURNAL
 

What's hot (20)

ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...
ADAPTABLE FINGERPRINT MINUTIAE EXTRACTION ALGORITHM BASED-ON CROSSING NUMBER ...
 
IRJET- Automated Detection of Gender from Face Images
IRJET-  	  Automated Detection of Gender from Face ImagesIRJET-  	  Automated Detection of Gender from Face Images
IRJET- Automated Detection of Gender from Face Images
 
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
 
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm
Analysis of Inertial Sensor Data Using Trajectory Recognition AlgorithmAnalysis of Inertial Sensor Data Using Trajectory Recognition Algorithm
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind persons
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
Image recognition
Image recognitionImage recognition
Image recognition
 
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITIONA SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION
A SURVEY ON DEEP LEARNING METHOD USED FOR CHARACTER RECOGNITION
 
IRJET- Image to Text Conversion using Tesseract
IRJET-  	  Image to Text Conversion using TesseractIRJET-  	  Image to Text Conversion using Tesseract
IRJET- Image to Text Conversion using Tesseract
 
Be32364369
Be32364369Be32364369
Be32364369
 
Image recognition
Image recognitionImage recognition
Image recognition
 
573 248-259
573 248-259573 248-259
573 248-259
 
Ai applications study
Ai applications  studyAi applications  study
Ai applications study
 
Image recognition
Image recognitionImage recognition
Image recognition
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlab
 
Study on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation MethodStudy on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation Method
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand Gesture
 
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...
 
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCECRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE
CRIMINAL IDENTIFICATION FOR LOW RESOLUTION SURVEILLANCE
 
Measuring memetic algorithm performance on image fingerprints dataset
Measuring memetic algorithm performance on image fingerprints datasetMeasuring memetic algorithm performance on image fingerprints dataset
Measuring memetic algorithm performance on image fingerprints dataset
 

Similar to Kinect Sensor based Indian Sign Language Detection with Voice Extraction

Review on Hand Gesture Recognition
Review on Hand Gesture RecognitionReview on Hand Gesture Recognition
Review on Hand Gesture Recognitiondbpublications
 
Real time human-computer interaction
Real time human-computer interactionReal time human-computer interaction
Real time human-computer interactionijfcstjournal
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysispaperpublications3
 
Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...journalBEEI
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Languageijtsrd
 
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...IJMER
 
System for Fingerprint Image Analysis
System for Fingerprint Image AnalysisSystem for Fingerprint Image Analysis
System for Fingerprint Image Analysisvivatechijri
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language DetectionIRJET Journal
 
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le...
IRJET-  	  Recognition of Theft by Gestures using Kinect Sensor in Machine Le...IRJET-  	  Recognition of Theft by Gestures using Kinect Sensor in Machine Le...
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le...IRJET Journal
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490IJRAT
 
Sign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesSign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesIRJET Journal
 
Gesture recognition using artificial neural network,a technology for identify...
Gesture recognition using artificial neural network,a technology for identify...Gesture recognition using artificial neural network,a technology for identify...
Gesture recognition using artificial neural network,a technology for identify...NidhinRaj Saikripa
 
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardDevelopment of Sign Signal Translation System Based on Altera’s FPGA DE2 Board
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbRSIS International
 
Hand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and PythonHand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and Pythonijtsrd
 

Similar to Kinect Sensor based Indian Sign Language Detection with Voice Extraction (20)

Review on Hand Gesture Recognition
Review on Hand Gesture RecognitionReview on Hand Gesture Recognition
Review on Hand Gesture Recognition
 
HAND GESTURE RECOGNITION FOR HCI (HUMANCOMPUTER INTERACTION) USING ARTIFICIAL...
HAND GESTURE RECOGNITION FOR HCI (HUMANCOMPUTER INTERACTION) USING ARTIFICIAL...HAND GESTURE RECOGNITION FOR HCI (HUMANCOMPUTER INTERACTION) USING ARTIFICIAL...
HAND GESTURE RECOGNITION FOR HCI (HUMANCOMPUTER INTERACTION) USING ARTIFICIAL...
 
Real time human-computer interaction
Real time human-computer interactionReal time human-computer interaction
Real time human-computer interaction
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...
 
gesture-recognition
gesture-recognitiongesture-recognition
gesture-recognition
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Language
 
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...
 
L026070074
L026070074L026070074
L026070074
 
System for Fingerprint Image Analysis
System for Fingerprint Image AnalysisSystem for Fingerprint Image Analysis
System for Fingerprint Image Analysis
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language Detection
 
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le...
IRJET-  	  Recognition of Theft by Gestures using Kinect Sensor in Machine Le...IRJET-  	  Recognition of Theft by Gestures using Kinect Sensor in Machine Le...
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le...
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490
 
Finger tracking
Finger trackingFinger tracking
Finger tracking
 
Paper
PaperPaper
Paper
 
Sign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesSign Language Identification based on Hand Gestures
Sign Language Identification based on Hand Gestures
 
Gesture recognition using artificial neural network,a technology for identify...
Gesture recognition using artificial neural network,a technology for identify...Gesture recognition using artificial neural network,a technology for identify...
Gesture recognition using artificial neural network,a technology for identify...
 
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardDevelopment of Sign Signal Translation System Based on Altera’s FPGA DE2 Board
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and Dumb
 
Hand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and PythonHand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and Python
 

Recently uploaded

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 

Recently uploaded (20)

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 

Kinect Sensor based Indian Sign Language Detection with Voice Extraction

  • 1. Abstract We are progressing towards new discoveries and inventions in the field of science and technology, but unfortunately, very rare inventions could have helped the problems faced by the physically challenged people who face difficulties in communicating with normal people as they use sign language as their prime medium for communication. Mostly, the sign languages are not understood by the common people. Studies say that many research works have been done to eliminate such kind of communication barrier. But those work involves the functioning of Microcontrollers or by some other complicated techniques. Our study advances this process by using the Kinect sensor. Kinect sensor is a highly sensitive motion sensing device with many other applications. Our workflow from capturing of an image of the body to conversion into the skeletal image and from image processing to feature extraction of the detected image hence getting an output along with its meaning and voice. The experimental results of our proposed algorithm are also very promising with an accuracy of 94.5%. Keywords: Hidden Markov Model (HMM), Image Processing, Kinect Sensor, Skeletal Image 1. Introduction Since a very long time, we are experiencing a better life due to the existence of various electronic systems and the sensing elements almost in every field. Physically challenged people find it easier to communicate with each other and common people using different sets of hand gestures and body movements. We hereby provide an aid to very efficiently express themselves in front of common people wherein their sign languages will be automatically converted into text and speeches. Their hand and body gestures will be taken as inputs by the by the sensor, making it easier for them to systems and the sensing elements almost in every field.Physically challenged people find it easier to communicate with each other and common people using different sets of hand gestures and body movements. We hereby provide an aid to very efficiently express themselves in front of common people wherein their sign languages will be automatically converted into text and speeches. Their hand and body gestures will be taken as inputs by the sensor, making it easier for them to understand. This is a machine to human interaction system which includes Kinect sensor and Matlab for processing the data given as input. There have been multitudinous researches done till date, but this paper provides a direct and flexible system for deaf and dumb people. It extracts voice from the human gesture of sign language as well as generates images and texts depending upon the input gestures given to the system. The very first step is to give the input as gesture data to the Kinect sensor, by this it senses the data and a 3-D image are created. This data is then transferred to Matlab where it is interfaced through the programming along with image processing and feature extraction using different segmentations and Hidden Markov Model (HMM) algorithm. From the complete segmented body, only the image of the hand is cropped, the gesture of that hand is then equated with the available image in the database and if they match the speech and text is obtained as output making it easier for the Kinect Sensor based Indian Sign Language Detection with Voice Extraction Shubham Juneja1 , Chhaya Chandra2 , P.D Mahapatra3 , Siddhi Sathe4 , Nilesh B. 
Bahadure5 and Sankalp Verma6 1 Material Science Program, M.Tech first year student, Indian Institute of Technology Kanpur, UttarPradesh, India junejashubh@gmail.com 2 B.E, Electrical and Electronics Engineering, Bhilai Institute of Technology Raipur, Chhattisgarh, India chhaya.chandra02@gmail.com 3 B.E, Electrical and Electronics Engineering, Bhilai Institute of Technology Raipur, Chhattisgarh, India pushpadas27495@gmail.com 4 Department of Electrical & Electronics Engineering, B.E. Final Year Student, Bhilai Institute of Technology Raipur, Chhattisgarh, India sathesiddhi1996@gmail.com 5 Assoc. Professor at Department of Electronics & Telecommunication Engineering, MIT College of Railway Engineering & Research Solapur, Maharashtra, India nbahadure@gmail.com 6 Assoc. Professor at Department of Electrical & Electronics Engineering, Bhilai Institute of Technology Raipur, Chhattisgarh, India sankalpverma99@gmail.com International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018 135 https://sites.google.com/site/ijcsis/ ISSN 1947-5500
  • 2. common people to understand it. By this, the disabled people will be confident enough to express their views anywhere and everywhere despite physically challenged. 2. Literature Review The Sign Language detection is considered an efficient way by which physically challenged people can communicate. Many researchers have studied and investigations are done on different algorithms to make the process easier. Gunasekaran and Manikandan [1] have worked on a technique using PIC Microcontroller for detection of sign languages. The authors stated that their method is better as it solves the real time problems faced by the disabled ones. Their work involves extraction of voice as soon as any sign language is detected. Kiratey Patil et al. [2] worked on detection of American Sign Language. Their work is based on accessing American Sign Language and converting into English and the output flashes on LCD. This way their work may omit the communication gap between common and disabled ones. Tavari et al. [3] worked on recognition of Indian Sign languages by hand gestures. They proposed an idea of recognizing images formed by different hand movement gestures. They proposed an idea of recognizing images formed by different hand movement gestures. They used a web camera in their work. For identifying signs and translation of text to voice, Artificial Neural Network has been used. Simon Lang [4] worked on Sign Language detection. He proposed a system that uses Kinect sensor instead of a web camera. Out of nine signs performed by many people, 97% detection rate have been seen for eight signs. The important body parts are sensed by Kinect sensor easily and Markov Model is used continuously for detection. Sign Writing system proposed by Cayley et al. [5] deals with procuring in helping deaf people by using stylus and screen contraption for the written literacy in Sign Language. They have provided databases for enhancing the studies in another paper so that the sequence of the characters can be stored and retrieved in order to signify the sign language and then the editing could be done. In order to enhance their work on sensing algorithm, they are further researching on it. According to Singha and Das [6], several Indian sign languages have been acknowledged by the process of skin filtering, hand cropping feature extraction and classification by making use of Eigen value weighted Euclidean distance. Hence out of 26 alphabets, only dynamic alphabets ‘H & J’ were not taken into account & they will be considered in their future studies. According to Xiujuan Chaivfgtxtgf.,njoh et al. [7] for hard and body tracking 3-D motion by using Kinect Sensor is more effective and clear. This makes sign language detection easier. According to our earlier work [8], the efficiency was 92% but now our efficiency has increased to 94.5%. We have used very simple algorithm here rather than using FCM. From the above studies, it has been observed that few methods are only proposed for hand gestures recognition and few are only for feature extraction. Also from the above done survey, it is understood that no precise idea about feature extraction in easiest way is mentioned. But this problem is solved in our study. We have proposed an algorithm and used HMM technique also. Gestures are identified easily, that information is then matched with our preset databases and voice is extracted. This process enables the common people to understand the sign language easily. 3. 
Methodology We have proposed an algorithm which follows the following steps: 1. After the detection of the body in front of the Microsoft X-box Kinect Sensor as shown in Fig.1, it locates the joints of the body by pointing it out and hence we get a skeletal image. Fig. 1 Kinect Sensor 2. Then the segmented image of the body is formed from the skeletal image. The area of the hands where the signs are captured are cropped out of the whole segmented image. Then the cropped image is converted into dots and dashes. The length of the dash is 4 unit and the spacing between the two lines of dashes is also 4 units. 3. Through observation, we found that wherever the length of the dashes is greater than 4 units it resembles the image of the cropped hand. International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018 136 https://sites.google.com/site/ijcsis/ ISSN 1947-5500
  • 3. 4. In our proposed algorithm, we have taken the concept of loops, it is used to detect the black points that are the space between the dashes. This detection of black points determines the position of dashes by successive subtraction of points in the iterations which is going on and on. 5. The basic algorithm behind this work is that after successive subtraction of points if the value is equal to 4 then there is no data and if the value is greater than 4 then there is the actual image of the cropped hand. 6. Now, here arises a problem that how we detect the location of fingers. So the algorithm behind this is based on the formation of matrices of the black points detected earlier. In an iteration, the matrix coordinates of the finger are four times the number of lines of dashes. 7. To highlight the fingers, we plotted star point of the same coordinates as of the fingers and for more precise feature extraction the image processing is followed by filtration of the image that means the star points plotted on the figures go through following conditions. • If they fall on a straight line either horizontal or vertical. • If they fall on a constant slope. • If they fall within 4-unit coordinate difference. 8. Then it eliminates the identical points which we call as garbage point. Now we get the filtered image but the process of feature extraction continues for identification of fingers that whether it is an index finger, middle finger, ring finger, little finger or thumb. Following is the Table 1 which shows the range of coordinates in which the fingers are detected. Table 1: Range of coordinates of fingers S No. Finger Range of coordinates 1 Index 80-88 2 Middle 60-68 3 Ring 44-52 4 Little 32-40 5 Thumb 104-112 The flow chart of the above algorithm is shown in following Fig.2: Fig. 2 Flow chart of the proposed algorithm 4. Hidden Markov Model HMM [5], [9] is the algorithm which says that the actual stages of the work continued in the system is not visible the final output after the whole processing is only visible. HMM works on probability and it uses a hidden variable of any input data and select them for various observations and then process all those variables through Markov process. HMM undergoes four stage process: • Filtering: This state involves the computation which takes place during the hidden process of the given statistical parameters. • Smoothing: This state does the same work as the filtering process but works in between the sequence wherever needed. • Most likely Explanation: This state is different from the above two states. It is generally used whenever HMM is exposed to a different number of problems and to find overall maximum possible state sequences. • Statistical Significance: This state of HMM is used to obtain statistical data and evaluate the data of the possible outcome. 5. Result Finally, after the detection of the whole image of fingers or we can say that a complete hand ANDing operation continues in Matlab for the final output. International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 4, April 2018 137 https://sites.google.com/site/ijcsis/ ISSN 1947-5500
4. Hidden Markov Model

In an HMM [5], [9], the actual stages that the system passes through are not visible; only the final output after the whole processing can be observed. An HMM is probabilistic: it attaches a hidden state variable to the input data, relates the hidden states to the various observations, and models the evolution of the states as a Markov process. An HMM supports a four-stage process:

• Filtering: computing the distribution of the hidden state at the current time from the observations received so far.
• Smoothing: the same computation as filtering, but for a state somewhere within the observation sequence, wherever it is needed.
• Most likely explanation: different from the two stages above; it is used when the HMM is applied to a recognition problem and the overall most probable sequence of hidden states must be found.
• Statistical significance: evaluating the statistical probability of the possible outcomes under the model.

A minimal example of the most-likely-explanation stage is sketched below.
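The paper does not report its HMM parameters, so the following fragment only illustrates the most-likely-explanation stage with hmmviterbi from Matlab's Statistics and Machine Learning Toolbox; the two-state model, the three observation symbols, and all probabilities are made up for the example.

% Illustration of the "most likely explanation" stage. All numbers below
% are hypothetical; the paper's actual model parameters are not given.
trans = [0.9 0.1;                 % state-transition probabilities
         0.2 0.8];
emis  = [0.7 0.2 0.1;             % emission probabilities, 3 symbols
         0.1 0.3 0.6];

seq = [1 1 2 3 3 3 2 1];                 % observed symbol indices (made up)
states = hmmviterbi(seq, trans, emis);   % most probable hidden-state path
disp(states);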
5. Result

Finally, after the detection of the whole image of the fingers, that is, the complete hand, an ANDing operation is performed in Matlab to obtain the final output. The detected sign image is searched in the database for its meaning; as soon as a match is found, the search is complete and we get the final output, together with an image of the meaning of the sign and its voice.

As an example, if the number 4 is to be detected by the Kinect sensor, the person gestures 4 using his or her hand. The Kinect captures the skeletal image of the body, as shown in Fig. 3.

Fig. 3 Skeletal image

After the skeletal image, the image is converted into a depth image, as shown in Fig. 4.

Fig. 4 Depth image

The image is then converted into its segmented image, as shown in Fig. 5.

Fig. 5 Segmented image

The image of the hand is cropped from the segmented image, as shown in Fig. 6.

Fig. 6 Cropped image

The cropped image is then converted into a figure with dots and dashes, as shown in Fig. 7.

Fig. 7 Image with dots and dashes

The filtration of the figure after star marking, which detects the fingers, is done in three steps, as shown in Fig. 8.

Fig. 8 Filtration of figure after star marking

The detailed information on the detected fingers is displayed in the command window, as shown in Fig. 9.

Fig. 9 Detailed information of fingers detected

After the filtration is done, the database is searched for a match, and we get an output in the form of an image, as shown in Fig. 10(a), and a voice, plotted in the form of a histogram, as shown in Fig. 10(b).

Fig. 10(a) Output in the form of image
Fig. 10(b) Voice in the form of histogram

The overall outputs for the various inputs given to the system are shown in Fig. 11.

Fig. 11 [a] Cropped images, [b] Image of the meaning of signs, [c] Histogram image of voice output

A sketch of the database-matching step described at the beginning of this section is given below.
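The ANDing-based database search is not spelled out in the paper, so the following fragment is only our guess at one way to realize it; the variable signBW, the folder name, and the assumption that all templates are binarized images of the same size as the query are hypothetical.

% Hypothetical sketch of the database search (not the authors' code).
% signBW is assumed to be the final binarized sign image; the database is
% assumed to be a folder of same-sized binarized templates, '0.png'-'9.png'.
bestScore = -Inf;
bestLabel = '';
templates = dir(fullfile('database', '*.png'));

for t = 1:numel(templates)
    tmpl = imread(fullfile('database', templates(t).name)) > 128;
    overlap = nnz(signBW & tmpl);        % the ANDing operation
    if overlap > bestScore
        bestScore = overlap;
        [~, bestLabel] = fileparts(templates(t).name);
    end
end

fprintf('Detected sign: %s\n', bestLabel);
% The matched meaning image would then be shown (imshow) and the stored
% voice clip played back (sound), as in Fig. 10(a) and Fig. 10(b).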
A detailed analysis of the various outputs with respect to the given inputs is tabulated in Table 2.

Table 2: Detailed analysis (total no. of attempts per sign = 50)

S No.   No. of correct attempts   No. of wrong attempts   Accuracy (%)
1       50                        0                       100
2       50                        0                       100
3       49                        1                       96
4       48                        2                       92
5       49                        1                       96
6       49                        1                       96
7       47                        3                       88
8       47                        3                       88

Accuracy = (No. of correct attempts - No. of wrong attempts) / Total no. of attempts

Over all eight signs this amounts to 8 x 50 = 400 attempts, of which 389 were correct and 11 wrong, so the total accuracy is (389 - 11)/400 = 94.5%.

6. Conclusions and Future work

With reference to all the earlier studies, our work provides better accessibility with a simpler algorithm and more precise output. Since the programming is done for detection of the left hand, the coordinates are chosen accordingly. Our algorithm gives all the relevant information about the coordinates of each detected finger. The system is flexible and user-friendly: the user can be of any age, gender, size, or color, and the results will be the same. However, the intensity of the light and the distance of the body from the sensor affect the efficiency. For the device to work effectively, we suggest keeping the Kinect sensor at a height of about 62 cm from the ground, with the body to be detected at a distance of about 90 cm. In our earlier work [8], the exact locations of the fingers and the detailed information about them were not determined; we have overcome these problems here. A few of the star points marked earlier still remain after filtration, and misinterpretation of the detected fingers sometimes occurs, but its probability is about one in ten.

Acknowledgments

We would like to extend our gratitude and sincere thanks to all the people who have taken a great deal of interest in our work and helped us with valuable suggestions.

References

[1] K. Gunasekaran and R. Manikandan, “Sign language to speech translation system using PIC microcontroller”, International Journal of Engineering and Technology, vol. 05, no. 02, pp. 1024–1028, April 2013.
[2] K. Patil, G. Pendharkar, and G. N. Gaikwad, “American sign language detection”, International Journal of Scientific and Research Publications, vol. 04, pp. 01–06, November 2014.
[3] N. V. Tavari, A. V. Deorankar, and P. N. Chatur, “Hand gesture recognition of Indian sign language to aid physically impaired people”, International Journal of Engineering Research and Applications, pp. 60–66, April 2014.
[4] S. Lang, “Sign language recognition with Kinect”, Master’s thesis, Freie Universität Berlin, 2011.
[5] C.-Y. Kao and C.-S. Fahn, “A human-machine interaction technique: hand gesture recognition based on Hidden Markov Models with trajectory of hand motion”, Elsevier Advanced in Control Engineering and Information Science, vol. 15, pp. 3739–3743, 2011.
[6] J. Singha and K. Das, “Indian sign language recognition using Eigen-value weighted Euclidean distance based classification technique”, International Journal of Advanced Computer Science and Applications, vol. 04, no. 02, pp. 188–195, 2013.
[7] X. Chai, G. Li, Y. Lin, Z. Xu, Y. Tang, X. Chen, and M. Zhou, “Sign language recognition and translation with Kinect”, in IEEE International Conference on Automatic Face and Gesture Recognition, April 2013.
[8] N. B. Bahadure, S. Verma, S. Juneja, C. Chandra, D. K. Mishra, and P. D. Mahapatra, “Sign language detection with voice extraction in Matlab using Kinect sensor”, International Journal of Computer Science and Information Security, vol. 14, no. 12, pp. 858–863, December 2016.
[9] T. Starner and A. Pentland, “Visual recognition of American sign language using Hidden Markov models”, in Proceedings of the International Workshop on Face and Gesture Recognition, 1995, pp. 189–194.
[10] S. R. Ganapathy, B. Aravind, B. Keerthana, and M. Sivagami, “Conversation of sign language to speech with human gestures”, in Elsevier 2nd International Symposium on Big Data and Cloud Computing, vol. 50, 2015, pp. 10–15.
[11] M. Boulares and M. Jemni, “3D motion trajectory analysis approach to improve sign language 3D-based content recognition”, in Elsevier Proceedings of the International Neural Network Society Winter Conference, vol. 13, 2012, pp. 133–143.
[12] D. Vinson, R. L. Thompson, R. Skinner, and G. Vigliocco, “A faster path between meaning and form? Iconicity facilitates sign recognition and production in British sign language”, Elsevier Journal of Memory and Language, vol. 82, pp. 56–85, March 2015.
[13] K. Cormier, A. Schembri, D. Winson, and E. Orfanidou, “A faster path between meaning and form? Iconicity facilitates sign recognition and production in British sign language”, Elsevier Journal of Cognition, vol. 124, pp. 50–65, May 2012.
[14] H. Anupreethi and S. Vijayakumar, “MSP430 based sign language recognizer for dumb patient”, in Elsevier International Conference on Modeling Optimization and Computing, vol. 38, 2012, pp. 1374–1380.
[15] C. Guimaraes, J. F. Guardez, and S. Fernanades, “Sign language writing acquisition – technology for writing system”, in IEEE Hawaii International Conference on System Science, 2014, pp. 120–129.
[16] C. Guimaraes, J. F. Guardezi, S. Fernanades, and L. E. Oliveira, “Deaf culture and sign language writing system – a database for a new approach to writing system recognition technology”, in IEEE Hawaii International Conference on System Science, 2014, pp. 3368–3377.
[17] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing. Eastern Economy Edition, Prentice Hall of India, 2014.
[18] S. C. Pandey and P. K. Misra, “Modified memory convergence with fuzzy PSO”, in Proceedings of the World Congress on Engineering, vol. 01, 2–4 July 2007.
[19] M. S. Chafi, M.-R. Akbarzadeh-T, M. Moavenian, and M. Ziejewski, “Agent based soft computing approach for component fault detection and isolation of CNC x-axis drive system”, in ASME International Mechanical Engineering Congress and Exposition, Seattle, Washington, USA, November 2007, pp. 1–10.
[20] S. C. Pandey and P. K. Misra, “Memory convergence and optimization with fuzzy PSO and ACS”, Journal of Computer Science, vol. 04, no. 02, pp. 139–147, February 2008.
[21] N. B. Bahadure, A. K. Ray, and H. P. Thethi, “Performance analysis of image segmentation using watershed algorithm, fuzzy c-means clustering algorithm and Simulink design”, in IEEE International Conference on Computing for Sustainable Global Development, New Delhi, India, March 2016, pp. 30–34.
[22] H.-D. Yang, “Sign language recognition with Kinect sensor based on conditional random fields”, Sensors Journal, vol. 15, pp. 135–147, December 2014.