International Journal of Engineering and Techniques - Volume 1 Issue 5, September-October 2015
ISSN: 2395-1303 http://www.ijetjournal.org Page 55
Image Based Sign Language Recognition on Android
Prutha Gandhi¹, Dhanashri Dalvi², Pallavi Gaikwad³, Shubham Khode⁴
¹,²,³,⁴ (Computer Department, Modern Education Society’s College of Engineering, Pune)
Abstract:
A mediator is required for communication between a deaf person and a hearing person, and the mediator must know the sign language used by the deaf person. This is not always possible, since there are multiple sign languages for different spoken languages. It is also difficult for a deaf person to understand what a hearing person speaks; the deaf person must track the speaker's lip movements, but lip reading does not give good efficiency and accuracy, since facial expressions and speech might not match. To overcome these problems we have proposed a system, an Android application for recognizing sign language from hand gestures, with a facility for the user to define and upload their own sign language into the system. The features of this system are real-time conversion of gestures to text and speech. For two-way communication between a deaf person and a second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is useful not only for the deaf community but also for common people who migrate to different regions and do not know the local language.
Keywords — Gesture extraction and detection, pattern matching, text generation, text-to-speech and speech-to-text conversion.

I. INTRODUCTION
A deaf person is very much dependent on sign language to communicate with other people, so anyone interacting with a deaf person needs to know sign language in order to understand and communicate effectively. Since many people are not familiar with sign language, it is very difficult for a deaf person to interact with society.
Previously implemented systems had a predefined database with a limited scope. We are therefore providing an application that allows users to define their own database, i.e. to define and upload their own sign language into the system. This feature will help deaf people communicate with other people from different countries or regions. Our application includes these phases:
• Digitization and image capture
• Compression (coding)
• Segmentation
o Edge and feature detection
• Scene analysis
o Colour
o Motion
o Object recognition
• Pattern matching
• Text generation
• Text-to-speech conversion
• Speech-to-text conversion

II. RELATED WORK
In [1] M. Mohandes, M. Deriche, and J. Liu designed an ArSLR (Arabic Sign Language Recognition) system for alphabet recognition, isolated word recognition, and continuous sign recognition using image-based and sensor-based approaches. The image-based approach mostly depends on coloured gloves and knuckles; it works efficiently for determining geometric features and body/facial expressions. The sensor-based approach utilizes glove devices (power, cyber, and data gloves), mostly statistical features, and 3D position information.
In [2] Ravikiran J, Kavi Mahesh, Suhas Mahishi,
Dheeraj R, Sudheender S, Nitin V Pujari, designed
a highly accurate image processing algorithm for
recognizing American Sign Language. Their implementation does not require the use of gloves or markers. The implemented system
detects the number of open fingers using the
concept of boundary tracing which also combines
finger-tip detection.
In [3] Son Lam Phung, Abdesselam Bouzerdoum
and Douglas Chai have analysed three important
issues related to pixel wise skin segmentation:
colour representation, colour quantization, and
classification algorithm. They found that the
Bayesian classifier with the histogram technique
and the multilayer perceptron classifier have higher
classification rates compared to other tested
classifiers.
In [4] Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R designed an application for deaf and dumb persons. The application integrated already existing methods. Its processing includes gesture extraction, gesture matching, and conversion to speech. They used histogram matching, bounding box computation, skin colour segmentation, and region growing for gesture extraction; for gesture matching they used feature point matching and correlation-based matching techniques. Integrating these methods, their paper provides the following four approaches:
Approach A: Skin colour segmentation with
Feature point matching using SIFT
Approach B: Region Growing with Feature point
matching using SIFT
Approach C: Skin colour segmentation with
Correlation matching
Approach D: Region Growing with Correlation
matching
The application also includes gesture to text
conversion.
In [5] William T. Freeman and Michal Roth proposed the orientation histogram as a feature vector for gesture classification and interpolation. This method is simple and fast to compute, and is robust to changes in scene illumination. They distinguish two categories of gestures: static and dynamic. A static gesture is a particular hand configuration and pose represented by a single image; a moving gesture, represented by a sequence of images, is a dynamic gesture. Their pattern recognition system converts an image or a sequence of images into a feature vector, which is then compared with the feature vectors of a training set of gestures, using a Euclidean distance metric and a video digitizer. In the run phase, the computer compares the feature vector of the present image with those in the training set and picks the category of the nearest vector, or interpolates between vectors. The methods are image-based, simple, and fast.
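As an illustration of the approach in [5], the following sketch computes a normalized orientation histogram and classifies a gesture by nearest Euclidean distance. The bin count and function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def orientation_histogram(image, bins=36):
    # Gradient orientation at each pixel, weighted by gradient magnitude.
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx)          # orientations in [-pi, pi]
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    # Normalizing makes the feature robust to overall illumination changes.
    return hist / total if total > 0 else hist

def classify(feature, training_set):
    # Run phase: pick the category of the nearest training vector
    # under the Euclidean distance metric.
    labels = list(training_set)
    dists = [np.linalg.norm(feature - training_set[label]) for label in labels]
    return labels[int(np.argmin(dists))]
```

A horizontal intensity ramp and its transpose, for example, produce clearly different histograms and are separated correctly by the nearest-neighbour rule.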
In [6] V. Nayakwadi and N. B. Pokale provide a survey of recent gesture recognition approaches, with particular emphasis on hand gestures. Static hand posture methods are reviewed along with the different tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering.
Vision-based approaches: In vision-based methods the system requires only cameras to capture the images needed for natural interaction between human and machine. These approaches are simple, but many challenges arise, such as complex backgrounds, lighting variation, and other skin-coloured objects near the hand.
Instrumented glove approaches: Marked gloves or coloured markers are gloves worn on the human hand with some colours to direct the process of tracking the hand and locating the palm and fingers, which provide the ability to extract the geometric features necessary to form the hand shape.
Gesture recognition techniques: Most researchers use an ANN as a classifier in the gesture recognition process.
Histogram-based features: A method for recognizing gestures based on pattern recognition using orientation histograms.
Fuzzy clustering algorithm: In fuzzy clustering, the partitioning of sample data into groups in a fuzzy way is the main difference between fuzzy clustering and other clustering algorithms, in that a single data pattern might belong to different data groups.
Hidden Markov Model (HMM): An HMM is a stochastic process with a finite number of states of a Markov chain and a number of random functions, so that each state has a random function. The HMM topology is represented by one initial state, a set of output symbols, and a set of state transitions. HMMs involve rich mathematical structure and have proved efficient for modelling spatiotemporal information; sign language recognition is one of their most common applications.
In [7] Massimo Piccardi reviews background subtraction, a widely used approach for detecting moving objects from static cameras. The paper provides a review of the main methods and an original categorisation based on speed, memory requirements, and accuracy. The methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches. Several methods for performing background subtraction have been proposed; all of them try to effectively estimate the background model from the temporal sequence of frames. The methods reviewed are: running Gaussian average, temporal median filter, mixture of Gaussians, kernel density estimation (KDE), sequential KD approximation, co-occurrence of image variations, and eigen-backgrounds.
III. RESEARCH WORK
To extract and process an image, an acquisition process is performed first. Generally, image acquisition involves pre-processing such as scaling. A scale space representation can be used: a scale space is a representation of an image at multiple resolution levels. A Difference of Gaussians can then be applied to the image. Difference of Gaussians is a feature enhancement algorithm
that involves the subtraction of one blurred version
of an original image from another, less blurred
version of the original. In the simple case of
grayscale images, the blurred images are obtained
by convolving the original grayscale images with
Gaussian kernels having differing standard
deviations. Blurring an image using a Gaussian
kernel suppresses only high-frequency spatial
information. Subtracting one image from the other
preserves spatial information that lies between the
range of frequencies that are preserved in the two
blurred images. Thus, the difference of Gaussians is
a band-pass filter that discards all but a handful of
spatial frequencies that are present in the original
grayscale image. There are many ways to handle image translation; one way is template matching, which compares the intensities of pixels using the SAD (sum of absolute differences) measure. A pixel in the search image with coordinates (xs, ys) has intensity Is(xs, ys), and a pixel in the template with coordinates (xt, yt) has intensity It(xt, yt). The absolute difference in the pixel intensities is then defined as
Diff(xs, ys, xt, yt) = | Is(xs, ys) − It(xt, yt) |.
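The SAD comparison can be sketched as a brute-force scan over all template positions; this is a minimal illustration, and the function names are ours rather than from any cited work.

```python
import numpy as np

def sad(search_patch, template):
    # Sum of absolute differences between a search patch and the template.
    return int(np.abs(search_patch.astype(int) - template.astype(int)).sum())

def match_template(search, template):
    # Slide the template over every valid position and keep the lowest SAD.
    th, tw = template.shape
    sh, sw = search.shape
    best_score, best_pos = None, None
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            score = sad(search[y:y + th, x:x + tw], template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Cutting a patch out of an image and matching it back should return the original position with a SAD of zero.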
Image processing also includes image segmentation. Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process towards a successful solution of imaging problems that require objects to be identified individually. Then a representation and description of the image is produced. A knowledge base is used to store information about an image that can later be utilized for object recognition.
The algorithms for image extraction and detection include background subtraction and blob detection.
Blob detection is an algorithm used to determine whether a group of connected pixels are related to each other. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene.
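A minimal sketch of this idea labels 4-connected groups of foreground pixels with a breadth-first flood fill; the choice of 4-connectivity is an assumption on our part.

```python
import numpy as np
from collections import deque

def count_blobs(mask):
    # Count 4-connected groups of True pixels via BFS flood fill.
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    blobs = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                blobs += 1                      # found a new blob
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:                    # flood-fill the whole blob
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return blobs
```

Two disjoint foreground squares in a binary mask are counted as two separate blobs.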
Background subtraction is a technique in the fields of image processing and computer vision wherein an image's foreground is extracted for further processing (object recognition, etc.).
Background subtraction can be a powerful ally when it comes to segmenting objects in a scene. The method, however, has some built-in limitations that are exposed especially when processing video of outdoor scenes. First of all, the method requires
the background to be empty when learning the
background model for each pixel. This can be a
challenge in a natural scene where moving objects
may always be present. One solution is to median filter all training samples for each pixel. This will eliminate pixels where an object is moving through the scene, and the resulting model of the pixel will be a true background pixel. An extension
is to first order all training pixels (as done in the
median filter) and then calculate the average of the
pixels closest to the median. This will provide both
a mean and variance per pixel. Such approaches
assume that each pixel is covered by objects less
than half the time in the training period.
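The median-filtering idea above can be sketched per pixel as follows: sort each pixel's training samples, then average the samples closest to the median to obtain a mean and variance per pixel. The fraction of samples kept around the median is an illustrative parameter.

```python
import numpy as np

def train_background(frames, keep_fraction=0.5):
    # frames: array of shape (n_frames, h, w).
    # Order each pixel's samples over time, keep the central fraction
    # around the per-pixel median, and return its mean and variance.
    stack = np.sort(np.asarray(frames, dtype=float), axis=0)
    n = stack.shape[0]
    k = max(1, int(n * keep_fraction))
    lo = (n - k) // 2
    central = stack[lo:lo + k]           # samples around the median
    return central.mean(axis=0), central.var(axis=0)
```

With mostly static training frames, a pixel briefly covered by a moving object still receives the true background value, because the outlier samples fall outside the central window.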
Another problem is that when processing outdoor
video, a pixel may cover more than one background. This will result in poor segmentation of pixels during background subtraction. Another problem in
outdoor video is shadows due to strong sunlight.
Such shadow pixels can easily appear different
from the learnt background model and hence be
incorrectly classified as object pixels.
IV. ALGORITHMS
A. Background Subtraction
Background subtraction, also known as foreground detection [7], is a technique in the fields of image processing and computer vision wherein an image's foreground is extracted for further processing (object recognition, etc.). Generally, an image's regions of interest are objects (humans, cars, text, etc.) in its foreground. After the stage of image pre-processing (which may include image denoising, post-processing such as morphology, etc.), object localization is required, which may make use of this technique. Background
subtraction is a widely used approach for detecting
moving objects in videos from static cameras. The
rationale in the approach is that of detecting the
moving objects from the difference between the
current frame and a reference frame, often called
“background image”, or “background model”.
Background subtraction is mostly done if the image
in question is a part of a video stream. Background
subtraction provides important cues for numerous
applications in computer vision. It includes the
following steps:
Step 1: Motion detection is done by using segmentation, where the moving objects are segmented from the background; i.e., take an image as the background and take the frames obtained at time t, denoted by I(t), to compare with the background image, denoted by B.
Step 2: We can segment out the objects simply by using the image subtraction technique of computer vision: for each pixel in I(t), take the pixel value denoted by P[I(t)] and subtract the corresponding pixel at the same position in the background image, denoted by P[B]. As a mathematical equation, it is written as:
P[F(t)] = P[I(t)] − P[B]
Step 3: The background is assumed to be the frame at time t. This difference image shows some intensity at the pixel locations which have changed between the two frames. This approach will only work for cases where all foreground pixels are moving and all background pixels are static.
Step 4: A threshold is put on this difference image to improve the subtraction:
|P[F(t)] − P[F(t+1)]| > Threshold
This means that the difference image's pixel intensities are 'thresholded', or filtered, on the basis of the value of Threshold. The accuracy of this approach depends on the speed of movement in the scene; faster movements may require higher thresholds. Threshold: the simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity Ii,j is less than some fixed constant T (that is, Ii,j < T), or a white pixel if the image intensity is greater than that constant. For example, thresholding an image of a dark tree against snow renders the tree completely black and the snow completely white.
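The subtraction and thresholding steps above can be condensed into a single sketch; the threshold value here is illustrative.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    # P[F(t)] = P[I(t)] - P[B]: per-pixel difference from the background,
    # then threshold the absolute difference to get a binary foreground mask.
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold      # True where a pixel changed enough
```

Against an all-zero background, only the pixels covered by a bright moving object exceed the threshold and are marked as foreground.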
Multiband thresholding: Colour images can also be
thresholded. One approach is to designate a
separate threshold for each of the RGB components
of the image and then combine them with an AND
operation. This reflects the way the camera works
and how the data is stored in the computer, but it does not correspond to the way that people recognize color. Therefore, the HSL and HSV color models are more often used; since hue is a circular quantity, it requires circular thresholding.
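A sketch of multiband thresholding on the RGB components, combined with an AND operation as described; the threshold values are illustrative, not tuned for any real skin or gesture data.

```python
import numpy as np

def multiband_threshold(rgb, thresholds=(100, 60, 60)):
    # Threshold each RGB channel separately, then AND the binary results.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    tr, tg, tb = thresholds
    return (r > tr) & (g > tg) & (b > tb)
```

A pixel passes only if every channel clears its own threshold, which is exactly the AND combination the text describes.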
B. Blob Detection Algorithm
Blob detection is an algorithm used to determine if a group of connected pixels are related to each other [8]. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene. In computer vision, blob detection methods aim to detect regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. A blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered similar to each other. To find colored blobs, the color image should be converted from RGB to HSV format so that the colors are easier to separate. Images taken by the camera at different times are compared to find corresponding displacements or changes. A Gaussian filter is applied at different scales; this is done by repeatedly filtering with the same Gaussian. Blob detectors are often based on the Laplacian of the Gaussian (LoG). Given an input image f(x, y), the image is convolved with a Gaussian kernel at a certain scale t to give a scale space representation L(x, y; t) = g(x, y, t) * f(x, y). Then the result of applying the Laplacian operator is computed, which usually gives strong positive responses for dark blobs of extent √2 and strong negative responses for bright blobs of similar size. To automatically capture blobs of different (unknown) size in the image domain, a multi-scale approach is therefore necessary: subtract the image filtered at one scale from the image filtered at the previous scale. Then template matching is performed.
A basic method of template matching uses a convolution mask (template), tailored to a specific feature of the search image, which we want to detect. The convolution output will be highest at places where the image structure matches the mask structure, where large image values get multiplied by large mask values.
Implementation: 1. Pick a part of the search image to use as a template. Let the search image be S(x, y), where (x, y) represent the coordinates of each pixel in the search image, and let the template be T(xt, yt), where (xt, yt) represent the coordinates of each pixel in the template. 2. Move the center (or the origin) of the template T(xt, yt) over each (x, y) point in the search image and calculate the sum of products between the coefficients in S(x, y) and T(xt, yt) over the whole area spanned by the template. As all possible positions of the template with respect to the search image are considered, the position with the highest score is the best position. This method is also referred to as 'Linear Spatial Filtering', and the template is called a filter mask.
V. CONCLUSION
Sign languages are one of the main communication methods used by deaf people, but contrary to common belief, there is no universal sign language: every country or even regional group uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on images, videos and sensor input. The latter is the ultimate goal of this research, but sign language is not a simple spelling of spoken language, so recognizing isolated signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. The system will provide output in the form of text equivalent to the recognized sign language hand configuration. This system will ease and encourage the interaction of common people with the handicapped people, since the common people would no longer be required to learn the various sign languages in order to
communicate with them. This system could be applied to various tasks, commercial or non-commercial, that involve handicapped people. Handicapped people can benefit from this system in their day-to-day life whenever they need to easily convey a message through their sign language to common people.
ACKNOWLEDGMENT
It gives us great pleasure in presenting the
preliminary project report on ‘IMAGE BASED
SIGN LANGUAGE RECOGNITION ON
ANDROID’. We would like to take this opportunity
to thank our internal guide Prof. S. S. Raskar for
giving us all the help and guidance we needed. We
are really grateful to them for their kind support.
Their valuable suggestions were very helpful. We
are also grateful to Prof. N. F. Shaikh, Head of the Computer Engineering Department, Modern Education Society's College of Engineering, for her indispensable support and suggestions.
REFERENCES
1. M. Mohandes, M. Deriche, and J. Liu, “Image-Based and Sensor-Based Approaches to Arabic Sign Language Recognition”, IEEE Transactions on Human-Machine Systems, Vol. 44, No. 4, August 2014.
2. Ravikiran J, Kavi Mahesh, Suhas Mahishi, Dheeraj R, Sudheender S, and Nitin V Pujari, “Finger Detection for Sign Language Recognition”, Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009, Hong Kong.
3. Son Lam Phung, Abdesselam Bouzerdoum, and Douglas Chai, “Skin Segmentation Using Color Pixel Classification: Analysis and Comparison”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, January 2005.
4. Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R, “SignPro - An Application Suite for Deaf and Dumb”, IJCSET, Vol. 2, Issue 5, pp. 1203-1206, May 2012.
5. William T. Freeman and Michal Roth, “Orientation Histograms for Hand Gesture Recognition”, IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, June 1995.
6. V. Nayakwadi and N. B. Pokale, “Natural Hand Gestures Recognition System for Intelligent HCI”, International Journal of Computer Applications Technology and Research, 2013.
7. Massimo Piccardi, “Background subtraction techniques: a review”, IEEE International Conference on Systems, Man and Cybernetics, 2004.
8. Anne Kaspers, “Blob Detection”.
Hand Written Character Recognition Using Neural Networks Hand Written Character Recognition Using Neural Networks
Hand Written Character Recognition Using Neural Networks
 
Artificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari CharacterArtificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari Character
 
Automatic Isolated word sign language recognition
Automatic Isolated word sign language recognitionAutomatic Isolated word sign language recognition
Automatic Isolated word sign language recognition
 
IRJET- Sign Language Interpreter using Image Processing and Machine Learning
IRJET- Sign Language Interpreter using Image Processing and Machine LearningIRJET- Sign Language Interpreter using Image Processing and Machine Learning
IRJET- Sign Language Interpreter using Image Processing and Machine Learning
 
Iaetsd appearance based american sign language recognition
Iaetsd appearance based  american sign language recognitionIaetsd appearance based  american sign language recognition
Iaetsd appearance based american sign language recognition
 

Viewers also liked

PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOS
PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOSPLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOS
PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOSfreguesiademarinhais
 
Современный маркетинг / Modern marketing
Современный маркетинг /  Modern marketingСовременный маркетинг /  Modern marketing
Современный маркетинг / Modern marketingCyril Korolev
 
Estructura grafos y digrafos
Estructura grafos y digrafosEstructura grafos y digrafos
Estructura grafos y digrafosdobleafp7
 
Market research task 2d finished x
Market research task 2d finished xMarket research task 2d finished x
Market research task 2d finished xkeeleyb01
 

Viewers also liked (20)

Trendbook rtw 13 14 материалы
Trendbook rtw 13 14 материалыTrendbook rtw 13 14 материалы
Trendbook rtw 13 14 материалы
 
PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOS
PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOSPLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOS
PLENÁRIO DE UTENTES DA SAÚDE DO MUNICÍPIO DE SALVATERRA DE MAGOS
 
Современный маркетинг / Modern marketing
Современный маркетинг /  Modern marketingСовременный маркетинг /  Modern marketing
Современный маркетинг / Modern marketing
 
Estructura grafos y digrafos
Estructura grafos y digrafosEstructura grafos y digrafos
Estructura grafos y digrafos
 
Market research task 2d finished x
Market research task 2d finished xMarket research task 2d finished x
Market research task 2d finished x
 
[IJET-V2I3P19] Authors: Priyanka Sharma
[IJET-V2I3P19] Authors: Priyanka Sharma[IJET-V2I3P19] Authors: Priyanka Sharma
[IJET-V2I3P19] Authors: Priyanka Sharma
 
[IJCT-V3I2P25] Authors: Mr.S.Jagadeesan,M.Sc, MCA., M.Phil., ME[CSE]., S.Rubiya
[IJCT-V3I2P25] Authors: Mr.S.Jagadeesan,M.Sc, MCA., M.Phil., ME[CSE]., S.Rubiya[IJCT-V3I2P25] Authors: Mr.S.Jagadeesan,M.Sc, MCA., M.Phil., ME[CSE]., S.Rubiya
[IJCT-V3I2P25] Authors: Mr.S.Jagadeesan,M.Sc, MCA., M.Phil., ME[CSE]., S.Rubiya
 
[IJET-V1I4P3] Authors :Mekhala
[IJET-V1I4P3] Authors :Mekhala[IJET-V1I4P3] Authors :Mekhala
[IJET-V1I4P3] Authors :Mekhala
 
[IJET-V1I4P17] Authors: Fahimuddin. Shaik, Dr.Anil Kumar Sharma, Dr.Syed.Must...
[IJET-V1I4P17] Authors: Fahimuddin. Shaik, Dr.Anil Kumar Sharma, Dr.Syed.Must...[IJET-V1I4P17] Authors: Fahimuddin. Shaik, Dr.Anil Kumar Sharma, Dr.Syed.Must...
[IJET-V1I4P17] Authors: Fahimuddin. Shaik, Dr.Anil Kumar Sharma, Dr.Syed.Must...
 
[IJET-V1I3P22] Authors :Dipali D. Deokar, Chandrasekhar G. Patil.
[IJET-V1I3P22] Authors :Dipali D. Deokar, Chandrasekhar G. Patil.[IJET-V1I3P22] Authors :Dipali D. Deokar, Chandrasekhar G. Patil.
[IJET-V1I3P22] Authors :Dipali D. Deokar, Chandrasekhar G. Patil.
 
[IJET-V1I4P12] Authors :Nang Khin Su Yee, Theingi, Kyaw Thiha
[IJET-V1I4P12] Authors :Nang Khin Su Yee, Theingi, Kyaw Thiha[IJET-V1I4P12] Authors :Nang Khin Su Yee, Theingi, Kyaw Thiha
[IJET-V1I4P12] Authors :Nang Khin Su Yee, Theingi, Kyaw Thiha
 
[IJET-V1I5P13] Authors: Ayesha Shaikh, Antara Kanade,Mabel Fernandes, Shubhan...
[IJET-V1I5P13] Authors: Ayesha Shaikh, Antara Kanade,Mabel Fernandes, Shubhan...[IJET-V1I5P13] Authors: Ayesha Shaikh, Antara Kanade,Mabel Fernandes, Shubhan...
[IJET-V1I5P13] Authors: Ayesha Shaikh, Antara Kanade,Mabel Fernandes, Shubhan...
 
[IJET V2I3P6] Authors: Ajeeth, Sandhya raani M
[IJET V2I3P6] Authors:  Ajeeth, Sandhya raani M[IJET V2I3P6] Authors:  Ajeeth, Sandhya raani M
[IJET V2I3P6] Authors: Ajeeth, Sandhya raani M
 
[IJET-V1I6P7] Authors: Galal Ali Hassaan
[IJET-V1I6P7] Authors: Galal Ali Hassaan[IJET-V1I6P7] Authors: Galal Ali Hassaan
[IJET-V1I6P7] Authors: Galal Ali Hassaan
 
[IJET V2I3P9] Authors: Ruchi Kumari , Sandhya Tarar
[IJET V2I3P9] Authors: Ruchi Kumari , Sandhya Tarar[IJET V2I3P9] Authors: Ruchi Kumari , Sandhya Tarar
[IJET V2I3P9] Authors: Ruchi Kumari , Sandhya Tarar
 
[IJET-V1I4P10] Authers :EiEi Thwe, Theingi
[IJET-V1I4P10] Authers :EiEi Thwe, Theingi[IJET-V1I4P10] Authers :EiEi Thwe, Theingi
[IJET-V1I4P10] Authers :EiEi Thwe, Theingi
 
[IJET V2I3P3] Authors: Kavya K, Dr. Geetha C R
[IJET V2I3P3] Authors: Kavya K, Dr. Geetha C R[IJET V2I3P3] Authors: Kavya K, Dr. Geetha C R
[IJET V2I3P3] Authors: Kavya K, Dr. Geetha C R
 
[IJCT-V3I2P31] Authors: Amarbir Singh
[IJCT-V3I2P31] Authors: Amarbir Singh[IJCT-V3I2P31] Authors: Amarbir Singh
[IJCT-V3I2P31] Authors: Amarbir Singh
 
[IJET-V1I2P7] Authors :Mr. Harjot Kaur, Mrs.Daljit Kaur
[IJET-V1I2P7] Authors :Mr. Harjot Kaur, Mrs.Daljit Kaur[IJET-V1I2P7] Authors :Mr. Harjot Kaur, Mrs.Daljit Kaur
[IJET-V1I2P7] Authors :Mr. Harjot Kaur, Mrs.Daljit Kaur
 
[IJCT-V3I2P16] Authors: Pooja Kumar, Rajeev Thakur
[IJCT-V3I2P16] Authors: Pooja Kumar, Rajeev Thakur[IJCT-V3I2P16] Authors: Pooja Kumar, Rajeev Thakur
[IJCT-V3I2P16] Authors: Pooja Kumar, Rajeev Thakur
 

Similar to Image-Based Sign Language Recognition on Android

Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language DetectionIRJET Journal
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET Journal
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Languageijtsrd
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490IJRAT
 
Review on Hand Gesture Recognition
Review on Hand Gesture RecognitionReview on Hand Gesture Recognition
Review on Hand Gesture Recognitiondbpublications
 
Vision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language RecognitionVision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language RecognitionIJAAS Team
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysispaperpublications3
 
Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...IRJET Journal
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysispaperpublications3
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbRSIS International
 
Sign Language Detection using Action Recognition
Sign Language Detection using Action RecognitionSign Language Detection using Action Recognition
Sign Language Detection using Action RecognitionIRJET Journal
 
Sign Language Recognition
Sign Language RecognitionSign Language Recognition
Sign Language RecognitionIRJET Journal
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language DetectorIRJET Journal
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeIRJET Journal
 
A Study on Face Expression Observation Systems
A Study on Face Expression Observation SystemsA Study on Face Expression Observation Systems
A Study on Face Expression Observation Systemsijtsrd
 
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNINGSIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNINGIRJET Journal
 
Ay4102371374
Ay4102371374Ay4102371374
Ay4102371374IJERA Editor
 
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...IJERA Editor
 
Ay4103315317
Ay4103315317Ay4103315317
Ay4103315317IJERA Editor
 
Sign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesSign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesIRJET Journal
 

Similar to Image-Based Sign Language Recognition on Android (20)

Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language Detection
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand Gesture
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Language
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490
 
Review on Hand Gesture Recognition
Review on Hand Gesture RecognitionReview on Hand Gesture Recognition
Review on Hand Gesture Recognition
 
Vision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language RecognitionVision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language Recognition
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and Dumb
 
Sign Language Detection using Action Recognition
Sign Language Detection using Action RecognitionSign Language Detection using Action Recognition
Sign Language Detection using Action Recognition
 
Sign Language Recognition
Sign Language RecognitionSign Language Recognition
Sign Language Recognition
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language Detector
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using Mediapipe
 
A Study on Face Expression Observation Systems
A Study on Face Expression Observation SystemsA Study on Face Expression Observation Systems
A Study on Face Expression Observation Systems
 
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNINGSIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
 
Ay4102371374
Ay4102371374Ay4102371374
Ay4102371374
 
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...
Real-Time System of Hand Detection And Gesture Recognition In Cyber Presence ...
 
Ay4103315317
Ay4103315317Ay4103315317
Ay4103315317
 
Sign Language Identification based on Hand Gestures
Sign Language Identification based on Hand GesturesSign Language Identification based on Hand Gestures
Sign Language Identification based on Hand Gestures
 

More from IJET - International Journal of Engineering and Techniques

More from IJET - International Journal of Engineering and Techniques (20)

healthcare supervising system to monitor heart rate to diagonize and alert he...
healthcare supervising system to monitor heart rate to diagonize and alert he...healthcare supervising system to monitor heart rate to diagonize and alert he...
healthcare supervising system to monitor heart rate to diagonize and alert he...
 
verifiable and multi-keyword searchable attribute-based encryption scheme for...
verifiable and multi-keyword searchable attribute-based encryption scheme for...verifiable and multi-keyword searchable attribute-based encryption scheme for...
verifiable and multi-keyword searchable attribute-based encryption scheme for...
 
Ijet v5 i6p18
Ijet v5 i6p18Ijet v5 i6p18
Ijet v5 i6p18
 
Ijet v5 i6p17
Ijet v5 i6p17Ijet v5 i6p17
Ijet v5 i6p17
 
Ijet v5 i6p16
Ijet v5 i6p16Ijet v5 i6p16
Ijet v5 i6p16
 
Ijet v5 i6p15
Ijet v5 i6p15Ijet v5 i6p15
Ijet v5 i6p15
 
Ijet v5 i6p14
Ijet v5 i6p14Ijet v5 i6p14
Ijet v5 i6p14
 
Ijet v5 i6p13
Ijet v5 i6p13Ijet v5 i6p13
Ijet v5 i6p13
 
Ijet v5 i6p12
Ijet v5 i6p12Ijet v5 i6p12
Ijet v5 i6p12
 
Ijet v5 i6p11
Ijet v5 i6p11Ijet v5 i6p11
Ijet v5 i6p11
 
Ijet v5 i6p10
Ijet v5 i6p10Ijet v5 i6p10
Ijet v5 i6p10
 
Ijet v5 i6p2
Ijet v5 i6p2Ijet v5 i6p2
Ijet v5 i6p2
 
IJET-V3I2P24
IJET-V3I2P24IJET-V3I2P24
IJET-V3I2P24
 
IJET-V3I2P23
IJET-V3I2P23IJET-V3I2P23
IJET-V3I2P23
 
IJET-V3I2P22
IJET-V3I2P22IJET-V3I2P22
IJET-V3I2P22
 
IJET-V3I2P21
IJET-V3I2P21IJET-V3I2P21
IJET-V3I2P21
 
IJET-V3I2P20
IJET-V3I2P20IJET-V3I2P20
IJET-V3I2P20
 
IJET-V3I2P19
IJET-V3I2P19IJET-V3I2P19
IJET-V3I2P19
 
IJET-V3I2P18
IJET-V3I2P18IJET-V3I2P18
IJET-V3I2P18
 
IJET-V3I2P17
IJET-V3I2P17IJET-V3I2P17
IJET-V3I2P17
 

Recently uploaded

HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEroselinkalist12
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 

Recently uploaded (20)

HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 

Image-Based Sign Language Recognition on Android

Such a mediator, however, must know the sign language used by the deaf person, and this cannot always be assumed, since different spoken languages have different sign languages. It is also difficult for a deaf person to understand what a hearing person is saying; tracking the speaker's lip movements is one option, but it gives poor efficiency and accuracy, since facial expressions and speech do not always match. To overcome these problems we propose an Android application that recognizes sign language from hand gestures and lets the user define and upload their own sign language into the system. Its key feature is real-time conversion of gestures to text and speech. For two-way communication between the deaf person and a second person, the second person's speech is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is useful not only for the deaf community but also for people who migrate to a different region and do not know the local language.
Keywords — Gesture extraction and detection, pattern matching, text generation, text to speech and speech to text conversion,
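The processing steps listed above (gesture extraction, gesture matching, text generation) can be sketched as a minimal pipeline. This is an illustrative assumption, not the paper's actual implementation: the function names, the thresholding "extraction", and the tiny binary sign database are all placeholders.

```python
# Minimal sketch of the gesture-to-text pipeline described above.
# All names (extract_gesture, match_gesture, the sample sign database)
# are illustrative placeholders, not the paper's implementation.

def extract_gesture(frame):
    # Placeholder "gesture extraction": threshold a grayscale frame
    # (a flat list of pixel intensities) into a binary feature vector.
    return [1 if px > 128 else 0 for px in frame]

def match_gesture(features, database):
    # "Gesture matching": return the user-defined sign whose stored
    # pattern differs from the extracted features in the fewest positions.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(database, key=lambda label: hamming(features, database[label]))

def gesture_to_text(frame, database):
    return match_gesture(extract_gesture(frame), database)

# A user-defined sign database, as the proposed system allows: label -> pattern.
signs = {"hello": [1, 1, 0, 0], "thanks": [0, 0, 1, 1]}
print(gesture_to_text([200, 180, 40, 10], signs))  # -> hello
```

Because the database is just a user-editable mapping, uploading a new sign reduces to adding another label/pattern entry, which mirrors the "define your own sign language" feature described in the abstract.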
(power, cyber, and data gloves), mostly statistical features, and 3D position information.
In [2] Ravikiran J, Kavi Mahesh, Suhas Mahishi, Dheeraj R, Sudheender S, and Nitin V Pujari designed a highly accurate image processing algorithm for recognizing American Sign Language. Their implementation does not require gloves or markers. The system detects the number of open fingers using boundary tracing combined with fingertip detection.
In [3] Son Lam Phung, Abdesselam Bouzerdoum, and Douglas Chai analysed three important issues in pixel-wise skin segmentation: colour representation, colour quantization, and the classification algorithm. They found that the Bayesian classifier with the histogram technique and the multilayer perceptron classifier achieve higher classification rates than the other classifiers tested.
In [4] Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R designed an application for deaf and mute persons by integrating already existing methods. Its processing steps are gesture extraction, gesture matching, and conversion to speech. For gesture extraction they used histogram matching, bounding box computation, skin colour segmentation, and region growing; for gesture matching they used feature point matching and correlation-based matching. Integrating these methods, their paper provides the following four approaches:
• Approach A: Skin colour segmentation with feature point matching using SIFT
• Approach B: Region growing with feature point matching using SIFT
• Approach C: Skin colour segmentation with correlation matching
• Approach D: Region growing with correlation matching
The application also includes gesture-to-text conversion. In [5] William T.
Freeman and Michal Roth proposed the orientation histogram as a feature vector for gesture classification and interpolation. This method is simple and fast to compute, and is robust to changes in scene illumination. They distinguish two categories of gestures: static and dynamic. A static gesture is a particular hand configuration and pose represented by a single image; a moving gesture, represented by a sequence of images, is a dynamic gesture. Their pattern recognition system converts an image, or a sequence of images, into a feature vector, which is then compared with the feature vectors of a training set of gestures using a Euclidean distance metric and a video digitizer. In the run phase, the computer compares the feature vector of the present image with those in the training set and picks the category of the nearest vector, or interpolates between vectors. The methods are image based, simple, and fast.
In [6] V. Nayakwadi and N. B. Pokale survey recent gesture recognition approaches, with particular emphasis on hand gestures. They review static hand-posture methods along with the different tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering.
Vision-based approaches: the system requires only cameras to capture the images needed for natural interaction between human and machine. These approaches are simple, but they raise many challenges, such as complex backgrounds, lighting variation, and other skin-coloured objects besides the hand.
Instrumented glove approaches: marked or coloured gloves are worn on the human hand to direct the process of tracking the hand and locating the palm and fingers, which makes it possible to extract the geometric features necessary to form the hand shape.
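The orientation-histogram feature described in [5] can be sketched in a few lines. The bin count, the magnitude weighting, and the finite-difference gradients below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Orientation histogram as a gesture feature vector, in the spirit of [5]:
    estimate gradient orientations with finite differences, then histogram
    them, weighted by gradient magnitude (an assumed choice)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                 # vertical, horizontal gradients
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                # orientations in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    # Normalize so the feature is robust to overall illumination scale.
    return hist / total if total > 0 else hist

# A vertical edge produces gradients in one dominant direction.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
feature = orientation_histogram(img)
```

Feature vectors like this one can then be compared with a Euclidean distance metric against the training set, as the run phase described above requires.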
Gesture recognition techniques: most researchers use an ANN as the classifier in the gesture recognition process.
Histogram-based feature: a method for recognizing gestures based on pattern recognition using the orientation histogram.
Fuzzy clustering algorithm: in fuzzy clustering, the sample data are partitioned into groups in a fuzzy way, so a single data pattern may belong to several data groups; this is the main difference between fuzzy clustering and other clustering algorithms.
Hidden Markov model (HMM): an HMM is a stochastic process with a finite number of Markov-chain states and a set of random functions, one per state. An HMM's topology is represented by an initial state, a set of output symbols, and a set of state transitions. HMMs have a rich mathematical structure and have proved effective for modelling spatio-temporal information; sign language recognition is one of their most common applications.
In [7] Massimo Piccardi reviews background subtraction, a widely used approach for detecting moving objects from static cameras. The paper reviews the main methods and gives an original categorisation based on speed, memory requirements, and accuracy. The methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches. Several methods for performing background subtraction have been proposed; all of them try to estimate the background model effectively from the temporal sequence of frames. The methods reviewed are: running Gaussian average, temporal median filter, mixture of Gaussians, kernel density estimation (KDE), sequential KD approximation, co-occurrence of image variations, and eigen-backgrounds.
III. RESEARCH WORK
To extract and process an image, an acquisition step is performed first. Image acquisition generally involves pre-processing such as scaling. A scale-space representation can be used: a scale space represents an image at multiple resolution levels. A Difference of Gaussians can then be applied to the image.
A Difference of Gaussians is a feature-enhancement algorithm that subtracts one blurred version of an original image from another, less blurred version. In the simple case of grayscale images, the blurred images are obtained by convolving the original with Gaussian kernels of differing standard deviations. Blurring with a Gaussian kernel suppresses only high-frequency spatial information, so subtracting one blurred image from the other preserves the spatial information lying between the ranges of frequencies preserved in the two. The Difference of Gaussians is thus a band-pass filter that discards all but a narrow band of the spatial frequencies present in the original grayscale image.
There are several ways to handle image translation. One is template matching, which compares pixel intensities using the SAD (sum of absolute differences) measure: a pixel in the search image with coordinates (xs, ys) has intensity Is(xs, ys), and a pixel in the template with coordinates (xt, yt) has intensity It(xt, yt). The absolute difference in pixel intensities is then defined as Diff(xs, ys, xt, yt) = |Is(xs, ys) − It(xt, yt)|.
Image processing also includes image segmentation. Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing, and a rugged segmentation procedure carries the process a long way towards the successful solution of imaging problems that require objects to be identified individually. The image is then represented and described, and a knowledge base stores information about the image for later use in object recognition. The algorithms used for image extraction and detection are background subtraction and blob detection.
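The SAD measure just defined can be turned into a small exhaustive matcher. The helper names are ours, and the brute-force search over every position is for clarity only; real systems restrict the search region:

```python
import numpy as np

def sad(window, template):
    """Sum of absolute differences: the sum of
    Diff(xs, ys, xt, yt) = |Is(xs, ys) - It(xt, yt)| over all template pixels."""
    return np.abs(window.astype(int) - template.astype(int)).sum()

def match_template_sad(search, template):
    """Slide the template over the search image and return the top-left
    coordinates (y, x) of the window with the lowest SAD (best match)."""
    th, tw = template.shape
    sh, sw = search.shape
    best, best_pos = None, None
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            score = sad(search[y:y+th, x:x+tw], template)
            if best is None or score < best:
                best, best_pos = score, (y, x)
    return best_pos, best

search = np.array([[0, 0, 0, 0],
                   [0, 9, 8, 0],
                   [0, 7, 9, 0]], dtype=np.uint8)
template = np.array([[9, 8],
                     [7, 9]], dtype=np.uint8)
pos, score = match_template_sad(search, template)
```

An exact occurrence of the template yields a SAD of zero; the lower the score, the better the match.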
Blob detection is an algorithm used to determine whether a group of connected pixels are related to each other. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene. Background subtraction is a technique in the fields of image processing and computer vision in which an image's foreground is extracted for further processing (object recognition etc.). The background subtraction method has some disadvantages:
Background subtraction can be a powerful ally when it comes to segmenting objects in a scene. The method, however, has some built-in limitations that are exposed especially when processing video of outdoor scenes. First, the method requires the background to be empty while the background model for each pixel is learnt; this can be a challenge in a natural scene where moving objects may always be present. One solution is to median-filter all training samples for each pixel: this eliminates pixels where an object is moving through the scene, so the resulting model of the pixel is a true background pixel. An extension is to first order all training pixels (as in the median filter) and then average the pixels closest to the median, which provides both a mean and a variance per pixel. Such approaches assume that each pixel is covered by objects for less than half of the training period. Another problem is that in outdoor video a pixel may cover more than one background, which results in poor segmentation of that pixel during background subtraction. A further problem in outdoor video is shadows cast by strong sunlight: shadow pixels can easily appear different from the learnt background model and hence be incorrectly classified as object pixels.
IV. ALGORITHMS
A. Background Subtraction
Background subtraction, also known as foreground detection [7], is a technique in the fields of image processing and computer vision in which an image's foreground is extracted for further processing (object recognition etc.). Generally, an image's regions of interest are the objects (humans, cars, text etc.) in its foreground. After the image pre-processing stage (which may include image denoising and post-processing such as morphology),
object localization is required, which may make use of this technique. Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale of the approach is to detect moving objects from the difference between the current frame and a reference frame, often called the "background image" or "background model". Background subtraction is mostly done when the image in question is part of a video stream, and it provides important cues for numerous applications in computer vision. It includes the following steps:
Step 1: Motion detection is done using segmentation, where the moving objects are segmented from the background: take an image as the background, denoted B, and compare it with the frame obtained at time t, denoted I(t).
Step 2: Segment out the objects by the simple image subtraction technique: for each pixel in I(t), take the pixel value, denoted P[I(t)], and subtract the corresponding pixel at the same position in the background image, denoted P[B]. As an equation: P[F(t)] = P[I(t)] − P[B].
Step 3: The background is assumed to be the frame at time t. The difference image shows some intensity at the pixel locations that have changed between the two frames. This approach only works when all foreground pixels are moving and all background pixels are static.
Step 4: A threshold is applied to the difference image to improve the subtraction: |P[F(t)] − P[F(t+1)]| > Threshold. The difference image's pixel intensities are 'thresholded', i.e. filtered, on the value of Threshold. The accuracy of this approach depends on the speed of movement in the scene; faster movements may require higher thresholds.
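The steps above can be sketched as a minimal frame-differencing routine. The threshold value and the toy frames are assumptions for illustration, not values from the paper:

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Classify each pixel as foreground when |P[I(t)] - P[B]| exceeds a
    threshold. Both inputs are 2-D uint8 grayscale arrays of equal shape;
    the result is a boolean mask with True marking moving-object pixels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a static background plus one frame containing a bright object.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200            # object appears in the current frame
mask = subtract_background(frame, background, threshold=25)
```

In a real video stream the mask would be computed per frame, with the threshold tuned to the expected speed of movement as noted above.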
Thresholding: the simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity Ii,j is less than some fixed constant T (that is, Ii,j < T), or with a white pixel if the intensity is greater than that constant. In a winter scene, for example, this turns a dark tree completely black and the white snow completely white.
Multiband thresholding: colour images can also be thresholded. One approach is to designate a separate threshold for each of the RGB components of the image and then combine them with an AND operation. This reflects the way the camera works
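Both variants can be sketched as follows; the threshold values are illustrative, not calibrated:

```python
import numpy as np

def threshold_gray(img, t):
    """Global thresholding: pixels >= t become white (255), the rest black (0)."""
    return np.where(img >= t, 255, 0).astype(np.uint8)

def threshold_rgb(img, t_r, t_g, t_b):
    """Multiband thresholding: threshold each RGB channel separately, then
    combine the three binary maps with a logical AND."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= t_r) & (g >= t_g) & (b >= t_b)

# A 1x2 RGB image: one nearly-white pixel and one dark pixel.
img = np.array([[[250, 245, 240], [30, 40, 20]]], dtype=np.uint8)
mask = threshold_rgb(img, 200, 200, 200)
bw = threshold_gray(np.array([[100, 200]], dtype=np.uint8), 128)
```

As the text notes, per-channel RGB thresholds do not match human colour perception; an HSV-based rule would threshold hue circularly instead.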
and how the data is stored in the computer, but it does not correspond to the way that people recognize colour. Therefore the HSL and HSV colour models are more often used; since hue is a circular quantity, it requires circular thresholding.
B. Blob Detection Algorithm
Blob detection is an algorithm used to determine whether a group of connected pixels are related to each other [8]. This is useful for identifying separate objects in a scene, or for counting the number of objects in a scene. In computer vision, blob detection methods aim to detect regions in a digital image that differ in properties, such as brightness or colour, from surrounding regions. A blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered similar to each other. To find coloured blobs, convert the colour image from RGB to HSV format so that the colours are easier to separate. Compare images taken by the camera at different times and find the corresponding displacements or changes. Filter with Gaussians at different scales; this is done by repeatedly filtering with the same Gaussian. Blob detection is based on the Laplacian of the Gaussian (LoG): given an input image f(x, y), the image is convolved with a Gaussian kernel at a certain scale t to give a scale-space representation L(x, y; t) = g(x, y, t) * f(x, y). The result of applying the Laplacian operator is then computed, which usually yields strong positive responses for dark blobs of extent √(2t) and strong negative responses for bright blobs of similar size. To automatically capture blobs of different (unknown) sizes in the image domain, a multi-scale approach is therefore necessary. Now subtract the image filtered at one scale from the image filtered at the previous scale.
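This subtract-across-scales step is a Difference of Gaussians, which approximates the LoG. A minimal pure-NumPy sketch follows; the kernel radius of 3σ, the σ values, and the separable-blur implementation are illustrative choices, not the paper's implementation:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using a sampled, normalized 1-D kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'),
                               1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'),
                               0, rows)

def difference_of_gaussians(img, sigma1, sigma2):
    """Band-pass response: subtract the more-blurred image (sigma2) from the
    less-blurred one (sigma1). Bright blobs near the kernel scale respond."""
    return gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)

# A single bright blob on a dark background.
img = np.zeros((21, 21))
img[9:12, 9:12] = 1.0
dog = difference_of_gaussians(img, sigma1=1.0, sigma2=2.0)
peak = np.unravel_index(np.argmax(dog), dog.shape)
```

Repeating this at several σ pairs and taking extrema across scales gives the multi-scale blob detector the text describes.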
Then perform template matching. A basic method of template matching uses a convolution mask (template) tailored to a specific feature of the search image that we want to detect.
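Convolution-style matching can be sketched as a sum-of-products score at every position. Zero-meaning the template first is an added assumption (not spelled out in the text) to keep uniformly bright regions from dominating the score:

```python
import numpy as np

def correlate_template(search, template):
    """Linear spatial filtering: at each position, take the sum of products
    between the zero-mean template and the underlying window; the position
    with the highest score is the best match."""
    t = template.astype(float)
    t -= t.mean()                      # assumed normalization step
    th, tw = t.shape
    sh, sw = search.shape
    scores = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = (search[y:y+th, x:x+tw].astype(float) * t).sum()
    return np.unravel_index(np.argmax(scores), scores.shape)

search = np.array([[0, 0, 0, 0, 0],
                   [0, 10, 0, 0, 0],
                   [0, 0, 10, 0, 0],
                   [0, 0, 0, 0, 0]], dtype=float)
template = np.array([[10, 0],
                     [0, 10]], dtype=float)
best = correlate_template(search, template)
```

Unlike the SAD matcher, which seeks the lowest score, this correlation score is maximized at the best position.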
The convolution output will be highest at places where the image structure matches the mask structure, where large image values get multiplied by large mask values.
Implementation: 1. Pick a part of the search image to use as a template. Let the search image be S(x, y), where (x, y) are the coordinates of each pixel in the search image, and let the template be T(xt, yt), where (xt, yt) are the coordinates of each pixel in the template. 2. Move the centre (or the origin) of the template T(xt, yt) over each point (x, y) in the search image and calculate the sum of products between the coefficients in S(x, y) and T(xt, yt) over the whole area spanned by the template. As all possible positions of the template with respect to the search image are considered, the position with the highest score is the best position. This method is also referred to as 'linear spatial filtering', and the template is called a filter mask.
V. CONCLUSION
Sign languages are one of the main communication methods used by deaf people but, as opposed to common belief, there is no universal sign language: every country, or even regional group, uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on images, videos, and sensor input. The latter is the ultimate goal of this research, but sign language is not a simple spelling-out of spoken language, so recognizing isolated signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. The system will provide output in the form of text equivalent to the recognized sign language hand configuration. This system will ease and encourage the interaction of common people with handicapped people, since the common people would no longer be required to learn the various sign languages in order to
communicate with them. This system could be applied to various tasks, commercial or non-commercial, that involve handicapped people. Handicapped people can benefit from this system in their day-to-day life whenever they need to convey a message easily through their sign language to common people.
ACKNOWLEDGMENT
It gives us great pleasure to present the preliminary project report on 'IMAGE BASED SIGN LANGUAGE RECOGNITION ON ANDROID'. We would like to take this opportunity to thank our internal guide Prof. S. S. Raskar for all the help and guidance we needed; we are truly grateful for his kind support and valuable suggestions. We are also grateful to Prof. N. F. Shaikh, Head of the Computer Engineering Department, Modern Education Society's College of Engineering, for her indispensable support and suggestions.
REFERENCES
1. M. Mohandes, M. Deriche, and J. Liu, "Image-Based and Sensor-Based Approaches to Arabic Sign Language Recognition", IEEE Transactions on Human-Machine Systems, Vol. 44, No. 4, August 2014.
2. Ravikiran J, Kavi Mahesh, Suhas Mahishi, Dheeraj R, Sudheender S, and Nitin V Pujari, "Finger Detection for Sign Language Recognition", Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009, Hong Kong.
3. Son Lam Phung, Abdesselam Bouzerdoum, and Douglas Chai, "Skin Segmentation Using Color Pixel Classification: Analysis and Comparison", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, January 2005.
4. Ashish Sethi, Hemanth S, Kuldeep Kumar, Bhaskara Rao N, and Krishnan R, "SignPro - An Application Suite for Deaf and Dumb", IJCSET, Vol. 2, Issue 5, pp. 1203-1206, May 2012.
5. William T. Freeman and Michal Roth, "Orientation Histograms for Hand Gesture Recognition", IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, June 1995.
6. V. Nayakwadi and N. B. Pokale, "Natural Hand Gestures Recognition System for Intelligent HCI", International Journal of Computer Applications Technology and Research, 2013.
7. Massimo Piccardi, "Background subtraction techniques: a review", IEEE International Conference on Systems, Man and Cybernetics, 2004.
8. Anne Kaspers, "Blob Detection".