International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 09 Issue: 03 | Mar 2022 www.irjet.net p-ISSN: 2395-0072
© 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 1199
Hand Gesture Identification using Mediapipe
Ketan Gomase1, Akshata Dhanawade2, Prasad Gurav3, Sandesh Lokare4
1-4Student, Department of Electronics & Telecommunication Engineering, Atharva College of Engineering, Mumbai,
Maharashtra, India
Dr. Jyoti Dange5
5Professor & Project guide, Department of Electronics & Telecommunication Engineering, Atharva College of
Engineering, Mumbai, Maharashtra, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract - Lack of speech is considered a real disability. People with this disability use a range of methods to communicate with others; of the several forms available, one of the most common is sign language. Sign language is used by the deaf and hard of hearing to share information within their community and with others. Automatic sign language recognition spans the pipeline from the captured sign gesture through to text or speech. Gestures are classified as static and dynamic; recognizing a static gesture is simpler than recognizing a dynamic one, but both kinds of recognition system are important in human society. This study describes the steps in sign language recognition: data acquisition, pre-processing and data transformation, feature extraction, segmentation, and assessment of the obtained results. Directions for future research in the area are also suggested.
Key Words: Mediapipe, Sign Language Recognition (SLR), KNN, Hands solution, Human-Computer Interaction.
1. INTRODUCTION
Sign language is intended for the deaf community and can serve as a means of communication between deaf people and their friends and family. Sign Language Recognition is one of the fastest growing and most challenging areas of research today, and several new methods have been developed in this field recently. In this project, we develop a program that translates sign language using OpenCV. It outlines a method that recognizes American Sign Language (ASL) and interprets it into standard text.
1.1 Motivation and Background
Sign Language Recognition aims to develop algorithms and methods for accurately identifying sequences of produced signs and understanding their meaning. Many SLR methods treat the problem as Gesture Recognition (GR). Research has therefore focused so far on identifying the distinguishing characteristics and discrimination strategies needed to correctly label a given sign from a set of possible signs. However, sign language is more than just a collection of well-articulated gestures.
1.2 What is Gesture Recognition?
Gesture recognition is a topic in computer science and language technology whose aim is interpreting human gestures with mathematical algorithms; it is a sub-discipline of computer vision. Gestures can originate from any body movement or pose but typically come from the face or hands. Current work in the field includes emotion recognition from facial expressions, and hand gesture recognition. Users can employ simple gestures to control or interact with devices without physically touching them. Many methods have been developed using cameras and computer vision algorithms to interpret sign language.
1.3 Sign Language
Sign languages are languages that use visual cues to convey meaning. They are expressed through manual as well as non-manual elements. Sign languages are complete natural languages with their own grammar and lexicon. They are not universal and are not widely mutually intelligible, although there are some striking similarities between sign languages. Linguists consider both spoken and signed communication to be natural forms of language, meaning that both evolved through a gradual, unplanned process over a long period of time. Sign language should not be confused with body language, a kind of non-verbal communication.
Figure 1: Sign Languages
1.4 Mediapipe Framework [Hands solution]
MediaPipe Hands is a reliable hand and finger tracking solution. It uses machine learning (ML) to infer 21 3D hand landmarks from a single frame. While modern approaches rely mostly on powerful desktop environments for inference, this approach achieves real-time performance on mobile phones and even scales to multiple hands. We hope that providing this hand perception capability to the wider research and development community will result in creative use cases, stimulating new applications and new research avenues. MediaPipe Hands uses an integrated ML pipeline of several models working together: a palm detection model that operates on the full image and returns an oriented hand bounding box, and a hand landmark model that operates on the cropped image region defined by the palm detector and returns 3D hand key points with high reliability. This strategy is similar to the one employed in the MediaPipe Face Mesh solution, which uses a face detector together with a face landmark model.
1.5 Objective
The objective of this project was to build a neural network that can distinguish between the American Sign Language (ASL) alphabet characters when a hand sign is provided. This project is the first step toward a possible sign language translator, which could take communication in sign language and translate it into written and spoken language. Such a translator would greatly reduce the barrier many deaf and hard of hearing people face, so that they can better communicate with others in their daily activities.
1.6 Summary
Improving sign language applications for the deaf is important, as it allows easy communication with people who do not understand sign language. Our program aims to take a basic step toward closing the communication gap between the general public and the deaf and mute, using sign language. The main focus of this work is building a vision-based system for identifying fingerspelled characters. The reason for choosing a vision-based system is that it provides a simple and accurate way for a person to communicate with a computer. In this report, 36 classes were considered: 26 classes for the English alphabet (a to z) and 10 classes for the digits (0-9). We used Google's MediaPipe framework, whose improved hand recognition model can find 21 3D landmarks of the palm, and we show how to use this Python library to achieve our goal. It reaches 95.7% accuracy in palm detection; using a regular cross-entropy loss and no decoder gives an 86.22% baseline.
2. LITERATURE SURVEY
In recent years, a great deal of research has been done on sign language recognition. Recognition techniques fall into two categories:
2.1 Vision Based Approach
This method takes images from a camera as gesture data. The vision-based approach works directly on the captured gesture images and extracts the most recognizable features. Colored bands were used at the start of the vision-based approach; the main disadvantage of this method was that a standard color had to be applied to the fingers. Later systems used bare hands rather than colored ribbons. This creates a challenging problem, as such systems need a uniform background, consistent lighting, a person-centered frame, and a camera to achieve real-time performance. Additionally, such systems must be developed to meet requirements such as accuracy and robustness.
Figure 2: Sample of Vision Based Technique
This kind of analysis is based on how people perceive information about their surroundings, yet it is probably the most difficult to apply effectively. Several different approaches have been tested so far. The first is to build a three-dimensional model of the human hand. The model is compared against hand images from one or more cameras, and the parameters corresponding to the shape of the palm and the joint angles are estimated. These
parameters are then used to produce the gesture segmentation. The second is to capture an image with the camera, extract certain features, and use those features as input to a partitioning algorithm for segmentation.
2.2 Sensor Based Approach
This technique collects data generated by different sensors. The data is then analyzed and conclusions are drawn according to the recognition model. For hand sensing, different types of sensors are placed on the hand; when the hand makes any gesture, the data is recorded and then analyzed. The first sensing elements used were data gloves and LEDs; the first data glove was introduced in 1977. The sensor-based approach impairs natural hand movement because of the external hardware, and its major disadvantage is that complex gestures cannot be captured with this technique.
Figure 3: Gloves with flex sensor mounted on it
In one such system, an electromechanical mechanism is designed and controlled using real-time gestures. The system is built around a small microcontroller using the Keil and MPLAB tools. Gesture recognition is performed by sensing the change in resistance of flexible (flex) sensors. These sensors are attached to hand gloves, from which the input is fed into the system. The system is divided into two parts, a transmitter and a receiver. The transmitter side is housed in the hand gloves, where the data is sensed, processed via a PIC16F7487, and sent continuously to the receiver side. RF technology at 2.4 GHz is used to transfer the data to the receiver, where an ARM7 (LPC2148) processor receives it. From the data obtained, a character is predicted and matched, and the recognized character is displayed on the LCD. Different types of gesture classes are prepared, and the system is designed and tested in real time. The proposed system can be used for various applications, for example in instrumentation, in industry, and for disabled workers.
3. IMPLEMENTATION METHODOLOGY
MediaPipe Hands uses a machine learning pipeline that integrates multiple cooperating models: a palm detection model that works on the complete image and returns an oriented hand bounding box, and a hand landmark model that works on the cropped image region defined by the palm detector and returns reliable 3D key points.
To build the web application we have to photograph a minimum of 25-30 images per sign, and for each image the model gives 21 hand points, i.e. [x, y, z] coordinates. x and y are normalized to [0.0, 1.0] by the image width and height respectively. z represents the depth of the landmark relative to the wrist at the root of the hand; the smaller the value, the closer the landmark is to the camera. Once the dataset is built, the application predicts the sign with the help of an appropriate model; we use the KNN algorithm.
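A minimal sketch of the classification step is shown below. The cluster centres and training rows here are synthetic stand-ins for real 63-element landmark vectors (21 landmarks x 3 coordinates), not the actual dataset collected for this project:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each training row stands in for one photographed sign:
# 21 landmarks x 3 coordinates = 63 features per sample.
rng = np.random.default_rng(0)
centers = {"A": 0.1, "B": 0.5, "C": 0.9}          # synthetic "sign" clusters
X_train = np.vstack([c + 0.02 * rng.standard_normal((20, 63))
                     for c in centers.values()])   # 20 samples per sign
y_train = np.repeat(list(centers), 20)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# At run time, the landmark vector of a new frame is classified the same way.
new_frame = np.full((1, 63), 0.52)                 # closest to the "B" cluster
print(knn.predict(new_frame)[0])                   # → B
```

In the real system the rows of `X_train` would be the flattened [x, y, z] landmark vectors extracted by MediaPipe, with one label per captured sign.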
3.1 Hardware & Software Requirements
1) Windows or Linux PC with Python and the required libraries installed.
2) CMOS sensor (webcam).
3) Hand gestures visible to the camera.
Setup we used for this sign recognition project:
1) Windows or Linux machine with Python installed.
2) CPU - Intel Core i5, 9th generation.
3) GPU - Nvidia GTX 1050 Ti.
4) 720p60 web camera.
5) Python 3.8.6 and an IDE such as VS Code or Spyder.
6) Libraries: OpenCV, TensorFlow, Keras, MediaPipe, and other basics.
7) KNN (k-nearest neighbors) from the scikit-learn library for Python.
3.2 Result
This sign language recognizer is able to recognize the alphabet (A-Z). All signs can be recognized in real time. The current system has, however, only been trained on a very small database.
Figures 4-7: Recognition results for the signs D, W, and U respectively.
4. CONCLUSIONS
Sign language recognition using MediaPipe and computer vision was partially successful, running at an average of 17 FPS with an average accuracy of 86 to 91%. Improving on this is something we plan to address in the days to come. Single-hand gesture recognition was the theme, and the biggest problem we worked on. MediaPipe achieves an average accuracy of 95.7% in palm detection; using the normal cross-entropy loss and no decoder provides an 86.22% baseline. The future scope of this study is to improve Human-Computer Interaction (HCI) using a very powerful and fast algorithm.
