VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 4 (2021)
ISSN (Online): 2581-7280
VIVA Institute of Technology
9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021)
www.viva-technology.org/New/IJRI
Sign Language Recognition using Image-Based Hand Gesture Recognition Techniques

Prasad Suhas Tandel¹, Prof. Sonia Dubey²
¹ Department of MCA, VIVA School of MCA, University of Mumbai, India
² Department of MCA, VIVA School of MCA, University of Mumbai, India
Abstract: Hand gestures are one of the most common forms of sign language used in non-verbal communication. Sign language is used primarily by deaf and mute people, who have difficulty hearing or speaking, to communicate with each other or with hearing people. Various sign-language systems have been developed by manufacturers around the world, but few of them are flexible and affordable for end users. This paper therefore presents software for a system that can automatically recognise sign language, to help deaf and mute people communicate better with others. Pattern recognition and hand gesture recognition are growing fields of research; as an integral part of non-verbal communication, hand gestures play a major role in our daily lives. A hand gesture recognition system offers a new, natural and easy-to-use way of communicating with a computer that is already familiar to humans. Exploiting the common structure of the human hand, four fingers and one thumb, the software aims to provide a real-time hand recognition system based on a few structural features such as hand position, centroid of mass, finger position, and whether each finger and the thumb is raised or folded.
Keywords – communication system, hand recognition, pattern recognition, sign language, gesture
I. INTRODUCTION
Deaf people are often deprived of normal interaction with other people in the community. They frequently find it difficult to communicate with hearing people using their hands, as very few hearing people understand sign language. Since people with hearing or speech impairments cannot speak like others, they must rely on some form of visual communication most of the time. Sign language is the primary means of communication in the deaf and mute community. Like any other language it has its own grammar and vocabulary, but it uses a visual mode of exchange. The problem arises when mute or deaf people try to express their feelings to others using sign language, because hearing people usually do not know its grammar. As a result, the contacts of a deaf person are often limited to his or her family and the deaf community. The importance of sign language is underlined by the growing public acceptance and funding of international projects. In this age of technology, the deaf and mute community is in dire need of computer-based support. Researchers have been attacking the problem for some time now, and the results show promise.
Considerable technology has been developed for speech recognition, but no real commercial product for sign recognition is currently available on the market. The idea is to make computers understand sign language and to develop a user-friendly human–computer interface (HCI). Making a computer understand speech, facial expressions and gestures is a step in that direction. Gestures carry information that can be exchanged in place of words, and a person can make countless gestures at a time. Since human gestures are perceived visually, they are of great interest to computer-vision researchers. This project aims to recognise human gestures as a form of HCI. Encoding these gestures in machine language requires a complex programming algorithm. In our project we focus on image analysis and template matching to obtain the best result.
II. LITERATURE REVIEW
As illustrated in Figure 1, hand gestures for human–computer interaction (HCI) started with the invention of the data-glove sensor, which provided simple commands that a computer could recognise. Such gloves use a variety of sensors to capture the movement and position of the hand by sensing the palm area and the fingers. Sensors based on the same bending-angle principle include the bend sensor, small displacement sensors, the optical-fibre transducer, flexible sensors and the accelerometer sensor. These sensors exploit different physical principles according to their type.
Figure 1. Different techniques for capturing hand gestures: (a) glove-based attached sensors, either connected to the computer or portable; (b) computer vision-based camera using a marked glove or just the bare hand.
Although the techniques mentioned above have produced good results, they have several limitations that make them unsuitable for the elderly, who may experience discomfort and confusion due to wire connection problems. In addition, older people suffering from chronic diseases that cause loss of muscle function may not be able to put on and take off the gloves, which makes the devices uncomfortable and prevents them from being used for long periods of time [1]. These sensors may also cause skin damage, infection or adverse reactions in people with sensitive skin or burns, and some sensors are quite expensive. Some of these problems were addressed in a study by Lamberti and Camastra, who developed a computer-vision system based on colour-marked gloves; although this approach did not require attached sensors, it still required coloured gloves to be worn.
The most successful commercial glove is by far the VPL Data Glove [2], built by Zimmerman during the 1970s. It is based on patented optical-fibre sensors running along the back of the fingers. Starner and Pentland developed a glove-based system that can recognise 40 American Sign Language (ASL) signs at 5 Hz.
In other work, Hyeon-Kyu Lee and Jin H. Kim presented real-time hand-gesture recognition using hidden Markov models (HMMs). Kjeldsen and Kender devised a technique for skin-tone segmentation in HSV space, based on the observation that skin tones in images occupy a compact volume of the HSV space. They also built a system that uses a back-propagation neural network to recognise gestures from the segmented hand images [1]. Etsuko Ueda and Yoshio Matsumoto outlined a novel hand-pose estimation technique for vision-based human interfaces, in which hand regions are extracted from multiple images acquired by a multi-viewpoint camera system and used to construct a voxel model [12].
This paper also discusses these methods in detail and summarises recent research along several dimensions (type of camera used, image or video processing, type of method, segmentation algorithm used, recognition rate, type of region-of-interest processing, number of gestures, application area, limitations or invariance factors, detection range achieved and, in some cases, the data set used, runtime speed, hardware and type of error). In addition, the review highlights the most popular applications associated with this topic.
III. HAND GESTURE METHODS
The proposed system consists of two main stages: (1) hand segmentation and (2) hand sign recognition. A block diagram illustrates the workflow of the proposed system. Hand features are an important criterion for segmentation and for distinguishing between hand gestures; these features must generalise across different hands and gestures made by different people. In this system we use the histogram of oriented gradients (HOG) as the feature descriptor. It performs better than other descriptors because it is robust to changes in lighting and to rotation of objects. Rather than looking at the whole image, we divide it into smaller cells and compute a histogram of edge or gradient directions over the pixels inside each cell.
3.1 Preparing the dataset
We have built a database of the 26 English alphabets in Indian Sign Language. Each sign is performed by two different people, with different hand structures, under different lighting conditions. The videos are recorded with a camera, each video is split frame by frame into photographs, and the frames are sampled to obtain about 250 images for each sign. The data was then divided into 4,800 training images and 1,200 test images.
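As a rough illustration of how such a dataset can be assembled, the sketch below extracts frames from recorded sign videos and splits them into training and test sets. The folder layout (one sub-folder of videos per sign), the sampling step and the split ratio are assumptions made for illustration, not details reported in the paper.

```python
import os
import random

import cv2  # OpenCV, used here for video decoding


def extract_frames(video_path, step=10, size=(100, 100)):
    """Read a video and return every `step`-th frame, resized for later processing."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(cv2.resize(frame, size))
        index += 1
    cap.release()
    return frames


def build_dataset(root="isl_videos", train_ratio=0.8):
    """Assume one sub-folder per sign (A-Z), each containing recorded videos."""
    train, test = [], []
    for sign in sorted(os.listdir(root)):
        sign_dir = os.path.join(root, sign)
        images = []
        for video in os.listdir(sign_dir):
            images.extend(extract_frames(os.path.join(sign_dir, video)))
        random.shuffle(images)
        split = int(train_ratio * len(images))
        train.extend((img, sign) for img in images[:split])
        test.extend((img, sign) for img in images[split:])
    return train, test
```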
3.2 Image Processing and edge detection
Before extracting features, the images must be processed so that only the most important details are retained and unwanted, distracting or overpowering details are ignored. The images are first resized to 100 × 100 pixels for faster computation [3]. Each image is then converted to greyscale and finally to a binary image. In parallel, the skin colour is segmented using the YCbCr colour model. Finally, edge detection is performed with a Canny edge detector. The process is illustrated in the figure below.
Output of skin colour detection: (a) original cropped image, (b) greyscale image, (c) skin colour detection output.
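A minimal sketch of this preprocessing chain using OpenCV is given below. The YCbCr skin-colour bounds and the Canny thresholds are commonly used default values assumed for illustration; they are not thresholds reported in the paper.

```python
import cv2
import numpy as np


def preprocess(image_bgr):
    """Resize, convert to grey and binary, mask skin colour in YCbCr, and detect edges."""
    # Resize to 100 x 100 pixels for faster computation.
    small = cv2.resize(image_bgr, (100, 100))

    # Greyscale and binary versions of the image.
    grey = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Skin-colour mask in the YCbCr space (OpenCV stores it as YCrCb).
    # The bounds below are widely used defaults, assumed here for illustration.
    ycrcb = cv2.cvtColor(small, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))

    # Canny edge detection on the greyscale image.
    edges = cv2.Canny(grey, 50, 150)
    return grey, binary, skin_mask, edges
```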
3.3 Feature Extraction
Extracting features from the data is a critical part of any recognition process. It can be done using various methods such as Fourier descriptors, the Scale-Invariant Feature Transform (SIFT) or Principal Component Analysis (PCA). The Histogram of Oriented Gradients (HOG) is another effective feature extraction method, and it is the one used in this paper. The basic idea of HOG is that the appearance and shape of an object within an image can be represented by the distribution of local intensity gradients or edge directions. The image is divided into small cells and a gradient histogram is computed for each cell: every pixel contributes to an orientation bin, weighted by its gradient magnitude. To improve accuracy, the cells are normalised across larger blocks so that the descriptor is less affected by changes in lighting and contrast. Finally, the histograms of all cells are concatenated into a single HOG feature vector for the whole image.
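The sketch below computes a HOG descriptor for a preprocessed 100 × 100 greyscale image using scikit-image. The cell size, block size and number of orientation bins are typical defaults chosen for illustration; the paper does not report its exact HOG parameters.

```python
from skimage.feature import hog


def hog_descriptor(grey_image):
    """Return the concatenated HOG feature vector of a 100 x 100 greyscale image."""
    return hog(
        grey_image,
        orientations=9,          # number of gradient-direction bins per cell
        pixels_per_cell=(8, 8),  # cell size over which each histogram is computed
        cells_per_block=(2, 2),  # block size used for contrast normalisation
        block_norm="L2-Hys",     # normalisation scheme, reduces sensitivity to lighting
    )
```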
3.4 Template matching and sign recognition
The feature vector generated in the previous step is fed to a classifier. In this paper we use a Support Vector Machine (SVM) for classification. Using an SVM classifier increases accuracy and helps avoid overfitting. In an SVM, data points are plotted in an n-dimensional space, where n is the number of features and each coordinate corresponds to the value of a feature. The algorithm then finds a hyperplane that separates the classes. The trained model is stored and used for real-time sign language recognition.
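A sketch of this classification stage with scikit-learn is given below. The linear kernel and the use of joblib to persist the model for real-time use are assumptions; the paper only states that an SVM is used.

```python
from joblib import dump
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC


def train_sign_classifier(train_features, train_labels, test_features, test_labels):
    """Fit an SVM on HOG feature vectors and report test accuracy."""
    classifier = SVC(kernel="linear")  # kernel choice is assumed, not from the paper
    classifier.fit(train_features, train_labels)

    predictions = classifier.predict(test_features)
    print("Test accuracy:", accuracy_score(test_labels, predictions))

    dump(classifier, "sign_svm.joblib")  # persist the model for real-time recognition
    return classifier
```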
3.5 Text to speech conversion
This stage falls under artificial intelligence [9]: once the template matching step has successfully identified the sign in the image, the result is translated into text and then into audio. The methods described earlier provide the recognised labels used for this conversion.
We used the Google Text-to-Speech API to convert the recognised sign language into audio. It is one of the best text-to-speech APIs available; unlike many other TTS APIs, it produces a human-like voice. The sign language is recognised using the steps above and the resulting text is passed to a text-to-speech function that converts it into sound. With this system the user can see and hear the sign-language translation simultaneously, which makes it much easier to use.
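One way to perform this step in Python is with the gTTS package, a commonly used wrapper around Google Text-to-Speech. The paper names only the Google API, so the specific library and the output file name below are assumptions for illustration.

```python
from gtts import gTTS


def speak(recognised_text, out_file="sign_output.mp3"):
    """Convert the recognised sign text into an MP3 file using Google Text-to-Speech."""
    tts = gTTS(text=recognised_text, lang="en")
    tts.save(out_file)
    return out_file


# Example: the classifier has just recognised the letters "H" and "I".
speak("HI")
```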
IV. ALPHABET RECOGNITION
The following table shows the value assigned to each finger [8]. Binary alphabet calculation: it is possible to display a total of (2^5 − 1), i.e. 31, gestures using the fingers of a single hand, and (2^10 − 1), i.e. 1023, gestures if both hands are used (a small sketch of this counting scheme is given below).
The figure shows the binary finger-mapping scheme and the value assigned to each finger; by referring to the gesture table, the code of each alphabet can be read off.
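The sketch below illustrates the binary counting idea: each finger of one hand is assigned a power of two, so any non-empty combination of raised fingers maps to a value between 1 and 31, and two hands give up to 1023 codes. The specific finger-to-value assignment here is illustrative; the paper's own assignment is given in its gesture table.

```python
# Powers of two assigned to the five fingers of one hand (illustrative ordering).
FINGER_VALUES = {"thumb": 1, "index": 2, "middle": 4, "ring": 8, "little": 16}


def gesture_code(raised_fingers):
    """Return the binary gesture code for a set of raised fingers."""
    return sum(FINGER_VALUES[f] for f in raised_fingers)


# One hand can encode 2**5 - 1 = 31 non-empty gestures, two hands 2**10 - 1 = 1023.
assert gesture_code(FINGER_VALUES) == 31
print(gesture_code({"index", "middle"}))  # prints 6
```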
Alphabet code
V. CONCLUSION
In this paper we have explained the implementation of a system that recognises sign language and translates it into English. We discussed the importance of an ISL translator for communication with the deaf and mute. In this system, frames of the hand are captured using a webcam.
Our project aims to bridge the communication gap by introducing an inexpensive computer into the communication path, so that sign language can be automatically captured, recognised and translated into speech for the benefit of blind people. In the other direction, speech must be analysed and converted to either signs or a textual display on the screen for the benefit of the hearing impaired.
The captured frames are processed to obtain improved features, which are then passed to the subtraction and segmentation algorithms to translate the sign language into English text. This text is converted to speech using a text-to-speech API. The system uses the algorithms described above to obtain the final result. The proposed model was tested on a database containing the 26 symbols performed by two different individuals, and the results show an overall system accuracy of 88%. Our future research will work towards implementing this model as a mobile application.
Acknowledgements
I am thankful to my college for giving me the opportunity to make this research a success. I give my special thanks and sincere gratitude to Prof. Sonia Dubey for encouraging me to complete this research paper, guiding me and helping me through all the obstacles in the research. Without her assistance, this research paper would have been impossible. I also express my gratitude to all my teachers of past years, who have bestowed deep understanding and knowledge on us. I am obliged to my parents and family members, who have always supported and encouraged me in each and every step.
REFERENCES
Journal Papers:
[1] Christopher Lee and Yangsheng Xu, "Online, interactive learning of gestures for human–robot interfaces," The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, 1996.
[2] Richard Watson, "Gesture recognition techniques," Technical Report TCD-CS-93-11, Department of Computer Science, Trinity College Dublin, July 1993.
[3] Ray Lockton, "Hand gesture recognition using computer vision," 4th-year project report, Balliol College, Department of Engineering Science, Oxford University, 2000.
[4] P. T. Hai, H. C. Thinh, B. Van Phuc and H. H. Kha, "Automatic feature extraction for Vietnamese sign language recognition using support vector machine," 2018 2nd International Conference on Recent Advances in Signal Processing, Telecommunications & Computing (SigTelCom), 2018.
[5] Purva C. Badhe and Vaishali Kulkarni, "Indian Sign Language translator using gesture recognition algorithm," 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS).
[6] P. Gajalaxmi and T. Sree Sharmila, "Sign language recognition for invariant features based on multiclass support vector machine with BeamECOC optimization," 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI).
[7] Etsuko Ueda, Yoshio Matsumoto, Masakazu Imai and Tsukasa Ogasawara, "Hand pose estimation for vision-based human interface," IEEE Transactions on Industrial Electronics, Vol. 50, No. 4, pp. 676-684, 2003.
[8] Proceedings of the International MultiConference of Engineers and Computer Scientists 2009 (IMECS 2009), Vol. I, Hong Kong, March 18-20, 2009.
[9] Rashmi D. Kyatanavar and P. R. Futane, "Comparative study of sign language recognition systems," International Journal of Scientific and Research Publications, Volume 2, Issue 6, June 2012, ISSN 2250-315 (Department of Computer Engineering, Sinhgad College of Engineering, Pune, India).
[10] Md Azher Uddin and Shayhan Ameen Chowdhury, "Hand sign language recognition for Bangla alphabet using support vector machines," 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET).
[11] Muhammad Aminur Rahaman, Mahmood Jasim, Md. Haider Ali and Md. Hasanuzzaman, "Real-time computer vision based Bengali sign language recognition," 2014 17th International Conference on Computer and Information Technology (ICCIT).
[12] Etsuko Ueda, Yoshio Matsumoto, Masakazu Imai and Tsukasa Ogasawara, "Hand pose estimation for vision-based human interface," IEEE Transactions on Industrial Electronics, Vol. 50, No. 4, pp. 676-684, 2003.
[13] T. Starner, J. Weaver and A. Pentland, "Real-time American Sign Language recognition using desk and wearable computer based video," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:1371-1375, 1998.
[14] T. Starner and A. Pentland, "Real-time American Sign Language recognition from video using hidden Markov models," in: Motion-Based Recognition, pp. –243, 1997.
Books:
[15] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing (2nd Edition), January 15, 2002. ISBN-10: 0201180758, ISBN-13: 978-0201180756.
More Related Content

What's hot

Real time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVMReal time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVMijtsrd
 
Real time human-computer interaction
Real time human-computer interactionReal time human-computer interaction
Real time human-computer interactionijfcstjournal
 
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-CamHand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-Camijsrd.com
 
Biometric Class Attendace System
Biometric Class Attendace SystemBiometric Class Attendace System
Biometric Class Attendace SystemAgbona Azeez
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
An HCI Principles based Framework to Support Deaf Community
An HCI Principles based Framework to Support Deaf CommunityAn HCI Principles based Framework to Support Deaf Community
An HCI Principles based Framework to Support Deaf CommunityIJEACS
 
SMARCOS Abstract Paper submitted to ICCHP 2012
SMARCOS Abstract Paper submitted to ICCHP 2012SMARCOS Abstract Paper submitted to ICCHP 2012
SMARCOS Abstract Paper submitted to ICCHP 2012Smarcos Eu
 
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET Journal
 
Human computer interaction with Nalaemton
Human computer interaction with NalaemtonHuman computer interaction with Nalaemton
Human computer interaction with NalaemtonNalaemton S
 
Human Computer Interface
Human Computer InterfaceHuman Computer Interface
Human Computer Interfacesanathkumar
 
IRJET- Voice Assistant for Visually Impaired People
IRJET-  	  Voice Assistant for Visually Impaired PeopleIRJET-  	  Voice Assistant for Visually Impaired People
IRJET- Voice Assistant for Visually Impaired PeopleIRJET Journal
 
human computer interface
human computer interfacehuman computer interface
human computer interfaceSantosh Kumar
 
Design of a hand geometry based biometric system
Design of a hand geometry based biometric systemDesign of a hand geometry based biometric system
Design of a hand geometry based biometric systemBhavi Bhatia
 
Human computerinterface
Human computerinterfaceHuman computerinterface
Human computerinterfaceKumar Aryan
 
Ubitous computing ppt
Ubitous computing pptUbitous computing ppt
Ubitous computing pptjolly9293
 

What's hot (20)

Real time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVMReal time Myanmar Sign Language Recognition System using PCA and SVM
Real time Myanmar Sign Language Recognition System using PCA and SVM
 
Real time human-computer interaction
Real time human-computer interactionReal time human-computer interaction
Real time human-computer interaction
 
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-CamHand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
 
Biometric Class Attendace System
Biometric Class Attendace SystemBiometric Class Attendace System
Biometric Class Attendace System
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
An HCI Principles based Framework to Support Deaf Community
An HCI Principles based Framework to Support Deaf CommunityAn HCI Principles based Framework to Support Deaf Community
An HCI Principles based Framework to Support Deaf Community
 
Mie presentation
Mie presentationMie presentation
Mie presentation
 
SMARCOS Abstract Paper submitted to ICCHP 2012
SMARCOS Abstract Paper submitted to ICCHP 2012SMARCOS Abstract Paper submitted to ICCHP 2012
SMARCOS Abstract Paper submitted to ICCHP 2012
 
Human Computer Interaction of an Information System
Human Computer Interaction of an Information SystemHuman Computer Interaction of an Information System
Human Computer Interaction of an Information System
 
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually Impaired
 
Human computer interaction with Nalaemton
Human computer interaction with NalaemtonHuman computer interaction with Nalaemton
Human computer interaction with Nalaemton
 
Interactive Screen
Interactive ScreenInteractive Screen
Interactive Screen
 
Human Computer Interface
Human Computer InterfaceHuman Computer Interface
Human Computer Interface
 
IRJET- Voice Assistant for Visually Impaired People
IRJET-  	  Voice Assistant for Visually Impaired PeopleIRJET-  	  Voice Assistant for Visually Impaired People
IRJET- Voice Assistant for Visually Impaired People
 
human computer interface
human computer interfacehuman computer interface
human computer interface
 
Design of a hand geometry based biometric system
Design of a hand geometry based biometric systemDesign of a hand geometry based biometric system
Design of a hand geometry based biometric system
 
Human computerinterface
Human computerinterfaceHuman computerinterface
Human computerinterface
 
Nikppt
NikpptNikppt
Nikppt
 
Ubitous computing ppt
Ubitous computing pptUbitous computing ppt
Ubitous computing ppt
 
14 5 b8
14 5 b814 5 b8
14 5 b8
 

Similar to 183

DHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEDHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEIRJET Journal
 
DHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEDHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEIRJET Journal
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language DetectorIRJET Journal
 
A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...journalBEEI
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeIRJET Journal
 
Projects1920 c6 fina;
Projects1920 c6 fina;Projects1920 c6 fina;
Projects1920 c6 fina;TabassumBanu5
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Languageijsrd.com
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysispaperpublications3
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysispaperpublications3
 
IRJET - Sign Language Recognition using Neural Network
IRJET - Sign Language Recognition using Neural NetworkIRJET - Sign Language Recognition using Neural Network
IRJET - Sign Language Recognition using Neural NetworkIRJET Journal
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbRSIS International
 
Eye(I) Still Know! – An App for the Blind Built using Web and AI
Eye(I) Still Know! – An App for the Blind Built using Web and AIEye(I) Still Know! – An App for the Blind Built using Web and AI
Eye(I) Still Know! – An App for the Blind Built using Web and AIDr. Amarjeet Singh
 
Sign Language Recognition
Sign Language RecognitionSign Language Recognition
Sign Language RecognitionIRJET Journal
 
Sign Language Recognition using Deep Learning
Sign Language Recognition using Deep LearningSign Language Recognition using Deep Learning
Sign Language Recognition using Deep LearningIRJET Journal
 
IRJET- Hand Gesture Recognition for Deaf and Dumb
IRJET- Hand Gesture Recognition for Deaf and DumbIRJET- Hand Gesture Recognition for Deaf and Dumb
IRJET- Hand Gesture Recognition for Deaf and DumbIRJET Journal
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490IJRAT
 
Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...journalBEEI
 

Similar to 183 (20)

DHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEDHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTE
 
DHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTEDHWANI- THE VOICE OF DEAF AND MUTE
DHWANI- THE VOICE OF DEAF AND MUTE
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language Detector
 
A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...A gesture recognition system for the Colombian sign language based on convolu...
A gesture recognition system for the Colombian sign language based on convolu...
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using Mediapipe
 
Projects1920 c6 fina;
Projects1920 c6 fina;Projects1920 c6 fina;
Projects1920 c6 fina;
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Language
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
delna's journal
delna's journaldelna's journal
delna's journal
 
IRJET - Sign Language Recognition using Neural Network
IRJET - Sign Language Recognition using Neural NetworkIRJET - Sign Language Recognition using Neural Network
IRJET - Sign Language Recognition using Neural Network
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and Dumb
 
Eye(I) Still Know! – An App for the Blind Built using Web and AI
Eye(I) Still Know! – An App for the Blind Built using Web and AIEye(I) Still Know! – An App for the Blind Built using Web and AI
Eye(I) Still Know! – An App for the Blind Built using Web and AI
 
Sign Language Recognition
Sign Language RecognitionSign Language Recognition
Sign Language Recognition
 
gesture-recognition
gesture-recognitiongesture-recognition
gesture-recognition
 
Sign Language Recognition using Deep Learning
Sign Language Recognition using Deep LearningSign Language Recognition using Deep Learning
Sign Language Recognition using Deep Learning
 
D0361015021
D0361015021D0361015021
D0361015021
 
IRJET- Hand Gesture Recognition for Deaf and Dumb
IRJET- Hand Gesture Recognition for Deaf and DumbIRJET- Hand Gesture Recognition for Deaf and Dumb
IRJET- Hand Gesture Recognition for Deaf and Dumb
 
Paper id 23201490
Paper id 23201490Paper id 23201490
Paper id 23201490
 
Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...Deep convolutional neural network for hand sign language recognition using mo...
Deep convolutional neural network for hand sign language recognition using mo...
 

More from vivatechijri

Understanding the Impact and Challenges of Corona Crisis on Education Sector...
Understanding the Impact and Challenges of Corona Crisis on  Education Sector...Understanding the Impact and Challenges of Corona Crisis on  Education Sector...
Understanding the Impact and Challenges of Corona Crisis on Education Sector...vivatechijri
 
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION  TOWARDS IMPROVEMENT AND DEVELOPMENT  LEADERSHIP ONLY CAN LEAD THE ORGANIZATION  TOWARDS IMPROVEMENT AND DEVELOPMENT
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT vivatechijri
 
A study on solving Assignment Problem
A study on solving Assignment ProblemA study on solving Assignment Problem
A study on solving Assignment Problemvivatechijri
 
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly...
Structural and Morphological Studies of Nano Composite  Polymer Gel Electroly...Structural and Morphological Studies of Nano Composite  Polymer Gel Electroly...
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly...vivatechijri
 
Theoretical study of two dimensional Nano sheet for gas sensing application
Theoretical study of two dimensional Nano sheet for gas sensing  applicationTheoretical study of two dimensional Nano sheet for gas sensing  application
Theoretical study of two dimensional Nano sheet for gas sensing applicationvivatechijri
 
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD
METHODS FOR DETECTION OF COMMON  ADULTERANTS IN FOODMETHODS FOR DETECTION OF COMMON  ADULTERANTS IN FOOD
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOODvivatechijri
 
The Business Development Ethics
The Business Development EthicsThe Business Development Ethics
The Business Development Ethicsvivatechijri
 
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGE
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGEAn Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGE
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGEvivatechijri
 
Enhancing The Capability of Chatbots
Enhancing The Capability of ChatbotsEnhancing The Capability of Chatbots
Enhancing The Capability of Chatbotsvivatechijri
 
Smart Glasses Technology
Smart Glasses TechnologySmart Glasses Technology
Smart Glasses Technologyvivatechijri
 
Future Applications of Smart Iot Devices
Future Applications of Smart Iot DevicesFuture Applications of Smart Iot Devices
Future Applications of Smart Iot Devicesvivatechijri
 
Cross Platform Development Using Flutter
Cross Platform Development Using FlutterCross Platform Development Using Flutter
Cross Platform Development Using Fluttervivatechijri
 
Recommender Systems
Recommender SystemsRecommender Systems
Recommender Systemsvivatechijri
 
Light Fidelity(LiFi)- Wireless Optical Networking Technology
Light Fidelity(LiFi)- Wireless Optical Networking TechnologyLight Fidelity(LiFi)- Wireless Optical Networking Technology
Light Fidelity(LiFi)- Wireless Optical Networking Technologyvivatechijri
 
Social media platform and Our right to privacy
Social media platform and Our right to privacySocial media platform and Our right to privacy
Social media platform and Our right to privacyvivatechijri
 
THE USABILITY METRICS FOR USER EXPERIENCE
THE USABILITY METRICS FOR USER EXPERIENCETHE USABILITY METRICS FOR USER EXPERIENCE
THE USABILITY METRICS FOR USER EXPERIENCEvivatechijri
 
Google File System
Google File SystemGoogle File System
Google File Systemvivatechijri
 
A Study of Tokenization of Real Estate Using Blockchain Technology
A Study of Tokenization of Real Estate Using Blockchain TechnologyA Study of Tokenization of Real Estate Using Blockchain Technology
A Study of Tokenization of Real Estate Using Blockchain Technologyvivatechijri
 

More from vivatechijri (20)

Understanding the Impact and Challenges of Corona Crisis on Education Sector...
Understanding the Impact and Challenges of Corona Crisis on  Education Sector...Understanding the Impact and Challenges of Corona Crisis on  Education Sector...
Understanding the Impact and Challenges of Corona Crisis on Education Sector...
 
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION  TOWARDS IMPROVEMENT AND DEVELOPMENT  LEADERSHIP ONLY CAN LEAD THE ORGANIZATION  TOWARDS IMPROVEMENT AND DEVELOPMENT
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT
 
A study on solving Assignment Problem
A study on solving Assignment ProblemA study on solving Assignment Problem
A study on solving Assignment Problem
 
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly...
Structural and Morphological Studies of Nano Composite  Polymer Gel Electroly...Structural and Morphological Studies of Nano Composite  Polymer Gel Electroly...
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly...
 
Theoretical study of two dimensional Nano sheet for gas sensing application
Theoretical study of two dimensional Nano sheet for gas sensing  applicationTheoretical study of two dimensional Nano sheet for gas sensing  application
Theoretical study of two dimensional Nano sheet for gas sensing application
 
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD
METHODS FOR DETECTION OF COMMON  ADULTERANTS IN FOODMETHODS FOR DETECTION OF COMMON  ADULTERANTS IN FOOD
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD
 
The Business Development Ethics
The Business Development EthicsThe Business Development Ethics
The Business Development Ethics
 
Digital Wellbeing
Digital WellbeingDigital Wellbeing
Digital Wellbeing
 
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGE
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGEAn Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGE
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGE
 
Enhancing The Capability of Chatbots
Enhancing The Capability of ChatbotsEnhancing The Capability of Chatbots
Enhancing The Capability of Chatbots
 
Smart Glasses Technology
Smart Glasses TechnologySmart Glasses Technology
Smart Glasses Technology
 
Future Applications of Smart Iot Devices
Future Applications of Smart Iot DevicesFuture Applications of Smart Iot Devices
Future Applications of Smart Iot Devices
 
Cross Platform Development Using Flutter
Cross Platform Development Using FlutterCross Platform Development Using Flutter
Cross Platform Development Using Flutter
 
3D INTERNET
3D INTERNET3D INTERNET
3D INTERNET
 
Recommender Systems
Recommender SystemsRecommender Systems
Recommender Systems
 
Light Fidelity(LiFi)- Wireless Optical Networking Technology
Light Fidelity(LiFi)- Wireless Optical Networking TechnologyLight Fidelity(LiFi)- Wireless Optical Networking Technology
Light Fidelity(LiFi)- Wireless Optical Networking Technology
 
Social media platform and Our right to privacy
Social media platform and Our right to privacySocial media platform and Our right to privacy
Social media platform and Our right to privacy
 
THE USABILITY METRICS FOR USER EXPERIENCE
THE USABILITY METRICS FOR USER EXPERIENCETHE USABILITY METRICS FOR USER EXPERIENCE
THE USABILITY METRICS FOR USER EXPERIENCE
 
Google File System
Google File SystemGoogle File System
Google File System
 
A Study of Tokenization of Real Estate Using Blockchain Technology
A Study of Tokenization of Real Estate Using Blockchain TechnologyA Study of Tokenization of Real Estate Using Blockchain Technology
A Study of Tokenization of Real Estate Using Blockchain Technology
 

Recently uploaded

Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 

Recently uploaded (20)

Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 

183

  • 1. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-104 www.viva-technology.org/New/IJRI Sign Language recognition using image-based hand gesture recognition techniques Prasad Suhas Tandel1 , Prof. Sonia Dubey2 1 (Department of MCA, Viva School Of MCA/ University of Mumbai, India) 2 (Department of MCA, Viva School Of MCA/ University of Mumbai, India) Abstract : Touch is one of the most common forms of sign language used in oral communication. It is most commonly used by deaf and dumb people who have difficulty hearing or speaking. Communication between them or ordinary people. Various sign-language programs have been developed by many manufacturers around the world, but they are relatively flexible and affordable for end users. Therefore, this paper has presented software that introduces a type of system that can automatically detect sign language to help deaf and mute people communicate better with other people or ordinary people. Pattern recognition and hand recognition are developing fields of research. Being an integral part of meaningless hand-to-hand communication plays a major role in our daily lives. The handwriting system gives us a new, natural, easy-to- use communication system with a computer that is very common to humans. Considering the similarity of the human condition with four fingers and one thumb, the software aims to introduce a real-time hand recognition system based on the acquisition of some of the structural features such as position, mass centroid, finger position, thumb instead of raised or folded finger. Keywords – Communication System, hand recognition, pattern, sign language, Touch I. INTRODUCTION Deaf people are often deprived of normal contact with other people in the community. It has been shown that they sometimes find it difficult to communicate with ordinary people with their hands, as very few of them are seen by most people. Since people with hearing or hearing problems cannot speak as normal people so they should rely on some form of visual communication most of the time. Sign language is a major means of communication in a deaf and dumb community. Like any other language it has acquired a grammar and vocabulary but uses a visual exchange system. The problem arises when mute or deaf people try to express their feelings to others with the help of these sign language programs. This is because ordinary people usually do not know these grammars. As a result, it has been found that contact with a deaf person is limited only to his family or to the deaf community. The importance of sign language is emphasized by the growing public acceptance and funding of the international project. In this age of Technology, the need for computer-based programming is in dire need of a dumb community. However, researchers have been attacking the problem for some time now and the results show some promise. The technology of interest is designed for speech recognition but no real marketing product of brand recognition is actually available in the current market. The idea is to make computers understand human language and develop a user-friendly computer interface (HCI). Making a computer understand speech, facial expressions, and touch is a step in the right direction. Gestures are information that can be exchanged with words. One can make countless movements at a time. 
As human touch is seen visually, it is a matter of great interest to computer-generated researchers. This project aims to determine human mobility by performing HCI. Coding of these actions in machine language requires a complex programming algorithm. In our project we focus on Image Analysis and Template Matching to find the best product. II. LITERATURE REVIEW
  • 2. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-105 www.viva-technology.org/New/IJRI As illustrated in Figure 1, hand gestures for human–computer interaction (HCI) started with the invention of the data glove sensor. It provided simple computer recognition instructions. The gloves have used a variety of sensors to capture the movement of the hand and its position by finding the right links for the palm area and the fingers. The various sensors that used the same process based on the bending angle were the bending sensor, the small migration sensor, the optical fibre transducer, the flexible sensors and the accelerometer sensor. These sensors exploit different physical principles according to their type. Figure 1. Different techniques for hand gestures. (a) Glove-based attached sensor either connected to the computer or portable; (b) computer vision–based camera using a marked glove or just a naked Although the techniques mentioned above have provided good outcomes, they have various limitations that make them eligible for the elderly, who may experience discomfort and confusion due to wire connection problems. In addition, older people suffering from chronic diseases that lead to loss of muscle function may not be able to wear and take off their gloves, making them uncomfortable and preventing them from using them for long periods of time[1]. These sensors may also cause skin damage, infection or inappropriate reactions in people with sensitive skin or those who have burns. In addition, some sensors are quite expensive. Some of these problems were discussed in a study by Lamberti and Camastra, who developed a computer-vision system based on coloured marked gloves. Although this study did not require the attachment of sensors, it still required coloured gloves to be worn. The most effective commercial glove is by far the VPL Data Glove [2]. Built by Zimmerman during the 1970’s. It is based upon patented optical fiber sensors along the back of the fingers. Star-ner and Pentland developed a glove-environmental system that can detect 40 signals from American Sign Language (ASL) at 5Hz. Another research is by Hyeon-Kyu Lee and Jin H. Kim presented work on real-time hand-gesture ecognition using HMM (Hidden Markov Model). Kjeldsen and Kendersi devised a technique for doing skin-tone mentation in HSV space, depending on the basis that the skin tone in the images takes on the volume attached to the HSV space. They also started a program that uses a neural network of back distribution to detect touch from hand- separated images[1]. Etsuko Ueda and Jososhio Matsumoto have outlined the process of the novel a possible rating scale used for human-based communication, in this way, hand circuits are extracted from multiple images acquired a multi-idea camera system, and creating a voxel Model [12]. This paper also discusses these methods in detail and summarizes some modern research under different considerations (type of camera used, image or video editing, type of method, partition algorithm used, recognition rate, type of region of interest processing, number of gestures, application area, limitation or invariant factor, and detection range achieved and in some cases data set use, runtime speed, hardware run, type of error). In addition, the review highlights the most popular applications associated with this topic
  • 3. VIVA-Tech International Journal for Research and Innovation Volume 1, Issue 4 (2021) ISSN(Online): 2581-7280 VIVA Institute of Technology 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021) F-106 www.viva-technology.org/New/IJRI III. HAND GESTURE METHODS The proposed plan consists of two main categories: (1) Hand separation (2) Hand sign recognition. The block diagram shows the effectiveness of the proposed system. Hand features are an important criterion for separation to distinguish between hand touches. These elements must be able to adapt to different hands and touches by different people. In this system, we used histograms of targeted gradients (HOG) as an element adjective. It is better than other definitions because it is able to adapt to changing light and rotation of objects. We do not look at the whole picture, but we divide it into smaller cells and then the pixels inside the cells on the edge or gradient direction histogram are calculated 3.1 Preparing the dataset We have made a database of 26 English-language alphabets in Indian Sign Language. Each sign it is made up of two different people with a different structure of conditions in different lighting conditions. Videos are recorded on camera and each video is framed by frame to photos and then arranged in 100 frames and added to get about 250 photos for each sign. The data was then divided into 4800 training images and 1200 test images 3.2 Image Processing and edge detection Before performing a feature, the images should be processed in such a way that only the most important details are considered and the unwanted, disturbing and overpowering details are ignored. Images are first converted to 100 * 100-pixel size for fast countdown [3]. The image is then converted to Gray and finally converted to a binary image. At the same time, skin colour is obtained using the Y-Cb-Cr model. Finally, the edge detection was done with a Canny edge detector. The process is illustrated in figure below Output of skin colour detection: a) Original cropped image, b) Grey scale converted image, c) Skin colour detection output
3.3 Feature extraction
Extracting features from the data is a critical part of any recognition process. It can be done using various methods such as Fourier descriptors, the Scale-Invariant Feature Transform (SIFT) or Principal Component Analysis (PCA). The Histogram of Oriented Gradients (HOG) is another effective feature-extraction method, and it is the one used in this paper. The basic idea of HOG is that an object or shape within an image can be represented by the distribution of intensity gradients or edge directions. The image is divided into smaller cells and a gradient histogram is computed for each cell; gradients are placed into orientation bins and weighted according to their magnitude and angle. To improve accuracy, the cells in the image are normalized so that the descriptor is not affected by changes in illumination and contrast. Finally, the HOG feature vector is computed for the whole image.

3.4 Template matching and sign recognition
The feature vector generated in the previous step is fed into the classifier. In this paper we use a Support Vector Machine (SVM) for classification. With SVM classification we can increase accuracy and avoid over-fitting. In an SVM, data points are plotted in an n-dimensional space, where n is the number of features and each feature corresponds to the value of one coordinate. The SVM then finds a hyperplane that separates the classes. The trained model is then used for sign-language recognition in real time.

3.5 Text-to-speech conversion
This stage falls under Artificial Intelligence [9]: once template matching has successfully identified the sign, it is translated into text and then into an audio format. We used the Google Text-to-Speech API to convert the recognized sign language into audio. It is one of the best text-to-speech APIs available; unlike many other TTS APIs, it produces a human-like voice. The sign language is translated using the steps above and the resulting text is passed to the text-to-speech function, which converts it into sound. In this system, sign-language translations can therefore be seen and heard simultaneously, which makes the system much easier to use.

IV. ALPHABET RECOGNITION

The following table shows the values assigned to each finger [8].

Binary alphabet calculation: it is possible to display a total of (2^5 − 1), i.e. 31, gestures using the fingers of a single hand, and (2^10 − 1), i.e. 1023, gestures if both hands are used. The figure shows the binary finger-tapping scheme with the value assigned to each finger; by referring to the gesture table, the code of each alphabet can be obtained.
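As a concrete illustration of Sections 3.3 and 3.4, the sketch below computes HOG descriptors and trains a multi-class SVM, assuming scikit-image and scikit-learn. The orientation count, cell size, block size and kernel choice are illustrative assumptions; the paper does not state its exact HOG or SVM parameters.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(grey_100x100):
    """HOG feature vector for one 100 x 100 grey-scale image (Section 3.3).

    The orientation, cell and block sizes below are illustrative assumptions."""
    return hog(grey_100x100,
               orientations=9,             # gradient-orientation bins per cell
               pixels_per_cell=(10, 10),   # image is divided into small cells
               cells_per_block=(2, 2),     # blocks used for contrast normalization
               block_norm='L2-Hys')

def train_svm(train_images, train_labels):
    """Train a multi-class SVM on HOG descriptors (Section 3.4)."""
    features = np.array([hog_descriptor(img) for img in train_images])
    clf = SVC(kernel='linear')              # kernel choice is an assumption
    clf.fit(features, train_labels)
    return clf

def recognize_sign(clf, grey_100x100):
    """Predict the alphabet label for one preprocessed frame."""
    return clf.predict([hog_descriptor(grey_100x100)])[0]
```

With the dataset of Section 3.1, train_svm would be fitted on the 4800 training images and recognize_sign evaluated on the 1200 test images.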
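The text-to-speech step of Section 3.5 could be wired up as below using the gTTS package, a commonly used Python wrapper around Google's text-to-speech service; the choice of this particular wrapper and of saving the audio to an MP3 file are assumptions, not details stated in the paper.

```python
from gtts import gTTS  # Python wrapper around the Google Text-to-Speech service

def speak(recognized_text, out_file="sign_output.mp3"):
    """Convert the recognized sign text into an audio file (Section 3.5)."""
    tts = gTTS(text=recognized_text, lang="en")
    tts.save(out_file)  # the saved MP3 can be played back by any audio player
    return out_file
```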
Alphabet code

V. CONCLUSION

In this paper, we have explained the implementation of a system that recognizes sign language and translates it into English. We discussed the importance of an ISL translator for communication with the deaf and dumb. In this system, frames of the hand are captured using a webcam. Our project aims to bridge the communication gap by introducing an inexpensive computer into the communication path, so that sign language can be automatically captured, recognized and translated to speech for the benefit of blind people. In the other direction, speech must be analysed and converted to either a sign or a textual display on the screen for the benefit of the hearing impaired. The captured frames are processed to obtain improved features; feature extraction and segmentation algorithms are then applied to translate the sign language into English text, and this text is converted to speech using the text-to-speech API. The program uses the above algorithms to obtain the final result. The proposed model was tested on a database containing 26 signs performed by two different individuals, and the results show an overall system accuracy of 88%. Our future work will aim at implementing this model as a mobile application.
Acknowledgements
I am thankful to my college for giving me the opportunity to make this research a success. I give my special thanks and sincere gratitude to Prof. Sonia Dubey for encouraging me to complete this research paper, guiding me and helping me through all the obstacles in the research. Without her assistance this research paper would have been impossible. I also express my gratitude towards all the teachers of my past years, who have bestowed deep understanding and knowledge on us. I am obliged to my parents and family members, who have always supported and encouraged me at each and every step.

REFERENCES
Journal Papers:
[1] Christopher Lee and Yangsheng Xu, "Online, interactive learning of gestures for human robot interfaces", The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, 1996.
[2] Richard Watson, "Gesture recognition techniques", Technical Report TCD-CS-93-11, Department of Computer Science, Trinity College Dublin, July 1993.
[3] Ray Lockton, "Hand Gesture Recognition Using Computer Vision", 4th-year project report, Balliol College, Department of Engineering Science, Oxford University, 2000.
[4] Hai, P. T., Thinh, H. C., Van Phuc, B., and Kha, H. H., "Automatic feature extraction for Vietnamese sign language recognition using support vector machine", 2018 2nd International Conference on Recent Advances in Signal Processing, Telecommunications & Computing (SigTelCom), 2018.
[5] Purva C. Badhe, Vaishali Kulkarni, "Indian Sign Language Translator using Gesture Recognition Algorithm", 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS).
[6] P. Gajalaxmi, T. Sree Sharmila, "Sign Language Recognition for Invariant Features Based on Multiclass Support Vector Machine with BeamECOC Optimization", IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI-2017).
[7] Etsuko Ueda, Yoshio Matsumoto, Masakazu Imai, Tsukasa Ogasawara, "Hand Pose Estimation for Vision Based Human Interface", IEEE Transactions on Industrial Electronics, Vol. 50, No. 4, pp. 676–684, 2003.
[8] International MultiConference of Engineers and Computer Scientists 2009 (IMECS 2009), Vol. I, Hong Kong, March 18–20, 2009.
[9] Rashmi D. Kyatanavar, P. R. Futane, "Comparative Study of Sign Language Recognition Systems", Department of Computer Engineering, Sinhgad College of Engineering, Pune, India; International Journal of Scientific and Research Publications, Volume 2, Issue 6, June 2012, ISSN 2250-315.
[10] Md Azher Uddin, Shayhan Ameen Chowdhury, "Hand Sign Language Recognition for Bangla Alphabet using Support Vector Machines", 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET).
[11] Muhammad Aminur Rahaman, Mahmood Jasim, Md. Haider Ali and Md. Hasanuzzaman, "Real-Time Computer Vision based Bengali Sign Language Recognition", 2014 17th International Conference on Computer and Information Technology (ICCIT).
[12] Etsuko Ueda, Yoshio Matsumoto, Masakazu Imai, Tsukasa Ogasawara, "Hand Pose Estimation for Vision Based Human Interface", IEEE Transactions on Industrial Electronics, Vol. 50, No. 4, pp. 676–684, 2003.
[13] Starner, T., Weaver, J., Pentland, A., "Real-time American sign language recognition using desk and wearable computer based video", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, pp. 1371–1375, 1998.
[14] Starner, T., Pentland, A., "Real-time American sign language recognition from video using hidden Markov models", in Motion-Based Recognition, pp. 227–243, 1997.
Books:
[15] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition, January 15, 2002, ISBN-10: 0201180758, ISBN-13: 978-0201180756.