3. Speech-impaired people use hand signs
and gestures to communicate, and others
often have difficulty understanding this
language. Hence, there is a need for a
system that recognizes the different
signs and gestures and conveys the
information to non-signers. Such a system
bridges the gap between speech-impaired
people and the rest of society.
4. • Communication is the imparting, sharing, and
conveying of information, news, ideas, and
feelings.
• Sign language is a form of non-verbal
communication that is gaining impetus and a
strong foothold due to its applications in a large
number of fields.
• The most prominent application of this method is
its use by differently-abled persons, such as the
deaf.
• A gesture is a movement of the hand or head
that expresses something.
5. SL NO. | TITLE | AUTHOR | YEAR
01 | Hand Gesture Recognition Based on Computer Vision | Munir Oudah, Ali Al-Naji and Javaan Chahl | 2020
02 | Design of Human Machine Interactive System Based on Hand Gesture Recognition | Xiaofei Ji, Zhibo Wang | 2019
03 | Hand Gesture Recognition for Real Time Human Interaction System | Poonam Sonwalkar, Tanuja Sakhare, Ashwini Patil, Sonal Kale | 2015
6. MODEL 1: Hand Gesture Recognition Based on Digital Image Processing Using
MATLAB
• Developed by a team of researchers and engineers working in the field of
computer vision and image processing
• This model combines digital image processing techniques with
machine learning algorithms
• Limitations:
o Limited recognition of dynamic gestures
o High computational requirements
o Sensitivity to hand orientation and position
7. MODEL 2: System for Recognition of Indian Sign Language of Deaf People
Using Otsu's Algorithm
• Developed by a team of researchers and engineers from SIT, India
• Otsu's algorithm uses image processing techniques to classify hand
gestures
• Limitations:
o Low accuracy
o Difficulty in adapting to new users
o Limited number of recognizable hand gestures
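The core of Otsu's algorithm is a threshold search over the grayscale histogram: it picks the cutoff that best separates hand pixels from the background by maximizing the between-class variance. A minimal numpy-only sketch (the synthetic test image is illustrative, not from the cited system):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image.

    Otsu's method chooses the threshold that maximizes the
    between-class variance of foreground and background pixels.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Example: a synthetic bimodal image (dark background, bright "hand")
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200            # bright square standing in for the hand
mask = img >= otsu_threshold(img)  # binary hand mask
```

On a cleanly bimodal image like this one, the mask isolates exactly the bright region; the method's weakness, as the slide notes, is that real hand images under uneven lighting rarely have such well-separated histograms.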
9. PROPOSED SYSTEM
MODEL NAME
Our proposed system is a sign language recognition
system using convolutional neural networks, which
recognizes various hand gestures by capturing video
and converting it into frames. The hand pixels are
then segmented, and the resulting image is sent to
the trained model for comparison. Thus, our system is
more robust in producing the exact text labels of letters.
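The frame pipeline described above (capture → segment hand pixels → classify) can be sketched as follows. This is a hypothetical outline, not the authors' implementation: the brightness-based segmentation rule and the `classify` stub stand in for the real skin-segmentation step and the trained CNN.

```python
import numpy as np

def segment_hand(frame):
    """Crude hand segmentation: keep pixels brighter than the frame mean,
    then crop to the bounding box of the segmented region."""
    mask = frame > frame.mean()
    ys, xs = np.nonzero(mask)
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def classify(patch, model=None):
    """Stand-in for the trained CNN: a real system would resize `patch`
    and run model.predict(...) to get a letter label."""
    return "A"

# Simulate one captured video frame with a bright "hand" region
frame = np.zeros((120, 160), dtype=np.uint8)
frame[30:90, 50:110] = 180
label = classify(segment_hand(frame))
```

In the real system each video frame would pass through this loop, and the per-frame letter labels would be accumulated into output text.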
20. I developed an effective method for dynamic
hand gesture recognition with 2D convolutional
neural networks, which gives accurate results
under varied conditions. My future work will include
more adaptive selection of the optimal hyper-
parameters of the CNNs, and investigating robust
classifiers that can classify higher-level dynamic
gestures, including activities and motion contexts.
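The building blocks of such a 2D CNN are convolution, a non-linearity, and pooling. A numpy-only sketch of one conv → ReLU → max-pool stage, purely illustrative (the real model would stack several learned layers in a deep-learning framework; the edge kernel here is hand-picked, not trained):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(img, size=2):
    """Non-overlapping max-pooling over size x size windows."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return img[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.random.rand(28, 28)           # one gesture frame, 28x28 grayscale
edge = np.array([[1., -1.]])         # hand-picked horizontal-edge kernel
features = max_pool(np.maximum(conv2d(x, edge), 0))   # conv -> ReLU -> pool
```

Stacking several such stages, followed by fully connected layers and a softmax over the letter classes, yields the classifier the slide describes; the hyper-parameters mentioned above (kernel sizes, layer counts, pooling windows) are exactly the values a more adaptive search would tune.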
21. The proposed sign language recognition system, used to recognize sign language
letters, can be further extended to recognize gestures and facial expressions. Instead
of displaying letter labels, it would be more appropriate to display sentences, as a
more natural translation of the language. This also increases readability. The
scope can be widened to other sign languages. More training data can be
added to detect letters with greater accuracy. This project can further be
extended to convert the signs to speech.
22. • [1] S. Mitra and T. Acharya. Gesture recognition: A survey. IEEE
Transactions on Systems, Man, and Cybernetics, 37:311–324, 2007.
• [2] V. I. Pavlovic, R. Sharma, and T. S. Huang. Visual interpretation of
hand gestures for human-computer interaction: A review. PAMI,
19:677–695, 1997.
• [3] J. J. LaViola Jr. An introduction to 3D gestural interfaces. In
SIGGRAPH Course, 2014.
• [4] S. B. Wang, A. Quattoni, L. Morency, D. Demirdjian, and T. Darrell.
Hidden conditional random fields for gesture recognition. In CVPR,
pages 1521–1527, 2006.