HAND GESTURE RECOGNITION
Group Members:
Deepak Kumar Agrahari (1901320130033)
Anurag Prajapati (1901320130019)
Saurabh Kumar (1901320130089)
Branch – Information Technology (4th Year)
TABLE OF CONTENTS
1. Abstract
2. Introduction
Objective
Proposed System
3. Keywords and Definitions
4. System Requirements
Hardware Requirements
Software Requirements
5. System Design
General Overview
Use Case Diagram
ER Diagram
6. Implementation Details
Front-End
Back-End
7. Future Scope
8. References
Abstract
Hand gesture recognition systems have attracted considerable interest in recent years due to their wide range
of applications and their ability to support natural human-computer interaction. This study offers a survey of
modern hand gesture recognition systems. The key issues of hand gesture recognition are highlighted, along with
the challenges facing gesture systems. A review of recent posture and gesture recognition systems is also
offered, together with a summary of hand gesture research results and databases and a comparison of the main
gesture recognition phases. Finally, the benefits and drawbacks of the various systems are explored.
Introduction
• The majority of deaf-and-mute people use sign language, produced by body actions such as hand gestures, body motion,
eye movement, and facial expressions, to communicate with each other and with non-impaired people in their daily lives.
However, sign language remains a barrier for deaf and mute communities that intend to integrate into society. It is
therefore important to have a medium that can recognize gestures and translate them into words that ordinary people
understand, as the information carried by hand gestures is central to sign language. To bridge this communication gap,
a hand gesture recognition system for Sign Language Recognition (SLR) is required.
OBJECTIVE
• This project aims to design a real-time vision-based hand gesture recognition system using machine learning techniques
that could make the lives of deaf-and-mute people easier. In practice, signs are continuously spelled words mixing
both dynamic and static gestures, so the desired recognition system should be able to recognize both dynamic and static
gestures in ASL with promising accuracy.
PROPOSED SYSTEM
• The Gesture Recognition System takes input hand gestures through the built-in web camera at a resolution of 320 x 240
pixels. The images are captured in a well-lit environment, with lighting directed at the image source, against a
black background to avoid shadow effects. The images are captured at a specified distance (typically 1.5 – 2 ft)
between the camera and the signer, and the gestures are given with the palm side of the right hand. The captured video
is then processed for hand motion detection using the sum of absolute differences (SAD). Next, the hand is segmented,
and the segmented hand image is used to extract features, which are then used for gesture recognition.
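The SAD motion-detection step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the project's exact code: the block size, threshold value, and synthetic frames standing in for webcam input are all assumptions.

```python
import numpy as np

def sad_motion_mask(prev_frame, curr_frame, block=8, threshold=500):
    """Flag blocks whose sum of absolute differences (SAD) exceeds a threshold."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    h, w = diff.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            sad = diff[by * block:(by + 1) * block,
                       bx * block:(bx + 1) * block].sum()
            mask[by, bx] = sad > threshold
    return mask

# Synthetic 240x320 grayscale frames: a bright "hand" patch appears in frame 2.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = np.zeros((240, 320), dtype=np.uint8)
curr[100:140, 100:140] = 255           # region that changed between frames
mask = sad_motion_mask(prev, curr)
print(mask.shape, int(mask.sum()))     # (30, 40) 36
```

Blocks overlapping the changed region are flagged as motion; the flagged blocks then bound the region handed to the segmentation stage.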
Keywords and Definitions
System Requirements
• Hardware Requirements
• An Intel Core i5 10th-generation processor is used because it is faster than many alternatives and provides
reliable, stable operation over long sessions, so development can continue without interruption.
• 8 GB of RAM is used, as it provides fast read and write performance and supports the processing workload.
• A webcam is required.
• Software Requirements
• Operating system - Windows 10 is used as the operating system, as it is stable, supports more features,
and is more user-friendly.
• OpenCV - OpenCV is an open-source computer vision library with Python bindings, used in artificial
intelligence, machine learning, face recognition, and related fields. In OpenCV, "CV" stands for
computer vision, the field of study that helps computers understand the content of digital images
such as photographs and videos.
• Keras - Keras is a deep learning API written in Python, running on top of the machine learning
platform TensorFlow. It was developed with a focus on enabling fast experimentation: being able to go
from idea to result quickly is key to doing good research.
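As an illustration of how Keras can define a small convolutional network for static-gesture classification: the layer sizes, the 64x64 grayscale input, and the 26-class output (one per ASL letter) are illustrative assumptions, not the project's exact network.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26  # assumption: one class per ASL letter

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),            # grayscale hand crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 26)
```

Training would then be a call to `model.fit` on labelled hand-gesture images resized to the input shape.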
System Design
• General Overview
Camera Input → Hand motion detection and segmentation → Feature Extraction → Gesture Recognition → Slide show control
Use Case Diagram
ER Diagram
Implementation Details
• Front-End
• Python
• OpenCV
• Keras
• ANN (Artificial Neural Network)
• CNN (Convolutional Neural Network)
• ML (Machine Learning)
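To illustrate the ANN listed above, here is the forward pass of a small fully connected classifier head in pure NumPy. The layer sizes (a 128-dimensional feature vector mapped to 26 gesture classes) and random weights are illustrative assumptions, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative sizes: 128 input features, 64 hidden units, 26 gesture classes.
W1, b1 = rng.normal(size=(128, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 26)) * 0.1, np.zeros(26)

def ann_forward(features):
    hidden = relu(features @ W1 + b1)   # hidden layer activation
    return softmax(hidden @ W2 + b2)    # per-class probabilities

probs = ann_forward(rng.normal(size=(1, 128)))
print(probs.shape)  # (1, 26)
```

The predicted gesture is simply the index of the highest probability, e.g. `int(probs.argmax())`.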
Future Scope
• The scope of this project is to build a real-time gesture classification system that can automatically
detect gestures under natural lighting conditions. To accomplish this objective, a real-time
gesture-based system is developed to identify gestures.
• Enable deaf-and-mute people to communicate with non-impaired people.
• Build a system that gives deaf-and-mute (D&M) people a way to share their thoughts and ideas.
• Detect, recognise, and interpret hand gestures through computer vision.
References
https://www.opencv.org
https://en.wikipedia.org/wiki/TensorFlow
https://www.python.org
https://en.wikipedia.org/wiki/Convolutional_neural_network
