A Hand Gesture Sign Language to Text Real-Time Interpreter using Google Mediapipe Artificial Intelligence
Contents
• Hand Gesture & Sign Language.
• Problem Statement.
• Background Work.
• Drawbacks.
• What is Mediapipe?
• How it Works.
• Conclusion.
Hand Gesture & Sign Language
• We aim to create a device that helps people who do not know sign language communicate with those who rely on it.
• How might we improve communication with the hearing-impaired or mute by using a real-time visual hand gesture/sign language to text interpreter?
Problem Statement
The proposed system is based on:
• Machine learning.
• Artificial intelligence, using a deep learning neural network for feature extraction and classification.
• The Google Mediapipe hand gesture recognition API.
• OpenCV (Open Source Computer Vision Library).
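The "deep learning neural network for feature extraction and classification" in the list above can be sketched as a small dense network mapping a landmark feature vector to gesture class probabilities. The layer sizes, class count, and random weights below are illustrative placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 42   # 21 Mediapipe hand landmarks, (x, y) each
N_HIDDEN = 64     # assumed hidden-layer width
N_CLASSES = 26    # e.g. one class per letter of a fingerspelling alphabet

# Randomly initialised weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def classify(features: np.ndarray) -> np.ndarray:
    """Forward pass: landmark feature vector -> class probabilities."""
    h = np.maximum(0.0, features @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

probs = classify(rng.normal(0, 1, N_FEATURES))
```

In practice the weights would be trained on labelled gesture recordings; only the forward pass is shown here.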
Background Work
Drawback: with OpenCV alone, hand gesture detection depended heavily on many factors, such as:
• a clean background;
• proper lighting for accurate hand detection.
How can we overcome it?
This inconsistency led the team to research other possible ways to make detection more reliable and accurate, which led us to Google’s Mediapipe.
Mediapipe
Mediapipe is a graph-based framework for building multimodal
(video, audio, and sensor) applied machine learning pipelines.
Mediapipe Hands
Hand Landmark with Mediapipe
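Mediapipe reports 21 landmarks per hand in normalized image coordinates (index 0 is the wrist). Before classification, these are commonly made position- and scale-invariant. A sketch of that preprocessing, assuming landmarks arrive as plain (x, y) tuples (the helper name is ours):

```python
def normalize_landmarks(landmarks):
    """Convert 21 (x, y) landmarks into a flat, invariant feature list.

    Translates all points so the wrist (index 0) is the origin, then
    scales by the largest coordinate magnitude, so the features do not
    depend on where the hand is in the frame or how big it appears.
    """
    base_x, base_y = landmarks[0]
    shifted = [(x - base_x, y - base_y) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [coord / scale for point in shifted for coord in point]

# Example: a wrist at (0.5, 0.5) plus 20 nearby points.
features = normalize_landmarks(
    [(0.5, 0.5)] + [(0.5 + i * 0.01, 0.6) for i in range(20)])
```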
How It Works
Capturing Gesture Data
The system performs real-time hand gesture sign language recognition, which hearing people can use to communicate with deaf and mute people.
Conclusion
A hand gesture sign language to text real-time interpreter using Google Mediapipe artificial intelligence.