This document describes a study on hand gesture identification using the MediaPipe framework. The goal is to develop a system that translates American Sign Language (ASL) gestures into text by recognizing 21 3D landmarks on the hand. It reviews related work on sign language recognition covering both vision-based and sensor-based approaches. The implementation methodology section describes using MediaPipe's hand-tracking model to detect the hand landmarks and a k-nearest-neighbors (KNN) classifier to identify ASL alphabet gestures. Results show the system recognizes ASL alphabet signs in real time with 86-91% average accuracy. Future work includes collecting more training data to improve accuracy and expanding the vocabulary of recognized signs.
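
To make the described pipeline concrete, the following is a minimal sketch combining MediaPipe's hand-landmark output with a scikit-learn KNN classifier, as summarized above. It is an illustrative reconstruction, not the study's actual implementation: the feature encoding (wrist-relative flattened landmarks), the choice of k=5, and the training-data files `asl_features.npy`/`asl_labels.npy` are all assumptions made for the example.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

mp_hands = mp.solutions.hands


def landmarks_to_features(hand_landmarks):
    # Flatten the 21 (x, y, z) landmarks into a 63-dim feature vector,
    # shifting coordinates so the wrist (landmark 0) is the origin
    # for rough position invariance (an assumed encoding, not the paper's).
    pts = np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark])
    pts -= pts[0]
    return pts.flatten()


# Hypothetical pre-extracted training set: one 63-dim vector per sample,
# labels are ASL alphabet letters. File names are placeholders.
X_train = np.load("asl_features.npy")  # shape (n_samples, 63)
y_train = np.load("asl_labels.npy")    # shape (n_samples,)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.7,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            features = landmarks_to_features(results.multi_hand_landmarks[0])
            letter = knn.predict([features])[0]
            cv2.putText(frame, str(letter), (10, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        cv2.imshow("ASL recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

KNN is a natural fit here because the landmark coordinates are already a compact, low-dimensional feature vector, so a simple distance-based classifier can run per frame without noticeable latency.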