This document surveys technologies for recognizing hand sign language and translating it to text using machine learning. It discusses using convolutional neural network (CNN) models to identify hand gestures in real time from video input and to translate gestures into whole words rather than individual letters, enabling smoother communication between deaf and hearing people. The system architecture comprises hand detection, gesture recognition with a CNN model, and a user login system. Earlier approaches discussed include sequential pattern mining and hidden Markov models applied to motion features extracted from video frames. The goal is an effective communication medium between deaf and hearing individuals.
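The detection-then-recognition pipeline described above can be sketched minimally as follows. This is an illustrative toy, not the surveyed system: the word vocabulary, the intensity-threshold "hand detector", and the randomly initialized single-filter network are all assumptions standing in for a trained CNN and a real hand detector.

```python
import numpy as np

# Hypothetical gesture vocabulary: the system maps gestures to whole
# words rather than individual letters (these names are illustrative).
WORDS = ["hello", "thanks", "yes", "no"]

def detect_hand(frame, thresh=0.5):
    """Crude hand detection stand-in: threshold pixel intensity and
    crop the bounding box of the bright region (a real system would
    use a learned detector or skin-color segmentation)."""
    ys, xs = np.nonzero(frame > thresh)
    if len(xs) == 0:
        return None
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def conv2d(img, kernel):
    """Valid-mode 2-D convolution, single channel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify_gesture(crop, kernel, weights):
    """Tiny CNN-style forward pass: conv -> ReLU -> global average
    pool -> linear scores -> argmax over the word vocabulary."""
    feat = np.maximum(conv2d(crop, kernel), 0)  # conv + ReLU
    pooled = feat.mean()                        # global average pooling
    scores = pooled * weights                   # one score per word
    return WORDS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
frame = rng.random((32, 32))            # stand-in for one video frame
kernel = rng.standard_normal((3, 3))    # untrained conv filter
weights = rng.standard_normal(len(WORDS))

crop = detect_hand(frame)
word = classify_gesture(crop, kernel, weights)
print(word)  # one of the words in WORDS
```

In the surveyed work the classifier would be a trained multi-layer CNN running on a stream of frames, but the control flow is the same: crop the hand region, then classify the crop into a word label.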