This document presents a gesture recognition system based on computer vision and convolutional neural networks (CNNs). It describes the development of classifiers for hand gestures and facial expressions. A dataset of 87,000 images is used to train models that classify the 26 letters of the American Sign Language (ASL) alphabet, plus three additional classes for space, delete, and nothing. The models are trained using transfer learning from MobileNet, achieving validation accuracies above 90% for hand gesture classification, and the resulting system recognizes and translates gestures in real time. The paper concludes that robust CNN-based models were developed for American Sign Language translation and facial expression recognition.
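As a rough illustration of the transfer-learning setup described above, the sketch below builds a MobileNet-based classifier with 29 output classes (26 letters plus space, delete, and nothing) in Keras. The layer sizes, dropout rate, and optimizer are assumptions for illustration, not the paper's exact configuration; in practice the backbone would be loaded with `weights="imagenet"` (here `weights=None` keeps the sketch runnable offline).

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

NUM_CLASSES = 29  # 26 ASL letters + space, delete, nothing

def build_classifier(input_shape=(224, 224, 3)):
    # Load the MobileNet backbone without its ImageNet classification head.
    # For real transfer learning use weights="imagenet"; None avoids a download here.
    base = MobileNet(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional backbone

    # Attach a small classification head for the 29 gesture classes.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),                              # assumed regularization
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
# A dummy forward pass: one 224x224 RGB image in, 29 class probabilities out.
probs = model.predict(np.zeros((1, 224, 224, 3), dtype=np.float32), verbose=0)
print(probs.shape)
```

Freezing the backbone and training only the small head is what lets a dataset of this size reach high validation accuracy: the pretrained convolutional filters already extract generic visual features, so only the final dense layer must be fit to the gesture classes.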