This document describes MuteCom, a system that uses machine learning and computer vision to enable communication between people who are deaf or mute and people who do not understand sign language. The system captures hand gestures through a webcam and uses a TensorFlow/Keras model trained on American Sign Language datasets to recognize each gesture and output the predicted meaning as text and/or speech. The goal is to bridge the communication gap by allowing deaf or mute users to express themselves easily to others through a mobile application that recognizes their hand signs.
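
To make the pipeline concrete, the sketch below shows one way such a recognition loop could look: frames are captured with OpenCV, a fixed hand region is classified by a pre-trained Keras model, and the predicted letter is overlaid on the live feed. The model file `asl_model.h5`, the 64x64 input size, the fixed region of interest, and the A-Z label set are illustrative assumptions, not details taken from the MuteCom system itself.

```python
# Minimal sketch of a webcam-to-prediction loop, assuming a Keras classifier
# saved as "asl_model.h5" that takes 64x64 RGB crops and outputs one of 26
# ASL letter classes. All paths, sizes, and labels are hypothetical.
import cv2
import numpy as np
import tensorflow as tf

MODEL_PATH = "asl_model.h5"          # hypothetical trained model file
IMG_SIZE = 64                        # assumed model input resolution
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed A-Z classes

model = tf.keras.models.load_model(MODEL_PATH)
cap = cv2.VideoCapture(0)            # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Crop a fixed region of interest where the signer holds their hand.
    roi = frame[100:400, 100:400]
    x = cv2.resize(roi, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]

    # Overlay the predicted letter on the live feed.
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    cv2.putText(frame, label, (100, 90), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("MuteCom sketch", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In a deployed mobile application, the same idea would apply with the camera stream and a converted on-device model in place of the desktop webcam loop, and the predicted text could additionally be passed to a text-to-speech engine for spoken output.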