This document describes a research project that aims to develop a sign language recognition system using convolutional neural networks. The system uses a CNN model, trained on images of hand signs, to detect and identify signs in real-time video from a camera. The goal is to facilitate communication between deaf or mute individuals and others. Key steps include collecting a dataset of labeled hand-sign images, preprocessing the images, designing and training a CNN model, and applying the model to recognize signs from live video input. The researchers believe such a tool could expand employment opportunities for deaf and mute communities by enabling communication without an interpreter.
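
To make the training pipeline concrete, the following is a minimal sketch of the kind of CNN the project describes, written with Keras. The image size, number of classes, dataset directory layout, and all hyperparameters are illustrative assumptions, not details taken from the project itself.

```python
# Minimal sketch of dataset loading, preprocessing, and CNN training.
# Assumptions: 26 sign classes (e.g., a fingerspelling alphabet), 64x64
# RGB inputs, and a hypothetical directory layout data/A/*.jpg, data/B/*.jpg, ...
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumed number of distinct hand signs
IMG_SIZE = (64, 64)       # assumed input resolution

# Load labeled hand-sign images from per-class subdirectories.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32)

model = models.Sequential([
    # Preprocessing step: scale pixel values from [0, 255] to [0, 1].
    layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("sign_cnn.keras")
```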
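The real-time recognition step could then look roughly like the loop below: capture webcam frames with OpenCV, crop a region of interest where the hand is expected, and classify each crop with the trained model. The region-of-interest coordinates, label names, and file name are assumptions for illustration only.

```python
# Sketch of real-time sign recognition from a live camera feed.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_cnn.keras")
CLASS_NAMES = [chr(ord("A") + i) for i in range(26)]  # assumed label order

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]               # assumed hand region
    img = cv2.resize(roi, (64, 64))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # model expects RGB input
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    # Overlay the prediction on the video feed for the user.
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, f"{label} ({probs.max():.2f})", (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Sign recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A fixed crop region is the simplest design choice; a fuller system might instead localize the hand with a detector before classification.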