This document discusses the development of a sign language recognition system using computer vision and machine learning techniques. It begins with background on the need for such a system to help deaf individuals communicate through technology. The system captures hand signs with a camera and classifies them using a convolutional neural network (CNN) model. Development follows a waterfall approach, with requirements that include a laptop, Python software, and sufficient lighting. The main benefit is support for learning sign language; the main limitation is the dependence on good lighting conditions. Future work could add subtitle generation to make the system more useful for media applications.
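
As a rough illustration of the capture-and-classify pipeline described above, the sketch below pairs OpenCV frame capture with a Keras CNN. The model file name, input size, region of interest, and label list are placeholder assumptions for illustration, not details taken from the document.

```python
# Minimal sketch of the camera-to-CNN pipeline, assuming OpenCV for capture
# and a trained Keras/TensorFlow CNN. The file name "sign_model.h5", the
# 64x64 grayscale input, and the label list are hypothetical placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["hello", "thanks", "yes", "no"]   # hypothetical class names
model = load_model("sign_model.h5")         # hypothetical trained CNN

cap = cv2.VideoCapture(0)                   # default laptop camera
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Crop a fixed region of interest where the hand is expected,
    # then resize and normalise to match the CNN's assumed input shape.
    roi = frame[100:400, 100:400]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    batch = resized.reshape(1, 64, 64, 1)

    # Predict the sign and overlay the most likely label on the frame.
    probs = model.predict(batch, verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    cv2.putText(frame, label, (100, 90), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("Sign recognition", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In practice, steady lighting and a fixed hand region simplify the preprocessing step, which is consistent with the document's note that sufficient lighting is a requirement of the system.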