The document proposes a smart-glass system that uses computer vision and deep learning to help visually impaired people navigate their surroundings. A camera captures images; faces and objects are detected with models such as Faster R-CNN; and the wearer receives audio descriptions, for example identifying known people and warning of nearby obstacles. By describing the environment through real-time voice messages, the system aims to make movement safer and easier for the visually impaired. It is designed to be low-cost, fast, and user-friendly, and its detection and recognition performance is evaluated with metrics such as accuracy, sensitivity, and specificity.
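As a rough illustration of how the evaluation metrics mentioned above are computed from binary detection results, the sketch below derives accuracy, sensitivity, and specificity from confusion-matrix counts. The function name and the example counts are hypothetical, not taken from the document:

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int):
    """Compute accuracy, sensitivity, and specificity from
    binary confusion-matrix counts (tp/fp/tn/fn)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical example: 90 obstacles correctly flagged, 10 missed,
# 80 clear frames correctly passed, 20 false alarms.
acc, sens, spec = confusion_metrics(tp=90, fp=20, tn=80, fn=10)
print(acc, sens, spec)  # 0.85 0.9 0.8
```

Reporting all three together matters for a safety system like this: sensitivity captures how rarely real obstacles are missed, while specificity captures how rarely the user is bothered by false alerts.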