The document describes a smart assistance system developed for visually impaired people. The system combines object detection, speech recognition, and text-to-speech to help visually impaired users recognize objects, search Wikipedia, and send emails through voice commands. Specifically, it uses a YOLO v3 deep learning model trained on the COCO dataset to detect objects in images captured by a camera; it also integrates an API to search Wikipedia and relies on Python libraries for speech recognition, text-to-speech, and email functionality. The system is implemented in Python and aims to give visually impaired users greater independence in their daily lives.
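The voice-command flow described above could be routed with a simple keyword dispatcher, a minimal sketch in Python. All function names and trigger keywords here are illustrative assumptions, not the system's actual API; in the real system each branch would hand off to the YOLO v3 detector, the Wikipedia search, or the email library.

```python
# Hypothetical sketch of routing a transcribed voice command to an action.
# Keywords and action names are assumptions for illustration only.

def route_command(transcript: str) -> str:
    """Map the text returned by speech recognition to an assistant action."""
    text = transcript.lower()
    if any(word in text for word in ("detect", "object", "what is in front")):
        return "object_detection"   # would run the YOLO v3 / COCO model here
    if "wikipedia" in text or text.startswith("search"):
        return "wikipedia_search"   # would query the Wikipedia API here
    if "email" in text or "mail" in text:
        return "send_email"         # would compose and send an email here
    return "unknown"                # fall back, e.g. ask the user to repeat
```

In a complete pipeline, the returned action name would select a handler whose result is read back to the user via text-to-speech.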