This research explores the recognition of Brazilian Sign Language (Libras) gestures using 3D sensor data and neural networks, addressing the needs of the deaf community in Brazil. It includes a case study in which six participants recorded various gestures; a multi-layer perceptron model trained on these recordings achieved a classification accuracy of 61.54% on unseen data. Challenges included gathering accurate gesture samples and the technical limitations of the chosen 3D sensor, and the work closes with recommendations for future improvements and the exploration of alternative models.
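To make the classification setup concrete, the following is a minimal sketch of how a multi-layer perceptron gesture classifier of this kind might be assembled. It uses scikit-learn with synthetic stand-in data; the feature dimensionality (63, e.g. 21 hand joints times 3 coordinates), the number of gesture classes (10), the hidden-layer sizes, and the train/test split are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Stand-in for flattened 3D joint coordinates from a depth sensor
# (assumed shape: 21 joints x 3 coordinates = 63 features per sample).
n_samples, n_features, n_classes = 600, 63, 10
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)

# Hold out a portion of the data, mirroring evaluation on unseen samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Standardize features; MLP training is sensitive to input magnitudes.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# A small MLP; the layer sizes here are placeholders, not the paper's architecture.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=42)
clf.fit(X_train, y_train)

print(f"Accuracy on unseen data: {clf.score(X_test, y_test):.2%}")
```

With real recordings in place of the random arrays, the same pipeline (split, scale, fit, score) would yield the kind of held-out accuracy figure reported above.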