My portfolio project presentation for Data Science Retreat Batch 09 Demo Day:
Utilising convolutional neural networks and transfer learning for translating American Sign Language (fingerspelling) from images/videos into text in real-time
Further details & code available on GitHub: https://github.com/BelalC/sign2text
2. Objectives:
• Translate American Sign Language (ASL) from images into text (Sign2Text)
• Focus on the ASL alphabet (A-Z)
• Real-time translation of sign language into text
15. Appendix
Convolutional neural networks, transfer learning & more
• http://setosa.io/ev/image-kernels/
• http://cs231n.github.io/
• http://cs231n.github.io/transfer-learning/
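The image-kernels link above shows the building block that convolutional layers are made of. As a minimal, self-contained illustration (a toy image and a standard Sobel kernel, not code from the project), here is how sliding a small kernel over an image produces a strong response exactly where an edge occurs:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Toy image: dark on the left, bright on the right (a vertical step edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Sobel kernel: responds to vertical edges.
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

# The response is zero over the flat region and large over the edge;
# a CNN learns many such kernels instead of hand-picking them.
response = convolve2d(image, sobel_x)
print(response)
```

A max-pooling layer then downsamples such response maps, keeping the strongest activation in each local window.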
Editor's Notes
Sign language statistics:
5-10% of the world population is deaf or hard of hearing and relies on sign language
English dictionary: 171,476 words in use
ASL dictionary: ~10,000-30,000 words, plus many regional/country dialects
fingerspelling is used for names, emphasis, and 'loan' words from English
ResNet50 - residual network with 50 layers (Microsoft Research)
VGG16 - 16 layers (University of Oxford)
my CNN - 2 convolutional layers + 1 max-pooling layer
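The transfer-learning idea behind reusing ResNet50/VGG16 is: keep a pretrained network's convolutional base frozen as a feature extractor and train only a small classifier head on top. A conceptual numpy sketch of that split (the fixed random projection stands in for the frozen base, and the data is synthetic; the real project would use actual pretrained weights and ASL images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained convolutional base (e.g. VGG16/ResNet50 with
# frozen weights): a fixed projection from raw pixels to a feature vector.
W_frozen = rng.normal(size=(64 * 64, 128)) / 64.0

def extract_features(images):
    """'Frozen base': its weights are never updated during training."""
    return np.maximum(images.reshape(len(images), -1) @ W_frozen, 0.0)  # ReLU

# Synthetic dataset: 26 classes (the ASL alphabet A-Z).
n, n_classes = 260, 26
X = rng.normal(size=(n, 64, 64))
y = rng.integers(0, n_classes, size=n)

# Trainable head: a single softmax layer on top of the extracted features.
feats = extract_features(X)
W_head = np.zeros((feats.shape[1], n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

losses = []
for step in range(200):
    p = softmax(feats @ W_head)
    losses.append(cross_entropy(p, y))
    grad = p.copy()
    grad[np.arange(n), y] -= 1.0
    W_head -= 0.01 * feats.T @ grad / n  # only the head is updated

# Loss decreases even though the base was never touched.
print(losses[0], losses[-1])
```

Training only the head is far cheaper than training a 50-layer network from scratch, which is what makes transfer learning attractive with small sign-language datasets.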
Links to ASL/local sign language communities
Host on a web app for education
Open-sourced on GitHub -> ready to train on other examples
#Hand detection?
#Expand training examples/words?