Abdelrahmane Bray supported by James Aymer
UROP
Starting Point
The starting point for this research was a Kinect-based game developed
as part of the PURR (Prescription Software for Use in Recovery and
Rehabilitation) project, being undertaken with the Royal Berkshire
Hospital and Headway Brain Injury Trust to aid motor rehabilitation of
patients.
Fig. 1 Skeletal Position Tracking
Fig. 2 Signal Officer Game
Speech for Language Development &
Rehabilitation
The Signal Officer game was developed using C# and Unity 3D; it senses
and tracks a patient's movement using skeletal positioning in order to
promote physical rehabilitation. The focus of the UROP project work was
to investigate:
1. How audio and speech can be integrated into Kinect-based games
for speech and language development or therapy such that they are
intuitive to set up and use, engaging to play, and able to provide
useful data for therapists.
2. The differing capabilities of the Microsoft Kinect-1 (released 2010)
and Kinect-2 (released 2013) sensors in relation to the specified
problem; the technical challenges associated with developing audio-
and speech-based games; and the most appropriate languages, APIs
(Application Programming Interfaces) and SDKs (Software
Development Kits) to use with the Kinect sensors for the
development of speech-based games.
3. How to report these findings so that they can be used in the
development of games targeted at speech development and
therapy.
Way Forward
The results of this work are being fed into three research proposals
related to speech and language development and rehabilitation,
currently under preparation by the School of Systems Engineering, the
School of English Language and Applied Linguistics, the School of
Psychology and Clinical Language Sciences, the Institute of Education,
the Royal Berkshire Hospital and the Headway Brain Injury Trust.
Acknowledgements
• Professor Rachel McCrindle, School of Systems Engineering, Project Supervisor
• School of English Language and Applied Linguistics, School of Psychology and Clinical Language Sciences, Institute of
Education, Royal Berkshire Hospital and Headway Brain Injury Trust, who are collaborating on this work
Introduction
Since its release by Microsoft in 2010, the Kinect has become well
known for its versatility as a motion-sensing input device. The Kinect
is an immersive product, capable of detecting and tracking an end
user's movement via skeletal positioning, whilst also offering
high-quality voice and facial recognition capabilities.
The Kinect captures a user's speech using its in-built microphone
array, which offers more accurate sound capture and source
localisation than a single microphone. Speech recognition is one of
the key functionalities of the Kinect's natural user interface, and
offers developers an affordable platform for developing speech
therapy and language development applications.
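To make this set-up concrete, the sketch below shows one common way to drive speech recognition from the Kinect-1 microphone array, feeding the sensor's 16 kHz mono PCM audio stream into a Microsoft.Speech recogniser. The vocabulary, recogniser culture and confidence threshold are illustrative choices for a therapy-style game, not the project's actual grammar, and the code assumes a connected sensor plus an installed Microsoft.Speech language pack.

```csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

class SpeechRecognitionSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;
        sensor.Start();

        // Use any installed en-GB recogniser; a production game would
        // prefer the Kinect-specific acoustic model if present.
        RecognizerInfo info = SpeechRecognitionEngine.InstalledRecognizers()
            .FirstOrDefault(r => r.Culture.Name == "en-GB");
        if (info == null) return;
        var engine = new SpeechRecognitionEngine(info.Id);

        // A small, fixed vocabulary: constrained grammars keep recognition
        // robust for young children and patients with impaired speech.
        var words = new Choices("red", "green", "blue", "stop");
        engine.LoadGrammar(
            new Grammar(new GrammarBuilder(words) { Culture = info.Culture }));

        engine.SpeechRecognized += (s, e) =>
        {
            if (e.Result.Confidence > 0.6) // illustrative threshold
                Console.WriteLine(
                    "Heard '{0}' ({1:F2})", e.Result.Text, e.Result.Confidence);
        };

        // The Kinect-1 audio source delivers 16 kHz, 16-bit, mono PCM.
        Stream audio = sensor.AudioSource.Start();
        engine.SetInputToAudioStream(audio,
            new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        engine.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine(); // keep listening until Enter is pressed
    }
}
```

Logging each utterance's recognised text and confidence score per attempt is also the natural hook for the therapist-facing data that the project brief calls for.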
Rationale
There is a wealth of empirical research corroborating the importance
of being able to communicate effectively using language, and how
central it is to human development. Within the UK, 2.5 million people
have some form of speech difficulty requiring continued treatment by
speech and language therapists (RCSLT; NIDCD). It is also reported
that, nationally, one in six children have difficulty learning to talk
and understand speech (RCSLT).
The aim of this project was to capitalise on the low-cost development
platform that the Kinect offers, to investigate how Kinect-based games
can be used to promote speech development in both therapeutic and
educational contexts.
Using the Kinect's microphone array and speech recognition
capabilities, Kinect-based games can be created to help young children
develop and enhance their language and communication skills, as well
as to help patients regain their communication and cognitive skills
whilst undergoing rehabilitation following stroke or other brain
trauma. By blurring the distinction between play and therapy, the
ludic nature of the games will help people with speech and language
deficits to better organise, interpret and use linguistic knowledge,
and in doing so enhance their social well-being and integration.
Collaboration with experts in the fields of Linguistics and Phonetics,
Speech Therapy and Education ensures that the games produced are
both engaging and fit for purpose.
Microsoft Kinect Capabilities
Fig. 3 Kinect-1 Sensor
Fig. 4 Kinect Sound File
Communication and Speech Development for Educational and Therapeutic Domains Utilizing the Kinect
References
1. Microsoft, Speech, [Online], accessed 10 November 2014. Available: http://www.msdn.microsoft.com/en-us/library/jj131034.aspx
2. Microsoft, Kinect for Windows, [Online], accessed 10 November 2014. Available: http://www.microsoft.com/en-us/kinectforwindows
3. RCSLT, Royal College of Speech and Language Therapists, The All-Party Parliamentary Group on Speech and Language Difficulties, [Online], accessed 10 November 2014. Available: http://www.rcslt.org/about/parliamentary_work/appg_sld_history
The Kinect offers
• Skeletal motion analysis (20 joints per user with Kinect-1; 25 with Kinect-2)
• Multi-array microphone
• Facial recognition
• Voice recognition
• Depth sensor