Sign Language Translator: Presentation Transcript
Dept of Biomedical Engineering
AUTOMATIC LANGUAGE TRANSLATION SOFTWARE FOR AIDING COMMUNICATION BETWEEN INDIAN SIGN LANGUAGE AND SPOKEN ENGLISH USING LABVIEW
By YELLAPU MADHURI, Reg. No. 1651110002, M.Tech II Year, SRM University.
Guided by Ms. G. ANITHA, Assistant Professor (O.G) / BME
INTRODUCTION
SIGN LANGUAGE (SL): The natural way of communication of speech- and/or hearing-impaired people.
SIGN: A movement of one or both hands, accompanied by a facial expression, that corresponds to a specific meaning.
TRANSLATOR: Enables communication between a speech- and/or hearing-impaired person and a person who does not understand sign language, avoiding the intervention of an intermediary and allowing each party to communicate in their natural way.
ANATOMY OF THE HUMAN EAR
EVENTS INVOLVED IN HEARING
SPEECH CHAIN
AIM
To develop a mobile interactive application for automatic translation of Indian Sign Language into spoken English, and vice versa, to assist communication between speech- and/or hearing-impaired people and hearing people. The translator should convert one-handed fingerspelling input of the Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words into Indian Sign Language picture output.
GRAPHICAL ABSTRACT
OBJECTIVE
For Sign-to-Speech conversion:
1. Acquire images using the inbuilt camera of the device.
2. Perform vision analysis functions in the operating system and provide speech output through the inbuilt audio device.
For Speech-to-Sign conversion:
1. Acquire speech input using the inbuilt microphone of the device.
2. Perform speech analysis functions in the operating system and provide visual sign output through the inbuilt display device.
Minimize hardware requirements and expense.
LITERATURE REVIEW
1. Jose L. Hernandez-Rebollar et al.: A novel approach for capturing and translating isolated gestures of ASL into spoken and written words, using a combined AcceleGlove and a two-link arm skeleton.
2. Paschaloudi N. Vassilia et al. [May 2006]: An extensible system to recognize GSL modules for signed or finger-spelled words, using isolated or combined neural networks.
3. Beifang Yi [May 2006]: Explorations in computer graphics, interface design, and human-computer interaction, with emphasis on software development and implementation of an ASL translator.
4. Andreas Domingo et al.: Automatic sign language translation using a pattern-matching algorithm.
5. Rini Akmeliawati et al. [May 2007]: Real-time English translation of Malaysian Sign Language using neural networks.
6. Abang Irfan Halil et al. [2007]: Development details of a recognition system built using state-of-the-art graphical programming software.
ALGORITHM CRITERIA
1. REAL-TIME
2. VISION-BASED
3. AUTOMATIC AND CONTINUOUS OPERATION
4. EFFICIENT TRANSLATION
MATERIALS
Software tools used: National Instruments LabVIEW and toolkits
- LabVIEW 2012
- Vision Development Module
- Vision Acquisition Module
Hardware tools used:
- Laptop inbuilt web camera: Acer Crystal Eye
- Laptop inbuilt speaker: Acer eAudio
GUI OF SOFTWARE
PAGE 2: SPEECH TO SIGN LANGUAGE TRANSLATOR
BLOCK DIAGRAM OF SPEECH TO SIGN LANGUAGE TRANSLATOR
FLOW CHART OF SPEECH TO SIGN LANGUAGE TRANSLATION
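At its core, the flow chart above reduces to a lookup step: each word returned by the speech recognizer is mapped to its stored sign picture, which is then shown on the display. A minimal sketch in Python, where the dictionary entries, file paths, and function names are hypothetical stand-ins for the project's 165-word database, not the actual LabVIEW implementation:

```python
# Hypothetical word-to-picture database; the real system stores 165 words.
SIGN_PICTURES = {
    "hello": "signs/hello.png",
    "thank": "signs/thank.png",
    "you": "signs/you.png",
}

def words_to_signs(transcript: str) -> list[str]:
    """Map each recognized word to its sign-language picture, if stored."""
    shown = []
    for word in transcript.lower().split():
        picture = SIGN_PICTURES.get(word)
        if picture is not None:
            shown.append(picture)  # in the real system: display this image
        else:
            shown.append(f"[no sign stored for '{word}']")
    return shown

result = words_to_signs("Hello thank you")
```

Words outside the stored vocabulary produce a placeholder rather than a crash, which mirrors the need to handle out-of-vocabulary recognizer output gracefully.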
WINDOWS SPEECH RECOGNITION TUTORIAL
WINDOWS SPEECH RECOGNITION SOFTWARE GUI
USER INTERFACE OF SPEECH TO SIGN LANGUAGE TRANSLATOR
PAGE 3: TEMPLATE PREPARATION
IMAGE ACQUISITION: SEQUENCE OF FRAMES
USER INTERFACE OF TEMPLATE PREPARATION FOR SIGN LANGUAGE TO ENGLISH TRANSLATION
FLOW CHART OF TEMPLATE PREPARATION FOR SIGN LANGUAGE TO ENGLISH TRANSLATION
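The template-preparation step above can be sketched as: normalize each acquired frame and store it under its sign label. This is an illustrative Python/NumPy sketch, assuming frames arrive as grayscale arrays; the zero-mean, unit-variance normalization is one common choice, not necessarily the exact setting of the LabVIEW Vision Development Module:

```python
import numpy as np

def make_template(frame: np.ndarray) -> np.ndarray:
    """Normalize a frame so later matching is robust to brightness/contrast."""
    f = frame.astype(float)
    f -= f.mean()            # remove brightness offset
    std = f.std()
    return f / std if std > 0 else f  # scale out contrast (guard flat frames)

# Template database keyed by sign label ('A'..'Z', '1'..'9' in the project).
templates: dict[str, np.ndarray] = {}

def add_template(label: str, frame: np.ndarray) -> None:
    """Store the normalized frame as the template for this sign label."""
    templates[label] = make_template(frame)
```

Normalizing at preparation time means each live frame only needs the same normalization once before being compared against the whole database.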
PAGE 4: PATTERN MATCHING
USER INTERFACE OF PATTERN MATCHING FOR SIGN LANGUAGE TO ENGLISH TRANSLATION
BLOCK DIAGRAM OF SIGN LANGUAGE TO SPEECH TRANSLATOR
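The pattern-matching stage scores the live camera frame against every stored template and speaks the best-matching label. A sketch of that scoring with normalized correlation in Python/NumPy, under the simplifying assumption that frame and templates share the same size (the real system would also search over position and scale):

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization (guarding flat images)."""
    f = img.astype(float)
    f -= f.mean()
    std = f.std()
    return f / std if std > 0 else f

def best_match(frame: np.ndarray, templates: dict) -> tuple[str, float]:
    """Return the label of the best-correlated template and its score.

    The score is the mean elementwise product of the normalized images,
    i.e. normalized cross-correlation: 1.0 for a perfect match.
    """
    f = normalize(frame)
    scores = {label: float((f * normalize(t)).mean())
              for label, t in templates.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]
```

A score threshold below 1.0 would let the system reject frames that match no stored sign, which relates directly to the misinterpretation risk for closely related gestures noted under LIMITATIONS.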
DATABASE OF ONE-HANDED ALPHABETS AND NUMBERS OF SIGN LANGUAGE
ADVANTAGES
- Eliminates the need for an interpreter between sign language and spoken language users.
- Easy to incorporate and execute in any supporting operating system.
- Real-time translation.
- Does not require any additional hardware.
FUTURE APPLICATIONS
- Web conferencing
- Computer and video games
- Precision surgery
- Domestic applications
- Wearable computers
CHALLENGES
- Background subtraction for robust usage.
- Making the system user-independent.
- Pattern-matching training.
LIMITATIONS
- The system is trained on a limited database.
- Possibility of misinterpretation of closely related gestures.
- Translates only static signs; not trained to translate dynamic signs.
- Facial expressions are not considered.
- Possibility of misinterpretation of words with similar pronunciation.
CONCLUSION
- Feature vectors comprising whole image frames, containing all aspects of the sign, are considered.
- Geometric features extracted from the signer's dominant hand greatly improve the accuracy of the system.
- Training the speech recognition on shorter phrases is more difficult than on longer phrases.
FUTURE WORK
- To increase the performance and accuracy of the translator, the quality of the training database should be enhanced so that the system picks up the correct and significant characteristics of each individual sign.
- Continued collaboration with assistive-technology researchers and members of the Deaf community should be considered for further design progress.
- This project did not focus on facial expressions, although it is well known that facial expressions convey an important part of sign languages.
- The system can be applied in many areas, for example accessing government websites where no video clip for deaf and mute users is available, or filling out forms online where no interpreter is present to help.
REFERENCES
- Andreas Domingo, Rini Akmeliawati, Kuang Ye Chow, 'Pattern Matching for Automatic Sign Language Translation System using LabVIEW', International Conference on Intelligent and Advanced Systems, 2007.
- Beifang Yi (advisor: Dr. Frederick C. Harris), 'A Framework for a Sign Language Interfacing System', dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Engineering, University of Nevada, Reno, May 2006.
- Helene Brashear and Thad Starner, 'Using Multiple Sensors for Mobile Sign Language Recognition', ETH Zurich (Swiss Federal Institute of Technology), Wearable Computing Laboratory, 8092 Zurich, Switzerland.