
Automatic Language Translation Software for Aiding Communication Between Indian Sign Language and Spoken English Using LabVIEW

Yellapu Madhuri*, G. Anitha**
* 2nd year M.Tech, ** Assistant Professor
Department of Biomedical Engineering, SRM University, Kattankulathur-603203, Tamil Nadu, India

ABSTRACT:
Sign Language (SL) is the natural way of communication of speech- and/or hearing-impaired people. A sign is a movement of one or both hands, accompanied by a facial expression, which corresponds to a specific meaning. This paper presents sign language translation software for automatic translation of Indian Sign Language into spoken English and vice versa, to assist communication between speech- and/or hearing-impaired people and hearing people. The deaf community could use it as a translator with people who do not understand sign language, avoiding the intervention of an intermediary interpreter and allowing communication in their natural way of speaking. The proposed software is a standalone executable interactive application, developed using LabVIEW, that can run on any standard Windows laptop or desktop, or an iOS mobile phone, operating with the camera, processor, and audio device. For sign-to-speech translation, the one-handed SL gestures of the user are captured by the camera; vision analysis functions are performed in the operating system and provide the corresponding speech output through the audio device. For speech-to-SL translation, the speech input of the user is acquired by the microphone; speech analysis functions are performed and provide a picture display of the SL gesture corresponding to the speech input. The lag time experienced during translation is small because of parallel processing, allowing near-instantaneous translation from finger and hand movements to speech and from speech inputs to SL gestures.
This system is trained to translate one-handed SL representations of alphabets (A-Z) and numbers (1-9) to speech, and 165 word phrases to SL gestures. The training database of inputs can be easily extended to expand the system's applications. The software does not require the user to wear any special hand gloves. The results are found to be highly consistent and reproducible, with fairly high precision and accuracy.

AIM:
To develop a mobile interactive application for automatic translation of Indian Sign Language into spoken English and vice versa, to assist communication between deaf people and hearing people. The SL translator should be able to translate one-handed Indian Sign Language finger-spelling input of alphabets (A-Z) and numbers (1-9) to spoken English audio output, and 165 spoken English word inputs to Indian Sign Language picture display output.

OBJECTIVES:
• To acquire one-handed SL finger spelling of alphabets (A-Z) and numbers (1-9) and produce spoken English audio output.
• To acquire spoken English word input and produce Indian Sign Language picture display output.
• To create an executable file to make the software a standalone application.
• To implement the software and optimize the parameters to improve the accuracy of translation.
• To minimize hardware requirements, and thus expense, while achieving high precision of translation.

MATERIALS:
Software tools used: National Instruments LabVIEW and toolkits
• LabVIEW 2012 version
• Vision Development Module
• Vision Acquisition Module
Hardware tools used:
• Laptop inbuilt web camera: Acer Crystal Eye
• Laptop inbuilt speaker: Acer eAudio

METHOD:
The software is a standalone application. To install it, follow the instructions that appear in the executable installer file. After installing the application, a graphical user interface (GUI) window opens, from which the full application can be used. The GUI has been created to run the entire application from a single window.
It has four pages, each corresponding to a specific application.

PAGE 1 gives a detailed demo of the total software usage.

PAGE 2 is for speech to sign language translation. When the "Start" button is pressed, a command is sent to the Windows 7 inbuilt Speech Recognizer, which opens a mini window at the top. The first time it is started, a tutorial session begins that gives instructions to set up the microphone, recognize the user's voice input, and configure the speech recognition software. After this initial training, the program starts speech recognition automatically on subsequent runs. To train the system for a different user or change the microphone settings, right-click on the Speech Recognizer window and select "Start Speech Tutorial". To stop the speech recognition software, say "Stop listening"; to start it again, say "Start listening". When the user utters any of the words listed in "Phrases", it is displayed in the "Command" indicator. An SL gesture picture corresponding to the speech input is displayed in the "Sign" picture indicator. The correlation score of the speech input with the trained word is displayed in the "Score" numeric indicator. Use the "Exit" button to exit the speech-to-SL translation application.

PAGE 3 is for template preparation for sign to speech translation. To execute the template preparation module, press the "Start" button. Choose the camera that will acquire the images to be used as templates from the "Camera Name" list. The acquired image is displayed on the "Image" picture indicator. If the displayed image is good enough to be used for preparing a template, press "Snap frame". The snapped image is displayed on the "Snap Image" picture display. Draw a region of interest over the part to be used as the template and press "Learn". The selected portion of the snapped frame is saved to the folder specified for templates, and the saved template image is displayed on the "Template Image" picture display.
Press the "Stop" button to stop execution of the template preparation module.

PAGE 4 is for sign to speech translation. Press the "Start" button to start the program. Choose the camera that will acquire the images to be used for pattern matching from the "Camera Name" list. The captured images are displayed on the "Input Image" picture display. Press the "Match" button to start comparing the acquired input image with the template images in the database. In each iteration, the input image is checked for a pattern match with one template. When the input image matches the template image, the loop halts: the "Match" LED glows and the matched template is displayed on the "Template Image" indicator. The loop iteration count is used to trigger a case structure; depending on the iteration count, a specific case is selected and gives a string output. Otherwise the loop continues to the next iteration, where the input image is checked for a pattern match with a new template. The string output from the case structure is displayed on the "Matched Pattern" alphanumeric indicator, and it also initiates the .NET speech synthesizer to give an audio output through the speaker.

Figure 1.1 Events involved in hearing
Figure 1.2 Speech chain
Figure 1.3 Graphical abstract
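The PAGE 3 and PAGE 4 workflow above (crop a region of interest as a template, then check each camera frame against one template per iteration until a match triggers a labeled output) can be sketched as follows. This is a minimal illustrative sketch in Python, not the author's LabVIEW implementation: the images are tiny grayscale arrays, the mean-absolute-difference measure and its threshold are stand-ins for the Vision Development Module's pattern-matching score, and the returned label stands in for the case structure's string output that would feed the speech synthesizer.

```python
# Illustrative sketch of PAGES 3-4 (assumptions: list-of-lists grayscale
# images, mean-absolute-difference matching, arbitrary threshold).

def crop_roi(image, top, left, height, width):
    """PAGE 3 'Learn': keep the selected region of the snapped frame
    as a template."""
    return [row[left:left + width] for row in image[top:top + height]]

def mean_abs_diff(a, b):
    """Average per-pixel difference between two equal-sized patches."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def match_sign(patch, templates, labels, threshold=10.0):
    """PAGE 4 loop: check the input against one template per iteration.
    The iteration index selects the output string, mirroring the case
    structure; a real system would then trigger the speech synthesizer."""
    for i, template in enumerate(templates):
        if mean_abs_diff(patch, template) < threshold:
            return labels[i]   # match found: loop halts
    return None                # no match in the whole database
```

For example, cropping a 2x2 region out of a snapped frame and matching it against a two-template database returns the label of the closest template, just as the iteration count selects one case in the LabVIEW case structure.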
PUBLICATIONS:
[1]. Yellapu Madhuri, G. Anitha (2013) "Vision-Based Sign Language Translation Device", International Conference on Information Communication & Embedded Systems (ICICES 2013), in association with IEEE, S.A. Engineering College, Chennai. ISBN 978-1-4673-5787-6. Tracking ID: 13cse213.
[2]. Yellapu Madhuri, G. Anitha (2013) "Automatic Language Translation Software for Interpreting Sign Language and Speech in English", awarded Silver Medal in paper presentation, Research Day 2013, SRM University, Chennai.
[3]. Yellapu Madhuri, G. Anitha (2013) submission entitled "Sign Language Translator" assigned manuscript number IMAVIS-D-13-00011 by Elsevier Editorial Systems, Image and Vision Computing journal.
[4]. Yellapu Madhuri, G. Anitha (2013) submission entitled "Vision-Based Sign Language Translator" accepted for publication in International Journal of Engineering and Science Invention (IJESI), manuscript ID A11023.
[5]. Yellapu Madhuri, G. Anitha (2013) submission entitled "Sign Language Translation Device" accepted for publication in The International Journal of Engineering and Science (THE IJES), manuscript ID 13026.
[6]. Yellapu Madhuri, G. Anitha (2013) submission entitled "Automatic Language Translation Software for Interpreting Sign Language and Speech in English" assigned tracking number NCOMMS-13-02048 by Nature Communications.

Name: Yellapu Madhuri
Reg. No: 1651110002
M.Tech (Biomedical Engineering)
Mobile no: 09441571241
E-mail: ymadhury@rediffmail.com

In this work, a vision-based sign language recognition system using LabVIEW for automatic sign language translation has been presented. This approach uses feature vectors comprising whole image frames that contain all aspects of the sign. The project has investigated the different issues of this new approach to SL recognition, recognizing one-handed sign language alphabets and numbers using appearance-based features extracted directly from a video stream recorded with a conventional camera, making the recognition system more practical. Although sign language contains many different aspects, from manual and non-manual cues, the position, orientation, and configuration or shape of the signer's dominant hand convey a large portion of the information in signs. Therefore, the geometric features extracted from the signer's dominant hand improve the accuracy of the system to a great degree. This project did not focus on facial expressions, although it is well known that facial expressions convey an important part of sign languages. Facial expressions can, for example, be extracted by tracking the signer's face; the most discriminative features can then be selected by employing a dimensionality reduction method, and this cue could also be fused into the recognition system.

The sign language translator is able to translate alphabets (A-Z) and numbers (1-9). All the signs can be translated in real time, but signs that are similar in posture and gesture to another sign can be misinterpreted, resulting in a decrease in accuracy of the system. The current system has only been trained on a very small database.
Since there will always be variation in either the signer's hand posture or motion trajectory, a larger database accommodating a wider variety of hand postures for each sign is required. The speech recognition program requires the user to take a tutorial of about 10 minutes; during this training, the program learns the accent of the user for speech recognition. It is observed that the longer the user uses the program, the higher the accuracy of speech recognition.

This paper presents a novel approach for gesture detection with two main steps: (i) template preparation, and (ii) gesture detection. The template preparation technique presented here has some important features for gesture recognition, including robustness against slight rotation, a small number of required features, and device independence. For gesture detection, a pattern matching technique is used. The gesture recognition technique presented here can be used with a variety of front-end input systems, such as vision-based input, hand and eye tracking, digital tablet, mouse, and digital glove.

Much previous work has focused on isolated sign language recognition with clear pauses after each sign. These pauses make the problem much easier than continuous recognition without pauses between the individual signs, because explicit segmentation of a continuous input stream into individual signs is very difficult. For this reason, and because of co-articulation effects, work on isolated recognition often does not generalize easily to continuous recognition. The proposed software, however, captures the input images as an AVI sequence of continuous images. This allows continuous input image acquisition without pauses, while each image frame is processed individually and checked for a pattern match. This technique overcomes the problem of processing continuous images while keeping the input stream free of pauses.

For speech-to-SL translation, words of similar pronunciation are sometimes misinterpreted.
This problem can be avoided by clearly pronouncing the words, and with extended training and increasing usage. The speech recognition technique introduced in this article can be used with a variety of front-end input systems, such as computer and video games, precision surgery, domestic applications, and wearable computers.

Figure 1.4 Block diagram of SL to speech translation
Figure 1.5 Block diagram of speech to sign language translation
Figure 1.6 PAGE 3: GUI of template preparation
Figure 1.7 PAGE 4: GUI of SL to speech translation
Figure 1.8 PAGE 2: GUI of speech to SL translation
Figure 1.9 GUI of Windows speech recognition tutorial
Figure 1.10 Database of SL finger-spelling alphabets and numbers
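The speech-to-SL direction described on PAGE 2, including the "Score" indicator and the misinterpretation of similar-sounding words discussed above, amounts to matching a recognized utterance against a phrase database and displaying the stored gesture picture. The following is a hedged sketch of that lookup in Python: the phrase table, image file names, string-similarity score, and threshold are all illustrative assumptions, not the Windows Speech Recognizer's actual vocabulary or scoring.

```python
# Hypothetical sketch of the PAGE 2 lookup: recognized word -> sign
# picture, with a similarity score standing in for the "Score" indicator.
from difflib import SequenceMatcher

# Tiny stand-in for the 165-phrase database (word -> gesture image file).
SIGN_DB = {
    "hello": "signs/hello.png",
    "thank you": "signs/thank_you.png",
    "water": "signs/water.png",
}

def translate_speech(recognized_text, db=SIGN_DB, threshold=0.8):
    """Return (matched_word, image_path, score) for the utterance, or
    None when no phrase scores above the threshold (the case where a
    similar-sounding word would otherwise be misinterpreted)."""
    text = recognized_text.lower()
    best = max(db, key=lambda w: SequenceMatcher(None, text, w).ratio())
    score = SequenceMatcher(None, text, best).ratio()
    if score < threshold:
        return None
    return best, db[best], score
```

Raising the threshold trades coverage for fewer misinterpretations, which is the same trade-off the article addresses through clearer pronunciation and extended speech training.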