1. Speech to Sign Language Interpreter System
By: Khalid El-Darymli, G0327887
Supervisor: Dr. Othman O. Khalifa
International Islamic University Malaysia, Kulliyyah of Engineering, ECE Dept.
2. OUTLINE
- Problem statement.
- Research goal and objectives.
- Main parts of our system.
- The structure of ASR: signal processing; training (AM, dictionary, and LM); and decoding (the Viterbi beam search).
- Sign language, ASL, and the ASL alphabet.
- Signed English.
- Demo of ASL in our SW.
- Milestone.
3. Problem Statement
- There is no free software, let alone one at a reasonable price, that converts speech into sign language in live mode.
- Only one commercial product converts uttered speech in live mode to sign-language video. It is called iCommunicator, and a deaf person has to pay USD 6,499 to purchase it.
Is that fair?
4. RESEARCH GOAL AND OBJECTIVES
- Design and implementation of a Speech to Sign Language Interpreter System.
- The SW is open source and freely available, which in turn will benefit the deaf community.
- To bridge the gap between deaf and non-deaf people in two senses: first, by using this SW for educational purposes for deaf people, and second, by facilitating communication between deaf and non-deaf people.
- To increase the independence and self-confidence of deaf people.
- To increase opportunities for advancement and success in education, employment, personal relationships, and public access venues.
- To improve quality of life.
5. Main Parts of the Speech to Sign Language Interpreter System
(Block diagram) Continuous input speech -> speech-recognition engine -> recognized text -> database of pre-recorded ASL video clips -> ASL translation.
6. Automatic Speech Recognition (ASR)
- SR systems are clustered according to three categories: isolated vs. continuous, speaker-dependent vs. speaker-independent, and small vs. large vocabulary.
- The task expected of our software entails a large-vocabulary, speaker-independent, continuous speech recognizer.
(Block diagram) Input voice -> SR engine -> recognized text.
7. The Structure of the SR Engine (LVCSR)
(Block diagram) Training produces three knowledge sources:
- Acoustic model (AM): P(A_1, ..., A_T | P_1, ..., P_k)
- Dictionary: P(P_1, P_2, ..., P_k | W)
- Language model (LM): P(W_n | W_1, ..., W_n-1)
Decoding: given input audio, signal processing produces the feature sequence X = {x_1, x_2, ..., x_T}; the decoder evaluates the hypotheses H = {W_1, W_2, ..., W_k} by the score P(X | W) * P(W) and outputs the best hypothesis W_BEST.
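As a toy illustration of this decision rule (not part of the original slides), the sketch below picks W_BEST from a small hypothesis list by summing AM and LM log scores; the two hypotheses and their scores are made-up placeholders.

```python
# Hypothesis evaluation: W_BEST = argmax_W P(X|W) * P(W),
# computed in the log domain as is standard practice.
hypotheses = {
    # W: (log P(X|W) from the AM, log P(W) from the LM) -- illustrative values
    "recognize speech":   (-120.3, -8.1),
    "wreck a nice beach": (-118.9, -12.7),
}

w_best = max(hypotheses, key=lambda w: sum(hypotheses[w]))
print(w_best)  # "recognize speech": the LM penalizes the unlikely phrase
```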
8. SIGNAL PROCESSING (FRONT-END)
Pipeline: speech waveform x[n] (16-bit integer data) -> pre-emphasis -> framing -> windowing -> power spectrum calculation -> mel filterbank -> ln| . |^2 -> IDFT -> 13 c_t[n], 13 delta c_t[n], 13 double-delta c_t[n].
- Pre-emphasis: y[n] = x[n] - alpha * x[n-1], where alpha is the pre-emphasis parameter.
- Framing and windowing: typical frame duration in speech recognition is 10 ms, while typical window duration is 25 ms.
- Power spectrum: to reduce computational complexity, the short-time Fourier transform is evaluated only at a discrete number of values omega = 2*pi*k/N, giving the DFT of all frames of the signal. The phase information of the DFT samples of each frame is discarded, so the final output of this stage is |Y_t[k]|^2.
- The mel filterbank: used to extract spectral features of speech by properly integrating the spectrum over defined frequency ranges, through triangular mel-weighting filters H_m[k]. The mel spectrum of the power spectrum is computed by S_t[m] = sum_k H_m[k] |Y_t[k]|^2, where k is the DFT-domain index, N is the length of the DFT, and M is the total number of triangular mel-weighting filters.
- MFCC computation: the MFCC is a representation defined as the real cepstrum of a windowed short-time signal derived from the FFT of that signal. It is computed by performing the inverse DFT on the logarithm of the magnitude of the filterbank output: c_t[n] = sum_m ln(S_t[m]) cos(pi*n*(m + 1/2)/M). Typically, for speech recognition, only the first 13 coefficients are used.
- Delta and double-delta computation: first- and second-order differences may be used to capture the dynamic evolution of the signal.
- The final output of the front-end processing is a 39-dimensional feature vector (observation vector x_t) per processed frame. A NumPy sketch of the whole pipeline follows.
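To make the pipeline concrete, here is a minimal NumPy sketch of the front-end stages above. The sample rate, the pre-emphasis parameter alpha = 0.97, the filterbank size, and the FFT length are common textbook defaults, not values taken from the slides.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(x, fs=16000, frame_ms=10, win_ms=25, alpha=0.97,
         n_filters=26, n_ceps=13, nfft=512):
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1]
    y = np.append(x[0], x[1:] - alpha * x[:-1])

    # Framing (10 ms frame shift) and Hamming windowing (25 ms window)
    win, hop = fs * win_ms // 1000, fs * frame_ms // 1000
    n_frames = 1 + (len(y) - win) // hop
    idx = np.arange(win) + hop * np.arange(n_frames)[:, None]
    frames = y[idx] * np.hamming(win)

    # Power spectrum: squared DFT magnitude; phase is discarded
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2

    # Triangular mel filterbank H_m[k]: S_t[m] = sum_k H_m[k] |Y_t[k]|^2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    mel_spec = np.maximum(power @ fbank.T, 1e-10)

    # log + inverse DFT (a type-II DCT); keep only the first 13 coefficients
    n = np.arange(n_ceps)[:, None]
    m = np.arange(n_filters)[None, :]
    return np.log(mel_spec) @ np.cos(np.pi * n * (m + 0.5) / n_filters).T

def deltas(c):
    # First-order differences; applying this twice gives the double deltas,
    # and stacking [c, delta, double delta] yields the 39-dim vector x_t
    d = np.zeros_like(c)
    d[1:-1] = (c[2:] - c[:-2]) / 2.0
    return d
```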
9. Explanatory Example
(Figures) The speech waveform of the phoneme "ae"; the same segment after pre-emphasis and Hamming windowing; its power spectrum; and the resulting MFCCs.
10. TRAINING
- Training is the process of learning the AM, dictionary, and LM.
- Acoustic model (AM): provides a mapping between a unit of speech and an HMM that can be scored against the incoming features provided by the front-end. It contains a pool of hidden Markov models (HMMs).
- For large vocabularies, each word is represented as a sequence of phonemes, so there has to be an AM per phoneme. Moreover, the model has to depend on the context (e.g., co-articulation), and the context dependence may even cross word boundaries.
- Phones are therefore further refined into context-dependent triphones, i.e., phones occurring in given left and right phonetic contexts, as sketched below.
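As a toy illustration, the sketch below expands a phoneme sequence into word-internal triphones using the common "left-center+right" notation; it is not the expansion used by the actual trainer, which also handles cross-word contexts.

```python
def to_triphones(phones):
    """['HH','EH','L','OW'] -> ['HH+EH', 'HH-EH+L', 'EH-L+OW', 'L-OW']."""
    tris = []
    for i, p in enumerate(phones):
        left = phones[i - 1] + "-" if i > 0 else ""
        right = "+" + phones[i + 1] if i < len(phones) - 1 else ""
        tris.append(left + p + right)
    return tris

print(to_triphones(["HH", "EH", "L", "OW"]))
```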
11. HMMs
- An HMM is defined by the model parameters lambda = (A, B, pi).
- For each acoustic segment, there is a probability distribution across acoustic observations, b_i(k).
- The leading technique is to represent the acoustic observations as a mixture Gaussian distribution, or Gaussian mixtures (GM) for short.
(Figure) A left-to-right HMM with states S_0 to S_3, self-transition probabilities a_00, a_11, a_22 and emission distributions b_0(k), b_1(k), b_2(k).
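A minimal sketch of evaluating such a Gaussian-mixture emission density b_i(x) in the log domain, with diagonal covariances as is typical in acoustic models; the example parameters below are random placeholders, not trained values.

```python
import numpy as np

def log_gmm_density(x, weights, means, variances):
    """log b_i(x) = log sum_k w_k * N(x; mu_k, diag(var_k))."""
    diff = x - means                                    # (K, D)
    log_norm = -0.5 * np.log(2 * np.pi * variances).sum(axis=1)
    log_exp = -0.5 * (diff ** 2 / variances).sum(axis=1)
    log_comp = np.log(weights) + log_norm + log_exp     # (K,)
    m = log_comp.max()                                  # log-sum-exp for stability
    return m + np.log(np.exp(log_comp - m).sum())

# Example: a 2-component mixture over a 39-dimensional feature vector
rng = np.random.default_rng(0)
x = rng.standard_normal(39)
w = np.array([0.6, 0.4])
mu = rng.standard_normal((2, 39))
var = np.ones((2, 39))
print(log_gmm_density(x, w, mu, var))
```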
12. Dictionary
- The dictionary is a file that contains pronunciations for all the words of interest to the decoder.
- For large-vocabulary speech recognizers, pronunciations are specified as linear sequences of phonemes.
- Some digit pronunciations:
  ZERO: Z IH R O
  EIGHT: EY TD
- Multiple pronunciations:
  ACTUALLY: AE K CH AX W AX L IY
  ACTUALLY(2nd): AE K SH AX L IY
  ACTUALLY(3rd): AE K SH L IY
- Compound words:
  WANT_TO: W AA N AX
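Conceptually, the decoder's view of the dictionary amounts to a mapping from each word to one or more phoneme sequences. A toy sketch using the entries above (the LEXICON name and layout are illustrative, not the file format of any particular decoder):

```python
LEXICON = {
    "ZERO":     [["Z", "IH", "R", "O"]],
    "EIGHT":    [["EY", "TD"]],
    "ACTUALLY": [["AE", "K", "CH", "AX", "W", "AX", "L", "IY"],
                 ["AE", "K", "SH", "AX", "L", "IY"],
                 ["AE", "K", "SH", "L", "IY"]],
    "WANT_TO":  [["W", "AA", "N", "AX"]],
}

def pronunciations(word):
    """Return all pronunciation variants known to the decoder."""
    return LEXICON.get(word.upper(), [])

print(pronunciations("actually"))  # three variants
```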
13. Language Model (LM)
- It is a statistical LM, since the speaker could be talking about any arbitrary topic.
- The main model used is n-gram statistics, in particular the trigram (n = 3): P(W_t | W_t-1, W_t-2).
- Bigram and unigram LMs have to be employed as well, to back up the trigram when a word triple was never observed in training; a sketch follows.
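The sketch below estimates trigram, bigram, and unigram counts and combines them by simple linear interpolation, one common way the lower-order models support the trigram; the interpolation weights are illustrative, not the smoothing scheme of any particular toolkit.

```python
from collections import Counter

def train_ngrams(sentences):
    uni, bi, tri = Counter(), Counter(), Counter()
    for s in sentences:
        words = ["<s>", "<s>"] + s.split() + ["</s>"]
        for i in range(2, len(words)):
            uni[words[i]] += 1
            bi[(words[i - 1], words[i])] += 1
            tri[(words[i - 2], words[i - 1], words[i])] += 1
    return uni, bi, tri

def p_interp(w, prev, prev2, uni, bi, tri, lambdas=(0.6, 0.3, 0.1)):
    """P(w | prev2, prev): linear interpolation of tri-, bi-, and unigram."""
    l3, l2, l1 = lambdas
    p3 = tri[(prev2, prev, w)] / bi[(prev2, prev)] if bi[(prev2, prev)] else 0.0
    p2 = bi[(prev, w)] / uni[prev] if uni[prev] else 0.0
    p1 = uni[w] / sum(uni.values())
    return l3 * p3 + l2 * p2 + l1 * p1

uni, bi, tri = train_ngrams(["i want to go", "i want a lot"])
print(p_interp("to", "want", "i", uni, bi, tri))  # 0.46
```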
14. RECOGNITION
- Given an input speech utterance, the goal is to UNVEIL the BEST hidden state sequence.
- Let S = (s_1, s_2, ..., s_T) be the sequence of states that are recognized, and let x_t be the feature samples computed at time t, where the feature sequence from time 1 to t is denoted X = (x_1, x_2, ..., x_t).
- Accordingly, the sequence of recognized states S* is obtained by S* = argmax_S P(S, X | lambda).
(Block diagram) The search algorithm combines a static structure (the models) with a dynamic structure that propagates the active states {S_t-1} to {S_t} via P(x_t, {s_t} | {s_t-1}, lambda) as each x_t arrives, and outputs S*.
15. The Viterbi Beam Search
- Initialization: for each state i, set V_1(i) = pi_i * b_i(x_1); go to the pruning step (XX).
- Recursive step: for t = 2, ..., T and each active state j, compute V_t(j) = max_i [V_t-1(i) * a_ij] * b_j(x_t); go to the pruning step (XX).
- Pruning (XX): find p_t(s_t*) = max_i [V_t(i)] and calculate the threshold from it. For each state j: if p_t(s_t = j) passes the threshold, MEMORIZE both V_t(j) and path "j"; else DISCARD V_t(j).
- Backtracking: return the best state sequence by tracing the memorized paths back from the best final state.
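A minimal log-domain sketch of Viterbi decoding with beam pruning over an HMM lambda = (A, B, pi). The fixed log-beam threshold is one simple choice of pruning rule, and discrete observation symbols stand in for the Gaussian-mixture scores of a real recognizer.

```python
import numpy as np

def viterbi_beam(log_pi, log_A, log_B, obs, beam=10.0):
    """log_pi: (N,), log_A: (N, N), log_B: (N, n_symbols).
    obs[t] indexes a column of log_B; returns the best state path."""
    T, N = len(obs), len(log_pi)
    V = np.full((T, N), -np.inf)        # V[t, j]: best log score ending in j
    back = np.zeros((T, N), dtype=int)

    # Initialization: V_1(i) = log pi_i + log b_i(x_1)
    V[0] = log_pi + log_B[:, obs[0]]

    for t in range(1, T):
        # Pruning: keep only states within `beam` of the best previous score
        active = np.where(V[t - 1] >= V[t - 1].max() - beam)[0]
        for j in range(N):
            # Recursion: V_t(j) = max_i [V_{t-1}(i) + log a_ij] + log b_j(x_t)
            scores = V[t - 1, active] + log_A[active, j]
            k = scores.argmax()
            V[t, j] = scores[k] + log_B[j, obs[t]]
            back[t, j] = active[k]

    # Backtracking from the best final state
    path = [int(V[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```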
16. SIGN LANGUAGE
- Sign language is a communication system using gestures that are interpreted visually.
- As a whole, sign languages share the same modality, the sign, but they differ from country to country.
17. AMERICAN SIGN LANGUAGE (ASL)
- ASL is the dominant sign language in the US, anglophone Canada, and parts of Mexico.
- Currently, approximately 450,000 deaf people in the United States use ASL as their primary language.
- ASL signs follow a certain order, just as words do in spoken English. However, in ASL one sign can express a meaning that would necessitate the use of several words in speech.
- The grammar of ASL uses spatial locations, motion, and context to indicate syntax.
18. THE ASL ALPHABET
- It is a manual alphabet representing all the letters of the English alphabet, using only the hands.
- Making words using a manual alphabet is called fingerspelling.
- Manual alphabets are a part of sign languages.
- For ASL, the one-handed manual alphabet is used.
- Fingerspelling complements the vocabulary of ASL when spelling a word letter by letter is the preferred or only option, such as with proper names or the titles of works.
(Figure) The one-handed manual alphabet, Aa through Zz.
19. SIGNED ENGLISH (SE)
- SE is a reasonable manual parallel to English.
- The idea behind SE and other signing systems parallel to English is that deaf people will learn English better if they are exposed, visually through signs, to the grammatical features of English.
- SE uses two kinds of gestures: sign words and sign markers.
- Each sign word stands for a separate entry in a standard English dictionary.
- Sign words are signed in the same order as words appear in an English sentence, and are presented in singular, non-past form.
- Sign markers are added to these basic signs to show, for example, that you are talking about more than one thing or that something has happened in the past.
- When this does not represent the word in mind, the manual alphabet can be used to fingerspell the word.
- Most signs in SE are taken from American Sign Language, but they are used in the same order as English words and with the same meaning.
20. ASL vs. SE (an Example)
English sentence: "It is alright if you have a lot."
SE translation (one sign word per English word): IT IS ALL RIGHT IF YOU HAVE A LOT.
(Figure) The corresponding ASL translation, which expresses the same meaning with fewer signs.
21. DEMONSTRATION OF ASL IN OUR SW
(Flowchart) The recognized word (the SR engine's output) is processed against a database of 2,600 pre-recorded ASL video clips:
- In the case of a nonbasic word, extract the basic word out of it.
- Is the basic word within the ASL database vocabulary?
- Yes: the final output is the equivalent ASL video clip of the input word; only in the case of a nonbasic input word, a suitable marker is appended.
- No (none of the database contents matched the input basic word): the final output is fingerspelling of the original input word via the American manual alphabet.
A sketch of this lookup logic follows.
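A minimal sketch of the flowchart's logic; the clip paths, the toy suffix-stripping `stem` helper, and the marker handling are hypothetical stand-ins for the actual implementation.

```python
ASL_CLIPS = {"have": "clips/have.avi", "right": "clips/right.avi"}  # ~2,600 entries

def stem(word):
    """Extract the basic word from a nonbasic one (toy suffix stripper)."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and word[:-len(suffix)] in ASL_CLIPS:
            return word[:-len(suffix)], suffix
    return word, None

def translate(word):
    basic, marker = stem(word.lower())
    if basic in ASL_CLIPS:
        # Play the clip; the marker is appended only for nonbasic input words
        return ASL_CLIPS[basic], marker
    # No match in the database: fingerspell the original word
    return "fingerspell:" + word.upper(), None

print(translate("rights"))  # ('clips/right.avi', 's'): basic word plus marker
print(translate("Khalid"))  # no match: fingerspelled via the manual alphabet
```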
22. Speech to Sign Language Interpreter System - MILESTONE
(Progress chart) Thesis writing outline and progress (% drafted):
- Chapter 1: Introduction
- Chapter 2: State-of-the-Art of SR
- Chapter 3: Sphinx SR
- Chapter 4: Sphinx Decoder
- Chapter 5: Sign Language
- Chapter 6: SW Demo, Conclusions & Further Work
- Appendices
SW development and progress (% completed): SR engine, ASL database, and the overall integrated SW.
23. Thank You
Your questions are most welcome.