IJESM Volume 2, Issue 4, ISSN: 2320-0294, December 2013
International Journal of Engineering, Science and Mathematics, http://www.ijmra.us
Vocal Translation For Muteness People Using
Speech Synthesizer
C. Nijusekar*
P. Jenopaul**
Abstract —
This research enables a mute person to speak without surgery. An electrode placed on the neck picks up the vibration of the person's blabbering voice, and a special speech synthesizer is implemented to produce vowels for him. It is possible for the disabled person to produce vowels by thinking of them, using the speech synthesizer. In the future, this breakthrough may help erase the word "speech disability".
Keywords — blabbering voice, speech disability, articulation disorders, speech synthesizer, average magnitude difference function
* PSN College of Engineering & Technology, Tamil Nadu, India
** Assistant Professor, PSN College of Engineering & Technology, Tamil Nadu, India
I. INTRODUCTION
A mute person is unable to speak properly. A person's speech is judged to be disordered if it is not understood by the listener, draws more attention to the manner of speaking than to its meaning, or is aesthetically unpleasant. The category also includes those whose speech is not understood due to defects such as stammering, nasal voice, hoarse voice, discordant voice and articulation defects. Persons with speech disability are categorised as:
 Persons who could not speak at all;
 Persons who could speak only in single words;
 Persons who could speak only unintelligibly;
 Persons who stammered;
 Persons who could speak only with an abnormal voice, such as a nasal, hoarse or discordant voice;
 Persons who had any other speech defects, such as articulation defects.
II MEDICAL TREATMENTS AVAILABLE
For speech-disabled people, it is possible to regain a voice through surgery, namely cochlear implant surgery. However, this surgery can be performed only on speech-disabled children below the age of six. Besides the risk of surgery, it costs around 8-11 lakh rupees, so it is practically not possible for every person to undergo it. Engineers therefore have to innovate to help these people reach the same level as everyone else in society.
III PROPOSED SYSTEM FOR PERSONS WHO COULD SPEAK ONLY IN SINGLE WORDS
Speech sound disorders may be subdivided into two primary types: articulation disorders (also called phonetic disorders) and phonemic disorders (also called phonological disorders). However, some people have a mixed disorder in which both articulation and phonological problems exist. Though speech sound disorders are associated with childhood, some residual errors may persist into adulthood.
A. Articulation Disorders
Articulation disorders (also called phonetic disorders, or simply "artic disorders") stem from difficulty in learning to physically produce the intended phonemes. They involve the main articulators: the lips, teeth, alveolar ridge, hard palate, velum, glottis and tongue. If the disorder involves any of these articulators, it is an articulation disorder. There are usually fewer errors than with a phonemic disorder, and distortions are more likely (though omissions, additions and substitutions may also be present).
IV PROPOSED SYSTEM
The proposed device can serve as a bridge between two different people, much as interactive voice response systems connect telephone users with the information they need, from anywhere, at any time, in any language.
Figure 1. Overview of Proposed System
The project aims to explore thoroughly the theoretical and practical aspects of human vocal translation techniques. To this end, a number of specific goals were proposed at the start of the project, along with a detailed study of the challenges faced by embedded speech-interaction modules.
The speech synthesizer plays the major role in this project: the system is a vocal translation dictionary for mute people, that is, a speech synthesizer for the speech of disabled people. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or hardware.
A. Neural amplifier
Integrated, low-power, low-noise CMOS neural amplifiers have recently grown in importance as large microelectrode arrays have become practical. With an eye to a future where thousands of signals must be transmitted over a limited-bandwidth link or processed in situ, we are developing low-power neural amplifiers with integrated pre-filtering and measurement of the spike signal, to facilitate spike sorting and data reduction prior to transmission to a data-acquisition system. We have fabricated a prototype circuit in a commercially available 1.5 µm, 2-metal, 2-poly CMOS process that occupies approximately 91,000 µm². We report circuit characteristics for a 1.5 V power supply, suitable for single-cell battery operation. In one specific configuration, the circuit bandpass-filters the incoming signal from 22 Hz to 6.7 kHz while providing a gain of 42.5 dB. With an amplifier power consumption of 0.8 µW, the rms input-referred noise is 20.6 µV.
Figure 2. Neural amplifier
1. Frequency Response and Gain
Although we designed the amplifier for a gain of 40 dB, we measure a midband voltage gain of approximately 42.5 dB (gain ≈ 133). Independent control of the low-frequency corner using rbias and the high-frequency corner using ampbias was confirmed and is shown in Figs. 3 and 4.
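The amplifier itself is an analog CMOS circuit; as a rough software illustration only, the following Python sketch band-limits a signal to the reported 22 Hz to 6.7 kHz passband and applies the reported 42.5 dB midband gain. The sampling rate and the Butterworth filter order are assumptions made for the sketch, not properties of the fabricated amplifier.

```python
# Software approximation of the reported neural-amplifier passband
# (22 Hz - 6.7 kHz, ~42.5 dB midband gain). Illustrative digital filter only,
# not a model of the CMOS circuit; the sampling rate below is an assumption.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000          # assumed sampling rate in Hz
GAIN_DB = 42.5      # midband gain reported for the amplifier
LOW_HZ, HIGH_HZ = 22.0, 6700.0

def bandpass_amplify(x, fs=FS):
    """Band-limit the raw vibration signal and apply the midband gain."""
    sos = butter(2, [LOW_HZ, HIGH_HZ], btype="bandpass", fs=fs, output="sos")
    gain = 10 ** (GAIN_DB / 20.0)      # 42.5 dB -> linear factor ~133
    return gain * sosfilt(sos, x)

if __name__ == "__main__":
    t = np.arange(0, 0.1, 1.0 / FS)
    raw = 1e-4 * np.sin(2 * np.pi * 200 * t)   # toy 200 Hz vibration signal
    print(bandpass_amplify(raw)[:5])
```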
(Figure 1 block labels: vibration signal from speech-disorder patient → voice coder → voice data → correlation process with stored words → voice output.)
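The block labels above summarize the signal path of Figure 1. The following is a minimal, hypothetical sketch of that path, assuming a toy energy-based "voice coder" and a normalized-correlation matcher; the function names, frame size and stored-word table are assumptions, not the authors' implementation.

```python
# Minimal sketch of the Figure 1 pipeline: vibration signal -> voice coder ->
# correlation against stored words -> voice output. Feature coder and
# stored-word table are placeholders; names are hypothetical.
import numpy as np

def code_voice(vibration, frame=160):
    """Toy 'voice coder': per-frame log-energy features of the vibration signal."""
    n = len(vibration) // frame
    frames = vibration[: n * frame].reshape(n, frame)
    return np.log1p((frames ** 2).sum(axis=1))

def best_match(features, stored_words):
    """Pick the stored word whose feature template correlates best."""
    scores = {}
    for word, template in stored_words.items():
        m = min(len(features), len(template))
        a, b = features[:m], template[:m]
        scores[word] = float(np.dot(a, b) /
                             (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(scores, key=scores.get)

# usage: stored_words maps each word to its feature template; the recorded
# waveform of the chosen word would then be sent to the speech synthesizer.
```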
Figure 3. Frequency response of the amplifier as rbias is changed, with ampbias fixed at 0.586 V. The parameter rbias controls the low-frequency corner of the bandpass filter, making it possible to filter out some of the 60 Hz interference present in any recording situation.
Figure 4. Frequency response of the amplifier as ampbias is changed, with rbias fixed at 1.27 V. The parameter ampbias controls the high-frequency corner of the bandpass filter.
With ampbias = 0.610 V and rbias = 1.27 V, the corner frequencies of the bandpass response (-3 dB frequencies) occur at 22 Hz and 6.7 kHz. With ampbias = 0.620 V and rbias = 1.35 V, the corner frequencies occur at 182 Hz and 9 kHz.
B. Speech synthesiser
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or hardware. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.
The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.
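Since the proposed dictionary stores whole pre-recorded words, the domain-limited concatenative approach described above can be illustrated with a small sketch that joins stored word waveforms with short cross-fades. The function name, sampling rate and fade length are assumptions; loading the recorded words from files is left out.

```python
# Sketch of domain-limited concatenative synthesis: whole pre-recorded words
# are joined with a short linear cross-fade. The word-to-waveform mapping is
# assumed to be loaded elsewhere (e.g. from WAV files); only the joining step
# is shown here.
import numpy as np

def crossfade_concat(words, fs=8000, fade_ms=10):
    """Concatenate word waveforms (1-D float arrays) with linear cross-fades."""
    fade = int(fs * fade_ms / 1000)
    out = words[0].astype(float)
    for w in words[1:]:
        w = w.astype(float)
        ramp = np.linspace(0.0, 1.0, fade)
        overlap = out[-fade:] * (1 - ramp) + w[:fade] * ramp
        out = np.concatenate([out[:-fade], overlap, w[fade:]])
    return out
```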
1. Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions)
occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish
has about 800 diphones, and German about 2,500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because a number of freely available software implementations exist.
C. 18-Bit Audio Codec
The PCM3000/3001 is a low-cost, single-chip stereo audio codec (analog-to-digital and digital-to-analog converter) with single-ended analog voltage input and output. Both the ADCs and the DACs employ delta-sigma modulation with 64-times oversampling. The ADCs include a digital decimation filter, and the DACs include an 8-times oversampling digital interpolation filter. The DACs also include digital attenuation, de-emphasis, infinite zero detection and soft mute to form a complete subsystem. The PCM3000/3001 operates with left-justified, right-justified, I2S or DSP data formats. The PCM3000 can be programmed through a three-wire serial interface for special features and data formats; the PCM3001 is pin-programmed for data formats.
Figure 5. ADC Block Diagram
The PCM3000 ADC consists of a band-gap reference, a stereo single-ended-to-differential converter, a fully differential 5th-order delta-sigma modulator, a decimation filter (including a digital high-pass filter), and a serial interface circuit. The block diagram in Figure 5 illustrates the architecture of the ADC section.
Figure 6. 5th-order ADC block diagram
1. PCM Audio Interface
The four-wire digital audio interface of the PCM3000 (DOUT, pin 19) can operate with seven different data formats. For the PCM3000, these formats are selected through program register 3 in software mode; for the PCM3001, the data formats are selected by pin-strapping the three format pins.
Figure 7. Analog Front-End (Single-Channel)
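As a generic illustration of the serial data formats mentioned above (not a register-level description of the PCM3000/3001 interface), the sketch below shows how an 18-bit sample could be positioned in a frame slot for left-justified and I2S-style framing; the 32-bit slot width is an assumption.

```python
# Illustrative packing of an 18-bit two's-complement sample into a 32-bit
# frame slot for two common serial audio formats. This is a generic sketch of
# left-justified vs. I2S framing, not the PCM3000/3001 register behaviour.
def pack_sample(sample_18bit, fmt="left_justified", slot_bits=32):
    word = sample_18bit & 0x3FFFF              # keep the 18 data bits
    if fmt == "left_justified":
        return word << (slot_bits - 18)        # MSB aligned to frame start
    if fmt == "i2s":
        return word << (slot_bits - 18 - 1)    # I2S: one-bit delay after LRCK edge
    raise ValueError("unknown format")

print(hex(pack_sample(0x1FFFF, "left_justified")))
print(hex(pack_sample(0x1FFFF, "i2s")))
```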
D. Hardware Architecture
A programmable hardware artefact, or machine, that lacks its software program is impotent, just as a software artefact, or program, is equally impotent unless it can be used to alter the sequential states of a suitable (hardware) machine. However, a hardware machine and its software program can be designed to perform an almost illimitable number of abstract and physical tasks. Within the computer and software engineering disciplines (and often in other engineering disciplines, such as communications), the terms hardware, software and system therefore came to distinguish between the hardware that runs a software program, the software itself, and the hardware device complete with its program.
Figure 8. Hardware Architecture of speech synthesizer
1. Voice control
Speaker recognition, also called voice control, is the identification of the person who is speaking from characteristics of their voice (voice biometrics). There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said); these two terms are frequently confused, and "voice recognition" can be used for both. In addition, there is a difference between the act of authentication (commonly referred to as speaker verification or speaker authentication) and identification. Finally, there is a difference between speaker recognition (recognizing who is speaking) and speaker diarisation (recognizing when the same speaker is speaking). Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, or it can be used to authenticate or verify the identity of a speaker as part of a security process.
2. Dictionary Query
The dictionary query is an enquiry into a pre-defined memory that stores all the words used to produce the sounds.
3. Fixed Dialog
In this query, when the user gives a command, the corresponding word is selected from the dictionary.
4. Programmable Dialog
If the query is not available in memory, the system requests the user to produce the alternate vowels.
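A minimal sketch of the dictionary query, fixed dialog and programmable dialog logic is given below; the function and table names are hypothetical and the prompt text is only illustrative.

```python
# Hedged sketch of the dictionary query / fixed dialog / programmable dialog
# logic described above: a recognized command selects a stored word; if the
# word is missing, the user is prompted to produce alternate vowels.
def dictionary_query(command, word_table):
    """Return the stored waveform for a recognized command, if any."""
    return word_table.get(command)

def respond(command, word_table, prompt_user):
    entry = dictionary_query(command, word_table)
    if entry is not None:               # fixed dialog: word found, play it
        return entry
    return prompt_user(                 # programmable dialog: ask for alternates
        "Word not in dictionary; please produce the alternate vowels.")
```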
E. Simple Manner for Start/End Point Detection
Before feature extraction and DTW, the start and end points are detected simply from the speech energy (the square of the acoustic amplitude). Fig. 9 shows this simple manner of speeding up start/end-point detection when recording speech streams of different lengths. For each speech stream, equal-size memory spaces are arranged and their contents are initialized as silence data. When a recording process starts, the system waits until the energy sum of 80 speech samples exceeds an experimental threshold; the speech amplitudes sampled from the ADC buffer then start to be written into the memory. The end point is detected, and the memory-writing process is stopped, when the energy sum of 80 speech samples falls below the experimental threshold.
When the DTW process starts to compute the distance between a pair of speech streams, only the non-silence data are taken from the memory for DTW. By using this memory access technique, the start points of different
speech streams can be kept in step when computing the DTW distance. This manner makes it easy to record speech streams of different lengths, and on a resource-limited device it speeds up start/end-point detection for a real-time system.
Figure 9. Start/End Points Detection
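The start/end-point rule above can be sketched as follows, using the 80-sample energy blocks described in the text; the threshold value is an assumption, since the paper sets it experimentally.

```python
# Sketch of the start/end-point detection described above: writing of samples
# into memory starts when the energy of 80 consecutive samples exceeds a
# threshold and stops when it falls back below it. The threshold is assumed.
import numpy as np

def detect_endpoints(samples, block=80, threshold=0.01):
    """Return (start, end) sample indices of the non-silence region, or None."""
    n = len(samples) // block
    energy = (samples[: n * block].reshape(n, block) ** 2).sum(axis=1)
    active = np.flatnonzero(energy > threshold)
    if active.size == 0:
        return None
    return active[0] * block, (active[-1] + 1) * block
```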
F. Frame-synchronous pitch feature extraction
The original AMDF (average magnitude difference function) was proposed by Ross et al. in 1974 [6]. For a frame of length N it is defined as
D(k) = \frac{1}{N} \sum_{n=0}^{N-1-k} \left| s(n+k) - s(n) \right| ,
where s(n) denotes the speech sample sequence.
In this paper, the circular AMDF (C-AMDF) [7] and the modified AMDF (M-AMDF) [8] are integrated to extract circular and modified AMDF (CM-AMDF) pitch features. For a mixed segment containing voiced and unvoiced speech, the C-AMDF can give the pitch period of the voiced part. It is defined as
D_C(k) = \sum_{n=0}^{N-1} \left| s_w(\mathrm{mod}(n+k, N)) - s_w(n) \right| , \quad k = 1, \dots, N ,
where N is the length of the segment, s_w(n) denotes the speech sample sequence with a rectangular window, and mod(n+k, N) represents the modulo operation, i.e. n+k modulo N. Sometimes, however, the global minimum valley found by an AMDF-based method is not the first local minimum valley, especially for voiced speech with good stability and periodicity. To avoid this shortcoming, a transform function is applied that turns AMDF valleys into peaks:
P_C(k) = R_{\max} - D_C(k) , \quad R_{\max} = \max_k \{ D_C(k) \} ,
where n_{\max} is the index of the maximum of P_C(k). In general, the pitch period is then estimated from the short-term AMDF as
k_p = \arg\max_{k_{\min} \le k \le k_{\max}} P_C(k) ,
where k_{\max} and k_{\min} are respectively the maximum and minimum possible pitch lags. For each pitch period k_p of segment i, the pitch frequency is calculated by
f_i = S_R / k_p ,
where S_R is the sampling rate of the speech signal. Fig. 10(b) and (c) show two AMDFs of the same segment. They indicate clearly that the global minimum valley of the basic AMDF is the fourth local minimum valley, as marked "P" in Fig. 10(b), whereas the global maximum peak of the M-AMDF is the first local maximum peak, as marked "P" in Fig. 10(c). Since only the global maximum peak point of the M-AMDF is searched during pitch detection, the errors of pitch multiples that exist in other AMDF methods are greatly reduced.
Fig. 10. (a) Original speech, (b) CAMDF for original speech, and (c) MAMDF for CAMDF results.
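Under the definitions reconstructed above, a small sketch of the C-AMDF with the valley-to-peak transform and the resulting pitch-frequency estimate might look as follows; the sampling rate and the lag search range k_min..k_max are assumptions for the sketch.

```python
# Sketch of the circular AMDF and the valley-to-peak (M-AMDF-style) transform
# described above, followed by pitch-frequency estimation. Search limits and
# sampling rate are assumptions.
import numpy as np

def circular_amdf(frame):
    """D_C(k) = sum_n |s(mod(n+k, N)) - s(n)| for k = 1..N-1."""
    return np.array([np.abs(np.roll(frame, -k) - frame).sum()
                     for k in range(1, len(frame))])

def pitch_hz(frame, fs=8000, k_min=20, k_max=200):
    d = circular_amdf(frame)
    p = d.max() - d                              # valleys become peaks
    k = np.argmax(p[k_min - 1:k_max]) + k_min    # restrict to plausible pitch lags
    return fs / k

if __name__ == "__main__":
    fs, f0 = 8000, 125.0
    t = np.arange(256) / fs
    print(round(pitch_hz(np.sin(2 * np.pi * f0 * t), fs), 1))  # close to 125 Hz
```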
The flowchart of the frame-synchronous extraction scheduling is shown in Fig. 11. After a speech frame (32 ms) is received from the ADC, both the CM-AMDF speech feature extraction and the memory saving are finished before the arrival of the next frame. In our experiments, the total time for CM-AMDF computation and memory saving is only about 0.115 ms, which is shorter than the speech sampling period (0.125 ms), making the system workable for real-time demands.
Fig. 11. Program scheduling for real-time speech feature extraction and recording.
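The real-time requirement described above (per-frame feature extraction finishing before the next frame arrives) can be checked with a toy timing sketch like the one below; the 8 kHz sampling rate follows from the 0.125 ms sampling period quoted in the text, and the feature routine is the same illustrative AMDF computation as in the previous sketch.

```python
# Toy check of the frame-synchronous budget: feature extraction for each
# 32 ms frame should finish before the next frame arrives. The 8 kHz rate
# follows from the 0.125 ms sampling period quoted in the text.
import time
import numpy as np

FS = 8000
FRAME = int(0.032 * FS)          # 32 ms frame = 256 samples at 8 kHz

def process_frame(frame):
    d = np.array([np.abs(np.roll(frame, -k) - frame).sum()
                  for k in range(1, len(frame))])
    return d.max() - d           # same illustrative CM-AMDF-style feature as above

frame = np.random.randn(FRAME)
t0 = time.perf_counter()
process_frame(frame)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"frame processed in {elapsed_ms:.3f} ms; budget is 32 ms")
```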
G. Dynamic Time Warping Recognizer based on CM-AMDF
Template matching has a long history in speech recognition [4]. It is based on the minimum distance between the test and reference patterns along an aligned path, which is obtained using dynamic programming (DP). In the conventional DP-matching technique, the plane of grids shown in Fig. 7 is generally utilized. This figure also shows an example of the
alignment path. The global distance between the test and reference patterns is computed along this alignment path.
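A minimal dynamic-time-warping distance, of the kind the template-matching recognizer relies on, is sketched below; it uses a plain absolute-difference local cost on scalar features and omits the slope and grid constraints of the conventional DP-matching technique.

```python
# Minimal dynamic-time-warping distance between a test and a reference
# feature sequence. Local cost is absolute difference of scalar features;
# this is a sketch, not the paper's exact grid constraints.
import numpy as np

def dtw_distance(test, ref):
    n, m = len(test), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(test[i - 1] - ref[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# usage: the stored word whose pitch-feature template gives the smallest DTW
# distance to the incoming feature sequence is chosen as the recognized word.
print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))
```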
V CONCLUSIONS
This project can be extended to voice-transmission applications, such as voice recognition for people who have specific learning difficulties or who do not share a common language. It can also be implemented for railway enquiries, airways and conferencing, and is applicable to real-time communications.
VI REFERENCES
[1] C. Kim and K. Seo, "Robust DTW-based recognition algorithm for hand-held consumer devices," IEEE Trans. Consumer Electronics, vol. 51, no. 2, pp. 699-709, 2005.
[2] W. Hess, Pitch Determination of Speech Signals. New York: Springer-Verlag, 1983.
[3] T. E. Tremain, "The Government Standard Linear Predictive Coding Algorithm: LPC-10," Speech Technology Magazine, pp. 40-49, April 1982.
[4] L. R. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice Hall, 1993.
[5] C. Lévy, G. Linarès, and P. Nocera, "Comparison of several acoustic modeling techniques and decoding algorithms for embedded speech recognition systems," Workshop on DSP in Mobile and Vehicular Systems, Nagoya, Japan, Apr. 2003.
[6] M. J. Ross et al., "Average magnitude difference function pitch extractor," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-22, pp. 353-362, 1974.
[7] Y. M. Zeng, Z. Y. Wu, H. B. Liu, and L. Zou, "Modified AMDF pitch detection algorithm," IEEE Int. Conf. Machine Learning and Cybernetics, pp. 470-473, Phoenix, AZ, Nov. 2003.
[8] Y. M. Zeng, Z. Y. Wu, H. B. Liu, and L. Zou, "Modified AMDF pitch detection algorithm," IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 341-344, Xi'an, China, Nov. 2002.
More Related Content

What's hot

Speech processing strategies for cochlear prostheses the past, present and fu...
Speech processing strategies for cochlear prostheses the past, present and fu...Speech processing strategies for cochlear prostheses the past, present and fu...
Speech processing strategies for cochlear prostheses the past, present and fu...iaemedu
 
Automatic Speech Recognition
Automatic Speech RecognitionAutomatic Speech Recognition
Automatic Speech RecognitionYogesh Vijay
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversionankit_saluja
 
TEXT-SPEECH PPT.pptx
TEXT-SPEECH PPT.pptxTEXT-SPEECH PPT.pptx
TEXT-SPEECH PPT.pptxNsaroj kumar
 
Speech Recognition Technology
Speech Recognition TechnologySpeech Recognition Technology
Speech Recognition TechnologySrijanKumar18
 
Speech recognition-using-wavelet-transform
Speech recognition-using-wavelet-transformSpeech recognition-using-wavelet-transform
Speech recognition-using-wavelet-transformvidhateswapnil
 
Speech Recognition by Iqbal
Speech Recognition by IqbalSpeech Recognition by Iqbal
Speech Recognition by IqbalIqbal
 
A seminar report on speech recognition technology
A seminar report on speech recognition technologyA seminar report on speech recognition technology
A seminar report on speech recognition technologySrijanKumar18
 
AUTOMATIC SPEECH RECOGNITION- A SURVEY
AUTOMATIC SPEECH RECOGNITION- A SURVEYAUTOMATIC SPEECH RECOGNITION- A SURVEY
AUTOMATIC SPEECH RECOGNITION- A SURVEYIJCERT
 
Speech Recognition
Speech RecognitionSpeech Recognition
Speech Recognitionfathitarek
 
Speech Recognition
Speech RecognitionSpeech Recognition
Speech RecognitionHugo Moreno
 
Speech Recognition: Transcription and transformation of human speech
Speech Recognition: Transcription and transformation of human speechSpeech Recognition: Transcription and transformation of human speech
Speech Recognition: Transcription and transformation of human speechSubmissionResearchpa
 
IRJET- Communication Aid for Deaf and Dumb People
IRJET- Communication Aid for Deaf and Dumb PeopleIRJET- Communication Aid for Deaf and Dumb People
IRJET- Communication Aid for Deaf and Dumb PeopleIRJET Journal
 
Role of language engineering to preserve endangered languages
Role of language engineering to preserve endangered languagesRole of language engineering to preserve endangered languages
Role of language engineering to preserve endangered languagesDr. Amit Kumar Jha
 

What's hot (16)

Speech processing strategies for cochlear prostheses the past, present and fu...
Speech processing strategies for cochlear prostheses the past, present and fu...Speech processing strategies for cochlear prostheses the past, present and fu...
Speech processing strategies for cochlear prostheses the past, present and fu...
 
Automatic Speech Recognition
Automatic Speech RecognitionAutomatic Speech Recognition
Automatic Speech Recognition
 
K010416167
K010416167K010416167
K010416167
 
Speech to text conversion
Speech to text conversionSpeech to text conversion
Speech to text conversion
 
TEXT-SPEECH PPT.pptx
TEXT-SPEECH PPT.pptxTEXT-SPEECH PPT.pptx
TEXT-SPEECH PPT.pptx
 
Speech Recognition Technology
Speech Recognition TechnologySpeech Recognition Technology
Speech Recognition Technology
 
Speech recognition-using-wavelet-transform
Speech recognition-using-wavelet-transformSpeech recognition-using-wavelet-transform
Speech recognition-using-wavelet-transform
 
Speech Recognition by Iqbal
Speech Recognition by IqbalSpeech Recognition by Iqbal
Speech Recognition by Iqbal
 
A seminar report on speech recognition technology
A seminar report on speech recognition technologyA seminar report on speech recognition technology
A seminar report on speech recognition technology
 
AUTOMATIC SPEECH RECOGNITION- A SURVEY
AUTOMATIC SPEECH RECOGNITION- A SURVEYAUTOMATIC SPEECH RECOGNITION- A SURVEY
AUTOMATIC SPEECH RECOGNITION- A SURVEY
 
Speech Recognition
Speech RecognitionSpeech Recognition
Speech Recognition
 
Speech Recognition
Speech RecognitionSpeech Recognition
Speech Recognition
 
Speech Recognition: Transcription and transformation of human speech
Speech Recognition: Transcription and transformation of human speechSpeech Recognition: Transcription and transformation of human speech
Speech Recognition: Transcription and transformation of human speech
 
IRJET- Communication Aid for Deaf and Dumb People
IRJET- Communication Aid for Deaf and Dumb PeopleIRJET- Communication Aid for Deaf and Dumb People
IRJET- Communication Aid for Deaf and Dumb People
 
Automatic Speech Recognition
Automatic Speech RecognitionAutomatic Speech Recognition
Automatic Speech Recognition
 
Role of language engineering to preserve endangered languages
Role of language engineering to preserve endangered languagesRole of language engineering to preserve endangered languages
Role of language engineering to preserve endangered languages
 

Similar to Vocal Translation For Muteness People Using Speech Synthesizer

Assistive technology for hearing and speech disorders
Assistive technology for hearing and speech disordersAssistive technology for hearing and speech disorders
Assistive technology for hearing and speech disordersAlexander Decker
 
Sensory Aids for Persons with Auditory Impairments
Sensory Aids for Persons with Auditory ImpairmentsSensory Aids for Persons with Auditory Impairments
Sensory Aids for Persons with Auditory ImpairmentsDamian T. Gordon
 
Silent sound interface
Silent sound interfaceSilent sound interface
Silent sound interfaceJeevitha Reddy
 
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READING
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READINGLIP READING: VISUAL SPEECH RECOGNITION USING LIP READING
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READINGIRJET Journal
 
An Introduction to Various Features of Speech SignalSpeech features
An Introduction to Various Features of Speech SignalSpeech featuresAn Introduction to Various Features of Speech SignalSpeech features
An Introduction to Various Features of Speech SignalSpeech featuresSivaranjan Goswami
 
Lorm glove for deaf-blind person
Lorm glove for deaf-blind personLorm glove for deaf-blind person
Lorm glove for deaf-blind personAnjali Singh
 
IRJET- A Smart Glove for the Dumb and Deaf
IRJET-  	  A Smart Glove for the Dumb and DeafIRJET-  	  A Smart Glove for the Dumb and Deaf
IRJET- A Smart Glove for the Dumb and DeafIRJET Journal
 
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...IRJET Journal
 
HANDICAP HUMAN INTERACTION DEVICE
HANDICAP HUMAN INTERACTION DEVICEHANDICAP HUMAN INTERACTION DEVICE
HANDICAP HUMAN INTERACTION DEVICEijsrd.com
 
Silent sound technology final report
Silent sound technology final reportSilent sound technology final report
Silent sound technology final reportLohit Dalal
 
Essay On Cochlear Implants
Essay On Cochlear ImplantsEssay On Cochlear Implants
Essay On Cochlear ImplantsCarmen Martin
 
Unit 1 speech processing
Unit 1 speech processingUnit 1 speech processing
Unit 1 speech processingazhagujaisudhan
 
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR
GLOVE BASED GESTURE RECOGNITION USING IR SENSORGLOVE BASED GESTURE RECOGNITION USING IR SENSOR
GLOVE BASED GESTURE RECOGNITION USING IR SENSORIRJET Journal
 
Unspoken Words Recognition: A Review
Unspoken Words Recognition: A ReviewUnspoken Words Recognition: A Review
Unspoken Words Recognition: A Reviewidescitation
 

Similar to Vocal Translation For Muteness People Using Speech Synthesizer (20)

Assistive technology for hearing and speech disorders
Assistive technology for hearing and speech disordersAssistive technology for hearing and speech disorders
Assistive technology for hearing and speech disorders
 
Gesture Vocalizer
Gesture VocalizerGesture Vocalizer
Gesture Vocalizer
 
Gs3412321244
Gs3412321244Gs3412321244
Gs3412321244
 
Sensory Aids for Persons with Auditory Impairments
Sensory Aids for Persons with Auditory ImpairmentsSensory Aids for Persons with Auditory Impairments
Sensory Aids for Persons with Auditory Impairments
 
Silent sound interface
Silent sound interfaceSilent sound interface
Silent sound interface
 
Enable talk
Enable talkEnable talk
Enable talk
 
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READING
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READINGLIP READING: VISUAL SPEECH RECOGNITION USING LIP READING
LIP READING: VISUAL SPEECH RECOGNITION USING LIP READING
 
An Introduction to Various Features of Speech SignalSpeech features
An Introduction to Various Features of Speech SignalSpeech featuresAn Introduction to Various Features of Speech SignalSpeech features
An Introduction to Various Features of Speech SignalSpeech features
 
Lorm glove for deaf-blind person
Lorm glove for deaf-blind personLorm glove for deaf-blind person
Lorm glove for deaf-blind person
 
IRJET- A Smart Glove for the Dumb and Deaf
IRJET-  	  A Smart Glove for the Dumb and DeafIRJET-  	  A Smart Glove for the Dumb and Deaf
IRJET- A Smart Glove for the Dumb and Deaf
 
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...
WEARABLE VIBRATION-BASED DEVICE FOR HEARING-IMPAIRED PEOPLE USING ACOUSTIC SC...
 
HANDICAP HUMAN INTERACTION DEVICE
HANDICAP HUMAN INTERACTION DEVICEHANDICAP HUMAN INTERACTION DEVICE
HANDICAP HUMAN INTERACTION DEVICE
 
Silent sound technology final report
Silent sound technology final reportSilent sound technology final report
Silent sound technology final report
 
Kc3517481754
Kc3517481754Kc3517481754
Kc3517481754
 
Essay On Cochlear Implants
Essay On Cochlear ImplantsEssay On Cochlear Implants
Essay On Cochlear Implants
 
Unit 1 speech processing
Unit 1 speech processingUnit 1 speech processing
Unit 1 speech processing
 
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR
GLOVE BASED GESTURE RECOGNITION USING IR SENSORGLOVE BASED GESTURE RECOGNITION USING IR SENSOR
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR
 
75 78
75 7875 78
75 78
 
75 78
75 7875 78
75 78
 
Unspoken Words Recognition: A Review
Unspoken Words Recognition: A ReviewUnspoken Words Recognition: A Review
Unspoken Words Recognition: A Review
 

More from IJESM JOURNAL

SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...IJESM JOURNAL
 
Comparative study of Terminating Newton Iterations: in Solving ODEs
Comparative study of Terminating Newton Iterations: in Solving ODEsComparative study of Terminating Newton Iterations: in Solving ODEs
Comparative study of Terminating Newton Iterations: in Solving ODEsIJESM JOURNAL
 
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATIONON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATIONIJESM JOURNAL
 
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNS
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNSHOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNS
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNSIJESM JOURNAL
 
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...IJESM JOURNAL
 
Implementation Of TPM with Case study
Implementation Of TPM with Case studyImplementation Of TPM with Case study
Implementation Of TPM with Case studyIJESM JOURNAL
 
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTIONDIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTIONIJESM JOURNAL
 
Vocal Translation For Muteness People Using Speech Synthesizer
Vocal Translation For Muteness People Using Speech SynthesizerVocal Translation For Muteness People Using Speech Synthesizer
Vocal Translation For Muteness People Using Speech SynthesizerIJESM JOURNAL
 
Green supply chain management in Indian Electronics & Telecommunication Industry
Green supply chain management in Indian Electronics & Telecommunication IndustryGreen supply chain management in Indian Electronics & Telecommunication Industry
Green supply chain management in Indian Electronics & Telecommunication IndustryIJESM JOURNAL
 
BER Performance Analysis of Open and Closed Loop Power Control in LTE
BER Performance Analysis of Open and Closed Loop Power Control in LTEBER Performance Analysis of Open and Closed Loop Power Control in LTE
BER Performance Analysis of Open and Closed Loop Power Control in LTEIJESM JOURNAL
 
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...IJESM JOURNAL
 
The Political, Legal & Technological Environment in Global Scenario
The Political, Legal & Technological Environment in Global ScenarioThe Political, Legal & Technological Environment in Global Scenario
The Political, Legal & Technological Environment in Global ScenarioIJESM JOURNAL
 
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHPSUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHPIJESM JOURNAL
 
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...IJESM JOURNAL
 
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...IJESM JOURNAL
 
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEMMATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEMIJESM JOURNAL
 
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATIONINTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATIONIJESM JOURNAL
 
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATEANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATEIJESM JOURNAL
 
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...IJESM JOURNAL
 
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...IJESM JOURNAL
 

More from IJESM JOURNAL (20)

SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...
 
Comparative study of Terminating Newton Iterations: in Solving ODEs
Comparative study of Terminating Newton Iterations: in Solving ODEsComparative study of Terminating Newton Iterations: in Solving ODEs
Comparative study of Terminating Newton Iterations: in Solving ODEs
 
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATIONON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION
 
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNS
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNSHOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNS
HOMOGENOUS BI-QUADRATIC WITH FIVE UNKNOWNS
 
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTRAKHAND, I...
 
Implementation Of TPM with Case study
Implementation Of TPM with Case studyImplementation Of TPM with Case study
Implementation Of TPM with Case study
 
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTIONDIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION
 
Vocal Translation For Muteness People Using Speech Synthesizer
Vocal Translation For Muteness People Using Speech SynthesizerVocal Translation For Muteness People Using Speech Synthesizer
Vocal Translation For Muteness People Using Speech Synthesizer
 
Green supply chain management in Indian Electronics & Telecommunication Industry
Green supply chain management in Indian Electronics & Telecommunication IndustryGreen supply chain management in Indian Electronics & Telecommunication Industry
Green supply chain management in Indian Electronics & Telecommunication Industry
 
BER Performance Analysis of Open and Closed Loop Power Control in LTE
BER Performance Analysis of Open and Closed Loop Power Control in LTEBER Performance Analysis of Open and Closed Loop Power Control in LTE
BER Performance Analysis of Open and Closed Loop Power Control in LTE
 
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Cli...
 
The Political, Legal & Technological Environment in Global Scenario
The Political, Legal & Technological Environment in Global ScenarioThe Political, Legal & Technological Environment in Global Scenario
The Political, Legal & Technological Environment in Global Scenario
 
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHPSUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP
 
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUA...
 
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MA...
 
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEMMATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM
 
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATIONINTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION
 
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATEANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE
 
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...
 
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Prope...
 

Recently uploaded

Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvLewisJB
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...Chandu841456
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
 
pipeline in computer architecture design
pipeline in computer architecture  designpipeline in computer architecture  design
pipeline in computer architecture designssuser87fa0c1
 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixingviprabot1
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
 

Recently uploaded (20)

Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
 
pipeline in computer architecture design
pipeline in computer architecture  designpipeline in computer architecture  design
pipeline in computer architecture design
 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixing
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 

Vocal Translation For Muteness People Using Speech Synthesizer

  • 1. IJESM Volume 2, Issue 4 ISSN: 2320-0294 _________________________________________________________ A Quarterly Double-Blind Peer Reviewed Refereed Open Access International e-Journal - Included in the International Serial Directories Indexed & Listed at: Ulrich's Periodicals Directory ©, U.S.A., Open J-Gage, India as well as in Cabell’s Directories of Publishing Opportunities, U.S.A. International Journal of Engineering, Science and Mathematics http://www.ijmra.us 1 December 2013 Vocal Translation For Muteness People Using Speech Synthesizer C.Nijusekar* P.Jenopaul** Abstract — The research perform has enabled a mute man can speak without surgery. An electrode placed on the neck to get the vibration from blabbering voice of the person and also implement the special speech synthesizer for producing him vowels. It possible for the disable person to produce vowels by thinking of them, using a speech synthesizer. In the future, this breakthrough may help erase the word of speech disability. keywords— blabbering voice, speech disability, articulation disorders, speech synthesiser, & average magnitude difference function , * PSN College of engineering & Technology, Tamilnadu, India ** Assistant professor, PSN College of engineering & Technology. Tamilnadu, India
  • 2. IJESM Volume 2, Issue 4 ISSN: 2320-0294 _________________________________________________________ A Quarterly Double-Blind Peer Reviewed Refereed Open Access International e-Journal - Included in the International Serial Directories Indexed & Listed at: Ulrich's Periodicals Directory ©, U.S.A., Open J-Gage, India as well as in Cabell’s Directories of Publishing Opportunities, U.S.A. International Journal of Engineering, Science and Mathematics http://www.ijmra.us 2 December 2013 I. INTRODUCTION The muteness person has inability to speak properly. The Speech of a person was judged to be disordered if the person’s speech was not understood by the listener, drew attention to the manner in which he/she spoke than to the meaning, and was aesthetically unpleasant. It also includes those whose speech is not understood due to defects in speech, such as stammering, nasal voice, hoarse voice and discordant voice and articulation defects. Persons with speech disability were categorised as.  Persons who could not speak at all;  Persons who could speak only in single words;  Persons who could speak only unintelligibly;  Persons who stammered;  Persons who could speak with abnormal voice; like nasal voice, hoarse voice and discordant voice etc.,  Persons who had any other speech defects, such as articulation defects, etc. II MEDICAL TREATMENTS AVAILABLE For the speech disabled people, it is possible for them to get their voice by surgery- Cochlear Implant surgery. However this surgery can be performed only to speech disabled children below the age of 6. Considering the risk of surgery, it also costs around 8-11 lakhs. So it’s practically not possible for every person to get a surgery. So, we engineers have to innovate to help them reach the same level as us in our society. II PROPOSED SYSTEM FOR PERSONS WHO COULD SPEAK ONLY IN SINGLE WORDS. Speech sound disorders may be subdivided into two primary types, articulation disorders (also called phonetic disorders) and phonemic disorders (also called phonological disorders). However, some may have a mixed disorder in which both articulation and phonological problems exist. Though speech sound disorders are associated with childhood, some residual errors may persist into adulthood. A.Articulation Disorders. Articulation disorders (also called phonetic disorders or simply "artic disorders" for short) are based on difficulty learning to physically produce the intended phonemes. Articulation disorders have to do with the main articulators which are the lips, teeth, alveolar ridge, hard palate, velum, glottis, and the tongue. If the disorder has anything to do with any of these articulators, then it's an articulation disorder. There are usually fewer errors than with a phonemic disorder, and distortions are more likely (though any omissions, additions, and substitutions may also be present) III PROPOSED SYSTEM Mutes man can serve as a bridge between two different peoples, the interactive voice response systems connect telephone users with the information they need, from anywhere at any time any language
Figure 1. Overview of the proposed system (vibration signal from the speech-disordered person → voice coder → voice data → correlation with stored words → voice output).

The project aims to explore thoroughly the theoretical and practical aspects of human vocal-translation techniques. To this end, a number of specific goals were proposed at the start of the project, together with a detailed study of the challenges faced by embedded speech-interaction modules. The speech synthesizer plays the major role in this project: speech synthesis is the artificial production of speech, and the project builds a vocal-translation dictionary for mute people, that is, a speech synthesizer driven by the speech of the disabled person. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or in hardware.

A. Neural Amplifier
Integrated, low-power, low-noise CMOS neural amplifiers have recently grown in importance as large microelectrode arrays have become practical. Looking ahead to a future in which thousands of signals must be transmitted over a limited-bandwidth link or processed in situ, we are developing low-power neural amplifiers with integrated pre-filtering and measurement of the spike signal to facilitate spike sorting and data reduction prior to transmission to a data-acquisition system. We have fabricated a prototype circuit in a commercially available 1.5 µm, 2-metal, 2-poly CMOS process that occupies approximately 91,000 µm². We report circuit characteristics for a 1.5 V power supply, suitable for single-cell battery operation. In one specific configuration, the circuit bandpass-filters the incoming signal from 22 Hz to 6.7 kHz while providing a gain of 42.5 dB. With an amplifier power consumption of 0.8 µW, the RMS input-referred noise is 20.6 µV.

Figure 2. Neural amplifier

1. Frequency Response and Gain
Although the amplifier was designed for a gain of 40 dB, we measure a midband voltage gain of approximately 42.5 dB (gain ≈ 133). Independent control of the low-frequency corner using rbias and of the high-frequency corner using ampbias was confirmed and is shown in Figs. 3 and 4.
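As a rough illustration of the signal flow in Figure 1, the sketch below (Python, using NumPy and SciPy; the sampling rate, function names, and template handling are illustrative assumptions, not details of the actual implementation) band-limits the neck-vibration signal to the amplifier's 22 Hz-6.7 kHz passband, correlates it against stored word templates, and returns the best-matching word for playback.

import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # assumed sampling rate for this sketch (high enough to pass 6.7 kHz)

def bandpass(x, low=22.0, high=6700.0, fs=FS):
    # Band-limit the vibration signal, mimicking the amplifier's 22 Hz - 6.7 kHz passband.
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def best_match(vibration, templates):
    # Correlate the filtered vibration signal against stored word templates and
    # return the label of the template with the highest correlation peak.
    x = bandpass(np.asarray(vibration, dtype=float))
    x /= np.linalg.norm(x) + 1e-12
    scores = {}
    for word, ref in templates.items():
        r = np.asarray(ref, dtype=float)
        r /= np.linalg.norm(r) + 1e-12
        scores[word] = float(np.max(np.correlate(x, r, mode="full")))
    return max(scores, key=scores.get)

# Usage sketch: 'templates' maps dictionary words to pre-recorded reference
# waveforms; the returned label selects the stored word to play back.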
Figure 3. Frequency response of the amplifier as rbias is changed, with ampbias fixed at 0.586 V. The parameter rbias controls the low-frequency corner of the bandpass filter, making it possible to filter out some of the 60 Hz interference present in any recording situation.

Figure 4. Frequency response of the amplifier as ampbias is changed, with rbias fixed at 1.27 V. The parameter ampbias controls the high-frequency corner of the bandpass filter.

With ampbias = 0.610 V and rbias = 1.27 V, the corner frequencies of the bandpass response (the -3 dB frequencies) occur at 22 Hz and 6.7 kHz. With ampbias = 0.620 V and rbias = 1.35 V, the corner frequencies occur at 182 Hz and 9 kHz.

B. Speech Synthesizer
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and it can be implemented in software or in hardware. A text-to-speech (TTS) system converts normal-language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range but may lack clarity, whereas for specific usage domains the storage of entire words or sentences allows high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output. The quality of a speech synthesizer is judged by its similarity to the human voice and by its intelligibility. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.

1. Diphone Synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones and German about 2,500.
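To make the whole-word concatenation idea concrete, here is a minimal sketch (Python with NumPy; the word list, crossfade length, and database layout are illustrative assumptions rather than details from the paper) that joins pre-recorded word units with short crossfades, the domain-specific, word-level variant of the concatenative synthesis described above.

import numpy as np

FS = 16000  # assumed sampling rate of the stored recordings

def crossfade_concat(units, fade_ms=10, fs=FS):
    # Concatenate recorded speech units, blending each joint with a linear crossfade.
    fade = int(fs * fade_ms / 1000)
    ramp = np.linspace(0.0, 1.0, fade)
    out = np.asarray(units[0], dtype=float)
    for u in units[1:]:
        u = np.asarray(u, dtype=float)
        out[-fade:] = out[-fade:] * (1.0 - ramp) + u[:fade] * ramp  # blend the joint
        out = np.concatenate([out, u[fade:]])
    return out

# Usage sketch: 'database' maps dictionary words to pre-recorded waveforms; an
# utterance is synthesized by looking up each selected word and joining the
# recordings, e.g. crossfade_concat([database["help"], database["water"]]).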
In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA, or MBROLA. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and from the robotic-sounding nature of formant synthesis, and it has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because a number of freely available software implementations exist.

C. 18-Bit Audio Codec
The PCM3000/3001 is a low-cost, single-chip stereo audio codec (analog-to-digital and digital-to-analog converter) with single-ended analog voltage input and output. Both the ADCs and the DACs employ delta-sigma modulation with 64-times oversampling. The ADCs include a digital decimation filter, and the DACs include an 8-times oversampling digital interpolation filter. The DACs also include digital attenuation, de-emphasis, infinite zero detection, and soft mute to form a complete subsystem. The PCM3000/3001 operates with left-justified, right-justified, I²S, or DSP data formats. The PCM3000 can be programmed through a three-wire serial interface for special features and data formats, while the PCM3001 can be pin-programmed for data formats.

Figure 5. ADC block diagram

The PCM3000 ADC consists of a band-gap reference, a stereo single-ended-to-differential converter, a fully differential fifth-order delta-sigma modulator, a decimation filter (including a digital high-pass filter), and a serial interface circuit. The block diagram in the data sheet illustrates the architecture of the ADC section.
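The following sketch (Python with NumPy) illustrates the delta-sigma principle the codec relies on. The 64x oversampling ratio is taken from the text; everything else is a simplified first-order model, not the PCM3000's actual fifth-order modulator: a 1-bit quantizer inside a feedback loop running at 64 times the audio rate, followed by a crude decimation filter that averages the bitstream back down.

import numpy as np

OSR = 64  # oversampling ratio quoted for the PCM3000/3001

def delta_sigma_first_order(x):
    # First-order delta-sigma modulator: integrate the input-minus-feedback
    # error and quantize the integrator output to a 1-bit (+1/-1) stream.
    integrator = 0.0
    feedback = 0.0
    bits = np.empty(len(x))
    for i, sample in enumerate(x):
        integrator += sample - feedback
        bits[i] = 1.0 if integrator >= 0 else -1.0
        feedback = bits[i]
    return bits

def decimate(bits, osr=OSR):
    # Crude decimation filter: average each block of OSR one-bit samples to
    # recover a multi-bit sample at the audio rate.
    n = len(bits) // osr
    return bits[: n * osr].reshape(n, osr).mean(axis=1)

# Usage sketch: modulate a 100 Hz tone sampled at 64 times an 8 kHz audio rate,
# then decimate the bitstream back down to 8 kHz.
t = np.arange(0, 0.1, 1.0 / (8000 * OSR))
audio = decimate(delta_sigma_first_order(0.5 * np.sin(2 * np.pi * 100 * t)))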
Figure 6. Fifth-order delta-sigma ADC block diagram

1. PCM Audio Interface
The four-wire digital audio interface of the PCM3000 (including DOUT, pin 19) can operate with seven different data formats. For the PCM3000, these formats are selected through program register 3 in software mode; for the PCM3001, data formats are selected by pin-strapping the three format pins.

Figure 7. Analog front-end (single channel)

D. Hardware Architecture
A programmable hardware artefact, or machine, that lacks its software program is impotent, just as a software artefact, or program, is equally impotent unless it can be used to alter the sequential states of a suitable (hardware) machine. However, a hardware machine and its software program can together be designed to perform an almost illimitable number of abstract and physical tasks. Within the computer and software engineering disciplines (and often in other engineering disciplines, such as communications), the terms hardware, software, and system therefore came to distinguish between the hardware that runs a software program, the software itself, and the hardware device complete with its program.
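As a purely illustrative sketch of how a host controller might select one of these data formats over the codec's three-wire serial interface, the code below (Python) shifts a 16-bit control word out bit by bit and then latches it. The pin-driving callbacks, register layout, and format code used here are placeholders invented for the example; the real register map and timing must be taken from the PCM3000 data sheet.

# Hypothetical three-wire write: the caller supplies functions that drive the
# codec's mode-data, mode-clock, and mode-latch lines.
def write_mode_register(word16, set_data, pulse_clock, pulse_latch):
    # Shift a 16-bit control word MSB-first, then latch it into the codec.
    for bit in range(15, -1, -1):
        set_data((word16 >> bit) & 1)  # present one bit on the data line
        pulse_clock()                  # clock it into the shift register
    pulse_latch()                      # transfer the shifted word into the register

# Placeholder encoding: assume a register index in the upper bits and the
# data-format code in the lower bits (NOT the actual PCM3000 bit layout).
REGISTER_3 = 0x3
FORMAT_CODE = 0x4  # illustrative value only

control_word = (REGISTER_3 << 8) | FORMAT_CODE
# write_mode_register(control_word, set_data, pulse_clock, pulse_latch)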
Figure 8. Hardware architecture of the speech synthesizer

1. Voice Control
Speaker recognition is the identification of the person who is speaking from characteristics of their voice (voice biometrics), also called voice control. There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said); the two terms are frequently confused, and "voice recognition" can be used for both. In addition, there is a difference between the act of authentication (commonly referred to as speaker verification or speaker authentication) and identification. Finally, there is a difference between speaker recognition (recognizing who is speaking) and speaker diarisation (recognizing when the same speaker is speaking). Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, or it can be used to authenticate or verify the identity of a speaker as part of a security process.

2. Dictionary Query
The dictionary query is an enquiry into a pre-defined memory that stores all the words used to produce the sounds.

3. Fixed Dialog
In a fixed dialog, when the user gives a command, the system selects the corresponding word from the dictionary.

4. Programmable Dialog
In a programmable dialog, if the queried word is not available in memory, the system asks the user to produce alternative vowels.

E. A Simple Method for Start/End Point Detection
Before feature extraction and DTW, we detect the start and end points simply from the speech energy (the square of the acoustic amplitude). Figure 9 illustrates this method, which speeds up start/end point detection when recording speech streams of different lengths. For each speech stream we allocate an equal-sized memory space and initialize its contents to silence. When a recording process starts, the system waits until the energy sum of 80 speech samples exceeds an experimental threshold, at which point the speech amplitudes sampled from the ADC buffer begin to be written into memory. The end point is detected, and the memory-writing process stops, when the energy sum of 80 speech samples falls below the threshold. When the DTW process computes the distance between a pair of speech streams, only the non-silence data are taken from memory. With this memory-access technique, the start points of different speech streams stay aligned for computing the DTW distance, speech streams of different lengths are recorded easily, and, on a resource-limited device, start/end point detection is fast enough for a real-time system.

Figure 9. Start/end point detection
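A minimal sketch of this energy-gated recording scheme follows (Python with NumPy; the 80-sample block size comes from the text, while the threshold value and buffer handling are assumptions).

import numpy as np

BLOCK = 80  # energy is summed over blocks of 80 samples, as described above

def endpoints(samples, threshold):
    # Return (start, end) sample indices of the recorded speech, or None if the
    # block energy never rises above the threshold.
    samples = np.asarray(samples, dtype=float)
    n_blocks = len(samples) // BLOCK
    energy = (samples[: n_blocks * BLOCK].reshape(n_blocks, BLOCK) ** 2).sum(axis=1)
    above = energy > threshold
    starts = np.flatnonzero(above)
    if starts.size == 0:
        return None
    start_block = int(starts[0])
    below_after = np.flatnonzero(~above[start_block:])
    end_block = start_block + int(below_after[0]) if below_after.size else n_blocks
    return start_block * BLOCK, end_block * BLOCK

# Usage sketch: only samples[start:end] would be written into the pre-allocated
# (silence-initialized) buffer and later passed to feature extraction and DTW.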
F. Frame-Synchronous Pitch Feature Extraction
The original AMDF (average magnitude difference function) was proposed by Ross et al. in 1974 [6]. It is defined as

D(k) = \sum_{n=1}^{N-k} |s(n) - s(n+k)|,

where s(n) denotes the speech sample sequence. In this paper, the circular AMDF (C-AMDF) [7] and the modified AMDF (M-AMDF) [8] are integrated to extract circular and modified AMDF (CM-AMDF) pitch features. For a mixed segment containing voiced and unvoiced speech, the C-AMDF can give the pitch period of the voiced part. It is defined as

C_R(k) = \sum_{n=1}^{N} |s_w(n) - s_w(\mathrm{mod}(n+k, N))|,

where k = 1, ..., N, N is the length of the segment, s_w(n) denotes the speech sample sequence under a rectangular window, and mod(n+k, N) represents the modulo operation, i.e., n+k modulo N. Sometimes, however, the global minimum valley found by an AMDF-based method is not the first local minimum valley, especially for voiced speech with good stability and periodicity. To avoid this shortcoming, the transform function is defined as

P_C(k) = C_{\max} - C_R(k),

where C_{\max} = \max_k \{C_R(k)\} and n_{\max} is the index of the maximum of P_C(k). In general, the pitch period is estimated from the short-term AMDF as

\hat{P} = \arg\min_{k_{\min} \le k \le k_{\max}} D(k),

where k_{\max} and k_{\min} are respectively the maximum and minimum admissible pitch-period values.
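A compact sketch of this pitch-feature computation is given below (Python with NumPy; the sampling rate, frame length, and lag search range are assumed values, and the transform follows the peak-picking form stated above).

import numpy as np

FS = 8000                 # assumed sampling rate (0.125 ms sampling period)
FRAME = int(0.032 * FS)   # 32 ms frame, as in the text

def camdf(frame):
    # Circular AMDF: C_R(k) = sum_n |s(n) - s((n + k) mod N)| for k = 1..N.
    n = len(frame)
    idx = np.arange(n)
    return np.array([np.sum(np.abs(frame - frame[(idx + k) % n]))
                     for k in range(1, n + 1)])

def cm_amdf_pitch(frame, k_min=20, k_max=200):
    # Estimate the pitch period by peak-picking the transformed (M-AMDF) curve
    # inside the admissible lag range, then convert the lag to a frequency.
    c = camdf(np.asarray(frame, dtype=float))
    p = c.max() - c                      # turn the valleys into peaks
    search = p[k_min - 1:k_max]          # lags k_min .. k_max
    period = k_min + int(np.argmax(search))
    return period, FS / period           # lag in samples and pitch in Hz

# Usage sketch: for each 32 ms frame (FRAME samples) delivered by the ADC,
# cm_amdf_pitch() returns the CM-AMDF pitch feature stored for the recognizer.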
For each segment i, the pitch frequency is calculated as

f_i = S_R / P_i,

where P_i is the estimated pitch period (in samples) and S_R is the sampling rate of the speech signal. Figures 10(b) and 10(c) show two AMDFs of the same segment. They indicate clearly that the global minimum valley of the basic AMDF is the fourth local minimum valley, marked "P" in Fig. 10(b), whereas the global maximum peak of the M-AMDF is the first local maximum peak, marked "P" in Fig. 10(c). Since only the global maximum peak of the M-AMDF is searched during pitch detection, the pitch-multiple errors found with other AMDF variants are greatly reduced.

Fig. 10. (a) Original speech, (b) C-AMDF of the original speech, and (c) M-AMDF computed from the C-AMDF result.

The flowchart of our frame-synchronous extraction scheduling is shown in Fig. 11. After a speech frame (32 ms) is received from the ADC, both the CM-AMDF feature extraction and the memory saving are finished before the next frame arrives. In our experiments, the total time for the CM-AMDF computation and memory saving is only about 0.115 ms, which is shorter than the speech sampling period (0.125 ms), making the system workable under real-time demands.

Fig. 11. Program scheduling for real-time speech feature extraction and recording.

G. Dynamic Time Warping Recognizer Based on CM-AMDF
Template matching has a long history in speech recognition [4]. Basically, it is based on the minimum distance between the test and reference patterns along an aligned path, which is obtained using dynamic programming (DP). The conventional DP-matching technique uses a plane of grid points across which an alignment path is drawn. The global distance is the distance between the test and the reference patterns accumulated along this alignment path.
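The sketch below (Python with NumPy; the local distance and step pattern are the standard symmetric choices, assumed here rather than taken from the paper) computes such a global DTW distance between a test and a reference sequence of CM-AMDF pitch features.

import numpy as np

def dtw_distance(test, ref):
    # Global DTW distance between two 1-D feature sequences. D[i, j] holds the
    # minimum accumulated distance of any alignment path ending at (i, j), using
    # the usual step pattern (insertion, deletion, diagonal match).
    test, ref = np.asarray(test, dtype=float), np.asarray(ref, dtype=float)
    n, m = len(test), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(test[i - 1] - ref[j - 1])           # local distance on the grid
            D[i, j] = cost + min(D[i - 1, j],              # insertion
                                 D[i, j - 1],              # deletion
                                 D[i - 1, j - 1])          # match
    return D[n, m]

# Usage sketch: the stored word whose reference pitch track gives the smallest
# DTW distance to the test utterance is selected as the recognition result, e.g.
# word = min(templates, key=lambda w: dtw_distance(test_track, templates[w]))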
IV CONCLUSIONS
This project can be extended to voice transmission applications, such as voice recognition for people who have specific learning difficulties and for people who do not share a common language. It can also be implemented for railway enquiries, airline enquiries, and conferences, and it is applicable to real-time communications.

V REFERENCES
[1] C. Kim and K. Seo, "Robust DTW-based recognition algorithm for handheld consumer devices," IEEE Trans. Consumer Electronics, vol. 51, no. 2, pp. 699-709, 2005.
[2] W. Hess, Pitch Determination of Speech Signals. New York: Springer-Verlag, 1983.
[3] T. E. Tremain, "The Government Standard Linear Predictive Coding Algorithm: LPC-10," Speech Technology Magazine, pp. 40-49, Apr. 1982.
[4] L. R. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice Hall, 1993.
[5] C. Lévy, G. Linarès, and P. Nocera, "Comparison of several acoustic modeling techniques and decoding algorithms for embedded speech recognition systems," Workshop on DSP in Mobile and Vehicular Systems, Nagoya, Japan, Apr. 2003.
[6] M. J. Ross et al., "Average magnitude difference function pitch extractor," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-22, pp. 353-362, 1974.
[7] Y. M. Zeng, Z. Y. Wu, H. B. Liu, and L. Zou, "Modified AMDF pitch detection algorithm," IEEE Int. Conf. Machine Learning and Cybernetics, pp. 470-473, Phoenix, AZ, Nov. 2003.
[8] Y. M. Zeng, Z. Y. Wu, H. B. Liu, and L. Zou, "Modified AMDF pitch detection algorithm," IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 341-344, Xi'an, China, Nov. 2002.