Silent sound interface


  1. SEMINAR BY: JEEVITHA R (1EC08EC018)    GUIDED BY: Ms VIDYA S BENNUR
  2. CONTENTS:
     • Introduction
     • What is speech?
     • Sources of information
     • Brain computer interface (BCI)
     • Speech synthesis
     • Speech synthesis technologies
     • Block diagram
     • Features
     • Methods: electromyography, image processing
     • Applications
     • In fiction
     • Reference
  3. You are in a theatre, a noisy restaurant or a bus, and the noise around you is a big problem while talking on a mobile phone. In future this problem may be eliminated with "silent sound technology", a new technology unveiled at the CeBIT fair. It transforms lip movements into a computer-generated voice for the listener at the other end.
     A silent speech interface is a device that allows speech communication without using the sound made when people vocalize their speech sounds. As such, it is a type of electronic lip reader. It works by a computer identifying the phonemes that an individual pronounces from non-auditory sources of information about their speech movements. These are then used to recreate the speech using speech synthesis.
  4. The device uses electromyography, monitoring the tiny muscular movements that occur when we speak and converting them into electrical pulses that can be turned into speech without a sound being uttered. It also uses an image processing technique that converts digital data into a film image with minimal corrections and calibration.
  5. Speech is the vocalized form of human communication. It is based upon the syntactic combination of lexical items and names drawn from very large vocabularies (usually about 10,000 different words). A gestural form of human communication exists for the deaf in the form of sign language. Speech in some cultures has become the basis of a written language, often one that differs in its vocabulary, syntax and phonetics from its associated spoken one, a situation called diglossia.
  6. SOURCES OF INFORMATION:
     • Vocal tract
     • Bone conduction
  7. The vocal tract is the cavity in human beings and in animals where sound that is produced at the sound source (larynx in mammals; syrinx in birds) is filtered.
  8. Bone conduction is the conduction of sound to the inner ear through the bones of the skull. Some hearing aids employ bone conduction, achieving an effect equivalent to hearing directly by means of the ears. A headset is ergonomically positioned on the temple and cheek, and the electromechanical transducer, which converts electric signals into mechanical vibrations, sends sound to the internal ear through the cranial bones. Likewise, a microphone can be used to record spoken sounds via bone conduction. The first description, in 1923, of a bone conduction hearing aid was Hugo Gernsback's "Osophone", which he later elaborated on with his "Phonosone".
  9. Categories:
     • Ordinary products
     • Hearing aids
     • Specialized communication products
     Advantages:
     • Ears free
     • High sound clarity in very noisy environments
     • Can give a perception of stereo sound
     Disadvantages:
     • Some implementations require more power than headphones
     • Less clear recording and playback than headphones
  10. A brain computer interface, often called a mind machine interface (MMI) or sometimes a direct neural interface, is a direct communication pathway between the brain and an external device. The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-90s.
  11. Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity.
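As a rough illustration of the concatenation idea, the sketch below joins pre-recorded waveform units from a hypothetical `database` (a dict mapping unit names to sample arrays). The unit inventory, sample rate and cross-fade length are illustrative assumptions, not details of any system described here.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sample rate, Hz

def synthesize(units, database):
    """Concatenate pre-recorded waveform units (phones/diphones) into one signal.

    `units` is a sequence of unit names; `database` maps each name to a
    1-D numpy array of samples recorded at SAMPLE_RATE (hypothetical).
    """
    fade = int(0.005 * SAMPLE_RATE)  # 5 ms cross-fade to soften the joins
    out = database[units[0]].copy()
    for name in units[1:]:
        seg = database[name]
        ramp = np.linspace(0.0, 1.0, fade)
        # Overlap-add at the boundary so the splice does not click.
        out[-fade:] = out[-fade:] * (1.0 - ramp) + seg[:fade] * ramp
        out = np.concatenate([out, seg[fade:]])
    return out
```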
  12. Speech synthesizing process: The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems have included speech synthesizers since the early 1980s.
  13. The most important qualities of a speech synthesis system are naturalness and intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. There are 8 types of synthesizing technologies:
     a) Concatenative synthesis
     b) Unit selection synthesis
     c) Diphone synthesis
     d) Domain-specific synthesis
     e) Formant synthesis
     f) Articulatory synthesis
     g) HMM-based synthesis
     h) Sine wave synthesis
  14. CONCATENATIVE SYNTHESIS: Concatenative synthesis is based on the concatenation (or stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech.
     UNIT SELECTION SYNTHESIS: Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases and sentences.
     DIPHONE SYNTHESIS: Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database.
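The diphone counts quoted above come from enumerating every adjacent phone pair a language allows. A toy sketch of that enumeration over a phonetically transcribed corpus (the transcription format is an assumption):

```python
def diphone_inventory(corpus):
    """Collect the set of diphones (adjacent phone pairs) from a corpus.

    `corpus` is an iterable of utterances, each a list of phone symbols.
    The size of this set is what differs between languages (roughly 800
    for Spanish, 2500 for German).
    """
    inventory = set()
    for phones in corpus:
        inventory.update(zip(phones, phones[1:]))
    return inventory

# Example: a single-utterance "corpus" for the word "hello".
print(sorted(diphone_inventory([["h", "e", "l", "ou"]])))
# [('e', 'l'), ('h', 'e'), ('l', 'ou')]
```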
  15. DOMAIN-SPECIFIC SYNTHESIS: Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports.
     FORMANT SYNTHESIS: Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis). Parameters such as fundamental frequency, voicing and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis.
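A minimal sketch of the rule-based idea: a pulse train at the fundamental frequency is passed through a cascade of second-order resonators, one per formant. The formant frequencies and bandwidths below are illustrative values roughly matching the vowel /a/, not parameters from any specific synthesizer.

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000  # assumed sample rate, Hz

def resonator(freq, bandwidth):
    """Second-order IIR resonator for one formant (Klatt-style coefficients)."""
    r = np.exp(-np.pi * bandwidth / FS)
    theta = 2.0 * np.pi * freq / FS
    b = [1.0 - 2.0 * r * np.cos(theta) + r * r]   # normalized for unity DC gain
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return b, a

def formant_vowel(f0=120, formants=((730, 90), (1090, 110), (2440, 140)),
                  duration=0.5):
    """Rule-based vowel: an impulse train at f0 filtered by formant resonators."""
    n = int(duration * FS)
    source = np.zeros(n)
    source[::int(FS / f0)] = 1.0        # glottal impulse train at ~f0
    out = source
    for freq, bw in formants:           # cascade one resonator per formant
        b, a = resonator(freq, bw)
        out = lfilter(b, a, out)
    return out / np.max(np.abs(out))    # normalize to [-1, 1]
```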
  16. ARTICULATORY SYNTHESIS: Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems.
     HMM-BASED SYNTHESIS: HMM-based synthesis is a synthesis method based on hidden Markov models, also called statistical parametric synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (vocal source) and duration (prosody) of speech are modelled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion.
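As a toy illustration of the statistical parametric idea (not a real HMM-TTS system), each state below holds the mean of a Gaussian over a couple of acoustic parameters plus an expected duration, and generation emits the maximum-likelihood (mean) parameters frame by frame. All numbers are invented for illustration.

```python
import numpy as np

# Each "state": (mean of [f0_hz, spectral_value], duration in frames).
states = [
    (np.array([110.0, 0.2]), 8),
    (np.array([125.0, 0.5]), 12),
    (np.array([100.0, 0.1]), 6),
]

def generate_trajectory(states):
    """Emit the maximum-likelihood (mean) parameters, state by state.

    A real HMM synthesizer also models variances and dynamic (delta)
    features so the trajectory comes out smooth; this sketch keeps only
    the piecewise-constant means.
    """
    frames = [mean for mean, dur in states for _ in range(dur)]
    return np.stack(frames)  # shape: (total_frames, n_params)

print(generate_trajectory(states).shape)  # (26, 2)
```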
  17. SINE WAVE SYNTHESIS: Sine wave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.
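A sketch of the whistle idea: each formant track is replaced by a single sinusoid whose frequency follows that formant. The frame-based track format and sample rate are assumptions.

```python
import numpy as np

FS = 16000  # assumed sample rate, Hz

def sinewave_speech(formant_tracks, frame_ms=10):
    """Replace each formant with a pure tone following its frequency track.

    `formant_tracks` is a list of equal-length arrays of per-frame
    frequencies in Hz (e.g. three measured formant trajectories, one
    value per `frame_ms` frame).
    """
    n_per_frame = int(FS * frame_ms / 1000)
    out = np.zeros(len(formant_tracks[0]) * n_per_frame)
    for track in formant_tracks:
        freq = np.repeat(track, n_per_frame)       # hold each frame's value
        phase = 2 * np.pi * np.cumsum(freq) / FS   # integrate to get phase
        out += np.sin(phase)
    return out / len(formant_tracks)               # keep amplitude in [-1, 1]
```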
  18. BLOCK DIAGRAM:
  19. FEATURES: AUDIO SPOTLIGHT: The Audio Spotlight transmitters generate a column of sound between three and five degrees wider than the transmitter. It converts ordinary audio into high-frequency ultrasonic signals that are outside the range of normal hearing. As these sound waves push out from the source, they interact with air pressure to create audible sounds.
     Sound field distribution is shown with equal loudness contours for a standard 1 kHz tone. The center area is loudest at 100% amplitude, while the sound level just outside the illustrated beam area is less than 10%. Audio Spotlight systems are much more sensitive to listener distance than traditional loudspeakers, and maximum performance is attained at roughly 1-2 m (3-6 feet) from the listener.
     Typical levels are 80 dB SPL at 1 kHz for the AS-16 and 85 dB SPL for the AS-24 model. The larger AS-24 can output about twice the power and twice the low-frequency range.
  20. This simulation is for a fixed source size (0.4 m / 16 in) with varying wavelength. From the statements above, we expect to see an omnidirectional response for a wavelength that is large relative to the source, and higher directivity as the wavelength decreases.
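The trend the slide describes (nearly omnidirectional at long wavelengths, a tight beam at short ones) can be checked against the standard far-field directivity of a circular piston source, D(θ) = |2·J1(ka·sinθ)/(ka·sinθ)|. A minimal sketch, assuming a 0.2 m radius to match the 0.4 m source quoted above; all other values are illustrative:

```python
import numpy as np
from scipy.special import j1

def piston_directivity(theta, freq, radius=0.2, c=343.0):
    """Far-field directivity of a circular piston of radius `radius` (m).

    D(theta) = |2 J1(k a sin(theta)) / (k a sin(theta))| with k = 2*pi*f/c.
    """
    k = 2.0 * np.pi * freq / c
    x = k * radius * np.sin(theta)
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)  # avoid 0/0 on axis
    return np.abs(2.0 * j1(x) / x)

# The beam narrows as frequency rises, i.e. as the wavelength shrinks
# relative to the source -- the behaviour described above.
angles = np.radians(np.linspace(0, 90, 4))   # 0, 30, 60, 90 degrees
for f in (500.0, 5000.0, 50000.0):           # audible tones vs. ultrasonic carrier
    print(int(f), np.round(piston_directivity(angles, f), 3))
```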
  21. METHODS: ELECTROMYOGRAPHY AND IMAGE PROCESSING
  22. ELECTROMYOGRAPHY: Electromyography is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG is performed using an instrument called an electromyograph, which produces a record called an electromyogram. An electromyograph detects the electrical potential generated by muscle cells when these cells are electrically or neurologically activated.
  23. Electromyographic sensors attached to the face record the electrical signals produced by the facial muscles and compare them with pre-recorded signal patterns of spoken words. When there is a match, that sound is transmitted to the other end of the line, and the person at the other end hears the spoken words. A sketch of this matching step follows below.
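The match-against-prerecorded-patterns step might look like the toy template matcher below, using normalized cross-correlation over fixed-length windows. Real systems use richer EMG features and sequence models; the window and template format here is an assumption.

```python
import numpy as np

def classify_emg(window, templates):
    """Match one fixed-length EMG window against pre-recorded word templates.

    `templates` maps a word to a reference signal the same length as
    `window`; the word whose template has the highest normalized
    cross-correlation with the window wins.
    """
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.dot(a, b)) / len(a)

    return max(templates, key=lambda word: ncc(window, templates[word]))

# Hypothetical usage: match a noisy new window against two stored patterns.
rng = np.random.default_rng(0)
templates = {"yes": rng.standard_normal(200), "no": rng.standard_normal(200)}
window = templates["yes"] + 0.3 * rng.standard_normal(200)
print(classify_emg(window, templates))  # -> "yes"
```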
  24. For such an interface, we use four kinds of TRANSDUCERS:
     1. Vibration sensors
     2. Pressure sensors
     3. Electromagnetic sensors
     4. Motion sensors
     IMAGE PROCESSING:
     • The simplest form of image processing converts the data tape into a film image with minimal corrections and calibrations.
  25. Image processing flow: digital data → pre-processing (with image enhancement and ancillary data) → feature extraction → selection of training data → manual interpretation → decision and classification (supervised or unsupervised) → classification output → post-processing operations → accuracy assessment → outputs (maps and imageries, reports, data). A toy version of the classification step follows below.
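As a stand-in for the "decision and classification" box, here is a minimal supervised classifier that assigns each pixel to the class with the nearest mean training vector; the feature layout and class names are assumptions.

```python
import numpy as np

def nearest_mean_classify(pixels, training):
    """Assign each pixel to the class with the nearest mean training vector.

    `pixels` is an (N, bands) array; `training` maps a class name to an
    (M, bands) array of labelled example vectors.
    """
    names = list(training)
    means = np.stack([training[c].mean(axis=0) for c in names])
    # Distance from every pixel to every class mean, then argmin per pixel.
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# Hypothetical two-band example with "water" and "land" training samples.
training = {"water": np.array([[0.1, 0.2], [0.15, 0.25]]),
            "land": np.array([[0.7, 0.6], [0.8, 0.65]])}
pixels = np.array([[0.12, 0.22], [0.75, 0.6]])
print(nearest_mean_classify(pixels, training))  # ['water', 'land']
```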
  26. APPLICATIONS:
     • As we know, in space there is no medium for sound to travel; therefore this technology can be best utilized by astronauts.
     • We can make silent calls even while standing in a crowded place.
     • This technology is helpful for people without vocal cords or those suffering from aphasia (a speech disorder).
     • This technology can be used for communication in noisy environments.
     • Telling a secret PIN or credit card number on the phone is now easy, as there is no one to eavesdrop anymore.
     • Since the electrical signals are universal, they can be translated into any language. Native speakers can translate them before sending them to the other side; hence the speech can be converted into any language of choice, currently German, English and French.
  27. LIMITATIONS:
     • Translation into the majority of languages is possible, but in languages such as Chinese different tones hold different meanings while the facial movements stay the same; hence this technology is difficult to apply in such situations.
     • From a security point of view, recognizing who you are talking to gets complicated; even differentiating between people and emotions cannot be done, which means you will always feel you are talking to a robot.
     • The device presently needs nine leads to be attached to the face, which is quite impractical for everyday use.
  28. FUTURE SCOPE:
     • Silent sound technology opens the way to a bright future for speech recognition, from simple voice commands to memos dictated over the phone; all of this becomes fairly possible in noisy public places.
     • Rather than having electrodes hanging all around your face, the electrodes may in future be incorporated into the handset itself.
     • It may have features like lip reading based on image recognition and processing rather than electromyography.
     • Nanotechnology will be a mentionable step towards making the device handy.
  29. Engineers claim that the device works with 99 percent efficiency. It is difficult to compare SSI technologies directly in a meaningful way. Since many of the systems are still preliminary, it would not make sense, for example, to compare speech recognition scores or synthesis quality at this stage. With a few abstractions, however, it is possible to shed light on the range of applicability and the potential for future commercialization of the different methods.
