Automatic Speech Recognition
Manthan Gandhi
N.B.Mehta College (Bordi)
Automatic speech recognition
What is the task?
What are the main difficulties?
How is it approached?
How good is it?
How much better could it be?

2/34
What is the task?
Getting a computer to understand spoken language
By “understand” we might mean
React appropriately
Convert the input speech into another medium, e.g. text

Several variables impinge on this (see later)

3/34
How do humans do it?

Articulation produces sound waves, which the ear conveys to the brain for processing
4/34
How might computers do it?
Acoustic waveform

Acoustic signal

Digitization
Acoustic analysis of the speech signal
Linguistic interpretation

5/34

Speech recognition
What’s hard about that?
 Digitization
 Converting analogue signal into digital representation

 Signal processing
 Separating speech from background noise

 Phonetics
 Variability in human speech

 Phonology
 Recognizing individual sound distinctions (similar phonemes)

 Lexicology and syntax
 Disambiguating homophones
 Features of continuous speech

 Syntax and pragmatics
 Interpreting prosodic features

 Pragmatics
 Filtering of performance errors (disfluencies)
6/34
Digitization
Analogue to digital conversion
Sampling and quantizing
Use filters to measure energy levels for various points on the frequency spectrum
Knowing the relative importance of different frequency bands (for speech) makes this process more efficient
E.g. high-frequency sounds are less informative, so can be sampled using a broader bandwidth (log scale)
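As a hedged illustration of that log-scale idea, the sketch below builds a crude set of triangular filters spaced evenly on the mel (roughly logarithmic) frequency scale and applies them to one frame's power spectrum; the sample rate, FFT size, and filter count are illustrative assumptions, not figures from the slides.

import numpy as np

def mel(f):
    # Convert frequency in Hz to the mel (roughly logarithmic) scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_filterbank(n_filters=26, n_fft=512, sample_rate=16000):
    # Filter centres are evenly spaced in mel, hence logarithmically in Hz,
    # so high frequencies are covered by broader bands
    low, high = mel(0.0), mel(sample_rate / 2.0)
    mel_points = np.linspace(low, high, n_filters + 2)
    hz_points = 700.0 * (10.0 ** (mel_points / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fbank[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fbank[i - 1, k] = (right - k) / max(right - centre, 1)
    return fbank

# Energy per band for one frame: band_energies = mel_filterbank() @ power_spectrum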

7/34
Separating speech from background noise
Noise cancelling microphones
Two mics, one facing speaker, the other facing away
Ambient noise is roughly same for both mics
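A minimal sketch of the two-microphone idea, assuming both channels are already sampled and time-aligned as numpy arrays; real noise cancellation uses adaptive filtering, but a simple subtraction shows the principle.

import numpy as np

def cancel_ambient(speech_mic, ambient_mic, gain=1.0):
    # speech_mic: signal from the microphone facing the speaker
    # ambient_mic: signal from the microphone facing away
    # Ambient noise is roughly the same in both channels, so subtracting
    # the (scaled) ambient channel leaves mostly the speech
    return speech_mic - gain * ambient_mic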

Knowing which bits of the signal relate to speech
Spectrographic analysis

8/34
Variability in individuals’ speech
Variation among speakers due to
Vocal range (f0, and pitch range – see later)
Voice quality (growl, whisper, physiological elements such as nasality, adenoidality, etc.)
ACCENT !!! (especially vowel systems, but also consonants, allophones, etc.)
Variation within speakers due to
Health, emotional state
Ambient conditions

Speech style: formal read vs spontaneous
9/34
Speaker-(in)dependent systems
 Speaker-dependent systems
 Require “training” to “teach” the system your individual idiosyncrasies
 The more the merrier, but typically nowadays 5 or 10 minutes is enough
 User asked to pronounce some key words which allow the computer to infer details of the user’s accent and voice
 Fortunately, languages are generally systematic
 More robust
 But less convenient
 And obviously less portable

 Speaker-independent systems
 Language coverage is reduced to compensate for the need to be flexible in phoneme identification
 Clever compromise is to learn on the fly
10/34
Identifying phonemes
Differences between some phonemes are sometimes very small
May be reflected in speech signal (e.g. vowels have more or less distinctive f1 and f2)
Often show up in coarticulation effects (transition to next sound)
 e.g. aspiration of voiceless stops in English

Allophonic variation

11/34
Disambiguating homophones
Mostly differences are recognised by humans by context and need to make sense
It’s hard to wreck a nice beach
What dime’s a neck’s drain to stop port?

Systems can only recognize words that are in their lexicon, so limiting the lexicon is an obvious ploy
Some ASR systems include a grammar which can help disambiguation

12/34
(Dis)continuous speech
Discontinuous speech much easier to recognize
Single words tend to be pronounced more clearly

Continuous speech involves contextual coarticulation effects
Weak forms
Assimilation
Contractions

13/34
Interpreting prosodic features
Pitch, length and loudness are used to indicate “stress”
All of these are relative
On a speaker-by-speaker basis
And in relation to context

Pitch and length are phonemic in some languages

14/34
Pitch
Pitch contour can be extracted from speech signal
But pitch differences are relative
One man’s high is another (wo)man’s low
Pitch range is variable

Pitch contributes to intonation
But has other functions in tone languages

Intonation can convey meaning

15/34
Length
Length is easy to measure but difficult to interpret
Again, length is relative
It is phonemic in many languages
Speech rate is not constant – slows down at the end of a sentence

16/34
Loudness
Loudness is easy to measure but difficult to interpret
Again, loudness is relative

17/34
Performance errors
Performance “errors” include
Non-speech sounds
Hesitations
False starts, repetitions

Filtering implies handling at syntactic level or above
Some disfluencies are deliberate and have pragmatic effect – this is not something we can handle in the near future

18/34
Approaches to ASR
Template matching
Knowledge-based (or rule-based) approach
Statistical approach:
Noisy channel model + machine learning

19/34
Template-based approach
Store examples of units (words, phonemes), then find the example that most closely fits the input
Extract features from the speech signal, then it’s “just” a complex similarity matching problem, using solutions developed for all sorts of applications
OK for discrete utterances, and a single user
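A toy sketch of template matching with dynamic time warping (DTW), a classic way to compare a spoken input against stored templates despite differences in speaking rate; the feature sequences here are placeholders for whatever acoustic features the front end produces, and the word labels are illustrative.

import numpy as np

def dtw_distance(template, input_seq):
    # template, input_seq: arrays of shape (frames, features)
    n, m = len(template), len(input_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - input_seq[j - 1])
            # Allow stretching or compressing the time axis
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognize(input_seq, templates):
    # templates: dict mapping a word label to its stored feature sequence;
    # pick the template with the smallest warped distance
    return min(templates, key=lambda w: dtw_distance(templates[w], input_seq))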

20/34
Template-based approach
Hard to distinguish very similar templates
And quickly degrades when input differs from templates
Therefore needs techniques to mitigate this degradation:
More subtle matching techniques
Multiple templates which are aggregated

Taken together, these suggested …

21/34
Rule-based approach
Use knowledge of phonetics and linguistics to guide the search process
Templates are replaced by rules expressing everything (anything) that might help to decode:
Phonetics, phonology, phonotactics
Syntax
Pragmatics

22/34
Rule-based approach
Typical approach is based on “blackboard” architecture:
At each decision point, lay out the possibilities
Apply rules to determine which sequences are permitted

Poor performance due to
Difficulty to express rules
Difficulty to make rules interact
Difficulty to know how to improve the system

23/34

[Blackboard diagram: candidate phones at each decision point, e.g. s / ʃ, k / h / p / t, i: / iə / ɪ, ʃ / tʃ / h / s]
 Identify individual phonemes
 Identify words
 Identify sentence structure and/or meaning
 Interpret prosodic features (pitch, loudness, length)
24/34
Statistics-based approach
Can be seen as extension of template-based approach, using more powerful mathematical and statistical tools
Sometimes seen as “anti-linguistic” approach
Fred Jelinek (IBM, 1988): “Every time I fire a linguist my system improves”

25/34
Statistics-based approach
Collect a large corpus of transcribed speech recordings
Train the computer to learn the correspondences (“machine learning”)
At run time, apply statistical processes to search through the space of all possible solutions, and pick the statistically most likely one

26/34
Machine learning
Acoustic and Lexical Models
Analyse training data in terms of relevant features
Learn from large amount of data different possibilities
 different phone sequences for a given word
 different combinations of elements of the speech signal for a given
phone/phoneme
Combine these into a Hidden Markov Model expressing the probabilities
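A compact sketch of the idea, assuming a left-to-right HMM over the phones of one word; the phone labels, transition probabilities, and emission scores below are made-up illustrations, and the forward algorithm computes how likely an observed feature sequence is under that word model.

import numpy as np

# Hypothetical left-to-right HMM for the word "cat": one state per phone.
# All numbers are illustrative, not trained values.
states = ["k", "ae", "t"]
trans = np.array([[0.6, 0.4, 0.0],    # stay in /k/ or move to /ae/
                  [0.0, 0.7, 0.3],    # stay in /ae/ or move to /t/
                  [0.0, 0.0, 1.0]])   # stay in /t/
start = np.array([1.0, 0.0, 0.0])

def forward(emission_probs):
    # emission_probs: array (frames, states) giving P(frame | state),
    # i.e. how well each acoustic frame matches each phone state
    alpha = start * emission_probs[0]
    for frame in emission_probs[1:]:
        alpha = (alpha @ trans) * frame
    return alpha.sum()   # likelihood of the observation under this word model

# Toy observation: four frames that mostly match /k/ /ae/ /ae/ /t/
obs = np.array([[0.8, 0.1, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.2, 0.7]])
print(forward(obs))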

27/34
HMMs for some words

28/34
Language model
Models likelihood of word given previous word(s)
n-gram models:
Build the model by calculating bigram or trigram probabilities from text training corpus
Smoothing issues
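A toy bigram model with add-one (Laplace) smoothing, assuming a plain list of tokenized training sentences; real systems use far larger corpora and more careful smoothing (e.g. Kneser-Ney), but the counting idea is the same.

from collections import Counter

def train_bigram(sentences):
    # sentences: list of token lists, e.g. [["recognize", "speech"], ...]
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)
    def prob(word, prev):
        # Add-one smoothing so unseen bigrams never get zero probability
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return prob

prob = train_bigram([["recognize", "speech"], ["wreck", "a", "nice", "beach"]])
print(prob("speech", "recognize"))   # P(speech | recognize)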

29/34
The Noisy Channel Model

• Search through the space of all possible sentences
• Pick the one that is most probable given the waveform
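In the standard noisy-channel formulation this is Ŵ = argmax over W of P(W) · P(O | W), where O is the acoustic observation, P(W) comes from the language model and P(O | W) from the acoustic and lexical models.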
30/34
The Noisy Channel Model
Use the acoustic model to give a set of likely phone sequences
Use the lexical and language models to judge which of these are likely to result in probable word sequences
The trick is having sophisticated algorithms to juggle the statistics
A bit like the rule-based approach except that it is all learned automatically from data
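A schematic sketch of how the scores are typically juggled: hypotheses are compared in log space, with a language-model weight balancing the two knowledge sources; the weight value, scores, and hypothesis texts below are illustrative assumptions.

def combined_score(acoustic_logprob, lm_logprob, lm_weight=10.0):
    # Standard practice: work in log space and scale the language model score
    # so that neither knowledge source dominates
    return acoustic_logprob + lm_weight * lm_logprob

# Compare two hypothesized transcriptions of the same waveform
hyps = {
    "recognize speech":   combined_score(-120.0, -8.0),
    "wreck a nice beach": combined_score(-118.0, -15.0),
}
print(max(hyps, key=hyps.get))   # hypothesis with the best combined score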

31/34
Evaluation
Funders have been very keen on competitive quantitative evaluation
Subjective evaluations are informative, but not cost-effective
For transcription tasks, word-error rate is popular (though can be misleading: all words are not equally important)
For task-based dialogues, other measures of understanding are needed
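A small sketch of word-error rate as it is usually computed: the minimum number of substitutions, insertions and deletions needed to turn the reference into the hypothesis, divided by the reference length (standard word-level edit distance; the example sentences are made up).

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("it is hard to recognize speech",
                      "it is hard to wreck a nice beach"))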

32/34
Comparing ASR systems
Factors include
Speaking mode: isolated words vs continuous speech
Speaking style: read vs spontaneous
“Enrollment”: speaker (in)dependent
Vocabulary size (small <20 … large > 20,000)
Equipment: good quality noise-cancelling mic … telephone
Size of training set (if appropriate) or rule set
Recognition method

33/34
Remaining problems

 Robustness – graceful degradation, not catastrophic failure
 Portability – independence of computing platform
 Adaptability – to changing conditions (different mic, background noise, new speaker, new task domain, new language even)
 Language Modelling – is there a role for linguistics in improving the language models?
 Confidence Measures – better methods to evaluate the absolute correctness of hypotheses
 Out-of-Vocabulary (OOV) Words – systems must have some method of detecting OOV words, and dealing with them in a sensible way
 Spontaneous Speech – disfluencies (filled pauses, false starts, hesitations, ungrammatical constructions etc.) remain a problem
 Prosody – stress, intonation, and rhythm convey important information for word recognition and the user's intentions (e.g., sarcasm, anger)
 Accent, dialect and mixed language – non-native speech is a huge problem, especially where code-switching is commonplace
34/34
