This document summarizes a research paper on developing a syllable-based speech recognition system for the Myanmar language. The proposed system has three main components: feature extraction, phone recognition, and decoding. Feature extraction transforms speech into acoustic feature vectors. Phone recognition computes likelihoods of acoustic observations given linguistic units like phones. Decoding uses acoustic and language models to find the most likely sequence of words. The paper discusses building acoustic and language models for Myanmar. The acoustic model is trained using Hidden Markov Models and Gaussian mixture models. The language model is an n-gram model built using syllable segmentation of text. Developing the first speech recognition system for Myanmar poses technical challenges due to its tonal syllabic structure.
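The decoding stage described above combines acoustic likelihoods with an n-gram language model built over syllables. As a minimal sketch of the language-model side (the toy syllable corpus and the add-one smoothing choice are illustrative assumptions, not the paper's data or method), a bigram syllable model can be estimated like this:

```python
from collections import Counter

# Hypothetical syllable-segmented training corpus (invented for illustration).
corpus = [["ma", "la", "ka"], ["ka", "la"], ["ma", "ka", "la"]]

def train_bigram(sentences):
    """Count unigrams and bigrams over syllable sequences,
    with <s>/</s> sentence markers."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def bigram_prob(uni, bi, prev, cur, vocab_size):
    """Add-one smoothed conditional probability P(cur | prev)."""
    return (bi[(prev, cur)] + 1) / (uni[prev] + vocab_size)

uni, bi = train_bigram(corpus)
p = bigram_prob(uni, bi, "ka", "la", len(uni))
```

A decoder would combine such syllable probabilities with the acoustic model's likelihoods when scoring candidate sequences.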
An expert system for automatic reading of a text written in standard arabicijnlc
In this work we present our expert system for automatic reading, or speech synthesis, of text written in Standard Arabic. The work is carried out in two main stages: the creation of the sound database, and the transformation of written text into speech (Text-To-Speech, TTS). This transformation is done first by a Phonetic Orthographical Transcription (POT) of any written Standard Arabic text, with the aim of transforming it into its corresponding phonetic sequence, and second by the generation of the voice signal corresponding to the transcribed chain. We set out the different stages of the system's design, as well as the results obtained, compared with other works studied to realize TTS for Standard Arabic.
PUNJABI SPEECH SYNTHESIS SYSTEM USING HTKijistjournal
This paper describes a Hidden Markov Model-based Punjabi text-to-speech synthesis system (HTS), in which the speech waveform is generated from Hidden Markov Models themselves, and applies it to Punjabi speech synthesis using the general speech synthesis architecture of HTK (HMM Tool Kit). This Hidden Markov Model-based TTS can be used in mobile phones for a stored phone directory or messages. Text messages and the caller's identity in English are mapped to tokens in the Punjabi language, which are further concatenated to form speech with certain rules and procedures.
To build the synthesizer we recorded the speech database and phonetically segmented it, first extracting context-independent monophones and then context-dependent triphones. For example, for the word bharat the monophones are a, bh, t, etc., and a triphone is bh-a+r. These speech utterances and their phone-level transcriptions (monophones and triphones) are the inputs to the speech synthesis system. The system outputs the sequence of phonemes after resolving various ambiguities regarding the selection of phonemes using word network files; e.g., for the word Tapas the output phoneme sequence is ਤ,ਪ,ਸ instead of the phoneme sequence ਟ,ਪ,ਸ.
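The monophone-to-triphone expansion described above (e.g. bh-a+r for the word bharat) follows the HTK left-centre+right labelling convention. It can be sketched as follows; this is an illustrative reconstruction, not the authors' code:

```python
def to_triphones(monophones):
    """Expand a monophone sequence into HTK-style context-dependent
    triphone labels of the form left-centre+right; boundary phones
    keep only the context that exists."""
    out = []
    for i, p in enumerate(monophones):
        left = monophones[i - 1] if i > 0 else None
        right = monophones[i + 1] if i < len(monophones) - 1 else None
        label = p
        if left:
            label = f"{left}-{label}"
        if right:
            label = f"{label}+{right}"
        out.append(label)
    return out

# The paper's example word "bharat", segmented into monophones.
tri = to_triphones(["bh", "a", "r", "a", "t"])
```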
SMATalk: Standard Malay Text to Speech Talk SystemCSCJournals
This paper presents a rule-based text-to-speech (TTS) synthesis system for Standard Malay, namely SMaTTS. The proposed system uses a sinusoidal method and some pre-recorded wave files in generating speech. The use of a phone database significantly decreases the amount of computer memory used, making the system very light and embeddable. The overall system comprised two phases: the Natural Language Processing (NLP) phase, consisting of the high-level processing of text analysis, phonetic analysis, text normalization, and a morphophonemic module. The module was designed specially for SM to overcome a few problems in defining the rules for the SM orthography system before text can be passed to the DSP module. The second phase is the Digital Signal Processing (DSP) phase, which operates on the low-level process of speech waveform generation. An intelligible and adequately natural-sounding formant-based speech synthesis system with a light and user-friendly Graphical User Interface (GUI) is introduced. A Standard Malay (SM) phoneme set and an inclusive phone database have been constructed carefully for this phone-based speech synthesizer. By applying generative phonology, a comprehensive set of letter-to-sound (LTS) rules and a pronunciation lexicon have been devised for SMaTTS. For the evaluation tests, a Diagnostic Rhyme Test (DRT) word list was compiled and several experiments were performed to evaluate the quality of the synthesized speech by analyzing the Mean Opinion Score (MOS) obtained. The overall performance of the system, as well as the room for improvement, is thoroughly discussed.
CURVELET BASED SPEECH RECOGNITION SYSTEM IN NOISY ENVIRONMENT: A STATISTICAL ...ijcsit
Speech processing is a crucial and intensive field of research in the development of robust and efficient speech recognition systems, but recognition accuracy still suffers from variation of context, speaker variability, and environmental conditions. In this paper, we present a curvelet-based Feature Extraction (CFE) method for speech recognition in noisy environments. The input speech signal is decomposed into different frequency channels using the characteristics of the curvelet transform, which reduces the computational complexity and the feature vector size; because curvelets offer better accuracy with a varying window size, they are suitable for non-stationary signals. For word classification and recognition, a discrete hidden Markov model can be used, as it considers the time distribution of speech signals. The HMM classification method attained maximum accuracy in terms of identification rate of 80.1% for informal phrases, 86% for scientific phrases, and 63.8% for control phrases. The objective of this study is to characterize the feature extraction methods and the classification phase in a speech recognition system. The various approaches available for developing speech recognition systems are compared along with their merits and demerits. The statistical results show that signal recognition accuracy is increased by using the discrete curvelet transform over conventional methods.
A Marathi Hidden-Markov Model Based Speech Synthesis Systemiosrjce
IOSR Journal of VLSI and Signal Processing (IOSRJVSP) is a double-blind peer-reviewed international journal that publishes articles contributing new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and to establish new collaborations in these areas.
Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing, and systems applications. Generation of specifications, design, and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor, and process levels.
Hindi digits recognition system on speech data collected in different natural...csandit
This paper presents a baseline digit speech recognizer for the Hindi language. The recording environment differs across speakers, since the data is collected in their respective homes: vehicle horn noises in some road-facing rooms, internal background noises such as opening doors in some rooms, and silence in others. All these recordings are used for training the acoustic model, which is trained on 8 speakers' audio data. The vocabulary size of the recognizer is 10 words. The HTK toolkit is used for building the acoustic model and evaluating the recognition rate of the recognizer. The efficiency of the recognizer developed on the recorded data is shown at the end of the paper, and possible directions for future research are suggested.
Named Entity Recognition using Hidden Markov Model (HMM)kevig
Named Entity Recognition (NER) is a subtask of Natural Language Processing (NLP), which is a branch of artificial intelligence. It has many applications, mainly in machine translation, text-to-speech synthesis, natural language understanding, information extraction, information retrieval, question answering, etc. The aim of NER is to classify words into predefined categories such as location name, person name, organization name, date, and time. In this paper we describe the Hidden Markov Model (HMM)-based machine learning approach in detail to identify named entities. The main idea behind using an HMM for building the NER system is that it is language independent, so the system can be applied to any language domain. In our NER system the states are not fixed; they are dynamic in nature, and one can define them according to one's interest. The corpus used by our NER system is also not domain specific.
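Tagging with an HMM as described above amounts to finding the most likely hidden tag sequence for an observed word sequence, typically with the Viterbi algorithm. A minimal sketch, with invented toy probabilities and a two-tag state set (PER vs. O) chosen purely for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for an observation sequence.
    V[t][s] holds (best probability of reaching s at step t, best predecessor)."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            V[t][s] = (V[t - 1][prev][0] * trans_p[prev][s]
                       * emit_p[s].get(obs[t], 1e-9), prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy model: "john" is likely a PERSON, "runs" is likely outside any entity.
states = ["PER", "O"]
start_p = {"PER": 0.5, "O": 0.5}
trans_p = {"PER": {"PER": 0.3, "O": 0.7}, "O": {"PER": 0.2, "O": 0.8}}
emit_p = {"PER": {"john": 0.8}, "O": {"runs": 0.9}}
tags = viterbi(["john", "runs"], states, start_p, trans_p, emit_p)
```

Because the states are just dictionary keys, the same routine works for any tag set, which mirrors the paper's point that the states need not be fixed.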
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Approach of Syllable Based Unit Selection Text- To-Speech Synthesis System fo...iosrjce
Implementation of Text To Speech for Marathi Language Using Transcriptions Co...IJERA Editor
This research paper presents an approach to converting text to speech using a new methodology. The text-to-speech conversion system enables the user to enter text in Marathi and produces sound as output. The paper presents the steps followed for converting text to speech for the Marathi language and the algorithm used for it. The focus of this paper is the tokenisation process and the orthographic representation of the text, which shows the mapping of letters to sounds using a description of the language's phonetics. The main focus here is the text-to-IPA transcription concept: a system that translates text into IPA transcription, which is the primary stage of text-to-speech conversion. The whole procedure for converting text to speech involves a great deal of time, as it is not an easy task and requires effort.
Approach To Build A Marathi Text-To-Speech System Using Concatenative Synthes...IJERA Editor
Marathi is one of the oldest languages in India. This research paper describes the development of a Marathi Text-to-Speech (TTS) system. In Marathi TTS the input is Marathi text in Unicode. The voices are sampled from real recorded speech. The objective of a text-to-speech system is to convert arbitrary text into its corresponding spoken waveform. Speech synthesis is the process of building machinery that can generate human-like speech from any text input to imitate human speakers. Text processing and speech generation are the two main components of a text-to-speech system. To build a natural-sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units. Generation of the sequence of phonetic units for a given standard word is referred to as a letter-to-phoneme or text-to-phoneme rule. The complexity of these rules and their derivation depends upon the nature of the language. The quality of a speech synthesizer is judged by its closeness to the natural human voice and its understandability. In this research paper we describe an approach to building a Marathi TTS system using the concatenative synthesis method with the syllable as the basic unit of concatenation.
Evaluation of Hidden Markov Model based Marathi Text-To-Speech Synthesis SystemIJERA Editor
The objective of this paper is to evaluate the quality of an HMM-based Marathi TTS system. The main advantage of the HMM technique is its ability to allow variation in the voice easily. The speech output produced by this method has a greater impact on emotion, style, and intonation. Naturalness and intelligibility are the two important parameters for deciding the quality of synthetic speech. Depending on the parameters specified, the synthetic speech results are categorized into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The results are obtained using CT, DRT, and MOS tests.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
A Novel Approach for Rule Based Translation of English to Marathiaciijournal
This paper presents a design for a rule-based machine translation system for the English-Marathi language pair. The system takes an English sentence as input and parses it with the help of the Stanford parser, which is used for the main source-side processing in the machine translation system. An English-Marathi bilingual dictionary is created. The system takes the parsed output, separates the source text word by word, and searches for the corresponding target words in the bilingual dictionary. Hand-coded rules are written for Marathi inflections, along with reordering rules. After applying the reordering rules, the English sentence is syntactically reordered to suit the Marathi language.
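The dictionary lookup and reordering steps described above can be sketched as follows; the dictionary entries and the naive three-word S-V-O to S-O-V rule are illustrative assumptions, not the paper's actual rule set:

```python
# Toy English-Marathi bilingual dictionary (illustrative entries only).
en_mr = {"ram": "राम", "eats": "खातो", "mango": "आंबा"}

def translate_svo_to_sov(words):
    """Look up each source word in the bilingual dictionary, then reorder
    Subject-Verb-Object to Subject-Object-Verb as a hand-coded rule
    (Marathi is an SOV language). Unknown words pass through unchanged."""
    target = [en_mr.get(w.lower(), w) for w in words]
    if len(target) == 3:  # naive pattern: assume S V O
        s, v, o = target
        return [s, o, v]
    return target

result = translate_svo_to_sov(["Ram", "eats", "mango"])
```

A real system, as the paper outlines, would derive the constituent structure from the Stanford parse rather than assuming a fixed three-word pattern.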
Systems variability modeling: a textual model mixing class and feature conceptsijcsit
System reusability and cost are very important in software product line design. Developers' goal is to increase system reusability and to decrease the cost and effort of building components from scratch for each software configuration. This can be achieved by developing a software product line (SPL). To handle the SPL engineering process, several approaches with several techniques have been developed. One of these is called the separated approach; it requires separating the commonalities and variability of a system's components to allow configuration selection based on user-defined features. Textual notation-based approaches have been used for their formal syntax and semantics to represent system features and implementations. But these approaches are still weak in mixing features (the conceptual level) and classes (the physical level) in a way that guarantees smooth and automatic configuration generation for software releases. The absence of a methodology supporting the mixing process is a real weakness. In this paper, we enhance SPL reusability by introducing some meta-features, classified according to their functionalities. As a first consequence, mixing class and feature concepts is supported in a simple way using class interfaces and inherent features, for a smooth move from the feature model to the class model. As a second consequence, the mixing process is supported by a textual design and implementation methodology that mixes class and feature models by combining their concepts in a single language. The supported configuration generation process is simple, coherent, and complete.
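A configuration-selection check of the kind the separated approach relies on can be sketched as follows; the miniature feature model (parent links plus mandatory children) is an invented example, not the paper's notation:

```python
# Hypothetical miniature feature model: each feature maps to its parent,
# and mandatory features must be selected whenever their parent is.
parent = {"gui": "app", "cli": "app", "themes": "gui"}
mandatory = {"gui"}  # 'gui' is required whenever 'app' is selected

def valid_configuration(selected):
    """A selection is valid if every selected feature's parent is also
    selected, and every mandatory feature whose parent is selected is
    itself included."""
    if any(f in parent and parent[f] not in selected for f in selected):
        return False
    return all(m in selected for m in mandatory
               if parent.get(m) in selected)
```

Generating a software release then amounts to enumerating only those feature selections that pass such a validity check.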
In this paper, we present a method to retrieve documents with unstructured text data written in different languages. Unlike ordinary document retrieval systems, the proposed system can also process queries with terms in more than one language. Unicode, the universally accepted encoding standard, is used to represent the data on a common platform while converting the text data into a Vector Space Model. We obtained notable F-measure values in the experiments irrespective of the languages used in documents and queries.
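Ranking documents against a query in a Vector Space Model reduces to cosine similarity between term vectors; because terms are Unicode strings, vectors from different scripts coexist in one index. A minimal sketch with invented mixed-language toy documents (raw term counts stand in for whatever weighting the paper uses):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "d1": Counter("speech synthesis speech".split()),
    "d2": Counter("भाषण ओळख speech".split()),  # mixed-language terms via Unicode
}
query = Counter("speech भाषण".split())  # query mixes two languages
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```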
STRUCTURAL FEATURES FOR RECOGNITION OF HANDWRITTEN KANNADA C...ijcsit
Research in image processing involves many active areas; of these, recognition of handwritten characters holds lots of promise and is a challenging one. The idea is to enable the computer to recognize intelligibly handwritten inputs. In this paper, a new method that uses structural features and a support vector machine (SVM) classifier for recognition of handwritten Kannada characters is presented. On average, recognition accuracies of 89.84% and 85.14% for handwritten Kannada vowels and consonants were obtained with this proposed method, in spite of inherent variations.
Variability modeling for customizable saas applicationsijcsit
Most current Software-as-a-Service (SaaS) applications are developed as customizable service-oriented applications that serve a large number of tenants (users) with one application instance. The current rapid evolution of SaaS applications increases the demand to study commonality and variability in the software product lines that produce customizable SaaS applications. At runtime, customizability is required to achieve different tenants' requirements. During the development process, defining and realizing commonality and variability in families of SaaS applications is required to develop reusable, flexible, and customizable SaaS applications at lower cost, in shorter time, and with higher quality. In this paper, the Orthogonal Variability Model (OVM) is used to model variability in a separated model, which is used to generate a simple and understandable customization model. Additionally, the Service-oriented architecture Modeling Language (SoaML) is extended to define and realize commonality and variability during the development of SaaS applications.
A preliminary survey on optimized multiobjective metaheuristic methods for da...ijcsit
This survey provides the state of the art of research devoted to evolutionary approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights some aspects that are very important in the context of evolutionary data clustering. The paper examines the clustering trade-offs branched out with wide-ranging Multi-Objective Evolutionary Approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, by positioning the most promising paths of future research.
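MOEA-based clustering methods of the kind surveyed above compare candidate partitions by Pareto dominance over several objectives at once. A minimal sketch of the dominance test, for minimization objectives (the tuples are illustrative objective vectors, e.g. compactness and separation scores):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if a is no worse
    than b in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

An MOEA keeps the set of mutually non-dominated solutions (the Pareto front) rather than a single best clustering.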
State of the art of agile governance a systematic reviewijcsit
Context: Agility at the business level requires an Information Technology (IT) environment that is flexible and customizable, as well as effective and responsive governance, in order to deliver value faster, better, and cheaper to the business. Objective: To better understand this context, our paper seeks to investigate how the domain of agile governance has evolved, as well as to derive implications for research and practice. Method: We conducted a systematic review of the state of the art of agile governance up to and including 2013. Our search strategy identified 1992 studies in 10 databases, of which 167 had the potential to answer our research questions. Results: We organized the studies into four major groups: software engineering, enterprise, manufacturing, and multidisciplinary, classifying them into 16 emerging categories. As a result, the review provides a convergent definition of agile governance, six meta-principles, and a map of findings organized by topic and classified by relevance and convergence. Conclusion: The evidence found leads us to believe that agile governance is a relatively new, wide, and multidisciplinary area focused on organizational performance and competitiveness that needs to be more intensively studied. Finally, we made improvements and additions to the methodological approach for systematic reviews and qualitative studies.
Data mining model for the data retrieval from central server configurationijcsit
A server that has to keep track of heavy document traffic is unable to filter the documents that are most relevant and up to date for continuous text search queries. This paper focuses on handling continuous text extraction while sustaining high document traffic. The main objective is to retrieve recently updated documents that are most relevant to the query by applying a sliding window technique. Our solution indexes the streamed documents in main memory with a structure based on the principles of the inverted file, and processes document arrival and expiration events with an incremental threshold-based method. It also ensures elimination of duplicate document retrieval using unsupervised duplicate detection. The documents are ranked based on user feedback, and higher-ranked documents are given higher priority for retrieval.
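The arrival/expiration processing over a sliding window described above can be sketched with an in-memory inverted index; the count-based window and whitespace tokenization are simplifying assumptions (the paper's window and incremental threshold method are more elaborate):

```python
from collections import defaultdict, deque

class SlidingWindowIndex:
    """In-memory inverted index over a sliding window of streamed documents:
    arriving documents are indexed, and documents that fall out of the
    window are expired and removed from every posting list."""
    def __init__(self, window):
        self.window = window            # max number of live documents
        self.live = deque()             # (doc_id, terms) in arrival order
        self.index = defaultdict(set)   # term -> set of live doc_ids

    def add(self, doc_id, text):
        terms = set(text.lower().split())
        self.live.append((doc_id, terms))
        for t in terms:
            self.index[t].add(doc_id)
        while len(self.live) > self.window:  # expire the oldest document
            old_id, old_terms = self.live.popleft()
            for t in old_terms:
                self.index[t].discard(old_id)

    def query(self, term):
        return self.index.get(term.lower(), set())

idx = SlidingWindowIndex(window=2)
idx.add("d1", "speech data")
idx.add("d2", "speech model")
idx.add("d3", "model stream")   # this arrival expires d1
```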
Hindi digits recognition system on speech data collected in different natural...csandit
This paper presents a baseline digits speech recognizer for Hindi language. The recording environment is different for all speakers, since the data is collected in their respective homes. The different environment refers to vehicle horn noises in some road facing rooms, internal background noises in some rooms like opening doors, silence in some rooms etc. All these recordings are used for training acoustic model. The Acoustic Model is trained on 8 speakers’ audio data. The vocabulary size of the recognizer is 10 words. HTK toolkit is used for building
acoustic model and evaluating the recognition rate of the recognizer. The efficiency of the recognizer developed on recorded data, is shown at the end of the paper and possible directions for future research work are suggested.
Named Entity Recognition using Hidden Markov Model (HMM)kevig
Named Entity Recognition (NER) is the subtask of Natural Language Processing (NLP) which is the branch of artificial intelligence. It has many applications mainly in machine translation, text to speech synthesis, natural language understanding, Information Extraction, Information retrieval, question answering etc. The aim of NER is to classify words into some predefined categories like location name, person name, organization name, date, time etc. In this paper we describe the Hidden Markov Model (HMM) based approach of machine learning in detail to identify the named entities. The main idea behind the use of HMM model for building NER system is that it is language independent and we can apply this system for any language domain. In our NER system the states are not fixed means it is of dynamic in nature one can use it according to their interest. The corpus used by our NER system is also not domain specific
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Approach of Syllable Based Unit Selection Text- To-Speech Synthesis System fo...iosrjce
IOSR journal of VLSI and Signal Processing (IOSRJVSP) is a double blind peer reviewed International Journal that publishes articles which contribute new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and establishing new collaborations in these areas.Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
Implementation of Text To Speech for Marathi Language Using Transcriptions Co...IJERA Editor
This research paper presents a new methodology for converting text to speech. The text-to-speech conversion system enables the user to enter text in Marathi and produces sound as output. The paper presents the steps followed for converting Marathi text to speech and the algorithm used for it. The focus of this paper is the tokenisation process and the orthographic representation of the text, which shows the mapping of letters to sounds using a description of the language's phonetics. The main focus here is the text-to-IPA transcription concept: the system translates text to an IPA transcription, which is the primary stage of text-to-speech conversion. The whole procedure for converting text to speech is not an easy task; it involves a great deal of time and effort.
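The letter-to-sound stage described above can be sketched as a longest-match lookup from orthographic units to IPA symbols. The mapping table below covers only a few Devanagari letters and is purely illustrative; a real Marathi front end needs full phonetic rules (including the inherent vowel, which this sketch ignores).

```python
# Hedged sketch of text-to-IPA transcription: greedy longest-match lookup of
# orthographic units in a small illustrative table. Unknown symbols pass
# through unchanged. The table is NOT a complete Marathi mapping.

IPA_MAP = {
    "न": "n", "म": "m", "स": "s", "क": "k", "र": "r",
    "ा": "aː", "ि": "i", "े": "eː",
}

def to_ipa(text, table=IPA_MAP):
    """Greedy longest-match transcription; unknown symbols pass through."""
    out, i = [], 0
    keys = sorted(table, key=len, reverse=True)  # try longer units first
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(table[k])
                i += len(k)
                break
        else:
            out.append(text[i])
            i += 1
    return " ".join(out)

transcription = to_ipa("नमस")
```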
Approach To Build A Marathi Text-To-Speech System Using Concatenative Synthes... (IJERA Editor)
Marathi is one of the oldest languages in India. This research paper describes the development of a Marathi Text-to-Speech (TTS) system. In Marathi TTS the input is Marathi text in Unicode. The voices are sampled from real recorded speech. The objective of a text-to-speech system is to convert an arbitrary text into its corresponding spoken waveform. Speech synthesis is the process of building machinery that can generate human-like speech from any text input, imitating human speakers. Text processing and speech generation are the two main components of a text-to-speech system. To build a natural-sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units. Generation of the sequence of phonetic units for a given standard word is referred to as a letter-to-phoneme or text-to-phoneme rule. The complexity of these rules and their derivation depends upon the nature of the language. The quality of a speech synthesizer is judged by its closeness to the natural human voice and by its understandability. In this research paper we describe an approach to build a Marathi TTS system using the concatenative synthesis method with the syllable as the basic unit of concatenation.
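Syllable-based concatenative synthesis, as described above, can be sketched as a lookup-and-join over a unit database, with a short cross-fade at each boundary to reduce audible joins. The "database" below holds fabricated toy samples, not real recorded speech, and the fade length is arbitrary.

```python
# Minimal sketch of syllable-based concatenative synthesis: each syllable maps
# to a stored waveform (toy sample lists here), joined with a short linear
# cross-fade at the unit boundaries.

def crossfade(a, b, n=2):
    """Join two sample lists, blending the last/first n samples linearly."""
    if n == 0 or not a or not b:
        return a + b
    blend = [a[-n + i] * (1 - (i + 1) / (n + 1)) + b[i] * ((i + 1) / (n + 1))
             for i in range(n)]
    return a[:-n] + blend + b[n:]

def synthesize(syllables, db, fade=2):
    out = []
    for s in syllables:
        out = crossfade(out, db[s], fade) if out else list(db[s])
    return out

db = {"na": [0.1, 0.2, 0.3, 0.4], "ma": [0.4, 0.3, 0.2, 0.1]}
wave = synthesize(["na", "ma"], db)
```

Each join consumes `fade` samples from both units, so two 4-sample units yield a 6-sample output here.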
Evaluation of Hidden Markov Model based Marathi Text-To-Speech Synthesis System (IJERA Editor)
The objective of this paper is to evaluate the quality of an HMM-based Marathi TTS system. The main advantage of the HMM technique is that it allows variation in the voice easily, and the speech it produces conveys emotion, style and intonation more strongly. Naturalness and intelligibility are the two important parameters for judging the quality of synthetic speech. Depending on these parameters, the synthetic speech results fall into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech and low-quality synthetic speech. The results are obtained using CT, DRT and MOS tests.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
A Novel Approach for Rule Based Translation of English to Marathi (aciijournal)
This paper presents a design for a rule-based machine translation system for the English-Marathi language pair. The machine translation system takes an English sentence as input and parses it with the Stanford parser, which handles the main source-side processing in the system. An English-to-Marathi bilingual dictionary is created. The system takes the parsed output, separates the source text word by word, and searches for the corresponding target words in the bilingual dictionary. Hand-coded rules are written for Marathi inflections, along with reordering rules. After the reordering rules are applied, the English sentence is syntactically reordered to suit the Marathi language.
Systems variability modeling: a textual model mixing class and feature concepts (ijcsit)
System reusability and cost are very important in software product line design. Developers aim to increase system reusability while decreasing the cost and effort of building components from scratch for each software configuration. This can be achieved by developing a software product line (SPL). To handle the SPL engineering process, several approaches with several techniques have been developed. One of these, called the separated approach, requires separating the commonalities and variability of a system's components to allow configuration selection based on user-defined features. Textual notation-based approaches have been used for their formal syntax and semantics to represent system features and implementations. But these approaches are still weak at mixing features (the conceptual level) and classes (the physical level) in a way that guarantees smooth, automatic configuration generation for software releases. The absence of a methodology supporting the mixing process is a real weakness. In this paper, we enhance SPL reusability by introducing some meta-features, classified according to their functionality. As a first consequence, mixing class and feature concepts is supported in a simple way, using class interfaces and inherent features for a smooth move from the feature model to the class model. As a second consequence, the mixing process is supported by a textual design and implementation methodology that mixes class and feature models by combining their concepts in a single language. The supported configuration generation process is simple, coherent, and complete.
In this paper, we present a method to retrieve documents with unstructured text data written in different languages. Unlike ordinary document retrieval systems, the proposed system can also process queries with terms in more than one language. Unicode, the universally accepted encoding standard, is used to present the data on a common platform while converting the text data into a Vector Space Model. We obtained notable F-measure values in the experiments irrespective of the languages used in documents and queries.
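The Vector Space Model step described above can be sketched end to end: documents and a query are tokenized as Unicode strings, turned into TF-IDF vectors, and ranked by cosine similarity. The toy corpus below mixes English and Devanagari terms for illustration; the weighting scheme is one common variant, not necessarily the paper's.

```python
# Sketch of multilingual VSM retrieval: TF-IDF vectors over Unicode tokens,
# ranked by cosine similarity. Corpus and query are toy data.

import math
from collections import Counter

def tfidf_vectorizer(docs):
    """Return a function mapping text -> sparse TF-IDF vector (dict)."""
    df = Counter(t for d in docs for t in set(d.split()))
    n = len(docs)
    def vec(text):
        tf = Counter(text.split())
        return {t: tf[t] * math.log((n + 1) / (df.get(t, 0) + 1)) for t in tf}
    return vec

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["speech synthesis marathi", "नमस्ते speech recognition", "image segmentation"]
vec = tfidf_vectorizer(docs)
dvecs = [vec(d) for d in docs]
query = vec("speech recognition")
ranked = sorted(range(len(docs)), key=lambda i: cosine(query, dvecs[i]), reverse=True)
```

Because Python strings are Unicode throughout, the same pipeline handles terms from any script without special casing.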
Structural Features for Recognition of Handwritten Kannada C... (ijcsit)
Research in image processing involves many active areas; among these, recognition of handwritten characters holds a lot of promise and is a challenging one. The idea is to enable the computer to intelligibly recognize handwritten inputs. In this paper, a new method that uses structural features and a Support Vector Machine (SVM) classifier for recognition of handwritten Kannada characters is presented. With the proposed method, average recognition accuracies of 89.84% for handwritten Kannada vowels and 85.14% for consonants are obtained, in spite of inherent variations.
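One simple kind of structural feature for a character image can be sketched as zone densities: the binary glyph is split into quadrants and the per-zone ink density becomes the feature vector fed to a classifier. This is only an illustrative stand-in; the paper's actual structural feature set is richer.

```python
# Hedged sketch of structural feature extraction: split a binary glyph into
# four zones and use per-zone ink density as features. Toy 4x4 glyph below.

def zone_densities(img):
    """img: list of equal-length rows of 0/1 pixels -> 4 quadrant densities."""
    h, w = len(img), len(img[0])
    mh, mw = h // 2, w // 2
    zones = [(0, mh, 0, mw), (0, mh, mw, w), (mh, h, 0, mw), (mh, h, mw, w)]
    feats = []
    for r0, r1, c0, c1 in zones:
        area = (r1 - r0) * (c1 - c0)
        ink = sum(img[r][c] for r in range(r0, r1) for c in range(c0, c1))
        feats.append(ink / area)
    return feats

glyph = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
features = zone_densities(glyph)
```

Vectors like this are what an SVM classifier would then be trained on, one vector per labeled character sample.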
Variability modeling for customizable SaaS applications (ijcsit)
Most current Software-as-a-Service (SaaS) applications are developed as customizable service-oriented applications that serve a large number of tenants (users) with one application instance. The current rapid evolution of SaaS applications increases the demand to study commonality and variability in the software product lines that produce customizable SaaS applications. At runtime, customizability is required to meet different tenants' requirements. During the development process, defining and realizing commonality and variability in families of SaaS applications is required to develop reusable, flexible, and customizable SaaS applications at lower cost, in shorter time, and with higher quality. In this paper, the Orthogonal Variability Model (OVM) is used to model variability in a separated model, which is used to generate a simple and understandable customization model. Additionally, the Service-oriented architecture Modeling Language (SoaML) is extended to define and realize commonality and variability during the development of SaaS applications.
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
This survey provides the state of the art of research devoted to Evolutionary Approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights some aspects that are very important in the context of evolutionary data clustering. The paper examines the clustering trade-offs addressed by a wide range of Multi-Objective Evolutionary Approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, by identifying the most promising paths of future research.
State of the art of agile governance: a systematic review (ijcsit)
Context: Agility at the business level requires a flexible and customizable Information Technology (IT) environment, as well as effective and responsive governance, in order to deliver value faster, better, and cheaper to the business. Objective: To better understand this context, our paper investigates how the domain of agile governance has evolved and derives implications for research and practice. Method: We conducted a systematic review of the state of the art of agile governance up to and including 2013. Our search strategy identified 1992 studies in 10 databases, of which 167 had the potential to answer our research questions. Results: We organized the studies into four major groups: software engineering, enterprise, manufacturing and multidisciplinary, classifying them into 16 emerging categories. As a result, the review provides a convergent definition of agile governance, six meta-principles, and a map of findings organized by topic and classified by relevance and convergence. Conclusion: The evidence found leads us to believe that agile governance is a relatively new, wide and multidisciplinary area focused on organizational performance and competitiveness that needs to be studied more intensively. Finally, we made improvements and additions to the methodological approach for systematic reviews and qualitative studies.
Data mining model for the data retrieval from central server configuration (ijcsit)
A server that has to keep track of heavy document traffic is unable to filter the documents that are most relevant and up to date for continuous text search queries. This paper focuses on handling continuous text extraction while sustaining high document traffic. The main objective is to retrieve recently updated documents that are most relevant to the query by applying a sliding window technique. Our solution indexes the streamed documents in main memory with a structure based on the principles of the inverted file, and processes document arrival and expiration events with an incremental threshold-based method. It also eliminates duplicate document retrieval using unsupervised duplicate detection. The documents are ranked based on user feedback, and higher-ranked documents are given priority for retrieval.
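The sliding-window indexing idea above can be sketched as an in-memory inverted file whose postings are pruned as documents expire, with a simple duplicate check before insertion. The window size, the hash-based duplicate test, and the data are toy stand-ins, not the paper's actual configuration.

```python
# Sketch of sliding-window text indexing: documents expire after a fixed
# window, the inverted file maps terms to live doc ids, and exact-duplicate
# texts are dropped before indexing. All values are illustrative.

from collections import defaultdict

class SlidingIndex:
    def __init__(self, window):
        self.window = window           # how many time steps a doc stays live
        self.docs = {}                 # doc_id -> (arrival_time, text)
        self.index = defaultdict(set)  # term -> set of live doc_ids
        self.seen = set()              # text hashes, crude duplicate filter

    def add(self, doc_id, text, now):
        if hash(text) in self.seen:    # stand-in for duplicate detection
            return False
        self.seen.add(hash(text))
        self.docs[doc_id] = (now, text)
        for term in set(text.split()):
            self.index[term].add(doc_id)
        return True

    def expire(self, now):
        dead = [d for d, (t, _) in self.docs.items() if now - t >= self.window]
        for d in dead:
            _, text = self.docs.pop(d)
            for term in set(text.split()):
                self.index[term].discard(d)

    def query(self, term, now):
        self.expire(now)
        return sorted(self.index[term])

idx = SlidingIndex(window=2)
idx.add(1, "flood alert city", now=0)
idx.add(2, "flood relief fund", now=1)
idx.add(3, "flood alert city", now=1)  # exact duplicate, ignored
hits = idx.query("flood", now=2)       # doc 1 has expired by now
```

A real system would replace the hash check with the paper's unsupervised duplicate detector and maintain incremental thresholds per query, but the arrival/expiration bookkeeping follows this shape.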
INSTRUCTOR PERSPECTIVES OF MOBILE LEARNING PLATFORM: AN EMPIRICAL STUDY (ijcsit)
Mobile learning (m-Learning) is the cutting-edge learning platform to really gain traction, driven mostly by the huge uptake in smartphones and their ever-increasing uses within the educational society. Education has long benefitted from the proliferation of technology; however, m-Learning adoption has not proceeded at the pace one might expect. There is a disconnect between the rate of adoption of the underlying platform (smartphones) and the use of that technology within learning. The reasons behind this have been the subject of several research studies. However, previous studies have mostly focused on investigating the critical success factors (CSFs) from the student perspective. In this research, we have carried out an extensive study of the six factors that impact the success of m-Learning from instructors' perspectives. The results of the research showed that three factors – technical competence of instructors, instructors' autonomy, and blended learning – are the most important elements that contribute to m-Learning adoption from instructors' perspectives.
The Impact of Frequent Use Email When Creating Account at the Websites on the... (ijcsit)
This research aims to measure the impact of frequent use of email when creating accounts at websites on the privacy and security of the user (a survey study conducted on a sample of email users' views). The sample, 200 people from Jordanian society, includes employees of commercial and communication companies and banks, university students, university employees and faculty members, as well as staff of computer centers at universities. All have email accounts and are able to use the computer and the internet. A questionnaire measuring the variables of the study was prepared for this purpose, and the SPSS program was used to analyze the results. The study revealed a statistically significant impact of frequent use of an email account when creating accounts at Internet sites on the security and privacy of the user. The study ends with a number of conclusions and recommendations.
Evaluation of image segmentation and filtering with ANN in the papaya leaf (ijcsit)
Precision agriculture is an area that lacks cheap technology. Refinement of the production system brings large advantages to the producer, and the use of images makes monitoring a cheaper methodology. Macronutrient monitoring can determine the health and vulnerability of the plant at specific stages. This paper analyzes a method based on computational intelligence for image segmentation in the identification of symptoms of plant nutrient deficiency. Artificial neural networks are evaluated for image segmentation and filtering; several variations of parameters and the insertion of impulsive noise were evaluated as well. Satisfactory segmentation results are achieved with artificial neural networks even at high noise levels.
Speech processing is a crucial and intensive field of research in the development of robust and efficient speech recognition systems, but recognition accuracy still suffers from variation of context, speaker variability, and environmental conditions. In this paper, we present a Curvelet-based Feature Extraction (CFE) method for speech recognition in noisy environments. The input speech signal is decomposed into different frequency channels using the characteristics of the curvelet transform, which reduces the computational complexity and the feature vector size; curvelet features also offer better accuracy and a varying window size, making them suitable for non-stationary signals. For word classification and recognition, a discrete hidden Markov model is used, as it accounts for the time distribution of speech signals. The HMM classification method attained maximum identification rates of 80.1% for informal phrases, 86% for scientific phrases, and 63.8% for control phrases. The objective of this study is to characterize the feature extraction and classification phases of a speech recognition system. The various approaches available for developing speech recognition systems are compared along with their merits and demerits. The statistical results show that recognition accuracy is increased by using the discrete curvelet transform over conventional methods.
The primary goal of this paper is to provide an overview of existing Text-To-Speech (TTS) techniques by highlighting their usage and advantages. First-generation techniques include formant synthesis and articulatory synthesis. Formant synthesis works by using individually controllable formant filters, which can be set to produce accurate estimations of the vocal-tract transfer function. Articulatory synthesis produces speech by directly modeling human articulator behavior. Second-generation techniques comprise concatenative synthesis and sinusoidal synthesis. Concatenative synthesis generates speech output by concatenating segments of recorded speech, and generally produces natural-sounding synthesized speech. Sinusoidal synthesis uses a harmonic model and decomposes each frame into a set of harmonics of an estimated fundamental frequency; the model parameters are the amplitudes and periods of the harmonics. With these, the value of the fundamental can be changed while keeping the same basic spectral shape. Third-generation techniques include Hidden Markov Model (HMM) synthesis and unit selection synthesis. HMM synthesis trains a parameter module and produces high-quality speech. Finally, unit selection operates by selecting, from a large speech database, the best sequence of units that matches the specification.
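The sinusoidal model mentioned above can be sketched directly: a frame is rebuilt as a sum of harmonics of an estimated fundamental, so changing f0 while keeping the harmonic amplitudes alters pitch without changing the basic spectral shape. Sample rate, f0, and amplitudes below are illustrative values only.

```python
# Hedged sketch of sinusoidal (harmonic) synthesis: one frame is the sum of
# sinusoids at integer multiples of the fundamental f0, with per-harmonic
# amplitudes. All parameter values are toy choices.

import math

def harmonic_frame(f0, amps, sr=8000, n=64):
    """Synthesize n samples as a sum of harmonics k*f0 with given amplitudes."""
    return [
        sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t / sr)
            for k, a in enumerate(amps))
        for t in range(n)
    ]

frame = harmonic_frame(f0=200, amps=[1.0, 0.5, 0.25])
```

Calling the same function with a different `f0` but the same `amps` is exactly the pitch-shift-with-fixed-envelope property the survey describes.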
Isolated Word Recognition System For Tamil Spoken Language Using Back Propaga... (CSEIJJournal)
Speech recognition has been an active research topic for more than 50 years. Interacting with the computer through speech is one of the active scientific research fields, particularly for the disabled community, who face a variety of difficulties in using the computer. Research in Automatic Speech Recognition (ASR) is investigated for different languages because each language has its specific features, and the need for an ASR system for the Tamil language has increased widely in the last few years. In this paper, a speech recognition system for individually spoken words in the Tamil language using a multilayer feed-forward network is presented. To implement this system, the input signal is first preprocessed using four types of filters, namely preemphasis, median, average and Butterworth bandstop filters, in order to remove the background noise and enhance the signal. The performance of these filters is measured based on MSE and PSNR values. The best filtered signal is taken as the input for the further stages of the ASR system.
PUNJABI SPEECH SYNTHESIS SYSTEM USING HTK (ijistjournal)
This paper describes a Hidden Markov Model-based Punjabi text-to-speech synthesis system (HTS), in which the speech waveform is generated from Hidden Markov Models themselves, and applies it to Punjabi speech synthesis using the general speech synthesis architecture of HTK (the HMM Tool Kit). This HMM-based TTS can be used in mobile phones for a stored phone directory or messages. Text messages and the caller's identity in English are mapped to tokens in the Punjabi language, which are then concatenated to form speech according to certain rules and procedures.
To build the synthesizer we recorded a speech database and phonetically segmented it, first extracting context-independent monophones and then context-dependent triphones. For example, for the word bharat the monophones are a, bh, t, etc., and a triphone is bh-a+r. These speech utterances and their phone-level transcriptions (monophones and triphones) are the inputs to the speech synthesis system. The system outputs the sequence of phonemes after resolving various ambiguities in phoneme selection using word network files; for example, for the word Tapas the output phoneme sequence is ਤ,ਪ,ਸ instead of ਟ,ਪ,ਸ.
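The monophone-to-triphone preparation described above can be sketched as a simple context expansion in the usual HTK "l-p+r" notation, where word-initial and word-final phones keep only a one-sided context. The phone list reuses the bharat example from the text.

```python
# Sketch of triphone expansion in HTK-style "left-phone+right" notation:
# each phone is rewritten with its left and right neighbors as context.

def to_triphones(phones):
    out = []
    for i, p in enumerate(phones):
        left = phones[i - 1] + "-" if i > 0 else ""
        right = "+" + phones[i + 1] if i < len(phones) - 1 else ""
        out.append(left + p + right)
    return out

tri = to_triphones(["bh", "a", "r", "a", "t"])  # the bharat example
```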
EFFECT OF DYNAMIC TIME WARPING ON ALIGNMENT OF PHRASES AND PHONEMES (kevig)
Speech synthesis and recognition are the basic techniques used for man-machine communication. This type of communication is valuable when our hands and eyes are busy with some other task, such as driving a vehicle, performing surgery, or firing weapons at the enemy. Dynamic time warping (DTW) is widely used for aligning two given multidimensional sequences; it finds an optimal match between them. The distance between the aligned sequences should be smaller than that between the unaligned sequences, so the improvement in alignment can be estimated from the corresponding distances. This technique has applications in speech recognition, speech synthesis, and speaker transformation. The objective of this research is to investigate the amount of improvement in alignment for sentence-based and phoneme-based manually aligned phrases. Speech signals in the form of twenty-five phrases were recorded from each of six speakers (3 males and 3 females). The recorded material was segmented manually and aligned at the sentence and phoneme level. The aligned sentences of different speaker pairs were analyzed using HNM, and the HNM parameters were further aligned at the frame level using DTW. Mahalanobis distances were computed for each pair of sentences. The investigations have shown more than a 20% reduction in the average Mahalanobis distance.
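The DTW step described above can be sketched with the classic dynamic program: it aligns two sequences and returns the accumulated cost of the optimal warping path. This toy version uses one-dimensional sequences with absolute difference as the local cost; a frame-level aligner would use vector frames and a vector distance instead.

```python
# Minimal DTW sketch: accumulated cost of the optimal warping path between
# two 1-D sequences, with |x - y| as the local cost.

def dtw(x, y):
    inf = float("inf")
    D = [[inf] * (len(y) + 1) for _ in range(len(x) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(x)][len(y)]

d = dtw([1, 2, 3, 4], [1, 2, 2, 3, 4])
```

Here the repeated `2` in the second sequence is absorbed by the warping path at zero cost, which is exactly why DTW-aligned sequences show smaller distances than unaligned ones.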
American Standard Sign Language Representation Using Speech Recognition (paperpublications3)
Abstract: For many deaf people, sign language is the principal means of communication, which increases the isolation of hearing-impaired people. This paper presents a system prototype that can automatically recognize speech, helping others communicate more effectively with hearing- or speech-impaired people. The system recognizes the speech signal, and the recognized spoken words are represented in American standard sign language via a robotic arm and also on the computer using Visual Basic. In this project a software package is provided to convert the speech signal (which carries no meaning for deaf users) into sign language. The main purpose of this project is to bridge the communication and expression gap between people who cannot understand sign language and deaf users who cannot understand normal speech.
Effect of MFCC Based Features for Speech Signal Alignments (kevig)
The fundamental techniques used for man-machine communication include speech synthesis, speech recognition, and speech transformation. Feature extraction techniques provide a compressed representation of speech signals, and HNM analysis and synthesis provides high-quality speech with a small number of parameters. Dynamic time warping (DTW) is a well-known technique for aligning two given multidimensional sequences; it locates an optimal match between them, and the improvement in alignment is estimated from the corresponding distances. The objective of this research is to investigate the effect of dynamic time warping on phrase-, word-, and phoneme-based alignments. Speech signals in the form of twenty-five phrases were recorded; the recorded material was segmented manually and aligned at the sentence, word, and phoneme level. The Mahalanobis distance (MD) was computed between the aligned frames. The investigation has shown better alignment in the HNM parametric domain, and it has been seen that effective speech alignment can be carried out even at the phrase level.
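The distance measure used above can be sketched as well: the Mahalanobis distance between an aligned frame pair, using the inverse covariance of the feature distribution. The feature vectors below are random toy data standing in for HNM or MFCC parameters.

```python
# Hedged sketch of the Mahalanobis distance between aligned feature frames.
# The 3-D "frames" are random toy data, not real HNM/MFCC parameters.

import numpy as np

def mahalanobis(u, v, cov_inv):
    d = np.asarray(u, float) - np.asarray(v, float)
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 3))            # stand-in feature frames
cov_inv = np.linalg.inv(np.cov(frames.T))     # inverse feature covariance
d = mahalanobis(frames[0], frames[1], cov_inv)
```

Averaging this distance over all aligned frame pairs of two utterances gives the kind of per-sentence score whose reduction the study reports.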
Phrase identification is one of the most critical and widely studied tasks in Natural Language Processing (NLP), and verb phrase identification within a sentence is very useful for a variety of NLP applications. One of the core enabling technologies required in NLP applications is morphological analysis. This paper presents a Myanmar verb phrase identification and translation algorithm and develops a Markov model with morphological analysis. The system is based on a rule-based maximum matching approach. In machine translation, a large amount of information is needed to guide the translation process. Myanmar is an inflected language, and very few lexicons have been created or researched for Myanmar compared to other languages such as English, French and Czech. Therefore, this system proposes a Myanmar verb phrase identification and translation model based on the syntactic structure and morphology of the Myanmar language, using a Myanmar-English bilingual lexicon. A Markov model is also used to reformulate the translation probability of phrase pairs. Experimental results showed that the proposed system can improve translation quality by applying morphological analysis to the Myanmar language.
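The rule-based maximum matching step can be sketched as greedy longest-match segmentation against a lexicon. The "lexicon" below uses Latin transliteration stand-ins rather than real Myanmar script, purely for illustration.

```python
# Sketch of maximum matching segmentation: scan left to right, always taking
# the longest lexicon entry that matches at the current position. The lexicon
# entries are illustrative transliterations, not real Myanmar text.

def max_match(text, lexicon):
    """Greedy left-to-right longest-match segmentation."""
    out, i = [], 0
    max_len = max(map(len, lexicon))
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon:
                out.append(text[i:j])
                i = j
                break
        else:
            out.append(text[i])  # unknown character becomes its own token
            i += 1
    return out

segments = max_match("thwa:mai", {"thwa:", "mai", "thwa"})
```

Note the longest-match rule prefers `thwa:` over its prefix `thwa`, which is the property that makes maximum matching work for unsegmented scripts.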
MULTILINGUAL SPEECH IDENTIFICATION USING ARTIFICIAL NEURAL NETWORK (ijitcs)
Speech technology is an emerging field, and automatic speech recognition has made advances in recent years. Much research has been performed for many foreign and regional languages, and multilingual speech processing has recently been attracting research interest. This paper proposes a methodology for developing a bilingual speech identification system for the Assamese and English languages based on an artificial neural network.
SPEAKER VERIFICATION USING ACOUSTIC AND PROSODIC FEATURES (acijjournal)
In this paper we report experiments carried out on a recently collected speaker recognition database, the Arunachali Language Speech Database (ALS-DB), to make a comparative study of the performance of acoustic and prosodic features for the speaker verification task. The speech database consists of speech data recorded from 200 speakers with Arunachali languages of North-East India as their mother tongue. The collected database is evaluated using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) based speaker verification system. The acoustic feature considered in the present study is Mel-Frequency Cepstral Coefficients (MFCC) along with its derivatives. The performance of the system has been evaluated for the acoustic and prosodic features individually as well as in combination. It has been observed that acoustic features, when considered individually, provide better performance than prosodic features. However, if prosodic features are combined with acoustic features, the combined system outperforms both systems in which the features are considered individually. There is nearly a 5% improvement in recognition accuracy with respect to the system where acoustic features are considered individually, and nearly a 20% improvement with respect to the system where only prosodic features are considered.
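GMM-UBM scoring, as used above, can be sketched as a log-likelihood ratio: a test utterance's frames are scored under the speaker model and under the universal background model, and verification accepts when the average ratio exceeds a threshold. The 1-D models and all parameter values below are fabricated toys; real systems use multivariate GMMs over MFCC vectors with MAP-adapted means.

```python
# Hedged sketch of GMM-UBM verification scoring: average per-frame
# log-likelihood ratio between a speaker GMM and the UBM, thresholded at 0.
# All model parameters and "frames" are toy values.

import math

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of scalar frame x under a 1-D Gaussian mixture."""
    p = sum(w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for w, m, v in zip(weights, means, variances))
    return math.log(p)

def llr(frames, spk, ubm):
    return sum(gmm_loglik(x, *spk) - gmm_loglik(x, *ubm) for x in frames) / len(frames)

ubm = ([0.5, 0.5], [0.0, 4.0], [1.0, 1.0])   # background model
spk = ([0.7, 0.3], [1.0, 4.0], [0.5, 1.0])   # adapted toward the speaker
score = llr([0.9, 1.1, 1.0], spk, ubm)
accept = score > 0.0
```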
Malayalam Isolated Digit Recognition using HMM and PLP cepstral coefficient (ijait)
Development of Malayalam speech recognition systems is in its infancy, although much work has been done in other Indian languages. In this paper we present the first work on a speaker-independent Malayalam isolated-word speech recognizer based on PLP (Perceptual Linear Predictive) cepstral coefficients and Hidden Markov Models (HMM). The performance of the developed system has been evaluated with different numbers of HMM states. The system is trained with 21 male and female speakers in the age group of 19 to 41 years, and obtained an accuracy of 99.5% on unseen data.
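Isolated-word recognition with HMMs reduces to picking the word model with the highest observation likelihood; the forward algorithm below computes that likelihood for one discrete-observation model. The two-state model and its parameters are toy values, not trained PLP-based models.

```python
# Sketch of the forward algorithm: P(observations | model) for a
# discrete-symbol HMM given as probability dictionaries. In isolated-word
# recognition, this is computed per word model and the argmax wins.

def forward(obs, start_p, trans_p, emit_p):
    states = list(start_p)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

start_p = {"s1": 1.0, "s2": 0.0}
trans_p = {"s1": {"s1": 0.6, "s2": 0.4}, "s2": {"s1": 0.0, "s2": 1.0}}
emit_p = {"s1": {"lo": 0.9, "hi": 0.1}, "s2": {"lo": 0.2, "hi": 0.8}}
lik = forward(["lo", "hi"], start_p, trans_p, emit_p)
```

A continuous-density system replaces the `emit_p` lookup with per-state Gaussian mixture densities over PLP vectors, but the recursion is the same.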
The state-of-the-art Automatic Speech Recognition (ASR) systems lack the ability to identify spoken words that have non-standard pronunciations. In this paper, we present a new classification algorithm to identify pronunciation variants. It uses the Dynamic Phone Warping (DPW) technique to compute the phonetic distance between pronunciations, together with a critical-distance threshold criterion for classification. The proposed method consists of two steps: a training step that estimates the critical distance parameter using transcribed data, and a second step that uses this critical distance criterion to classify input utterances into pronunciation variants and OOV words. The algorithm is implemented in Java. The classifier is trained on data sets from the TIMIT speech corpus and the CMU pronunciation dictionary. A confusion matrix and the precision, recall and accuracy metrics are used for performance evaluation. Experimental results show significant performance improvement over existing classifiers.
Emotional telugu speech signals classification based on k-NN classifier (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Group Presentation 2 Economics.Ariana Buscigliopptx
5215ijcseit01
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 5, No. 2, April 2015
DOI : 10.5121/ijcseit.2015.5201
SYLLABLE-BASED SPEECH RECOGNITION SYSTEM
FOR MYANMAR
Wunna Soe1 and Dr. Yadana Thein2
1University of Computer Studies, Yangon (UCSY), Yangon, Myanmar
2Department of Computer Hardware, University of Computer Studies, Yangon (UCSY), Yangon, Myanmar
ABSTRACT
This proposed system is a syllable-based Myanmar speech recognition system. There are three stages:
feature extraction, phone recognition, and decoding. In feature extraction, the system transforms the
input speech waveform into a sequence of acoustic feature vectors, each vector representing the
information in a small time window of the signal. Then, in the phone recognition stage, the likelihood
of the observed feature vectors given linguistic units (words, phones, subparts of phones) is computed.
Finally, the decoding stage takes the Acoustic Model (AM), which consists of this sequence of acoustic
likelihoods, plus a phonetic dictionary of word pronunciations, and combines them with the Language
Model (LM). The system produces the most likely sequence of words as the output. The system creates
the language model for Myanmar by using syllable segmentation and a syllable-based n-gram method.
KEYWORDS
Speech Recognition, Language Model, Myanmar, Syllable
1. INTRODUCTION
Speech recognition is one of the major tasks in natural language processing (NLP). Speech
recognition is the process by which a computer maps an acoustic speech signal to text. In general,
there are three types of speech recognition systems: speaker-dependent, speaker-independent, and
speaker-adaptive systems. A speaker-dependent system is trained on a single speaker and can
recognize only the speech of that one speaker. Speaker-independent systems can recognize any
speaker; they are the most difficult and most expensive to develop, and their accuracy is lower than
that of speaker-dependent systems, but they are more flexible. A speaker-adaptive system is built to
adapt its processes to the characteristics of new speakers.
Viewed another way, there are two types of speech recognition systems: continuous speech
recognition systems and isolated-word speech recognition systems. An isolated-word recognition
system handles a single word at a time, requiring a pause between words. A continuous speech
system recognizes speech in which words are connected together, i.e. not separated by pauses.
Generally, most speech recognition systems are implemented based mainly on one of the following:
Hidden Markov Models (HMM), deep belief neural networks, or dynamic time warping.
Myanmar is a tonal, syllable-timed, largely monosyllabic and analytic language, with a
subject-object-verb word order. The Myanmar language has 9 parts of speech and is spoken by
32 million people as a first language and by 10 million as a second language. No Myanmar speech
recognition engine has been built before.
In this paper, we mainly focus on the Myanmar phonetic structure for a speech recognition system.
This paper is organized as follows. In section 2, we discuss related work in the area of speech
recognition systems based on syllable models. In section 3, we describe the characteristics of
Myanmar phones and syllables. In section 4, we present the architecture of the speech recognition
system. In sections 5 and 6, we discuss how to build the acoustic model and the language model. In
section 7, we describe the phonetic dictionary of the speech recognition system. Finally, we conclude
with the results of the proposed system and its difficulties and limitations.
2. RELATED WORK
Many researchers have worked on syllable-based speech recognition in other languages, but for our
language, Myanmar, no one has implemented a syllable-based speech recognition system. In the
following paragraphs, we present some related work in the area of syllable-based speech recognition
systems for other languages and speech recognition for the Myanmar language.
Piotr Majewski presented a syllable-based language model for a highly inflectional language, Polish.
The author demonstrated that syllables are useful sub-word units in language modeling of Polish. A
syllable-based model is a very promising choice for language modeling in many cases, such as small
available corpora or a highly inflectional language. [7]
R. Thangarajan, A.M. Natarajan, and M. Selvam presented syllable modeling in continuous speech
recognition for the Tamil language. In their paper, two methodologies are proposed which
demonstrate the syllable's significance in speech recognition. In the first methodology, modeling the
syllable as an acoustic unit is suggested, and context-independent (CI) syllable models are trained
and tested. The second methodology proposes integration of syllable information into conventional
triphone or context-dependent (CD) phone modeling. [8]
Xunying Liu, James L. Hieronymus, Mark J. F. Gales, and Philip C. Woodland presented syllable
language models for Mandarin speech recognition. In their paper, character-level language models
were used as an approximation of the allowed syllable sequences that follow Mandarin Chinese
syllabotactic rules. A range of combination schemes was investigated to integrate character-sequence-
level constraints into a standard word-based speech recognition system. [9]
Ingyin Khaing presented a Myanmar continuous speech recognition system based on DTW and
HMM. In that paper, combinations of LPC, MFCC and GTCC techniques are applied in the feature
extraction part of the system. The HMM method is extended by combining it with the DTW
algorithm in order to combine the advantages of these two powerful pattern recognition
techniques. [10]
3. MYANMAR SYLLABLE
Myanmar language is a member of the Sino-Tibetan family of languages, of which the Tibeto-Burman
subfamily forms a part. The Myanmar script derives from the Brahmi script. There are 12 basic
vowels, 33 consonants and 4 medials in the Myanmar language. In Myanmar, words are formed by
combining basic characters with extended characters. Myanmar syllables can take one or more
extended characters, and consonants can combine to form compound words. The 33 Myanmar
consonants are listed in the following table.
Table 1. Myanmar Consonants
The sequential extension of the 12 basic vowels results in the 22 vowels listed in the original
thinbongyi. These 22 extended vowels are listed in the following table.
Table 2. Basic and Extension Vowels
4. ARCHITECTURAL OVERVIEW OF SPEECH RECOGNITION SYSTEM
Generally, a speech recognition system takes speech as input, uses voice data as its knowledge base,
and produces text as output, as shown in figure 1 below. The knowledge base is the data that drives
the decoder of the speech recognition system. It is created from three sets of data:
• Dictionary
• Acoustic Model
• Language Model
Figure 1. Speech Recognition System
The dictionary contains a mapping from words to phones. An acoustic model contains acoustic
properties for each senone (a state of a phone). A language model is used to restrict the word search:
it defines which words can follow previously recognized words (recall that matching is a sequential
process) and helps significantly restrict the matching process by pruning words that are not probable.
The proposed speech recognition system has three main components: feature extraction, phone
recognition, and decoding. The architecture of a simplified speech recognition system is as follows.
In this architecture, we compute the most probable sequence W given some observation sequence O.
We choose the sentence for which the product of the two probabilities is greatest, as in the following
equation. [1]
Figure 2. A Simple Discriminative Speech Recognition System Overview
Ŵ = argmax_{W ∈ L} P(O|W) P(W)    (1)
In equation (1), the acoustic model computes the observation likelihood P(O|W), and the language
model provides the prior probability P(W). [1]
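As a minimal sketch of equation (1), the decoder can be pictured as picking, among candidate sentences, the one that maximizes the product P(O|W)·P(W), computed in log space for numerical stability. The candidate strings and scores below are hypothetical, for illustration only:

```python
import math

def best_sentence(candidates, acoustic_ll, lm_prob):
    """Pick W maximizing P(O|W)*P(W); equivalently, the sum of log terms."""
    return max(candidates,
               key=lambda w: acoustic_ll[w] + math.log(lm_prob[w]))

# Hypothetical scores for two candidate syllable sequences.
acoustic_ll = {"ka la": -12.0, "ka ma": -11.5}   # log P(O|W) from the AM
lm_prob     = {"ka la": 0.04,  "ka ma": 0.01}    # P(W) from the LM
print(best_sentence(acoustic_ll.keys(), acoustic_ll, lm_prob))   # ka la
```

Even though "ka ma" has the higher acoustic score here, the language model prior tips the decision to "ka la", which is the point of combining the two models.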
4.1. Feature Extraction
Feature extraction is the transformation of the speech waveform into a sequence of acoustic feature
vectors. The feature vectors represent the information in a small time window of the signal. The
acoustic waveform is sampled into frames (usually 10, 15, or 20 milliseconds long) that are
transformed into spectral features as in the following figure. Each time frame (window) is thus
represented by a vector of around 39 features representing this spectral information. [1]
There are seven steps in the feature extraction process:
1. Pre-emphasis
2. Windowing
3. Discrete Fourier Transform
4. Mel Filter Bank
5. Log
6. Inverse Discrete Fourier Transform
7. Deltas and Energy.
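The first three steps above can be sketched as follows. This is a simplified NumPy illustration, not the system's actual front end; the frame size, frame shift, and sample rate are example values:

```python
import numpy as np

def preemphasize(signal, alpha=0.97):
    # Step 1: boost high frequencies: y[n] = x[n] - alpha * x[n-1]
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def frames(signal, rate=16000, size_ms=20, shift_ms=10):
    # Step 2: slice into overlapping frames and apply a Hamming window
    size, shift = int(rate * size_ms / 1000), int(rate * shift_ms / 1000)
    n = 1 + max(0, (len(signal) - size) // shift)
    out = np.stack([signal[i * shift : i * shift + size] for i in range(n)])
    return out * np.hamming(size)

def power_spectrum(framed, nfft=512):
    # Step 3: DFT magnitude squared, one spectrum per frame
    return np.abs(np.fft.rfft(framed, nfft)) ** 2

sig = np.random.randn(16000)            # 1 s of fake 16 kHz audio
ps = power_spectrum(frames(preemphasize(sig)))
print(ps.shape)                          # (99, 257): frames x frequency bins
```

The remaining steps (mel filter bank, log, inverse DFT, deltas and energy) operate on this frames-by-bins matrix.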
Figure 3. Windowing Process of Feature Extraction (frame size 20 ms, frame shift 4 ms)
The pre-emphasis stage boosts the amount of energy in the high frequencies. Boosting the
high-frequency energy makes information from the higher formants more available to the acoustic
model, and this process can improve phone detection accuracy. The roughly stationary portion of
speech is extracted from the waveform by using a window that is non-zero inside some region and
zero elsewhere, running this window across the speech signal and extracting the waveform inside it.
The method for extracting spectral information in discrete frequency bands from a discrete-time
signal is the discrete Fourier transform (DFT).
The model used in Mel-Frequency Cepstral Coefficient (MFCC) extraction warps the frequencies
output by the DFT onto the mel scale. A mel is a unit of pitch. In general, the human response to
signal level is logarithmic; humans are less sensitive to slight differences in amplitude at high
amplitudes than at low amplitudes. In addition, taking a log makes the feature estimates less
sensitive to variations in input, such as power variations due to the distance between the speaker
and the microphone. The next step in MFCC feature extraction is the
computation of the cepstrum, also called the spectrum of the log of the spectrum. The cepstrum can
be seen as the inverse DFT of the log magnitude of the DFT of a signal.
The extraction of the cepstrum with the inverse DFT from the previous steps results in 12 cepstral
coefficients for each frame. The energy in a frame, the 13th feature, is the sum over time of the
power of the samples in the frame. The delta values estimate the slope using a wider context of
frames.
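The energy and delta features can be sketched as follows. This is an illustrative implementation of the standard regression-based delta formula; the ±2-frame context is an assumption, not a value taken from the paper:

```python
import numpy as np

def frame_energy(frame):
    # 13th feature: sum over time of the power of the samples in the frame
    return np.sum(frame.astype(float) ** 2)

def deltas(feats, N=2):
    # Slope estimate over a context of +/- N frames (regression formula);
    # edge frames reuse the first/last frame as padding.
    denom = 2 * sum(n * n for n in range(1, N + 1))
    padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")
    return np.stack([
        sum(n * (padded[t + N + n] - padded[t + N - n])
            for n in range(1, N + 1)) / denom
        for t in range(len(feats))
    ])

c = np.arange(12.0).reshape(4, 3)   # 4 frames of 3 cepstral coefficients
d = deltas(c)
print(d.shape)                      # (4, 3): one delta per coefficient per frame
```

Applying the same formula to the deltas themselves yields the double-delta features that round out the ~39-dimensional vector mentioned above.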
4.2. Phone Recognition
The phone recognition stage computes the phone likelihood of the observed spectral feature
vectors given Myanmar phone units or subparts of phones. In this proposed system, we use
Gaussian Mixture Model (GMM) classifiers to compute, for each HMM state q corresponding to
a phone or subphone, the likelihood p(o|q) of a given feature vector. The Gaussian mixture model
is computed as in the following equation. [1]
f(x | μ_jk, Σ_jk) = Σ_{k=1}^{M} c_jk · [1 / ((2π)^(D/2) |Σ_jk|^(1/2))] · exp(−(1/2) (x − μ_jk)^T Σ_jk^(−1) (x − μ_jk))    (2)
In equation (2), M is the number of Gaussian components, the c_jk are the mixture weights, and D is
the dimensionality; in this system, D = 39.
Most speech recognition algorithms are based on computing observation probabilities directly on
the real-valued, continuous input feature vectors. The acoustic models are based on the
computation of a probability density function (pdf) over a continuous space. By far the most
common method for computing acoustic likelihoods is the Gaussian mixture model (GMM) pdf,
although neural networks, support vector machines (SVMs), and conditional random fields
(CRFs) are also used.
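Equation (2) can be evaluated directly. The sketch below assumes diagonal covariance matrices, as is common in HMM-GMM systems (the paper does not state the covariance structure); the weights, means, and variances are toy values:

```python
import numpy as np

def gmm_likelihood(x, weights, means, variances):
    """Sum over M components of c_k * N(x; mu_k, diag(var_k)) -- eq. (2)
    specialized to diagonal covariances (an assumption, for illustration)."""
    D = x.shape[0]
    diff = x - means                                    # (M, D)
    exponent = -0.5 * np.sum(diff * diff / variances, axis=1)
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.prod(variances, axis=1))
    return float(np.sum(weights * np.exp(exponent) / norm))

x = np.zeros(2)                     # a 2-D feature vector (39-D in the system)
w = np.array([0.5, 0.5])            # mixture weights c_k
mu = np.zeros((2, 2))               # component means
var = np.ones((2, 2))               # diagonal variances
print(round(gmm_likelihood(x, w, mu, var), 6))   # 0.159155, i.e. 1/(2*pi)
```

In practice these likelihoods are computed in the log domain to avoid underflow with 39-dimensional features.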
4.3. Decoding
In the decoding stage, the proposed system uses the Viterbi algorithm as the decoder. The decoder is
the heart of the speech recognition process. Its task is to find the best hidden sequence of states
given the sequence of observations as input. First, the decoder selects the next set of likely states
and then scores the incoming features against these states. The decoder prunes low-scoring states
and finally generates the result.
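The decoding step described above can be sketched as a plain Viterbi search in log space. This is a toy illustration without the pruning that a real decoder performs; the transition and observation probabilities are hypothetical:

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Best hidden state sequence. log_A: (N, N) transition log-probs,
    log_B: (T, N) per-frame observation log-likelihoods, log_pi: (N,)."""
    T, N = log_B.shape
    v = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)
    v[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = v[t - 1][:, None] + log_A      # scores[i, j]: come from i into j
        back[t] = np.argmax(scores, axis=0)     # best predecessor for each j
        v[t] = scores[back[t], np.arange(N)] + log_B[t]
    path = [int(np.argmax(v[-1]))]              # backtrace from best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy two-state example (hypothetical probabilities, not the paper's models).
A  = np.array([[0.1, 0.9], [0.9, 0.1]])
B  = np.array([[0.9, 0.1], [0.1, 0.9]])   # rows: frames, cols: states
pi = np.array([0.9, 0.1])
print(viterbi(np.log(A), np.log(B), np.log(pi)))   # [0, 1]
```

A production decoder adds beam pruning: at each frame, states whose score falls too far below the current best are dropped before the next frame is scored.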
5. HOW TO BUILD ACOUSTIC MODEL
The acoustic model is trained by analyzing large corpora of Myanmar speech. Hidden Markov
Models (HMMs) represent each unit of speech in the acoustic model, and a scorer uses them to
calculate the acoustic probability of a particular unit of speech. Each state of an HMM is
represented by a set of Gaussian mixture density functions. A Gaussian mixture model (GMM) is a
parametric probability density function represented as a weighted sum of Gaussian component
densities. There are many acoustic model training tools; among them we chose sphinxtrain to build
the acoustic model for a new language, Myanmar.
To build a speech recognition system for a single speaker, we collected one hour of recordings.
Each file is 7 seconds long on average. The parameters of the acoustic model of the sound units
using feature vectors are learnt by the trainer. This collection is called a training database. The file
structure of the database is:
/etc
/db_name.dic
/db_name.phone
/db_name.lm.DMP
/db_name.filler
/db_name_train.fileids
/db_name_train.transcription
/db_name_test.fileids
/db_name_test.transcription
/wav
/speaker_1
/file1.wav
/file2.wav
…
In the above file structure, etc, wav, and speaker_1 are folder names. The db_name.dic file is the
phonetic dictionary that maps words to phones. The db_name.phone file is the phone set file, with
one phone per line. The db_name.lm.DMP file is the language model file; it may be in ARPA format
or in DMP format. The db_name.filler file is a filler dictionary that contains filler phones
(non-linguistic sounds not covered by the language model, such as breath, hmm or laughter). The
db_name_train.fileids file is a text file listing the names of the recordings, one per line, for training;
db_name_test.fileids is the same for testing. The db_name_train.transcription file is a text file
containing the transcription of each audio file for training; db_name_test.transcription is the same
for testing. The wav files (filename.wav) that we used are recordings with a specific sample rate:
16 kHz, 16-bit, mono. [4]
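As an illustration of this database layout, the fileids and transcription files can be generated from a list of utterances. The database name, speaker folder, and utterances below are hypothetical, and the exact conventions should be checked against the sphinxtrain documentation:

```python
from pathlib import Path

def write_train_files(db_name, utterances, out_dir="etc"):
    """utterances: list of (recording_name, transcription) pairs.
    Writes the *_train.fileids file (one recording name per line) and the
    *_train.transcription file (each line <s> text </s> tagged with its id)."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / f"{db_name}_train.fileids").write_text(
        "\n".join(f"speaker_1/{name}" for name, _ in utterances) + "\n")
    (out / f"{db_name}_train.transcription").write_text(
        "\n".join(f"<s> {text} </s> ({name})" for name, text in utterances) + "\n")

# Hypothetical database name and romanized syllable transcriptions.
write_train_files("mydb", [("file1", "ka la"), ("file2", "ka ma")],
                  out_dir="etc_demo")
print(Path("etc_demo/mydb_train.fileids").read_text().splitlines()[0])
```

The same pairing of fileids and transcription files is repeated for the test split.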
After training, the acoustic model is located in the db_name.cd_cont_<number_of senones> folder,
where <number_of senones> is the number of senones produced by the training tool. This folder is
under the model_parameters folder auto-generated by sphinxtrain. The
db_name.cd_cont_<number_of senones> folder should contain the following files:
/db_name.cd_cont_<number_of senones>
/mdef
/feat.params
/mixture_weights
/means
/noisedict
/transition_matrices
/variances.
The feat.params file contains the feature extraction parameters, a list of options used to configure
feature extraction. The mdef file is the definition file that maps triphone contexts to GMM ids
(senones). The means file contains the Gaussian codebook means, and the variances file contains the
Gaussian codebook variances. The mixture_weights file describes the mixture weights of the
Gaussians. The transition_matrices file contains the HMM transition matrices. The noisedict file is
the dictionary for filler words.
6. HOW TO BUILD LANGUAGE MODEL
The language model describes what is likely to be spoken in a particular context. Two types of
models are used in speech recognition systems: grammars and statistical language models. A
grammar-type language model describes a very simple language for command and control, and such
grammars are usually written by hand or generated automatically with plain code. A statistical
language model uses a stochastic approach called the n-gram language model. An n-gram is an
n-token sequence of words: a 2-gram (bigram) is a two-word sequence; a 3-gram (trigram) is a
three-word sequence. N-gram conditional probabilities can be computed from plain text based on
the relative frequency of word sequences. Viewed another way, there are two types of statistical
language models. The first is the closed-vocabulary language model, which assumes that the test set
can only contain words from the given lexicon; there are no unknown words in a closed-vocabulary
model. An open-vocabulary language model is one in which we model possible unknown words in
the test set by adding a pseudo-word called <UNK>; an open-vocabulary model includes a training
process for the probabilities of the unknown-word model.
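Relative-frequency n-gram estimation over syllable-segmented, <s>/</s>-delimited text can be sketched as follows. The romanized syllables are toy examples, and a real model would also apply smoothing:

```python
from collections import Counter

def trigram_probs(sentences):
    """Relative-frequency trigram estimates P(w3 | w1 w2) over syllable
    sequences, with <s>/</s> delimiters as the toolkit expects."""
    tri, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>", "<s>"] + s.split() + ["</s>"]
        for i in range(2, len(toks)):
            tri[tuple(toks[i - 2 : i + 1])] += 1   # count the trigram
            bi[tuple(toks[i - 2 : i])] += 1        # count its history
    return {g: c / bi[g[:2]] for g, c in tri.items()}

p = trigram_probs(["ka la ka", "ka la ma"])
print(p[("ka", "la", "ka")])   # 0.5: "ka la" is followed by "ka" half the time
```

The CMU toolkit performs the same counting over the normalized text files, then adds discounting and back-off before emitting the ARPA-format model.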
There are many approaches and tools for creating statistical language models. We use the CMU
language modeling toolkit to create the n-gram language model. The toolkit expects its input to be
normalized text files, with utterances delimited by <s> and </s> tags. [4] In this pre-processing step,
we normalize the text files in a syllable-based way. The syllable-based normalization is as follows:
(normalized sentence)
Before normalization we split the sentences syllable by syllable. In the syllable segmentation
process, we use rule-based segmentation to split syllables from sentences. The output is a 3-gram
language model based on the vocabulary given in the normalized text file, but our output language
model is based on Myanmar syllables. In this system, we chose to use the closed-vocabulary model.
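As a rough illustration of rule-based Myanmar syllable segmentation (a crude heuristic sketched for this summary, not the paper's actual rule set), one can start a new syllable at each consonant that is neither stacked under the previous letter nor killed by asat:

```python
import re

def split_syllables(text):
    # Heuristic rule: break before a Myanmar consonant (U+1000-U+1021)
    # unless it is stacked (preceded by virama U+1039) or killed by
    # asat (followed by U+103A). This is an illustrative simplification.
    return re.sub(r"(?<![\u1039])([\u1000-\u1021])(?![\u103A])",
                  r" \1", text).split()

print(split_syllables("\u1019\u103C\u1014\u103A\u1019\u102C"))  # "မြန်မာ"
```

On the word "မြန်မာ" (Myanmar) this rule yields the two syllables "မြန်" and "မာ"; a production segmenter needs further rules for independent vowels, digits, and punctuation.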
The output language model file is in ARPA format or binary format. An ARPA-format language
model file is shown in the following figure 4.
Figure 4. 3-grams (3 syllables sequences) in Language Model File
7. PHONETIC DICTIONARY (LEXICON)
A phonetic dictionary is a text file that contains a mapping from words to phones. It is a lexicon: a
list of words, with a pronunciation for each word expressed as a phone sequence. The phone
sequence can be specified by the lexicon. Each phone's HMM sequence is composed of subphones,
each with a Gaussian emission likelihood model. An example of the phonetic dictionary is given in
Table 3.
Table 3. Part of Phonetic Dictionary
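A dictionary in this format can be parsed with a few lines; the romanized syllable entries below are hypothetical, for illustration only:

```python
def load_dictionary(lines):
    """Parse a CMU-style phonetic dictionary: one entry per line,
    the word followed by its phone sequence, whitespace-separated."""
    lexicon = {}
    for line in lines:
        word, *phones = line.split()
        lexicon[word] = phones
    return lexicon

# Hypothetical entries mapping romanized Myanmar syllables to phones.
lex = load_dictionary(["ka k a", "la l a", "ma m a"])
print(lex["ka"])   # ['k', 'a']
```

The decoder uses this mapping to expand each word in the search space into its phone HMM sequence.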
8. RESULT OF SPEECH RECOGNITION
In figure 6, hidden states are shown in circles and observations in squares. Dotted (unfilled) circles
indicate illegal transitions. For a given state q_j at time t, the value v_t(j) is computed as follows:
v_t(j) = max_{i=1..N} v_{t−1}(i) · a_ij · b_j(o_t)    (3)
In equation (3), v_t(j) is the Viterbi probability at time t, a_ij is the transition probability from
previous state q_i to current state q_j, and b_j(o_t) is the state observation likelihood of the
observation o_t given the current state j. [1]
Figure 5. A Hidden Markov Model for relating feature values and Myanmar syllables
Figure 6. The Viterbi trellis for computing the best sequence of Myanmar Syllable
9. CONCLUSION AND FUTURE WORK
The standard evaluation metric for speech recognition systems is word error rate (WER). The word
error rate is based on how much the word string returned by the recognizer differs from a correct or
reference transcription. The proposed system, however, uses syllable error rate (SER) as the
evaluation metric instead of word error rate; the result is therefore based on how much the syllable
string returned by the recognition engine differs from a correct or reference transcription. Our
proposed system is currently speaker-dependent, and its language model is of the closed-vocabulary
type. In the future, we plan to develop this system into a speaker-independent speech recognition
system, and we hope to make the language model an open-vocabulary type.
REFERENCES
[1] Daniel Jurafsky, and James H. Martin Smith (2009), Speech and Language Processing, Pearson
Education Ltd., Upper Saddle River, New Jersey 07458
[2] Myanmar Language Commission (2011), Myanmar-English Dictionary, Department of Myanmar
Language Commission, Ministry of Education, Union of Myanmar
[3] Willie Walker, Paul Lamere, Philip Kwok, Bhiksha Raj, Rita Singh, Evandro Gouvea, Peter Wolf,
and Joe Woelfel (2004), "Sphinx-4: A Flexible Open Source Framework for Speech Recognition",
SMLI TR2004-0811, Sun Microsystems Inc.
[4] Hassan Satori, Hussein Hiyassat, Mostafa Harti, and Noureddine Chenfour (2009), “Investigation
Arabic Speech Recognition Using CMU Sphinx System”, The International Arab Journal of
Information Technology, Vol. 6, April
[5] http://en.wikipedia.org/wiki/Speech_recognition
[6] http://cmusphinx.sourceforge.net/
[7] Piotr Majewski (2008), “Syllable Based Language Model for Large Vocabulary Continuous Speech
Recognition of Polish”, University of Łód´z, Faculty of Mathematics and Computer Science ul.
Banacha 22, 90-238 Łód´z, Poland, P. Sojka et al. (Eds.): TSD 2008, LNAI 5246, pp. 397–401
[8] R. Thangarajan, A.M. Natarajan, M. Selvam(2009), “Syllable modeling in continuous speech
recognition for Tamil language”, Department of Information Technology, Kongu Engineering
College, Perundurai 638 052, Erode, India, Int J Speech Technol (2009) 12: 47–57
[9] Xunying Liu, James L. Hieronymus, Mark J. F. Gales and Philip C. Woodland (2013), “Syllable
language models for Mandarin speech recognition: Exploiting character language models”,
Cambridge University Engineering Department, Cambridge, United Kingdom, J. Acoust. Soc. Am.
133 (1), January 2013
[10] Ingyin Khaing (2013), “Myanmar Continuous Speech Recognition System Based on DTW and
HMM”, Department of Information and Technology, University of Technology (Yatanarpon Cyber
City),near Pyin Oo Lwin, Myanmar, International Journal of Innovations in Engineering and
Technology (IJIET), Vol. 2 Issue 1 February 2013
[11] Ciro Martins, António Teixeira, João Neto (2004), “Language Models in Automatic Speech
Recognition”, VOL. 4, Nº 2, JANEIRO 2004, L2F – Spoken Language Systems Lab; INESC-ID/IST,
Lisbon
[12] Edward W. D. Whittaker, Statistical Language Modeling for Automatic Speech Recognition of
Russian and English, Trinity College, University of Cambridge
[13] Mohammad Bahrani, Hossein Sameti, Nazila Hafezi, and Saeedeh Momtazi (2008), “A New Word
Clustering Method for Building N-Gram Language Models in Continuous Speech Recognition
Systems”, Speech Processing Lab, Computer Engineering Department, Sharif University of
Technology, Tehran, Iran, N.T. Nguyen et al. (Eds.): IEA/AIE 2008, LNAI 5027, pp. 286–293
Authors
Dr. Yadana Thein is working as an associate professor in the Department of Computer Hardware
Technology at the University of Computer Studies, Yangon. She received her master's degree from
the University of Computer Studies, Yangon, and her doctoral degree from the same university. Her
interests are in Natural Language Processing.
Wunna Soe is at present a Ph.D. candidate at the University of Computer Studies, Yangon. He
received his Master of Computer Science (M.C.Sc.) from the University of Computer Studies,
Mandalay (UCSM). His current research is in Automatic Speech Recognition and Natural Language
Processing.