Parsing of Myanmar Sentences With Function Tagging
This paper describes the use of Naive Bayes to assign function tags, and of a context-free grammar (CFG), to parse Myanmar sentences. Part of the challenge of statistical function tagging for Myanmar sentences comes from the fact that Myanmar has free phrase order and a complex morphological system. Function tagging is a pre-processing step for parsing. In the function tagging task, we use a functionally annotated corpus and tag Myanmar sentences that carry correct segmentation, POS (part-of-speech) tagging, and chunking information. We propose Myanmar grammar rules and apply the CFG to find the parse tree of a function-tagged Myanmar sentence. Experiments show that our analysis achieves good results in parsing simple sentences and three types of complex sentences.
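The Naive Bayes step described above can be sketched as follows. This is a minimal, general illustration of Bayesian tag disambiguation, not the paper's implementation; the tag set (SUBJ/OBJ/PRED) and the chunk features are invented for the example.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTagger:
    """Choose the most likely function tag for a chunk from simple features."""

    def __init__(self):
        self.tag_counts = Counter()              # prior counts per tag
        self.feat_counts = defaultdict(Counter)  # tag -> feature counts
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (feature_list, tag) pairs."""
        for feats, tag in examples:
            self.tag_counts[tag] += 1
            for f in feats:
                self.feat_counts[tag][f] += 1
                self.vocab.add(f)

    def predict(self, feats):
        """argmax over tags of log P(tag) + sum log P(f|tag), add-one smoothed."""
        total = sum(self.tag_counts.values())

        def score(tag):
            s = math.log(self.tag_counts[tag] / total)
            denom = sum(self.feat_counts[tag].values()) + len(self.vocab)
            for f in feats:
                s += math.log((self.feat_counts[tag][f] + 1) / denom)
            return s

        return max(self.tag_counts, key=score)

# Invented features: POS of the chunk head and its slot in the sentence.
tagger = NaiveBayesTagger()
tagger.train([
    (["pos=noun", "slot=first"], "SUBJ"),
    (["pos=noun", "slot=middle"], "OBJ"),
    (["pos=verb", "slot=last"], "PRED"),
    (["pos=noun", "slot=first"], "SUBJ"),
])
print(tagger.predict(["pos=noun", "slot=first"]))  # SUBJ
```

Given correct segmentation, POS tags, and chunks, each chunk's features would be scored this way and the highest-probability function tag assigned.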
STATISTICAL FUNCTION TAGGING AND GRAMMATICAL RELATIONS OF MYANMAR SENTENCES (cscpconf)
This paper describes context-free grammar (CFG) based grammatical relations for Myanmar sentences, combined with a corpus-based function tagging system. Part of the challenge of statistical function tagging for Myanmar sentences comes from the fact that Myanmar has free phrase order and a complex morphological system. Function tagging is a pre-processing step to show the grammatical relations of Myanmar sentences. In the function tagging task, which tags the functions of Myanmar sentences given correct segmentation, POS (part-of-speech) tagging, and chunking information, we use Naive Bayesian theory to disambiguate the possible function tags of a word. We apply the CFG to find the grammatical relations of the function tags. We also create a functionally annotated tagged corpus for Myanmar and propose grammar rules for Myanmar sentences. Experiments show that our analysis achieves good results on both simple and complex sentences.
ATTENTION-BASED SYLLABLE LEVEL NEURAL MACHINE TRANSLATION SYSTEM FOR MYANMAR ... (kevig)
Neural machine translation is a new approach to machine translation that has shown effective results for high-resource languages. Recently, attention-based neural machine translation with large-scale parallel corpora has played an important role in achieving high translation performance. In this research, a parallel corpus for the Myanmar-English language pair is prepared, and attention-based neural machine translation models are introduced at the word-to-word, character-to-word, and syllable-to-word levels. We run experiments with the proposed models to translate long sentences and to address morphological problems. To mitigate the low-resource problem, source-side monolingual data are also used. Thus, this work investigates improving a Myanmar-to-English neural machine translation system. The experimental results show that the syllable-to-word level neural machine translation model obtains an improvement over the baseline systems.
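The attention mechanism these models rely on can be sketched, for a single decoder step, as a softmax-weighted sum of encoder states. The two-dimensional vectors below are toy numbers; real models learn query, key, and value projections.

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numeric stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for one decoder step: score each encoder
    state against the query, softmax the scores, return the weighted sum."""
    d = len(query)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    weights = softmax([dot(query, k) / math.sqrt(d) for k in keys])
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Three encoder states; the query "matches" the 1st and 3rd equally well.
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context, weights = attend([1.0, 0.0], keys, values)
print(weights)
```

The weights sum to one, and the mismatched middle state receives the least attention, which is the behaviour the attention layer exploits when aligning source syllables with target words.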
Phrase identification is one of the most critical and widely studied tasks in natural language processing (NLP). Identifying verb phrases within a sentence is useful for a variety of NLP applications, and morphological analysis is one of the core enabling technologies those applications require. This paper presents a Myanmar verb phrase identification and translation algorithm and develops a Markov model with morphological analysis. The system is based on a rule-based maximum matching approach. In machine translation, a large amount of information is needed to guide the translation process. Myanmar is an inflected language, and very few lexicons have been created or studied for it compared to other languages such as English, French, and Czech. Therefore, this paper proposes a Myanmar verb phrase identification and translation model based on the syntactic structure and morphology of the Myanmar language, using a Myanmar-English bilingual lexicon. A Markov model is also used to reformulate the translation probability of phrase pairs. Experimental results showed that the proposed system can improve translation quality by applying morphological analysis to the Myanmar language.
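The rule-based maximum matching step can be sketched as a greedy longest-match scan over a lexicon. The romanized syllables and the tiny lexicon below are hypothetical stand-ins; the actual system works on Myanmar script with a full bilingual lexicon.

```python
def max_match(syllables, lexicon, max_len=4):
    """Greedy forward maximum matching: at each position take the longest
    lexicon entry; fall back to a single syllable when nothing matches."""
    out, i = [], 0
    while i < len(syllables):
        for length in range(min(max_len, len(syllables) - i), 0, -1):
            cand = tuple(syllables[i:i + length])
            if length == 1 or cand in lexicon:
                out.append(" ".join(cand))
                i += length
                break
    return out

# Hypothetical romanized syllables; lexicon entries group multi-syllable words.
lexicon = {("sa", "oat"), ("phat", "nay")}
print(max_match(["thu", "sa", "oat", "phat", "nay", "tal"], lexicon))
# ['thu', 'sa oat', 'phat nay', 'tal']
```

Once segments are grouped this way, a phrase table and the Markov model can score candidate translations of each grouped phrase.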
SYLLABLE-BASED NEURAL NAMED ENTITY RECOGNITION FOR MYANMAR LANGUAGE (ijnlc)
Named Entity Recognition (NER) for the Myanmar language is essential to Myanmar natural language processing research. In this work, NER for Myanmar is treated as a sequence tagging problem, and the effectiveness of deep neural networks on Myanmar NER has been investigated. Experiments are performed by applying deep neural network architectures to syllable-level Myanmar contexts. The first manually annotated NER corpus for the Myanmar language is also constructed and proposed. In developing our in-house NER corpus, sentences from online news websites and sentences from the ALT-Parallel-Corpus are used. The ALT corpus is part of the Asian Language Treebank (ALT) project under ASEAN IVO. This paper contributes the first evaluation of neural network models on the NER task for Myanmar. The experimental results show that these neural sequence models can produce promising results compared to the baseline CRF model. Among the neural architectures, a bidirectional LSTM network with a CRF layer on top gives the highest F-score. This work also aims to discover the effectiveness of neural network approaches to Myanmar text processing and to promote further research on this understudied language.
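The CRF layer on top of the bidirectional LSTM is decoded with the Viterbi algorithm, which can be sketched as follows. The emission scores here are made-up numbers standing in for BiLSTM outputs, and the transition table is simplified to a single constraint.

```python
def viterbi(emissions, transitions, tags):
    """Highest-scoring tag path under per-position emission scores plus
    tag-to-tag transition scores, as in CRF-layer decoding."""
    best = [{t: emissions[0][t] for t in tags}]   # best score ending in tag t
    back = []                                     # backpointers per position
    for em in emissions[1:]:
        scores, ptrs = {}, {}
        for t in tags:
            prev, s = max(((p, best[-1][p] + transitions[(p, t)])
                           for p in tags), key=lambda x: x[1])
            scores[t] = s + em[t]
            ptrs[t] = prev
        best.append(scores)
        back.append(ptrs)
    path = [max(tags, key=lambda t: best[-1][t])]
    for ptrs in reversed(back):                   # walk backpointers
        path.append(ptrs[path[-1]])
    return path[::-1]

tags = ["B-PER", "I-PER", "O"]
transitions = {(a, b): 0.0 for a in tags for b in tags}
transitions[("O", "I-PER")] = -10.0   # penalize I-PER right after O
emissions = [{"B-PER": 2.0, "I-PER": 0.0, "O": 1.0},   # stand-ins for
             {"B-PER": 0.0, "I-PER": 1.5, "O": 1.0},   # BiLSTM outputs
             {"B-PER": 0.0, "I-PER": 0.0, "O": 2.0}]
print(viterbi(emissions, transitions, tags))  # ['B-PER', 'I-PER', 'O']
```

The transition scores are what the CRF layer adds over a plain softmax output: label sequences that violate the BIO scheme are penalized jointly rather than position by position.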
An important problem in Thai word segmentation is the sentential noun phrase. Existing studies try to minimize the problem, but no research has solved it directly. This study investigates an approach to resolving the problem using conditional random fields, a probabilistic model for segmenting and labeling sequence data. The results show that more than 78.61% of noun phrases were correctly detected with our technique.
The Natural Language Toolkit (NLTK) is a generic platform for processing data in various natural (human) languages, and it also provides resources for Indian languages such as Hindi, Bangla, and Marathi. In the proposed work, the repositories provided by NLTK are used to carry out the processing of Hindi text and, further, the analysis of multi-word expressions (MWEs). MWEs are lexical items that can be decomposed into multiple lexemes and display lexical, syntactic, semantic, pragmatic, and statistical idiomaticity. The main focus of this paper is on the processing and analysis of MWEs in Hindi text. The corpus used for Hindi text processing is taken from the famous Hindi novel "KaramaBhumi" by Munshi PremChand. The result analysis is done using the Hindi corpus provided by the Resource Centre for Indian Language Technology Solutions (CFILT). Results are analysed to justify the accuracy of the proposed work.
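One common statistical signal for MWE candidates, which NLTK's collocation tools also implement, is pointwise mutual information over adjacent word pairs. The sketch below is self-contained and not the paper's exact method; the English sample text stands in for Hindi tokens.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI);
    pairs that co-occur far more often than chance are MWE candidates."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = {
        pair: math.log((c / (n - 1)) /
                       ((uni[pair[0]] / n) * (uni[pair[1]] / n)))
        for pair, c in bi.items() if c >= min_count
    }
    # Highest-PMI pairs first.
    return sorted(scored.items(), key=lambda kv: -kv[1])

tokens = "new delhi is far , new delhi is big , the city is big".split()
print(pmi_bigrams(tokens))   # ('new', 'delhi') ranks highest
```

"new delhi" outranks the frequent but compositional pairs because its components rarely occur apart, which is exactly the statistical idiomaticity the abstract mentions.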
A COMPARATIVE STUDY OF FEATURE SELECTION METHODS (ijnlc)
Text analysis has been attracting increasing attention in this data era. Selecting effective features from datasets is a particularly important part of text classification studies. Feature selection excludes irrelevant features from the classification task, reduces the dimensionality of a dataset, and improves the accuracy and performance of identification. Many feature selection methods have been proposed so far; however, it remains unclear which method is the most effective in practice. This article focuses on evaluating and comparing the general versatility of the available feature selection methods on authorship attribution problems and tries to identify which method is the most effective. The general versatility of feature selection methods and their role in selecting appropriate features for varying data are discussed. In addition, different languages, different types of features, different systems for calculating the accuracy of SVMs (support vector machines), and different criteria for determining the rank of feature selection methods were used to measure the general versatility of these methods. The analysis results indicate that the best feature selection method differs for each dataset; however, some methods can always extract useful information to discriminate the classes. Chi-square proved to be the better method overall.
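The chi-square method the study favours scores each feature/class pair from a 2x2 contingency table. A minimal sketch with illustrative counts (the numbers are invented):

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square score of a feature for a class from a 2x2 table:
    n11/n10 = class docs with/without the feature,
    n01/n00 = other docs with/without the feature."""
    n = n11 + n10 + n01 + n00
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return n * (n11 * n00 - n10 * n01) ** 2 / den if den else 0.0

# A feature in 8/10 class docs but only 1/10 other docs scores high;
# a feature spread evenly over both classes scores zero.
print(chi_square(8, 2, 1, 9))
print(chi_square(5, 5, 5, 5))  # 0.0
```

In a classifier pipeline, every (feature, class) pair is scored this way and only the top-ranked features are kept, shrinking the dimensionality as the abstract describes.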
ADVANCEMENTS ON NLP APPLICATIONS FOR MANIPURI LANGUAGE (ijnlc)
Manipuri is both a minority and a morphologically rich language, with genetic features similar to Tibeto-Burman languages. It has Subject-Object-Verb (SOV) order and agglutinative verb morphology, and is monosyllabic. Morphology and syntax are not clearly distinguished in this language. Natural Language Processing (NLP) is a useful research field of computer science that deals with processing large amounts of natural language corpora. NLP applications encompass e-dictionaries, morphological analyzers, reduplicated multi-word expressions (RMWE), named entity recognition (NER), part-of-speech (POS) tagging, machine translation (MT), WordNet, word sense disambiguation (WSD), etc. In this paper, we present a study of the advancements in NLP applications for the Manipuri language, presenting a comparison table of the approaches and techniques adopted and the results obtained for each application, followed by a detailed discussion of each work.
NAMED ENTITY RECOGNITION FROM BENGALI NEWSPAPER DATA (ijnlc)
Due to the dramatic growth of internet use, the amount of unstructured Bengali text data has increased enormously. It is therefore essential to extract events from it intelligently. Progress in natural language processing (NLP) technologies for information extraction makes it possible to locate and classify content in news data according to predefined categories such as person names, place names, organization names, dates, and times. Named entity recognition (NER), a subtask of NLP, plays a vital role in achieving human-level performance on specific documents such as newspapers by effectively identifying entities. The purpose of this research is to introduce an NER system for Bengali news data that identifies events and specified entities in running text, based on regular expressions and Bengali grammar. In doing so, I have designed and evaluated part-of-speech (POS) tags to recognize proper nouns. In this thesis, I have explained a Hidden Markov Model (HMM) based approach for developing an NER system from Bengali news data.
Myanmar news summarization using different word representations (IJECEIAES)
There is an enormous amount of information available in different sources and genres. In order to extract useful information from a massive amount of data, an automatic mechanism is required. Text summarization systems assist with content reduction, keeping the important information and filtering out the non-important parts of the text. Good document representation is very important in text summarization for retrieving relevant information. Bag-of-words cannot capture word similarity in syntactic and semantic relationships, whereas word embeddings can give a good document representation that captures and encodes the semantic relations between words. Therefore, a centroid based on word embedding representation is employed in this paper, and Myanmar news summarization based on different word embeddings is proposed. Myanmar local and international news are summarized using a centroid-based word embedding summarizer, exploiting the effectiveness of the word embedding representation. Experiments were done on a Myanmar local and international news dataset using different word embedding models, and the results are compared with the performance of bag-of-words summarization. Centroid summarization using word embeddings performs comprehensively better than centroid summarization using bag-of-words.
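The centroid-based summarizer can be sketched as follows: a sentence vector is the mean of its word embeddings, and the sentences closest to the centroid of all sentence vectors are selected. The two-dimensional embeddings and English tokens below are toy stand-ins for trained Myanmar word vectors.

```python
import math

def cosine(u, v):
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv) if nu and nv else 0.0

def centroid_summary(sentences, embed, k=1):
    """Pick the k sentences whose mean-word-embedding vector lies closest
    (by cosine similarity) to the centroid of all sentence vectors."""
    dim = len(next(iter(embed.values())))

    def sent_vec(words):
        vecs = [embed[w] for w in words if w in embed]
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    svecs = [sent_vec(s.split()) for s in sentences]
    centroid = [sum(v[i] for v in svecs) / len(svecs) for i in range(dim)]
    ranked = sorted(sentences,
                    key=lambda s: -cosine(sent_vec(s.split()), centroid))
    return ranked[:k]

# Toy 2-d embeddings; real systems use trained Myanmar word vectors.
embed = {"flood": [1.0, 0.0], "rain": [0.9, 0.1],
         "storm": [0.8, 0.2], "football": [0.0, 1.0]}
sentences = ["flood rain storm", "rain flood", "football"]
print(centroid_summary(sentences, embed, k=1))
```

Because the centroid is dominated by the weather-related vectors, the off-topic sentence ranks last; with bag-of-words, "rain flood" and "flood rain storm" would share no similarity signal beyond exact word overlap.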
Language Combinatorics: A Sentence Pattern Extraction Architecture Based on C... (Waqas Tariq)
A "sentence pattern" in modern Natural Language Processing is often taken to be a contiguous string of words (an n-gram). However, in many branches of linguistics, like Pragmatics or Corpus Linguistics, it has been noticed that simple n-gram patterns are not sufficient to reveal the whole sophistication of grammar patterns. We present a language-independent architecture for extracting from sentences more sophisticated patterns than n-grams. In this architecture a "sentence pattern" is considered as an n-element ordered combination of sentence elements. Experiments showed that the method extracts significantly more frequent patterns than the usual n-gram approach.
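The combination-based patterns can be sketched with itertools: instead of only contiguous n-grams, every ordered combination of up to n sentence elements is kept, with a wildcard marking skipped positions. The wildcard symbol and the cap on pattern length are assumptions for the illustration, not the paper's exact representation.

```python
from itertools import combinations

def sentence_patterns(tokens, max_n=3, gap="*"):
    """Every ordered combination of up to max_n sentence elements; a
    wildcard marks positions skipped inside the combination."""
    patterns = set()
    for n in range(1, max_n + 1):
        for idxs in combinations(range(len(tokens)), n):
            pat = []
            for a, b in zip(idxs, idxs[1:]):
                pat.append(tokens[a])
                if b - a > 1:          # non-adjacent elements: mark the gap
                    pat.append(gap)
            pat.append(tokens[idxs[-1]])
            patterns.add(tuple(pat))
    return patterns

pats = sentence_patterns("I really like it".split(), max_n=2)
print(("I", "*", "like") in pats)   # True: a gapped, non-n-gram pattern
```

The gapped pattern ("I", "*", "like") would match "I really like it", "I truly like it", and so on, which is exactly the kind of generalization contiguous n-grams miss.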
Taxonomy extraction from automotive natural language requirements using unsup... (ijnlc)
In this paper we present a novel approach to semi-automatically learn concept hierarchies from natural
language requirements of the automotive industry. The approach is based on the distributional hypothesis
and the special characteristics of domain-specific German compounds. We extract taxonomies by using
clustering techniques in combination with general thesauri. Such a taxonomy can be used to support
requirements engineering in early stages by providing a common system understanding and an agreed-upon terminology. This work is part of an ontology-driven requirements engineering process, which builds
on top of the taxonomy. Evaluation shows that this taxonomy extraction approach outperforms common
hierarchical clustering techniques.
A COMPUTATIONAL APPROACH FOR ANALYZING INTER-SENTENTIAL ANAPHORIC PRONOUNS IN... (ijnlc)
This paper presents a strategy and a computational model for resolving inter-sentential anaphoric pronouns in Vietnamese paragraphs composed of simple sentences. The strategy is proposed based on grammatical features of nouns and the focus phenomenon in the use of pronouns in Vietnamese. In this research, we consider only nouns and pronouns that refer to human objects in the paragraph, and each anaphoric pronoun appears once in a sentence and can appear in adjacent sentences. The computational model is implemented in Prolog and is based on applying and improving the models of Mark Johnson and Ewan Klein, as refined by Covington and Schmitz, with the theoretical background of Discourse Representation Theory. Analysis of the test results shows that this approach, which is based on linguistic theories, helps resolve inter-sentential anaphoric pronouns in Vietnamese paragraphs well.
A survey on phrase structure learning methods for text classification (ijnlc)
Text classification is a task of automatic classification of text into one of the predefined categories. The
problem of text classification has been widely studied in different communities like natural language
processing, data mining and information retrieval. Text classification is an important constituent in many
information management tasks like topic identification, spam filtering, email routing, language
identification, genre classification, readability assessment etc. The performance of text classification
improves notably when phrase patterns are used. The use of phrase patterns helps in capturing non-local
behaviours and thus helps in the improvement of text classification task. Phrase structure extraction is the
first step toward phrase pattern identification. In this survey, a detailed study of phrase structure learning methods has been carried out. This will enable future work on several NLP tasks that use syntactic information from phrase structure, such as grammar checking, question answering, information extraction, machine translation, and text classification. The paper also provides different levels of classification and a detailed comparison of the phrase structure learning methods.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
The International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Effect of Query Formation on Web Search Engine Results (kevig)
A query in a search engine is generally based on natural language. A query can be expressed in more than one way without changing its meaning, as it depends on the thinking of the human being at a particular moment. The aim of the searcher is to get the most relevant results, irrespective of how the query has been expressed. In the present paper, we have examined search engine results for change in coverage and similarity of the first few results when a query is entered in two semantically equivalent but different formats. Searching was done through the Google search engine, and fifteen pairs of queries were chosen for the study. The t-test was used for the purpose, and the results were checked on the basis of the total documents found and the similarity of the first five and first ten documents in the results of a query entered in two different formats. It was found that the total coverage is the same, but the first few results are significantly different.
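The comparison can be sketched with a two-sample t statistic. The counts below are hypothetical, and the paper's exact t-test variant (paired vs. unpaired) may differ; this uses Welch's form with the stdlib statistics module.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic; large |t| suggests the two samples
    (e.g. result counts for two phrasings of a query) differ significantly."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical overlap counts of the top-10 results for five query pairs.
same_format = [9, 8, 10, 9, 8]
diff_format = [3, 4, 2, 5, 3]
print(welch_t(same_format, diff_format))
```

The resulting statistic would then be compared against the t distribution's critical value at the chosen significance level to decide whether the two query formats return significantly different results.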
Investigations of the Distributions of Phonemic Durations in Hindi and Dogri (kevig)
Speech generation is one of the most important areas of research in speech signal processing and is now gaining serious attention. Speech is a natural form of communication in all living things. Computers with the ability to understand speech and speak with a human-like voice are expected to contribute to the development of a more natural man-machine interface. However, in order to give them functions that are even closer to those of human beings, we must learn more about the mechanisms by which speech is produced and perceived, and develop speech information processing technologies that can generate more natural-sounding systems. This field of study, also called speech synthesis and more prominently known as text-to-speech synthesis, originated in the mid-eighties with the emergence of DSP and the rapid advancement of VLSI techniques. To understand this field of speech, it is necessary to understand the basic theory of speech production. Every language has a different phonetic alphabet and a different set of possible phonemes and their combinations.
For the analysis of the speech signal, we recorded speakers in Dogri (3 male and 5 female) and eight speakers in Hindi (4 male and 4 female). For estimating the durational distributions, the mean of the means of ten instances of the vowels of each speaker in both languages was calculated. Investigations have shown that the two durational distributions differ significantly with respect to mean and standard deviation. The duration of a phoneme is speaker dependent. The whole investigation can be concluded with the result that almost all Dogri phonemes have shorter durations in comparison to Hindi phonemes. The duration in milliseconds of the same phonemes, when uttered in Hindi, was found to be longer than when they were spoken by a person with Dogri as their mother tongue. There are many applications which are directly or indirectly related to this research. For instance, the main application may be transforming Dogri speech into Hindi and vice versa; further, utilizing this application, we can develop a speech aid to teach Dogri to children. The results may also be useful for synthesizing Dogri phonemes using the parameters of Hindi phonemes and for building large-vocabulary speech recognition systems.
More Related Content
Similar to Parsing of Myanmar Sentences With Function Tagging
An important problem in Thai word segmentation is the sentential noun phrase. Existing
studies try to minimize the problem, but no research has solved it directly. This study
investigates an approach to resolving the problem using conditional random fields, a probabilistic
model for segmenting and labeling sequence data. The results show that noun phrases were
correctly detected in more than 78.61% of cases with our technique.
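The CRF approach above casts noun-phrase detection as sequence labeling. As an illustrative sketch (not code from the study, which works on Thai data), noun-phrase spans can be encoded as the BIO label sequence such a model would predict, and decoded back into spans:

```python
# Encode noun-phrase spans as BIO labels and decode them back.
# This shows only the sequence-labeling representation; the CRF
# model itself (training and inference) is not part of this sketch.

def spans_to_bio(n_tokens, np_spans):
    """np_spans: list of (start, end) token indices, end exclusive."""
    labels = ["O"] * n_tokens
    for start, end in np_spans:
        labels[start] = "B-NP"
        for i in range(start + 1, end):
            labels[i] = "I-NP"
    return labels

def bio_to_spans(labels):
    """Recover (start, end) spans from a BIO label sequence."""
    spans, start = [], None
    for i, lab in enumerate(labels):
        if lab == "B-NP":
            if start is not None:
                spans.append((start, i))
            start = i
        elif lab == "O":
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:
        spans.append((start, len(labels)))
    return spans
```

A trained CRF would output such a label sequence per sentence; the span decoding step is the same regardless of the model used.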
Natural Language Toolkit (NLTK) is a generic platform for processing data in various natural (human)
languages, and it also provides resources for Indian languages such as Hindi, Bangla, and Marathi.
In the proposed work, the repositories provided by NLTK are used to carry out the processing of Hindi
text and, further, the analysis of Multi-Word Expressions (MWEs). MWEs are lexical items that can be
decomposed into multiple lexemes and display lexical, syntactic, semantic, pragmatic and statistical
idiomaticity. The main focus of this paper is on the processing and analysis of MWEs for Hindi text. The
corpus used for Hindi text processing is taken from the famous Hindi novel “KaramaBhumi” by Munshi
PremChand. The result analysis is done using the Hindi corpus provided by the Resource Centre for Indian
Language Technology Solutions (CFILT). Results are analysed to justify the accuracy of the proposed
work.
A COMPARATIVE STUDY OF FEATURE SELECTION METHODSijnlc
Text analysis has been attracting increasing attention in this data era. Selecting effective features from datasets is a particularly important part of text classification studies. Feature selection excludes irrelevant features from the classification task, reduces the dimensionality of a dataset, and improves the accuracy and performance of identification. Many feature selection methods have been proposed so far; however, it remains unclear which method is the most effective in practice. This article focuses on evaluating and comparing the available feature selection methods for general versatility regarding authorship attribution problems and tries to identify which method is the most effective. The general versatility of feature selection methods and its connection to selecting the appropriate features for varying data are discussed. In addition, different languages, different types of features, different systems for calculating the accuracy of an SVM (support vector machine), and different criteria for determining the rank of feature selection methods were used to measure the general versatility of these methods together. The analysis results indicate that the best feature selection method differs for each dataset; however, some methods can always extract useful information to discriminate the classes. Chi-square proved to be the better method overall.
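Chi-square feature selection scores each term against each class from a 2×2 contingency table of term presence versus class membership. As an illustrative sketch (not the article's implementation), the statistic can be computed as:

```python
def chi_square(a, b, c, d):
    """Chi-square score for a term/class 2x2 contingency table:
    a: docs in class containing the term
    b: docs not in class containing the term
    c: docs in class without the term
    d: docs not in class without the term
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0
```

Terms are then ranked by this score per class, and the top-ranked terms are kept as features for the classifier.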
ADVANCEMENTS ON NLP APPLICATIONS FOR MANIPURI LANGUAGEijnlc
Manipuri is both a minority and a morphologically rich language with genetic features similar to Tibeto-Burman languages. It has Subject-Object-Verb (SOV) order, agglutinative verb morphology and is monosyllabic. Morphology and syntax are not clearly distinguished in this language. Natural Language
Processing (NLP) is a useful research field of computer science that deals with processing of a large amount of natural language corpus. The NLP applications encompass E-Dictionary, Morphological Analyzer, Reduplicated Multi-Word Expression (RMWE), Named Entity Recognition (NER), Part of Speech
(POS) Tagging, Machine Translation (MT), WordNet, Word Sense Disambiguation (WSD), etc. In this paper, we present a study of the advancements in NLP applications for the Manipuri language, presenting at the same time a comparison table of the approaches and techniques adopted and the results obtained for each of the applications, followed by a detailed discussion of each work.
NAMED ENTITY RECOGNITION FROM BENGALI NEWSPAPER DATAijnlc
Due to the dramatic growth of internet use, the amount of unstructured Bengali text data has increased
enormously. It is therefore essential to extract events from it intelligently. Progress in natural
language processing (NLP) technologies for information extraction makes it possible to locate and classify
content in news data according to predefined categories such as person name, place name, organization
name, date, time, etc. Named entity recognition (NER), which is a subtask of NLP, plays a vital role in
achieving human-level performance on specific documents such as newspapers by effectively identifying
entities. The purpose of this research is to introduce an NER system for Bengali news data to identify
events of specified things in running text based on regular expressions and Bengali grammar. In so doing,
I have designed and evaluated part-of-speech (POS) tags to recognize proper nouns. In this thesis, I have
explained a Hidden Markov Model (HMM) based approach for developing an NER system from Bengali news data.
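An HMM-based NER tagger selects the most likely tag sequence for a sentence, typically via Viterbi decoding. A minimal, language-agnostic sketch of that decoding step (the toy model probabilities in the usage example are invented for illustration, not taken from the thesis):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for an observation sequence,
    given HMM start, transition, and emission probabilities."""
    # V[t][s]: probability of the best path ending in state s at step t.
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-8) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, p = max(((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            V[t][s] = p * emit_p[s].get(obs[t], 1e-8)
            back[t][s] = prev
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

With a toy two-tag model ("O" for ordinary words, "PER" for person names) and suitable emission probabilities, the decoder picks the person tag for a name-like token and the ordinary tag elsewhere.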
Myanmar news summarization using different word representations IJECEIAES
There is an enormous amount of information available in different forms of sources and genres. In order to extract useful information from a massive amount of data, an automatic mechanism is required. Text summarization systems assist with content reduction by keeping the important information and filtering out the non-important parts of the text. Good document representation is very important in text summarization for getting relevant information. Bag-of-words cannot capture word similarity in syntactic and semantic relationships. Word embedding can give a good document representation that captures and encodes the semantic relations between words. Therefore, a centroid based on word embedding representation is employed in this paper. Myanmar news summarization based on different word embeddings is proposed. In this paper, Myanmar local and international news are summarized using a centroid-based word embedding summarizer, exploiting the effectiveness of the word embedding representation approach. Experiments were done on a Myanmar local and international news dataset using different word embedding models, and the results are compared with the performance of bag-of-words summarization. Centroid summarization using word embedding performs comprehensively better than centroid summarization using bag-of-words.
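The centroid method described above scores each sentence by the similarity of its embedding to the centroid of all sentence embeddings and keeps the closest ones. A minimal NumPy sketch (the sentence vectors are assumed to come from some pre-trained embedding model; this is not the paper's code):

```python
import numpy as np

def centroid_summary(sent_vecs, k=1):
    """Return the indices (in document order) of the k sentences whose
    embeddings are most cosine-similar to the centroid of all sentences."""
    X = np.asarray(sent_vecs, dtype=float)
    centroid = X.mean(axis=0)
    sims = X @ centroid / (np.linalg.norm(X, axis=1)
                           * np.linalg.norm(centroid) + 1e-12)
    # Top-k by similarity, then restore document order for readability.
    return sorted(np.argsort(-sims)[:k].tolist())
```

The bag-of-words baseline mentioned in the abstract would use sparse term-count vectors in place of embeddings; the selection step is identical.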
Language Combinatorics: A Sentence Pattern Extraction Architecture Based on C...Waqas Tariq
A "sentence pattern" in modern Natural Language Processing is often considered a subsequent string of words (an n-gram). However, in many branches of linguistics, like Pragmatics or Corpus Linguistics, it has been noticed that simple n-gram patterns are not sufficient to reveal the whole sophistication of grammar patterns. We present a language-independent architecture for extracting from sentences more sophisticated patterns than n-grams. In this architecture a "sentence pattern" is considered an n-element ordered combination of sentence elements. Experiments showed that the method extracts significantly more frequent patterns than the usual n-gram approach.
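The difference between contiguous n-grams and n-element ordered combinations can be made concrete with a short sketch (illustrative only, not the authors' architecture):

```python
from itertools import combinations

def ngrams(tokens, n):
    """Contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def combination_patterns(tokens, n):
    """All n-element ordered (but not necessarily contiguous)
    combinations of sentence elements."""
    return list(combinations(tokens, n))
```

On a 4-token sentence there are only 3 contiguous bigrams but C(4,2) = 6 ordered 2-combinations, so the combinatorial view can surface discontinuous patterns (such as a pair of words that co-occur with arbitrary material between them) that the n-gram view misses.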
Taxonomy extraction from automotive natural language requirements using unsup...ijnlc
In this paper we present a novel approach to semi-automatically learn concept hierarchies from natural
language requirements of the automotive industry. The approach is based on the distributional hypothesis
and the special characteristics of domain-specific German compounds. We extract taxonomies by using
clustering techniques in combination with general thesauri. Such a taxonomy can be used to support
requirements engineering in early stages by providing a common system understanding and an agreed-upon
terminology. This work is part of an ontology-driven requirements engineering process, which builds
on top of the taxonomy. Evaluation shows that this taxonomy extraction approach outperforms common
hierarchical clustering techniques.
A COMPUTATIONAL APPROACH FOR ANALYZING INTER-SENTENTIAL ANAPHORIC PRONOUNS IN...ijnlc
This paper presents a strategy and a computational model for resolving inter-sentential anaphoric pronouns
in Vietnamese paragraphs composed of simple sentences. The strategy is proposed based on grammatical
features of nouns and the focus phenomenon when using pronouns in Vietnamese. In this research, we
consider only nouns and pronouns that refer to human objects in the paragraph, and each anaphoric
pronoun appears once in one sentence and can appear in adjacent sentences. The computational
model is implemented in Prolog and based on applying and improving the models of Mark Johnson and
Ewan Klein, which had been improved by Covington and Schmitz, with the theoretical background of Discourse
Representation Theory. Analysis of test results shows that this approach, which is based on linguistic
theories, resolves inter-sentential anaphoric pronouns in Vietnamese paragraphs well.
A survey on phrase structure learning methods for text classificationijnlc
Text classification is a task of automatic classification of text into one of the predefined categories. The
problem of text classification has been widely studied in different communities like natural language
processing, data mining and information retrieval. Text classification is an important constituent in many
information management tasks like topic identification, spam filtering, email routing, language
identification, genre classification, readability assessment etc. The performance of text classification
improves notably when phrase patterns are used. The use of phrase patterns helps in capturing non-local
behaviours and thus helps in the improvement of text classification task. Phrase structure extraction is the
first step in phrase pattern identification. In this survey, a detailed study of phrase structure
learning methods has been carried out. This will enable future work in several NLP tasks that use
syntactic information from phrase structure like grammar checkers, question answering, information
extraction, machine translation, text classification. The paper also provides different levels of classification
and detailed comparison of the phrase structure learning methods.
International Journal of Engineering and Science Invention (IJESI) inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Effect of Query Formation on Web Search Engine Resultskevig
A query in a search engine is generally based on natural language. A query can be expressed in more than
one way without changing its meaning, as it depends on the thinking of a human being at a particular moment.
The aim of the searcher is to get the most relevant results regardless of how the query has been expressed.
In the present paper, we have examined the results of a search engine for changes in coverage and similarity
of the first few results when a query is entered in two semantically equivalent but different formats.
Searching has been done through the Google search engine. Fifteen pairs of queries have been chosen for the
study. The t-test has been used for this purpose, and the results have been checked on the basis of the total
documents found and the similarity of the first five and first ten documents returned for a query entered in
two different formats. It has been found that the total coverage is the same but the first few results are significantly different.
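One simple way to quantify how similar the first few results of two query formulations are (a sketch of the overlap measurement, not the t-test analysis the paper performs on it) is the Jaccard overlap of the top-k result lists:

```python
def topk_overlap(results_a, results_b, k=10):
    """Jaccard overlap of the first k results returned for two
    semantically equivalent query formulations."""
    a, b = set(results_a[:k]), set(results_b[:k])
    return len(a & b) / len(a | b) if a | b else 1.0
```

Computing this for k = 5 and k = 10 over the fifteen query pairs yields the per-pair similarity values that a significance test can then compare.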
Investigations of the Distributions of Phonemic Durations in Hindi and Dogrikevig
May 2024 - Top10 Cited Articles in Natural Language Computingkevig
Natural Language Processing is a programmed approach to analyzing text that is based on both a set of theories and a set of technologies. This forum aims to bring together researchers who have designed and built software that will analyze, understand, and generate the languages that humans use naturally to address computers.
Effect of Singular Value Decomposition Based Processing on Speech Perceptionkevig
Speech is an important biological signal and the primary mode of communication among human beings, and also the most natural and efficient form of exchanging information. Speech processing is an important aspect of signal processing. In this paper the linear algebra technique called singular value decomposition (SVD) is applied to the speech signal. SVD is a technique for deriving the important parameters of a signal. The parameters derived using SVD may further be reduced by perceptual evaluation of the synthesized speech using only perceptually important parameters, so that the speech signal can be compressed and the information transformed into compressed form without losing its quality. This technique finds wide application in speech compression, speech recognition, and speech synthesis. The objective of this paper is to investigate the effect of SVD based feature selection of the input speech on the perception of the processed speech signal. Speech signals in the form of the vowels /a/, /e/, /u/ were recorded from each of six speakers (3 males and 3 females). The vowels for the six speakers were analyzed using SVD based processing, and the effect of the reduction in singular values on the perception of the vowels resynthesized from the reduced singular values was investigated. Investigations have shown that the number of singular values can be drastically reduced without significantly affecting the perception of the vowels.
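The core operation, keeping only the largest singular values of a signal matrix and resynthesizing from them, can be sketched with NumPy (illustrative of the SVD truncation step only, not the paper's full processing and perceptual evaluation pipeline):

```python
import numpy as np

def svd_truncate(X, k):
    """Reconstruct X keeping only the k largest singular values.
    Singular values returned by np.linalg.svd are sorted descending,
    so slicing the first k keeps the most significant components."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

By the Eckart-Young theorem, the reconstruction error is non-increasing as k grows, and keeping all singular values recovers the original matrix; the perceptual question the paper studies is how small k can be before listeners notice degradation.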
Identifying Key Terms in Prompts for Relevance Evaluation with GPT Modelskevig
Relevance evaluation of a query and a passage is essential in Information Retrieval (IR). Recently, numerous studies have been conducted on tasks related to relevance judgment using Large Language Models (LLMs) such as GPT-4,
demonstrating significant improvements. However, the efficacy of LLMs is considerably influenced by the design of the prompt. The purpose of this paper is to
identify which specific terms in prompts positively or negatively impact relevance
evaluation with LLMs. We employed two types of prompts: those used in previous
research and generated automatically by LLMs. By comparing the performance of
these prompts in both few-shot and zero-shot settings, we analyze the influence of
specific terms in the prompts. We have observed two main findings from our study.
First, we discovered that prompts using the term ‘answer’ lead to more effective
relevance evaluations than those using ‘relevant.’ This indicates that a more direct
approach, focusing on answering the query, tends to enhance performance. Second,
we noted the importance of appropriately balancing the scope of ‘relevance.’ While
the term ‘relevant’ can extend the scope too broadly, resulting in less precise evaluations, an optimal balance in defining relevance is crucial for accurate assessments.
The inclusion of few-shot examples helps in more precisely defining this balance.
By providing clearer contexts for the term ‘relevance,’ few-shot examples contribute
to refine relevance criteria. In conclusion, our study highlights the significance of
carefully selecting terms in prompts for relevance evaluation with LLMs.
In recent years, great advances have been made in the speed, accuracy, and coverage of automatic word
sense disambiguation systems that, given a word appearing in a certain context, can identify the sense of
that word. In this paper we consider the problem of deciding whether the same words contained in different
documents are related to the same meaning or are homonyms. Our goal is to improve the estimate of the
similarity of documents in which some words may be used with different meanings. We present three new
strategies for solving this problem, which are used to filter out homonyms from the similarity computation.
Two of them are intrinsically non-semantic, whereas the other one has a semantic flavor and can also be
applied to word sense disambiguation. The three strategies have been embedded in an article document
recommendation system that one of the most important Italian ad-serving companies offers to its customers.
Genetic Approach For Arabic Part Of Speech Taggingkevig
With the growing number of textual resources available, the ability to understand them becomes critical.
An essential first step in understanding these sources is the ability to identify the parts-of-speech in each
sentence. Arabic is a morphologically rich language, which presents a challenge for part of speech
tagging. In this paper, our goal is to propose, improve, and implement a part-of-speech tagger based on a
genetic algorithm. The accuracy obtained with this method is comparable to that of other probabilistic
approaches.
Rule Based Transliteration Scheme for English to Punjabikevig
Machine transliteration has emerged as an important research area in the field of
machine translation. Transliteration basically aims to preserve the phonological structure of words. Proper
transliteration of named entities plays a very significant role in improving the quality of machine translation.
In this paper we perform machine transliteration for the English-Punjabi language pair using a rule based
approach. We have constructed rules for syllabification, the process of extracting or
separating the syllables of a word. In this approach we calculate probabilities for named entities (proper
names and locations). For words which do not fall under the category of named entities, separate
probabilities are calculated using relative frequency through a statistical machine translation
toolkit known as MOSES. Using these probabilities we transliterate the input text from English to
Punjabi.
Improving Dialogue Management Through Data Optimizationkevig
In task-oriented dialogue systems, the ability for users to effortlessly communicate with machines and computers through natural language stands as a critical advancement. Central to these systems is the dialogue manager, a pivotal component tasked with navigating the conversation to effectively meet user goals by selecting the most appropriate response. Traditionally, the development of sophisticated dialogue management has embraced a variety of methodologies, including rule-based systems, reinforcement learning, and supervised learning, all aimed at optimizing response selection in light of user inputs. This research casts a spotlight on the pivotal role of data quality in enhancing the performance of dialogue managers. Through a detailed examination of prevalent errors within acclaimed datasets, such as Multiwoz 2.1 and SGD, we introduce an innovative synthetic dialogue generator designed to control the introduction of errors precisely. Our comprehensive analysis underscores the critical impact of dataset imperfections, especially mislabeling, on the challenges inherent in refining dialogue management processes.
Document Author Classification using Parsed Language Structurekevig
Over the years there has been ongoing interest in detecting the authorship of a text based on statistical properties of the text, such as the occurrence rates of noncontextual words. In previous work, these techniques have been used, for example, to determine the authorship of all of The Federalist Papers. Such methods may be useful in more modern times to detect fake or AI authorship. Progress in statistical natural language parsers introduces the possibility of using grammatical structure to detect authorship. In this paper we explore a new possibility for detecting authorship using grammatical structural information extracted using a statistical natural language parser. This paper provides a proof of concept, testing author classification based on grammatical structure on a set of “proof texts,” The Federalist Papers and Sanditon, which have been used as test cases in previous authorship detection studies. Several features extracted from the statistical natural language parser were explored: all subtrees of some depth from any level; rooted subtrees of some depth; part of speech; and part of speech by level in the parse tree. It was found to be helpful to project the features into a lower dimensional space. Statistical experiments on these documents demonstrate that information from a statistical parser can, in fact, assist in distinguishing authors.
Rag-Fusion: A New Take on Retrieval Augmented Generationkevig
Infineon has identified a need for engineers, account managers, and customers to rapidly obtain product information. This problem is traditionally addressed with retrieval-augmented generation (RAG) chatbots, but in this study, I evaluated the use of the newly popularized RAG-Fusion method. RAG-Fusion combines RAG and reciprocal rank fusion (RRF) by generating multiple queries, reranking them with reciprocal scores and fusing the documents and scores. Through manually evaluating answers on accuracy, relevance, and comprehensiveness, I found that RAG-Fusion was able to provide accurate and comprehensive answers due to the generated queries contextualizing the original query from various perspectives. However, some answers strayed off topic when the generated queries' relevance to the original query is insufficient. This research marks significant progress in artificial intelligence (AI) and natural language processing (NLP) applications and demonstrates transformations in a global and multi-industry context.
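Reciprocal rank fusion, the reranking step RAG-Fusion builds on, assigns each document a score of the sum of 1/(k + rank) over every ranking it appears in. A minimal sketch (k = 60 is the commonly used constant from the RRF literature, not necessarily this study's setting):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: each document scores sum(1 / (k + rank))
    across all rankings it appears in (ranks start at 1). Returns
    documents sorted by fused score, best first."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

In the RAG-Fusion setting, each generated query produces its own retrieval ranking; fusing them rewards documents that rank well across several reformulations of the original query.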
Performance, Energy Consumption and Costs: A Comparative Analysis of Automati...kevig
The common practice in Machine Learning research is to evaluate the top-performing models based on their performance. However, this often leads to overlooking other crucial aspects that should be given careful consideration. In some cases, the performance differences between various approaches may be insignificant, whereas factors like production costs, energy consumption, and carbon footprint should be taken into account. Large Language Models (LLMs) are widely used in academia and industry to address NLP problems. In this study, we present a comprehensive quantitative comparison between traditional approaches (SVM-based) and more recent approaches such as LLM (BERT family models) and generative models (GPT2 and LLAMA2), using the LexGLUE benchmark. Our evaluation takes into account not only performance parameters (standard indices), but also alternative measures such as timing, energy consumption and costs, which collectively contribute to the carbon footprint. To ensure a complete analysis, we separately considered the prototyping phase (which involves model selection through training-validation-test iterations) and the in-production phases. These phases follow distinct implementation procedures and require different resources. The results indicate that simpler algorithms often achieve performance levels similar to those of complex models (LLM and generative models), consuming much less energy and requiring fewer resources. These findings suggest that companies should consider additional considerations when choosing machine learning (ML) solutions. The analysis also demonstrates that it is increasingly necessary for the scientific world to also begin to consider aspects of energy consumption in model evaluations, in order to be able to give real meaning to the results obtained using standard metrics (Precision, Recall, F1 and so on).
Evaluation of Medium-Sized Language Models in German and English Languagekevig
Large language models (LLMs) have garnered significant attention, but the definition of “large” lacks clarity. This paper focuses on medium-sized language models (MLMs), defined as having at least six billion parameters but less than 100 billion. The study evaluates MLMs regarding zero-shot generative question answering, which requires models to provide elaborate answers without external document retrieval. The paper introduces an own test dataset and presents results from human evaluation. Results show that combining the best answers from different MLMs yielded an overall correct answer rate of 82.7% which is better than the 60.9% of ChatGPT. The best MLM achieved 71.8% and has 33B parameters, which highlights the importance of using appropriate training data for fine-tuning rather than solely relying on the number of parameters. More fine-grained feedback should be used to further improve the quality of answers. The open source community is quickly closing the gap to the best commercial models.
IMPROVING DIALOGUE MANAGEMENT THROUGH DATA OPTIMIZATIONkevig
In task-oriented dialogue systems, the ability for users to effortlessly communicate with machines and
computers through natural language stands as a critical advancement. Central to these systems is the
dialogue manager, a pivotal component tasked with navigating the conversation to effectively meet user
goals by selecting the most appropriate response. Traditionally, the development of sophisticated dialogue
management has embraced a variety of methodologies, including rule-based systems, reinforcement
learning, and supervised learning, all aimed at optimizing response selection in light of user inputs. This
research casts a spotlight on the pivotal role of data quality in enhancing the performance of dialogue
managers. Through a detailed examination of prevalent errors within acclaimed datasets, such as
Multiwoz 2.1 and SGD, we introduce an innovative synthetic dialogue generator designed to control the
introduction of errors precisely. Our comprehensive analysis underscores the critical impact of dataset
imperfections, especially mislabeling, on the challenges inherent in refining dialogue management
processes.
Document Author Classification Using Parsed Language Structurekevig
Over the years there has been ongoing interest in detecting authorship of a text based on statistical properties of the
text, such as by using occurrence rates of noncontextual words. In previous work, these techniques have been used,
for example, to determine authorship of all of The Federalist Papers. Such methods may be useful in more modern
times to detect fake or AI authorship. Progress in statistical natural language parsers introduces the possibility of
using grammatical structure to detect authorship. In this paper we explore a new possibility for detecting authorship
using grammatical structural information extracted using a statistical natural language parser. This paper provides a
proof of concept, testing author classification based on grammatical structure on a set of “proof texts,” The Federalist
Papers and Sanditon which have been as test cases in previous authorship detection studies. Several features extracted
of some depth, part of speech, and part of speech by level in the parse tree. It was found to be helpful to project the
features into a lower dimensional space. Statistical experiments on these documents demonstrate that information
from a statistical parser can, in fact, assist in distinguishing authors.
RAG-FUSION: A NEW TAKE ON RETRIEVALAUGMENTED GENERATIONkevig
Infineon has identified a need for engineers, account managers, and customers to rapidly obtain product
information. This problem is traditionally addressed with retrieval-augmented generation (RAG) chatbots,
but in this study, I evaluated the use of the newly popularized RAG-Fusion method. RAG-Fusion combines
RAG and reciprocal rank fusion (RRF) by generating multiple queries, reranking them with reciprocal
scores and fusing the documents and scores. Through manually evaluating answers on accuracy,
relevance, and comprehensiveness, I found that RAG-Fusion was able to provide accurate and
comprehensive answers due to the generated queries contextualizing the original query from various
perspectives. However, some answers strayed off topic when the generated queries' relevance to the
original query is insufficient. This research marks significant progress in artificial intelligence (AI) and
natural language processing (NLP) applications and demonstrates transformations in a global and multiindustry context
Performance, energy consumption and costs: a comparative analysis of automati...kevig
The common practice in Machine Learning research is to evaluate the top-performing models based on their
performance. However, this often leads to overlooking other crucial aspects that should be given careful
consideration. In some cases, the performance differences between various approaches may be insignificant, whereas factors like production costs, energy consumption, and carbon footprint should be taken into
account. Large Language Models (LLMs) are widely used in academia and industry to address NLP problems. In this study, we present a comprehensive quantitative comparison between traditional approaches
(SVM-based) and more recent approaches such as LLM (BERT family models) and generative models (GPT2 and LLAMA2), using the LexGLUE benchmark. Our evaluation takes into account not only performance
parameters (standard indices), but also alternative measures such as timing, energy consumption and costs,
which collectively contribute to the carbon footprint. To ensure a complete analysis, we separately considered the prototyping phase (which involves model selection through training-validation-test iterations) and
the in-production phases. These phases follow distinct implementation procedures and require different resources. The results indicate that simpler algorithms often achieve performance levels similar to those of
complex models (LLM and generative models), consuming much less energy and requiring fewer resources.
These findings suggest that companies should consider additional considerations when choosing machine
learning (ML) solutions. The analysis also demonstrates that it is increasingly necessary for the scientific
world to also begin to consider aspects of energy consumption in model evaluations, in order to be able to
give real meaning to the results obtained using standard metrics (Precision, Recall, F1 and so on).
EVALUATION OF MEDIUM-SIZED LANGUAGE MODELS IN GERMAN AND ENGLISH LANGUAGEkevig
Large language models (LLMs) have garnered significant attention, but the definition of “large” lacks
clarity. This paper focuses on medium-sized language models (MLMs), defined as having at least six
billion parameters but less than 100 billion. The study evaluates MLMs regarding zero-shot generative
question answering, which requires models to provide elaborate answers without external document
retrieval. The paper introduces an own test dataset and presents results from human evaluation. Results
show that combining the best answers from different MLMs yielded an overall correct answer rate of
82.7% which is better than the 60.9% of ChatGPT. The best MLM achieved 71.8% and has 33B
parameters, which highlights the importance of using appropriate training data for fine-tuning rather than
solely relying on the number of parameters. More fine-grained feedback should be used to further improve
the quality of answers. The open source community is quickly closing the gap to the best commercial
models.
Natural Language Processing is a programmed approach to analyze text that is based on both a set of theories and a set of technologies. This forum aims to bring together researchers who have designed and build software that will analyze, understand, and generate languages that humans use naturally to address computers.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
International Journal on Natural Language Computing (IJNLC) Vol. 1, No.1, April 2012
PARSING OF MYANMAR SENTENCES WITH FUNCTION TAGGING

Win Win Thant (1), Tin Myat Htwe (2) and Ni Lar Thein (3)

(1), (3) University of Computer Studies, Yangon, Myanmar
winwinthant@gmail.com, nilarthein@gmail.com

(2) Natural Language Processing Laboratory, University of Computer Studies, Yangon, Myanmar
tinmyathtwe@gmail.com
ABSTRACT
This paper describes the use of Naive Bayes to address the task of assigning function tags and context free
grammar (CFG) to parse Myanmar sentences. Part of the challenge of statistical function tagging for
Myanmar sentences comes from the fact that Myanmar has free-phrase-order and a complex
morphological system. Function tagging is a pre-processing step for parsing. In the task of function
tagging, we use the functional annotated corpus and tag Myanmar sentences with correct segmentation,
POS (part-of-speech) tagging and chunking information. We propose Myanmar grammar rules and apply
context free grammar (CFG) to find out the parse tree of function tagged Myanmar sentences. Experiments
show that our analysis achieves a good result with parsing of simple sentences and three types of complex
sentences.
KEYWORDS
Function tagging, Parsing, Naive Bayes theory, Context free grammar, Myanmar sentences
1. INTRODUCTION
The natural language processing community is in the strong position of having many available
approaches to solve some of its most fundamental problems [1]. We have taken Myanmar
language for information processing. Myanmar is an agglutinative language with a very
productive inflectional system. This means that for any NLP application on Myanmar to be
successful, some amount of functional analysis is necessary. Without it, the development of
grammatical relations would not be feasible due to the sparse data problem bound to exist in the
training data. Our approach is a part of the Myanmar to English machine translation project. If
high quality translation is to be achieved, language understanding is a necessity. One problem in
Myanmar language processing is the lack of grammatical regularity in the language. This leads to
a very complex Myanmar grammar if satisfactory results are to be obtained, which in turn increases the complexity of the parsing process; it is therefore desirable to use as simple a grammar as possible.
Our proposed method makes use of two components. They are function tagging and parsing.
Function tags are useful for any application trying to follow the thread of the text – they find the
‘who does what’ of each clause, which can be useful to gain information about the situation or to
learn more about the behaviour of words in the sentence [2]. The goal of function tagging is to
assign syntactic categories like subject, object, time and location to each word in the text
document. In case of function tagging, we use Naive Bayes theory and the functional annotated
tagged corpus. Parsing is the process of analyzing a text or sentence that is made up of a sequence
of words called tokens, in order to determine its grammatical structure with respect to a given set of grammatical rules. The goal of the second component is to produce the parse tree of the sentences in
Myanmar text.
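The Naive Bayes disambiguation step described above can be sketched as follows. This is a minimal illustration only: the feature pairs (chunk type, POS category) and the toy training counts are hypothetical, not the paper's actual feature set.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTagger:
    """Toy Naive Bayes function tagger: chooses the function tag t that
    maximizes P(t) * prod_i P(f_i | t), with add-one smoothing."""

    def __init__(self):
        self.tag_counts = Counter()
        self.feat_counts = defaultdict(Counter)  # tag -> feature counts
        self.vocab = set()

    def train(self, examples):
        # examples: iterable of (features, tag) pairs
        for feats, tag in examples:
            self.tag_counts[tag] += 1
            for f in feats:
                self.feat_counts[tag][f] += 1
                self.vocab.add(f)

    def tag(self, feats):
        total = sum(self.tag_counts.values())
        best_tag, best_score = None, float("-inf")
        for t, n in self.tag_counts.items():
            score = math.log(n / total)  # log prior P(t)
            denom = sum(self.feat_counts[t].values()) + len(self.vocab)
            for f in feats:
                # add-one smoothed log likelihood P(f | t)
                score += math.log((self.feat_counts[t][f] + 1) / denom)
            if score > best_score:
                best_tag, best_score = t, score
        return best_tag

# Hypothetical training pairs: (chunk type, POS category) -> function tag
train_data = [
    (("NC", "n.person"), "Subj"),
    (("NC", "n.person"), "Subj"),
    (("NC", "n.objects"), "Obj"),
    (("NC", "n.location"), "PPla"),
    (("VC", "verb.common"), "Active"),
]
tagger = NaiveBayesTagger()
tagger.train(train_data)
print(tagger.tag(("NC", "n.objects")))  # → Obj
```

The smoothing matters here: with a small annotated corpus, many (feature, tag) pairs are unseen, and unsmoothed probabilities would zero out otherwise plausible tags.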
In our approach, we take the chunk level phrase with the combination of POS tag and its category
which is the output of a fully described morphological analyzer [3][4], which is very important
for agglutinative languages like Myanmar. A small corpus annotated manually serves as training
data because the large scale Myanmar Corpus is unavailable at present. Since the large-scale
annotated corpora, such as Penn Treebank, have been built in English, statistical knowledge
extracted from them has been shown to be more and more crucial for natural language
disambiguation [5]. As a distinctive language, Myanmar has many characteristics different from
English. The efficient use of statistical information in the Myanmar language is still largely unexplored.
The rest of the paper is organized as follows. In the Related Work section, we analyze previous efforts related to the tasks of function tagging and parsing. Section 3 explains the Myanmar language. Section 4 describes corpus statistics. Section 5 explains the procedure of the proposed system. Section 6 presents the function tag sets. Section 7 describes the proposed grammar for the Myanmar language. The function tagging model is presented in Section 8. Section 9 describes the parsing of Myanmar simple and complex sentences. Section 10 reports experimental results. Finally, the conclusion of the paper is presented.
2. RELATED WORK
Blaheta and Johnson [6] addressed the task of function tags assignment. They used a statistical
algorithm based on a set of features grouped in trees, rather than chains. The advantage was that
features can better contribute to overall performance for cases when several features are sparse.
When such features are conditioned in a chain model, the sparseness of one feature can dilute the effect of a subsequent (conditioned) one.
Mihai Lintean and Vasile Rus[7] described the use of two machine learning techniques, naive
Bayes and decision trees, to address the task of assigning function tags to nodes in a syntactic
parse tree. They used a set of features inspired from Blaheta and Johnson [6]. The set of classes
they used in their model corresponds to the set of functional tags in Penn Treebank. To generate
the training data, they have considered only nodes with functional tags, ignoring nodes unlabeled
with such tags. They trained the classifiers on sections 1-21 from Wall Street Journal (WSJ) part
of Penn Treebank and used section 23 to evaluate the generated classifiers.
Yong-uk Park and Hyuk-chul Kwon [8] tried to resolve ambiguity in a syntactic analysis system using many dependency rules and segmentation. Segmentation is performed during parsing: if two adjacent morphemes have no syntactic relation, their syntactic analyzer creates a new segment between these two morphemes, finds all possible partial parse trees of that segmentation and combines them into complete parse trees. They also used an adjacency rule and adverb subcategorization to disambiguate the syntactic analysis. Their syntactic analyzer used morphemes as the basic unit of parsing. They built all possible partial parse trees in each segmentation step and attempted to combine them into complete parse trees.
Mark-Jan Nederhof and Giorgio Satta[9] considered the problem of parsing non-recursive
context-free grammars, i.e., context-free grammars that generate finite languages and presented
two tabular algorithms for these grammars. They presented their parsing algorithm, based on the
CYK (Cocke–Younger–Kasami) algorithm and Earley's algorithm. For parsing CFG (context-free grammar), they took a small hand-written grammar of about 100 rules. They ordered the input grammars by size, according to the number of nonterminals (or the number of nodes in the forest, following the terminology of Langkilde (2000)).
Kyongho Min and William H. Wilson [10] discussed the robustness of four efficient syntactic
error-correcting parsing algorithms that are based on chart parsing with a context-free grammar.
They implemented four versions of a bottom-up error-correcting chart parser: a basic bottom-up
chart parser, and chart parsers employing selectivity, top-down filtering, and a combination of selectivity and top-down filtering. They detected and corrected syntactic errors using a system
component called IFSCP (Ill-Formed Sentence Chart Parser) described by Min & Wilson (1994),
together with a spelling correction module. They tested 4 different lengths of sentences (3, 5, 7,
and 11) and 5 different error types, with a grammar of 210 context-free rules designed to parse a
simple declarative sentence with no conjunctions, passivisation, or relative clauses.
3. MYANMAR LANGUAGE
Myanmar (formerly known as Burma) is one of the South-East Asian countries. There are 135
ethnic groups living in Myanmar. These ethnic groups speak more than one language and use
different scripts to present their respective languages. There are a total of 109 languages spoken
by the people living in Myanmar [11]. The Myanmar language is the official language and is
more than one thousand years old.
3.1. Features of Myanmar Language
Generally a Myanmar sentence follows the subject, object, verb pattern; however, the interchange of subject and object is acceptable. Unlike English, Myanmar is a relatively free-phrase-order language: Myanmar phrases can be written in any order as long as the verb phrase is at the end of the sentence. This can be easily illustrated with the example “သူသည္ စာအုပ္ကို စားပြဲေပၚတြင္ ထားသည္။” (He places the book on the table) as shown in Table 1. All are valid sentences [12].
Table 1. Word order in Myanmar language
Case Myanmar Sentences Word order
Case 1 သူ စာအုပ္ကို စားပြဲေပၚတြင္ ထားသည္။ (Subj-Obj-Pla-Verb)
Case 2 သူ စားပြဲေပၚတြင္ စာအုပ္ကို ထားသည္။ (Subj-Pla-Obj-Verb)
Case 3 စာအုပ္ကို စားပြဲေပၚတြင္ သူ ထားသည္။ (Obj-Pla-Subj-Verb)
Case 4 စာအုပ္ကို သူ စားပြဲေပၚတြင္ ထားသည္။ (Obj-Subj-Pla-Verb)
Case 5 စားပြဲေပၚတြင္ သူ စာအုပ္ကို ထားသည္။ (Pla-Subj-Obj-Verb)
Case 6 စားပြဲေပၚတြင္ စာအုပ္ကို သူ ထားသည္။ (Pla-Obj-Subj-Verb)
In all the cases, subject is သူ (He), object is စာအုပ္ကို (the book), place is စားပြဲေပၚတြင္ (on the table)
and verb is ထားသည္ (places). From the above example, it is clear that phrase order does not
determine the functional structure in the Myanmar language, which permits scrambling. Myanmar follows Subject-Object-Verb order, in contrast with English.
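The verb-final, free-phrase-order property illustrated in Table 1 can be sketched programmatically. The phrase labels below are illustrative stand-ins for the subject, object and place phrases of the example sentence.

```python
from itertools import permutations

# Subject, object and place phrases scramble freely,
# while the verb phrase stays sentence-final.
phrases = ["Subj", "Obj", "Pla"]

orders = [list(p) + ["Verb"] for p in permutations(phrases)]
for order in orders:
    print("-".join(order))  # six valid orders, matching Cases 1-6 of Table 1
```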
3.2. Issues of Myanmar Language
In a highly agglutinative language like Myanmar, nouns and verbs get inflected. We often need to depend on syntactic function or context to decide whether a particular word is a noun, adjective, adverb or postposition [12]. This leads to complexity in Myanmar
grammatical relations. A noun may be categorized as common, proper or compound. Similarly,
verb may be finite, infinite, gerund or contingent.
A number of issues are affecting the function tagging for Myanmar language.
• The subject or object of the sentence can be skipped, and still be a valid sentence.
For example:
ရန္ကုန္ - ႔ - သြားသည္။
Yangon - to - go
(Go to Yangon)
• Myanmar language makes prominent usage of particles, which are untranslatable words
that are suffixed or prefixed to words to indicate level of respect, grammatical tense, or
mood.
For example:
ေမာင္ေမာင္ - မ်ား - ပထမ - ဆု - ရ - လွ်င္ - သူ႔မိဘမ်ား - က - အ့ံၾသ - လိမ့္မည္။
Mg Mg - particle - first - prize - wins - if - his parents - PPM - surprise - will
(If Mg Mg wins the first prize, his parents will surprise.)
• In Myanmar language, an adjective can appear before or after the noun it modifies, unlike in many other languages.
For example:
သူသည္ - ခ်မ္းသာေသာ - လူ -တစ္ေယာက္ -ျဖစ္သည္။
He - rich - man - a - is
(or)
သူသည္ - လူ - ခ်မ္းသာ - တစ္ေယာက္ -ျဖစ္သည္။
He - man - rich - a - is
(He is a rich man.)
• The subject/object can be another sentence, which does not contain a subject or object.
For example:
ကေလးမ်ားသစ္ပင္ေအာက္တြင္ကစားေနသည္ ကို ကၽြန္ေတာ္ျမင္သည္။
(I see the children playing under the tree.)
• The postpositions of subject phrases or object phrases can be hidden.
For example:
သူသည္- ဆရာ၀န္ -တစ္ေယာက္ - ျဖစ္သည္။
He - doctor - a - is
(or)
သူ - ဆရာ၀န္ - တစ္ေယာက္ - ျဖစ္သည္။
He - doctor - a - is
(He is a doctor.)
• The postpositions of time phrases or place phrases can be omitted.
For example:
သူမ - ေက်ာင္း - သို႔ - သြားသည္။
She - school - to - goes
(or)
သူမ - ေက်ာင္း - သြားသည္။
She - school - goes
(She goes to school.)
These issues cause many problems during function tagging and result in many possible tags.
3.3. Grammar of Myanmar Language
Grammar studies the rules behind languages. The aspect of grammar that does not concern
meaning directly is called syntax. Myanmar (syntax: SOV), because of its use of postposition
(wi.Bat), would probably be defined as a “postpositional language”, whereas English (syntax:
SVO) because of its use of preposition would probably be defined as a “prepositional language”.
There are really only two parts of speech in Myanmar, the noun and the verb, instead of the
usually accepted eight parts (Pe Maung Tin 1956:195). Most Myanmar linguists [13], however, accept that there are eight parts of speech in Myanmar. Myanmar nouns and verbs need the help of suffixes
or particles to show grammatical relations.
For example:
ေက်ာင္းသူမ်ားသာ ဂုဏ္ထူးရသည္။
သူတို႔သည္ အတန္းထဲမွာ ႐ွိၾက၏။
Myanmar is a highly verb-prominent language, and the suppression of the subject and omission of personal pronouns in connected text result in a reduced role of nominals. This observation misses the critical role of postpositional particles marking sentential arguments, and also of the verb itself being so marked. The key to the view of Myanmar as being structured by nominals is found in the role of the particles. Some particles modify the word's part of speech. Among the most prominent of these is the particle အ, which is prefixed to verbs and adjectives to form nouns or adverbs. There is a wide variety of particles in Myanmar [14].
For example:
သူတို႔သည္ မႏ ၱေလးတြင္ ၈ ရက္ တိတိ လည္ခဲ့သည္။
Stewart remarked that "The Grammar of Burmese is almost entirely a matter of the correct use of
particles"(Stewart 1956: xi). How one understands the role of the particles is probably a matter of
one's purpose.
3.4. Syntactic Structure of Myanmar Language
It is known that many postpositions can be used in a Myanmar sentence. If words are misplaced in a sentence, the sentence can become ill-formed. There are two kinds of sentence construction: the simple sentence (SS) and the complex sentence (CS). In a simple sentence, other phrases such as object, time and place can be added between subject and verb. There are two kinds of clause in a complex sentence, called the independent clause (IC) and the dependent clause (DC). There must be at least one independent clause in a sentence, but there can be more than one dependent clause in it. The IC contains the sentence's final particle (sfp) at the end of the sentence [15].
SS=IC+sfp
CS=DC...+IC+sfp
IC may be noun phrase or verb or combination of both.
IC=N... (မ်က္မွန္ႏွင့္ေက်ာင္းသား)
IC=V (စား)
IC=N...+V (ဘုရားမွာပန္းနဲ႔ဆီမီးလွဴ)
DC is the same as IC but it must contain a clause marker (cm) at the end.
DC=N...+cm (ေက်ာင္းကဆရာ+ပဲ)
DC=V+cm (ေရာက္+ရင္)
DC=N...+V+cm (စိတ္ထား+ျဖဴ+မွ)
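The sentence patterns above (SS = IC + sfp, CS = DC... + IC + sfp) can be sketched as a small recognizer over clause symbols. The symbol names (IC, DC, sfp) follow the text; the function itself is an illustrative assumption, not the paper's parser.

```python
def is_valid_sentence(clauses):
    """Check a sequence of clause symbols against the patterns
    SS = IC + sfp and CS = DC... + IC + sfp: zero or more dependent
    clauses, then exactly one independent clause, then the
    sentence-final particle."""
    if len(clauses) < 2 or clauses[-1] != "sfp" or clauses[-2] != "IC":
        return False
    return all(c == "DC" for c in clauses[:-2])

print(is_valid_sentence(["IC", "sfp"]))              # simple sentence → True
print(is_valid_sentence(["DC", "DC", "IC", "sfp"]))  # complex sentence → True
print(is_valid_sentence(["DC", "sfp"]))              # no independent clause → False
```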
4. CORPUS STATISTICS
A corpus is a large and structured set of texts. It is used for statistical analysis, checking occurrences or validating linguistic rules in a specific universe. Moreover, it is a fundamental basis of much research in Natural Language Processing (NLP). Building the corpus is helpful for developing NLP tools (such as grammar rules, spelling checking, etc.). However, very few corpora have been created and studied for Myanmar, compared to other languages such as English.
We collected several types of Myanmar texts to construct a corpus. Our corpus is built manually. We extended the POS tagged corpus proposed in [3]: the chunk and function tags are manually added to the POS tagged corpus. The corpus contains about 3900 sentences with an average sentence length of 15 words; it is not a balanced corpus, being somewhat biased towards Myanmar middle-school textbooks. The corpus keeps growing because tested sentences are automatically added to it. In Table 2, the Myanmar grammar books and websites are text collections. An example corpus sentence is shown in Figure 1.
Table 2. Corpus statistics
Text types # of sentences
Myanmar textbooks of middle school 1250
Myanmar Grammar books 628
Myanmar Newspapers 730
Myanmar websites 970
Others 325
Total 3903
VC@Active[မိုး႐ြာ/verb.common]#CC@CCS[လွ်င္/cc.sent]#NC@Subj[ကေလး/n.person,မ်ား/part.number]#NC@PPla[လမ္း/n.location]#PPC@PlaP[ေပၚတြင္/ppm.place]#NC@Obj[ေဘာလံုး/n.objects]#VC@Active[ကန္ၾက/verb.common]#SFC@Null[သည္/sf]။

Figure 1. A sentence in the corpus
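The annotation format in Figure 1 (fields of the form CHUNK@FUNCTION[word/pos,…] joined by '#') can be read with a small parser. The field layout below is inferred from the single example above, so treat it as an assumption rather than the corpus specification.

```python
import re

def parse_corpus_line(line):
    """Split an annotated sentence of the form
    CHUNK@FUNC[word/pos,word/pos]#CHUNK@FUNC[...] into
    (chunk, function_tag, [(word, pos), ...]) triples."""
    triples = []
    for field in line.rstrip("။").split("#"):
        m = re.match(r"(\w+)@(\w+)\[(.*)\]", field)
        if not m:
            continue  # skip fields that do not match the assumed layout
        chunk, func, body = m.groups()
        words = [tuple(item.split("/", 1)) for item in body.split(",")]
        triples.append((chunk, func, words))
    return triples

line = "NC@Subj[ကေလး/n.person,မ်ား/part.number]#VC@Active[ကစား/verb.common]#SFC@Null[သည္/sf]။"
for chunk, func, words in parse_corpus_line(line):
    print(chunk, func, words)
```

Note that Python's `\w` is Unicode-aware, so Myanmar script inside the brackets is handled without special casing.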
5. PROPOSED SYSTEM
The procedure of the proposed approach is shown in the following figure.

• Accept an input Myanmar sentence with segmentation, POS tagging and chunking
• Extract one POS tag and its category from each chunk
• Choose the possible function tags for each POS tag by using Naive Bayes theory
• Display the sentence with function tags
• Parse the function tags by using CFG rules with the proposed grammar
• Display the parse tree as an output

Figure 2. Proposed system
6. FUNCTION TAGSET
Function tagging is the process of assigning syntactic categories such as subject, object, time and location to each word in a text. These tags are conceptually appealing because they encode an event in the format of "who did what to whom, where, when", which provides useful semantic information about the sentence. We use the function tagset proposed in [16] because it is easy to maintain and can accommodate new language features. The function tagset is shown in Table 3.
Table 3. Function tagset
Tag | Description | Example
Active | Verb | စားသည္
Subj | Subject | သူ
PSubj | Subject | သူ
SubjP | Postposition of Subject | သည္
Obj | Object | ေကာ္ဖီ
PObj | Object | ေကာ္ဖီ
ObjP | Postposition of Object | ကို
PIobj | Indirect Object | မလွ
IobjP | Postposition of Indirect Object | အား
Pla | Place | ရန္ကုန္
PPla | Place | ရန္ကုန္
PlaP | Postposition of Place | သို႔
Tim | Time | မနက္
PTim | Time | မနက္
TimP | Postposition of Time | တြင္
PExt | Extract | ေက်ာင္းသားမ်ား
ExtP | Postposition of Extract | အနက္
PSim | Simile | မင္းသမီး
SimP | Postposition of Simile | ကဲ့သို႔
PCom | Compare | သူ႔ဦးေလး
ComP | Postposition of Compare | ႏွင့္အတူ
POwn | Own | သူ
OwnP | Postposition of Own | ၏
Ada | Adjective | လွ
PcomplS | Subject Complement | သူသည္ဆရာျဖစ္သည္
PcomplP | Object Complement | ေ႐ႊကိုလက္စြပ္လုပ္သည္
PPcomplO | Object Complement | ထြန္းထြန္း
PcomplOP | Postposition of Object Complement | ဟု
PUse | Use | တုတ္
UseP | Postposition of Use | ျဖင့္
PCau | Cause | မိုး
CauP | Postposition of Cause | ေၾကာင့္
PAim | Aim | အေမ႔
AimP | Postposition of Aim | အတြက္
CCS | Join the sentences | လွ်င္
CCM | Join the meanings | ထို႔ေၾကာင့္
CCC | Join the words | ႏွင့္
CCP | Join with particles | ကို
CCA | Join as an adjective | မည့္
7. PROPOSED GRAMMAR FOR MYANMAR SENTENCES
Since it is impossible to cover all sentence types in the Myanmar language, we have taken a portion of the sentences and constructed a grammar for them. Myanmar is a free-phrase-order language: one sentence can be written in different forms with the same meaning, i.e. the positions of the tags are not fixed, so we cannot restrict the grammar rules to a single order. The grammar rules may therefore be long, but we have to accept this. The grammar we have constructed may not work for all sentences in the Myanmar language because we have not considered all sentence types. Some of the sentences used to build the grammar rules are shown below.
သူ-သည္-ေက်ာင္း-သို႔-သြား-သည္။ (Subj-Pla-Verb)
သူ-သည္-ေက်ာင္းသားတစ္ေယာက္-ျဖစ္-သည္။ (Subj-PcomplS-Verb)
ေကာင္စီ၀င္-အျဖစ္-သူ႔-ကို-လူထု-က-ေရြး-သည္။ (PcomplO-Obj-Subj-Verb)
ေမာင္လွ-သည္-ေခြး-ကို-တုတ္-ျဖင့္-ရိုက္-သည္။ (Subj-Obj-Use-Verb)
သူ-သည္-ဆရာ႔-ကို-စာအုပ္-ေပး-သည္။ (Subj-Obj-Iobj-Verb)
သူမ-သည္-လူနာမ်ား-ကို-ေဆြမ်ိဳးမ်ား-ကဲ႔သို႔-ျပဳစု-သည္။ (Subj-Obj-Sim-Verb)
ကေလးမ်ား-သည္-အေဖာ္-ေၾကာင့္-ပ်က္စီး-သည္။ (Subj-Cau-Verb)
သစ္႐ြက္တို႔-သည္-တေပါင္းလ-၌-ေၾကြ-သည္။ (Subj-Tim-Verb)
တရားသူၾကီး-သည္-ခိုးမႈ-ကို-တရား႐ံုး-၌-နံနက္-က-စစ္ေဆး-သည္။ (Subj-Obj-Pla-Tim-Verb)
အေမသည္-သူ႔သားအတြက္-မုန္႔ကို-ေစ်းမွ-မနက္က-ဝယ္ခဲ႔သည္။ (Subj-Aim-Obj-Pla-Tim-Verb)
Our proposed grammar for Myanmar Sentences:
Sentence → I-sent | I-sent CC I-sent | CCM I-sent | Obj-sent I-sent | Subj-sent I-sent
I-sent → Subj Obj Pla Active | Subj Active | Com Pla Active | Subj PcomplS Active
CC → CCS | CCP
Subj-sent → I-sent CCA Subj
Obj-sent → I-sent CCA Obj
Subj → PSubj SubjP
Subj → Subj
Obj → PObj ObjP
Obj → Obj
Pla → PPla PlaP
PcomplO → PPcomplO PcomplOP
Use → PUse UseP
Sim → PSim SimP
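The grammar above can be checked mechanically. The following is a runnable sketch (not the authors' implementation) that tests whether a function-tag sequence is derivable from Sentence. A tag is also allowed to match a nonterminal of the same name, which covers the unary rules Subj → Subj and Obj → Obj without infinite recursion.

```python
# The proposed grammar, with each right-hand side as a list of symbols.
GRAMMAR = {
    "Sentence": [["I-sent"], ["I-sent", "CC", "I-sent"], ["CCM", "I-sent"],
                 ["Obj-sent", "I-sent"], ["Subj-sent", "I-sent"]],
    "I-sent":   [["Subj", "Obj", "Pla", "Active"], ["Subj", "Active"],
                 ["Com", "Pla", "Active"], ["Subj", "PcomplS", "Active"]],
    "CC":       [["CCS"], ["CCP"]],
    "Subj-sent": [["I-sent", "CCA", "Subj"]],
    "Obj-sent":  [["I-sent", "CCA", "Obj"]],
    "Subj":     [["PSubj", "SubjP"]],
    "Obj":      [["PObj", "ObjP"]],
    "Pla":      [["PPla", "PlaP"]],
    "PcomplO":  [["PPcomplO", "PcomplOP"]],
    "Use":      [["PUse", "UseP"]],
    "Sim":      [["PSim", "SimP"]],
}

def spans(symbol, tags, i):
    """All end positions j such that tags[i:j] is derivable from `symbol`."""
    ends = set()
    if i < len(tags) and tags[i] == symbol:   # a tag used directly (e.g. Subj, Active)
        ends.add(i + 1)
    for rhs in GRAMMAR.get(symbol, []):       # try every production for this symbol
        frontier = {i}
        for sym in rhs:
            frontier = {j for e in frontier for j in spans(sym, tags, e)}
        ends |= frontier
    return ends

def derives(tags):
    """True if the whole tag sequence is a Sentence under the grammar."""
    return len(tags) in spans("Sentence", tuple(tags), 0)

# The complex sentence of Figure 5: Subj-Active-CCP-Subj-Active
print(derives(["Subj", "Active", "CCP", "Subj", "Active"]))  # True
```

Because none of the rules is left-recursive, the top-down search terminates; this is enough for the short tag sequences produced by the function tagger.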
8. FUNCTION TAGGING
8.1 Naive Bayes Classifier
Before one can build a naive Bayesian classifier, one needs to collect training data. The training data is a set of problem instances; each instance consists of values for each of the defined features of the underlying model, together with the corresponding class, i.e. the function tag in our case. The development of a Naive Bayes classifier involves learning how much each function tag should be trusted for the decisions it makes [17]. The method is well matched to the function tagging problem.
The Naïve Bayesian classifier is a simple probabilistic classifier based on applying Bayes' theorem with strong (naïve) independence assumptions: it assumes independence among the input features. Given an input vector, the target class is therefore found by choosing the class with the highest posterior probability. The probability model for the classifier is the conditional model

P(ck | x1, x2, …, xi) ∝ P(ck) · P(x1, x2, …, xi | ck)    (1)

where X = x1, x2, x3, … (xi, i ≥ 1) are the features and C = c1, c2, c3, … (ck, k ≥ 1) are the classes. P(ck | x1, x2, …, xi) is referred to as the posterior probability, P(ck) as the prior probability, and P(x1, x2, …, xi | ck) as the likelihood.
8.2. Function Tagging by Using Naïve Bayes Theory
Labels such as subject, object, time, etc. are called function tags. By "function" is meant the action or state that a sentence describes. The system operates at the word level, with the assumption that input sentences are pre-segmented, POS-tagged and chunked.
Each proposed function tag is regarded as a class, and the task is to find which class/tag, from the set of predefined classes/tags, a given word in a sentence belongs to. A feature is a POS tag together with a category. The category of a word is added to its POS tag to obtain more precise lexical information, and the class of a word can be predicted from its features.
For example: Ma Ma is a clever student.
Ma Ma [ မမ(n.person) သည္(ppm.subj) ] clever [ စာေတာ္ေသာ(adj.dem) ] student [ေက်ာင္းသူ(n.person)]
a [ တစ္(part.number) ေယာက္(part.type) ] is [ ျဖစ္(v.common) သည္ (sf.declarative) ]
Nouns have 16 categories, such as animals, person, objects, food, location, etc.; there are 47 categories in total in our corpus. Some features of Myanmar words are shown in Table 4.
Table 4. Features
Feature English Myanmar
n.food apple ပန္းသီး
pron.possessive his သူ႕
ppm.time at တြင္
adj.dem happy ေပ်ာ္ရႊင္ေသာ
part.support can ႏိုင္
cc.mean so ထို႔ေၾကာင့္
v.common go သြား
sf.declarative null ၏
In the Myanmar language, some words have the same meaning but different features, as shown in Table 5. For example:
• Ma Ma and Hla Hla are friends.
• He lives with his uncle.
• He hits the dog with the stick.
In these three sentences, the English words (and, with, with) all correspond to the same Myanmar word (ႏွင့္).
Table 5. Same word with different features
Feature English Myanmar
cc.chunk and ႏွင့္
ppm.compare with ႏွင့္
ppm.use with ႏွင့္
A class is one of the proposed function tags. The same word may have different function tags, as
shown in Table 6.
Table 6. Function tags
Function tags English Myanmar
PcomplS He has a house. အိမ္
PPla He lives in a house. အိမ္
PSubj A house is near the school. အိမ္
PObj He buys a house. အိမ္
There are many chunks in a sentence such as NC (noun chunk), PPC (postpositional chunk), AC
(adjectival chunk), RC (adverbial chunk), CC (conjunctional chunk), SFC (sentence’s final
chunk) and VC (verb chunk). The chunk types are shown in table 7.
Table 7. Chunk types
No. Chunk Type English Example
1 Noun Chunk they NC[သူတို႔/pron.person]
2 Postpositional Chunk at PPC[တြင္/ppm.place]
3 Adjectival Chunk brave AC[ရဲရင္႔/adj.dem]
4 Adverbial Chunk quickly RC[လ်င္ျမန္စြာ/adv.manner]
5 Conjunctional Chunk or CC[သို႔မဟုတ္/cc.chunk]
6 Sentence Final Chunk - SFC[၏/sf.declarative]
7 Verb Chunk help VC[ကူညီ/v.common]
A chunk contains a Myanmar head word and its modifiers. It can contain more than one POS tag, and one of the POS tags is selected according to the chunk type. In the following chunk, the POS tag (n.animals) is selected with respect to the chunk type (NC).
For example: NC [ေခြး/n.animals,တစ္/part.number,ေကာင္/part.type]
If a noun chunk (NC) contains more than one noun, the last noun (here n.food) is selected as the
head word, in accordance with the nature of the Myanmar language.
For example: NC [ေဆာင္းရာသီ/n.time,သီးႏွံပင္/n.food,မ်ား/part.number]
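This head-selection rule can be sketched as follows; the function name and the chunk-type-to-POS-prefix map are illustrative assumptions, not the authors' implementation:

```python
def head_tag(chunk_type, items):
    """Pick the (word, POS) pair that represents the chunk.
    `items` is the chunk content as a list of (word, pos.category) pairs.
    For a noun chunk (NC) with several nouns, Myanmar takes the LAST noun
    as the head, so we scan from the right."""
    # Assumed mapping from chunk type to the POS prefixes it selects.
    prefixes = {"NC": ("n.", "pron."), "VC": ("v",), "PPC": ("ppm.",),
                "AC": ("adj.",), "RC": ("adv.",), "CC": ("cc.",), "SFC": ("sf",)}
    for word, pos in reversed(items):
        if pos.startswith(prefixes[chunk_type]):
            return word, pos
    return items[0]  # fall back to the first element if nothing matches

# The example chunk above: the last noun (n.food) is chosen as the head.
print(head_tag("NC", [("ေဆာင္းရာသီ", "n.time"),
                      ("သီးႏွံပင္", "n.food"),
                      ("မ်ား", "part.number")]))  # -> ('သီးႏွံပင္', 'n.food')
```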
There are many possible function tags (t1, t2, …, tk) for each POS tag with category (pc). These
possible tags are retrieved from the training corpus using the following equation for the prior
probability; sample values are shown in Table 8.
P(tk | pc) = C(tk, pc) / C(pc)    (2)
Table 8. Sample data for POS/function tag pairs with probability
POS tags Function tags : Probability
ppm.use UseP:1.0
n.natural PSubj:0.209, Subj:0.2985, PPla:0.1343, PObj:0.1642, PcomplS:0.0448,
PPcomplO:0.0149, PCau:0.0448, PSim:0.0149, PAim:0.0299,
Obj:0.0299, PCom:0.0149
pron.possessive PIobj:0.1111, PSubj:0.2222, PObj:0.6667
cc.chunk CCC:1.0
adj.dem PcomplS:0.0192, Ada:0.9808
n.animal Subj:0.1212, PObj:0.3333, PcomplS:0.1212, PSubj:0.2727, PSim:0.0606,
Obj:0.0303, PAim:0.0303, PUse:0.0303
v.common Active:1.0
part.eg PcomplOP:0.5455, SimP:0.4545
We calculate the probability of each next function tag (n1, n2, …, nj) given the previous possible
tag using the following equation for the likelihood; sample values are shown in Table 9.

P(nj | tk) = C(nj, tk) / C(tk)    (3)
Table 9. Sample data for function/function tag pairs with probability
Function tags Function tags : Probability
CCC Subj:0.271, Active:0.2452 , PObj:0.1226, Obj:0.129, PTim:0.0194
PcomplS:0.0516, PPla:0.0516, Pla:0.0387, Tim:0.0194, PSubj:0.0387
PCau:0.0065, PAim:0.0065
Subj CCC:0.2047, Active:0.5436, PTim:0.0067, PCom:0.0067, Ada:0.0604,
PDir:0.0067, Tim:0.0134, Pla:0.0101, PUse:0.0034, PSim:0.0101,
PLea:0.0134, CCA:0.0268, Obj:0.0503, PPla:0.0235,PObj:0.0168
CCS:0.0034
PCau CCC:0.1111, CauP:0.8889
PExt ExtP:1.0
UseP Active:0.5652, PObj:0.087, Subj:0.087, PArr:0.0435, PTim:0.087,
CCA:0.0435, PcomplS:0.0435, Obj:0.0435
PPla CCC:0.056, PlaP:0.936, PPla:0.0080
Obj CCC:0.2667, Active:0.6917, AimP:0.0083, Subj:0.0083, CCA:0.0083
Ada:0.0167
PcomplO Active:1.0
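Both tables can be produced by maximum-likelihood counting over the annotated corpus. A toy sketch of equations (2) and (3), where the two-sentence corpus below is illustrative data, not the real corpus:

```python
from collections import Counter

# Each sentence is a list of (POS-with-category, function-tag) pairs.
corpus = [
    [("pron.person", "PSubj"), ("ppm.subj", "SubjP"), ("v.common", "Active")],
    [("n.person", "Subj"), ("v.common", "Active")],
]

tag_given_pos = Counter()   # C(t, pc)
pos_count = Counter()       # C(pc)
next_given_tag = Counter()  # C(n, t)
tag_count = Counter()       # C(t), counted where a next tag exists

for sent in corpus:
    for i, (pc, t) in enumerate(sent):
        tag_given_pos[(t, pc)] += 1
        pos_count[pc] += 1
        if i + 1 < len(sent):
            next_given_tag[(sent[i + 1][1], t)] += 1
            tag_count[t] += 1

def prior(t, pc):            # eq. (2): P(t | pc) = C(t, pc) / C(pc)
    return tag_given_pos[(t, pc)] / pos_count[pc]

def likelihood(n, t):        # eq. (3): P(n | t) = C(n, t) / C(t)
    return next_given_tag[(n, t)] / tag_count[t]

print(prior("PSubj", "pron.person"))   # 1.0 on this toy corpus
print(likelihood("SubjP", "PSubj"))    # 1.0 on this toy corpus
```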
Possible function tags are disambiguated using the Naïve Bayesian method: we multiply the
probabilities from (2) and (3) and choose the function tag with the largest product as the posterior
decision. Technically, the task of function tag assignment is to generate a sentence that has the
correct function tags attached to the appropriate words.
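A concrete sketch of this decision rule (not the authors' code): the prior row for pron.possessive is taken from Table 8, while the transition probabilities below are assumed illustrative values, not corpus statistics.

```python
prior = {  # P(tag | POS-with-category), eq. (2); row taken from Table 8
    "pron.possessive": {"PIobj": 0.1111, "PSubj": 0.2222, "PObj": 0.6667},
}
likelihood = {  # P(next tag | tag), eq. (3); values below are assumed
    "PSubj": {"SubjP": 0.9, "CCC": 0.1},
    "PObj":  {"ObjP": 0.8, "Active": 0.2},
    "PIobj": {"IobjP": 1.0},
}

def best_tag(pos_with_category, next_tag):
    """Choose the function tag maximising P(t | pc) * P(next | t)."""
    candidates = prior[pos_with_category]
    return max(candidates,
               key=lambda t: candidates[t] * likelihood.get(t, {}).get(next_tag, 0.0))

# A possessive pronoun followed by the subject postposition tag: even though
# PObj has the larger prior, the context tips the decision to PSubj.
print(best_tag("pron.possessive", "SubjP"))  # -> PSubj
```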
Our description of the function tagging process refers to the example shown in Figure 3, which
illustrates the sentence "မမႏွင့္လွလွသည္ ေက်ာင္းသို႔ စက္ဘီးျဖင့္ သြားသည္။" (Ma Ma and Hla Hla go to
school by bicycle). This sentence is represented as the word-tag sequence "noun conjunction noun
ppm noun ppm noun ppm verb sf" and as the chunk sequence "NC CC NC PPC NC PPC NC PPC
VC SFC".
(a) NC[မမ/n.person]#CC[ႏွင့္/cc.chunk]#NC[လွလွ/n.person]#PPC[သည္/ppm.subj]#NC[ေက်ာင္း/n.location]
#PPC[သို႔/ppm.place]#NC[စက္ဘီး/n.objects]#PPC[ျဖင့္/ppm.use]#VC[သြား/v.common]#SFC[သည္/sf]။
(b) PSubj[မမ]#CCC[ႏွင့္]#PSubj[လွလွ]#SubjP[သည္]#PPla[ေက်ာင္း]#PlaP[သို႔]#PUse[စက္ဘီး]#UseP[ျဖင့္]
#Active[သြားသည္]။
Figure 3. An overview of function tagging of the sentence
(a)The input POS-tagged and chunk sentence (b) The output sentence with function tags
9. PARSING
9.1. Context Free Grammar for Myanmar Sentences
The language defined by a CFG (context-free grammar) is the set of strings derivable from the
start symbol S (for Sentence). The core of a CFG is a set of production rules, each of which
replaces a single variable with a string of variables and symbols. The grammar generates all strings
that, starting from a special start variable, can be obtained by applying the production rules until
no variables remain. A CFG is usually thought of in two ways: as a device for generating sentences, or
as a device for assigning a structure to a given sentence. We use a CFG to capture the grammatical
relations of the function tags.
A CFG is a 4-tuple <N, Σ, P, S> consisting of:
• a set of non-terminal symbols N;
• a set of terminal symbols Σ;
• a set of productions P, each of the form A → α, where A is a non-terminal and α is a string of symbols from the infinite set of strings (Σ ∪ N)*;
• a designated start symbol S.
9.2. Parsing Simple Sentences
A simple sentence contains one subject and one verb. Simple sentences can be constructed in many
different forms.
• Constructed by adding adjective and adverb
Adjective + Subject + Adjective + Object + Adverb + Verb
ဝေသာ +ေကာင္ေလးသည္ + ခ်ိဳေသာ + ကိတ္မုန္႔ကို + လ်င္ျမန္စြာ + စားသည္။
Fat + boy + sweet + cake + quickly +eat
(A fat boy eats quickly the sweet cake.)
• Constructed by using different set of phrases
Subject phrase + Object phrase + Verb
ဦးဘ၏သားသည္ + ဦးထုပ္အနီႏွင့္ေကာင္ေလးကို +ရွာသည္။
U Ba’s son + boy with the red hat + find
(U Ba’s son finds a boy with the red hat.)
• Constructed by omitting subject
Object + Time + Verb
ဆံပင္ကို +တနဂၤေႏြေန႔တြင္+ေလွ်ာ္သည္။
Hair + on Sunday + wash
(Wash the hair on Sunday.)
• Constructed by omitting verb
Subject + Subject’s complement+ Sentence’s final particle
သူက + ဆရာ +ပါ။
He + teacher + null
(He is a teacher.)
Consider a simple declarative sentence “သူတို႔သည္ ေမာင္ဘကို ေခါင္းေဆာင္ အျဖစ္ ေရြးခ်ယ္ခဲ့ သည္။” (They
selected Mg Ba as a leader).
The structure of the above sentence is Subj-Obj-PcomplO-Active. This is a correct sentence
according to the Myanmar literature.
(a) NC[သူတို႔/pron.possessive]#PPC[သည္/ppm.subj]#NC[ေမာင္ဘ/n.person]#PPC[ကို/ppm.obj]#NC
[ေခါင္းေဆာင္/n.person]#PPC[အျဖစ္/part.eg]#VC[ေရြးခ်ယ္/v.common,ခဲ့/part.support]#SFC[သည္/
sf]။
(b) PSubj[သူတို႔]#SubjP[သည္]#PObj[ေမာင္ဘ]#ObjP[ကို]#PPcomplO[ေခါင္းေဆာင္
]#PcomplOP[အျဖစ္] # Active[ေရြးခ်ယ္ခဲ့သည္]။
(c)
Sentence [start]
I-sent [Sentence → I-sent]
Subj Obj PcomplO Active [I-sent → Subj Obj PcomplO Active]
PSubj SubjP Obj PcomplO Active [Subj → PSubj SubjP]
PSubj SubjP PObj ObjP PcomplO Active [Obj → PObj ObjP]
PSubj SubjP PObj ObjP PPcomplO PcomplOP Active [PcomplO → PPcomplO PcomplOP]
(d)
Figure 4. (a) The tagged and chunk simple sentence (b) The function tagged sentence
(c) Grammar derivation for simple sentence (d) The parse tree with function tags
9.3. Parsing Complex Sentences
A complex sentence has more than one verb and contains at least two simple sentences,
joined with postpositions, particles or conjunctions. There are three types of complex sentences.
9.3.1. Two simple sentences are joined with postpositions
Consider the complex sentence "သူေရကူးေနသည္ ကို ကၽြန္ေတာ္ ေတြ႔သည္။" (I see that he is swimming).
In this sentence, the two simple sentences သူေရကူးေနသည္ (he is swimming) and ကၽြန္ေတာ္ ေတြ႔သည္ (I
see) are joined by the postposition ကို (that). The structure of the sentence is Subj-Active-CCP-
Subj-Active. This is a correct sentence according to the Myanmar literature.
(a) NC [သူ/pron.person] # VC [ေရကူးေနသည္/v.common] # CC [ကို/cc.obj] # NC
[ကၽြန္ေတာ္/pron.person] # VC [ေတြ႔/v.common] # SFC [သည္/sf]။
(b) Subj[သူ] # Active[ေရကူးေနသည္] # CCP[ကို] # Subj[ကၽြန္ေတာ္] # Active[ေတြ႔သည္]။
(c)
Sentence [start]
I-sent CCP I-sent [Sentence→I-sent CCP I-sent]
Subj Active CCP I-sent [I-sent→ Subj Active]
Subj Active CCP Subj Active [I-sent→Subj Active]
(d)
Figure 5. (a) The tagged and chunk complex sentence joined with postposition (CCP)
(b) The function tagged sentence (c) Grammar derivation (d) The parse tree with function tags
9.3.2. Two simple sentences are joined with particles
In Figure 6, the sentence "အေဖေပးေသာစာအုပ္သည္ ေကာင္းသည္။" (The book that is given by my father
is good.) is illustrated. It is described as the chunk sequence "NC VC CC NC PPC AC SFC",
and the sentence structure (Sentence) contains separate constituents for the subject sentence
(Subj-sent) and the independent sentence (I-sent), which contains the other phrases.
(a) NC [အေဖ/n.person] # VC [ေပး/v.common] # CC [ေသာ/cc.adj] # NC [စာအုပ္/n.objects] # PPC
[သည္/ppm.subj] # AC [ေကာင္း/adj.dem] # SFC [သည္/sf]။
(b) Subj[အေဖ]#Active[ေပး]#CCA[ေသာ]#PSubj[စာအုပ္]#SubjP[သည္]#Ada[ေကာင္းသည္]။
(c)
Sentence [start]
Subj-sent I-sent [Sentence → Subj-sent I-sent]
I-sent CCA Subj I-sent [Subj-sent → I-sent CCA Subj]
Subj Active CCA Subj I-sent [I-sent → Subj Active]
Subj Active CCA PSubj SubjP I-sent [Subj → PSubj SubjP]
Subj Active CCA PSubj SubjP Ada [I-sent → Ada]
(d)
Figure 6. (a) The tagged and chunk complex sentence joined with particle (CCA)
(b) The function tagged sentence (c) Grammar derivation (d) The parse tree with function tags
9.3.3. Two simple sentences are joined with conjunctions
Consider the complex sentence "သူလိမၼာ ေသာေၾကာင့္ ဆရာမ်ားက သူ႔ကို ခ်စ္ၾကသည္။" (As he is clever, the
teachers love him). In this sentence, the two simple sentences သူလိမၼာ (he is clever) and ဆရာမ်ားက
သူ႔ကို ခ်စ္ၾကသည္ (the teachers love him) are joined by the conjunction ေသာေၾကာင့္ (as). The structure
of the sentence is Subj-Ada-CCS-Subj-Obj-Active. This is a correct sentence according to the
Myanmar literature.
(a) NC [သူ/pron.person] # AC [လိမၼာ/adj.dem] # CC [ေသာေၾကာင့္/cc.sent] # NC [ဆရာမ်ား/n.objects] # PPC
[က/ppm.subj] # NC [သူ႔/pron.possessive] # PPC [ကို/ppm.obj] # VC [ခ်စ္ၾက/v.common] # SFC [သည္/sf]။
(b) Subj[သူ]#Ada[လိမၼာ]#CCS[ေသာေၾကာင့္]#PSubj[ဆရာမ်ား]#SubjP[က]#PObj[သူ႔]#ObjP[ကို]#Active[ခ်စ္ၾကသည္]။
(c)
Sentence [start]
I-sent CCS I-sent [Sentence→I-sent CCS I-sent]
Subj Ada CCS I-sent [I-sent→Subj Ada]
Subj Ada CCS Subj Obj Active [I-sent→Subj Obj Active]
Subj Ada CCS PSubj SubjP Obj Active [Subj → PSubj SubjP]
Subj Ada CCS PSubj SubjP PObj ObjP Active [Obj → PObj ObjP]
(d)
Figure 7. (a) The tagged and chunk complex sentence joined with conjunction (CCS)
(b) The function tagged sentence (c) Grammar derivation (d) The parse tree with function tags
10. EXPERIMENTAL RESULTS
In our corpus, the sentences can be divided into two sets. One is the simple sentence set, in
which every sentence has no more than 15 words; the other is the complex sentence set, in which
every sentence has more than 15 words. There are 1600 simple sentences and 2300 complex
sentences in the corpus.
For evaluation purposes, sentences collected from Myanmar middle-school textbooks and
Myanmar historical books are used as a test set of about 2200 sentences. After implementing the
system with the grammar, we observed that it easily generates the parse tree for a sentence whose
structure satisfies the grammar rules. Our program tests only the sentence structure against the
grammar rules: if the structure satisfies a rule, the program recognizes the sentence as correct and
generates a parse tree; otherwise it reports an error.
Table 10 shows the overall performance of the proposed system. The proposed system yields
96.68% precision, 93.05% recall and 94.83% F-measure for simple sentences. Performance
comparisons between the various sentence types are shown in Figure 8.
Precision = NumberOfCorrectSentences / TotalNumberOfRecognizedSentences × 100%
Recall = NumberOfCorrectSentences / NumberOfActualExistingCorrectSentences × 100%
F-Measure = 2 × Precision × Recall / (Precision + Recall)
Table 10. Compared results of each sentence types
Sentence Type | Actual | Recognized | Correct | Precision | Recall | F-Measure
Simple | 720 | 693 | 670 | 96.68% | 93.05% | 94.83%
Complex joined with CCP | 455 | 420 | 394 | 93.81% | 88.54% | 91.09%
Complex joined with CCA | 370 | 351 | 319 | 90.88% | 86.22% | 88.48%
Complex joined with CCS | 665 | 640 | 593 | 92.66% | 89.17% | 90.88%
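The simple-sentence scores in Table 10 can be reproduced from the raw counts (720 actual, 693 recognized, 670 correct), taking precision over recognized sentences and recall over actually existing correct sentences:

```python
actual, recognized, correct = 720, 693, 670

precision = correct / recognized * 100   # correct among recognized sentences
recall = correct / actual * 100          # correct among actually existing sentences
f_measure = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(f_measure, 2))  # 96.68 94.83
```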
Figure 8. Performance Comparisons between the Various Sentence Types
11. CONCLUSION AND FUTURE WORK
In the task of assigning function tags, we chose the Naïve Bayes model for its simplicity and
ease of use. We apply a context-free grammar for parsing because it is easy to maintain and can
accommodate new language features, and the parse tree can be built from the function tags. As
function tagging is a pre-processing step for parsing, errors that occur during function tagging
affect the parse tree. The corpus may become balanced over time, because the Naïve Bayesian
framework's probabilities simply describe uncertainty. Corpus creation is time-consuming. The
corpus is a resource for the development of a Myanmar-to-English translation system, and we
expect it to be continually expanded in the future because tested sentences can be added to it.
In this work we have considered a limited number of Myanmar sentences to construct the grammar
rules. In future work we will consider as many sentences as possible, and some additional tags,
when constructing the grammar rules, because Myanmar is a free-phrase-order language: the word
positions in one sentence may not be the same in another, so we cannot restrict the grammar rules
to a limited number of sentences.
REFERENCES
[1] John C. Henderson and Eric Brill. “Exploiting Diversity in Natural Language Processing:
Combining Parsers”.
[2] Blaheta, D (2003) “Function tagging”. Ph.D. Dissertation, Brown University. Advisor-Eugene
Charniak.
[3] Phyu Hnin Myint (2010) “Assigning automatically Part-of-Speech tags to build tagged corpus for
Myanmar language”, The Fifth Conference on Parallel Soft Computing, Yangon, Myanmar.
[4] Phyu Hnin Myint (2011) “Chunk Tagged Corpus Creation for Myanmar Language”. In Proceedings
of the ninth International Conference on Computer Applications, Yangon, Myanmar.
[5] Eugene Charniak (1997) “Statistical parsing with a context-free grammar and word statistics”. In
Proceedings of the Fourteenth National Conference on Artificial Intelligence, pages 598-603,
Menlo Park.
[6] Blaheta, D., and Johnson, M (2000) “Assigning function tags to parsed text”. In Proceedings of the
1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics,
234–240.
[7] Mihai Lintean and Vasile Rus (2007) “Naive Bayes and Decision Trees for Function Tagging”. In
Proceedings of the International Conference of the Florida Artificial Intelligence Research Society
(FLAIRS) 2007, Key West, FL, May (in press).
[8] Yong-uk Park and Hyuk-chul Kwon (2008) “Korean Syntactic Analysis using Dependency Rules and
Segmentation “, Proceedings of the Seventh International Conference on Advanced Language
Processing and Web Information Technology(ALPIT2008), Vol.7, pp.59-63, China, July 23-25.
[9] Mark-Jan Nederhof and Giorgio Satta (2002) “Parsing Non-Recursive Context-Free Grammars”. In
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL
ANNUAL'02), July 7-12, Pages 112-119, Philadelphia, Pennsylvania, USA.
[10] Kyongho Min & William H. Wilson, “Are Efficient Natural Language Parsers Robust?” School of
Computer Science & Engineering,University of New South Wales, Sydney NSW 2052 Australia
[11] Ethnologue (2005) Languages of the World, 15th Edition, Dallas, Tex.: SIL International. Online
version: http://www.ethnologue.com/. Edited by Raymond G. Gordon, Jr.
[12] Myanmar Thudda (1986) vol. 1 to 5 in Bur-Myan, Text-book Committee, Basic Edu., Min. of Edu.,
Myanmar, ca.
[13] Shwe Pyi Soe, U (2010) ျမန္မာဘာသာစကား Aspects of Myanmar Language
[14] Thaung Lwin, U (1978) နည္းသစ္ျမန္မာသဒၵါ
[15] Ko Lay, U (2003) ျမန္မာသဒၵါဖြဲ႔စည္းပံု Ph.D. Dissertation, Myanmar Department, University of Education.
[16] Win Win Thant (2010) “Naive Bayes for function tagging in Myanmar Language”, The Fifth
Conference on Parallel Soft Computing, Yangon, Myanmar, 2010.
[17] Leon Versteegen (1999) “The Simple Bayesian Classifier as a Classification Algorithm”.
[18] Y. Tsuruoka and K. Tsujii (2005) “Chunk parsing revisited”. In Proceedings of the Ninth
International Workshop on Parsing Technologies. Vancouver, Canada.
[19] Michael Collins (1996) “A New Statistical Parser Based on Bigram Lexical Dependencies”. In
Proceedings of ACL-96, pp. 184–191
Authors
Win Win Thant is a Ph.D research student. She received B.C.Sc (Bachelor of Computer
Science) degree in 2003, B.C.Sc (Hons.) degree in 2004 and M.C.Sc (Master of Computer
Science) degree in 2007. She is also an Assistant Lecturer of U.C.S.Y (University of
Computer Studies, Yangon). She has published papers in International conferences and
International Journals. Her research interests include Natural Language Processing,
Artificial Intelligence and Machine Translation.
Tin Myat Htwe is an Associate Professor of U.C.S.Y. She obtained her Ph.D degree in Information Technology
from University of Computer Studies, Yangon. Her research interests include Natural Language Processing,
Data Mining and Artificial Intelligence. She has published papers in International conferences and
International Journals.
Ni Lar Thein is a Rector of U.C.S.Y. She obtained B.Sc. (Chem.), B.Sc. (Hons) and M.Sc. (Computer
Science) from Yangon University and Ph.D. (Computer Engg.) from Nanyang Technological University,
Singapore in 2003. Her research interests include Software Engineering, Artificial Intelligence and Natural
Language Processing. She has published papers in International conferences and International Journals.