The document describes the VNSCalendar system, which allows users to manage their personal calendars using Vietnamese speech commands. It makes three key points:
1. The system can recognize Vietnamese speech commands and questions, convert them to text, analyze their syntax and semantics, generate database queries, and return results to the user.
2. Its semantic processing mechanism analyzes the syntax and semantics of Vietnamese commands and questions using Definite Clause Grammar and formal semantics techniques.
3. An evaluation of the system, built and tested in a PC environment, found it could accurately recognize over 91% of Vietnamese speech commands.
BIDIRECTIONAL LONG SHORT-TERM MEMORY (BILSTM) WITH CONDITIONAL RANDOM FIELDS (... - ijnlc
This study investigates the effectiveness of knowledge named entity recognition in Online Judges (OJs). OJs lack topic classification and are limited to problem IDs only, so a lot of time is spent finding programming problems, and more specifically knowledge entities. A Bidirectional Long Short-Term Memory (BiLSTM) with Conditional Random Fields (CRF) model is applied to recognize the knowledge named entities present in solution reports. For the test run, more than 2,000 solution reports were crawled from the Online Judges and processed for the model output. The stability of the model is also assessed via its F1 value. The results obtained through the proposed BiLSTM-CRF model are more effective (F1: 98.96%) and efficient in lead time.
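At inference time, the CRF layer on top of the BiLSTM picks the best tag sequence by Viterbi decoding over emission and transition scores. A minimal stdlib-only sketch, with toy emission scores standing in for BiLSTM outputs and an invented B/I/O tag set for knowledge entities (not the paper's actual model):

```python
# Viterbi decoding over per-token tag scores, as a CRF layer does at
# inference time. Emission scores here are toy numbers standing in for
# BiLSTM outputs; transitions penalize I-KNOW without a preceding B-KNOW.

TAGS = ["O", "B-KNOW", "I-KNOW"]

# TRANSITIONS[(i, j)]: score of moving from tag i to tag j
TRANSITIONS = {
    ("O", "O"): 0.5, ("O", "B-KNOW"): 0.4, ("O", "I-KNOW"): -2.0,
    ("B-KNOW", "O"): 0.2, ("B-KNOW", "B-KNOW"): -0.5, ("B-KNOW", "I-KNOW"): 0.8,
    ("I-KNOW", "O"): 0.3, ("I-KNOW", "B-KNOW"): -0.5, ("I-KNOW", "I-KNOW"): 0.6,
}

def viterbi(emissions):
    """emissions: list of {tag: score} dicts, one per token."""
    score = {t: emissions[0][t] for t in TAGS}  # best score ending in each tag
    back = []                                    # backpointers per position
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for t in TAGS:
            prev = max(TAGS, key=lambda p: score[p] + TRANSITIONS[(p, t)])
            new_score[t] = score[prev] + TRANSITIONS[(prev, t)] + em[t]
            ptr[t] = prev
        score, back = new_score, back + [ptr]
    # trace back the best path from the highest-scoring final tag
    last = max(TAGS, key=score.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# "binary search tree problem": the first three tokens should come out
# as one B/I knowledge-entity span, the last as O
ems = [
    {"O": 0.1, "B-KNOW": 1.2, "I-KNOW": 0.0},   # binary
    {"O": 0.2, "B-KNOW": 0.3, "I-KNOW": 1.1},   # search
    {"O": 0.1, "B-KNOW": 0.2, "I-KNOW": 1.0},   # tree
    {"O": 1.5, "B-KNOW": 0.0, "I-KNOW": 0.1},   # problem
]
print(viterbi(ems))
```

The transition scores discourage an I-KNOW tag that does not follow a B-KNOW or I-KNOW tag, which is exactly the kind of sequence constraint a plain per-token classifier cannot enforce.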
IRJET - Survey on Generating Suggestions for Erroneous Part in a Sentence - IRJET Journal
This document discusses using deep learning approaches like long short-term memory (LSTM) neural networks to generate suggestions for erroneous parts of sentences in Indian languages. Indian languages pose unique challenges due to their morphological richness and structure differences from English. The document reviews natural language processing techniques like recurrent neural networks, convolutional neural networks, and LSTMs. It proposes using LSTMs to model sentence structure and generate possible corrections for errors in an unsupervised manner. The goal is to develop this technique for morphologically complex Indian languages like Malayalam.
The document discusses performance analysis of the ext4 file system in Linux. It provides characteristics of ext4 including compatibility, extents, multiblock allocation, fast filesystem checking, and larger file and system sizes compared to ext3. It analyzes performance of ext3 and ext4 for small, medium, and large files, finding ext4 to outperform ext3 significantly. It outlines future directions for evolving the ext file system such as supporting even larger filesystem and file sizes, improving support for sparse and fragmented files, and allowing dynamic resizing of the number of inodes.
Resolving the Semantics of Vietnamese Questions in VNewsQA/ICT System - ijaia
Recently we have built a VNewsQA/ICT system which can read the titles of Vietnamese news in the domain of information and communication technology, then process and use them to answer users' Vietnamese questions. The architecture of the VNewsQA/ICT system has two main components: 1) the first component treats simple Vietnamese sentences as the natural language textual data used to answer the user's questions; 2) the second component resolves the semantics of the Vietnamese questions which query the system. This paper introduces a semantic representation model and a processing model to resolve the Vietnamese questions in the VNewsQA/ICT system. These semantic representation and processing models are able to resolve the semantics of the eight Vietnamese question classes used in our system.
Tagging-based Efficient Web Video Event Categorization - Editor IJCATR
Web video categorization is one of the emerging research fields in the computer vision domain, driven by the massive growth of video volume on the internet, which demands event discovery. Insufficient, noisy information and large intra-class disparity make recognizing events a daunting task. Most recent works focus on constrained (fixed camera, known environment) videos with supervised labelling to categorize web videos. In this paper, we propose a subject-based Part-of-Speech (POS) tagging technique, assisted by Named Entity Recognition (NER) and WordNet, applied to YouTube video titles to discover events based on the subject rather than the objects visualized in the videos. An unsupervised learning method is used on high-level features (titles) because incoming videos are unknown and intra-class variations are large. For the experiment, we chose topics from Google Zeitgeist and downloaded the related videos from YouTube. A novel conclusion drawn from the experimental results is that using low-level features leads to poor classification when discovering intra-class events based on the subject of the videos.
QUESTION ANALYSIS FOR ARABIC QUESTION ANSWERING SYSTEMS - ijnlc
The first step in processing a question in Question Answering (QA) systems is to carry out a detailed analysis of the question to determine what it is asking for and how best to approach answering it. Our question analysis uses several techniques to analyze any question given in natural language: a Stanford POS tagger and parser for the Arabic language, a named entity recognizer, a tokenizer, stop-word removal, question expansion, question classification, and question focus extraction components. We employ numerous detection rules and a trained classifier using features from this analysis to detect important elements of the question, including: 1) the portion of the question that refers to the answer (the focus); 2) the terms in the question that identify what type of entity is being asked for (the lexical answer types); 3) question expansion; and 4) a process of classifying the question into one or more of several different types. We describe how these elements are identified and evaluate the effect of accurate detection on our question-answering system using the Mean Reciprocal Rank (MRR) accuracy measure.
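Question classification and focus extraction of this kind are often bootstrapped with surface rules before a trained classifier takes over. A toy sketch in Python, using English wh-words and invented answer-type classes rather than the paper's actual Arabic rules:

```python
import re

# Toy rule-based question classifier and focus extractor. The patterns,
# class names, and the crude focus heuristic are illustrative only.
RULES = [
    (r"^(who|whom)\b", "PERSON"),
    (r"^where\b", "LOCATION"),
    (r"^when\b", "DATE"),
    (r"^how many\b", "NUMBER"),
    (r"^what\b", "DEFINITION"),
]

def classify(question):
    q = question.lower().strip()
    for pattern, qclass in RULES:
        if re.search(pattern, q):
            return qclass
    return "OTHER"

def focus(question):
    # crude focus extraction: the phrase after the wh-word and verb,
    # approximated here as everything after the first two tokens
    tokens = question.rstrip("?").split()
    return " ".join(tokens[2:]) if len(tokens) > 2 else ""

print(classify("Who invented the telephone?"))
print(focus("Who invented the telephone?"))
```

A real system would back these rules with a trained classifier and a parser-derived focus, as the paper describes; the rules only cover the easy surface cases.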
FFEA 2016 - 10 Website Mistakes Even Great Marketers Can Make - Saffire
This document presents 11 common website mistakes that marketers can make and how to avoid them. It recommends using current programming languages and plug-ins, optimizing for search engines and mobile users, including clear calls to action, prioritizing photos and video over text alone, and collecting analytics to improve content and outreach over time. The overall message is that websites need frequent updates, multichannel content, and data-driven optimization to effectively engage audiences.
The document outlines 5 steps to developing a smart compensation plan: 1) gain executive support by emphasizing compensation's impact on retention and the bottom line, 2) define your compensation strategy by determining goals and market, 3) develop a market-based pay structure using appropriate job evaluation and market data, 4) build pay ranges by identifying differentials, pay grades, and guidelines for movement, and 5) implement a total rewards plan by finalizing all compensation elements, budgets, outliers, and empowering managers. Following these steps can help attract and retain top talent through a compensation plan aligned with business needs.
A Novel Approach for Rule Based Translation of English to Marathi - aciijournal
This paper presents a design for a rule-based machine translation system for the English-Marathi language pair. The machine translation system takes an English sentence as input and parses it with the help of the Stanford parser, which handles the main source-side processing in the system. An English-to-Marathi bilingual dictionary is created. The system takes the parsed output, separates the source text word by word, and searches for the corresponding target words in the bilingual dictionary. Hand-coded rules are written for Marathi inflections, along with reordering rules. After applying the reordering rules, the English sentence is syntactically reordered to suit the Marathi language.
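The dictionary-lookup and reordering stages described above can be sketched compactly. The dictionary entries and the single SVO-to-SOV reordering rule below are illustrative; the paper's hand-coded inflection rules are not reproduced:

```python
# Minimal sketch of the dictionary-lookup-plus-reordering stage of a
# rule-based English-to-Marathi translator. Entries and the reordering
# rule are toy examples; real output needs the hand-coded inflection rules.
BILINGUAL = {
    "ram": "राम", "eats": "खातो", "a": "एक", "mango": "आंबा",
}

def translate(sentence):
    words = sentence.lower().rstrip(".").split()
    # reordering rule: English SVO -> Marathi SOV (subject, object, verb)
    if len(words) >= 2:
        words = [words[0]] + words[2:] + [words[1]]
    # word-by-word lookup in the bilingual dictionary; unknown words pass through
    return " ".join(BILINGUAL.get(w, w) for w in words)

print(translate("Ram eats a mango"))
```

Even this toy rule shows why reordering must precede lookup: the target word order is fixed syntactically before the lexical substitution happens, exactly the pipeline order the paper describes.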
Advancements in Hindi-English Neural Machine Translation: Leveraging LSTM wit... - IRJET Journal
The document discusses advancements in neural machine translation models for the Hindi-English language pair using Long Short-Term Memory (LSTM) networks with an attention mechanism. It provides details on preprocessing the parallel Hindi-English dataset, developing encoder-decoder LSTM models with attention, training the models over multiple epochs, and evaluating the trained models on test data. The proposed LSTM model achieves over 90% accuracy on Hindi to English translation tasks, demonstrating better performance than recurrent neural network baselines.
Modeling of Speech Synthesis of Standard Arabic Using an Expert System - csandit
This document describes an expert system for speech synthesis of Standard Arabic text. It involves two main stages: 1) creation of a sound database and 2) text-to-speech transformation. The transformation process involves phonetic orthographic transcription of the text and then generating voice signals corresponding to the transcribed phonetic sequence. The expert system uses a knowledge base containing sound data and rewriting rules. It transcribes text using graphemes as basic units and then concatenates sound units from the database to synthesize speech. Tests achieved a 96% success rate in pronouncing sentences correctly. Future work aims to improve prosody and develop fully automatic signal segmentation.
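The two-stage design, rule-based transcription followed by unit concatenation, can be sketched as follows; the rewrite rules, phoneme symbols, and unit file names are invented stand-ins, not the system's actual Arabic knowledge base:

```python
# Toy version of the two-stage pipeline: (1) phonetic transcription via
# ordered rewrite rules over graphemes, (2) "synthesis" by concatenating
# unit names looked up in a sound database. All symbols are invented.
REWRITE = [("sh", "S"), ("aa", "A:"), ("b", "b"), ("t", "t")]

def transcribe(text):
    # scan left to right; the first matching rule wins, so multi-grapheme
    # rules like "aa" must be listed before their single-letter prefixes
    phones, i = [], 0
    while i < len(text):
        for src, dst in REWRITE:
            if text.startswith(src, i):
                phones.append(dst)
                i += len(src)
                break
        else:
            i += 1  # skip characters with no rule
    return phones

SOUND_DB = {"b": "unit_b.wav", "A:": "unit_aa.wav", "t": "unit_t.wav", "S": "unit_sh.wav"}

def synthesize(text):
    # concatenative synthesis: one stored unit per transcribed phoneme
    return [SOUND_DB[p] for p in transcribe(text)]

print(transcribe("baat"))
```

The rule ordering is the design choice that matters: because longer graphemes are tried first, a digraph like "aa" is never wrongly split into two short vowels.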
Automatic Text Summarization using Natural Language Processing - IRJET Journal
The document discusses automatic text summarization using natural language processing. It describes using the Simplified Lesk algorithm and WordNet to assess the importance of sentences in a document and perform word sense disambiguation. The proposed approach computes sentence weights using Simplified Lesk and orders the sentences by weight, then selects sentences for the summary based on a given summarization percentage. The approach provides good results for summaries up to 50% of the original text and acceptable results up to 25%.
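The Simplified Lesk step can be illustrated with a tiny hand-made gloss inventory standing in for WordNet; the sense whose gloss shares the most content words with the sentence context wins:

```python
# Simplified Lesk word sense disambiguation with a tiny hand-made gloss
# inventory (a stand-in for WordNet). Senses, glosses, and the stopword
# list are illustrative, not the paper's actual resources.
SENSES = {
    "bank": {
        "finance": "institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water",
    }
}
STOPWORDS = {"the", "a", "an", "of", "and", "to", "that", "at", "he"}

def simplified_lesk(word, context):
    # content words of the sentence the ambiguous word appears in
    ctx = {w for w in context.lower().split() if w not in STOPWORDS}
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(simplified_lesk("bank", "he opened a deposits account at the bank"))
```

In the summarizer, the same overlap counts double as evidence of a sentence's importance: sentences whose words fit their glosses well score higher.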
A NEURAL MACHINE LANGUAGE TRANSLATION SYSTEM FROM GERMAN TO ENGLISH - IRJET Journal
The document describes a neural machine translation system that translates from German to English. It uses a Transformer model integrated with Fuzzy Semantic Representation (FSR) and Latent Topic Representation (LTR) to handle rare words and capture sentence context. FSR groups rare words together, and LTR uses a CNN to represent sentence context as topic vectors. The model achieves improved translation performance on the WMT En-De corpus compared to baselines by handling out-of-vocabulary words and incorporating sentence-level context.
SENTIMENT ANALYSIS IN MYANMAR LANGUAGE USING CONVOLUTIONAL LSTM NEURAL NETWORK - ijnlc
In recent years, there has been increasing use of social media among people in Myanmar, and writing reviews on social media pages about products, movies, and trips has become popular. Moreover, most people look for review pages about a product they want to buy before deciding whether to buy it. Extracting and receiving useful reviews about products of interest is very important and time-consuming. Sentiment analysis is one of the important processes for extracting useful product reviews. In this paper, a Convolutional LSTM neural network architecture is proposed to analyse the sentiment classification of cosmetic reviews written in the Myanmar language. The paper also intends to build a cosmetic reviews dataset for deep learning and a sentiment lexicon in the Myanmar language.
The document proposes a personal scheduling system that uses genetic algorithms and natural language processing. It introduces an adaptive genetic algorithm to solve the split-able scheduling problem for personal tasks. The system applies natural language processing techniques to allow users to enter tasks using natural language sentences. The genetic algorithm is implemented within the JSKE scheduling system to optimize schedules generated by a constructive heuristic algorithm.
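A minimal genetic algorithm over task orderings gives the flavor of the approach; the tasks, fitness function (negative total lateness), and operators below are toy stand-ins for the adaptive GA in the JSKE system:

```python
import random

# Minimal genetic algorithm over task orderings. Tasks map name -> 
# (duration, deadline); all values are invented for illustration.
TASKS = {"report": (3, 4), "email": (1, 2), "review": (2, 9), "slides": (4, 10)}

def lateness(order):
    """Total lateness of running the tasks back-to-back in this order."""
    t = total = 0
    for name in order:
        dur, deadline = TASKS[name]
        t += dur
        total += max(0, t - deadline)
    return total

def crossover(a, b):
    # order crossover: keep a prefix of a, fill the rest in b's order
    cut = random.randint(1, len(a) - 1)
    head = a[:cut]
    return head + [x for x in b if x not in head]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def evolve(generations=50, pop_size=20):
    pop = [random.sample(list(TASKS), len(TASKS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lateness)               # fitness: lower lateness is better
        survivors = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:
                mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lateness)

random.seed(0)
best = evolve()
print(best, lateness(best))
```

The order crossover keeps children valid permutations, which is the standard trick for scheduling chromosomes; the adaptive part of the paper's GA (self-tuning rates) is not reproduced here.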
IRJET - Factoid Question and Answering System - IRJET Journal
This document describes a factoid question answering system that uses neural networks and the Tensorflow framework. The system takes in a text document and question as input. It then processes the input using techniques like gated recurrent units and support vector machines to classify the question. The system calculates attention between facts and the question, modifies its memory, and identifies the word closest to the answer to output as the response. Key aspects of the system include training a question answering engine with Tensorflow, storing and retrieving data, and generating the final answer.
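The attention step, scoring which stored fact matters for the question, reduces to a softmax over question-fact dot products. A sketch with hand-built toy vectors (not the system's learned embeddings):

```python
import math

# Dot-product attention between a question vector and fact vectors: the
# step where the system decides which stored fact to weight when forming
# the answer. Vectors here are tiny hand-built toys, not learned embeddings.
def softmax(xs):
    m = max(xs)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(question, facts):
    # one score per fact: the dot product with the question vector
    scores = [sum(q * f for q, f in zip(question, fact)) for fact in facts]
    return softmax(scores)

facts = [[1.0, 0.0], [0.0, 1.0]]
q = [0.9, 0.1]
print(attend(q, facts))   # more weight lands on the first fact
```

The weights sum to one, so they can directly gate how much each fact contributes to the memory update the document mentions.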
AUTOMATED SQL QUERY GENERATOR BY UNDERSTANDING A NATURAL LANGUAGE STATEMENT - ijnlc
This project aims to develop a system which converts a natural language statement into a MySQL query to retrieve information from the corresponding database. The system mainly focuses on the creation of complex queries, including nested queries more than two levels deep, queries with aggregate functions, HAVING and GROUP BY clauses, and correlated queries formed due to constraints on aggregate functions. The natural language input statement taken from the user is passed through various OpenNLP natural language processing steps, such as tokenization, part-of-speech tagging, stemming, and lemmatization, to get the statement into the desired form. The statement is further processed to extract the type of query, the basic clause, which specifies the required entities from the database, and the condition clause, which specifies constraints on the basic clause. The final query is generated by converting the basic and condition clauses to their query forms and then concatenating the condition query to the basic query. Currently, the system works only with the MySQL database.
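The clause-extraction and concatenation steps can be sketched for the simplest query shape; the sentence pattern, table, and column names below are invented, and the real system handles far more complex nesting:

```python
import re

# Toy sketch of the clause-extraction step: split a natural language
# request into a basic clause (columns/table) and an optional condition
# clause, then concatenate their SQL forms. Names are invented examples.
def to_sql(statement):
    s = statement.lower().rstrip(".?")
    m = re.match(
        r"show (?P<cols>[\w, ]+?) of (?P<table>\w+)"
        r"(?: (whose|where) (?P<cond>.+))?$", s)
    if not m:
        raise ValueError("unsupported statement")
    cols = ", ".join(c.strip() for c in m.group("cols").split(","))
    sql = f"SELECT {cols} FROM {m.group('table')}"     # basic clause
    if m.group("cond"):                                # condition clause
        field, value = m.group("cond").split(" is ")
        sql += f" WHERE {field.strip()} = '{value.strip()}'"
    return sql

print(to_sql("Show name, salary of employees whose department is sales"))
```

The split into a basic clause and a condition clause mirrors the paper's pipeline; everything beyond this single "show X of Y whose Z is W" template (aggregates, nesting, GROUP BY) is outside this sketch.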
Developing a hands-free interface to operate a Computer using voice command - Mohammad Liton Hossain
The main focus of this study is to help a handicapped person operate a computer by voice command. The system can be used to operate all computer functions through the user's voice commands. It makes use of speech recognition technology that allows the computer system to identify and recognize words spoken by a human through a microphone. The software is able to recognize spoken words and enables the user to interact with the computer: the user gives commands, and the computer responds by performing tasks, actions, or operations depending on the commands given. Examples include opening or closing a file, automating YouTube by voice, searching Google by voice, taking a note by voice, and performing calculations with the calculator by voice.
The document is a project presentation on automatic keyword extraction for text summarization. It discusses extracting keywords and compiling the most salient parts of a text into a short summary. The methodology uses natural language processing and word embedding to understand text for computers. It performs pre-processing like removing stop words. It then extracts features like word frequency and scores sentences to generate a summary. The results are evaluated and future work to improve accuracy and reduce redundancy is discussed.
IRJET - Voice based Natural Language Query Processing - IRJET Journal
This document describes a voice-based natural language query processing system that allows non-expert users to interact with a database using natural language queries. The system takes a user's spoken query as input, converts it to text using speech recognition, analyzes the text to generate a SQL query, executes the SQL query against the database, and displays the results in a table. The system addresses challenges like ambiguity through techniques such as tokenization, lexical analysis, syntactic analysis, and semantic analysis to map the natural language query to a valid SQL query.
A template based algorithm for automatic summarization and dialogue managemen... - eSAT Journals
Abstract: This paper describes an automated approach for extracting significant and useful events from unstructured text. The goal of the research is to arrive at a methodology which helps in extracting important events such as dates, places, and subjects of interest. It would also be convenient if the methodology helped present users with a shorter version of the text containing all non-trivial information. We also discuss our implementation of algorithms that perform exactly this task. Key Words: Cosine Similarity, Information, Natural Language, Summarization, Text Mining
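The cosine similarity measure named in the keywords can be applied directly to term-frequency vectors to rank sentences against the whole document; a stdlib-only sketch, not the authors' exact algorithm:

```python
import math
from collections import Counter

# Cosine similarity between term-frequency vectors; sentences most similar
# to the document as a whole are kept as the summary. Tokenization is a
# naive whitespace split, purely for illustration.
def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def summarize(sentences, keep=1):
    doc = " ".join(sentences)
    ranked = sorted(sentences, key=lambda s: cosine(s, doc), reverse=True)
    return ranked[:keep]

sents = [
    "The meeting is on March 5 in Paris.",
    "Coffee was served.",
    "Attendees discussed the Paris meeting agenda.",
]
print(summarize(sents))
```

Centrality of this kind naturally keeps the sentence carrying the date and place, the "non-trivial information" the abstract wants preserved, while the off-topic sentence scores lowest.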
Design and Development of a Malayalam to English Translator - A Transfer Based... - Waqas Tariq
This paper describes a transfer-based scheme for translating Malayalam, a Dravidian language, to English. The system takes Malayalam sentences as input and outputs equivalent English sentences. It comprises a preprocessor for splitting compound words, a morphological parser for context disambiguation and chunking, a syntactic structure transfer module, and a bilingual dictionary. All the modules are morpheme-based to reduce dictionary size. The system does not rely on a stochastic approach; it is based on a rule-based architecture along with various linguistic knowledge components of both Malayalam and English. The system uses two sets of rules: rules for Malayalam morphology and rules for syntactic structure transfer from Malayalam to English. The system is designed using artificial intelligence techniques.
This document summarizes an article on automatic document clustering. It discusses how documents are preprocessed using techniques like tokenization, stop word removal, and stemming before being clustered. Clustering algorithms group similar documents together based on similarity metrics like TF-IDF. Documents can be searched by name, extension, or keywords to retrieve them from the clustered storage in a faster and more organized manner. Future work may include clustering multimedia files and improving algorithm performance.
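The TF-IDF weighting mentioned above is easy to state concretely: a term frequent in one document but rare across the collection gets a high weight. A minimal sketch:

```python
import math
from collections import Counter

# TF-IDF weighting, the basis of the clustering similarity metric above.
# tf = term frequency within a document; idf = log(N / document frequency).
def tfidf(docs):
    """docs: list of token lists -> list of {term: weight} dicts."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

docs = [["apple", "fruit"], ["apple", "pie"], ["car", "engine"]]
weights = tfidf(docs)
# "apple" appears in 2 of 3 documents, so it weighs less than "fruit"
print(weights[0])
```

Clustering then compares these weight vectors (typically by cosine similarity), so shared rare terms pull documents into the same cluster far more strongly than shared common ones.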
hExarAbax makkAmasjix samayaM anni rojulu 5:00 am - 9:00 pm
(Mecca Masjid timings in Hyderabad - All days 5:00 am - 9:00 pm)
User query: makkAmasjix PIju eVMwa?
(What is the fee for Mecca Masjid?)
POS-tagger: makkAmasjix PIju/WQ eVMwa
Replace with root word: makkAmasjix PIju/WQ eMwa
Context Handler: Updates context to 'makkAmasjix'
Advanced Filter: Keywords - makkAmas
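The trace above can be sketched as a small pipeline: tag the question word, normalize it to its root form, and remember the entity as context for follow-up questions. The tag set, root-word table, and tag placement here are illustrative, not the system's actual resources:

```python
# Toy sketch of the query pipeline: POS-tag the tokens (WX transliteration,
# as in the trace above), replace dialect forms with root words, and carry
# the entity over as context for follow-up questions. The root table,
# question-word list, and WQ tag placement are illustrative assumptions.
ROOTS = {"eVMwa": "eMwa"}           # dialect form -> root form
QUESTION_WORDS = {"eMwa", "eVMwa"}  # tokens tagged WQ (question word)

def process(query, context=None):
    tokens = query.rstrip("?").split()
    # POS-tagger (toy): question words get WQ, everything else N
    tagged = [(t, "WQ" if t in QUESTION_WORDS else "N") for t in tokens]
    # replace with root word
    normalized = [(ROOTS.get(t, t), tag) for t, tag in tagged]
    # context handler: remember the first entity for follow-up queries
    entities = [t for t, tag in normalized if tag != "WQ"]
    context = entities[0] if entities else context
    # advanced filter: keep non-question tokens as retrieval keywords
    keywords = entities
    return normalized, context, keywords

normalized, context, keywords = process("makkAmasjix PIju eVMwa?")
print(context)
```

Keeping the entity in `context` is what lets a follow-up like "and the timings?" be answered without the user repeating "makkAmasjix".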
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and Geeko she cultivates her curiosity for astronomy (hence her nickname, deneb_alpha).
OpenID AuthZEN Interop Read Out - Authorization - David Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API.
Similar to SEMANTIC PROCESSING MECHANISM FOR LISTENING AND COMPREHENSION IN VNSCALENDAR SYSTEM
Automatic Text Summarization using Natural Language ProcessingIRJET Journal
The document discusses automatic text summarization using natural language processing. It describes using the Simplified Lesk calculation and WordNet to assess the importance of sentences in a document and perform word sense disambiguation. The proposed approach assesses sentence weights using Simplified Lesk and orders them by weight. It then selects sentences for the summary based on a given summarization percentage. The approach provides good results for summaries up to 50% of the original text and acceptable results up to 25%.
A NEURAL MACHINE LANGUAGE TRANSLATION SYSTEM FROM GERMAN TO ENGLISHIRJET Journal
The document describes a neural machine translation system that translates from German to English. It uses a Transformer model integrated with Fuzzy Semantic Representation and Latent Topic Representation to handle rare words and capture sentence context. FSR groups rare words together and LTR uses CNN to represent sentence context as topic vectors. The model achieves improved translation performance on the WMT En-De corpus compared to baselines by handling out-of-vocabulary words and incorporating sentence level context.
SENTIMENT ANALYSIS IN MYANMAR LANGUAGE USING CONVOLUTIONAL LSTM NEURAL NETWORKijnlc
In recent years, there has been an increasing use of social media among people in Myanmar and writing review on social media pages about the product, movie, and trip are also popular among people. Moreover, most of the people are going to find the review pages about the product they want to buy before deciding whether they should buy it or not. Extracting and receiving useful reviews over interesting products is very important and time consuming for people. Sentiment analysis is one of the important processes for extracting useful reviews of the products. In this paper, the Convolutional LSTM neural network architecture is proposed to analyse the sentiment classification of cosmetic reviews written in Myanmar Language. The paper also intends to build the cosmetic reviews dataset for deep learning and sentiment lexicon in Myanmar Language.
Sentiment Analysis In Myanmar Language Using Convolutional Lstm Neural Networkkevig
In recent years, there has been an increasing use of social media among people in Myanmar and writing
review on social media pages about the product, movie, and trip are also popular among people. Moreover,
most of the people are going to find the review pages about the product they want to buy before deciding
whether they should buy it or not. Extracting and receiving useful reviews over interesting products is very
important and time consuming for people. Sentiment analysis is one of the important processes for extracting
useful reviews of the products. In this paper, the Convolutional LSTM neural network architecture is
proposed to analyse the sentiment classification of cosmetic reviews written in Myanmar Language. The
paper also intends to build the cosmetic reviews dataset for deep learning and sentiment lexicon in Myanmar
Language.
BIDIRECTIONAL LONG SHORT-TERM MEMORY (BILSTM)WITH CONDITIONAL RANDOM FIELDS (...kevig
This study investigates the effectiveness of Knowledge Named Entity Recognition in Online Judges (OJs). OJs are lacking in the classification of topics and limited to the IDs only. Therefore a lot of time is consumed in finding programming problems more specifically in knowledge entities.A Bidirectional Long Short-Term Memory (BiLSTM) with Conditional Random Fields (CRF) model is applied for the recognition of knowledge named entities existing in the solution reports.For the test run, more than 2000 solution reports are crawled from the Online Judges and processed for the model output. The stability of the model is also assessed with the higher F1 value. The results obtained through the proposed BiLSTM-CRF model are more effectual (F1: 98.96%) and efficient in lead-time.
The document proposes a personal scheduling system that uses genetic algorithms and natural language processing. It introduces an adaptive genetic algorithm to solve the split-able scheduling problem for personal tasks. The system applies natural language processing techniques to allow users to enter tasks using natural language sentences. The genetic algorithm is implemented within the JSKE scheduling system to optimize schedules generated by a constructive heuristic algorithm.
IRJET- Factoid Question and Answering SystemIRJET Journal
This document describes a factoid question answering system that uses neural networks and the Tensorflow framework. The system takes in a text document and question as input. It then processes the input using techniques like gated recurrent units and support vector machines to classify the question. The system calculates attention between facts and the question, modifies its memory, and identifies the word closest to the answer to output as the response. Key aspects of the system include training a question answering engine with Tensorflow, storing and retrieving data, and generating the final answer.
AUTOMATED SQL QUERY GENERATOR BY UNDERSTANDING A NATURAL LANGUAGE STATEMENTijnlc
This project aims to develop a system which converts a natural language statement into MySQL query to retrieve information from respective database. The system mainly focuses on creation of complex queries which includes nested queries with more than two-level depth, queries with aggregate functions, having clause, group by clause and co-related queries which are formed due to constraint on aggregate function. The natural language input statement taken from the user is passed through various OpenNLP natural language processing techniques like Tokenization, Parts of Speech Tagging, Stemming and Lemmatization to get the statement in the desired form. The statement is further processed to extract the type of query, the basic clause, which specifies the required entities from the database and the condition clause, which specifies constraints on the basic clause. The final query is generated by converting the basic and condition clauses to their query form and then concatenating the condition query to the basic query. Currently, the system works only with MySQL database.
Developing a hands-free interface to operate a Computer using voice commandMohammad Liton Hossain
The main focus of this study is to help a handicap person to operate a computer by voice command. It can be used to operate the entire computer functions on the user’s voice commands. It makes use of the Speech Recognition technology that allows the computer system to identify and recognize words spoken by a human using a microphone. This Software will be able to recognize spoken words and enable user to interact with the computer. This interaction includes user giving commands to his computer which will then respond by performing several tasks, actions or operations depending on the commands they gave. For Example: Opening /closing a file in computer, YouTube automation using voice command, Google search using voice command, make a note using voice command, calculation by calculator using voice command etc.
The document is a project presentation on automatic keyword extraction for text summarization. It discusses extracting keywords and compiling the most salient parts of a text into a short summary. The methodology uses natural language processing and word embedding to understand text for computers. It performs pre-processing like removing stop words. It then extracts features like word frequency and scores sentences to generate a summary. The results are evaluated and future work to improve accuracy and reduce redundancy is discussed.
IRJET - Voice based Natural Language Query ProcessingIRJET Journal
This document describes a voice-based natural language query processing system that allows non-expert users to interact with a database using natural language queries. The system takes a user's spoken query as input, converts it to text using speech recognition, analyzes the text to generate a SQL query, executes the SQL query against the database, and displays the results in a table. The system addresses challenges like ambiguity through techniques such as tokenization, lexical analysis, syntactic analysis, and semantic analysis to map the natural language query to a valid SQL query.
A template based algorithm for automatic summarization and dialogue managemen...eSAT Journals
Abstract This paper describes an automated approach for extracting significant and useful events from unstructured text. The goal of research is to come out with a methodology which helps in extracting important events such as dates, places, and subjects of interest. It would be also convenient if the methodology helps in presenting the users with a shorter version of the text which contain all non-trivial information. We also discuss implementation of algorithms which exactly does this task, developed by us. Key Words: Cosine Similarity, Information, Natural Language, Summarization, Text Mining
Design and Development of a Malayalam to English Translator- A Transfer Based...Waqas Tariq
This paper describes a transfer based scheme for translating Malayalam, a Dravidian language, to English. This system inputs Malayalam sentences and outputs equivalent English sentences. The system comprises of a preprocessor for splitting the compound words, a morphological parser for context disambiguation and chunking, a syntactic structure transfer module and a bilingual dictionary. All the modules are morpheme based to reduce dictionary size. The system does not rely on a stochastic approach and it is based on a rule-based architecture along with various linguistic knowledge components of both Malayalam and English. The system uses two sets of rules: rules for Malayalam morphology and rules for syntactic structure transfer from Malayalam to English. The system is designed using artificial intelligence techniques.
This document summarizes an article on automatic document clustering. It discusses how documents are preprocessed using techniques like tokenization, stop word removal, and stemming before being clustered. Clustering algorithms group similar documents together based on similarity metrics like TF-IDF. Documents can be searched by name, extension, or keywords to retrieve them from the clustered storage in a faster and more organized manner. Future work may include clustering multimedia files and improving algorithm performance.
SEMANTIC PROCESSING MECHANISM FOR LISTENING AND COMPREHENSION IN VNSCALENDAR SYSTEM
International Journal on Natural Language Computing (IJNLC) Vol. 2, No. 2, April 2013
DOI : 10.5121/ijnlc.2013.2201
SEMANTIC PROCESSING MECHANISM FOR LISTENING AND COMPREHENSION IN VNSCALENDAR SYSTEM
Thien Khai Tran (1) and Dang Tuan Nguyen (2)
(1) Natural Language and Knowledge Engineering Research Group, University of Information Technology, Vietnam National University - Ho Chi Minh City, Vietnam. tkthien@nlke-group.net
(2) Faculty of Computer Science, University of Information Technology, Vietnam National University - Ho Chi Minh City, Vietnam. dangnt@uit.edu.vn
ABSTRACT
This paper presents an overview of the VNSCalendar system, a tool that understands users' voice commands and helps users manage and query their personal calendar through Vietnamese speech. The main feature of this system is that it is equipped with a mechanism for analyzing the syntax and semantics of Vietnamese commands and questions. The syntactic and semantic processing of Vietnamese sentences is handled using DCG (Definite Clause Grammar) and the methods of formal semantics. This is the first system in this field of voice applications to be equipped with an effective semantic processing mechanism for the Vietnamese language. Having been built and tested in a PC environment, our system proves its accuracy, attaining more than 91%.
KEYWORDS
Speech Recognition, Voice Application, Vietnamese, Natural Language Processing, Formal Semantics.
1. INTRODUCTION
In 2012, Vietnam saw many remarkable publications from groups devoted to spoken Vietnamese recognition research at the Institute of Information Technology (Vietnamese Academy of Science and Technology) and the University of Science, VNU-HCM. Worth mentioning are the works of Thang Vu and Mai Luong [13] as well as Quan Vu et al. [1], [3], [5], [9]. These authors concentrated chiefly on improving the accuracy of their voice recognition systems; the system of Quan Vu et al. obtained a precision rate of over 93%, and this group successfully built many voice applications on that base. Nevertheless, none of these applications has yet been accompanied by an efficient semantic processing mechanism, which is essential for helping the system understand commands.
In this paper, we introduce the VNSCalendar system. It is a tool that combines spoken language recognition with written language processing to help users manage and query their personal calendar through Vietnamese speech commands. Our system can recognize many forms of Vietnamese speech commands and questions, convert them into text, resolve their syntactic and semantic analysis, then generate database queries and, finally, return the results to the user. The work of resolving the syntactic parsing and semantic analysis of Vietnamese commands and questions is based on DCG [2], [7] and computational techniques of formal semantics [6], [10], [11].
In this research, we focus on the semantic processing of Vietnamese commands and questions.
For the Vietnamese speech recognition task, we simply use HTK (Hidden Markov Model
Toolkit) [12] and build training data as well as testing data for it.
2. SYSTEM ARCHITECTURE
In accordance with Thien Khai Tran [14], our system is designed to carry out the functions
below:
1. Add events: add events into the calendar by Vietnamese speech.
2. Delete events: remove events from the calendar by Vietnamese speech.
3. Edit events: edit events in the calendar by Vietnamese speech.
4. Query events: query events from the calendar by Vietnamese speech.
VNSCalendar fulfills the above functions according to the following scenario. The interaction
between users and the system can be summarized in these steps:
Step 0: Listening stage.
Step 1: The user speaks to the system in Vietnamese.
Step 2: The spoken sentence is converted into a Vietnamese text sentence by the Speech
Recognizer.
Step 3: The system analyzes the syntactic structure and extracts the key information of the text
sentence.
(3.1) If the input sentence is a command:
If it is an add command:
- The system adds the associated event to the schedule calendar and confirms the
result to the user.
- Return to Step 0.
If it is a delete command:
- The system deletes the associated event from the schedule calendar and confirms
the result to the user.
- Return to Step 0.
If it is an edit command:
- The system deletes the event to be edited, adds the associated event to the
schedule calendar, and then confirms the result to the user.
- Return to Step 0.
(3.2) If the input sentence is a query:
- The system executes the query, searches for information in the database, and shows the
result to the user.
- Return to Step 0.
(3.3) If the syntax is incorrect, the system informs the user, who can then issue another
command/query.
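The interaction loop above can be sketched as follows. This is a minimal Python illustration only, not the authors' C#/Prolog implementation; it stands in for the DCG-based analysis with a toy keyword classifier (the markers thêm, xóa, and gì are grammar terms from Table 2), and it omits the edit branch for brevity:

```python
# Toy sketch of the VNSCalendar dispatch loop (Step 3 of the scenario).
# The real system performs DCG-based syntactic/semantic analysis; here a
# simple marker word classifies the sentence.
def parse(sentence):
    """Classify a recognized text sentence into a command/query kind."""
    words = sentence.split()
    for kind, marker in [("add", "thêm"), ("delete", "xóa"), ("query", "gì")]:
        if marker in words:
            return kind, " ".join(w for w in words if w != marker)
    return "error", sentence          # case (3.3): syntax not recognized

def dispatch(sentence, calendar):
    """Route the sentence to the matching calendar operation."""
    kind, rest = parse(sentence)
    if kind == "add":
        calendar.append(rest)         # case (3.1): add command
        return "added"
    if kind == "delete":
        if rest in calendar:
            calendar.remove(rest)     # case (3.1): delete command
        return "deleted"
    if kind == "query":
        return [e for e in calendar if rest in e]   # case (3.2): query
    return "syntax error"

calendar = []
dispatch("thêm đi học", calendar)     # "add going-to-school"
print(calendar)
```

After each operation the system confirms the result and returns to the listening stage (Step 0).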
To realize these functions according to the above scenario, the system is composed of the
following components:
1. Automatic speech recognizer (ASR): identifies the words the user speaks and converts
them into written text.
2. Vietnamese language processor: resolves the syntactic and semantic representations of
the user's command and query sentences.
3. Central processor:
o Transforms the semantic representations of the command/query sentences into
SQL commands and executes them.
o Filters, organizes, and returns the results to the user.
4. Database: stores schedule information.
Figure 1. Architecture of VNSCalendar (Source: Thien Khai Tran [14])
3. AUTOMATIC SPEECH RECOGNIZER
In the VNSCalendar system, we have used HTK [12] to build the Automatic Speech Recognition
component. Following the approach of Quan Vu et al. [9], we have applied a context-dependent
model based on triphones [12] to recognize keywords and grammar terms.
3.1 Training Data
The speech corpus has 1,045 sentences, for a total of 77 minutes of training audio. All speech
was sampled at 16,000 Hz, 16-bit PCM, in a relatively quiet environment with a single speaker.
The lexicon comprises 96 keywords and 19 grammar terms, as shown in Table 1 and Table 2.
Some entries are single syllables of compound words whose meanings cannot be translated in
isolation.
Table 1. List of keywords (Source: Thien Khai Tran [14])
ăn (eat), ba (three), bài (lesson), bạn (friend), bảy (seven), báo (newspaper), bốn (four), buổi
(session), cà, cáo, chiều (evening), chín (nine), chồng (husband), chủ, con (child), công, cơm
(rice), cùng (with), cuối (end), dạy (teach), đầu (early), đề, đi (go), điện, đọc (read), đồng, đón
(get), du, dự (attend), gặp (visit), giờ (hour), hàng (goods), hai (two), học (study), họp (meeting),
hội, hôm, khách, làm (do), lịch (calendar), lớp (class), mai (tomorrow), mẹ (mother), một (one),
mốt (the day after tomorrow), mười (ten), mươi, này (this), năm (year), nay (this), ngày (day),
nghe (listen), nghiệp, người (person), nhận (receive), nhật, nhậu (drinking), nhà (home), nhạc
(music), ở (in), phê, phim (film), phút (minute), quận (district), sách (book), sáng (morning), sáu
(six), sau (next), sếp (boss), tập (do), tài, tại (at), tác, tám (eight), thầu (bid), thầy (teacher), thân
(relative), thao, thảo, tháng (month), thể, thi (exam), thứ, tiệc (party), tối (night), tới (next), trưa
(afternoon), trường (school), tuần (week), tư (fourth), ty, uống (drink), văn, vợ (wife), với (with),
xem (watch)
Table 2. List of grammar terms (Source: Thien Khai Tran [14])
ai (who), có, gì, hay, khi, không (yes or not), lúc (at/in/on), đâu, nào, nhỉ, sửa (edit), tạo (create),
thành (become), thay (edit), thể, thêm (add), vào (to), vô (into), xóa (delete)
3.2. Steps to build the Automatic Speech Recognizer
Figure 2. Steps to build the Automatic Speech Recognizer. (Source: Thien Khai Tran [14])
4. VIETNAMESE LANGUAGE PROCESSING
The Vietnamese language processing component analyzes the syntax and semantics of
Vietnamese commands and questions. Semantic processing aims at computing the semantic
structures of the commands or questions. These semantic structures are represented in FOL
(First-Order Logic) [2], [7]. In this research, we use techniques of computational semantics [6],
[10], [11].
4.1. Commands
The semantic structures of Add, Del, Edit commands are listed in Table 3, Table 4, Table 5.
Table 3. Semantic structures of Add-command (Source: Thien Khai Tran [14])
1 add(time,work(action,obj))
2 add(time,work(action,obj),place)
3 add(time,work(action,obj),person)
4 add(time,work(action,obj),person,place)
Example 1: Thêm sự kiện dự thầu tại công ty lúc chín giờ thứ ba tuần sau. (Add event of bidding
at the company at 9 am next Tuesday)
The syntactic and semantic rules in DCG are defined as below:
command(P) --> add_34(P), pp_time(PP), {arg(3, P, PP)}.
add_34(P) --> w_add_3(P), w_calendar, vp(VP), np_place(Place), {arg(1, P, VP)},
{arg(2, P, Place)}.
w_add_3(add(X, Y, Z)) --> [thêm].
w_calendar --> [sự, kiện].
vp(work(Action, Obj)) --> verb(Action), noun_comp(Obj).
verb(action(dự)) --> [dự].
noun_comp(obj(thầu)) --> [thầu].
pp_time(time(Gio, Thu, Tuan)) --> pp_hour(Gio), pp_wday(Thu), pp_week(Tuan).
pp_hour(Gio) --> w_at, whathour(Gio), w_hour.
w_at --> [lúc].
whathour(hour(chín)) --> [chín].
w_hour --> [giờ].
pp_wday(Thu) --> w_on, w_wday, whatwday(Thu).
w_on --> [].
w_wday --> [thứ].
whatwday(wday(ba)) --> [ba].
pp_week(Tuan) --> w_on, w_week, whatweek(Tuan).
w_week --> [tuần].
whatweek(week(sau)) --> [sau].
np_place(Place) --> w_at, whatplace(Place).
w_at --> [tại].
whatplace(place(công, ty)) --> [công, ty].
These syntactic and semantic rules determine the semantic structure of this command as below:
add(time(hour(chín), wday(ba), week(sau)), work(action(dự), obj(thầu)), place(công, ty)).
This semantic structure is the structure 2 in Table 3.
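For readers unfamiliar with Prolog, the effect of the DCG rules above can be mimicked by a toy recursive-descent parser. The following Python sketch is only an illustration of the idea, not the actual grammar; tuples stand in for Prolog terms:

```python
# Toy analogue of the DCG rules for Example 1: parse the token list into
# the nested semantic structure add(time(...), work(...), place(...)).
def parse_add(tokens):
    """Parse: thêm sự kiện <verb> <noun> tại <place...> lúc <hour> giờ thứ <wday> tuần <week>."""
    assert tokens[:3] == ["thêm", "sự", "kiện"]     # w_add_3 + w_calendar
    action, obj = tokens[3], tokens[4]              # vp: verb + noun_comp
    assert tokens[5] == "tại"                       # np_place marker
    i = tokens.index("lúc")                         # pp_time starts here
    place = " ".join(tokens[6:i])
    hour = tokens[i + 1]                            # word before "giờ"
    wday = tokens[tokens.index("thứ") + 1]
    week = tokens[tokens.index("tuần") + 1]
    return ("add",
            ("time", ("hour", hour), ("wday", wday), ("week", week)),
            ("work", ("action", action), ("obj", obj)),
            ("place", place))

tokens = "thêm sự kiện dự thầu tại công ty lúc chín giờ thứ ba tuần sau".split()
print(parse_add(tokens))
```

Unlike this flat sketch, the DCG builds the same structure declaratively, and the grammar rules double as a recognizer for well-formed commands.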
Table 4. Semantic structures of Del-command (Source: Thien Khai Tran [14])
5 del(time,work(action,obj))
6 del(time,work(action,obj),place)
7 del(time,work(action,obj),person)
8 del(time,work(action,obj),person,place)
Example 2: Loại bỏ khỏi lịch mười ba giờ ngày hai tám tháng mười hai báo cáo đề tài.
(Delete event of reporting topic at 13 o’clock on December 28th)
The syntactic & semantic rules in DCG are defined as below:
command(P) --> del_21(P), vp(VP), {arg(2, P, VP)}.
del_21(P) --> w_del(P), calendar_d, pp(PP), {arg(1, P, PP)}.
vp(work(Action, Obj)) --> v_work(Action), n_work(Obj).
v_work(action(báo, cáo)) --> [báo, cáo].
n_work(obj(đề, tài)) --> [đề, tài].
w_del(del(X, Y)) --> [loại, bỏ].
calendar_d --> [khỏi, lịch].
pp(time(Gio, Ngay, Thang)) --> hour(Gio), day(Ngay), month(Thang).
hour(Gio) --> w_at, what_hour(Gio), w_hour.
what_hour(hour(mười, ba)) --> [mười, ba].
day(Ngay) --> w_at, w_day, what_day(Ngay).
what_day(mday(hai, tám)) --> [hai, tám].
month(Thang) --> w_at, w_month, what_month(Thang).
what_month(mmonth(mười, hai)) --> [mười, hai].
w_at --> [].
w_hour --> [giờ].
w_day --> [ngày].
w_month --> [tháng].
These syntactic and semantic rules determine the semantic structure of this command as below:
del(time(hour(mười, ba), mday(hai, tám), mmonth(mười, hai)), work(action(báo, cáo),
obj(đề, tài)))
This semantic structure is the structure 5 in Table 4.
Table 5. Semantic structures of Edit-command (Source: Thien Khai Tran [14])
9 edit(del(time, work(action, obj)), add(time, work(action, obj)))
10 edit(del(time, work(action, obj)), add(time, work(action, obj), person))
11 edit(del(time, work(action, obj)), add(time, work(action, obj), place))
12 edit(del(time, work(action, obj)), add(time, work(action, obj), person, place))
13 edit(del(time, work(action, obj), person), add(time, work(action, obj)))
14 edit(del(time, work(action, obj), person), add(time, work(action, obj), person))
15 edit(del(time, work(action, obj), person), add(time, work(action, obj), place))
16 edit(del(time, work(action, obj), person), add(time, work(action, obj), person, place))
17 edit(del(time, work(action, obj), place), add(time, work(action, obj)))
18 edit(del(time, work(action, obj), place), add(time, work(action, obj), person))
19 edit(del(time, work(action, obj), place), add(time, work(action, obj), place))
20 edit(del(time, work(action, obj), place), add(time, work(action, obj), person, place))
21 edit(del(time, work(action, obj), person, place), add(time, work(action, obj)))
22 edit(del(time, work(action, obj), person, place), add(time, work(action, obj), person))
23 edit(del(time, work(action, obj), person, place), add(time, work(action, obj), place))
24 edit(del(time, work(action, obj), person, place), add(time, work(action, obj), person, place))
Example 3: Chỉnh lại dự thầu tại công ty ngày mười tám tháng chín thành đi công tác ngày mười
tám tháng chín. (Change bidding at the company on September 18th to business traveling on
September 18th)
The syntactic and semantic rules determine the semantic structure of this command as below:
edit(del(time(mday(mười, tám), mmonth(chín)), work(action(dự), obj(thầu)), place(công, ty)),
add(time(mday(mười, tám), mmonth(chín)), work(action(đi), obj(công, tác))))
This semantic structure is the structure 17 in Table 5.
4.2. Questions
The semantic structures of questions forms are listed in Table 6, Table 7, Table 8, Table 9, Table
10, Table 11, Table 12.
Table 6. Semantic structures of Yes/No question (Source: Thien Khai Tran [14])
1 yesno(time, action, obj)
2 yesno(time, action, obj, place)
3 yesno(time, action, obj, person)
4 yesno(time, action, obj, person, place)
Example 4: Mai có đi học không? (Going to school tomorrow or not?)
The syntactic and semantic rules determine the semantic structure of this question as below:
yesno(time(dday(mai)), action(đi), obj(học)).
This semantic structure is structure 1 in Table 6.
Table 7. Semantic structures of WHAT question (Source: Thien Khai Tran [14])
5 workQuery(query(action), query(obj), time)
Example 5: Cuối tuần sau có làm gì không? (Any job for next weekend?)
The syntactic & semantic rules in DCG are defined as below:
query(workQuery(Action, Obj, Time)) --> pp_time(Time), interrogative1, verb_query(Action),
noun_query(Obj), interrogative2.
pp_time(time(Tuan)) --> pp_week(Tuan).
pp_week(week(Prep, Tuan)) --> w_at, pp_prep(Prep), w_week, whatweek(Tuan).
pp_prep(prep(cuối)) --> [cuối].
w_week --> [tuần].
whatweek(week(sau)) --> [sau].
interrogative1 --> [có].
interrogative2 --> [không].
verb_query(query(action)) --> w_do.
noun_query(query(obj)) --> w_what.
w_do --> [làm].
w_what --> [gì].
These syntactic and semantic rules determine the semantic structure of this question as below:
workQuery(query(action), query(obj),time(prep(cuối),week(sau))).
This semantic structure is the structure 5 in Table 7.
Table 8. Semantic structures of Go_WHERE question (Source: Thien Khai Tran [14])
6 gowhereQuery(query(obj), verb_go, time)
7 gowhereQuery(query(obj), verb_go, person, time)
8 gowhereQuery(query(obj), verb_go, place, time)
9 gowhereQuery(query(obj), verb_go, person, place, time)
Example 6: Tháng mười đi đâu nhỉ? (Where to go in October?)
The syntactic and semantic rules determine the semantic structure of this question as below:
gowhereQuery(query(obj), verb_go(đi), time(mmonth(mười))).
This semantic structure is the structure 6 in Table 8.
Table 9. Semantic structures of Visit_WHOM question (Source: Thien Khai Tran [14])
10 visitwhomQuery(query(obj), verb_visit, time)
11 visitwhomQuery(query(obj), verb_visit, person, time)
12 visitwhomQuery(query(obj), verb_visit, place, time)
13 visitwhomQuery(query(obj), verb_visit, person, place, time)
Example 7: Tuần sau có gặp ai không nhỉ? (Whom to meet next week?)
The syntactic and semantic rules determine the semantic structure of this question as below:
visitwhomQuery(query(obj),verb_visit(gặp),time(week(sau)))
This semantic structure is the structure 10 in Table 9.
Table 10. Semantic structures of With_WHOM question (Source: Thien Khai Tran [14])
14 personQuery(query(person),action,obj,time)
15 personQuery(query(person),action,obj,place,time)
Example 8: Mốt báo cáo đề tài với ai? (With whom to report the topic the day after tomorrow?)
The syntactic and semantic rules determine the semantic structure of this question as below:
personQuery(query(person),action(báo, cáo),obj(đề, tài),time(dday(mốt))).
This semantic structure is the structure 14 in Table 10.
Table 11. Semantic structures of WHERE question (Source: Thien Khai Tran [14])
16 placeQuery(query(place),action,obj,time)
17 placeQuery(query(place),action,obj,person,time)
Example 9: Mốt báo cáo đề tài với bạn ở đâu? (Where to report the topic with a friend the day after tomorrow?)
The syntactic and semantic rules determine the semantic structure of this question as below:
placeQuery(query(place),action(báo, cáo),obj(đề, tài), person(bạn), time(dday(mốt))).
This semantic structure is the structure 17 in Table 11.
Table 12. Semantic structures of WHEN question (Source: Thien Khai Tran [14])
18 timeQuery(query(time),action,obj)
19 timeQuery(query(time),action,obj,person)
20 timeQuery(query(time),action,obj,person,place)
Example 10: Đi du lịch khi nào? (When to travel?)
The syntactic and semantic rules determine the semantic structure of this question as below:
timeQuery(query(time),action(đi),obj(du lịch)).
This semantic structure is the structure 18 in Table 12.
4.3. Pragmatic Semantics Processing
This research also carries out timing situations in relation to pragmatic semantics in sentences. All
of timing units will be formatted in a standard timing structure: “yyyy:mm:dd hh:mm”.
Table 13. Hypothesis for pragmatic semantics of timing
Keyword | Context | Pragmatic Semantics
Đầu (beginning of) | đầu tuần (beginning of week) | Monday - Tuesday
 | đầu tháng (beginning of month) | day 1 - day 10 of month
 | đầu năm (beginning of year) | January - March
Cuối (end of) | cuối tuần (end of week) | Saturday - Sunday
 | cuối tháng (end of month) | day 20 - day 31 of month
 | cuối năm (end of year) | month 10 - month 12 of year
Sau = Tới (next) | tuần sau (next week) | next Monday - next Sunday
 | tháng sau (next month) | current month (at recording time) + 1
 | năm sau (next year) | current year (at recording time) + 1
Sáng (morning) | | 4:00 - 10:00
Trưa (noon) | | 10:00 - 15:00
Chiều (evening) | | 15:00 - 18:00
Tối (night) | | 18:00 - 24:00
Mai (tomorrow) | ngày mai (tomorrow) | current day + 1
Mốt (the day after tomorrow) | ngày mốt (the day after tomorrow) | current day + 2
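The normalization hypothesized in Table 13 can be sketched as follows. This Python fragment is an illustrative approximation, not the authors' implementation; it handles only the relative-day and session keywords:

```python
from datetime import date, timedelta

# Illustrative normalization of a few timing keywords from Table 13 into
# the standard "yyyy:mm:dd hh:mm" structure.
SESSION_HOURS = {          # session keyword -> (start hour, end hour)
    "sáng": (4, 10),       # morning
    "trưa": (10, 15),      # noon
    "chiều": (15, 18),     # evening
    "tối": (18, 24),       # night
}

def resolve_day(keyword, today):
    """Map a relative-day keyword to a concrete date."""
    offset = {"nay": 0, "mai": 1, "mốt": 2}[keyword]   # today / +1 / +2
    return today + timedelta(days=offset)

def to_range(day_kw, session_kw, today):
    """Combine a day keyword and a session keyword into a timing range."""
    d = resolve_day(day_kw, today)
    start, end = SESSION_HOURS[session_kw]
    fmt = "%Y:%m:%d {:02d}:00"
    return d.strftime(fmt.format(start)), d.strftime(fmt.format(end))

# "sáng mai" (tomorrow morning) recorded on Sep. 13, 2012
print(to_range("mai", "sáng", date(2012, 9, 13)))
```

This reproduces the first row of Table 14: recorded on September 13, 2012, the phrase "sáng mai" maps to 4:00-10:00 on September 14, 2012.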
Table 14. Examples of schedule time prediction (Source: Thien Khai Tran [14])
Recording time | Timing phrase (prep / session / day / week / month) | Prediction time
Thứ 5, ngày 13-9-2012 (Thursday, Sep. 13, 2012) | sáng (morning) + mai (tomorrow) | 4h - 10h, 14/9/2012 (4am - 10am, Sep. 14, 2012)
 | tuần sau (next week) | từ 17/9 - 23/9 (Sep. 17 - Sep. 23)
 | cuối tháng tới (end of next month) | từ 20/10 - 31/10 (Oct. 20 - Oct. 31)
 | tuần này (this week) | 10/9 - 16/9 (Sep. 10 - Sep. 16)
Table 15. Examples of timing parts of commands and questions (Source: Thien Khai Tran [14])
Command sentence | Schedule time | Query sentence
8g sáng thứ 3 tuần sau đi học (8am next Tuesday, going to school) | 8h ngày 18/9/2012 (8am, Sep. 18, 2012) | Ngày 18-9 có đi học không? (Going to school on Sep. 18 or not?)
Gặp khách hàng 15g30 ngày mốt (Meeting a customer at 15:30 the day after tomorrow) | 15h ngày 15/9/2012 (3pm, Sep. 15, 2012) | Chiều ngày 15 có làm gì không? (Any job for the afternoon of the 15th?)
Ngày 28-10 đi du lịch với nhà (On Oct. 28, travelling with family) | Ngày 28-10 (Oct. 28) | Cuối tháng sau có đi đâu không? (Going anywhere at the end of next month?)
Example 11a: Thêm sự kiện dự thầu tại công ty lúc chín giờ thứ ba tuần sau. (Add event of
bidding at the company at 9 am next Tuesday)
The semantic structure: add(time(“2012:09:25 09:00”), action(dự), obj(thầu), place(công ty)).
Example 11b: Cuối tuần sau có làm gì không? (Any job for next weekend? )
The semantic structure: workQuery(query(action), query(obj), time(“2012:09:29 00:00” –
“2012:09:30 00:00”))
The SQL command generated from the above semantic structure will query all records in the
database whose “time” field satisfies: “2012:09:29 00:00” ≤ time ≤ “2012:09:30 00:00”.
5. EXPERIMENTS AND EVALUATION
As mentioned in Thien Khai Tran [14], we have separately tested the two components: the
Automatic Speech Recognizer and the Vietnamese Language Processing. Afterwards, an
experimental evaluation of the whole system was carried out.
5.1. Speech Recognizer
5.1.1. Evaluation Score
Speech recognition performance is typically evaluated in terms of Word Error Rate (WER),
computed as WER = (S + D + I) / N x 100% [12], where N is the total number of words in the
testing data, S is the total number of substitution errors, D the total number of deletion errors,
and I the total number of insertion errors.
We instead report Word Accuracy (WA) [12], computed as WA = (1 - (S + D + I) / N) x 100%,
to measure the performance of the speech recognizer.
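The counts S, D, and I in these formulas come from a word-level edit-distance alignment. A standard sketch of the computation (a generic Levenshtein implementation, not the HTK scoring tool itself):

```python
# Word Error Rate via word-level edit distance (Levenshtein); WA = 100% - WER.
def wer(reference, hypothesis):
    """Word Error Rate between two sentences, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                              # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                              # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("thêm sự kiện dự thầu", "thêm sự kiện dự thầu"))   # 0.0
print(wer("thêm sự kiện dự thầu", "thêm sự kiện thầu"))      # 20.0
```

In the second call, one word of the five-word reference is deleted, so WER is 20% and WA is 80%.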
5.1.2. Performance
We have evaluated the accuracy of the Vietnamese speech recognizer component on 60
sentences from a single speaker in a relatively quiet environment. The results show that WA
reaches 98.09% and the average processing time is 1.4 seconds per sentence.
There are two main reasons for this high score: (1) this is single-speaker-dependent speech
recognition; (2) we have used a context-dependent model based on triphones for the speech
recognizer component and defined strict grammar rules for the recognizer.
5.2. Vietnamese Language Processing
We have carried out manual tests on 60 sentences to evaluate the performance of the Vietnamese
processing component. They are pattern sentences drawn from the 48 semantic structures built
into the system. The component is capable of handling all the pattern sentences.
5.3. System Experiments
The system has been built as a PC-based application with MS Visual C# 2010 and SWI-Prolog
version 6.2.1 for Windows NT/2000/XP/Vista/7.
Table 16. Experimental Environments
Number of Commands 38
Number of Questions 22
Environment in-door
Sampling rate 16 kHz
Quantization 16 bits
Format PCM
The system correctly analyzes and executes 55 of the 60 spoken Vietnamese commands. The
failure cases all occurred at the speech recognition step. Thus, our system demonstrates an
accuracy of more than 91%, with an average feedback time of about 2.8 seconds per command.
We have also evaluated the system's capacity for handling semantic commands against other
approaches, namely keyword matching and phrase matching, such as the longest matching
algorithm used by Quan Vu et al. [9].
With those approaches, 39 of the 60 tested sentences were handled incorrectly (65%). This
demonstrates the considerable improvement in correctness of the VNSCalendar system in
handling semantic commands and questions.
6. CONCLUSION
This paper has presented the architectural model of the VNSCalendar system as well as our
approach to building it. The Vietnamese Language Processing component, which can analyze
the syntax and semantics of several forms of Vietnamese speech commands and questions, is the
core of this system. With this research, we have provided evidence of the importance of syntax
and semantics processing in voice applications, especially Vietnamese speech applications. In
the next steps, our main tasks are to move toward speaker-independent speech recognition and to
widen the vocabulary, in order to improve this application and to develop similar applications
based on this research background.
REFERENCES
[1] Duong Dau, Minh Le, Cuong Le and Quan Vu, (2012). “A Robust Vietnamese Voice Server for
Automated Directory Assistance Application”. RIVF-VLSP 2012. Ho Chi Minh City, Viet Nam.
[2] Fernando C. N. Pereira and Stuart M. Shieber, (2005). Prolog and Natural-Language Analysis.
Microtome Publishing, Massachusetts.
[3] Hue Nguyen, Truong Tran, Nhi Le, Nhut Pham and Quan Vu, (2012). “iSago: The Vietnamese
Mobile Speech Assistant for Food-court and Restaurant Location”. RIVF-VLSP 2012. Ho Chi Minh
City, Viet Nam.
[4] Jan Wielemaker, (2012). SWI-Prolog version 6.2.2. Internet: www.swi-prolog.org. [Nov. 22, 2012].
[5] Nhut Pham and Quan Vu, (2010). “A Spoken Dialog System for Stock Information Inquiry”. in
Proc. IT@EDU. Ho Chi Minh City, Viet Nam.
[6] Patrick Blackburn, Johan Bos, (2007). Representation and Inference for Natural Language: A First
Course in Computational Semantics. CSLI Press, Chicago.
[7] Pierre M. Nugues, (2006). An Introduction to Language Processing with Perl & Prolog. Springer-
Verlag, Berlin.
[8] Po-Chuan-Lin, (2010). “Personal Speech Calendar with Timing Keywords Aware and Schedule
Time Prediction Functions”. TENCON 2010 IEEE, Japan, pp. 746 - 750.
[9] Quan Vu et al., (2012). “Nghiên cứu xây dựng hệ thống Voice Server và ứng dụng cho các dịch vụ
trả lời tự động qua điện thoại”. Technical report, Research project, HCM City Department of Science
and Technology, Viet Nam.
[10] Richard Montague (1974). Formal Philosophy: Selected Papers of Richard Montague. Bell & Howell
Information & Lea, New Haven.
[11] Sandiway Fong, (2006). LING 364: Introduction to Formal Semantics.
www.dingo.sbs.arizona.edu/~sandiway. [Nov. 22, 2012].
[12] Steve Young et al., (2006). The HTK Book (version 3.4). [On-line]. Available:
www.htk.eng.cam.ac.uk/docs/docs.shtml [Nov. 1, 2012].
[13] Thang Vu and Mai Luong, (2012). “The Development of Vietnamese Corpora Toward Speech
Translation System”. RIVF-VLSP 2012. Ho Chi Minh City, Viet Nam.
[14] Thien Khai Tran, (2013). “Xây dựng công cụ trợ giúp quản lý và truy vấn lịch cá nhân bằng tiếng
nói”. Master’s thesis. Faculty of Computer Science, University of Information Technology, Vietnam
National University - Ho Chi Minh City, Viet Nam.