Abstract
Natural language ambiguity arises when a word has multiple meanings or senses. This paper discusses natural language ambiguity and its types, and proposes a knowledge-based solution for resolving the various types of ambiguity that occur in Marathi-language text. The task of resolving semantic and lexical ambiguity in words to obtain the intended sense is referred to as Word Sense Disambiguation (WSD). Marathi is the official and most commonly spoken language of the state of Maharashtra in India. Many Marathi words are spelled and pronounced identically yet differ semantically (in meaning or sense). During automatic translation, such words lead to ambiguity. Our method removes this ambiguity by identifying the correct sense of the given text from the predefined senses available in Marathi Wordnet, using word and sentence rules. The method handles only word-level ambiguity; structural ambiguity is not addressed by this system. The system may be used as a subsystem in other Natural Language Processing (NLP) applications.
Keywords: Word Sense Disambiguation, Natural Language Processing, Marathi, Marathi Wordnet, ambiguity, knowledge-based
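The knowledge-based approach described above can be illustrated with a minimal gloss-overlap sketch: the chosen sense is the one whose definition shares the most words with the surrounding context. The sense inventory below is a hypothetical stand-in (with English placeholders) for the sense definitions Marathi Wordnet would supply, not the paper's actual rule set.

```python
# Illustrative sketch only: a minimal gloss-overlap disambiguator in the
# spirit of knowledge-based WSD. The sense inventory is a hypothetical
# stand-in for entries that would come from Marathi Wordnet.

def disambiguate(word, context_words, sense_inventory):
    """Pick the sense whose definition shares the most words with the context."""
    best_sense, best_overlap = None, -1
    for sense, definition in sense_inventory[word].items():
        overlap = len(set(definition.split()) & set(context_words))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Hypothetical two-sense entry (English placeholders for readability).
senses = {
    "bank": {
        "finance": "institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water",
    }
}

context = "he deposited money at the bank".split()
print(disambiguate("bank", context, senses))  # prints "finance"
```

A real system would draw the definitions from the wordnet database and apply the paper's word and sentence rules on top of this overlap score.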
A SURVEY OF GRAMMAR CHECKERS FOR NATURAL LANGUAGES
This document summarizes and reviews various grammar checkers for natural languages. It begins by defining key concepts in natural language processing like computational linguistics and grammar checking. It then describes the general working of grammar checkers, which involves preprocessing text, analyzing morphology and syntax, and identifying grammatical errors. The document surveys grammar checking approaches for several languages like rule-based, statistical, and hybrid methods. Specific grammar checkers are discussed for languages like Afan Oromo, Amharic, Swedish, Icelandic, Nepali, and Portuguese. The review concludes by analyzing the features and limitations of existing grammar checking systems.
Investigating the Difficulties Faced by Iraqi EFL Learners in Using Light Verbs
Zahraa Adnan Fadhil Al-Murib,
Department of English, College of Education, Islamic University, Babylon, Iraq
This study deals with light verb constructions as syntactically functional and semantically bleached. It aims to investigate the difficulties faced by Iraqi EFL learners (fourth-year university students) in recognizing and producing this kind of construction. To achieve this aim, it is hypothesized that Iraqi EFL learners are unable to collocate the light verb with its proper noun phrase at both the recognition and production levels. To achieve the aim of the study and examine its hypothesis, the following procedures are adopted: (1) presenting a theoretical background on light verb constructions, including their definition, structure, and relationship with collocations; (2) designing a diagnostic test to investigate the difficulties the learners might face when exposed to these constructions; (3) analyzing the findings of the test to reveal the learners' recognition and performance in dealing with light verb constructions.
Keywords: Light Verb Constructions, Collocations, Difficulties
The Sixth International Conference on Languages, Linguistics, Translation and Literature
9-10 October 2021 , Ahwaz
For more information, please visit the conference website:
WWW.LLLD.IR
This document discusses natural language processing (NLP) and its role in augmentative and alternative communication. It begins with introducing NLP as a field of artificial intelligence that aims to allow computers to understand and process human language. It then describes augmentative and alternative communication as methods of non-verbal communication that are useful for those with speech impairments. The document goes on to discuss the different levels of NLP processing from phonology to pragmatics. It also outlines common approaches to NLP, including symbolic and statistical methods. The role of NLP is seen as enabling computerized communication to support augmentative communication.
The Ekegusii Determiner Phrase Analysis in the Minimalist Program
Basweti Nobert
Among recent syntactic developments, the noun phrase has been reanalyzed as a determiner phrase (DP). This study analyses the Ekegusii determiner phrase (DP), inquiring into the relationship between agreement at the INFL (sentence) level and concord in the noun phrase (determiner phrase). It hypothesizes that Ekegusii sentential agreement has a symmetrical relationship with Ekegusii DP-internal concord, and that feature-checking theory and Full Interpretation (FI) in the Minimalist Program are adequate for analysing the internal structure of the Ekegusii DP. Employing the Minimalist Program (MP), the study first establishes the domain of the NP within the Ekegusii DP and then investigates the adequacy of the MP in analysing the Ekegusii DP. It also seeks to establish the order of determiners in the DP between the D-head and the NP complement. The study concludes that the principles of feature checking and Full Interpretation in the Minimalist Program are jointly crucial in ensuring that Ekegusii constructions (the DP and even the sentence) are grammatical (converge), underscoring the adequacy of the MP for Ekegusii DP analysis.
Automatic classification of Bengali sentences based on sense definitions pres...
Based on the sense definitions of words available in the Bengali WordNet, an attempt is made to classify Bengali sentences automatically into different groups according to their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of a particular ambiguous lexical item is collected from the Bengali WordNet. On an experimental basis, we have used the Naive Bayes probabilistic model as a classifier of sentences. We have applied the algorithm to 1747 sentences containing a particular Bengali lexical item which, because of its ambiguous nature, can trigger different senses that give the sentences different meanings. In our experiment we achieved around 84% accuracy in sense classification over the total input sentences. We analyzed the residual sentences that were not classified correctly and found that, in many cases, wrong syntactic structures and insufficient semantic information are the main hurdles in the semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.
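The Naive Bayes sense classification described above can be sketched as follows. Sentences containing an ambiguous word are labelled with its sense, and a new sentence is assigned the most probable sense under a bag-of-words model with Laplace smoothing. The toy English data stands in for the Bengali corpus, which is not reproduced here.

```python
# Minimal Naive Bayes sense classifier in the spirit of the study above.
import math
from collections import Counter, defaultdict

def train(labelled_sentences):
    """labelled_sentences: list of (sense, sentence) pairs."""
    priors, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for sense, sent in labelled_sentences:
        priors[sense] += 1
        for w in sent.split():
            word_counts[sense][w] += 1
            vocab.add(w)
    return priors, word_counts, vocab

def classify(sentence, priors, word_counts, vocab):
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for sense in priors:
        n = sum(word_counts[sense].values())
        lp = math.log(priors[sense] / total)
        for w in sentence.split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[sense][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

data = [
    ("finance", "the bank approved the loan"),
    ("finance", "she opened an account at the bank"),
    ("river", "they fished from the bank of the river"),
    ("river", "the muddy bank of the stream"),
]
model = train(data)
print(classify("the bank issued a loan", *model))  # prints "finance"
```

The paper's experiment applies the same idea at scale, with 1747 corpus sentences in place of this toy data.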
Attitude of Teachers and Students towards Classroom Code Switching
Stoic Mills
This document summarizes a research study on the attitudes of teachers and students towards code switching in English literature classes at the university level in Pakistan. A questionnaire was used to collect data from 12 teachers and 288 students from 4 universities. The findings show that most students and teachers have a positive attitude towards code switching between English, Urdu, and Punjabi in class. Students reported that code switching helps them better understand concepts and does not confuse them. Teachers indicated they code switch for communication, control, and explanation purposes. Overall, the study found that code switching is viewed positively and as beneficial for teaching and learning in multilingual university classrooms in Pakistan.
Relationship between Creativity and Tolerance of Ambiguity to Understand Metaphorical Polysemy: A Pilot Study
Maha Ounis,
Department of English, Faculty of Letters and Humanities, University of Sfax, Tunisia
This study focuses on lexical ambiguity, since polysemous words have been shown to hinder the understanding of meaning. In an attempt to operationalize polysemous words from a cognitive perspective, the researcher deduces that metaphorical polysemy yields words with basic and peripheral (or metaphorical) senses. Participants were asked to complete the Multiple Stimulus Types Ambiguity Tolerance Scale II (MSTAT II). They then took the Wallach and Kogan Creativity Test, which revealed a slight positive relationship with MSTAT II. Furthermore, the results show two things. First, there is a positive relationship between creativity and semi-technical vocabulary test scores. Second, there is a link between creativity level and understanding the prototypical meanings of words. By contrast, the correlation between MSTAT II and prototypical-meaning test scores is very weak.
Keywords: Adult Language Learners, Multilingualism, Language Testing, Language Learning, Foreign Languages
Role of Speech Therapy in Overcoming Lexical Deficit in Adult Broca’s Aphasia
Tanzeela Abid & Dr. Habibullah Pathan,
English Language Development Centre, Faculty of Science, Technology and Humanities, Mehran University of Engineering and Technology, Pakistan
This is an exploratory, qualitative study; its unit of exploration is adult Broca's aphasic patients. The paper aims to explore the function and value of speech therapy for adult Broca's aphasia. Aphasia is an after-effect of brain damage, commonly in the left hemisphere, that disrupts the language faculty. The present study focuses on the lexical aspect of language, in which an individual has trouble processing words. In Broca's aphasia, the affected individual suffers from a diminished capability for speaking and communication; to recover such diminished capabilities, speech therapy is utilized. This study investigates the effectiveness of speech therapy: how does speech therapy help adults with Broca's aphasia recover their speaking or conversing skills? The participants of the study are speech therapists. Purposeful sampling, specifically snowball sampling, has been undertaken. Semi-structured interviews were conducted with five speech therapists and analyzed through thematic analysis in light of the Sketch Model of De Ruiter and De Beer (2013). The findings suggest that speech therapy may prove helpful for Broca's aphasia patients in recovering their communication capabilities, but it requires considerable time (a minimum of six months). Moreover, recovery depends on certain factors such as age, level of disorder, and willingness.
Keywords: Broca’s Aphasia, Lexical Deficit, Speech Therapy, Communication, Speaking Skills
Full Articles (Volume Two) - The Sixth International Conference on Languages, Linguistics, Translation and Literature
Ahwaz, Iran
9-10 October 2021
For more information, please visit the conference website:
WWW.LLLD.IR
--- International Standard Book Number (ISBN): 978-622-94212-0-8
--- According to the governmental approval (The Ministry): 2260614
--- Iranian National Standard Number of Book (Number of National Library of Islamic Republic of Iran): 8679332
--- The Dewey Decimal Classification: 410
--- The Library of Congress Classification: P23
--- Publisher: Ahwaz Publication of Research and Sciences (The Ministry Approval Number: 16171)
Please feel free to write if there is any query.
The Conference Secretariat,
Ahwaz 61335-4619 Iran
(+98) 61-32931199
(+98) 61-32931198
(+98) 916-5088772 (WhatsApp Number)
WWW.LLLD.IR, Email: info@pahi.ir
Words can have more than one distinct meaning, and many words can be interpreted in multiple ways depending on the context in which they occur. The process of automatically identifying the meaning of a polysemous word in a sentence is a fundamental task in Natural Language Processing (NLP), and this phenomenon poses challenges to NLP systems. There have been many efforts on word sense disambiguation for English; however, relatively little effort has been devoted to Amharic. Many NLP applications, such as Machine Translation, Information Retrieval, Question Answering, and Information Extraction, require this task, which occurs at the semantic level. In this thesis, a knowledge-based word sense disambiguation method that employs Amharic WordNet is developed. Knowledge-based Amharic WSD extracts knowledge from word definitions and from relations among words and senses. The proposed system consists of preprocessing, morphological analysis, and disambiguation components, alongside the Amharic WordNet database. Preprocessing prepares the input sentence for morphological analysis, and morphological analysis reduces the various forms of a word to a single root or stem. Amharic WordNet contains words along with their different meanings, synsets, and semantic relations among concepts. Finally, the disambiguation component identifies the ambiguous words and assigns the appropriate sense to each ambiguous word in a sentence, using Amharic WordNet together with sense overlap and related words.
We have evaluated the knowledge-based Amharic word sense disambiguation system using Amharic WordNet by conducting two experiments. The first evaluates the effect of Amharic WordNet with and without a morphological analyzer, and the second determines an optimal window size for Amharic WSD. For Amharic WordNet with a morphological analyzer and Amharic WordNet without a morphological analyzer, we achieved accuracies of 57.5% and 80%, respectively. In the second experiment, we found that a two-word window on each side of the ambiguous word is sufficient for Amharic WSD. The test results show that the proposed WSD methods perform better than previous Amharic WSD methods.
Keywords: Natural Language Processing, Amharic WordNet, Word Sense Disambiguation,
Knowledge Based Approach, Lesk Algorithm
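The context-window step of a Lesk-style disambiguator, as evaluated in the thesis above, can be sketched as follows: only the two words on each side of the ambiguous word are compared against each sense gloss. The glosses here are hypothetical English placeholders for Amharic WordNet entries.

```python
# Sketch of window-restricted Lesk overlap: only the +/- `size` words around
# the target are matched against each sense gloss. Glosses are illustrative
# placeholders, not actual Amharic WordNet content.

def context_window(tokens, target_index, size=2):
    lo = max(0, target_index - size)
    return tokens[lo:target_index] + tokens[target_index + 1:target_index + 1 + size]

def lesk_window(tokens, target_index, glosses, size=2):
    """Return the sense whose gloss overlaps most with the +/- size window."""
    window = set(context_window(tokens, target_index, size))
    return max(glosses, key=lambda s: len(set(glosses[s].split()) & window))

tokens = "he sat on the bank of the river".split()
glosses = {
    "finance": "institution for deposits and loans",
    "river": "land along the edge of a river",
}
print(lesk_window(tokens, tokens.index("bank"), glosses))  # prints "river"
```

Varying `size` here corresponds to the thesis's window-size experiment, which found two words on each side sufficient.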
کتیب الملخصات - المؤتمر الدولي السادس حول القضايا الراهنة للغات، علم اللغة، الترجمة و الأدب
9-10 أكتوبر 2021 ، الأهواز
لمزید من المعلومات، ﯾرﺟﯽ زﯾﺎرة ﻣوﻗﻌﻧﺎ اﻹﻟﮐﺗروﻧﻲ : WWW.LLLD.IR
لا تتردد فی مراسلتنا للاجابة عن ای استفسارات.
اللجنة المنظمة للمؤتمر،
الأهواز / الصندوق البريدی 61335-4619:
الهاتف :32931199-61 (98+)
الفاکس:32931198-61(98+)
النقال و رقم للتواصل عبر الواتس اب : 9165088772(98+)
WWW.LLLD.IR، البريد اﻹﻟﮑﺘﺮوﻧﻲ: info@pahi.ir
This presentation covers Word Sense Disambiguation. It explains word senses in terms of the Python language; the same can also be done using NLTK.
A NEW METHOD OF TEACHING FIGURATIVE EXPRESSIONS TO IRANIAN LANGUAGE LEARNERS
In teaching languages, if we consider only the direct relationship between form and meaning and leave psycholinguistics aside, the approach will not be successful, and language learners will not be able to communicate successfully in real contexts. The present study intends to answer this question: is the teaching method in which the salient meaning is taught more successful than methods in which the literal or figurative meaning is taught? To answer the research question, 30 students were selected and divided into three groups of ten. Twenty figurative expressions were taught to every group: group one was taught the figurative meaning of every expression, group two the literal meaning, and group three the salient meaning. The three groups were then tested. After analyzing the data, we concluded that there was a significant difference in mean grades between the classes, and the class trained under the graded salience hypothesis was more successful. This shows that traditional teaching methods must be revised.
Investigating the Integration of Culture Teaching in Foreign Language Classroom: A Case Study
Dr. Samah Benzerroug (Department of English) & Dr. Souhila Benzerroug (Department of French),
Teacher Training College of Bouzareah, Algiers, Algeria
Many scholars argue that language and culture are closely related, and hence the teaching of a foreign language cannot take place without the teaching of its corresponding culture, which helps promote language learning and enhance learners' motivation and performance (Corbett, J. (2003); (1996); Hinkel, E. (1999); Kramsch, C. (2006)). This being the case, the present study aims to put emphasis on the importance and significance of integrating culture teaching into the foreign language classroom in the Algerian school. It seeks to investigate whether foreign language teachers grant significant value and interest to the foreign language culture. Therefore, a descriptive analysis of the English and French textbooks of secondary education was carried out to identify and examine the way the cultural dimensions are dealt with. In addition, a survey was conducted by addressing a questionnaire to a number of secondary school teachers of English and French to investigate to what extent they consider culture teaching in their classrooms. The results revealed that, despite a move towards fostering culture teaching, the textbooks still offer few tasks that deal with cultural aspects, and teachers are still unfamiliar with the techniques to promote it in the classroom; thus they neglect culture teaching and prefer to focus on other aspects of the class such as accuracy, fluency, and language-skill development. In light of these findings, a number of considerable implications and recommendations are presented to foreign language teachers and language-policy decision-makers to stress the importance of integrating culture teaching and implementing it adequately in the classroom.
Keywords: Foreign Language, Culture, Teaching, Integrating, Classroom
This document provides an overview and summary of key concepts from Dr. Tajeddin's text analysis course in the fall of 2014. It discusses concepts like actual language usage, norms of usage as seen through corpus analysis, patterns of collocation identified through frequency analysis, how semantic prosodies can explain collocations, and concludes that computer analysis of corpora provides norms against which individual texts can be analyzed for implicatures or deviations from expectations.
This document summarizes a study that examines the relationship between oral and written discourse in second language English speakers. The study hypothesizes that there is a positive working relationship between the two. Data was collected from a French doctoral student, including a writing sample and recorded speech sample. While the results showed a weak relationship, analysis of the speech without fillers revealed native-like patterns. The document reviews theories on language acquisition and the influence of first language on second language learning.
Ali Akbar Dehkhoda, the prominent lexicographer, describes a person who has difficulty grasping knowledge as someone who "cannot understand something without knowing all its details." If the knowledge a person requires is in a language other than their mother tongue, access to it will surely meet special difficulties resulting from the person's lack of mastery over the second language. Any project that can monitor knowledge sources written in English and render them in the user's language by employing a simple, understandable model can be a knowledge-based project with a world view of text simplification. This article creates a knowledge system, investigates some algorithms for analyzing the contents of complex texts, and presents solutions for turning such texts into simple and understandable ones. Texts are automatically analyzed and their ambiguous points identified by software, but it is the author or the human agent who makes the decisions concerning removal of the ambiguities or correction of the texts.
Automatic text simplification evaluation aspects
The document discusses automatic text simplification (ATS) and methods for evaluating ATS systems. It provides an overview of common evaluation metrics like BLEU, SARI, FKGL, and SAMSA and compares their abilities to measure simplicity, meaning preservation, and grammaticality. The document also outlines a proposed project to build a corpus and develop a graded reading scale to guide the simplification of Arabic fiction works for educational purposes.
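Of the metrics listed above, FKGL (Flesch-Kincaid Grade Level) is the simplest to illustrate, as it reduces to a closed-form readability formula over word, sentence, and syllable counts. The syllable counter below is a rough vowel-group heuristic, not a dictionary-based one, so the score is approximate.

```python
# Flesch-Kincaid Grade Level:
#   FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
import re

def count_syllables(word):
    # Count each run of consecutive vowels as one syllable (approximation).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(sentences):
    """sentences: list of strings, one sentence each."""
    words = [w for s in sentences for w in s.split()]
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

score = fkgl(["The cat sat on the mat", "It was warm"])
print(round(score, 2))  # short, monosyllabic text scores a very low grade level
```

BLEU, SARI, and SAMSA, by contrast, compare system output against references and cannot be computed from the simplified text alone.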
The document discusses different approaches to machine translation, including rule-based, statistical, example-based, and dictionary-based approaches. It provides details on each approach, such as rule-based methods using linguistic rules and extensive lexicons, statistical methods relying on probabilistic models trained on parallel texts, example-based methods translating by analogy to examples in aligned corpora, and dictionary-based methods translating words directly with or without morphological analysis. The document also compares transfer-based and interlingual rule-based machine translation, noting interlingual methods aim to represent the source text independently of languages.
Word sense disambiguation using WSD-specific WordNet of polysemy words
This paper presents a new model of WordNet that is used to disambiguate the correct sense of a polysemy word based on clue words. The related words for each sense of a polysemy word, as well as for a single-sense word, are referred to as the clue words. The conventional WordNet organizes nouns, verbs, adjectives, and adverbs into sets of synonyms called synsets, each expressing a different concept. In contrast to the structure of WordNet, we developed a new model that organizes the different senses of polysemy words, as well as single-sense words, based on clue words. These clue words for each sense of a polysemy word, as well as for each single-sense word, are used to disambiguate the correct meaning of the polysemy word in a given context using knowledge-based Word Sense Disambiguation (WSD) algorithms. A clue word can be a noun, verb, adjective, or adverb.
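The clue-word organization described above can be sketched with a small data structure: each sense of a polysemy word is stored with a set of clue words, and the sense sharing the most clue words with the context wins. The clue lists below are illustrative inventions, not the paper's actual WordNet model.

```python
# Sketch of clue-word disambiguation: each sense carries a set of clue words,
# and the sense with maximal clue overlap against the context is selected.
# The clue sets are hypothetical examples.

clue_words = {
    "bat": {
        "animal": {"cave", "wings", "nocturnal", "fly"},
        "sports": {"cricket", "ball", "hit", "player"},
    }
}

def disambiguate_by_clues(word, context, clues):
    context_set = set(context)
    return max(clues[word], key=lambda sense: len(clues[word][sense] & context_set))

sentence = "the player swung the bat and hit the ball".split()
print(disambiguate_by_clues("bat", sentence, clue_words))  # prints "sports"
```

The difference from synset-based WordNet is that the lookup key is the sense's clue set rather than a synonym set, which is what makes the model WSD-specific.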
CITIZENSHIP ACT (591) OF GHANA AS A LOGIC THEOREM AND ITS SEMANTIC IMPLICATIONS
This document discusses formalizing the Citizenship Act of Ghana as a logic theorem. It begins with an abstract that introduces formalizing excerpts of the Act in First Order Logic to allow for semantic analysis by machines. The results reveal semantic consequences of the natural construction of the legal text and address ambiguities. The paper then provides context on challenges with legal language complexity and a literature review on past research formalizing legal texts with logic. It proceeds to summarize sections of the Citizenship Act and discusses modeling it with a logic theorem to establish clarity and reveal any semantic errors or ambiguities in the natural language text.
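The kind of First Order Logic formalization described above can be illustrated with a toy example: a hypothetical rule in the spirit of the Act ("a person born in Ghana to a citizen parent is a citizen") encoded as a Horn clause and evaluated by forward chaining. The predicates and individuals are invented for illustration and are not the paper's actual formalization.

```python
# Toy forward chaining over a single hypothetical Horn clause:
#   citizen(X) <- born_in_ghana(X) AND parent_of(P, X) AND citizen(P)
# Facts and predicate names are illustrative only.

facts = {("born_in_ghana", "ama"), ("citizen", "kofi"), ("parent_of", "kofi", "ama")}

def rule_citizen_by_birth(facts):
    """Derive citizen(child) for every matching parent_of fact."""
    derived = set()
    for fact in facts:
        if fact[0] == "parent_of":
            _, parent, child = fact
            if ("born_in_ghana", child) in facts and ("citizen", parent) in facts:
                derived.add(("citizen", child))
    return derived

facts |= rule_citizen_by_birth(facts)
print(("citizen", "ama") in facts)  # prints True
```

Formalizing the statute this way is what allows a machine to deduce consequences of the text and to surface ambiguities where the natural language admits more than one reading.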
Natural Language Toolkit (NLTK) is a generic platform for processing data in various natural (human) languages, and it also provides resources for Indian languages such as Hindi, Bangla, Marathi and so on. In the proposed work, the repositories provided by NLTK are used to carry out the processing of Hindi text and, further, the analysis of Multi-word Expressions (MWEs). MWEs are lexical items that can be decomposed into multiple lexemes and display lexical, syntactic, semantic, pragmatic and statistical idiomaticity. The main focus of this paper is the processing and analysis of MWEs in Hindi text. The corpus used for Hindi text processing is taken from the famous Hindi novel "KaramaBhumi" by Munshi PremChand. The result analysis is done using the Hindi corpus provided by the Resource Centre for Indian Language Technology Solutions (CFILT). Results are analysed to justify the accuracy of the proposed work.
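As a rough illustration of the statistical-idiomaticity criterion for MWEs, candidate expressions can be ranked by pointwise mutual information (PMI) over adjacent word pairs. The toy English tokens below stand in for the Hindi corpus, and this scoring is a simplification of what NLTK's collocation tools provide.

```python
import math
from collections import Counter

def pmi_bigrams(tokens):
    """Rank adjacent word pairs by pointwise mutual information:
    PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) ).
    High-PMI pairs co-occur more than chance predicts and are
    candidate multi-word expressions."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {
        pair: math.log2((count / (n - 1)) /
                        ((unigrams[pair[0]] / n) * (unigrams[pair[1]] / n)))
        for pair, count in bigrams.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

toks = "new york is big and new york is far and the dog is big".split()
ranked = pmi_bigrams(toks)  # ("new", "york") outranks loose pairs like ("is", "big")
```

A real MWE pipeline would add frequency thresholds and the lexical/syntactic idiomaticity tests the abstract mentions; PMI alone only captures the statistical criterion.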
The document discusses information retrieval based on word semantics in Arabic texts. It covers several key areas: (1) the challenges of natural language processing in Arabic due to its rich morphology; (2) the process of morphological analysis including preprocessing, stemming, and indexing terms; (3) the research problems of synonymy and polysemy in information retrieval; and (4) semantic approaches to address these problems including automatic discovery of similar words and synonym-based search methods.
This document discusses discourse analysis and discourse relations in Arabic texts. It defines discourse as written or spoken language and explains that texts have cohesive devices that link words, clauses, and sentences. There are two types of discourse relations: explicitly signaled relations indicated by connectives, and implicitly inferred relations without signaling. The document also describes different types of discourse connectives in Arabic and presents an annotation scheme for connectives and their arguments. It discusses related work on Arabic corpora and efforts to collect and analyze Arabic connectives. Finally, it introduces Rhetorical Structure Theory as one of the most popular theories for representing discourse structure.
CITIZENSHIP ACT (591) OF GHANA AS A LOGIC THEOREM AND ITS SEMANTIC IMPLICATIONS
This document summarizes a paper that presents excerpts from Ghana's Citizenship Act formalized as a logic theorem. The paper aims to analyze the semantics of the legal text by formalizing it in logic. It discusses challenges with ambiguity in legal language and how formalization can help reveal semantic implications. It presents the Citizenship Act formalized with logical connectives and analyzes how this preserves clarity and allows for deductive reasoning through the text. The formalization is said to establish the intended semantics while preventing wrong meanings from ambiguity.
Abstract
Part of speech tagging plays an important role in developing natural language processing software. Part of speech tagging means assigning a part of speech tag to each word of a sentence. The part of speech tagger takes a sentence as input and assigns the appropriate part of speech tag to each of its words. This article surveys the different works that have been done on Odia POS tagging.
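The tagger's input/output contract described above can be sketched with a toy lookup tagger; the lexicon and tagset here are invented for illustration, and real Odia taggers use statistical or rule-based models to handle unknown and ambiguous words.

```python
# Toy dictionary-lookup POS tagger showing the input/output contract:
# a sentence goes in, a (word, tag) pair comes out for every word.
LEXICON = {"the": "DET", "dog": "NOUN", "cat": "NOUN",
           "barks": "VERB", "loudly": "ADV"}

def pos_tag(sentence):
    """Assign a POS tag to each word; unknown words default to NOUN."""
    return [(word, LEXICON.get(word.lower(), "NOUN"))
            for word in sentence.split()]

print(pos_tag("The dog barks loudly"))
# -> [('The', 'DET'), ('dog', 'NOUN'), ('barks', 'VERB'), ('loudly', 'ADV')]
```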
________________________________________________
The document discusses various ways to avoid ambiguity and vagueness in writing. It provides examples of ambiguous sentences and rewrites them in a clearer way. Some of the areas covered include using "which" and "that" correctly, placing the subject near the verb, replacing "-ing" forms with active verbs, and proper use of articles like "a", "an", and "the". The overall goal is to help writers construct sentences that have only one unambiguous interpretation.
There are two perspectives on moral values - idealism and relativism. Idealists believe in universal transcendent values of love, care and concern that exist beyond time and space. Relativists believe values depend on time and place and differ between cultures. Values are both taught through instruction and caught through example, with living examples having greater influence. Values have cognitive, affective and behavioral dimensions - they involve understanding, feeling, and acting upon values. True value formation requires developing all three dimensions through intellect and will.
Role of Speech Therapy in Overcoming Lexical Deficit in Adult Broca’s Aphasia
Tanzeela Abid & Dr. Habibullah Pathan,
English Language Development Centre, Faculty of Science, Technology and Humanities, Mehran University of Engineering and Technology, Pakistan
This is an exploratory study, qualitative in nature; the unit of exploration is adult Broca's aphasic patients. The paper aims to explore the function and utility of speech therapy for adult Broca's aphasia. Aphasia is the after-effect of brain damage, commonly in the left hemisphere, which disrupts the language faculty. The present study focuses on the lexical aspect of language, in which an individual has trouble processing words. In Broca's aphasia the affected individual suffers from a diminished capability for speaking and communication. To recover such diminished capabilities, speech therapy is used. This study investigates the effectiveness of speech therapy: how does speech therapy help adults with Broca's aphasia recover their speaking and conversational skills? Participants of the study are speech therapists, selected through purposeful sampling, particularly snowball sampling. Semi-structured interviews were conducted with five speech therapists and analyzed through thematic analysis in light of the Sketch Model of De Ruiter and De Beer (2013). The findings of the study suggest that speech therapy may prove helpful for Broca's aphasia patients in recovering their communicative capabilities, but it requires considerable time (a minimum of six months). Moreover, recovery depends upon factors such as age, level of disorder and willingness.
Keywords: Broca’s Aphasia, Lexical Deficit, Speech Therapy, Communication, Speaking Skills
The Sixth International Conference on Languages, Linguistics, Translation and Literature
9-10 October 2021 , Ahwaz
For more information, please visit the conference website:
WWW.LLLD.IR
Full Articles (Volume Two) - The Sixth International Conference on Languages, Linguistics, Translation and Literature
Ahwaz, Iran
9-10 October 2021
--- International Standard Book Number (ISBN): 978-622-94212-0-8
--- According to the governmental approval (The Ministry): 2260614
--- Iranian National Standard Number of Book (Number of National Library of Islamic Republic of Iran): 8679332
--- The Dewey Decimal Classification: 410
--- The Library of Congress Classification: P23
--- Publisher: Ahwaz Publication of Research and Sciences (The Ministry Approval Number: 16171)
Please feel free to write if there is any query.
The Conference Secretariat,
Ahwaz 61335-4619 Iran
(+98) 61-32931199
(+98) 61-32931198
(+98) 916-5088772 (WhatsApp Number)
WWW.LLLD.IR, Email: info@pahi.ir
Words can have more than one distinct meaning and many words can be interpreted in multiple ways
depending on the context in which they occur. The process of automatically identifying the meaning of
a polysemous word in a sentence is a fundamental task in Natural Language Processing (NLP). This
phenomenon poses challenges to Natural Language Processing systems. There have been many efforts on word sense disambiguation for English; however, efforts for Amharic have been very limited.
Many natural language processing applications, such as Machine Translation, Information Retrieval,
Question Answering, and Information Extraction, require this task, which occurs at the semantic level.
In this thesis, a knowledge-based word sense disambiguation method that employs Amharic WordNet
is developed. Knowledge-based Amharic WSD extracts knowledge from word definitions and relations
among words and senses. The proposed system consists of preprocessing, morphological analysis and disambiguation components, in addition to the Amharic WordNet database. Preprocessing prepares the input sentence for morphological analysis, and morphological analysis reduces the various forms of a word to a single root or stem. Amharic WordNet contains words along with their different meanings, synsets and semantic relations within concepts. Finally, the disambiguation component identifies the ambiguous words and assigns the appropriate sense to each ambiguous word in a sentence, using Amharic WordNet through sense overlap and related words.
We have evaluated the knowledge-based Amharic word sense disambiguation system using Amharic WordNet by conducting two experiments. The first evaluates the effect of Amharic WordNet with and without a morphological analyzer, and the second determines an optimal window size for Amharic WSD. For Amharic WordNet with a morphological analyzer and without one, we achieved accuracies of 57.5% and 80%, respectively. In the second experiment, we found that a two-word window on each side of the ambiguous word is enough for Amharic WSD. The test results show that the proposed WSD methods perform better than previous Amharic WSD methods.
Keywords: Natural Language Processing, Amharic WordNet, Word Sense Disambiguation,
Knowledge Based Approach, Lesk Algorithm
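The sense-overlap step with a context window can be sketched as a simplified Lesk procedure. The English glosses below are stand-ins for Amharic WordNet entries (not reproduced here), and the `window` parameter mirrors the two-word window found optimal in the second experiment.

```python
# Simplified Lesk: choose the sense whose gloss shares the most words with
# a +/-window context around the ambiguous word. Toy English glosses stand
# in for Amharic WordNet entries; real systems also strip stopwords.
GLOSSES = {
    "bass": {
        "bass/fish": "freshwater fish caught for food",
        "bass/music": "lowest pitched part of music harmony",
    }
}

def simplified_lesk(word, tokens, position, glosses, window=2):
    """Return the sense of tokens[position] with the largest gloss/context overlap."""
    lo = max(0, position - window)
    context = set(tokens[lo:position] + tokens[position + 1:position + 1 + window])
    best_sense, best_overlap = None, -1
    for sense, gloss in glosses[word].items():
        overlap = len(set(gloss.split()) & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bass", "he caught a bass in the river".split(), 3, GLOSSES))
# -> bass/fish
```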
Book of Abstracts - The Sixth International Conference on Current Issues of Languages, Linguistics, Translation and Literature
9-10 October 2021, Ahwaz
For more information, please visit our website: WWW.LLLD.IR
Please feel free to write if there is any query.
The Conference Organizing Committee,
Ahwaz / P.O. Box: 61335-4619
Telephone: (+98) 61-32931199
Fax: (+98) 61-32931198
Mobile and WhatsApp number: (+98) 916-5088772
WWW.LLLD.IR, Email: info@pahi.ir
This presentation is about Word Sense Disambiguation. I have tried to explain word sense in terms of the Python language, but it can also be done using NLTK.
A NEW METHOD OF TEACHING FIGURATIVE EXPRESSIONS TO IRANIAN LANGUAGE LEARNERS
In language teaching, if we consider only the direct relationship between form and meaning and leave psycholinguistics aside, the approach is not successful and language learners will not be able to communicate effectively in real contexts. The present study intends to answer the question: is the teaching method in which the salient meaning is taught more successful than the methods in which the literal or figurative meaning is taught? To answer the research question, 30 students were selected and divided into three groups of ten. Twenty figurative expressions were taught to every group: group one was taught the figurative meaning of each expression, group two the literal meaning, and group three the salient meaning. The three groups were then tested. After analyzing the data, we concluded that there was a significant difference in mean grades between the classes, and the class taught under the graded salience hypothesis was more successful. This shows that traditional teaching methods must be revised.
Investigating the Integration of Culture Teaching in Foreign Language Classroom: A Case Study
Dr. Samah Benzerroug (Department of English) & Dr. Souhila Benzerroug (Department of French),
Teacher Training College of Bouzareah, Algiers, Algeria
Many scholars argue that language and culture are closely related, and hence the teaching of a foreign language cannot take place without the teaching of its corresponding culture, which helps promote language learning and enhances learners' motivation and performance (Corbett, J. (2003); (1996); Hinkel, E. (1999); Kramsch, C. (2006)). This being the case, the present study puts emphasis on the importance of integrating culture teaching in the foreign language classroom in the Algerian school. It investigates whether foreign language teachers grant significant value and interest to the foreign language culture. A descriptive analysis of the English and French textbooks of secondary education was therefore carried out to identify and examine the way cultural dimensions are dealt with. In addition, a survey was conducted by addressing a questionnaire to a number of secondary school teachers of English and French to investigate to what extent they consider culture teaching in their classrooms. The results revealed that, despite a move towards fostering culture teaching, the textbooks still offer few tasks that deal with cultural aspects, and teachers remain unfamiliar with the techniques to promote it in the classroom; they thus neglect culture teaching and prefer to focus on other aspects such as accuracy, fluency and language skills development. In light of these findings, a number of implications and recommendations are presented to foreign language teachers and language policy decision-makers to stress the importance of integrating culture teaching and implementing it adequately in the classroom.
Keywords: Foreign Language, Culture, Teaching, Integrating, Classroom
This document provides an overview and summary of key concepts from Dr. Tajeddin's text analysis course in the fall of 2014. It discusses concepts like actual language usage, norms of usage as seen through corpus analysis, patterns of collocation identified through frequency analysis, how semantic prosodies can explain collocations, and concludes that computer analysis of corpora provides norms against which individual texts can be analyzed for implicatures or deviations from expectations.
This document summarizes a study that examines the relationship between oral and written discourse in second language English speakers. The study hypothesizes that there is a positive working relationship between the two. Data was collected from a French doctoral student, including a writing sample and recorded speech sample. While the results showed a weak relationship, analysis of the speech without fillers revealed native-like patterns. The document reviews theories on language acquisition and the influence of first language on second language learning.
Ali Akbar Dehkhoda, the prominent lexicographer, describes a person who has difficulty in grasping knowledge as someone who “Cannot understand something without knowing all its details.” If the knowledge required by somebody is in a language other than the person’s mother tongue, access to this knowledge will surely meet special difficulties resulting from the person’s lack of mastery over the second
language. Any project that can monitor knowledge sources written in English and render them into the user's language through a simple, understandable model can serve as a knowledge-based project with a world view regarding text simplification. This article creates a knowledge system, investigates some algorithms for analyzing the contents of complex texts, and presents solutions for turning such texts into simple and understandable ones. Texts are automatically analyzed and their ambiguous points identified by software, but it is the author or a human agent who decides whether to remove the ambiguities or correct the texts.
Automatic text simplification evaluation aspects
The document discusses automatic text simplification (ATS) and methods for evaluating ATS systems. It provides an overview of common evaluation metrics like BLEU, SARI, FKGL, and SAMSA and compares their abilities to measure simplicity, meaning preservation, and grammaticality. The document also outlines a proposed project to build a corpus and develop a graded reading scale to guide the simplification of Arabic fiction works for educational purposes.
The document discusses different approaches to machine translation, including rule-based, statistical, example-based, and dictionary-based approaches. It provides details on each approach, such as rule-based methods using linguistic rules and extensive lexicons, statistical methods relying on probabilistic models trained on parallel texts, example-based methods translating by analogy to examples in aligned corpora, and dictionary-based methods translating words directly with or without morphological analysis. The document also compares transfer-based and interlingual rule-based machine translation, noting interlingual methods aim to represent the source text independently of languages.
The document discusses different approaches to dealing with ambiguity in design. It presents an approach called "remove", which attempts to remove all ambiguity from a design brief by over-specifying requirements. However, this can make things more difficult to understand. A better approach is to "engage" with ambiguity by involving participants in activities to help them understand and resolve ambiguous aspects through discussion and prototyping. This process of making things concrete through participation is how ambiguity can be productively addressed and "meaning" created in a design.
TERMite is a semantic indexing engine that analyzes raw text at speeds of up to 1 million words per second, extracting structured data and enabling new discoveries. It manages ambiguity in scientific names. Supporting TERMite is a collection of over 80 vocabularies spanning life sciences, containing over 20 million synonyms, which are enriched through automated analysis and manual curation. TERMite can distinguish relevant terms from irrelevant mentions, identify patterns of entities such as gene-disease relationships, and enhance semantic search and discovery.
The document discusses different types of ambiguity in language: lexical ambiguity, where a word has more than one meaning; structural ambiguity, where the structure of a sentence allows multiple interpretations; referential ambiguity, involving pronouns; scope ambiguity, related to quantifiers; and pragmatic ambiguity, which depends on context. It provides examples like "I saw the man with glasses", along with predicate logic representations of sentences such as "Every boy likes a girl", to illustrate how the same sentence can have multiple meanings depending on how the quantifiers are interpreted.
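The two quantifier-scope readings of "Every boy likes a girl" mentioned above can be written out in first-order logic:

```latex
% Reading 1, universal wide scope: each boy likes some girl, possibly a different one.
\forall x \, \bigl( \mathit{boy}(x) \rightarrow \exists y \, ( \mathit{girl}(y) \wedge \mathit{likes}(x, y) ) \bigr)
% Reading 2, existential wide scope: one particular girl is liked by every boy.
\exists y \, \bigl( \mathit{girl}(y) \wedge \forall x \, ( \mathit{boy}(x) \rightarrow \mathit{likes}(x, y) ) \bigr)
```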
This document summarizes the key points of the Campus Journalism Act of 1991 in the Philippines. The act aims to promote campus journalism at the college level to develop students' critical thinking, ethics, and personal discipline. It requires all schools to establish student publications and holds editorial boards responsible for content. Student staff are selected through competitive exams and have security of tenure. The act also provides funding, training opportunities, and tax exemptions to support campus journalism.
- Semantics is the study of meaning in language. It focuses on the literal meaning of words and sentences. Pragmatics studies meaning based on context.
- Key terms in semantics include ambiguity, entailment, contradiction, compositionality, and metaphor. Compositionality is the principle that the meaning of an expression is determined by its parts and structure.
- Semantics analyzes features of words, semantic roles, lexical relations, theories of meaning, and more. Pragmatics examines how context influences meaning through speech acts, implicature, and deixis.
Ambiguity exists in language when words, phrases, or sentences can have more than one meaning. There are two main types of ambiguity: lexical, which occurs with individual words, and syntactic, which occurs at the sentence level. While ambiguity can cause misunderstandings, it is a fundamental property of natural languages and can also be used intentionally or taken advantage of in communication through context. Language cannot exist without some degree of ambiguity.
A talk comprised of various threads of thought from various Smithery projects across the last eighteen months, given at the open house at Loft Digital as part of London Technology Week. Starts with some conceptual thinking, ends with some examples and approaches for you to try at home...
An epic is a long narrative poem that relates the deeds of a larger-than-life hero who embodies the values of their society. Philippine epics were orally passed down through generations and chanted during celebrations. The hero of an epic embodies traits valued by their society, such as courage, determination and wisdom, and goes on a long, dangerous journey in which they perform valorous deeds, battling evil forces and saving others and thereby demonstrating their virtue. One example of a Philippine epic is the story of Indarapatra and Sulayman, set in Magindanaw, where four monsters attacked; it features the themes of bravery bringing honor and of fighting for freedom and peace without living in fear.
This document provides an overview of syntax and generative grammar. It defines syntax as the way words are arranged to show relationships of meaning within and between sentences. Grammar is defined as the art of writing, but is now used to study language. Generative grammar uses formal rules to generate an infinite set of grammatical sentences. It distinguishes between deep structure and surface structure. Tree diagrams are used to represent syntactic structures with symbols like S, NP, VP. Phrase structure rules, lexical rules, and movement rules are discussed. Complement phrases and recursion are also explained.
grammaticality, deep & surface structure, and ambiguity
This document discusses English morphology and syntax. It covers several key topics:
1. What is syntax and syntactic structure, including parts of speech and phrase structure.
2. The difference between deep and surface structure, where deep structure is the underlying form and surface structure is the actual form after transformations.
3. Grammaticality, which refers to sentences that follow syntactic rules rather than other factors like meaning or truth.
4. Types of ambiguities, including lexical ambiguities due to ambiguous words, and structural ambiguities due to multiple possible syntactic trees.
This document outlines the main branches of linguistics, including theoretical (general) linguistics, descriptive/applied linguistics, micro linguistics, and macro linguistics. Theoretical linguistics is concerned with frameworks for describing languages, concepts and categories, and theories about universal aspects of language. Descriptive and applied linguistics describe data to confirm or refute language theories and apply concepts in areas like language teaching. Micro linguistics takes a narrow view of language structure, while macro linguistics takes a broad view relating language to other sciences and its application in daily life. Specific branches covered include phonetics, phonology, morphology, syntax, semantics, and more.
Natural Language Ambiguity and its Effect on Machine Learning
This document discusses natural language ambiguity and its effect on machine learning. It begins by introducing different types of ambiguity that exist in natural languages, including lexical, syntactic, semantic, discourse, and pragmatic ambiguities. It then examines how these ambiguities present challenges for computational linguistics and machine translation systems. Specifically, it notes that ambiguity is a major problem for computers in processing human language as they lack the world knowledge and context that humans use to resolve ambiguities. The document concludes by outlining the typical process of machine translation and how ambiguities can interfere with tasks like analysis, transfer, and generation of text in the target language.
Domain Specific Terminology Extraction (ICICT 2006)
Imran Sarwar Bajwa, M. Imran Siddique, M. Abbas Choudhary, [2006], "Automatic Domain Specific Terminology Extraction using a Decision Support System", in IEEE 4th International Conference on Information and Communication Technology (ICICT 2006), Cairo, Egypt. pp:651-659
Natural language processing (NLP) aims to help computers understand human language. Ambiguity is a major challenge for NLP as words and sentences can have multiple meanings depending on context. There are different types of ambiguity including lexical ambiguity where a word has multiple meanings, syntactic ambiguity where sentence structure is unclear, and semantic ambiguity where meaning depends on broader context. NLP techniques like part-of-speech tagging and word sense disambiguation aim to resolve ambiguity by analyzing context.
A SURVEY OF GRAMMAR CHECKERS FOR NATURAL LANGUAGES
This document summarizes several existing grammar checkers for various natural languages. It discusses rule-based, statistical, and hybrid approaches to grammar checking. Grammar checkers described include those for Afan Oromo, Amharic, Swedish, Icelandic, Nepali, and Portuguese. The document analyzes the approaches, methodologies, advantages, and limitations of each grammar checker.
The paper deals with the scope of linguistic description. It thus highlights the idea of idealization and how models of linguistic descriptions rely thoroughly on abstracting linguistic data.
In recent years, great advances have been made in the speed, accuracy, and coverage of automatic word
sense disambiguator systems that, given a word appearing in a certain context, can identify the sense of
that word. In this paper we consider the problem of deciding whether the same words contained in different documents relate to the same meaning or are homonyms. Our goal is to improve the estimate of the
similarity of documents in which some words may be used with different meanings. We present three new
strategies for solving this problem, which are used to filter out homonyms from the similarity computation.
Two of them are intrinsically non-semantic, whereas the other one has a semantic flavor and can also be
applied to word sense disambiguation. The three strategies have been embedded in an article document
recommendation system that one of the most important Italian ad-serving companies offers to its customers.
A Performance of SVM with Modified Lesk Approach for Word Sense Disambiguation (eSAT Journals)
Abstract: WSD is a technique used to find the correct meaning of a given word in a human language. Every human language suffers from word ambiguity. Finding the correct meaning of an ambiguous word is easy for humans, but for a machine it is a hard problem. A number of works have addressed WSD, but not enough in the Hindi language. The objective is to train the system so that it can find the correct meaning of any ambiguous Hindi word. For this purpose, an existing technique, the Modified Lesk approach, is used and its output is given to an SVM to obtain a better result, showing that the SVM improves on the Modified Lesk approach alone. In this paper, nine ambiguous Hindi words and three different databases are used to show the results. Keywords: Support Vector Machine, NLP, Word Sense Disambiguation, Modified Lesk approach, Comparison
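The pipeline can be sketched in miniature: Lesk-style gloss-overlap counts serve as features for a linear SVM trained with Pegasos-style sub-gradient descent. The features and training data below are synthetic toy values, not the paper's actual system or data:

```python
import random

def train_linear_svm(data, epochs=200, lam=0.01):
    """Pegasos-style sub-gradient training of a linear SVM.
    data: list of (feature_vector, label) with label in {-1, +1}."""
    random.seed(0)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [wi * (1 - eta * lam) for wi in w]  # regularization shrink
            if margin < 1:                          # hinge-loss sub-gradient
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Synthetic training set: each example is (overlap with sense-1 gloss,
# overlap with sense-2 gloss); label +1 means sense 1 is correct.
train = [([3, 0], 1), ([2, 1], 1), ([4, 1], 1),
         ([0, 3], -1), ([1, 2], -1), ([1, 4], -1)]
w = train_linear_svm(train)
print(predict(w, [3, 1]))   # expects sense 1 (+1)
print(predict(w, [0, 2]))   # expects sense 2 (-1)
```

The learned weights favor whichever gloss-overlap feature predicts the correct sense, which is the sense in which the SVM "re-scores" the Modified Lesk output.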
This research describes an attempt to establish a pedagogically useful list of the most frequent semantically non-compositional multi-word combinations for English for Journalism learners in an EFL context, who need to read English news in their field of study. The list was compiled from the NOW (News on the Web) Corpus, the largest English news database by far. In consideration of opaque multi-word combinations in widespread use and pedagogical value, the researcher applied a set of selection criteria when using the corpus. Based on frequency, meaningfulness, and semantic non-compositionality, a total of 318 non-compositional multi-word combinations of 2 to 5 words with the exclusion of phrasal verbs were selected and they accounted for approximately 2% of the total words in the corpus. The list, not highly technical in nature, contains the most commonly-used multi-word units traversing various topic areas and news readers may encounter these phrasal expressions very often. As with other individual word lists, it is hoped that this opaque expressions list may serve as a reference for English for Journalism teaching.
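The frequency-based part of such a compilation can be sketched as follows. The toy corpus is invented, and the study's other criteria (meaningfulness, semantic non-compositionality) require human judgment that this sketch omits:

```python
from collections import Counter

def frequent_ngrams(texts, n_min=2, n_max=5, min_count=2):
    """Count word n-grams of length n_min..n_max and keep frequent ones."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for n in range(n_min, n_max + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return {g: c for g, c in counts.items() if c >= min_count}

# Invented mini-corpus of news-like sentences.
corpus = [
    "the talks broke down at the eleventh hour",
    "negotiators reached a deal at the eleventh hour",
    "the deal was signed on monday",
]
for gram, count in sorted(frequent_ngrams(corpus).items()):
    print(count, gram)
```

Against a corpus the size of NOW, the frequency threshold would be far higher, and the surviving candidates would still need manual screening for opacity.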
Linguistic Fundamentals in Translation and Translation Studies (Sugey7)
This document discusses the role of linguistics in translation. It begins by defining linguistics as the scientific study of language and explores its various branches including theoretical linguistics and applied linguistics. The document then explains how linguistics relates to and assists with translation. Specifically, it notes that translators need knowledge of linguistics to understand word functions and meanings in context. The document also summarizes several key levels of linguistics - including phonology, grammar, semantics, and context - that translators must understand to perform accurate translations between languages.
Natural Language Processing: State of the Art, Current Trends and Challenges (antonellarose)
Diksha Khurana (1), Aditya Koli (1), Kiran Khatter (1,2) and Sukhdev Singh (1,2)
(1) Department of Computer Science and Engineering, Manav Rachna International University, Faridabad-121004, India
(2) Accendere Knowledge Management Services Pvt. Ltd., India
This document discusses corpus linguistics and quantitative research design. It defines a corpus as a large collection of texts used for linguistic analysis. Corpus linguistics allows researchers to empirically test hypotheses about language patterns and features based on large amounts of real-world data. Quantitative analysis of corpus data shows how frequently certain words, constructions, and patterns are used. Specialized corpora can focus on particular text types, languages, or learner language. Various software tools are used to analyze corpora through frequency lists, keyword lists, collocation analysis, and other methods.
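Collocation analysis of the kind mentioned above can be sketched with pointwise mutual information (PMI) over adjacent word pairs; the sample text is invented:

```python
import math
from collections import Counter

def collocations(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI).
    Uses the token count as the total for both unigram and bigram
    probabilities, a common simplification."""
    word_counts = Counter(tokens)
    pair_counts = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    scores = {}
    for (w1, w2), c in pair_counts.items():
        if c >= min_count:
            p_pair = c / total
            p_indep = (word_counts[w1] / total) * (word_counts[w2] / total)
            scores[(w1, w2)] = math.log2(p_pair / p_indep)
    return scores

text = ("strong tea and strong coffee are common collocations "
        "strong tea is more common than powerful tea").split()
for pair, pmi in sorted(collocations(text).items(), key=lambda kv: -kv[1]):
    print(pair, round(pmi, 2))
```

High-PMI pairs such as "strong tea" co-occur far more often than their individual frequencies predict, which is what corpus tools report as collocations.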
The document discusses the distinction between words and rules in language. It argues that regular verb inflection, like "walk-walked", is produced through rules, while irregular inflection, like "come-came", is stored in memory as individual word pairs. However, patterns among irregular verbs complicate this distinction. Two theories have attempted to collapse the distinction: generative phonology uses minor rules for irregular forms, and connectionism uses memory patterns to store both regular and irregular forms. The author presents evidence supporting the traditional distinction between words stored in memory and rules for novel combinations, while acknowledging that irregular forms display some of the associative properties of memory patterns.
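The words-and-rules distinction can be sketched directly: stored irregular forms live in a lookup table (memory), and everything else goes through a default -ed rule. The irregular list here is only a small fragment, and consonant doubling is ignored:

```python
# Stored irregular past-tense forms ("memory"); only a small fragment.
IRREGULAR = {"come": "came", "go": "went", "sing": "sang", "bring": "brought"}

def past_tense(verb):
    """Irregulars come from memory; regulars from the default -ed rule."""
    if verb in IRREGULAR:
        return IRREGULAR[verb]
    if verb.endswith("e"):
        return verb + "d"    # bake -> baked
    return verb + "ed"       # walk -> walked

print(past_tense("walk"))  # walked (rule)
print(past_tense("come"))  # came   (memory)
print(past_tense("bake"))  # baked  (rule)
```

The rule branch handles novel verbs it has never seen, which is exactly the productivity argument for rules over pure memory.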
Natural language processing (NLP) involves building models to understand human language through automated generation and understanding of text and speech. It is an interdisciplinary field that uses techniques from artificial intelligence, linguistics, and statistics. NLP has applications like machine translation, sentiment analysis, and summarization. There are two main approaches: statistical NLP which uses machine learning on large datasets, and linguistic approaches which utilize structured linguistic resources like lexicons. Key NLP tasks include part-of-speech tagging, parsing, named entity recognition, and more.
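One of these tasks, part-of-speech tagging, can be sketched as a toy lexicon-plus-suffix tagger; the lexicon and suffix heuristics below are invented for illustration and do not follow any real tagset:

```python
# A toy lexicon-plus-suffix POS tagger illustrating one basic NLP task.
LEXICON = {"the": "DET", "a": "DET", "dog": "NOUN", "cat": "NOUN",
           "runs": "VERB", "sees": "VERB"}

def tag(sentence):
    """Tag each word: lexicon lookup first, then suffix heuristics."""
    tags = []
    for word in sentence.lower().split():
        if word in LEXICON:
            tags.append((word, LEXICON[word]))
        elif word.endswith("ly"):
            tags.append((word, "ADV"))    # suffix heuristic
        elif word.endswith("ing"):
            tags.append((word, "VERB"))
        else:
            tags.append((word, "NOUN"))   # default guess
    return tags

print(tag("the dog runs quickly"))
# [('the', 'DET'), ('dog', 'NOUN'), ('runs', 'VERB'), ('quickly', 'ADV')]
```

Statistical taggers replace the hand-written lexicon and heuristics with probabilities learned from annotated corpora, but the task definition is the same.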
This document summarizes a research study on the difficulties non-native speakers face in learning English idioms and on strategies to address those difficulties. The study explored the categories of idioms that create challenges and suggested strategies such as choosing idioms frequently used in the target language and discussing idioms to contrast literal and figurative meanings. The suggested strategies include teaching idioms by defining them in context, using situational examples, drawing pictures, and acting them out through dramatization to help learners understand idiomatic meanings.
. . . all human languages do share the same structure. More e.docx (adkinspaige22)
The document discusses the Universal Grammar (UG) approach to linguistics and second language acquisition. It proposes that UG posits that all human languages share the same underlying structure and are constrained by a set of innate linguistic principles and parameters. The UG approach aims to describe the mental representation of language, explain how language is acquired, and characterize what knowledge of language consists of. For second language learning, UG may fully constrain the second language grammar, or UG constraints may be impaired or not apply.
. . . all human languages do share the same structure. More explicitly: they have
essentially the same primitive elements and rules of composition . . . , although
of course there may be variations, such as the obvious ones derived from the
arbitrary association between sounds and meanings . . .
(Moro, 2016, p. 15)
3.1 Introduction
In this chapter, we start to consider individual theoretical perspectives on
L2 learning in greater detail. Our first topic is the Universal Grammar (UG)
approach (the generative linguistics approach), developed by the American
linguist Noam Chomsky and numerous followers over the last few decades.
This has been the most influential linguistic theory in the field, and has
inspired a wealth of publications (for full-length treatments, see Hawkins,
2001; Herschensohn, 2000; Lardiere, 2007; Leung, 2009; Slabakova, 2016;
Snape & Kupisch, 2016; Thomas, 2004; White, 2003; Whong, Gil, & Marsden,
2013).
The main aim of linguistic theory is twofold: firstly, to characterize what
human languages are like (descriptive adequacy), and, secondly, to explain
why they are that way (explanatory adequacy). In terms of L2 acquisition,
a linguistic approach sets out to describe the evolving language produced
by L2 learners, and to explain its characteristics. UG is therefore a property
theory (as defined in Chapter 1); that is, it attempts to characterize
the underlying linguistic knowledge in L2 learners’ minds. In contrast, a
detailed examination of the learning process itself (transition theory) will
be the main concern of the cognitive approaches which we describe in
Chapters 4 and 5.
First in this chapter, we will give a broad definition of the aims of the
Chomskyan tradition in linguistic research, in order to identify the aspects
of second language acquisition (SLA) to which this tradition is most relevant.
Secondly, we will examine the concept of UG itself in some detail, and
finally we will consider its application in L2 learning research.
3 Linguistics and Language Learning: The Universal Grammar Approach
Copyright 2019. Routledge. All rights reserved. May not be reproduced in any form without permission from the publisher, except fair uses permitted under U.S. or applicable copyright law.
EBSCO Publishing : eBook Collection (EBSCOhost) - printed on 4/6/2020 6:13 AM via UNIVERSITY OF ESSEX
AN: 2006630 ; Mitchell, Rosamond, Myles, Florence, Marsden, Emma.; Second Language Learning Theories : Fourth Edition
Account: s9814295.main.ehost
3.2 Why a Universal Grammar?
3.2.1 Aims of Linguistic Research
The main goals of linguistic theory, as defined by Chomsky (1986a), are to
answer three basic questions about human language:
1. What constitutes knowledge of language?
2 ...
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Iron and Steel Technology Roadmap - Towards more sustainable steelmaking.pdf
Exploiting rules for resolving ambiguity in marathi language text
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 04 Issue: 12 | Dec-2015, Available @ http://www.ijret.org
EXPLOITING RULES FOR RESOLVING AMBIGUITY IN MARATHI LANGUAGE TEXT
Gauri Dhopavkar (Research Scholar, Department of CSE, GHRCE, Maharashtra, India; Faculty, Department of CT, YCCE, Maharashtra, India)
Manali Kshirsagar (Professor, Department of CT, YCCE, Maharashtra, India)
Latesh Malik (Professor, Department of CSE, GHRCE, Maharashtra, India)
Abstract
Natural language ambiguity arises when a word has multiple meanings or senses. This paper discusses natural language ambiguity and its types, and proposes a knowledge-based solution to resolve the various types of ambiguity occurring in Marathi language text. The task of resolving the semantic and lexical ambiguity of words to obtain their actual sense is referred to as Word Sense Disambiguation (WSD). Marathi is the official and commonly spoken language of Maharashtra state in India. Many Marathi words are spelled and uttered identically but differ semantically (meaning-wise, or sense-wise); during automatic translation, these words lead to ambiguity. Our method removes the ambiguity by identifying the correct sense of the given text from the possible senses predefined in Marathi Wordnet, using word and sentence rules. The method applies only to word-level ambiguity; structural ambiguity is not handled by this system. The system may be used successfully as a subsystem in other Natural Language Processing (NLP) applications.
Key Words: Word Sense Disambiguation, Natural Language Processing, Marathi, Marathi Wordnet, ambiguity, knowledge based
--------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
All living organisms need to communicate with each other for their survival, and they do so by different methods: many communicate through various event-specific sounds, gestures, etc. Homo sapiens, the most intelligent among all species, share their thoughts through natural language, and through it all ideas, thoughts, and views are communicated accurately. Still, owing to certain features of language, the meanings of words may shift, leading to misunderstanding and miscommunication. Even when literary conventions are followed strictly, every natural language by its very nature suffers from ambiguity.
As per the Oxford dictionary [1], the term "ambiguity" refers to the state of having or expressing more than one possible meaning, or to something open to more than one possible meaning. It is the state in which any linguistic entity (a symbol, a word, a sentence, or any text) can be understood in more than one way. Humans, being intelligent, can naturally overcome the misunderstanding and miscommunication caused by language ambiguities through various kinds of analysis. Getting the same job done by a machine, however, is a complex task, since the machine lacks world knowledge and common-sense reasoning. Ambiguity poses problems in the majority of NLP tasks, such as Machine Translation, Text Summarization, and Named Entity Recognition. In order to deal with the problem of ambiguity, it is necessary to understand the reasons behind its occurrence, its types, and the various levels at which it may occur.
The paper is organized in the following sections. Section 1 introduces the concept of ambiguity and its importance in natural language processing. Section 2 discusses word sense disambiguation, and Section 3 surveys the literature available in the field of disambiguation. In Section 4 we present our approach to tackling the problem of ambiguity, and Section 5 presents the conclusions of the work.
1.1 Types of Ambiguity in NLP Tasks
Ambiguity represents the state in which it is unclear how to fix a precise meaning; providing an explanation also becomes cumbersome, since several meanings are involved. The unclearness of ambiguity stems from there being more than one meaning. The most basic case is a word with multiple senses: for example, the word "fly" can mean "a type of insect, a two-winged creature" or "to move through the air".
The outcome of ambiguity is confusion in the mind of the reader, in the case of written text, or of the hearer, in the case of speech. Because of this confusion or uncertainty, no effective communication is possible.
Ambiguities can be classified in different ways, depending on the principles used for classification and the reason behind the occurrence of the ambiguity. Ambiguity is classified as:
Intentional ambiguity (associated with valid literary text)
Unintentional ambiguity (associated with real-world language use)
An ambiguity may also be considered local or global, depending on its scope.
Marathi language philosophy further divides ambiguities as follows [2]:
Whole-and-part ambiguity, where the same name is given to the whole and to its part (e.g. "makaa"/corn).
Act-and-object ambiguity, where the action and the object have the same name, e.g. "badagaa" (an object used for beating / the action taken against someone), "taancha aaNaNe" (heel / to bring into problem).
Raw-material-and-product ambiguity, where the raw material and the finished product share a name (e.g. "khasa"/"waaLaa", the stem of a plant with a sweet fragrance used in water).
Community-and-member ambiguity, where the name of a community or caste and that of its member are the same, e.g. "maaLii" (gardener / name of a caste), "panDita" (one who performs sacred or holy work of religion).
1.2 Levels of Ambiguity
In the literature, ambiguity is decomposed into different levels, as below [3][4]:
Phonological ambiguity: in Marathi, for example, "थुंकू नये." (do not spit) versus "थुंकू न ये." (spit and come); the utterances sound the same but have exactly opposite meanings.
Lexical ambiguity (polysemy, or word-sense ambiguity): in Marathi, for example, the word "warga" (वर्ग) can denote the result of multiplying a number by itself (i.e. a square) or a classroom or class/standard.
Structural ambiguity: consider the Marathi example "taruNa muley aaNi mulii naataka baghayalaa aale." (Young boys and girls came to watch a drama), where it is unclear whether "young" applies to the girls as well as to the boys.
Some other levels of ambiguity are also found in natural language text:
Transformational ambiguity [5]
Scope ambiguity
Pragmatic ambiguity
Pun
2. WORD SENSE DISAMBIGUATION (WSD)
In order to understand the need for WSD, consider the following example:
नदीचे पात्र मोठे आहे.
(The river basin is large.)
तो या पदासाठी पात्र नाही.
(He is not eligible for this post.)
In the above example the word पात्र is ambiguous. In the first sentence its sense is interpreted as river basin, whereas in the second it carries the sense of eligibility for a post. If the correct sense is not identified by the machine, a chaotic situation will result.
The activity of identifying the accurate sense of a word, and thereby of a sentence, is the Word Sense Disambiguation process. If the problem of word sense disambiguation is not handled carefully, it may lead to disastrous results in the application.
3. LITERATURE REVIEW
In this section we present a survey of the various efforts made to address the WSD problem. The survey shows that very little work has been carried out for Indian languages. Around 22 official languages are spoken in India, but efforts toward automated NLP tasks have been initiated for only a few of them.
WSD approaches are generally categorized as knowledge based and corpus based. We limit our discussion to the work carried out on knowledge-based methods:
3.1 Knowledge Based Disambiguation
Disambiguation relying on knowledge bases uses external lexical resources such as dictionaries, thesauri, or a lexical knowledge base (also called a computational lexicon). These methods do not need the large amounts of training material required by supervised methods.
Machine-Readable Dictionaries (MRDs) are a ready-made source of information about word senses and knowledge about the world, both of which are required for the WSD task. Amsler made the first use of MRDs for this purpose, and the approach later became very popular. In this method, the most plausible sense is chosen from the available senses as the one that maximizes the relatedness among all the chosen senses.
Thesauri provide information about relationships among words, most notably synonymy.
Enumerative and generative are the two popularly used classes for constructing computational lexicons. In the enumerative approach the senses are provided explicitly, while in the generative approach the semantic information associated with given words is underspecified and generation rules are used to derive precise sense information.
Generative lexicons do not enumerate related senses (i.e., systematic polysemy); instead, such senses are generated from rules that capture regularities in sense creation, as for metonymy, meronymy, etc.
Knowledge-based methods are used in algorithms such as Lesk [6], Walker [7], and Agirre and Rigau's conceptual-density algorithm [8].
The approaches discussed in the literature [14] use:
Selectional preferences, in which an exhaustive enumeration of properties such as the arguments, the selectional preference of each argument, and a description of the properties of the word is required; the selection-preference criteria can then be decided on this basis.
Overlap approaches, which make use of an MRD such as WordNet, a thesaurus, etc. Such an approach first identifies the features that overlap between the different senses of the ambiguous word and the features of the words in its context.
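The overlap idea can be sketched as a simplified Lesk-style overlap count. The following Python sketch is an illustration only, not the authors' implementation; the glosses for the word "fly" (from Section 1.1) are invented for the example:

```python
def overlap_sense(context_words, sense_glosses):
    """Choose the sense whose gloss shares the most words with the context
    (a simplified Lesk-style overlap count)."""
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense, best_overlap

# Invented glosses for the two senses of "fly" discussed in Section 1.1.
glosses = {
    "insect": "a small two winged insect",
    "move": "move through the air using wings",
}
sense, score = overlap_sense("the bird can move through air".split(), glosses)
```

Here the context shares four words ("move", "through", "the", "air") with the second gloss and none with the first, so the "move" sense is selected.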
Drawbacks of the knowledge-based approach mentioned in [11] are:
Selectional restrictions need a very large database.
In the overlap-based approach, the definitions in an MRD are limited and rarely take the distributional constraints of the different word senses into account.
3.1.1 Supervised Disambiguation
This approach is based on a labeled training set: the learning system is given feature-encoded inputs together with their appropriate sense labels (categories).
Paper [12] describes various supervised approaches to word sense disambiguation.
The TRUMP [17] (TRansportable Understanding Mechanism Package) system operates on a "core" of knowledge about language and uses a set of techniques for applying that core knowledge (information about words, word meanings, and linguistic relations) within a stipulated domain.
Yarowsky's [13] algorithm uses minimal or no training data, in a scheme often called co-training: a small set of labeled data is used together with a large amount of unlabeled text to learn a classifier with fine accuracy.
Ng and Lee's [15] approach, LEXAS, is a well-known nearest-neighbor approach. It uses various knowledge sources such as the parts of speech of words, morphological structure, etc.
Lin [16] developed a supervised approach to disambiguating word senses without the use of a classifier. One disadvantage of his approach is that it lacks modeling specific to every word of the training corpus, which affects the accuracy of the algorithm. It relies heavily on syntactic dependencies, which help in capturing the context words.
4. PROPOSED METHODOLOGY
In this section we discuss the WSD approach used in our work.
For carrying out WSD in any language, the list of senses of a word is initially assumed to be fixed with reference to some dictionary of that language. This demands the availability of a sense inventory for the words of the language. Until the early 1990s, most dictionaries for English and other languages were available only in paper form; the few available in electronic form were accessible only to a limited group of researchers, which was a major hurdle to NLP progress. At this point WordNet entered the scene and proved a boon: a public-domain electronic dictionary and thesaurus, freely available for research purposes.
The English WordNet was created by George Miller and his team at Princeton University (Miller 1990, Miller and Fellbaum 1991, Fellbaum 1998). Based on the idea of the English Wordnet, Wordnets for other languages have also been designed: Dr. Pushpak Bhattacharya of IIT Bombay and his team have developed WordNets for Indian languages such as Hindi and Marathi. The Marathi Wordnet is likewise based on the semantic relatedness of words: it is a large electronic database organized as a semantic network, constructed on paradigmatic relations including synonymy, hyponymy, antonymy, and entailment. It has become the most widely used lexical database for NLP research in these languages. A WordNet lists the different senses (called synonym sets, or synsets) for each open-class word (nouns, verbs, adjectives, and adverbs).
In our work we have used Marathi Wordnet 1.3 [10], whose data was obtained officially from IIT Mumbai [9, 10]. Marathi Wordnet has the following files:
Index_txt: provides information about all words present in the Wordnet.
Data_txt: provides the details of every word in the index file.
Onto_txt: provides the ontology details of the words in the data file.
Structure of Data_txt:
Example:
00054558 03 02 चालणे:धकणे 0001 0400 00000143 | हेतुपूततेला उपयोगी पडणारे असणे:"चूल पेटवायला ओल लाकडे कशी चालतील?"
synset id = 00054558
pos = 03
number of words present in the synset = 02
synset words = चालणे, धकणे
number of relations (lexical as well as semantic) = 0001
four-digit relation id code = 0400
synset id for which that relation exists = 00000143
gloss = हेतुपूततेला उपयोगी पडणारे असणे
example sentence = "चूल पेटवायला ओल लाकडे कशी चालतील?"
pos codes: 1 (noun), 2 (adj), 3 (verb), 4 (adv)
Antonymy and gradation relations are represented with two four-digit codes: the first four digits represent the relation type, and the second four digits represent the order of the words from the two synsets for which the relation holds.
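A minimal Python sketch of a data_txt parser follows. The field layout is inferred from the single example above, so real Marathi Wordnet files may differ in detail; the placeholder words in the usage line are illustrative:

```python
def parse_data_line(line):
    """Parse one data_txt entry into a dict.

    Layout assumed (from the example above): synset id, pos code,
    word count, colon-separated synset words, relation count,
    relation id, related synset id, '|', then gloss and an example
    sentence separated by ':"'.
    """
    head, _, tail = line.partition("|")
    fields = head.split()
    entry = {
        "synset_id": fields[0],
        "pos": fields[1],                 # 1=noun, 2=adj, 3=verb, 4=adv
        "word_count": int(fields[2]),
        "words": fields[3].split(":"),    # synset members
        "relation_count": fields[4],
        "relation_id": fields[5],
        "related_synset": fields[6],
    }
    gloss, _, example = tail.strip().partition(':"')
    entry["gloss"] = gloss.strip()
    entry["example"] = example.rstrip('"')
    return entry

# Usage with placeholder words standing in for the Devanagari entries:
e = parse_data_line('00054558 03 02 w1:w2 0001 0400 00000143 | gloss text:"example text"')
```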
Consider the following example from the index.txt file:
अंक 01 01 0400 05 00002878 00002910 00004049 00010697 00001958
In the above example:
word = अंक
pos = 01 (pos codes: 1 noun, 2 adj, 3 verb, 4 adv)
number of relations existing for the word in all its senses = 01
number of senses = 05
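An index_txt line can be parsed in the same spirit. The layout (word, pos, relation count, relation ids, sense count, then one synset id per sense) is inferred from the example above and is an assumption, not the official file specification:

```python
def parse_index_line(line):
    """Parse one index_txt entry: word, pos, relation count,
    that many relation ids, sense count, then synset ids."""
    fields = line.split()
    word, pos = fields[0], fields[1]
    rel_count = int(fields[2])
    rel_ids = fields[3:3 + rel_count]
    sense_count = int(fields[3 + rel_count])
    synset_ids = fields[4 + rel_count:]      # one id per sense
    return {"word": word, "pos": pos, "relations": rel_ids,
            "sense_count": sense_count, "synset_ids": synset_ids}

entry = parse_index_line(
    "अंक 01 01 0400 05 00002878 00002910 00004049 00010697 00001958")
```

For the example line this yields five synset ids, matching the stated number of senses, which is how the system detects that a word is potentially ambiguous.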
Structure of Onto_txt:
00000043 0001 00000035 | मारक घटना (Fatal Event) {FTL उदाहरणे: धरणीकंप इत्यादी}
onto id = 00000043
0001 indicates that a parent exists
parent onto id = 00000035
onto description = मारक घटना (Fatal Event)
In order to parse these files we have designed separate parsers: an onto_txt parser, an index_txt parser, and a data_txt parser. Thus we obtain the complete lexical as well as sense-level detail of the text under consideration.
4.1 Algorithm Steps
Our proposed algorithm performs disambiguation through the following steps:
1. Input the text or discourse (ambiguity may or may not be present).
2. Parse the index_txt, onto_txt, and data_txt files of Marathi WordNet 1.3 using the specially designed parsers.
3. Identify the presence of ambiguity by analysing every word in the sentence (check the parsed text for ambiguity).
4. Establish the relationship between neighboring words in the text using the ontology information.
5. Resolve the ambiguity with respect to sense closeness with the context words, by applying suitable rules.
The rules are written using grammatical and relational information of the Marathi language, and they are automatically generated and applied.
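The resolution steps can be sketched as a toy Python routine. The data structures and the `ontology_related` test below are illustrative assumptions, not the actual rule engine; the example reuses the ambiguous word पात्र ("paatra") from Section 2 in transliteration:

```python
def disambiguate(words, sense_inventory, ontology_related):
    """Toy sketch of the resolution steps: flag words with multiple
    senses, then keep the sense whose ontology category is close to a
    neighbouring word's category.

    sense_inventory: word -> {sense_id: ontology_category} (illustrative).
    ontology_related(a, b): whether two ontology categories are close.
    """
    resolved = {}
    for i, word in enumerate(words):
        senses = sense_inventory.get(word, {})
        if len(senses) <= 1:
            resolved[word] = next(iter(senses), None)  # unambiguous or unknown
            continue
        neighbours = words[max(0, i - 1):i] + words[i + 1:i + 2]
        best = None
        for sense_id, category in senses.items():
            for n in neighbours:
                for n_category in sense_inventory.get(n, {}).values():
                    if best is None and ontology_related(category, n_category):
                        best = sense_id  # first ontology match wins
        resolved[word] = best
    return resolved

# "paatra" is ambiguous (river basin vs. eligible person); "nadii" (river)
# shares an ontology category with the basin sense.
inventory = {
    "paatra": {"basin": "WATER", "eligible": "PERSON"},
    "nadii": {"river": "WATER"},
}
result = disambiguate(["nadii", "paatra"], inventory, lambda a, b: a == b)
```

With "nadii" as context, the WATER category is shared, so the basin sense of "paatra" is selected.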
Two classes of rules are framed:
1. Word Rules
2. Sentence Rules
Rules are applied to each word irrespective of its position and type (position refers to the place of the word in the given text, and type refers to its grammatical category).
The check-for-ambiguity module of Phase II checks for the following results:
i) If a list of multiple possible senses is obtained, it indicates the presence of multiple ontologies.
ii) For a single word, multiple rules may provide answers, indicating ambiguity.
The GUI of our system is shown in Figure 1 below. It includes buttons for all the framed rules, categorized into word rules and sentence rules, and shows the list of ambiguous words. Buttons invoking the parsers for nouns, verbs, adjectives, and adverbs are also available, and the GUI shows the time required to process the sentence or word in milliseconds. The GUI provides a keyboard for entering the Marathi text to be checked for ambiguity. Figure 1 provides a snapshot of the GUI of our system.
Every word's POS and morphological details, along with its ontological information, are displayed on screen, and the possible senses of each word, including all the senses of ambiguous words, are shown on the GUI.
Figure 1: GUI design of our system
Word Rules: Some words are not directly available in the dataset because they carry suffixes, and it is therefore necessary to separate the suffix from the root of such words. Words suffixed with special words have a sense different from that of their root word, so word sense rules must be applied to identify their correct sense.
Sentence Rules: A sentence contains more than one word, and for each word of the sentence we analyse which sense it belongs to. Sometimes a word's exact meaning depends on the adjacent words, so the correct meaning of such words cannot be obtained using the Word Sense Rules of Phase II; examples are "pandharaa ranga" (white colour), "aajaparyanta" (until today), etc. When more than one word of a sentence is used to fix the correct sense of the sentence, the rule is called a Sentence Sense Rule.
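The two rule classes can be illustrated with a hypothetical Python sketch. The suffix list and collocation table below are invented for illustration (using the transliterated examples from the text); the actual rules are generated from Marathi grammatical and relational information:

```python
# Illustrative suffix table; the real rule set for Marathi is far larger.
SUFFIXES = ["paryanta", "ne", "la"]

def word_rule(word, lexicon):
    """Word Rule sketch: when a word is absent from the dataset, strip a
    known suffix and look up the root word instead."""
    if word in lexicon:
        return word, None
    for suffix in SUFFIXES:
        root = word[:-len(suffix)]
        if word.endswith(suffix) and root in lexicon:
            return root, suffix
    return None, None

def sentence_rule(words, collocations):
    """Sentence Rule sketch: a known multi-word pattern, such as
    'pandharaa ranga' (white colour), fixes the sense of each word in it."""
    senses = {}
    for i in range(len(words) - 1):
        pair = (words[i], words[i + 1])
        if pair in collocations:
            senses[words[i]] = collocations[pair]
            senses[words[i + 1]] = collocations[pair]
    return senses

# 'aajaparyanta' (until today) is resolved by stripping the suffix;
# the adjacent pair fixes the sense of 'ranga' (colour).
root, suffix = word_rule("aajaparyanta", {"aaja", "ranga"})
pair_senses = sentence_rule(["pandharaa", "ranga"],
                            {("pandharaa", "ranga"): "white-colour"})
```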
4.2 Output of the System
The system developed is a knowledge-based system which uses the Wordnet 1.3 ontologies for the Marathi language. The WSD system performs disambiguation for Marathi at the word level, i.e. it resolves lexical and semantic ambiguity.
The output of the disambiguation process is shown in the following snapshot. The system GUI consists of a number of rule lists (word rules, sentence rules, pattern rules) and word lists per grammatical category.
In the output, word details are displayed such as:
Root word
POS tag of the word
Possible senses
Correct senses
Ontology reference, etc.
Figure 2: Snapshot of output
5. CONCLUSION
The system discussed in this paper is a knowledge-based solution to the problem of word sense disambiguation in the Marathi language. At present the WSD system works at the word level. The system accuracy is around 92%, covering the disambiguation of nouns, adjectives, and verbs in the given Marathi language text. The limitation of the system is that it can only identify and resolve word-level ambiguity.
ACKNOWLEDGEMENT
We wish to express our gratitude towards the IIT Mumbai CFILT experts for the valuable guidance extended to our work on WSD for Marathi.
REFERENCES
[1] www.oxforddictionary.com
[2] Dixit, Veena, Dethe, Satish and Joshi, Rushikesh K. (2006), "Design and Implementation of a Morphology-based Spellchecker for Marathi, an Indian Language", in Special Issue on Human Language Technologies as a Challenge for Computer Science and Linguistics, Part I, Archives of Control Sciences, 15, pages 309-316.
[3] http://www.glottopedia.org/index.php/Ambiguity,_Polysemy_and_Vagueness
[4] John Lyons, "Introduction to Theoretical Linguistics", Press Syndicate of the University of Cambridge, digital printing 2001 (Google Books).
[5] Diane D. Bornstein, "An Introduction to Transformational Grammar", University Press of America (Google Books).
[6] Lesk, M, "Automatic sense disambiguation using
machine readable dictionaries: how to tell a pine cone
from an ice cream cone", In SIGDOC '86: Proceedings
of the 5th annual international conference on Systems
documentation, pages 24-26, New York, NY, USA.
ACM, (1986)
[7] Walker D. and Amsler R. 1986, "The Use of Machine
Readable Dictionaries in Sublanguage Analysis", in
Analyzing Language in Restricted Domains, Grishman
and Kittredge (eds), LEA Press, pp. 69-83, 1986.
[8] Agirre, Eneko and German Rigau, "Word sense disambiguation using conceptual density", in Proceedings of the 16th International Conference on Computational Linguistics (COLING), Copenhagen, Denmark, 1996.
[9] http://www.cfilt.iitb.ac.in/wordnet/webhwn/
[10]http://www.cfilt.iitb.ac.in/wordnet/webmwn/
[11]Rohit Sharma, “Word Sense Disambiguation for Hindi
Language”, Thesis dissertation, Thapar University,
Patiala. July 2008.
[12]Saif Mohammad, “Combining Lexical and Syntactic
Features for Supervised Word Sense Disambiguation”,
Thesis dissertation, University of Minnesota, July 2003.
[13]Yarowsky, David, "Unsupervised word sense
disambiguation rivaling supervised methods",
Proceeding of the 33rd Annual Meeting of the
Association for Computational Linguistics (ACL),
Cambridge, MA, 189-196, 1995.
[14]Philip Resnik, "Selectional preference and sense
disambiguation", Proceedings of the ACL SIGLEX
Workshop on Tagging Text with Lexical
Semantics:Why, What, and How, 1997, pages 52–57.
[15]Jun Fu Cai, Wee Sun Lee, "Improving Word Sense
Disambiguation Using Topic Features", Proceedings of
the 2007 Joint Conference on Empirical Methods in
Natural Language Processing and Computational
Natural Language Learning, pp. 1015–1023, Prague,
June 2007. http://anthology.aclweb.org/D/D07/D07-1.pdf#page=1049
[16]D. Lin, "Dependency-based evaluation of minipar",
Proceedings of the Workshop on the Evaluation of
Parsing Systems, First International Conference on
Language Resources and Evaluation, Granada, Spain,
May, 1998.
[17]Jacobs, Paul S, "Trump: A transportable language
understanding program", International Journal of
Intelligent Systems, 7:245-276, March 1992