The purpose of the current paper is to present an ontological approach to the identification of a particular type of prepositional figure of speech via the detection of inconsistencies among ontological concepts. Prepositional noun phrases are widely used across many domains to describe real-world events and activities. One aspect that makes a prepositional noun phrase poetical, however, is that it suggests a semantic relationship between concepts that does not hold in the real world. The current paper shows that a set of rules based on WordNet classes, together with an ontology representing human behaviour and properties, can be used to identify figures of speech through the discrepancies in the semantic relations of the concepts involved. Based on this realization, the paper describes a method for distinguishing poetic from non-poetic prepositional figures of speech using WordNet class hierarchies. The paper also addresses the inconsistency that results from asserting figures of speech in ontological knowledge bases, identifying the problems involved in their representation. Finally, it discusses how a contextualized approach might help to resolve this problem.
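The core idea of such a rule set can be sketched in a few lines: a prepositional noun phrase is flagged as potentially figurative when the top-level class of the preposition's object is incompatible with the class the preposition literally selects for. The sketch below is only an illustration of that idea, not the paper's actual rules; the tiny hand-coded hierarchy stands in for WordNet hypernym chains, and the words and class labels are hypothetical examples.

```python
# Illustrative sketch: flag a prepositional noun phrase as potentially
# figurative when the head noun's most general class violates the
# selectional restriction of the preposition.
# The toy hierarchy below stands in for WordNet hypernym chains.

HYPERNYMS = {
    "river": ["river", "stream", "body_of_water", "physical_entity"],
    "grief": ["grief", "emotion", "feeling", "abstraction"],
    "house": ["house", "building", "artifact", "physical_entity"],
}

# Classes that the preposition "in" literally selects for its object:
# a spatial container must be a physical entity (hypothetical rule).
LITERAL_OBJECT_CLASSES = {"in": {"physical_entity"}}

def top_class(noun):
    """Return the most general class on the noun's hypernym chain."""
    return HYPERNYMS[noun][-1]

def is_figurative(prep, obj_noun):
    """True when the object's top class is outside the preposition's
    literal selectional restriction, e.g. 'drowning in grief'."""
    allowed = LITERAL_OBJECT_CLASSES[prep]
    return top_class(obj_noun) not in allowed

print(is_figurative("in", "house"))  # literal: a house is a physical entity
print(is_figurative("in", "grief"))  # figurative: grief is an abstraction
```

In a full implementation the hierarchy lookups would be replaced by real WordNet hypernym queries, and the selectional restrictions would come from the behaviour-and-properties ontology the paper describes.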
This document discusses metaphors and similes in literature. It defines metaphors as implied comparisons that identify one thing in terms of another, while defining similes as explicit comparisons using words like "like" or "as". The document examines different classifications of metaphors and similes proposed by various linguists. It explores how metaphors and similes can effectively convey meaning in a concise manner in literary texts. The conclusion is that figures of speech are imaginative tools that allow writers to implicitly or indirectly convey messages.
Idealization in cognitive and generative linguistics (Barbara Konat)
The aim of this presentation is to compare processes of idealization and concretization in cognitive and generative linguistics.
According to idealization theory developed by Leszek Nowak (1977) each science uses methods of idealization in order to construct scientific models. The application of idealization theory presented here reveals steps of idealization (and respectively – concretization) in two linguistic approaches.
This document discusses representational theories of meaning, which hold that the meaning of linguistic expressions is based on mental representations or concepts in the mind. It presents the view that words are meaningful because they are associated with concepts, and that the relationship between language and the world is indirect, mediated by concepts in the mind. It examines different theories of what concepts could be, such as images, and notes that concepts must be more abstract than concrete images. Finally, it discusses two theories of concepts: the classical view that a concept is a set of necessary and sufficient features, and objections to this view, noting concepts have fuzzy boundaries and we often use words without full definitions.
Cognitive grammar is a cognitive linguistic theory developed by Ronald Langacker that views grammar as symbolic structures pairing semantic and phonological representations. It considers the basic units of language to be conventional pairings of semantic and phonological structures. Like construction grammar, cognitive grammar views grammar as extending to the entire language. Langacker developed the theory in his two volume work Foundations of Cognitive Grammar. He assumes linguistic structures are motivated by general cognitive processes and draws on principles of gestalt psychology.
This document provides an overview of cognitive linguistics and its approach to language and language learning. It discusses key aspects of cognitive linguistics including word grammar theory, which views language as a network of words and meanings rather than as a set of rules. The document also outlines how cognitive linguistics informs communicative language teaching practices through principles such as using meaningful tasks and focusing on language function and communication. Examples are provided to illustrate how vocabulary and grammar can be understood as interconnected networks.
Cognitive linguistics emerged from developments in linguistics in the 1960s-1970s. It views language as grounded in human cognition and experience rather than as an autonomous system. Some key principles of cognitive linguistics include: meaning arises from conceptualization rather than being truth-conditional; semantics is encyclopedic rather than compositional; and linguistic knowledge comes from language usage. Cognitive linguistics investigates various topics like categorization, grammar theories, discourse analysis, and language acquisition using these principles.
This document provides an overview of cognitive grammar, which views grammar as fully reducible to assemblies of symbolic structures linking semantic and phonological representations. It claims that grammar forms a continuum with the lexicon and is meaningful. Grammatical units pair semantic and phonological structures, just as lexical items do. Grammar is shaped by semantic and pragmatic factors. Cognitive grammar analyzes linguistic units in terms of conceptualization, construal, and symbolic linking between semantic and phonological structures. It views grammar as a means of symbolizing conceptual content through imagic construals.
The document provides an overview of cognitive approaches to grammar. It discusses key concepts in cognitive grammar including that grammar involves symbolic units representing form-meaning pairings. It also discusses how cognitive grammar is based on two commitments: a generalization commitment where cognitive linguistics aims to identify general principles underlying all aspects of language, and a cognitive commitment where theories align with what is known about the human mind and brain. The document outlines concepts in Langacker's cognitive grammar theory including attention, focal adjustments of selection, perspective and abstraction.
Construction grammar views grammatical constructions as the basic units of syntax, where a construction is a form-meaning pairing. It denies a strict distinction between lexicon and syntax, instead proposing a continuum. There are different models of how constructions are organized in the grammar, including a full entry model with minimal generalization, a usage-based model based on inductive learning, and models with varying degrees of inheritance. Construction grammar rejects derivation and adheres to no synonymy, viewing related constructions like active and passive as distinct. Various approaches within construction grammar emphasize different aspects, such as form (Berkeley construction grammar), psychological plausibility (Goldbergian construction grammar), or embodiment. The strength of construction grammar is its claim that many grammatical units…
This document discusses language as an embryonic morphogenetic field, similar to how cells in a biological field respond to signals to develop structures. It proposes language develops through stages in a flexible field of possibilities constrained by rules. A table is presented modeling the English language field with the alphabet distributed in cells representing ratios that encode differences, like the differences between phonemes that get incorporated into words. The document argues current models of language are limited and reductionist, and a new model is needed to better understand language and improve mutual understanding between people.
Models and Metaphors: Studies in Language and Philosophy, by Max Black (marce c.)
This document is the preface to a book titled "Models and Metaphors" by Max Black. It is a collection of essays written since his previous book in 1954 that explore the relationship between language and philosophical problems. Though the topics covered are wide-ranging, Black hopes there is a consistent focus on how language bears on philosophical issues. He is grateful to students, colleagues, and publishers who have provided feedback and permission to reprint the essays. The book is dedicated to Susanna and David.
Ontology as a formal one. The language of ontology as the ontology itself: th... (Vasil Penchev)
“Formal ontology” was first introduced into programming languages in different ways. The sense most relevant to philosophy is as a generalization of “nth-order logic” and “nth-level language” for n=0. The “zero-level language” is then a theoretical reflection of the naïve attitude to the world: “things and words” coincide by themselves. That approach corresponds directly to the philosophical phenomenology of Husserl or the fundamental ontology of Heidegger. Ontology as the 0-level language may thus be researched as a formal ontology.
Investigating the Difficulties Faced by Iraqi EFL Learners in Using Light Verbs
Zahraa Adnan Fadhil Al-Murib,
Department of English, College of Education, Islamic University, Babylon, Iraq
This study deals with light verb constructions as being syntactically functional and semantically bleached. It aims at investigating the difficulties faced by Iraqi EFL learners (fourth-year university students) in recognizing and producing this kind of construction. To achieve this aim, it is hypothesized that Iraqi EFL learners are unable to collocate the light verb with its proper noun phrase at both the recognition and the production levels. In order to achieve the aim of the study and examine its hypothesis, the following procedures are adopted: (1) presenting a theoretical background about light verb constructions, including their definition, structure, and their relationship with collocations; (2) designing a diagnostic test to investigate the difficulties that might face the learners while being exposed to these constructions; (3) analyzing the findings of the test to reveal the learners' recognition and performance in dealing with light verb constructions.
Keywords: Light Verb Constructions, Collocations, Difficulties
The Sixth International Conference on Languages, Linguistics, Translation and Literature
9-10 October 2021, Ahwaz
For more information, please visit the conference website:
WWW.LLLD.IR
Here is a definition of a sentence that aligns with my theory:
A sentence is a string of words that expresses a complete thought and is acquired by an MLAS system through its experiences of language used in context. Unlike a linguist's definition based on grammatical rules, a sentence for MLAS is not defined by its constituent parts but rather by whether it is used meaningfully in a context that an MLAS system encounters in its language learning simulations. The boundaries of what constitutes a sentence for MLAS may differ from linguist to linguist based on their theoretical frameworks, as MLAS generates sentences based on usage rather than attempting to fit language data into a predefined grammatical model.
A presentation focusing on Lev Vygotsky's psycholinguistic theory and how it may be applied through using poetry, haiku, readers' theatre and other dramatic vehicles.
APPLICATION OF DIMENSIONS OF MEANING ON JAMES JOYCE’S A PORTRAIT OF ARTIST AS... (Fatima Gul)
This document discusses key dimensions of meaning in semantics and applies them to analyze Chapter 1 of James Joyce's novel A Portrait of the Artist as a Young Man. It discusses several important dimensions of meaning, including reference and denotation, connotation, and sense relations. It then analyzes examples from Chapter 1, showing how characters and events reference real objects and concepts, and how different characters' views connote positive or negative associations. The analysis provides insight into how dimensions of meaning contribute to understanding the novel's communication of ideas.
PATENT DOCUMENT SUMMARIZATION USING CONCEPTUAL GRAPHS (ijnlc)
In this paper, a methodology to mine concepts from documents and use them to generate an objective summary of the claims section of patent documents is proposed. The Conceptual Graph (CG) formalism proposed by Sowa (Sowa 1984) is used in this work for representing the concepts and their relationships. Automatic identification of concepts and conceptual relations from text documents is a challenging task. This work focuses on the analysis of patent documents, mainly on the claims section. There are several complexities in the writing style of these documents, as they are technical as well as legal. It is observed that the general in-depth parsers available in the open domain fail to parse the claims-section sentences in patent documents. The failure of in-depth parsers has motivated us to develop a methodology to extract CGs using other resources. Thus, in the present work, shallow parsing, NER, and machine learning techniques are used for extracting concepts and conceptual relationships from sentences in the claims section of patent documents. This paper therefore discusses i) the generation of a CG, a semantic network, and ii) the generation of an abstractive summary of the claims section of the patent. The aim is to generate a summary that is 30% of the whole claims section. Restricted Boltzmann Machines (RBMs), a deep learning technique, are used for automatically extracting CGs. We have tested our methodology on a corpus of 5000 patent documents from the electronics domain. The results obtained are encouraging and are comparable with state-of-the-art systems.
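The "30% of the claims section" target can be illustrated with a much simpler extractive step than the paper's CG/RBM pipeline: score each claim sentence and keep the top-scoring 30%, preserving original order. The frequency-based scorer and the stopword list below are stand-ins of my own, not the authors' method.

```python
# Minimal sketch of the 30%-selection step only: rank claim sentences
# by average content-word frequency and keep the top ceil(30%) of them
# in their original order. (The paper itself scores sentences via
# conceptual graphs and RBMs; this scorer is a simple stand-in.)
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "said"}

def summarize(sentences, ratio=0.3):
    # Corpus-wide content-word frequencies.
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(s):
        toks = [w.lower().strip(".,") for w in s.split()]
        content = [w for w in toks if w not in STOPWORDS]
        return sum(freq[w] for w in content) / max(len(content), 1)

    # Keep at least one sentence; round the 30% cutoff up.
    k = max(1, math.ceil(len(sentences) * ratio))
    top = sorted(sentences, key=score, reverse=True)[:k]
    return sorted(top, key=sentences.index)  # restore document order
```

For example, `summarize` applied to a four-sentence claim keeps `ceil(4 * 0.3) = 2` sentences, favouring those built from the most frequently recurring claim terms.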
This document discusses the benefits of using comparative analysis when teaching cases in business schools. It argues that case analysis could be supplemented by comparing a few cases using a "compare-and-contrast" method, similar to approaches used in comparative literature. Comparative literature looks at relationships between identity and difference in texts, and how insights can be generated by juxtaposing texts. The document suggests case analysis could similarly benefit from formally comparing and contrasting cases to better understand technical and cultural differences and their influence on interpretation.
This document discusses a constraint on Merge, the binary operation that combines syntactic elements in the Minimalist Program. It argues that Merge is not completely unconstrained and proposes a simple functional/lexical constraint. Under this proposal, only lexical roots (nouns) are inert items that cannot project or combine with other roots via Merge. Functional items can combine with other functional items or with roots via Merge, building syntactic structures. This constrained version of Merge can help explain phenomena like grammaticalization patterns and cross-linguistic typological tendencies in a principled way.
The document discusses coherence and cohesion relations in discourse. It argues that there are two main types of cohesion: referential relations involving anaphora, and semantic/pragmatic relations involving connectives. It also suggests that there is a third type of cohesive marker, indexing or framing relations, signaled by initial adverbials that frame or label the propositions in a series of sentences.
Museums play an important role in China’s communication with the world and demonstrate to some extent China’s soft power. As China further strengthens its international exchanges, more and more people hope to know China through its history and culture. However, due to linguistic and cultural differences, there are many unavoidable problems in translating Chinese relic texts (Source Text, or ST) into English texts (Target Text, or TT). As Eugene Nida argued in his functional equivalence theory, the Target Reader’s (TR) comprehension and appreciation of the translation is significant. Therefore, in translating relic texts, attention should be paid to how the TR can understand and accept the content. This thesis aims at finding proper translation principles and methods by analyzing the translations of the relic texts in Hubei Provincial Museum from the perspective of the core concepts of functional equivalence theory. Through a study of functional equivalence theory, the thesis identifies three principles for translating relic texts: accuracy, readability and acceptability. An analysis of the relic texts of Hubei Provincial Museum has led to several translation methods, including addition, omission, paraphrasing and rewriting, which help to achieve functional equivalence in relic text translation.
ENHANCEMENT OF COGNITIVE ABILITIES OF AN AGENT-ROBOT ON THE BASIS OF IMAGE RE... (ijaia)
The objective of this work is to enhance the cognitive abilities of an agent robot (ARb), a model of a human. This article considers a new approach to ARb simulation, consisting of the utilization of a direct analogy between the functions of specific organs of the human brain/head and the "brain/head" of a model, between the life of a human and the "life" of a model. The model of a Homo Sapiens is constructed as an intellectual agent integrated with a humanoid robot. The task set is to construct an ARb that must be able to read and write, to understand its situation in the world, the meaning of its actions, and the semantics of the text it processes.
An ARb achieves the high cognitive abilities mentioned above because, like a human, from the moment of its "birth" it learns a natural language, first and foremost such as the names of objects and phenomena it sees and recognises. It is proposed that an ARb will be taught, including language teaching, in a group of both fellow robots and humans, under the tutelage of a teacher.
The growth of the cognitive abilities of an ARb is also achieved thanks to the evolution of a "population of reproducing agents" (our article [1]). For "reproduction", it is proposed that we should separate the "private life" of an ARb from its operating functions ("service"), i.e. to separate the "private" and the "service" spheres in the ARb software.
An in-depth analysis of the manifestation of emotions and ideas through simil... (Alexander Decker)
This document discusses the use of similes in literature. It provides background on different types of meanings in language and defines similes as a figure of speech that makes an explicit comparison between two things using words like "like" or "as". The document then discusses various classifications of similes and their functions, including conveying meaning efficiently and having psychological or emotional effects on readers. Finally, it outlines questions that will guide an analysis of the similes used by author Somerset Maugham in his short stories to understand the objects/phenomena compared, meanings created, and intended effects on readers.
This document provides a comparison of English and Vietnamese metaphors for expressing happiness. It finds several conceptual metaphors that are shared between the two languages, including "Happiness is up", where happiness is associated with vertical elevation, and "Happiness is light", where happiness is equated with brightness. The document also examines explanations for both similarities and differences in emotional metaphors between cultures, noting the influence of personal, social, and historical experiences.
Cohesion refers to the use of linguistic devices like pronouns, conjunctions and lexical repetition to link sentences together, while coherence depends on the reader interpreting how the ideas in a text are related. A text can be cohesive through these linking devices without necessarily making logical sense or being coherent. Coherence is achieved when the reader can interpret how each sentence relates semantically to the overall meaning of the text.
The document discusses three levels of analysis for syntax:
1) The grammatical structure of sentences
2) The semantic structure of sentences
3) The organization of utterances
It argues that distinguishing these levels can avoid confusion in syntactic discussions. Analyzing sentences requires looking at interactions between the levels rather than separating them. The levels provide different but related perspectives for understanding language.
This document discusses lexicalization patterns between semantic elements and surface expressions in language. It outlines an approach to analyzing which semantic concepts are expressed by certain grammatical elements. Specifically, it will examine how verb roots and "satellites" convey semantic categories like motion, path, figure and ground. The analysis aims to identify typological patterns and universal principles in how different languages associate meanings with forms.
This document discusses different perspectives on the nature and scope of linguistic description. It summarizes Chomsky's view that language is an innate cognitive ability and reflects universal features of the human mind. However, it also discusses alternative views that see language as serving social functions of communication and control. Specifically, it outlines Halliday's view that language has ideational, interpersonal, and textual functions. The document also discusses the need to view linguistic competence as including both abstract knowledge and the ability to use that knowledge communicatively according to social conventions. Finally, it defines the concepts of syntagmatic and paradigmatic relationships that describe how linguistic units combine horizontally in language.
Construction grammar views grammatical constructions as the basic units of syntax, where a construction is a form-meaning pairing. It denies a strict distinction between lexicon and syntax, instead proposing a continuum. There are different models of how constructions are organized in the grammar, including a full entry model with minimal generalization, a usage-based model based on inductive learning, and models with varying degrees of inheritance. Construction grammar rejects derivation and adheres to no synonymy, viewing related constructions like active and passive as distinct. Various approaches within construction grammar emphasize different aspects, such as form (Berkeley construction grammar), psychological plausibility (Goldbergian construction grammar), or embodiment. The strength of construction grammar is its claim that many grammatical units
This document discusses language as an embryonic morphogenetic field, similar to how cells in a biological field respond to signals to develop structures. It proposes language develops through stages in a flexible field of possibilities constrained by rules. A table is presented modeling the English language field with the alphabet distributed in cells representing ratios that encode differences, like the differences between phonemes that get incorporated into words. The document argues current models of language are limited and reductionist, and a new model is needed to better understand language and improve mutual understanding between people.
Black max models-and_metaphors_studies_in_language and philosophymarce c.
This document is the preface to a book titled "Models and Metaphors" by Max Black. It is a collection of essays written since his previous book in 1954 that explore the relationship between language and philosophical problems. Though the topics covered are wide-ranging, Black hopes there is a consistent focus on how language bears on philosophical issues. He is grateful to students, colleagues, and publishers who have provided feedback and permission to reprint the essays. The book is dedicated to Susanna and David.
Ontology as a formal one. The language of ontology as the ontology itself: th...Vasil Penchev
“Formal ontology” is introduced first to programing languages in different ways. The most relevant one as to philosophy is as a generalization of “nth-order logic” and “nth-level language” for n=0. Then, the “zero-level language” is a theoretical reflection on the naïve attitude to the world: the “things and words” coincide by themselves. That approach corresponds directly to the philosophical phenomenology of Husserl or fundamental ontology of Heidegger. Ontology as the 0-level language may be researched as a formal ontology
Investigating the Difficulties Faced by Iraqi EFL Learners in Using Light Verbs
Zahraa Adnan Fadhil Al-Murib,
Department of English, College of Education, Islamic University, Babylon, Iraq
This study deals with light verb constructions as being syntactically functional and semantically bleached. It aims at investigating the difficulties faced by Iraqi EFL Learners (fourth-year university students) in recognizing and producing this kind of constructions. Thus, to achieve this aim, it is hypothesized that Iraqi EFL Learners are unable to collocate the light verb with its proper noun phrase both at the recognition and the production levels. And in order to achieve the aim of the study and examine its hypothesis, the following procedures are adopted: (1) Presenting a theoretical background about light verb constructions including their definition, structure, and their relationship with collocations, (2) Designing a diagnostic test to investigate the difficulties that might face the learners while being exposed to these constructions, (3) Analyzing the findings of the test to reveal the learner’s recognition and performance in dealing with light verb constructions.
Keywords: Light Verb Constructions, Collocations, Difficulties
The Sixth International Conference on Languages, Linguistics, Translation and Literature
9-10 October 2021 , Ahwaz
For more information, please visit the conference website:
WWW.LLLD.IR
Here is a definition of a sentence that aligns with my theory:
A sentence is a string of words that expresses a complete thought and is acquired by an MLAS system through its experiences of language used in context. Unlike a linguist's definition based on grammatical rules, a sentence for MLAS is not defined by its constituent parts but rather by whether it is used meaningfully in a context that an MLAS system encounters in its language learning simulations. The boundaries of what constitutes a sentence for MLAS may differ from linguist to linguist based on their theoretical frameworks, as MLAS generates sentences based on usage rather than attempting to fit language data into a predefined grammatical model.
A presentation focusing on Lev Vygotsky's psycholingusitic theory and how it may be applied through using poetry, haiku, readers' theatre and other dramatic vehicles.
APPLICATION OF DIMENSIONS OF MEANING ON JAMES JOYCE’S A PORTRAIT OF ARTIST AS...Fatima Gul
This document discusses key dimensions of meaning in semantics and applies them to analyze Chapter 1 of James Joyce's novel A Portrait of the Artist as a Young Man. It discusses several important dimensions of meaning, including reference and denotation, connotation, and sense relations. It then analyzes examples from Chapter 1, showing how characters and events reference real objects and concepts, and how different characters' views connote positive or negative associations. The analysis provides insight into how dimensions of meaning contribute to understanding the novel's communication of ideas.
PATENT DOCUMENT SUMMARIZATION USING CONCEPTUAL GRAPHS (ijnlc)
In this paper, a methodology to mine concepts from documents and use these concepts to generate an objective summary of the claims section of patent documents is proposed. The Conceptual Graph (CG) formalism proposed by Sowa (Sowa 1984) is used in this work for representing concepts and their relationships. Automatic identification of concepts and conceptual relations from text documents is a challenging task. In this work the focus is on the analysis of patent documents, mainly on the claims section (Claim) of the documents. There are several complexities in the writing style of these documents, as they are technical as well as legal. It is observed that the general in-depth parsers available in the open domain fail to parse the claims-section sentences in patent documents. The failure of in-depth parsers has motivated us to develop a methodology to extract CGs using other resources. Thus, in the present work, shallow parsing, NER and machine learning techniques are used for extracting concepts and conceptual relationships from sentences in the claims section of patent documents. This paper therefore discusses (i) generation of a CG, a semantic network, and (ii) generation of an abstractive summary of the claims section of the patent. The aim is to generate a summary which is 30% of the whole claims section. Here we use Restricted Boltzmann Machines (RBMs), a deep learning technique, for automatically extracting CGs. We have tested our methodology using a corpus of 5000 patent documents from the electronics domain. The results obtained are encouraging and are comparable with state-of-the-art systems.
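The abstract targets a summary that is 30% of the claims section. The paper's own pipeline (CGs extracted with RBMs) is not reproduced here; as a hedged stand-in, the sketch below shows only the simpler extractive idea of keeping the top ~30% of sentences by a word-frequency score. The sample claim text and scoring are illustrative assumptions, not from the paper.

```python
import re
from collections import Counter

def summarize(text, ratio=0.3):
    """Keep the highest-scoring ~`ratio` share of sentences
    (word-frequency scoring as a toy stand-in for CG-based ranking)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.;])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {i: sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))
              for i, s in enumerate(sentences)}
    k = max(1, round(len(sentences) * ratio))
    keep = sorted(sorted(scores, key=scores.get, reverse=True)[:k])
    return " ".join(sentences[i] for i in keep)

# hypothetical claim text: 10 sentences, so a 30% summary keeps 3
claim = ("A device comprising a sensor. The sensor measures voltage. "
         "The device includes a display. The display shows the voltage. "
         "A battery powers the device. The battery is rechargeable. "
         "The casing is plastic. The casing protects the sensor. "
         "The device has a button. The button resets the sensor.")
summary = summarize(claim)
```

The 30% target from the abstract shows up as `k = round(len(sentences) * ratio)`.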
This document discusses the benefits of using comparative analysis when teaching cases in business schools. It argues that case analysis could be supplemented by comparing a few cases using a "compare-and-contrast" method, similar to approaches used in comparative literature. Comparative literature looks at relationships between identity and difference in texts, and how insights can be generated by juxtaposing texts. The document suggests case analysis could similarly benefit from formally comparing and contrasting cases to better understand technical and cultural differences and their influence on interpretation.
This document discusses constraint on Merge, the binary operation that combines syntactic elements in the Minimalist Program. It argues that Merge is not completely unconstrained and proposes a simple functional/lexical constraint. Under this proposal, only lexical roots (nouns) are inert items that cannot project or combine with other roots via Merge. Functional items can combine with other functional items or roots via Merge, building syntactic structures. This constrained version of Merge can help explain phenomena like grammaticalization patterns and cross-linguistic typological tendencies in a principled way.
The document discusses coherence and cohesion relations in discourse. It argues that there are two main types of cohesion: referential relations involving anaphora, and semantic/pragmatic relations involving connectives. It also suggests that there is a third type of cohesive marker, indexing or framing relations, signaled by initial adverbials that frame or label the propositions in a series of sentences.
Museums play an important role in China’s communication with the world and demonstrate to some extent China’s soft power. As China further strengthens its international exchanges, more and more people hope to know China through its history and culture. However, due to the lingual and cultural distinctions, there are many unavoidable problems in translating Chinese relic texts (Source Text or ST) into English texts (Target Text or TT). As Eugene Nida said in his functional equivalence theory, the Target Reader’s (TR) comprehension and appreciation of the translation is significant. Therefore, in translating relic texts, attention should be paid to how the TR can understand and accept the content. This thesis aims at finding proper translation principles and methods by analyzing the translations of the relic texts in Hubei Provincial Museum from the perspective of the core concepts of functional equivalence theory. Through a study on the functional equivalence theory, the thesis finds three principles of translating relic texts: accuracy, readability and acceptability. An analysis of the relic texts of Hubei Provincial Museum has led to several translation methods including addition, omission, paraphrasing and rewriting, which help to achieve the functional equivalence of relic texts translation.
ENHANCEMENT OF COGNITIVE ABILITIES OF AN AGENT-ROBOT ON THE BASIS OF IMAGE RE... (ijaia)
The objective of this work is to enhance the cognitive abilities of an agent robot (ARb), a model of a human. This article considers a new approach to ARb simulation, consisting of the utilization of a direct analogy between the functions of specific organs of the human brain/head and the "brain/head" of a model, between the life of a human and the "life" of a model. The model of a Homo Sapiens is constructed as an intellectual agent integrated with a humanoid robot. The task set is to construct an ARb that must be able to read and write, to understand its situation in the world, the meaning of its actions, and the semantics of the text it processes.
An ARb achieves the high cognitive abilities mentioned above because, like a human, from the moment of its "birth" it learns a natural language, first and foremost the names of objects and phenomena it sees and recognises. It is proposed that an ARb will be taught, including language teaching, in a group of both fellow robots and humans, under the tutelage of a teacher.
The growth of the cognitive abilities of an ARb is also achieved thanks to the evolution of a "population of reproducing agents" (our article [1]). For "reproduction", it is proposed that we should separate the "private life" of an ARb from its operating functions ("service"), i.e. to separate the "private" and the "service" spheres in the ARb software.
An in depth analysis of the manifestation of emotions and ideas through simil... (Alexander Decker)
This document discusses the use of similes in literature. It provides background on different types of meanings in language and defines similes as a figure of speech that makes an explicit comparison between two things using words like "like" or "as". The document then discusses various classifications of similes and their functions, including conveying meaning efficiently and having psychological or emotional effects on readers. Finally, it outlines questions that will guide an analysis of the similes used by author Somerset Maugham in his short stories to understand the objects/phenomena compared, meanings created, and intended effects on readers.
This document provides a comparison of English and Vietnamese metaphors for expressing happiness. It finds several conceptual metaphors that are shared between the two languages, including "Happiness is up", where happiness is associated with vertical elevation, and "Happiness is light", where happiness is equated with brightness. The document also examines explanations for both similarities and differences in emotional metaphors between cultures, noting the influence of personal, social, and historical experiences.
Cohesion refers to the use of linguistic devices like pronouns, conjunctions and lexical repetition to link sentences together, while coherence depends on the reader interpreting how the ideas in a text are related. A text can be cohesive through these linking devices without necessarily making logical sense or being coherent. Coherence is achieved when the reader can interpret how each sentence relates semantically to the overall meaning of the text.
The document discusses three levels of analysis for syntax:
1) The grammatical structure of sentences
2) The semantic structure of sentences
3) The organization of utterances
It argues that distinguishing these levels can avoid confusion in syntactic discussions. Analyzing sentences requires looking at interactions between the levels rather than separating them. The levels provide different but related perspectives for understanding language.
This document discusses lexicalization patterns between semantic elements and surface expressions in language. It outlines an approach to analyzing which semantic concepts are expressed by certain grammatical elements. Specifically, it will examine how verb roots and "satellites" convey semantic categories like motion, path, figure and ground. The analysis aims to identify typological patterns and universal principles in how different languages associate meanings with forms.
This document discusses different perspectives on the nature and scope of linguistic description. It summarizes Chomsky's view that language is an innate cognitive ability and reflects universal features of the human mind. However, it also discusses alternative views that see language as serving social functions of communication and control. Specifically, it outlines Halliday's view that language has ideational, interpersonal, and textual functions. The document also discusses the need to view linguistic competence as including both abstract knowledge and the ability to use that knowledge communicatively according to social conventions. Finally, it defines the concepts of syntagmatic and paradigmatic relationships that describe how linguistic units combine horizontally in language.
1) Artificial Intelligence research draws from many disciplines including formal logic, probability theory, linguistics and philosophy. Computational logic combines and improves upon traditional logic and decision theory.
2) The paper argues that the abductive logic programming (ALP) agent model is a powerful model of both descriptive and normative thinking. It includes production systems and is compatible with classical logic and decision theory.
3) The ALP agent model treats beliefs as describing the world and goals as describing how the world should be. Its semantics aim to generate actions and assumptions to make goals and observations true based on beliefs.
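The ALP agent model in point 3 generates assumptions that would make goals true given beliefs. The following toy sketch is not the ALP formalism itself, only a minimal illustration of the abductive idea: backward-chain through rules, and whatever atom has no rule and is not a known fact becomes an assumption. All rule and fact names are hypothetical.

```python
def abduce(goal, rules, facts, assumptions=None):
    """Collect assumptions (abducibles) that would make `goal` true
    under the given rules and facts (tiny backward-chaining sketch)."""
    if assumptions is None:
        assumptions = set()
    if goal in facts:
        return assumptions
    if goal in rules:                      # expand the rule body
        for sub in rules[goal]:
            abduce(sub, rules, facts, assumptions)
        return assumptions
    assumptions.add(goal)                  # no rule and not a fact: assume it
    return assumptions

# hypothetical beliefs: staying dry requires an umbrella; arriving requires walking dry
rules = {"dry": ["umbrella"], "arrive": ["walk", "dry"]}
facts = {"walk"}
assumed = abduce("arrive", rules, facts)
```

To make the goal "arrive" true, the agent must assume "umbrella", mirroring how the ALP semantics generates assumptions to satisfy goals.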
Lecture 2: From Semantics To Semantic-Oriented Applications (Marina Santini)
From the "Natural Language Processing" LinkedIn group:
John Kontos, Professor of Artificial Intelligence
I wonder whether translating into formal logic is nothing more than transliteration, which simply isolates the part of the text that can be reasoned upon using the simple inference mechanism of formal logic. The real problem, I think, lies with the part of the text that CANNOT be translated on the one hand, and the part that changes its meaning due to advances in civilization on the other. My own proposal is to leave NL text alone and try building inference mechanisms for the UNTRANSLATED text depending on the task requirements.
All the best
John
Analysing The Grammar Of Clause As Message In Ann Iwuagwu's Arrow Of Destiny ... (Sarah Morrow)
This document summarizes a research paper that analyzes the grammar of clauses, specifically themes, in Ann Iwuagwu's novel Arrow of Destiny using Systemic Functional Linguistics. The study explores how themes are organized to convey textual meaning and define the author's literary style. Two extracts from the novel are analyzed in terms of topical, interpersonal, structural, and textual theme components and their frequencies. The study finds a predominance of topical unmarked and textual themes, pointing to prominent linguistic and rhetorical features of the novel.
The document discusses ontology and how it relates to building user profiles and updating interest scores. It presents an approach where ontological profiles are built using a reference ontology to classify and categorize web pages. Each user profile consists of concepts from the reference ontology assigned interest scores that are initially set to one. The profiles are treated as networks of concepts and the interest scores are updated using a spreading activation algorithm based on the user's current interactions. When a user selects documents, their profile is updated by modifying the interest scores of existing concepts. This allows the most relevant pages to rise to the top based on the updated interest scores.
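The profile-update mechanism described above (concepts start at an interest score of one; scores of selected concepts are boosted and activation spreads to linked concepts) can be sketched as follows. This is a minimal one-step illustration under assumed concept names and parameters, not the document's actual algorithm.

```python
def spread_activation(profile, links, selected, boost=1.0, decay=0.5):
    """Boost concepts matching the user's current selection, then
    propagate a decayed share of the boost to linked concepts."""
    scores = dict(profile)
    for concept in selected:
        scores[concept] = scores.get(concept, 1.0) + boost     # initial score is 1
        for neighbour in links.get(concept, []):               # one-step spread
            scores[neighbour] = scores.get(neighbour, 1.0) + boost * decay
    return scores

# hypothetical reference-ontology fragment: "football" is linked to "sports"
profile = {"sports": 1.0, "football": 1.0, "politics": 1.0}
links = {"football": ["sports"]}
updated = spread_activation(profile, links, selected=["football"])
```

After the user selects a football page, "football" and its ontological neighbour "sports" rise above the untouched "politics" score, which is how the most relevant pages rise to the top.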
A NATURAL LOGIC FOR ARTIFICIAL INTELLIGENCE, AND ITS RISKS AND BENEFITS (ijwscjournal)
This paper is a multidisciplinary project proposal, submitted in the hopes that it may garner enough interest to launch it with members of the AI research community along with linguists and philosophers of mind and language interested in constructing a semantics for a natural logic for AI. The paper outlines some of the major hurdles in the way of “semantics-driven” natural language processing based on standard predicate logic and sketches out the steps to be taken toward a “natural logic”, a semantic system explicitly defined on a well-regimented (but indefinitely expandable) fragment of a natural language that can, therefore, be “intelligently” processed by computers, using the semantic representations of the phrases of the fragment.
Critical Narrative Analysis in Linguistics: Analysing a Homodiegetic Rape Nar... (ChisomOgamba)
The document presents a proposed model for conducting a Critical Narrative Analysis (CNA) of narratives in linguistics. The model is developed based on combining the frameworks of Critical Discourse Analysis and Narrative Analysis. It involves two stages: 1) a narrative analysis to shape events into a coherent story, and 2) a three-dimensional analysis involving text analysis, processing analysis, and social analysis based on Fairclough's model for CDA. The model is applied to analyze an excerpt from a rape narrative, identifying linguistic features, narrative elements, and exploring the social context through a sociocultural lens. The analysis finds fillers are most frequent, and examines how language conveys the narrator's experience and societal norms around
The document discusses the concept of linguistic competence as introduced by linguist Noam Chomsky in 1965. Chomsky defined competence as an individual's tacit, unconscious understanding of the rules that govern what is grammatically acceptable in their language. This notion of an innate language faculty was intended to address assumptions about language found in structuralist linguistics. The document then discusses how various theorists have both built upon and critiqued Chomsky's concept of competence, relating it to ideas around performance, norms, and the social institutions that shape language use.
contributions of lexicography and corpus linguistics to a theory of language ... (ayfa)
The document discusses the contributions of lexicography and corpus linguistics to a theory of language performance. It summarizes key points from Noam Chomsky's early work and the subsequent focus on competence over performance in linguistics. While acknowledging Chomsky's influence, it argues that corpus linguistics provides evidence that a theory of language should consider gradations of grammaticality rather than sharp divisions, and focus on what is probable rather than just possible in a language. It also notes that linguistic theory should aim to better characterize the cognitive and social realities of language use.
Semantic interpretation is the process of determining the intended meaning of natural language expressions. It involves resolving ambiguity, where a word, phrase or sentence can have multiple possible meanings. There are different types of ambiguity, including structural ambiguity due to unclear syntactic structure, and lexical ambiguity where a word has multiple meanings. Disambiguation involves combining models of the world, the speaker's mental state, language, and acoustics to determine the most likely intended meaning. Semantic interpretation is important for applications like call routing systems, to understand different ways callers may express the same core meaning.
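The passage above describes disambiguation as combining several models to pick the most likely intended meaning. As a hedged illustration (not any particular system's implementation), the sketch below scores each candidate sense of a word by a prior probability times the likelihood of the observed context words, naive-Bayes style; the senses, priors and likelihoods are invented for the example.

```python
def disambiguate(word, context, priors, likelihood):
    """Pick the most probable sense: P(sense) times the product of
    P(context_word | sense) over the observed context."""
    best, best_score = None, 0.0
    for sense, prior in priors.get(word, {}).items():
        score = prior
        for w in context:
            score *= likelihood.get(sense, {}).get(w, 0.1)  # smoothing floor
        if score > best_score:
            best, best_score = sense, score
    return best

# hypothetical models for the classic lexically ambiguous word "bank"
priors = {"bank": {"finance": 0.6, "river": 0.4}}
likelihood = {"finance": {"loan": 0.8, "water": 0.05},
              "river": {"loan": 0.05, "water": 0.9}}
```

With "water" in the context the river sense wins despite its lower prior, which is exactly the model-combination effect the passage describes.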
Complementation as interpersonal grammar.pdf (JamalAnwar10)
This document discusses complementation as interpersonal grammar. It proposes that complement clauses do not serve as objects in matrix clauses, as is commonly analyzed. Rather, the grammatical relations between matrix and complement clauses are often conjugational relations of framing or scope, which construe interpersonal meaning. Framing relations indicate how a complement clause should be interactively interpreted, while scope relations modify the propositional content or speech-act status of the complement clause. The paper aims to provide evidence against analyses of complement clauses as embedded constituents and argue they involve distinct interpersonal grammatical relations instead.
This document discusses exploring metaphorical data in technical documents. It begins by describing how over 1,800 metaphors were collected from various websites and compiled into a CSV file. These include simple metaphors like "sea of fire" as well as more complex metaphors used by Shakespeare. The document then discusses using regular expressions and the Hadoop framework to analyze technical documents and identify if any of the collected metaphors are present. It summarizes several articles on metaphors, identifying five common types of metaphors used, and discusses rules for identifying metaphors. Finally, it discusses how metaphors are conceptual and how we structure our thinking based on common metaphors like "argument is war" and concepts of time.
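The matching step described above (checking whether any metaphor from the collected list occurs in a document using regular expressions) can be sketched in a few lines. This is a single-machine illustration only; the Hadoop distribution and the actual 1,800-entry CSV are not reproduced, and the sample metaphors and text are assumptions.

```python
import re

def find_metaphors(document, metaphor_list):
    """Scan a document for any metaphor from a pre-compiled list,
    matching whole phrases case-insensitively."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(m) for m in metaphor_list) + r")\b",
        re.IGNORECASE)
    return sorted({m.lower() for m in pattern.findall(document)})

# hypothetical entries standing in for the collected CSV
metaphors = ["sea of fire", "argument is war", "time is money"]
text = "He walked into a sea of fire; for him, time is money."
found = find_metaphors(text, metaphors)
```

In the full pipeline each mapper would apply the same compiled pattern to its shard of documents; the per-document matching logic is unchanged.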
Textual analysis and the importance of texting theory (Shubham168021)
This document defines and discusses analyzing texts rhetorically. It begins by defining a text as a set of symbols that can be read and interpreted. It then lists many types of texts that can be analyzed, from religious to academic works. The document goes on to define analysis as breaking down a text into its essential parts to understand how those parts work together and what is accomplished. Rhetorical analysis specifically considers the context and rhetorical situation of a text. It provides steps for analyzing texts rhetorically and questions to consider in rhetorical analysis.
Word meaning, sentence meaning, and syntactic meaning (Nick Izquierdo)
This document discusses the relationship between word meaning, sentence meaning, and syntactic meaning. It challenges the view that word meaning is the sole source of conceptual content in sentences, and that syntactic structures only provide instructions for combining word meanings. The document argues that syntactic constructions have intrinsic meanings distinct from the words that make them up, and that construction meaning can override word meaning in some cases. It provides examples to demonstrate this, and argues that appeal to constructional meaning enhances theories of sentence semantics.
Similar to AN ONTOLOGICAL ANALYSIS AND NATURAL LANGUAGE PROCESSING OF FIGURES OF SPEECH
PhotoQR: A Novel ID Card with an Encoded View (gerogepatton)
There is an increasing interest in developing techniques to identify and assess data to allow easy and continuous access to resources, services or places that require thorough ID control. Usually, in order to give access to these resources, different kinds of documents are mandatory. In order to avoid forgeries without the need for extra credentials, a new system, named photoQR, is proposed here. This system is based on an ID card having two objects: a person's picture (pre-processed via blur and/or swirl techniques) and a QR code containing embedded data related to the picture. The idea is that the picture and the QR code can assess each other by a proper hash value in the QR. The QR without the picture cannot be assessed, and vice versa. An open-source prototype of the photoQR system has been implemented in Python and can be used both in offline and real-time environments; it effectively combines security concepts and image processing algorithms to obtain data assessment.
12th International Conference of Artificial Intelligence and Fuzzy Logic (AI ... (gerogepatton)
12th International Conference of Artificial Intelligence and Fuzzy Logic (AI & FL 2024) provides a forum for researchers to present their work in a peer-reviewed setting. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to these topics only.
International Journal of Artificial Intelligence & Applications (IJAIA) (gerogepatton)
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of Artificial Intelligence and its applications. It is an international journal intended for professionals and researchers in all fields of AI, as well as programmers and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and its applications.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method detects smart grid intrusions considerably better than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
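The four scores the abstract reports (accuracy, precision, recall, F1) are computed from the confusion-matrix counts of a binary intrusion label. The CNN-LSTM model itself is not reproduced here; this is only a plain-Python sketch of the evaluation step, with invented toy labels.

```python
def detection_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary
    intrusion label (1 = attack, 0 = benign)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# toy labels: the model misses one of three attacks
acc, prec, rec, f1 = detection_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```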
10th International Conference on Artificial Intelligence and Applications (AI... (gerogepatton)
10th International Conference on Artificial Intelligence and Applications (AI 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Artificial Intelligence and its applications. The Conference looks for significant contributions to all major fields of the Artificial Intelligence, Soft Computing in theoretical and practical aspects. The aim of the Conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Immunizing Image Classifiers Against Localized Adversary Attacks (gerogepatton)
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of the volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversary training.
May 2024 - Top 10 Read Articles in Artificial Intelligence and Applications (... (gerogepatton)
3rd International Conference on Artificial Intelligence Advances (AIAD 2024) (gerogepatton)
3rd International Conference on Artificial Intelligence Advances (AIAD 2024) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the area of advanced Artificial Intelligence. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancements in the research area. Core areas of AI, advanced multi-disciplinary topics and their applications will be covered during the conference.
Information Extraction from Product Labels: A Machine Vision Approach (gerogepatton)
This research tackles the challenge of manual data extraction from product labels by employing a blend of computer vision and Natural Language Processing (NLP). We introduce an enhanced model that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) in a Convolutional Recurrent Neural Network (CRNN) for reliable text recognition. Our model is further refined by incorporating the Tesseract OCR engine, enhancing its applicability in Optical Character Recognition (OCR) tasks. The methodology is augmented by NLP techniques and extended through the Open Food Facts API (Application Programming Interface) for database population and text-only label prediction. The CRNN model is trained on encoded labels and evaluated for accuracy on a dedicated test set. Importantly, our approach enables visually impaired individuals to access essential information on product labels, such as directions and ingredients. Overall, the study highlights the efficacy of deep learning and OCR in automating label extraction and recognition.
Research on Fuzzy C-Clustering Recursive Genetic Algorithm based on Cloud Co... (gerogepatton)
Aiming at the problems of poor local search ability and precocious convergence of the fuzzy C-cluster recursive genetic algorithm (FOLD++), a new fuzzy C-cluster recursive genetic algorithm based on Bayesian function adaptation search (TS) is proposed, incorporating the idea of Bayesian function adaptation search into the fuzzy C-cluster recursive genetic algorithm. The new algorithm combines the advantages of FOLD++ and TS. In the early stage of optimization, the fuzzy C-cluster recursive genetic algorithm is used to get a good initial value, and the individual extreme value pbest is put into the Bayesian function adaptation table. In the late stage of optimization, when the searching ability of the fuzzy C-cluster recursive genetic algorithm is weakened, the short-term memory function of the Bayesian function adaptation table is utilized to make the search jump out of the local optimal solution, while allowing bad solutions to be accepted during the search. The improved algorithm is applied to function optimization, and the simulation results show that the calculation accuracy and stability of the algorithm are improved, verifying the effectiveness of the improved algorithm.
International Journal of Artificial Intelligence & Applications (IJAIA)gerogepatton
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Artificial Intelligence & Applications (IJAIA). It is an international journal intended for professionals and researchers in all fields of AI for researchers, programmers, and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and applications.
10th International Conference on Artificial Intelligence and Soft Computing (...gerogepatton
10th International Conference on Artificial Intelligence and Soft Computing (AIS 2024) will
provide an excellent international forum for sharing knowledge and results in theory, methodology, and
applications of Artificial Intelligence, Soft Computing. The Conference looks for significant
contributions to all major fields of the Artificial Intelligence, Soft Computing in theoretical and practical
aspects. The aim of the Conference is to provide a platform to the researchers and practitioners from
both academia as well as industry to meet and share cutting-edge development in the field.
International Journal of Artificial Intelligence & Applications (IJAIA)gerogepatton
Employee attrition refers to the decrease in staff numbers within an organization due to various reasons.
As it has a negative impact on long-term growth objectives and workplace productivity, firms have
recognized it as a significant concern. To address this issue, organizations are increasingly turning to
machine-learning approaches to forecast employee attrition rates. This topic has gained significant
attention from researchers, especially in recent times. Several studies have applied various machinelearning methods to predict employee attrition, producing different resultsdepending on the employed
methods, factors, and datasets. However, there has been no comprehensive comparative review of multiple
studies applying machine-learning models to predict employee attrition to date. Therefore, this study aims
to fill this gap by providing an overview of research conducted on applying machine learning to predict
employee attrition from 2019 to February 2024. A literature review of relevant studies was conducted,
summarized, and classified. Most studies agree on conducting comparative experiments with multiple
predictive models to determine the most effective one.From this literature survey, the RF algorithm and
XGB ensemble method are repeatedly the best-performing, outperforming many other algorithms.
Additionally, the application of deep learning to employee attrition prediction issues also shows promise.
While there are discrepancies in the datasets used in previous studies, it is notable that the dataset
provided by IBM is the most widely utilized. This study serves as a concise review for new researchers,
facilitating their understanding of the primary techniques employed in predicting employee attrition and
highlighting recent research trends in this field. Furthermore, it provides organizations with insight into
the prominent factors affecting employee attrition, as identified by studies, enabling them to implement
solutions aimed at reducing attrition rates.
10th International Conference on Artificial Intelligence and Applications (AI...gerogepatton
10th International Conference on Artificial Intelligence and Applications (AIFU 2024) is a forum for presenting new advances and research results in the fields of Artificial Intelligence. The conference will bring together leading researchers, engineers and scientists in the domain of interest from around the world. The scope of the conference covers all theoretical and practical aspects of the Artificial Intelligence.
International Journal of Artificial Intelligence & Applications (IJAIA)gerogepatton
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Artificial Intelligence & Applications (IJAIA). It is an international journal intended for professionals and researchers in all fields of AI for researchers, programmers, and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and applications.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
AN ONTOLOGICAL ANALYSIS AND NATURAL LANGUAGE PROCESSING OF FIGURES OF SPEECH
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.11, No.1, January 2020
DOI : 10.5121/ijaia.2020.11102
AN ONTOLOGICAL ANALYSIS AND NATURAL
LANGUAGE PROCESSING OF FIGURES OF SPEECH
Christiana Panayiotou
Department of Communications and Internet Studies,
Technological University of Cyprus
ABSTRACT
The purpose of the current paper is to present an ontological approach to the identification of a particular
type of prepositional figure of speech via the detection of inconsistencies in ontological concepts.
Prepositional noun phrases are widely used in a multiplicity of domains to describe real-world events and
activities. However, one aspect that makes a prepositional noun phrase poetical is that it suggests a
semantic relationship between concepts that does not exist in the real world. The current paper shows that
a set of rules based on WordNet classes and an ontology representing human behaviour and properties
can be used to identify figures of speech through the discrepancies in the semantic relations of the concepts
involved. Based on this realization, the paper describes a method for distinguishing poetic from non-poetic
prepositional figures of speech using WordNet class hierarchies. The paper also addresses the problem of
inconsistency resulting from the assertion of figures of speech in ontological knowledge bases, identifying
the problems involved in their representation. Finally, it discusses how a contextualized approach might
help to resolve this problem.
KEYWORDS
Ontologies, NLP, Linguistic creativity
1. INTRODUCTION
Research in computational linguistic creativity has gained renewed attention over the last
decade and falls under the auspices of Artificial Intelligence, Natural Language Processing and
Linguistics. Poetry, as a special form of creative writing, makes intense use of identifiable
linguistic tools such as figures of speech [1]. Poetry is characterized as Art partly due to its
aesthetic qualities, which appeal to the human senses, and partly due to its notional and semantic content.
In [2] poetry is defined as the art form in which human language is used for its aesthetic qualities
in addition to, or instead of, its notional and semantic content. Poetic writings frequently violate
the syntactic, phonological and semantic rules of natural language text [3]. However, they also
possess some distinct characteristics that help to identify poetic writings among other texts and set
the grounds upon which the automatic recognition of poetic phrases can be carried out. Our analysis is
based on the realization that certain literary tools, e.g. figures of speech [4], violate ontological
relationships among concepts in order to cause emotional and cognitive effects. Although we
focused on a very simple subset of these phrases, our ideas can be expanded to more complex
phrases in the future.
As a starting point to our work, a number of simple prepositional noun phrases were extracted
from the Gutenberg files containing poems by William Blake and the Complete Works of
Shakespeare [5]. Then, via the use of WordNet hypernym relations [6], conflict relations were
identified, giving rise to figures of speech [1]. Results were promising since even at this primitive
stage we were able to identify patterns leading to the identification of figures of speech. For
example, let us consider the prepositional noun phrase: “the tent of God” appearing in the poem:
“The Little Black Boy”, by William Blake [8]. This prepositional noun phrase relates the words
“tent” and “God” in the particular type of natural language phrase. The first three superordinates
in the hypernym hierarchy of the word “tent” in WordNet [6] are: “shelter.n.01”,
“structure.n.01”, and “artifact.n.01”. The word “god” has two immediate superordinates (i.e.
the first two hypernyms in its hypernym hierarchy): “spiritual_being.n.01” and “belief.n.01”. If
we were to prototype the relevant phrases for clarity, we would arrive at phrases of the form:
the SHELTER of SPIRITUAL_BEING
the ARTIFACT of SPIRITUAL_BEING
where in place of ARTIFACT we can place any noun word whose hypernym closure includes
“artifact.n.01” e.g. the sense 0 interpretation of noun “tent” in WordNet, and similarly in the
place of SPIRITUAL_BEING any noun word referring to a spiritual object whose hypernym
closure includes “spiritual_being.n.01”. Both of the above prototypes lead to semantic relations
that are counter-intuitive when interpreted in the real world. The extraction of rules aiming to
identify counter-intuitive relations between nouns, leading to the identification of figures of
speech, is discussed in section 3.
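The hypernym-closure check outlined above can be illustrated with a minimal, self-contained sketch. The hypernym table below is a hand-coded fragment of WordNet's hierarchy for the two words of the example, and the single conflict rule is an assumed prototype of the kind of rule derived in section 3; a full system would query WordNet itself (e.g. via NLTK) rather than a hand-built table.

```python
# Minimal sketch: flag "the X of Y" as a candidate figure of speech when
# the hypernym closures of X and Y match a counter-intuitive pattern.
# HYPERNYMS is a hand-coded fragment of WordNet, not the real database.

HYPERNYMS = {
    "tent.n.01": ["shelter.n.01"],
    "shelter.n.01": ["structure.n.01"],
    "structure.n.01": ["artifact.n.01"],
    "god.n.01": ["spiritual_being.n.01"],
    "spiritual_being.n.01": ["belief.n.01"],
}

def hypernym_closure(synset):
    """All superordinates reachable from `synset` in the table."""
    closure, frontier = set(), [synset]
    while frontier:
        for parent in HYPERNYMS.get(frontier.pop(), []):
            if parent not in closure:
                closure.add(parent)
                frontier.append(parent)
    return closure

def counter_intuitive(head, modifier):
    """Assumed rule: 'the ARTIFACT of SPIRITUAL_BEING' is poetic."""
    return ("artifact.n.01" in hypernym_closure(head)
            and "spiritual_being.n.01" in hypernym_closure(modifier))

# "the tent of God" matches the ARTIFACT-of-SPIRITUAL_BEING prototype.
print(counter_intuitive("tent.n.01", "god.n.01"))  # True
```

Any noun whose closure reaches “artifact.n.01” can stand in for `tent`, and any noun whose closure reaches “spiritual_being.n.01” can stand in for `god`, mirroring the prototyped phrases above.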
Our analysis also showed that there are no crisp boundaries in the classification of textual entities and
that the nouns associated in two phrases, one being poetic and another being non-poetic, may
have common hypernyms in their hypernym closure. Also, although WordNet [6] is useful for
relating words to concepts and to concept hierarchies, each synset [6] is connected to other synsets
via the hypernym, hyponym and meronym semantic relations but not any other type of
ontological domain-specific semantic relations. Any other domain-specific ontological relations
need to be addressed via semantically enriched formalisms. Considering different senses [6] of
synsets is beyond the scope of the current work. The purpose of the next section is to introduce
the notion of “figure of speech”, to overview existing work in computational poetry and to refer to
the fuzzy domain approaches towards classification and ontology generation. Section 3 provides a
set of rules underpinning the recognition of figures of speech and provides a method for the
derivation of figures of speech using WordNet hypernyms. It also provides a partially defined
ontology of humanly possessed attributes whose hierarchical concepts can be mapped to
hypernym classes and discusses the notion of consistency. Section 4 discusses practical
considerations regarding consistency handling and representation.
2. BACKGROUND AND RELATED WORK
The main focus of research in computational creativity over the past decade has been in the
creation of models capable of generating poetry and in the classification of poetry. Several
models have been created for the automatic generation of poetry. However, to our awareness, none
of these works addresses the problem of automatic recognition of figures of speech [1] or the
ontological analysis of poetic text. In this section, we discuss some of these approaches and
refer to the properties of poetic text that make poetry generation and understanding a distinct,
challenging problem in its own right.
Part of our work requires the mapping of WordNet entities to an ontology modelling Human
attributes and behavior. Substantial research in the extraction of semantic relations from resources
using fuzzy classification and similarity measures, and in the automatic generation of fuzzy
ontologies, has been reported in the literature, with applications in Information Retrieval [27, 28],
e-Learning [29] and Engineering [30]. The current work takes a semi-automatic iterative approach to
the mapping of WordNet entities to an ontology whereby figures of speech are recognized as
conflicting assertions giving rise to a set of rules used to extract figures of speech. The conflicting
nature of assertions made by figures of speech has its own ontological significance into the
understanding and analysis of processes that lead to poetic texts.
The structure of this section is outlined as follows: In the next subsection we define the meaning
of the terms: figure of speech and personification, by reference to definitions provided by poetry
resources. Then, we discuss some of the most important poetry generation approaches in the
literature, which have provided insight into the characteristics and complexities of poetic writings.
Finally, we refer to important work in the fuzzy domain related to fuzzy classification, similarity
measures and ontology generation.
2.1. Figures of speech
Before we discuss some of the most important works in this area, we need to explain the
meanings of figure of speech [1] and personification [7], used extensively in this work.
Knowledge of these terms is essential to understanding the nature of the problem
addressed, i.e. the ontological analysis of the characteristics of figures of speech that make
them distinct from other textual forms.
Definition 1 (Figure of Speech [1]) A figure of speech is a phrase or word having a meaning
different from its literal meaning. It conveys meaning by identifying or comparing one thing with
another, which has connotation or meaning familiar to the audience. That is why it is helpful in
creating a vivid rhetorical effect.
The above notion forms a particular type of poetry device used to cause aesthetic and cognitive
events due to its paradoxical nature. A vivid example is the case of personification phrases
constituting “a particular type of figures of speech where non-human objects are portrayed in
such a way that we feel they have the ability to act like human beings” [7]. An example of
personification appears in the poem: “Two Sunflowers move in a yellow room” by William Blake,
whereby two sunflowers are assumed to be moving into a room and conversing with the poet,
saying: “Our traveling habits have tired us. Can you give us a room with a view?”. Another
example can be traced to Emily Dickinson's poem: “Because I could not stop for Death”, whereby
Death is personified as shown in the following lines:
“Because I could not stop for Death.
He kindly stopped for me
The Carriage held but just Ourselves
And Immortality.
We slowly drove
He knew no haste. [...]”
In the above example, the lines: “he kindly stopped for me” and “he knew no haste” attribute
human qualities to Death and are vivid examples of personification phrases.
Figures of speech appear frequently in Poetry. Identifying these phrases, and investigating the
semantic relationships of the objects involved in them, sets the ground for further analysis of
poetic text. Neither the extraction nor the identification of these phrases has been addressed
before.
2.2. Existing Work in Computational Poetry
Among the first advocates of automatic generation of poetry is R.W. Bailey [9]. Toivanen et al.
[10] introduced predicates which explicitly recorded the possible words that can exist at each line
and each position in a poem combined with constraints e.g. that only one candidate word can
exist in a particular position in a particular line of a poem. Although this approach is very
important in that it attempts to address the problem of poetry generation by formalizing the
syntactic features characterizing poetry, it suffered from the need to record explicitly all the rules
and predicates about the data.
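The slot-and-candidate idea in [10] can be sketched in a few lines. The candidate words below are invented for illustration; the point is that each (line, position) slot records its possible words, and an instantiation picks exactly one candidate per slot.

```python
import itertools

# Toy sketch of the constraint idea in Toivanen et al. [10]: candidate
# words are recorded per (line, position) slot, and each poem
# instantiation assigns exactly one candidate word per slot.
# The candidate words themselves are invented for illustration.

candidates = {
    (1, 1): ["silent", "golden"],
    (1, 2): ["river", "meadow"],
}

def instantiations(slots):
    """Enumerate all assignments of one candidate word per slot."""
    keys = sorted(slots)
    for combo in itertools.product(*(slots[k] for k in keys)):
        yield dict(zip(keys, combo))

poems = list(instantiations(candidates))
print(len(poems))  # 4: two choices for each of the two slots
```

Real systems add further constraints (rhyme, meter, topic coherence) that prune this combinatorial space, which is exactly where the explicit-predicate bookkeeping becomes burdensome.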
Manurung [11] proposed a poetry generation system which, given some metrical constraints as
input, uses the dynamic programming technique of chart generation to efficiently construct all
valid paraphrases of a natural language utterance. The chart could be used both as a transducer for
the production of logical forms of utterance strings and as a generator from logical forms to strings
[11]. The above technique addresses the issue of semantic meaning and paraphrasing of utterances
via the use of a lexicon whose semantics subsume the semantics of input. With the advent of the
Semantic Web tools and in particular with the introduction of ontologies as a tool enabling the
representation of semantic relations between concepts, it is now possible to enrich the semantics
of the poems generated via the use of a wider range of semantic relations between the concepts
involved.
Another interesting technique was advocated in [12]. The basic strategy adopted in this case, was
to produce poetry in collaboration with the user. The task is accomplished by first parsing the
input line in order to analyze its poetic structure and then generating a new line. The output line
relied on a syllabification engine and a Support Vector Machine (SVM).
Creative text does not conform to the normal production rules governing non-poetic natural
language text, creating an even more challenging problem. Manurung [13] refers to two distinct
aspects of poetry generation that make it a unique and difficult problem in NLP:
1. The interdependent linguistic phenomena and surface constraints due to the 'unity' of poetry.
Unity in this context refers to the fact that every single linguistic decision potentially
determines the success of the poem.
2. The lack of a clear, well-defined communicative goal.
Although poetic text does not admit a precise definition, it satisfies the following properties:
1. Meaningfulness: The text must convey some conceptual message that is meaningful
under some interpretation (this property actually holds for all types of text)
2. Grammaticality: A poem must obey linguistic conventions that are prescribed by a
given grammar and lexicon. Although this property also holds for all types of text,
grammaticality in poetry is probably less constrained than that of ordinary texts and is
governed by figurative language tropes.
3. Poeticness: A poem must exhibit poetic features such as phonetic patterns, rhythmic
patterns and rhyme.
The above properties suggest that the semantic associations and tools used in poetry pose
further challenges for the syntactic analysis and semantic representation of poetic phrases. Our
work focuses on the semantic aspects of a small subset of poetic tools in order to gain a better
understanding of their nature and derive associations that can help in the representation of more
complex phrases. The first two properties refer to all types of natural language text. However, the
extent to which the properties apply in poetic writings may differ. For example, as stated in [13],
“although this property (grammaticality) also holds for all types of text, grammaticality in poetry
is probably less constrained than that of ordinary texts and is governed by figurative language
tropes”. Figurative language tropes, as stated in [14], constitute “a particular type of figures of
speech that play with and shift the expected and literal meaning of words”.
2.3. Fuzzy domain relevant work
One approach to the classification of textual tokens is via the use of fuzzy modelling of the
concepts of the domain. Fuzzy modelling gained interest due to its successful applications in
industry for the solution of complex problems that cannot be resolved with precise mathematical
formulations or logic due to their uncertain and ambiguous nature. Examples of areas in which it
has been used successfully are control systems, pattern recognition, expert systems, etc. The
importance of fuzzy modelling lies in the fact that it tries to emulate human-like reasoning. The
fundamental concept underpinning fuzzy modelling and logic is the fuzzy set. As stated in [32],
in conventional set theory sets of real objects are equivalent to, and isomorphically described by, a
unique membership function. However, the crisp boundaries of the above definition do not lend
themselves easily to real applications since they do not take into account the inherent uncertainty
and ambiguity of real life phenomena. In natural language, linguistic entities are characterized by
ambiguity and therefore strict mathematical formulations are not applicable. Considering a
universe of objects X, “a fuzzy set is a function m: X → [0,1] when, and only when, it matches
some intuitively plausible semantic description of imprecise properties” [32]. This definition
allows an object to belong to more than one fuzzy set. Consequently, it is particularly useful in
the natural language domain, and in particular in the classification and evaluation of similarity of
textual entities, because in this domain there is neither a single, complete knowledge of the
hierarchical relations between classes of words nor a set of strict and precise class definitions
from which to draw inferences. Fuzzy inferences are drawn via fuzzy rules which are
conditioned on fuzzy sets. Obviously, our identified set of rules (discussed in the next section)
that lead to the identification of figures of speech are not strict inference rules.
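The membership-function view of a fuzzy set can be made concrete with a short sketch. The triangular membership shapes and the two linguistic categories below are illustrative assumptions, not taken from [32]; the point is that the same object can belong to several fuzzy sets with different degrees, which crisp sets cannot express.

```python
# Sketch of fuzzy sets as membership functions m: X -> [0,1].
# The triangular shapes and category names are illustrative assumptions.

def triangular(a, b, c):
    """Membership rising from a, peaking at b, falling to zero at c."""
    def m(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return m

# Two overlapping linguistic categories over word-similarity scores.
somewhat_similar = triangular(0.0, 0.4, 0.8)
very_similar = triangular(0.5, 1.0, 1.5)

# A similarity score of 0.6 belongs to BOTH fuzzy sets at once,
# with membership degrees of 0.5 and 0.2 respectively.
score = 0.6
print(somewhat_similar(score), very_similar(score))
```

A fuzzy rule is then simply a rule whose antecedent is evaluated against such membership degrees rather than against a crisp true/false condition.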
Fuzzy modelling where the input datasets are textual resources has been applied in the literature
to the solution of IR and e-Learning problems. These works are concerned primarily with the
generation of fuzzy ontologies, usually by extending traditional formal concept analysis and by employing
fuzzy similarity measures to establish object-membership and hierarchical relations between
concepts. Also, recent works in the engineering domain [30, 31] provide further insight on fuzzy
divergence and similarity among objects, which has not been addressed before [30].
The FOGA framework for the fuzzy ontology extraction [33] builds on Formal Concept Analysis
(FCA) and fuzzy logic. The proposed framework consists of a fuzzy formal analysis component,
a fuzzy conceptual clustering component, and a fuzzy ontology generation component. The Fuzzy
Formal Concept Analysis (FFCA) component extends FCA so that membership to concepts is
represented as a membership value in a fuzzy formal context [33]. In order to avoid the
classification of similar objects into different classes due to small deviations in attribute values,
formal concepts of the fuzzy concept lattice produced by FFCA are clustered by the Fuzzy
Conceptual Clustering component. Finally, the Fuzzy Ontology generation component produces
the fuzzy ontology using the concept hierarchy created by fuzzy conceptual clustering. This is an important framework for fuzzy inference, whose focus is the fuzzy classification of concepts. Our focus in this paper, by contrast, is on the ontological analysis of the conflicts between the assertions made by the literary devices under consideration and a real-domain ontology of human characteristics. Textual data is inherently ambiguous, and fuzzy modelling has proved to be a viable way of quantifying that ambiguity.
In [29], lexical patterns leading to concept identification and extraction are determined by making
use of POS-tagging. Then, statistical measures are used to determine the importance of words,
and co-occurrences. The subsumption relations between concepts are computed according to a fuzzy relation membership function. Once the concepts and the subsumption relations between them are established, a taxonomy of fuzzy domain concepts is extracted via the implementation of a fuzzy ontology construction algorithm.

International Journal of Artificial Intelligence and Applications (IJAIA), Vol.11, No.1, January 2020
In [34] an unsupervised approach to document classification is proposed which, taking advantage of the existence of readymade ontologies from many different domains, assigns to each document a fuzzy mapping degree in the interval [0,1]. Each document considered is mapped to each domain ontology with a certain degree of membership, and the mapping degree of each document to each domain is determined via fuzzy sets. This is important work showing how existing domain ontologies can be used to evaluate similarity with documents without any training data. It is, however, conditioned on the existence of relevant domain ontologies.
An important contribution regarding the assignment of fuzzy similarity relations between objects is reported in [30] and concerns the determination of the physical position, orientation and size of an electrically charged ellipsoidal conductor subjected to an external electric field. Roughly, the electrostatic potential of a given configuration of the ellipsoidal conductor is represented as an m × m image I in which each pixel is assigned a grey level y(i). The membership of each y(i) in I is then established via fuzzy sets. The method achieves results comparable to well-known fuzzy clustering techniques.
The works mentioned above share the characteristic of a fuzzy domain and, although their focus is on fuzzy similarity and the classification of objects in such a domain, their results on the extraction of semantically related concepts from textual resources are relevant to our work.
3. THE USE OF WORDNET IN THE RECOGNITION OF FIGURES OF SPEECH
Our data consists of poems extracted from the NLTK [15] Gutenberg files of William Blake and the complete works of Shakespeare. Information about the poems was extracted via natural language processing methods (e.g. regular expressions, parsing) and inserted into a Python dictionary for further processing, so that, for example, the entry poems_dictionary[k][j] holds information about the j-th line of the k-th poem. The dictionary also includes information about the noun phrases, prepositional phrases, etc. Prepositional phrases were extracted using the Pattern [16] library for Python and grammar patterns. Using WordNet [6] and some basic rules about the categories of words (for example, the three rules below, stated in Python), we have been able to derive a list of phrases adhering to the definition of figures of speech.
def rule1(c1, c2):
    return concept_is(c1, "location") and concept_is(c2, "imaginary_place")

def rule2(c1, c2):
    return concept_is(c1, "physical_entity") and concept_is(c2, "imaginary_place")

def rule3(c1, c2):
    return concept_is(c1, "morality") and concept_is(c2, "feeling")
In Table 1 below we include more examples of rule instances, presented as ordered pairs of concepts, together with examples of figures of speech derived by these rules.
Table 1. Examples of figures of speech derived from a set of conflict-identification rules
Concept1 Concept2 Example
offspring abstraction The daughter of beauty
morality feeling Virtue of Love
physical_entity imaginary_place Sword of Heaven
artifact feeling beams of love
furnishing body_part curtain of flesh
physical_entity spiritual_being vales of har
The above analysis encouraged us to develop an algorithm for extracting the pairs of hypernyms that determine the type of a phrase (i.e. poetic vs. non-poetic), represented as a relation between noun concepts c1 and c2. Let us denote the set of all such relations as Relations. The steps of the algorithm are outlined in the following subsection, together with the relevant definitions, for the clarification of the method used.
3.1. Derivation of poetic vs. non-poetic figures of speech
Firstly, we define the associated-to relation for pairs of hypernyms, as follows:
Definition 2 (relation Assoc) (hi, hj) is associated to r, where r = (c1, c2) ∈ Relations, denoted as Assoc((hi, hj), r), iff there exists p1 ∈ hypernym_paths(c1) s.t. hi ∈ p1 and there exists p2 ∈ hypernym_paths(c2) s.t. hj ∈ p2.
Informally, the term hypernym paths refers to a set of paths, where each path is a list of synsets ordered via the taxonomic is-a relation from a lexical entity to its root. In compliance with WordNet, we use the function hypernym_paths(l) in what follows to obtain the set of paths for a lexical entity l. We can now define the set of all pairs of hypernyms such that each pair is associated to some relation in Relations:
Sall = {(hi, hj) : Assoc((hi, hj), r) for some r ∈ Relations}
Then, we obtain the following sets:
S_np = {(hi, hj) ∈ Sall : Assoc((hi, hj), r) for some r ∈ Non_Poetic_Relations}
S_p = {(hi, hj) ∈ Sall : Assoc((hi, hj), r) for some r ∈ Poetic_Relations}
Sc = S_p ∩ S_np
where Relations = Poetic_Relations ∪ Non_Poetic_Relations.
From the above sets we can now derive the sets of hypernym pairs that are related only to one specific type of relation and not the other:
S'_p = {(hi, hj) ∈ H × H : (hi, hj) ∈ S_p and (hi, hj) ∉ S_np}
S'_np = {(hi, hj) ∈ H × H : (hi, hj) ∈ S_np and (hi, hj) ∉ S_p}
where H denotes the set of all hypernyms.
Now, since Sc contains pairs of hypernyms that are related to both poetic and non-poetic relations, it consists mainly of less specific hypernyms at the higher levels of the hypernym hierarchies. We eliminate those hypernym pairs in Sc whose more specific hypernyms in the hypernym hierarchy
are already associated to either the poetic or the non-poetic relations. First, let us define the relation:
Definition 3 (relation higher_than) higher_than : (H × H) × (H × H) → Boolean, such that for any two pairs of hypernyms (hi, hj) and (hk, hm): higher_than((hi, hj), (hk, hm)) = True iff there exists p ∈ hypernym_paths(hi) s.t. hk is a superordinate of hi in p, and there exists p′ ∈ hypernym_paths(hj) s.t. hm is a superordinate of hj in p′.
Then,
Sc := Sc − {(hk, hm) : there exists (hi, hj) ∈ S'_p s.t. higher_than((hi, hj), (hk, hm))}
         − {(hk, hm) : there exists (hi, hj) ∈ S'_np s.t. higher_than((hi, hj), (hk, hm))}
We can then choose, among the pairs of hypernyms in the sets S'_p and S'_np, those pairs that do not have any superordinates in the same set:
S''_p = {(hi, hj) ∈ S'_p : there is no (hk, hm) ∈ S'_p s.t. higher_than((hi, hj), (hk, hm))}
S''_np = {(hi, hj) ∈ S'_np : there is no (hk, hm) ∈ S'_np s.t. higher_than((hi, hj), (hk, hm))}
The importance of this restriction is that it places the hypernym pairs as high as possible in their respective hierarchies and avoids redundancy. In this way we have been able to extract many more hypernym pairs relating to the classes of relations.
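The higher_than filtering can be illustrated with a toy child-to-parent hierarchy. The names and the flat-dictionary encoding below are our own simplification; in practice the ancestors come from WordNet's hypernym_paths:

```python
# Toy hierarchy as a child -> parent mapping (hypothetical illustration).
PARENT = {
    "smile": "facial_expression",
    "facial_expression": "gesture",
    "gesture": "act",
    "heaven": "imaginary_place",
    "imaginary_place": "location",
}

def ancestors(h):
    # all superordinates of h, walking up the toy hierarchy
    out = set()
    while h in PARENT:
        h = PARENT[h]
        out.add(h)
    return out

def higher_than(pair_low, pair_high):
    # Definition 3: both components of pair_high are superordinates of pair_low's
    (hi, hj), (hk, hm) = pair_low, pair_high
    return hk in ancestors(hi) and hm in ancestors(hj)

def most_general(pairs):
    # the S'' step: keep only pairs with no superordinate pair in the same set
    return {p for p in pairs
            if not any(q != p and higher_than(p, q) for q in pairs)}

S_p = {("smile", "heaven"), ("facial_expression", "imaginary_place")}
print(most_general(S_p))  # only the more general pair survives
```

Here ("smile", "heaven") is discarded because ("facial_expression", "imaginary_place") sits strictly above it in both hierarchies, which is exactly the redundancy the restriction removes.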
The results obtained via this approach are promising, as shown in Table 2 below. The relations not classified as either poetic or non-poetic are partly due to further processing still required on our data at the preparation stage, including the recognition of named entities and numbers.
Table 2. Summary of results
Poetic Non-poetic Total
Categorized as poetic 206 0 206
Categorized as non-poetic 0 4240 4240
Other 22 1302 1324
Total 228 5542 5770
3.2. Partially defined Ontology
An initial, partially defined ontology is also derived, as shown in Figure 1 below, aiming to provide a further ontological analysis of the associations between the concepts used in texts (including poetic texts); it can be built manually and augmented via mappings to the hypernym hierarchies. We do not assume one universal ontology, and partiality in this sense is more realistic. This part of the ontology, together with hierarchical information derived from the WordNet hypernym hierarchies, helps to identify the semantic associations of concepts in figures of speech. Consider, for example, the class Human_Gesture, which includes the class Facial_Expressions, which in turn includes the object Smile. Assuming that only humans can make facial expressions in real life, we enforce the constraint that the domain of the relevant property, has_facial_expression, is Human. Then every assertion stating that an entity belonging to a different class has this property will lead to a conflicting KB when added to the ontology.
The importance of a separate ontology is that hypernym hierarchies do not model important concept relationships other than hierarchical ones. Hypernyms provide no information about disjointness relations between classes, and it cannot be inferred how, or whether, different entities are related; for example, how “tent” relates to Human, since WordNet [6] defines a tent only as a “structure that provides privacy and protection from danger”. Having said that, we recognise that recording all possible relationships between concepts explicitly would not be feasible. However, abstracting away enables us to capture a wide range of relations using fewer classes.
Figure 1. First draft of an ontology of humanly possessed qualities, and disjoint classes.
To motivate our discussion further, consider the prepositional noun phrase ‘smiles of heaven’. If ‘Heaven’ is a member of the class ‘Supernatural’ in OR, which is disjoint with the class ‘Human’, and the domain of the property has_facial_expression is ‘Human’, then asserting that ‘Heaven’ has the property has_facial_expression results in a conflicting KB. Nonetheless, this inconsistency (see the definition of inconsistency below) leads to the identification of a personification phrase. Although it is not within the scope of this paper to provide a final solution to the problem of inconsistency, we discuss possible ways of addressing it in the following subsections, through either reification or contextualized ontologies. In order to do that, we first need to define the syntax and semantics of terminological knowledge bases.
3.3. Basic Syntax and Semantics of Terminological Knowledge Bases
DL-based formalisms, like OWL DL, are ‘a family of class-based knowledge representation formalisms equipped with well-defined model-theoretic semantics’ [17]. In order to discuss conflicts with ontological knowledge, we first need the definition of an ontology and the notions of interpretation and satisfiability.
3.3.1. Ontology
An ontology in this paper is described as a structure <A, T>, where T denotes a DL TBox (a set of terminological axioms) and A denotes a DL ABox (a set of grounded assertions). An interpretation I of an ontology O ≡ <A, T> consists of a domain Δ^I and an interpretation function (·)^I such that the relations in Table 2 are satisfied. Note that the axioms referring to the domain and range of properties take their usual meaning and are omitted due to space limitations.
Table 2. Syntax and Semantics of Basic DL.

Constructor Name     Syntax      Semantics
atomic concept       A           A^I ⊆ Δ^I
abstract role        R           R^I ⊆ Δ^I × Δ^I
individual           o           o^I ∈ Δ^I
top concept          ⊤           Δ^I
bottom concept       ⊥           ∅
conjunction          C1 ⊓ C2     C1^I ∩ C2^I
disjunction          C1 ⊔ C2     C1^I ∪ C2^I
negation             ¬C          Δ^I − C^I
exists restriction   ∃R.C        {x | ∃y, (x, y) ∈ R^I and y ∈ C^I}
value restriction    ∀R.C        {x | ∀y, (x, y) ∈ R^I ⇒ y ∈ C^I}

Axiom Name           Syntax      Semantics
concept inclusion    C1 ⊑ C2     C1^I ⊆ C2^I
concept assertion    C(a)        a^I ∈ C^I
role assertion       R(a,b)      (a^I, b^I) ∈ R^I
The notion of satisfiability is closely related to the notion of consistency. A named concept C in an ontology O is satisfiable if and only if there is an interpretation I such that C^I ≠ ∅. The notion of satisfiability allows us to define inconsistency as follows:
Definition 4 (Inconsistency) An ontology is inconsistent if and only if it has no interpretation.
Examples of inconsistent sets of formulas (where the usual meanings of disjointness and domain apply) are:
{A(b), A ⊑ B, A ⊑ ¬B}
{C(a), ¬C(a)}
{A(a), B(a), Disjoint(A, B)}
{Domain(R, A), Disjoint(A, B), R(a,b), B(a)}
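The last pattern, a domain axiom combined with a disjointness axiom, is the one triggered by personification phrases such as ‘smiles of heaven’. A toy sketch of detecting this specific pattern (the KB encoding and names are our own illustration, not a full DL reasoner):

```python
# Toy detector for the Domain(R, A) + Disjoint(A, B) + R(a, b) + B(a) pattern.
def inconsistent(kb):
    # collect Domain(R, C) axioms and Disjoint(A, B) axioms
    domains = {f[1]: f[2] for f in kb if f[0] == "domain"}
    disjoint = {frozenset(f[1:3]) for f in kb if f[0] == "disjoint"}
    classes_of = {}                      # individual -> set of asserted classes
    for f in kb:
        if f[0] == "type":               # C(a): a belongs to C
            classes_of.setdefault(f[2], set()).add(f[1])
        elif f[0] == "role" and f[1] in domains:
            # R(a, b) places a in Domain(R)
            classes_of.setdefault(f[2], set()).add(domains[f[1]])
    # an individual asserted into two disjoint classes makes the KB inconsistent
    return any(frozenset((c, d)) in disjoint
               for cs in classes_of.values()
               for c in cs for d in cs if c != d)

kb = [("domain", "has_facial_expression", "Human"),
      ("disjoint", "Human", "Supernatural"),
      ("type", "Supernatural", "heaven"),
      ("role", "has_facial_expression", "heaven", "bigsmile")]
print(inconsistent(kb))  # True: 'heaven' is Supernatural and, via the domain axiom, Human
```

A DL reasoner would derive the same clash by classification; the sketch only shows why the figurative assertion cannot simply be added to the A-Box.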
4. PRACTICAL CONSIDERATIONS ABOUT CONSISTENCY HANDLING AND
REPRESENTATION
As stated already, figures of speech cannot be added as assertions in the A-Box of an ontology like OR, since they would lead to inconsistency. In this section we discuss how existing tools and formalisms can be used to represent figures of speech. Although we currently deal with the simplest form of figure of speech, the approaches considered may be extended to handle more complex phrases.
4.1. Reification
One way to overcome the problem of inconsistency is to represent figures of speech separately, as a particular class of (non-factual) statements about which information is kept. This leads to the idea of reification, which is supported by the RDF syntax. RDF [18] supports the reification of statements via a special vocabulary [19] in order to represent information about triples. An example of a reified figure of speech is included below:
_ex1: rdf:type rdf:Statement;
 rdf:subject #Heaven;
 rdf:predicate #has_facial_expression;
 rdf:object #Smile;
Information about _ex1 can be added as follows:
_ex1: exuri:has_author #Whitman;
_ex1: exuri:appears_in #PoemId
In RDF, the subject of reification is intended to refer to a concrete realization of an RDF triple,
such as a document or surface syntax, rather than a triple considered as an abstract object [20].
Other forms of reification include N-ary relations [21], singleton properties [22], and Named
Graphs [23]. Each of these approaches aims to solve a different problem. For example, N-ary relations [21] enable more than two individuals to participate in a relation, Named Graphs [23] enable the addition of provenance and trust information to web resources, and Singleton Properties [22] enable the creation of properties for a single statement. Different N-ary relation patterns are discussed in [21]. Each relation pattern uses a class to represent a relationship and n new properties to represent the association of each participating entity with the relation. This approach enables the addition of information about the participating entities that a triple cannot express on its own. However, most of the above methods suffer from maintenance problems and from the increased complexity caused by the number of new constructs created. Further, the extent to which contextualization needs to be formalized depends on the reasoning capabilities needed for the representation of figures of speech. An attempt to formalize the contextualization of resources is provided in the following subsection.
4.2. Contextualizing Ontologies
This section relies heavily on the work on contextualized ontologies in [25] in order to derive the basic conceptualization of a context-based representation of our domain. Before proceeding further, we need to redefine ontologies taking context into consideration.
A contextualized representation of ontologies adopts the principles of locality and compatibility underpinning local model semantics [26]. The principle of locality states that reasoning uses only part of what is potentially available, and the principle of compatibility states that there is compatibility among the kinds of reasoning performed in different contexts [26]. In a context-based ontology approach, each ontology is indexed, e.g. by an index i, and an ontology Oi defines a language Li. Every expression that appears either with the index i or with no index is assumed to be in the language defined by Oi.
Definition 5 Let I be a set of indices and L be the disjoint union of C, R, and O, the sets of strings denoting concepts, roles and individuals, respectively. An OWL ontology with index i is a pair <i, Oi>, where i ∈ I and Oi = <Ti, Ai>, where Ti and Ai are a T-box and an A-box respectively, as in [25].
Since we are modelling a domain consisting of conflicting ontologies, the space of ontologies needs to be modelled appropriately, taking compatibility issues into consideration. Informally, an OWL space is a set of appropriately indexed ontologies. Following [25], an OWL space is a family of ontologies <i, Oi>i∈I such that every Oi is an ontology and, for each i ≠ j, the j-foreign language of Oi is contained in the local language of Oj.
Definition 6 A context space is a pair <{<i, Oi>}i∈I, {Mij}i,j∈I>, where {<i, Oi>}i∈I is an OWL space (a set of indexed ontologies) and {Mij}i,j∈I is a family of mappings from i to j for all pairs i, j ∈ I.
Before we define our context space, let us define an interpretation for an OWL space as stated in [25], and a new relation <r between indexed ontologies:
Definition 7 (OWL localized interpretation [25]) An OWL interpretation with local domains for the OWL space <i, Oi>i∈I is a family I = {Ii}i∈I, where each Ii = <Δ^Ii, (·)^Ii>, called the local interpretation of Oi, is an interpretation of Li.
The above definition deviates slightly from the definition of an OWL interpretation with local domains in [25], since it does not include holes [25]. Instead, we assume a subsumption relation between the properties of the ontologies of the OWL space, assuming a top ontology whose properties subsume the properties of the other ontologies in the OWL space.
Definition 8 (relation <r) For any two indexed ontologies <p, Op> and <t, Ot>, let I = {Ip, It} be an OWL interpretation with the usual properties:
1. (p:C)^It = C^Ip ∩ Δ^It
2. (p:r)^It = r^Ip ∩ (Δ^It × Δ^It)
3. (p:a)^It = a^It
Then the relation Op <r Ot holds between the two indexed ontologies <p, Op> and <t, Ot> if and only if (p:r)^It ⊆ (t:r)^It.
Let us define our OWL space as the set of indexed ontologies {<poetic, Opoetic>, <partial, Opartial>, <top, Otop>}, where Opoetic <r Otop and Opartial <r Otop. Notably, the OWL space may contain more indexed ontologies. In our example, we assume that Opartial has mappings that relate its concepts to WordNet hypernyms. Assume that the following axioms hold for the OWL space just defined:
1. poetic: Smile ⊑ poetic: Facial_Expression
2. poetic: Supernatural(poetic: “heaven”)
3. poetic: Smile(poetic: “bigsmile”)
4. poetic: hasFacialExpression(poetic: “heaven”, poetic: “bigsmile”)
5. partial: Human_being ⊑ ¬partial: Supernatural
6. ∃partial: hasFacialExpression.Smile ⊑ partial: Human_being
7. poetic: Human_being ≡ partial: Human_being
8. poetic: Smile ≡ partial: Smile
9. poetic: Supernatural ≡ partial: Supernatural
10. poetic: hasFacialExpression ⊑ top: hasFacialExpression
11. partial: hasFacialExpression ⊑ top: hasFacialExpression
In the above OWL space, we assume that the domains and the interpretations of objects are the same in all ontologies, apart from the extensions of properties. If we let I = {Ipoetic, Ipartial, Itop} be the set of local interpretations of the ontologies Opoetic, Opartial and Otop respectively, then it follows that Itop ⊨ hasFacialExpression(poetic: “heaven”, poetic: “bigsmile”). At the same time, Itop ⊨ ¬partial: hasFacialExpression(poetic: “heaven”, poetic: “bigsmile”).
4.3. Programming tools
RDFLib [24] enables the creation of separate graphs and contexts in a Dataset, which is very useful although, being an RDF-triple-based approach, it does not enable OWL reasoning in the sense of the contextualized ontologies discussed in the previous subsection. The RDFLib package enabled us to create named graphs and contexts, serialised and queried as RDF triples sharing the same URI. Datasets in RDFLib can be queried using SPARQL. RDFLib can also be used to create and store hierarchical relations, which in our case were mapped to hypernym hierarchies. Ontologies were created and modified semi-automatically to reflect the OWL primitives, with hypernym entities mapped to the partial ontology.
5. CONCLUSION AND FUTURE WORK
The current paper sets the foundations for the identification and semantic analysis of concepts included in figures of speech, following a semi-manual ontological approach. Using a few basic rules concerning WordNet classes of entities, together with terminological knowledge derived from a real-world ontology concerning the constraints on properties, we have been able to identify a set of prepositional noun phrases constituting figures of speech. The results were encouraging, suggesting that some poetic phrases used as literary devices can be recognized and analyzed due to their contradictory nature when compared against a real-domain ontology. Based on these results, we then suggested a method for the automatic categorization of poetic vs. non-poetic phrases in our domain by collecting the most relevant hypernyms associated with each of the concepts involved. The semantic analysis gave us an in-depth understanding of the various features of poetic vs. non-poetic statements. In particular, an ontology describing human features can help (via the use of constraints, e.g. disjointness, domain and range) to identify the conflicting assertions made by personification phrases, and can be mapped to hypernym classes. Automating this task is currently under consideration. More complex phrases will be investigated in the future, as well as a refinement of the ontological relationships and concepts used, in order to aid the classification of phrases, in parallel with an approach for the automatic extraction of particular figures of speech.
The paper also reviewed some of the eminent fuzzy-domain approaches to the generation of fuzzy ontologies and the implementation of fuzzy similarity measures; our domain shares the conditions of uncertainty and ambiguity addressed by these approaches. Finally, we considered the challenges involved in the representation of figures of speech, due to the inconsistency that arises when they are added to the A-Box of a real-domain ontology such as the one employed here to conceptualize observations regarding human behaviour and properties. We referred to different approaches and considered the application of contextualized ontologies as a solution to the inconsistency problem.
REFERENCES
[1] Literary Devices Editors, (2013) “Figure of Speech” [Online]. Available:
https://literarydevices.net/figure-of-speech/
[2] Poetry.org. (2005) “what is poetry” [Online]. Available: http://www.poetry.org/whatis.htm.
[3] Hugo Gonçalo Oliveira. (2009) “Automatic generation of poetry: an overview”, CISUC, DEI, University of Coimbra, Tech. Rep. 1.
[4] Abrams, M.H. and Geoffrey G. Harpham. (1999) “A Glossary of Literary Terms”, Boston, MA: Thomson, Wadsworth.
[5] Project Gutenberg. Retrieved February 21, 2016, [Online]. Available: www.gutenberg.org.
[6] Miller, George A. (1995) “WordNet: A Lexical Database for English”, Communications of the
ACM, Vol. 38, No. 11.
[7] Literary Devices Editors (2013) “Personification” [Online]. Available:
https://literarydevices.net/personification.
[8] Blake, W., and Blake, W. (1992) “Songs of innocence; and songs of experience”. New York: Dover.
[9] Bailey, R.W. (1974) “Computer Assisted Poetry: The writing machine is for everybody”, Computers
in Humanities, Cambridge University Press, pp283-295.
[10] Toivanen, J.M., Toivonen, H. and Valitutti, A. (2013) “Harnessing constraint Programming for
poetry composition”, In Proceedings of the Fourth International Conference on Computational
Creativity, pp160-167.
[11] Manurung, H.M. (1999) “Chart Generation of Rhythm-Patterned Text”, Proceedings of the First International Workshop on Literature in Cognition and Computers.
[12] Amitava Das and Bjorn Gamback. (2014) “Poetic Machine: Computational Creativity for Automatic
Poetry Generation in Bengali”, ICCC.
[13] Manurung, H. (2004) “An evolutionary algorithm approach to poetry generation”, PhD thesis,
University of Edinburgh.
[14] LitCharts Editors (2015) “Literary devices and terms” [Online]. Available:
https://www.litcharts.com/literary-devices-and-terms.
[15] Loper, Edward and Bird, Steven (2002) “NLTK: The Natural Language Toolkit”, Proceedings of the
ACL-20 Workshop on Effective Tools and Methodologies for Teaching Natural Language
Processing and Computational Linguistics, Association of Computational Linguistics, ETMTNLP,
Vol. 1, pp63-70.
[16] De Smedt, T. and Daelemans, W. (2012) “Pattern for Python”, Journal of Machine Learning
Research, Vol. 13, Issue 1, pp2063-2067.
[17] Flouris, G., Huang, Z., Pan, J., Plexoudakis, D. and Wache, H. (2006) “Inconsistencies, Negations
and Changes in Ontologies”, Proceedings of AAAI Conference on Artificial Intelligence.
[18] Hayes, Patrick and Patel-Schneider Peter Editors (2014) “RDF 1.1 Semantics” [Online]. Available:
https://www.w3.org/TR/rdf11-mt.
[19] Brickley, Dan and Guha, R.V. (2004) “RDF Vocabulary Description Language 1.0: RDF Schema” [Online]. Available: https://www.w3.org/TR/2004/REC-rdf-schema-20040210.
[20] Wood, D., Lanthaler, M and Cyganiak, R. (2014) “RDF 1.1 Concepts and Abstract Syntax”
[Online]. Available: https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225.
[21] Hayes, P. and Welty, C. (2006) “Defining N-ary Relations on the Semantic Web” [Online].
Available: https://www.w3.org/TR/swbp-n-aryRelations.
[22] Nguyen, V., Bodenreider, O. and Sheth, A. (2014) “Don’t like RDF Reification? Making Statements about Statements Using Singleton Property”, Proceedings of the International World Wide Web Conference, pp759-770.
[23] Carrol, J., Bizer, C., Hayes, P. and Stickler, P. (2005) “Named Graphs, Provenance and Trust”,
Proceedings of the 14th International Conference on World Wide Web, pp613-622.
[24] Boettiger, C. (2018) “rdflib: A high level wrapper around the redland package for common rdf publications”, Zenodo Publisher.
[25] Bouquet, P., Giunchiglia, F., Harmelen, F., Serafini, L. and Stuckenschmidt, H. (2003) “C-OWL:
Contextualizing Ontologies”, Proceedings of the second International Semantic Web Conference,
Lecture Notes in Artificial Intelligence, No. 2870, pp164-179.
[26] Ghidini, C. and Giunchiglia, F. (2001) “Local Model Semantics, or Contextual Reasoning = Locality
+ Compatibility”, Journal of Artificial Intelligence, Vol. 127, No. 2, pp221-259.
[27] Leite, M.A., and Ricarte, I.L. (2008) “Using Multiple Related Ontologies in a Fuzzy Information
Retrieval Model” WONTO.
[28] Attia, Z.E., Gadallah, A.M., and Hefny, H.A. (2014) “Semantic Information Retrieval Model: Fuzzy
Ontology Approach”.
[29] Lau, R.Y., Song, D., Li, Y., Cheung, C., and Hao, J. (2009) “Toward a Fuzzy Domain Ontology
Extraction Method for Adaptive e-Learning”, IEEE Transactions on Knowledge and Data
Engineering, Vol. 21, pp800-813.
[30] Versaci, M., Foresta F.L., Morabito F.C. and Angiulli, G. (2018) “A fuzzy divergence approach for
solving electrostatic identification problems for NDT applications”, International Journal of Applied
Electromagnetics and Mechanics. Vol. 57. pp1-14.
[31] Cacciola, M., Calcagno, S., Megali, G., Pellicanò, D., Versaci, M., and Morabito, F.C. (2010)
“Wavelet Coherence and Fuzzy Subtractive Clustering for Defect Classification in Aeronautic
CFRP”, International Conference on Complex, Intelligent and Software Intensive Systems, pp101-
107.
[32] Bezdek, J.C. (1993) “Fuzzy models—What are they, and why?” [Editorial], IEEE Transactions on
Fuzzy Systems, Vol. 1.
[33] Quan, T.T Hui, S.C and Cao, T.H. (2004) “FOGA: A Fuzzy Ontology Generation Framework for
Scholarly Semantic Web”.
[34] Arab, A.M, Gadallah, A.M., and Salah, A. (2017) “Training-less Multi-Domain Classification
Approach using Ontology and Fuzzy sets”, International Journal of Computer Science and
Information Security. Vol 15. No.1. pp282-288.
[35] Panayiotou, C. (2019), "An Ontological Approach to the Extraction of Figures of Speech", The 5th
International Conference on Artificial Intelligence and Applications, Dubai.
AUTHOR
Dr. Christiana Panayiotou is an Assistant Professor at the Cyprus University of Technology.