This document discusses translating normative utterances expressed in natural language to machine language through programming. It begins by providing context on intersemiotic translation and programming languages. It then distinguishes between regulative and constitutive rules, and explores how each could be translated to Java code. For regulative rules, which regulate pre-existing behaviors, an example is given of coding a rule that turns on a light when motion is detected. For constitutive rules, which define new types of behaviors, an example examines how an object-oriented programming approach could represent real-world devices and their functions through defined classes and properties.
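The distinction between the two rule types can be sketched in Java. The sketch below is a minimal illustration constructed for this summary (the class and method names are hypothetical, not taken from the source): the constitutive side defines what counts as a "Light" in the system at all, while the regulative side governs a pre-existing behavior, turning the light on when motion is detected.

```java
// Constitutive rule: the class itself *defines* the device type "Light"
// and what it can do. Names here are illustrative assumptions.
class Light {
    private boolean on = false;
    void turnOn()  { on = true; }
    void turnOff() { on = false; }
    boolean isOn() { return on; }
}

public class RuleDemo {
    // Regulative rule: regulates a pre-existing behavior --
    // "when motion is detected, the light must be turned on".
    static void applyMotionRule(boolean motionDetected, Light light) {
        if (motionDetected) {
            light.turnOn();
        }
    }

    public static void main(String[] args) {
        Light hallLight = new Light();
        applyMotionRule(true, hallLight);
        System.out.println("light on after motion: " + hallLight.isOn()); // true
    }
}
```

The design point is that the regulative rule presupposes the light already exists, whereas the constitutive definition brings the notion of a light into the system in the first place.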
Robust extended tokenization framework for Romanian by semantic parallel text... (ijnlc)
Tokenization is considered a solved problem when reduced to mere word-border identification and the handling of punctuation and white space. Obtaining a high-quality outcome from this process is essential for subsequent NLP pipeline processes (POS tagging, WSD). In this paper we claim that, to obtain this quality, the tokenization disambiguation process must draw on all linguistic, morphosyntactic, and semantic-level word-related information as necessary. We also claim that semantic disambiguation performs much better in a bilingual context than in a monolingual one. We then show that, for disambiguation purposes, bilingual text provided by high-profile online machine translation services performs at almost the same level as human-originated parallel texts (the gold standard). Finally, we claim that the tokenization algorithm incorporated in TORO can be used as a criterion for comparative quality assessment of online machine translation services, and we provide a setup for this purpose.
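The baseline that the abstract calls "solved" (word borders, punctuation, white space) can be written in a few lines; everything beyond it, such as using morphosyntactic or semantic cues, is what the paper argues for. Below is a deliberately naive illustrative sketch, not the TORO algorithm itself:

```java
import java.util.ArrayList;
import java.util.List;

// Naive baseline tokenizer: separates punctuation, then splits on white space.
// Illustrative only; the framework described in the abstract goes far beyond this.
public class BaselineTokenizer {
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        // Surround each punctuation character with spaces, then split.
        String spaced = text.replaceAll("([\\p{Punct}])", " $1 ");
        for (String t : spaced.trim().split("\\s+")) {
            if (!t.isEmpty()) tokens.add(t);
        }
        return tokens;
    }

    public static void main(String[] args) {
        // A short Romanian sentence as input.
        System.out.println(tokenize("Ana are mere, nu pere."));
        // [Ana, are, mere, ,, nu, pere, .]
    }
}
```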
CITIZENSHIP ACT (591) OF GHANA AS A LOGIC THEOREM AND ITS SEMANTIC IMPLICATIONS (ijnlc)
This paper presents excerpts of the natural text of the Citizenship Act of the Republic of Ghana, Act 2000 (591), as a logic theorem in the language of First Order Logic (FOL). The law is formalized to allow for semantic analysis of the text by machines. The results of this research also deal with the problem of ambiguities in the legal text, and reveal the semantic consequences of the natural construction, or textual structure, of the piece of law. One major problem in the semantics of legal text has been ambiguity, which strongly affects the interpretations attributed to legal statements: constructed sentences sometimes fail to reflect the semantic intentions of their composers, due to inherent ambiguities or technical faults in the structure of some parts of the statements. The formalization of the Act in this paper provides logical proofs and deductions which are asserted to establish clarity and to reveal semantic errors, or otherwise preserve the semantic intentions of the Act.
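To illustrate what such a formalization looks like, a citizenship-by-birth provision might be rendered in FOL roughly as follows. This is a hypothetical rule constructed for exposition, not a clause quoted from Act 591:

```latex
% Hypothetical illustrative formalization, not an actual clause of the Act.
\forall x \,\big( \mathit{BornInGhana}(x) \land
    \exists y \,( \mathit{ParentOf}(y, x) \land \mathit{Citizen}(y) )
    \rightarrow \mathit{Citizen}(x) \big)
```

Once clauses are in this form, standard FOL deduction can expose gaps or contradictions that are easy to miss in the natural-language text.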
A SURVEY OF GRAMMAR CHECKERS FOR NATURAL LANGUAGES (csandit)
ABSTRACT
Natural Language Processing is an interdisciplinary branch of linguistics and computer science, studied under Artificial Intelligence (AI), that gave birth to an allied area called Computational Linguistics, which focuses on processing natural languages on computational devices. A natural language consists of a large number of sentences: linguistic units involving one or more words linked together in accordance with a set of predefined rules called a grammar. Grammar checking is the task of validating sentences syntactically and is a prominent tool within language engineering. Our review draws on the recent development of various grammar checkers to look at the past, present, and future in a new light. It covers grammar checkers for many languages with the aim of surveying their approaches and methodologies for developing new tools and systems. The survey concludes with a discussion of the features included in existing grammar checkers for foreign languages as well as a few Indian languages.
A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUA... (kevig)
Many Natural Language Processing (NLP) applications involve Named Entity Recognition (NER) as an important task, since it improves the overall performance of NLP applications. In this paper, deep learning techniques are used to perform NER on Hindi text, because, compared with English, NER for Hindi has not been sufficiently studied; this is a barrier for resource-scarce languages, as many resources are not readily available. Researchers have applied various techniques, such as rule-based, machine-learning-based, and hybrid approaches, to this problem, and deep learning algorithms are now being developed at scale as an innovative route to advanced NER models. In this paper we devise a novel architecture, based on a residual network, for a Bidirectional Long Short-Term Memory (BiLSTM) model with fastText word-embedding layers. For this purpose we use pre-trained word embeddings to represent the words in the corpus, with the NER tags of the words defined by the annotated corpora used. Developing an NER system for Indian languages is a comparatively difficult task. We report experiments comparing NER results with normal embedding layers and with fastText embedding layers, and analyse the performance of the word embeddings with different batch sizes when training the deep learning models, presenting state-of-the-art F1-score results with the proposed approach.
Conversational AI agents have become mainstream today, first because of significant advancements in the methods required to build accurate models, such as machine learning and deep learning, and second because they are seen as a natural fit for a wide range of domains, such as healthcare, e-commerce, customer service, tourism, and education, that rely heavily on natural language conversations in day-to-day operations. This rapid increase in demand has been matched by an equally rapid rate of research and development, with new products being introduced on a daily basis.
Natural Language Processing Theory, Applications and Difficulties (ijtsrd)
The promise of a powerful computing device that helps people in productivity as well as in recreation can only be realized with proper human-machine communication. Automatic recognition and understanding of spoken language is the first step toward natural human-machine interaction, and research in this field, known as Natural Language Processing, has produced remarkable results, leading to many exciting expectations and new challenges. This paper discusses natural language generation and natural language understanding, along with difficulties in NLU, applications, and a comparison with structured programming languages. Mrs. Anjali Gharat, Mrs. Helina Tandel, and Mr. Ketan Bagade, "Natural Language Processing Theory, Applications and Difficulties", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 3, Issue 6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd28092.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/28092/natural-language-processing-theory-applications-and-difficulties/mrs-anjali-gharat
A New Approach: Automatically Identify Proper Noun from Bengali Sentence for ... (Syeful Islam)
Hundreds of millions of people, of almost all levels of education and from different countries, communicate with each other for different purposes using various languages. Machine translation is in high demand due to the increasing use of web-based communication. One of the major problems in Bengali translation is identifying a naming word in a sentence. This is relatively simple in English, because such entities start with a capital letter, but Bangla has no concept of small or capital letters, and a huge number of different named entities exist in Bangla; it is therefore difficult to decide whether a word is a proper noun or not. Here we introduce a new approach to identifying proper nouns in a Bengali sentence for UNL without storing a huge number of named entities in the word dictionary. The goal is to make Bangla sentence conversion to UNL, and vice versa, possible with a minimal number of words stored in the dictionary.
Imran Sarwar Bajwa and M. Abbas Choudhary (2006), "A Rule Based System for Speech Language Context Understanding", International Journal of Donghua University (English Edition), Vol. 23, No. 6, June 2006, pp. 39-42.
This is a brief overview of Natural Language Processing using the Python module NLTK. The code for the demonstrations can be found at the GitHub link given in the references slide.
Natural language processing provides a way for humans to interact with computers and machines by means of voice. Google Search by Voice, which makes use of natural language processing, is a well-known example.
A spell checker is an application program for processing natural languages in machine-readable format effectively. Spelling checking and correction is a basic necessity, and tedious work, in any language, so spell-checker software is required to do it. A spell checker is a set of programs that analyzes a wrongly used word and corrects it with the most probable correct word. The challenging task here is doing this work for the Kannada language: in software systems, Kannada words are typed in several formats, since Kannada has many fonts, and the grammar must be written properly. In this paper, we describe some techniques used by a spell checker for Kannada. We use NLP, a field of computer science concerned with the relationship between humans (i.e., natural languages) and computers; modern NLP algorithms based on machine learning are usually employed to carry out the work.
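A core building block of any such spell checker, whatever the language, is a string-similarity measure used to rank candidate corrections. A common choice, shown here as an illustrative sketch rather than the paper's actual implementation, is Levenshtein edit distance:

```java
// Levenshtein edit distance: the minimum number of single-character
// insertions, deletions, and substitutions needed to turn one string
// into another. Real spell checkers add dictionaries and ranking on top.
public class EditDistance {
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // "speling" -> "spelling" needs one insertion of 'l'.
        System.out.println(levenshtein("speling", "spelling")); // 1
    }
}
```

Because the measure works on code points rather than on any particular alphabet, the same routine applies to Kannada strings, though a practical Kannada checker must also normalize the multiple font encodings the abstract mentions.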
A HYBRID APPROACH TO WORD SENSE DISAMBIGUATION WITH AND WITH... (ijnlc)
Word Sense Disambiguation (WSD) is the classification of a word's meaning in a precise context: a tricky task in Natural Language Processing, used in applications such as machine translation, information extraction and retrieval, and automatic or closed-domain question answering, because of its semantic nature. Researchers have tried unsupervised and knowledge-based learning approaches, but such approaches have not proved very helpful. Various supervised learning algorithms have been built, largely in vain, because creating the training corpus, a sense-tagged corpus, is difficult. This paper presents a hybrid approach to resolving ambiguity in a sentence, based on integrating lexical knowledge and world knowledge. The English WordNet developed at Princeton University, the SemCor corpus, and the JAWS library (Java API for WordNet Searching) have been used for this purpose.
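A classic knowledge-based baseline for the kind of lexical-knowledge integration described above is the Lesk algorithm, which picks the sense whose dictionary gloss shares the most words with the ambiguous word's context. The sketch below is a self-contained simplification with hard-coded toy glosses; a real system such as the one in the paper would fetch glosses from WordNet (e.g. via the JAWS library):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified Lesk word-sense disambiguation: choose the sense whose gloss
// overlaps most with the sentence context. Toy glosses are hard-coded here.
public class SimplifiedLesk {
    static String disambiguate(String context, Map<String, String> senseGlosses) {
        Set<String> contextWords =
            new HashSet<>(Arrays.asList(context.toLowerCase().split("\\W+")));
        String best = null;
        int bestOverlap = -1;
        for (Map.Entry<String, String> e : senseGlosses.entrySet()) {
            int overlap = 0;
            for (String w : e.getValue().toLowerCase().split("\\W+")) {
                if (contextWords.contains(w)) overlap++;
            }
            if (overlap > bestOverlap) {
                bestOverlap = overlap;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> senses = Map.of(
            "bank/finance", "an institution that accepts deposits of money",
            "bank/river", "the sloping land beside a body of water");
        System.out.println(disambiguate("the bank accepts deposits of money", senses));
        // bank/finance
    }
}
```

The "hybrid" part of the paper's approach goes further, combining this kind of lexical overlap with world knowledge, but the overlap count is the lexical core.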
Exploiting rules for resolving ambiguity in Marathi language text (eSAT Journals)
Abstract
Natural language ambiguity is a situation in which some words have multiple meanings or senses. This paper discusses natural language ambiguity and its types, and proposes a knowledge-based solution to resolve the various types of ambiguity occurring in Marathi text. The task of resolving the semantic and lexical ambiguity of words to obtain their actual sense is referred to as Word Sense Disambiguation (WSD). Marathi is the official and commonly spoken language of the state of Maharashtra in India. Plenty of words in Marathi are spelled the same and uttered the same but are semantically (meaning-wise or sense-wise) different, and during automatic translation these words lead to ambiguity. Our method removes the ambiguity by identifying the correct sense of the given text from the predefined possible senses available in the Marathi Wordnet, using word and sentence rules. The method is applicable only to word-level ambiguity; structural ambiguity is not handled by this system. The system may be used as a subsystem in other Natural Language Processing (NLP) applications.
Key Words: Word Sense Disambiguation, Natural Language Processing, Marathi, Marathi Wordnet, ambiguity, knowledge based
Semantic Rules Representation in Controlled Natural Language in FluentEditor (Cognitum)
Abstract. The purpose of this paper is to present a way of representing semantic rules (SWRL) in a controlled natural language (English), in order to make the rules easier to understand for humans interacting with a machine. The rule representation is implemented in FluentEditor, an ontology editor with a controlled natural language (CNL). The representation can be used in many domains where people interact with machines and use specialized interfaces to define knowledge in a system (a semantic knowledge base), e.g. representing medical knowledge and guidelines, procedures in crisis management, or the management of any coordination process. Such knowledge bases are able to support decision making in any discipline, provided the knowledge is stored in a proper semantic way.
Machine Translation Approaches and Design Aspects (IOSR Journals)
Machine translation is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another. On a basic level, MT performs simple substitution of words in one natural language with words in another, but that alone usually cannot produce a good translation of a text, because recognition of whole phrases and of their closest counterparts in the target language is needed. The paper focuses on an Example-Based Machine Translation (EBMT) system that translates sentences from English to Hindi. Development of a machine translation (MT) system typically demands a large volume of computational resources: for example, rule-based MT systems require the extraction of syntactic and semantic knowledge in the form of rules, while statistics-based MT systems require a huge parallel corpus containing sentences in the source language and their translations in the target language. The requirement for such computational resources is much lower for EBMT, which makes the development of EBMT systems for English-to-Hindi translation feasible where large-scale computational resources are still scarce. Example-based machine translation relies on a database of examples for its translation, and the frequency of word occurrence is important for translation.
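The core retrieval step of an EBMT system can be illustrated with a toy translation memory: given a new sentence, find the stored example with the greatest word overlap and reuse its translation. This is a deliberately minimal sketch with invented example pairs, not the English-to-Hindi system described in the paper:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy example-based MT retrieval: pick the stored source sentence that
// shares the most words with the input, and return its stored translation.
// The example pairs below are invented for illustration.
public class EbmtLookup {
    static String translate(String input, Map<String, String> examples) {
        Set<String> inputWords =
            new HashSet<>(Arrays.asList(input.toLowerCase().split("\\s+")));
        String best = null;
        int bestOverlap = -1;
        for (Map.Entry<String, String> e : examples.entrySet()) {
            int overlap = 0;
            for (String w : e.getKey().toLowerCase().split("\\s+")) {
                if (inputWords.contains(w)) overlap++;
            }
            if (overlap > bestOverlap) {
                bestOverlap = overlap;
                best = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> memory = Map.of(
            "good morning", "suprabhat",
            "thank you very much", "bahut dhanyavad");
        System.out.println(translate("good morning friends", memory)); // suprabhat
    }
}
```

A real EBMT engine aligns and recombines fragments of several matches rather than reusing a single best example wholesale, but the lookup-by-similarity step is the heart of the approach.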
Natural Language Processing: A Comprehensive Overview (Benjaminlapid1)
Natural language processing enhances human-computer interaction by bridging the language gap; this comprehensive overview covers its applications and techniques.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L... (IJwest)
An ontology is a conceptualization of a domain into a human-understandable, yet machine-readable, format consisting of entities, attributes, relationships, and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract the whole family tree of a prominent personality from a resource like Wikipedia; in a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept-hierarchy extraction, but the existing ontology learning process only produces the concept hierarchy; it does not produce the semantic relations between the concepts. Here, we construct predicates and first-order-logic formulas, and find inference and learning weights using a Markov Logic Network. To improve the relations of every input, and the relations between the contents, we propose the concept of ARSRE. This method can find the frequent items between concepts and convert existing lightweight ontologies into formal ones. The experimental results show good extraction of semantic relations compared with the state-of-the-art method.
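The "frequent items between concepts" step that the abstract mentions boils down to counting how often concept pairs co-occur and keeping those above a support threshold, the first stage of classic association rule mining. Below is a minimal, generic sketch of that counting step with invented concept names, not the ARSRE method itself:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal frequent-pair counting, the first step of association rule mining:
// count co-occurrences of item pairs across transactions and keep those
// meeting a minimum support. Concept names below are invented.
public class FrequentPairs {
    static Map<String, Integer> frequentPairs(List<Set<String>> transactions,
                                              int minSupport) {
        Map<String, Integer> counts = new HashMap<>();
        for (Set<String> t : transactions) {
            // Sort so each unordered pair gets one canonical key.
            String[] items = t.stream().sorted().toArray(String[]::new);
            for (int i = 0; i < items.length; i++)
                for (int j = i + 1; j < items.length; j++)
                    counts.merge(items[i] + "," + items[j], 1, Integer::sum);
        }
        counts.values().removeIf(c -> c < minSupport);
        return counts;
    }

    public static void main(String[] args) {
        List<Set<String>> data = List.of(
            Set.of("person", "birthplace"),
            Set.of("person", "birthplace", "spouse"),
            Set.of("person", "spouse"));
        // Keeps {birthplace,person} and {person,spouse}, each with count 2.
        System.out.println(frequentPairs(data, 2));
    }
}
```

In the paper's setting the "transactions" would be extracted concept co-occurrences, and the surviving frequent pairs feed the first-order-logic predicates weighted by the Markov Logic Network.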
Association Rule Mining Based Extraction of Semantic Relations Using Markov ...dannyijwest
Ontology may be a conceptualization of a website into a human understandable, however machine-
readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the
intentional aspects of a site, whereas the denotative part is provided by a mental object that contains
assertions about instances of concepts and relations. Semantic relation it might be potential to extract the
whole family-tree of a outstanding personality employing a resource like Wikipedia. In a way, relations
describe the linguistics relationships among the entities involve that is beneficial for a higher
understanding of human language. The relation can be identified from the result of concept hierarchy
extraction. The existing ontology learning process only produces the result of concept hierarchy extraction.
It does not produce the semantic relation between the concepts. Here, we have to do the process of
constructing the predicates and also first order logic formula. Here, also find the inference and learning
weights using Markov Logic Network. To improve the relation of every input and also improve the relation
between the contents we have to propose the concept of ARSRE.
This paper presents a vision on how the software development process could be a fully unified mechanized
process by getting benefits from the advances of Natural Language Processing and Program Synthesis
fields. The process begins from requirements written in natural language that is translated to sentences in
logical form. A program synthesizer gets those sentences in logical form (the translator's outcome) and
generates a source code. Finally, a compiler produces ready-to-run software. To find out how the building
blocks of our proposed approach works, we conducted an exploratory research on the literature in the
fields of Requirements Engineering, Natural Language Processing, and Program Synthesis. Currently, this
approach is difficult to accomplish in a fully automatic way due to the ambiguities inherent in natural
language, the reasoning context of the software, and the program synthesizer limitations in generating a
source code from logic.
54-6 September 2002, Edinburgh, UK, Pages 5-11.Bos, Fost.docxblondellchancy
5
4-6 September 2002, Edinburgh, UK, Pages 5-11.
Bos, Foster & Matheson (eds): Proceedings of the sixth workshop on the semantics and pragmatics of dialogue (EDILOG 2002),
5
4-6 September 2002, Edinburgh, UK, Pages 5-11.
Bos, Foster & Matheson (eds): Proceedings of the sixth workshop on the semantics and pragmatics of dialogue (EDILOG 2002),
5
4-6 September 2002, Edinburgh, UK, Pages 5-11.
Bos, Foster & Matheson (eds): Proceedings of the sixth workshop on the semantics and pragmatics of dialogue (EDILOG 2002),
5
4-6 September 2002, Edinburgh, UK, Pages 5-11.
Bos, Foster & Matheson (eds): Proceedings of the sixth workshop on the semantics and pragmatics of dialogue (EDILOG 2002),
5
4-6 September 2002, Edinburgh, UK, Pages 5-11.
Bos, Foster & Matheson (eds): Proceedings of the sixth workshop on the semantics and pragmatics of dialogue (EDILOG 2002),
Cooperation and Collaboration in Natural Command Language
Dialogues
J. Gabriel Amores and José F. Quesada
University of Seville
[email protected][email protected]
Abstract
This paper discusses cooperative and collabora-
tive behaviour in Natural Command Language
Dialogues (NCLDs). We first introduce Natural
Command Language Dialogues and then briefly
compare them with other types of dialogue. Co-
operation and collaboration in NCLDs is then
analyzed. Finally, a typology of conflicts in
NCLDs is proposed, for which a solution has
been implemented in two spoken dialogue proto-
types. Sample dialogues are taken from research
carried out under the Siridus and D’Homme Eu-
ropean projects.
1 Natural Command Language
Dialogues
A Natural Command Language (NCL) is a com-
mand language expressed through the medium
of natural language. We take NCLs as the set of
input and output natural language expressions
which are acceptable in a given application do-
main. This domain is semantically defined by
the functions (commands) known by the user
and the system, and the natural language vo-
cabulary which may be used to express those
commands. In addition, NCLs should contain
metalinguistic patterns and expressions typical
of human–like interaction. Natural Command
Language Dialogues are artificially constructed
models of action–oriented dialogues (including
knowledge representation and reasoning) able
to guide the interaction between the different
parts involved in a dialogue based on a NCL. A
NCLD should allow the following kinds of phe-
nomena:
• Multiple Task NCL: In contrast to
Task–Oriented Dialogue models, NCLDs
must be able to manage different tasks.
Thus, one of the main functions of NCLD
systems will be task detection, as will be
explained below.
• Context Dependency: Only at the di-
alogue level is it possible to understand
anaphora, ellipsis and other context de-
pendent constructions. From the dialogue
system design persepective, the treatment
of these discourse phenomena will imply
the representation and storage of the whole
dialogue history. For an illustration, see
(Quesada and Amo ...
Similar to An Intersemiotic Translation of Normative Utterances to Machine Language (20)
IJWEST CFP (9).pdfCALL FOR ARTICLES...! IS INDEXING JOURNAL...! Internationa...dannyijwest
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwestjournal@airccse.org / ijwest@aircconline.com or through Submission System.
mportant Dates
Submission Deadline : June 01, 2024
Notification :July 01, 2024
Final Manuscript Due : July 08, 2024
Publication Date : Determined by the Editor-in-Chief
Here's where you can reach us : ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Cybercrimes in the Darknet and Their Detections: A Comprehensive Analysis and...dannyijwest
Although the Dark web was originally used for maintaining privacy-sensitive communication for business or intelligence services for defence, government and business organizations, fighting against censorship and blocked content, later, the advantage of technologies behind the Dark web were abused by criminals to conduct crimes which involve drug dealing to the contract of assassinations in a widespread manner. Since the communication remains secure and untraceable, criminals can easily use dark web service via The Onion Router (TOR), can hide their illegal motives and can conceal their criminal activities. This makes it very difficult to monitor and detect cybercrimes over the dark web. With the evolution of machine learning, natural language processing techniques, computational big data applications and hardware, there is a growing interest in exploiting dark web data to monitor and detect criminal activities. Due to the anonymity provided by the Dark Web, the rapid disappearance and the change of the uniform resource locators (URLs) of the resources, it is not as easy to crawl the Drak web and get the data as the usual surface web which limits the researchers and law enforcement agencies to analyse the data. Therefore, there is an urgent need to study the technology behind the Dark web, its widespread abuse, its impact on society and the existing systems, to identify the sources of drug deal or terrorism activities. In this research, we analysed the predominant darker sides of the world wide web (WWW), their volumes, their contents and their ratios. We have performed the analysis of the larger malicious or hidden activities that occupy the major portions of the Dark net; tools and techniques used to identify cybercrimes which happen inside the dark web. We applied a systematic literature review (SLR) approach on the resources where the actual dark net data have been used for research purposes in several areas. 
From this SLR, we identified the approaches (tools and algorithms) which have been applied to analyse the Dark net data, the key gaps as well as the key contributions of the existing works in the literature. In our study, we find the main challenges to crawl the dark web and collect forum data are: scalability of crawler, content selection trade off, and social obligation for TOR crawler and the limitations of techniques used in automatic sentiment analysis to understand criminals’ forums and thereby monitor the forums. From the comprehensive analysis of existing tools, our study summarizes the most tools. However the forum topics rapidly change as their sources changes; criminals inject noises to obfuscate the forum’s main topic and thus remain undetectable. Therefore supervised techniques fail to address the above challenges. Semi-supervised techniques would be an interesting research direction.
FFO: Forest Fire Ontology and Reasoning System for Enhanced Alert and Managem...dannyijwest
Forest fires or wildfires pose a serious threat to property, lives, and the environment. Early detection and mitigation of such emergencies, therefore, play an important role in reducing the severity of the impact caused by wildfire. Unfortunately, there is often an improper or delayed mechanism for forest fire detection which leads to destruction and losses. These anomalies in detection can be due to defects in sensors or a lack of proper information interoperability among the sensors deployed in forests. This paper presents a lightweight ontological framework to address these challenges. Interoperability issues are caused due to heterogeneity in technologies used and heterogeneous data created by different sensors. Therefore, through the proposed Forest Fire Detection and Management Ontology (FFO), we introduce a standardized model to share and reuse knowledge and data across different sensors. The proposed ontology is validated using semantic reasoning and query processing. The reasoning and querying processes are performed on real-time data gathered from experiments conducted in a forest and stored as RDF triples based on the design of the ontology. The outcomes of queries and inferences from reasoning demonstrate that FFO is feasible for the early detection of wildfire and facilitates efficient process management subsequent to detection.
Call For Papers-10th International Conference on Artificial Intelligence and ...dannyijwest
** Registration is currently open **
Call for Research Papers!!!
Free – Extended Paper will be published as free of cost.
10th International Conference on Artificial Intelligence and Applications (AI 2024)
July 20 ~ 21, 2024, Toronto, Canada
https://csty2024.org/ai/index
Submission Deadline: May 11, 2024
Contact Us
Here's where you can reach us : ai@csty2024.org or ai.conference@yahoo.com
Submission System
https://csty2024.org/submission/index.php
#artificialintelligence #softcomputing #machinelearning #technology #datascience #python #deeplearning #tech #robotics #innovation #bigdata #coding #iot #computerscience #data #dataanalytics #engineering #robot #datascientist #software #automation #analytics #ml #pythonprogramming #programmer #digitaltransformation #developer #promptengineering #generativeai #genai #chatgpt
CALL FOR ARTICLES...! IS INDEXING JOURNAL...! International Journal of Web &...dannyijwest
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwest@aircconline.com or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
• Submission Deadline: March 16, 2024
• Notification : April 13, 2024
• Final Manuscript Due : April 20, 2024
• Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us
ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Submission URL : https://airccse.com/submissioncs/home.html
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
International Journal of Web & Semantic Technology (IJWesT) Vol.13, No.1, January 2022
DOI: 10.5121/ijwest.2022.13101
AN INTERSEMIOTIC TRANSLATION OF NORMATIVE
UTTERANCES TO MACHINE LANGUAGE
Andrea Addis¹ and Olimpia Giuliana Loddo²
¹ Infora, viale Elmas, Cagliari, Italy
² Department of Law, University of Cagliari, Cagliari, Italy
ABSTRACT
Programming languages (PL) effectively perform an intersemiotic translation from a natural language to a machine language. A PL comprises a set of instructions to implement algorithms, i.e., to perform (computational) tasks. Like normative languages (NoL), PLs are formal languages that can perform both regulative and constitutive functions. The paper presents the first results of interdisciplinary research aimed at highlighting the similarities between NoL (social sciences) and PL (computer science) through everyday-life examples, exploiting Object-Oriented Programming language tools and an Internet of Things (IoT) system as a case study. The pandemic emergency created the urge to move part of our social life to the digital world, together with the need to effectively transpose regulative rules and constitutive rules through different strategies for translating a normative utterance expressed in natural language.
KEYWORDS
Programming languages, Normative languages, Constitutive rules, Regulative rules.
1. INTRODUCTION
According to Jakobson, translation is a form of interpretation, where interpretation is the
“transposition of a sign into alternative signs having the same meaning”. He also claimed that
there are three ways one can interpret the verbal sign; “it can be translated into other signs of the
same language, into another language, or into another nonverbal system of symbols” [1, p. 60].
He called them respectively intralingual translation or rewording, interlingual translation or
translation proper, and intersemiotic translation or transmutation. Intersemiotic translation does not merely involve different languages: it is the interpretation of a sign belonging to one semiotic system by means of a sign belonging to a different semiotic system. More precisely, intersemiotic translation is generally understood as the transfer of verbal texts into other systems of signification, such as visual, oral, or gestural ones. The definition proposed by Jakobson is clear and schematic, but it also oversimplifies the mechanism that leads to the final product of the intersemiotic translation. There is rarely a direct sign-to-sign relation from one code to another. In fact, the source sign and the target sign belong to very different semiotic systems, which makes it difficult for the intersemiotic translator to find strategies that preserve the most relevant part of the original meaning. That no linear path can be followed while translating can be verified even when interpreting different natural languages or dialects, notwithstanding their structural similarities¹. The translation of a normative meaning expressed in a natural language into a bitcode is an effective and exemplifying way to illustrate
¹ E.g., the hypothesis in modern linguistics of the existence of a certain set of structural rules that are innate to humans and independent of sensory experience [2].
the mechanism of intersemiotic translation, because it highlights the need to diverge from a unique set of encoding algorithms, i.e., the requirement of a bridge language, in our case a "high-level" programming language. A high-level PL allows conveying the concept from natural language into a human-like representation with sufficient abstraction from the details of the computer hardware (hence the adjective "high-level"), going beyond a purely semantic translation. Programming languages can explicitly represent deontic modalities (for example, prohibition and obligation through state machines).
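As a first approximation of this last point, the following minimal sketch (ours, not the paper's; all names are invented for illustration) shows how a program state can render an action prohibited or obligatory:

```java
// A minimal sketch (illustrative, not from the paper) of encoding deontic
// modalities through states: depending on the current state, an action is
// prohibited or obligatory. All names are invented for this example.
public class DeonticDemo {
    enum Space { PUBLIC_LIBRARY, PARK, PRIVATE_HOME }

    // Prohibition: smoking is forbidden while in the library state.
    static boolean maySmoke(Space s) {
        return s != Space.PUBLIC_LIBRARY;
    }

    // Obligation: wearing a mask is obligatory in any public state.
    static boolean mustWearMask(Space s) {
        return s != Space.PRIVATE_HOME;
    }

    public static void main(String[] args) {
        System.out.println(maySmoke(Space.PUBLIC_LIBRARY));  // false: prohibited
        System.out.println(mustWearMask(Space.PARK));        // true: obligatory
    }
}
```

A full state machine would also model transitions between states; here only the state-dependent deontic status of each action is shown.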
The transposition of numerous social and legal events in the IT field has made it necessary to
focus on the correspondence between what happens in the legal field and what can happen in the
digital world. Can a social reality be digitized without taking into account some essential
characteristics? Legally relevant forms of interaction, such as digital contracts and smart
contracts, are already taking place. For this reason, it is important to analyze the different
translation techniques into a programming language that considers the categories (such as
constitutive and regulative rules) identified by social ontology. To the best of our knowledge, however, among the many examples of intersemiotic translation examined by semiotics and linguistics scholars, PL is rarely mentioned in connection with the translation of natural languages into machine languages. For the aims of the philosophy of normative language, a fascinating element of the translation of a deontic proposition expressed in natural language into a programming language is the fact that the programming language is essentially normative. Normally, the main purpose of a programming language is not to let people know but to get things done. A programming language does not have a descriptive function but a regulative and constitutive one. The paper will focus on
the feasibility of intersemiotic translation between normative language and digital language. To
do that, we will study how to code a set of rules through a programming language. We will show
several case studies and perform some experiments exploiting both the game of chess, a phenomenon deeply analyzed both in computer science and in the philosophy of normative language², and DomoBuilder [3], an Internet of Things (IoT) system devoted to building complex home domotics environments by combining simple physical devices and intuitive sets of rules. The paper is organized as follows: Section 2 will illustrate the reasons for choosing the bridge language and the tools used to analyze and show how to code norms; Section 3 will explore the process of translating regulative and constitutive rules; finally, Section 4 will draw the conclusions.
2. PROGRAMMING LANGUAGES AS TOOLS FOR CODING NORMS
Programming languages (PL) are a subclass of formal languages comprising a set of instructions used in computer programming to implement algorithms, i.e., to elaborate information (e.g., input data) and produce output (e.g., analysis data, new information) and/or perform a task. PLs can be categorized in several ways that set side by side their different strategies for implementing algorithms: for instance, low-level languages bind the developer to write code that matches the internal representation of the machine architecture, whereas high-level languages allow expressing the code through a more abstract and human-like representation of the algorithm. Another classification compares declarative programming languages and imperative programming languages³.
This classification shows how computer scientists and software engineers tend to add more layers of semantic abstraction to write more readable and maintainable code, thus releasing the developers from specifying how the program should achieve the result. In fact, Imperative
² The rules of chess are among the most exploited examples of constitutive rules.
³ Please note that these two definitions carry an intrinsically different meaning from the one used in the investigation on normative language.
Programming Languages focus on the use of imperative utterances to change a program's state and describe how it operates, in a way similar to the expression of commands in natural language. Declarative Programming Languages, by contrast, allow the developer to describe what the program should accomplish, focusing on its target [4]. We will explore in this context the imperative paradigm, because it is particularly popular and mirrors the hardware implementation of almost all computers. Among the sub-classes of imperative languages, Object-Oriented Programming (OOP) languages gained rapid growth in the '80s [5]. Their concepts, built on "objects" abstracted through "classes": (a) provide natural support for software modelling of real-world objects or of the abstract model to be reproduced; (b) allow easier maintenance of large projects; (c) grant more organized code in the form of classes and concepts, favouring modularity and code reuse. Within this paper, we will exploit Java, a general-purpose OOP language designed to have as few hardware implementation dependencies as possible [6]. In Java, we will analyze the path to set a norm through a programming language and check whether the logical structure of the norm affects the programming code.
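The contrast between the two classifications above can be made concrete with a small example of our own (not taken from the paper): the same computation written first in the imperative style, which states how to change program state step by step, and then in the declarative style, which states what result is wanted.

```java
import java.util.List;

// Illustrative contrast between imperative and declarative styles in Java.
public class ParadigmDemo {
    // Imperative: an explicit loop mutating an accumulator (how).
    static int sumOfSquaresImperative(List<Integer> xs) {
        int total = 0;
        for (int x : xs) {
            total += x * x;
        }
        return total;
    }

    // Declarative: describe the desired result via the Stream API (what).
    static int sumOfSquaresDeclarative(List<Integer> xs) {
        return xs.stream().mapToInt(x -> x * x).sum();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3);
        System.out.println(sumOfSquaresImperative(xs));  // 14
        System.out.println(sumOfSquaresDeclarative(xs)); // 14
    }
}
```

Both methods compute the same value; only the layer of abstraction at which the programmer expresses the algorithm differs.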
3. REGULATIVE RULES VS CONSTITUTIVE RULES
John Searle [7] points out that not every rule has the same logical structure and draws the famous
distinction between regulative and constitutive rules.
According to Searle, regulative rules “regulate antecedently or independently existing forms of
behaviour; for example, many rules of etiquette regulate interpersonal relationships which exist
independently of the rules" [7, p. 34]. There is no ontological relation between the rule and the form of behaviour that the rule regulates. The regulative rule has no impact on the concept of what it regulates or on its individual instances. Examples of regulative rules are:
(i) "It is prohibited to smoke."
(ii) "It is obligatory to wear a mask."
(iii) "It is prohibited to walk on the grass."
Usually, regulative rules establish implicitly or explicitly when they are binding. Regulative rules
characteristically have the form or can be comfortably paraphrased in the form “Do X” or “If Y
do X”.
So it is possible to make the mentioned rules explicit as follows:
(i) "If you are in a public library, do not smoke."
(ii) "If you are in the park, do not walk on the grass."
(iii) "If you are in a public space, wear a mask."
Indeed, wearing a mask is a brute fact, like walking, eating, swimming, or turning on the light. (Regulative) rules can regulate all these behaviours; however, the behaviours exist independently of the existence of any rule, and they cannot be valid or invalid.
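The canonical "If Y do X" form lends itself directly to code. The following sketch (illustrative names of our own, not DomoBuilder's actual API) represents a regulative rule as a condition Y paired with an action X that exists independently of the rule itself:

```java
import java.util.function.BooleanSupplier;

// A regulative rule in the form "If Y do X": the rule merely couples a
// pre-existing action X to a condition Y; it does not create the action.
public class RegulativeRule {
    private final BooleanSupplier condition; // Y
    private final Runnable action;           // X

    public RegulativeRule(BooleanSupplier condition, Runnable action) {
        this.condition = condition;
        this.action = action;
    }

    public void apply() {
        if (condition.getAsBoolean()) {
            action.run();
        }
    }

    public static void main(String[] args) {
        boolean[] maskWorn = {false};
        boolean inPublicSpace = true;
        // "If you are in a public space, wear a mask."
        RegulativeRule rule = new RegulativeRule(() -> inPublicSpace,
                                                 () -> maskWorn[0] = true);
        rule.apply();
        System.out.println(maskWorn[0]); // true
    }
}
```

Deleting the RegulativeRule object would not delete the action of wearing a mask, mirroring the ontological independence of brute facts from regulative rules.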
According to Searle, unlike regulative rules, “Constitutive rules do not merely regulate, they
create or define new forms of behaviour” [7]. Constitutive rules establish a relationship with what
they regulate that is different from the relation between a regulative rule and the regulated behaviour.
There is an ontological connection between the constitutive rule and what the rule regulates.
In the following paragraphs, we will check how to translate regulative rules and constitutive rules into Java. To better understand these aspects, we will also exploit DomoBuilder,
an Internet of Things (IoT) system devoted to building complex home domotics environments, which allows depicting an operative environment composed of real entities: objects (devices) and actors (users). Every object is a "device" that only needs to describe itself within the environment through the PME paradigm: it shares a set of (P) properties deemed relevant for the users, (M) methods allowing the state of its properties to be changed or tasks to be performed, and (E) events allowing the system to be asynchronously informed about occurring changes of state. Let us consider a trivial, though clarifying, example to highlight this concept: a light bulb.
public class LightBulb extends Device {                          // (i)
    public LightBulb() throws DeviceException {
        set("description",
            "This is a lamp, it can be switched on or off");     // (ii)
        putProperty(new DeviceProperty("state",                  // (iii)
            "Shows the status of the Light Bulb",
            String.class.getName(),
            "on|off", "off"));
        putMethod(new DeviceMethod("set",
            "Set the status of the LightBulb, i.e., (ON|OFF)",
            String.class.getName(), "ON|OFF"));                  // (iv)
        [...]
    }

    @Override
    public void onMethod(String name, String value) {
        [...]                                                    // (v)
    }
}
This is all we need to describe and make available a new device. Let's explain what is happening.
(i) LightBulb inherits all the properties and constraints of a generic DomoBuilder device. According to the OOP paradigm of inheritance, the LightBulb derives most of its features by "extending" the generic DomoBuilder Device (e.g., the capability to describe itself).
(ii) We set a property of the LightBulb common to all devices: the description. This makes the description of its features available to the users or to other devices.
(iii) We create a new property specific to light bulbs; in this case, we cannot directly access it like the description available to any generic device, so we first need to define it, making explicit its type and its allowed values.
(iv) In a similar fashion, we create a new method, an action that the device can execute: in this case, setting the light on or off. This is done according to the encapsulation and information-hiding principles of OOP, i.e., the mechanism of the internal code is concealed.
(v) Here the developers implement the internals of the methods. In this case, a command will simply be sent via serial communication through a relay system to an electrical switch.
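The internals elided at step (v) could look roughly as follows. This is a hypothetical, self-contained sketch of ours: the paper does not show the real DomoBuilder implementation, and SerialLink, RELAY_CLOSE, and RELAY_OPEN are invented stand-ins for the relay communication layer.

```java
// Hypothetical sketch of step (v): forwarding a "set" method call to a relay.
public class LightBulbMethodDemo {
    static final String RELAY_CLOSE = "RELAY_CLOSE";
    static final String RELAY_OPEN  = "RELAY_OPEN";

    // Invented stand-in for the serial channel to the relay system.
    static class SerialLink {
        String lastCommand;
        void write(String cmd) { lastCommand = cmd; }
    }

    final SerialLink serial = new SerialLink();
    String state = "off";

    // Simplified analogue of the device's onMethod callback.
    public void onMethod(String name, String value) {
        if ("set".equals(name)) {
            // Forward the command to the relay controlling the physical switch.
            serial.write("ON".equalsIgnoreCase(value) ? RELAY_CLOSE : RELAY_OPEN);
            state = value.toLowerCase(); // keep the PME "state" property in sync
        }
    }

    public static void main(String[] args) {
        LightBulbMethodDemo bulb = new LightBulbMethodDemo();
        bulb.onMethod("set", "ON");
        System.out.println(bulb.state + " " + bulb.serial.lastCommand);
    }
}
```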
The ability to mash up heterogeneous devices and combine their functionalities gives rise to complex systems, enriching them with powerful new features and, eventually, brand-new components underlying new concepts. This is possible through an additional constituent of the
paradigm: (R) rules. Rules allow encoding high-level behaviours on top of trivial devices, enabling complex interactions between components and users.
In the next paragraph, we will show how simple it is to model such an architectural paradigm
given a PME(R) substrate.
3.1. Programming/Translating a Regulative Rule
A machine can operate in a non-digital environment and control other technological devices to fulfil regulative rules. This is possible if the programmer sets up the machine so that it can perceive the external environment and the other devices. To do so, the programmer has to create new classes of objects through a schema very similar to Searle's formal description of constitutive rules. According to Searle, constitutive rules can be formalized as follows:
X counts as Y in the context C
where X is a brute fact and Y is an institutional fact. However, a programmer who has to teach a machine to follow rules in a non-digital environment must create digital (symbolic) classes that correspond to the material objects that the machine has to control (e.g., a lamp, an alarm clock, a washing machine).
In this sense, the programmer follows a schema that can be described as the reverse of the formula of constitutive rules, where X is a digital object, Y is a material object with specific functions, and C is a software development context.
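The reversed formula can be rendered as a short sketch of our own (all names invented for illustration): within the software context C, the digital object X counts as the material object Y it controls.

```java
// Illustrative rendering of the reversed constitutive formula:
// within context C, digital object X counts as material object Y.
public class CountsAsDemo {
    // X: a digital (symbolic) class standing for a material lamp Y.
    static class DigitalLamp {
        boolean on = false;
        void turnOn() { on = true; }
    }

    public static void main(String[] args) {
        // C: the running program is the context in which every operation
        // on the digital object is read as an operation on the material one.
        DigitalLamp lamp = new DigitalLamp(); // X counts as the physical lamp Y
        lamp.turnOn();
        System.out.println(lamp.on); // true: the physical lamp is taken to be on
    }
}
```

The class declaration itself plays the constitutive role: without it, there is no digital counterpart through which the machine can "see" the lamp at all.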
According to Searle, "regulative rules characteristically have the form or can be comfortably paraphrased in the form 'Do X' or 'If Y do X'" [7, p. 34].
Regulative rules aim to be fulfilled, so they are addressed to a subject or to a more or less wide class of subjects. Regulative rules can be addressed to machines, and, following the form "if Y do X", it is possible to ask a machine to perform an action that exists independently of the speaker and of the machine, such as "if (conditions) then turn on the light", where "turning the light on or off" is an action that logically and chronologically pre-exists this rule.
Let's exemplify this regulative rule inside a software development context. A user desires the light to turn on when s/he enters a room. This rule is hard-coded inside the platform, i.e., in a meta-programming language fashion:
If a user enters the room, then turn on the light!
In the Java language, we need to codify the meaning of "entering a room" given a sensor (e.g., a webcam installed in the room) and to code the action of triggering a light switch. The implementation could be written as follows:
while (true) {
    if (webcam.movement_detected) {
        light_switch.turn_on();
    }
}
Here webcam and light_switch are object instances of their corresponding classes, i.e., representations of real devices, and the code is encapsulated in an infinite control loop (while (true)). Nevertheless, in a real implementation context, an event-driven (asynchronous) paradigm is preferred to this (synchronous) strategy. This is possible by subscribing the light switch to the
event caused by the movement detection (i.e., its change of state) and acting consequently by triggering an action, thereby relieving the processor of a continuous and computationally onerous check.
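The event-driven alternative can be sketched with a minimal observer pattern. The class and method names here are illustrative stand-ins, not the actual DomoBuilder API: the light switch subscribes to the webcam's movement event and is notified only when a change of state occurs, so no polling loop is needed.

```java
import java.util.ArrayList;
import java.util.List;

// The light switch: the entity that reacts to the event.
class LightSwitch {
    boolean on = false;
    void turnOn() { on = true; }
}

// The webcam: the entity that emits the movement event.
class Webcam {
    private final List<Runnable> movementListeners = new ArrayList<>();

    void onMovement(Runnable listener) {      // subscription
        movementListeners.add(listener);
    }

    void detectMovement() {                   // fired when movement is sensed
        for (Runnable l : movementListeners) l.run();
    }
}

public class EventDrivenExample {
    public static void main(String[] args) {
        Webcam webcam = new Webcam();
        LightSwitch light = new LightSwitch();
        webcam.onMovement(light::turnOn);     // subscribe once, then idle
        webcam.detectMovement();              // the event triggers the action
        System.out.println(light.on);         // prints "true"
    }
}
```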
This is the reason why DomoBuilder implements a PME(R) paradigm. As we have seen, the Light Bulb shown above is a particularly simple device: it trivially exports its property of being on or off, and its events will be triggered correspondingly after the transition of its state from on to off and vice versa. Note that the set method we defined allows users or other entities (objects, devices) in the system to toggle its state.
Similarly, for the webcam, we will have a few properties. The webcam will internally collect images corresponding to what happens in the environment. An internal algorithm will set to true the property indicating that someone is in the room (i.e., trivial movement-detection algorithms will update its properties as needed, triggering the corresponding events).
public MovementDetector() throws DeviceException {
    set("description", "Checks if is there anybody out there");
    putProperty(new DeviceProperty("movement",
        "Asserts if there is movement",
        boolean.class.getName(), "true|false", "false"));
    putProperty(new DeviceProperty("presence",
        "Asserts whether somebody has been in this room recently",
        boolean.class.getName(), "true|false", "false"));
}
Note that implementation internals are not discussed in this context; the philosophy of DomoBuilder is to make available very simple devices that implement very simple tasks, so that under the hood there are often very few lines of code. The system complexity arises from rules that combine behaviours.
The code to detect movement is very simple; in fact, it just checks the camera input frame by frame, detecting differences among frames and updating the status of the device if a sensitivity threshold is passed.
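The frame-by-frame check can be sketched as follows. This is our own simplification, not DomoBuilder's internal code: two grayscale frames are compared pixel by pixel, and movement is reported when the fraction of changed pixels exceeds a threshold.

```java
// Sketch of frame-difference movement detection (illustrative names).
public class MovementCheck {
    // prev and curr are grayscale frames of equal length (one int per pixel).
    static boolean movementDetected(int[] prev, int[] curr, double threshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            if (Math.abs(prev[i] - curr[i]) > 10) changed++;  // per-pixel sensitivity
        }
        return (double) changed / prev.length > threshold;    // per-frame sensitivity
    }

    public static void main(String[] args) {
        int[] still = {50, 50, 50, 50};
        int[] moved = {50, 200, 200, 50};
        System.out.println(movementDetected(still, still, 0.25)); // prints "false"
        System.out.println(movementDetected(still, moved, 0.25)); // prints "true"
    }
}
```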
It is possible to code a rule inside a machine establishing that under certain environmental conditions (e.g., the detection of movement), the machine will perform an action (turning the light on or off) that is possible independently of any (social) regulation. In this sense, the act of turning on the light is different in kind from the act of moving the bishop during a chess game. Thanks to its architecture, DomoBuilder allows us to build a regulative rule in a declarative form, further abstracting from the imperativeness of the Java language in which it is implemented. The rule can be defined as follows:
<rule>
<id>Automatic Light Example</id>
<conditions>Webcam.movement_detected==true</conditions>
<action>Light_Switch.toggle(ON)</action>
</rule>
This expresses very clearly what outcome we expect from the user's interactions with the home environment.⁴ Here it is clear how events, which allow the asynchronous handling of data by alerting the system when a change has occurred, mirror the concept of a logic of change [8].
During the Covid pandemic in 2020, many new regulative rules were created, and many existing ones were strengthened or modified.⁵ Some of the most visible consequences concerned online gaming, which rose dramatically in popularity, leading to the need for increased controls to keep online chess platforms healthy and fairly competitive; for example, improving the Artificial Intelligence (AI) of the bots (software applications that run automated tasks over the Internet) that automatically check the fairness of players based on deviations from their usual game style.
It is, in fact, possible to program a machine to enforce regulative rules or to check whether people follow them. For instance, consider the regulative rule "you must wear a mask". A video scanner can notice whether I wear a mask (to protect me and others from COVID-19) and, if I don't, it can remind me of my obligation, lock the door until I wear one, or even call the security service. Similarly, for the same aim, a scanner can check people's temperature to prevent their access if it is outside the accepted range.
In DomoBuilder we would code the rule as follows:
<rule>
<id>Covid Security Door Locking</id>
<conditions>Webcam.movement_detected==true
AND Webcam.face_withmask_detected==true
AND Clock.time > 0800 AND Clock.time < 1800 <!-- working time -->
</conditions>
<action>Door.toggle_lock(OFF)</action>
</rule>
In this case, a rule is composed of a set of conditions that must be simultaneously fulfilled in
order to unlock the door.
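The conjunction of conditions behind this rule can be sketched in plain Java. The method and parameter names are illustrative (not DomoBuilder's own): the door is unlocked only when every condition holds at the same time.

```java
// Sketch: evaluating the rule's conditions as a single conjunction.
public class CovidDoorRule {
    // time is given in HHMM format, e.g. 1030 for 10:30.
    static boolean shouldUnlock(boolean movement, boolean maskDetected, int time) {
        return movement
            && maskDetected
            && time > 800 && time < 1800;   // working hours
    }

    public static void main(String[] args) {
        System.out.println(shouldUnlock(true, true, 1030));  // prints "true"
        System.out.println(shouldUnlock(true, false, 1030)); // prints "false": no mask
        System.out.println(shouldUnlock(true, true, 2200));  // prints "false": off hours
    }
}
```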
⁴ Note that further conditions and temporal constraints can be implemented in the same rule.
⁵ For example, during the Tata Steel Chess Tournament in 2021, a new rule allowed the organizers to change the logistics of the game tables, in some cases raising a huge controversy in some really critical game moments. See the Firouzja controversy.
3.2. Programming/Translating a Constitutive Rule
Constitutive rules are different in kind from regulative rules because, whereas regulative rules regulate preexisting behaviour, constitutive rules create what they regulate.⁶ A typical example of a constitutive rule in the game of chess is:
"The chessboard is composed of an 8 x 8 grid of 64 equal squares alternately light (the 'white' squares) and dark (the 'black' squares)."⁷
In Java, we could code the chessboard as a matrix:
int[][] board = new int[8][8];
In this case, we represent each tile of the chessboard with an integer number (an integer will, e.g., represent whether a tile is occupied and by which piece), and each piece may be placed only within the 8x8 grid corresponding to the instantiated matrix (an array of arrays).
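The integer encoding described above can be sketched as a small runnable class. The piece codes and the bounds check are our own illustration, not taken from any chess library: 0 marks an empty tile, positive integers identify pieces, and placement is valid only inside the instantiated 8x8 grid.

```java
// Sketch (hypothetical encoding): an 8x8 integer board with a bounds check.
public class Board {
    static final int EMPTY = 0, BISHOP = 3;   // illustrative piece codes
    int[][] tiles = new int[8][8];            // all entries start at 0 (empty)

    void place(int piece, int x, int y) {
        if (x < 0 || x > 7 || y < 0 || y > 7)
            throw new IllegalArgumentException("outside the 8x8 grid");
        tiles[x][y] = piece;
    }

    public static void main(String[] args) {
        Board board = new Board();
        board.place(BISHOP, 2, 0);             // c1 in 0-based coordinates
        System.out.println(board.tiles[2][0]); // prints "3"
    }
}
```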
Keeping in mind that chess coordinates are expressed as 0-based integers (i.e., a..h maps to 0..7, and 1..8 maps to 0..7), let us implement the function that asserts whether a tile is dark or not. In natural language, we asserted that the tiles are alternately light and dark, but the concept of alternation must be made explicit to a machine through an unambiguous algorithm. One way to implement such a trivial algorithm is to check each tile against a hand-compiled list of dark squares:
if ((x==0 && y==0) || (x==0 && y==2) || (x==0 && y==4) ...
    || ((x==1) && (y==1)) || ... // and so on for every tile
    return true;
else
    return false;
However, this solution is extremely verbose, computationally inefficient, and not scalable (if the
size of the board changes, the list must be re-edited by hand).
In the same way, one would not create an algorithm to sum two numbers by writing a list of all the infinitely many possible sums. So we resort to stratagems (the above-mentioned bridge), in this case by simply writing:
boolean isDark(int x, int y) {
    return (x + y) % 2 == 0;
}
This function, not immediately intuitive, translates the constraint asserting that if the sum of the coordinates passed to the function (integers x and y) is even, a dark square occupies those coordinates on the grid. The concept of even is expressed as "the remainder of the division by 2 (indicated with the modulo operator %) of the sum of the coordinates is equal (== is the equality-check operator) to 0". Let's try, for example, to check the e4 square of the chessboard: "is e4 dark?" translates to isDark(4, 3), which returns false because 4+3=7 is odd (7%2==1). Therefore, e4 is not a dark but a light tile.
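The parity check can be packaged as a small runnable class, with the a..h / 1..8 to 0..7 coordinate mapping made explicit in the comments:

```java
// Runnable version of the parity check: even coordinate sum means dark square.
public class TileColor {
    static boolean isDark(int x, int y) {
        return (x + y) % 2 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isDark(0, 0)); // a1: prints "true" (dark)
        System.out.println(isDark(4, 3)); // e4: prints "false" (light)
        System.out.println(isDark(7, 7)); // h8: prints "true" (dark)
    }
}
```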
⁶ According to A.G. Conte [9], there are different kinds of constitutive rules. In fact, a constitutive rule can be a condition for the existence of what it regulates (e.g., the rules of chess) or can establish conditions for the validity of what it regulates (e.g., the signature of a contract or of a will). In this paper, we focus on rules that are a condition of the existence of what they regulate.
⁷ Article 2.1, Laws of Chess.
As you can see, in order to translate a constitutive rule into code, we were forced to find an alternative way to represent the rule, still in near-natural language but subject to the constraints of semantic correctness and computational efficiency.
From a semiotic point of view, these rules build the meaning of the specific term. For instance, an example of a constitutive rule is the following:
"A bishop moves any number of vacant squares diagonally. The bishop must remain on the same color square as it started the game on."
We cannot say "this is a bishop, and it (may) move(s) diagonally" because there is no bishop before the "moves diagonally" rule.
Here also, the programmer could list all the possible combinations of moves of the chess pieces on the chessboard, but this would make the software particularly inefficient. The programmer must take into account one of the most important characteristics of eidetic constitutive rules: eidetic constitutive rules cannot be violated. If one does not act in accordance with the constitutive rules, one can still perform an act, even one with meaning, but it is in any case different from the activity whose concept is constituted by the rules.⁸
From a software point of view, a way to enforce this behaviour is to check the old and the new position of the instance of a bishop on the chessboard; i.e., given an 8x8 chessboard represented with the same matrix we used earlier, we can implement the check of a legal bishop move as follows:
if (Math.abs(bishop.old_y - bishop.new_y) !=
    Math.abs(bishop.old_x - bishop.new_x))
    throw new InvalidPositionException();
Here Math.abs(...) is a call to a math library function that returns the absolute value of the argument passed inside the parentheses, and the != operator means "is different from". In fact, if the absolute (unsigned) differences of the x and y coordinates between the previous and the current positions are not equal, the bishop didn't move diagonally, so an exception must be thrown or, as usually happens in a software chessboard, the move is simply not allowed,⁹ which is translated in the interface as the impossibility of moving the bishop to that target tile. This rule is part of the definition of the object called bishop.
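The diagonal check can be made self-contained as follows. The bishop object and its coordinate fields are replaced here by plain parameters of our own choosing, and, as in the original example, boundary and blocking-piece checks are omitted:

```java
// The exception signalling an illegal position.
class InvalidPositionException extends Exception {
    InvalidPositionException(String msg) { super(msg); }
}

// Sketch: a move is diagonal iff the horizontal and vertical
// displacements have the same absolute value.
public class BishopMove {
    static void checkDiagonal(int oldX, int oldY, int newX, int newY)
            throws InvalidPositionException {
        if (Math.abs(oldY - newY) != Math.abs(oldX - newX))
            throw new InvalidPositionException("not a diagonal move");
    }

    public static void main(String[] args) throws InvalidPositionException {
        checkDiagonal(2, 0, 5, 3);        // c1 -> f4: legal, no exception
        try {
            checkDiagonal(2, 0, 2, 4);    // c1 -> c5: vertical, illegal
        } catch (InvalidPositionException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```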
Without this rule, the bishop would be unthinkable as a bishop, and it would not be a bishop.
In addition to this, in programming languages such as Java, the creation of objects, which corresponds to instantiating a class, is logically analogous to an eidetic constitutive rule. This is a kind of rule that cannot be violated: the program would "not compile", i.e., it could not be
⁸ In the final chapter of the novel "The Man Who Watched the Trains Go By", Georges Simenon describes an interesting exemplification of what this means. The protagonist of the novel, Popinga, is finally defeated and put in a mental institution. During a game of chess in the asylum garden, he takes the queen off the board and drops it into a cup of coffee. This episode is mentioned by the narrator to show Popinga's insanity: he is not merely committing an odd violation of a rule; he is out of the game as he is out of society, and he stopped playing chess when he made an impossible move (dropping the queen in a cup is, in fact, not a legal move).
⁹ Please note that, for the sake of simplicity, checks on the chessboard boundaries or on blocking pieces along the bishop's trajectory are omitted in this example.
translated into bytecode. It is quite evident in online chess that I can move the bishop only in accordance with the constitutive rule. If you "do not follow" these rules on a physical chessboard and move random chess pieces, you are not violating any rule; you are simply "not playing chess".
It is interesting to note that in an online game web platform, constitutive rules are usually implemented server-side, i.e., the backend implements the logic, which is made accessible via an Application Programming Interface (API) for display on any device that implements the interface (frontend).
Given their structure, digital systems allow the creation of constitutive rules from scratch in a very simple way. For example, many chess platforms provide small variations on the conventional rules, creating new game scenarios such as Fischer random chess (also known as Chess960), Atomic Chess, King Race, and so on. This happens in DomoBuilder too, where the programmer not only gives instructions to the machine but also creates new simplified semantic entities in order to produce a representation of the world understandable to the machine, one that has a logical form extremely similar to the one proposed by Searle for constitutive rules. Starting from these new semantic entities, a new set of regulative rules makes it possible to obtain a new application behaviour from the same components.
4. CONCLUSIONS
To date, some programming languages that translate normative utterances (such as contractual clauses) have been developed to produce immediate legal effects (e.g., a money transfer in a smart contract), but they do not distinguish between violable regulative rules and inviolable constitutive rules. However, this distinction is part of social reality.
We tried to outline the limits and potential of this approach, starting from extremely simple examples that are not strictly legal but that take into account the typological diversity that exists among norms. It will be desirable to verify the applicability of such strategies to the legal field in the future.
Norms can be expressed both in natural language and in formal language. One of the debated problems in deontics consists in finding a semantics that combines constitutive and regulative rules. In this paper, we focused on a subclass of formal languages, programming languages, showing that in imperative OOP languages it is possible to translate both constitutive and regulative rules. Moreover, the combination of constitutive and regulative rules is not only possible but is sometimes the optimal solution for implementing a system. Arguably, an inflexible prescriptivist, a person who thinks that rules are only regulative and that there is no such thing in the world as constitutive rules, would be a bad programmer. The activity
of a programmer is mainly focused on rules. The programmer is indeed an emblematic example of a subject who creates to rule [10]: precisely, s/he often creates new forms of action through constitutive rules. The programmer not only gives instructions to the machine but also creates a representation of the world understandable to the machine. To do so, s/he creates new simplified semantic entities that have a logical form extremely similar to the one proposed by Searle for constitutive rules. However, these rules operate on reality in different ways. They sometimes have a direct impact on an existing behaviour, but they are often a precondition for the existence of that behaviour, i.e., they establish a condition that the user or the programmer has to
satisfy in order to validly perform an action. It is indeed possible to conceive a more articulated typology of (constitutive) rules, and in this sense some interesting attempts have been made ([11]; [12]; [13]; [14]; [15]). A future challenge for this research is to distinguish different forms
of constitutive rules and different strategies of translation. Clearly, the programmer deals with rules that mainly regulate the interaction between human and machine. However, when these rules are used to create a digital social environment (e.g., a digital platform where different subjects, recognized by the machine as belonging to different classes, interact), their social relevance becomes evident.
ACKNOWLEDGEMENTS
This research is part of the project “Legal Acts, Images, Institutions: The Form of the Legal Act
in the Era of Legal Design”, funded by Fondazione Sardegna in 2019.
REFERENCES
[1] Jakobson, R. (1971), “On linguistic aspects of translation”, in: Selected Writings, Mouton, New York.
[2] Cook, V. and M. Newson (1996), “Chomsky‟s universal grammar: an introduction,” Blackwell
Publishers, Oxford, OX, 2nd [updated] edition.
[3] Addis, A. and G. Armano (2010), “Domobuilder: A multiagent architecture for home automation,” in:
A. Omicini and M. Viroli, eds., Proceedings of the 11th WOA 2010 Workshop,
Daglioggettiagliagenti, Rimini, Italy, September 5-7, 2010, CEUR Workshop Proceedings 621, pp.
1–2.
[4] Sebesta, R.W. (2016), “Concepts of programming languages,” Pearson, Boston, eleventh edition.
[5] Black, A. P. (2013), “Object-oriented programming: Some history, and challenges for the next fifty
years,” Information and Computation 231, pp. 3–20.
[6] Arnold, K., J. Gosling and D. Holmes (2005), “The Java programming language,” Addison Wesley
Professional.
[7] Searle, J.R. (1969), “Speech Acts: An Essay in the Philosophy of Language,” Oxford University
Press, Oxford.
[8] Wright von, G. H. (1963), “Norm and Action,” The Humanities Press, New York.
[9] Weinberger, O. (1970), “Die Norm als Gedanke und Realität,” Österreichische Zeitschrift für
öffentliches Rechtes 20, pp. 203–216.
[10] Zelaniec, W. (2013), Create to rule: studies on constitutive rules, LED, Milano.
[11] Conte, A. G. (1986), "Fenomeni di fenomeni," Rivista internazionale di Filosofia del diritto 63, pp. 29–57.
[12] Azzoni, G. M. (1988), “Il concetto di condizione nella tipologia delle regole,” CEDAM, Padova.
[13] Conte, A. G. (2012), "Regole eidetico costitutive e regole anankastico costitutive," in: G. Lorini and L. Passerini Glazel, eds., Filosofie della norma, Giappichelli, Turin, pp. 107–117.
[14] Lorini, G. (2017), Anankastico in deontica, LED, Milano.
[15] Sun, X. and L. van der Torre (2014), “Combining constitutive and regulative norms in input/output
logic,” in: F. Cariani, D. Grossi, J. Meheus and X. Parent, eds., Deontic Logic and Normative
Systems, Springer, New York, pp. 241–257.
AUTHORS
Andrea Addis is Founder and Senior Developer at Infora, an ICT startup with experience in research and innovation, and a former researcher at the University of Cagliari. Research interests: Information Retrieval, Text Categorization, Multi-Agent Systems.
Olimpia G. Loddo is a post-doc fellow at the Department of Law, University of Cagliari. Her main topic is intersemiotic legal translation. She has been working on normative language and the phenomenology of law.