This short, informal article argues that, although Gödel's theorem is valid in classical logic, there exists a four-valued logical system able to prove that arithmetic is both sound and complete. The article also gives a brief, intuitive description of a four-valued Prolog.
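The article leaves its four-valued system informal. As a concrete point of reference, here is a minimal sketch of one well-known four-valued logic, Belnap's FDE; it is only an illustration and need not match the article's own system:

```python
# Belnap's four-valued logic (FDE): each truth value is the set of
# classical values the proposition "holds" -- T={1}, F={0}, Both={0,1},
# Neither={}. One well-known four-valued system, used here only as a
# point of reference.
T, F = frozenset({1}), frozenset({0})
BOTH, NEITHER = frozenset({0, 1}), frozenset()

def neg(x):
    # Negation swaps the classical values a proposition holds.
    return frozenset({1 - v for v in x})

def conj(x, y):
    # x AND y holds "true" iff both do; it holds "false" iff either does.
    out = set()
    if 1 in x and 1 in y:
        out.add(1)
    if 0 in x or 0 in y:
        out.add(0)
    return frozenset(out)

def disj(x, y):
    # Disjunction via De Morgan duality.
    return neg(conj(neg(x), neg(y)))

print(conj(BOTH, T) == BOTH)  # True: Both AND True = Both
```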
The Logical Implication Table in Binary Propositional Calculus: Justification... (ijcsta)
Logic is the discipline concerned with providing valid general rules on which scientific reasoning and the resulting
propositions are based. To evaluate the validity of sentences in propositional calculus, we typically perform a
complete case analysis of all the possible truth-values assigned to the sentence's propositional variables. Truth
tables provide a systematic method for performing such analysis in order to determine whether the sentence is valid,
satisfiable, contradictory, consistent, etc. However, in order to validate logical statements, we have to use valid truth
tables, i.e., truth tables that are provably consistent and justifiable by some natural criteria. The justification of the
truth tables of some logical connectives is straightforward, owing to their support in everyday applications.
Nevertheless, the justification of one logical connective, namely the implication operator, has always been
difficult to construct and understand. Yet logical implication is arguably the most important operator because
of its applications as an inference engine for reasoning in science in general and control engineering in particular.
In this paper, the author presents this problem and introduces a non-exhaustive proof that justifies the logical
implication's truth table in one phase. The author then proposes another, optimal proof, discussing the points of
optimization and the effects of the resulting linguistic and philosophical interpretation on scientific reasoning
processes. Finally, the paper envisions possible extensions of the proposed methodology to solve similar problems
in various types of logic.
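The complete case analysis the abstract describes can be carried out mechanically; a minimal Python sketch of the truth table for material implication:

```python
from itertools import product

# Material implication: p -> q is false only when p is true and q false.
def implies(p, q):
    return (not p) or q

# Complete case analysis over all assignments to the variables --
# exactly the truth-table method described above.
rows = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]
for row in rows:
    print(row)

# A sentence is valid iff it is true in every row: p -> p is valid,
# while p -> q is satisfiable but not valid.
valid = all(implies(p, p) for p in [True, False])
```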
Completeness: From Henkin's Proposition to Quantum Computer (Vasil Penchev)
The paper treats Leon Henkin's proposition as a "lighthouse" that can illuminate a vast territory of knowledge uniformly: logic, set theory, information theory, and quantum mechanics. Two strategies toward infinity are equally relevant to it, for it is as universal, and thus complete, as it is open, and thus incomplete. Henkin's, Gödel's, Robert Jeroslow's, and Hartley Rogers' propositions are reformulated so that completeness and incompleteness are unified and thus reduced to a joint property of infinity and of all infinite sets. However, only Henkin's proposition, equivalent to an internal position toward infinity, is consistent. This can be traced back to set theory and its axioms, where the axiom of choice is the key. Quantum mechanics is forced to introduce infinity implicitly through Hilbert space, on which its formalism is founded. One can demonstrate that some essential properties of quantum information, entanglement, and quantum computing originate directly from infinity once it is involved in quantum mechanics. Thus these phenomena can be elucidated as both complete and incomplete, with choice as the border between them. A special kind of invariance to the axiom of choice shared by quantum mechanics is discussed as constituting that border between the completeness and incompleteness of infinity in a consistent way. The so-called paradox of Albert Einstein, Boris Podolsky, and Nathan Rosen is interpreted entirely in the same set-theoretic terms. A quantum computer can demonstrate especially clearly the privilege of the internal position, or "observer", or "user", toward infinity implied by Henkin's proposition as the only one consistent with respect to infinity. An essential area of contemporary knowledge may thus be synthesized from a single viewpoint.
Automated Education Propositional Logic Tool (AEPLT): Used For Computation in... (CSCJournals)
The Automated Education Propositional Logic Tool (AEPLT) is envisaged. The AEPLT is an automated tool that simplifies and aids the calculation of compound propositions built from conjunction, disjunction, conditional, and biconditional connectives. In its architecture, the user simply enters the propositional variables and the system maps them to the right connectives to form compound propositions, or formulas, which are calculated to give the desired solutions. The automation guarantees correct solutions, whereas a human mind working through all the possible theorems, axioms, and statements would, through fatigue, be bound to miss some steps. In addition, the AEPLT has a user-friendly interface that guides the user in executing the operations that derive solutions.
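A minimal sketch of the kind of computation AEPLT automates; the function names are illustrative, not AEPLT's actual interface:

```python
# The four connectives AEPLT supports, evaluated over supplied truth
# values. Names are illustrative stand-ins, not the tool's API.
def conjunction(p, q):
    return p and q

def disjunction(p, q):
    return p or q

def conditional(p, q):
    return (not p) or q

def biconditional(p, q):
    return p == q

# Example: evaluate (p AND q) -> (p OR q) for p = True, q = False.
p, q = True, False
result = conditional(conjunction(p, q), disjunction(p, q))
print(result)  # True -- this formula happens to be a tautology
```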
Developing AI Tools For A Writing Assistant: Automatic Detection of dt-mistak... (CSCJournals)
This paper describes a lightweight, scalable model that predicts whether a Dutch verb ends in -d, -t or -dt. The confusion of these three endings is a common Dutch spelling mistake. If the predicted ending is different from the ending as written by the author, the system will signal the dt-mistake. This paper explores various data sources to use in this classification task, such as the Europarl Corpus, the Dutch Parallel Corpus and a Dutch Wikipedia corpus. Different architectures are tested for the model training, focused on a transfer learning approach with ULMFiT. The trained model can predict the right ending with 99.4% accuracy, and this result is comparable to the current state-of-the-art performance. Adjustments to the training data and the use of other part-of-speech taggers may further improve this performance. As discussed in this paper, the main advantages of the approach are the short training time and the potential to use the same technique with other disambiguation tasks in Dutch or in other languages.
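The detection logic reduces to a simple skeleton: predict the verb's correct ending, then compare it with what the author wrote. The sketch below uses a stand-in lookup table in place of the paper's ULMFiT model and ignores the sentence context the real model takes into account:

```python
# Skeleton of the dt-checker: predict the ending, flag a dt-mistake
# when the written ending differs. KNOWN is a made-up stand-in for the
# trained model; entries and behaviour are illustrative only.
KNOWN = {"wor": "dt"}  # illustrative: 3rd person "wordt" takes -dt

def split_ending(token):
    for end in ("dt", "t", "d"):  # try the longest ending first
        if token.endswith(end):
            return token[: -len(end)], end
    return token, ""

def flag_dt_mistake(token):
    stem, written = split_ending(token)
    predicted = KNOWN.get(stem, written)  # unknown stem: no opinion
    return predicted != written

print(flag_dt_mistake("wordt"))  # False: spelled correctly
print(flag_dt_mistake("wort"))   # True: -t written where -dt predicted
```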
Presented at Dynabyte (13th April 2016)
I do not think it means what you think it means. What "object" means to many programmers is managers, views, controllers, getters and setters, megalithic frameworks, spaghetti inheritance, lots of mocks, and large classes. But is this what object-oriented development is really about?
The original vision of objects was focused more on the problem domain than the solution domain. Objects were supposed to be small, properly encapsulated abstractions composed from other objects. Classes were a mechanism for expressing objects, not the editing black hole of the development process.
Think you know objects? This talk strips back the layers of habits, frameworks and legacy to cast objects in a new but old light.
Today we’ll try to cover a number of things:
1. Learning philosophy/philosophy of statistics
2. Situating the broad issues within philosophy of science
3. Little bit of logic
4. Probability and random variables
Week 14: April 28 & 30 - Love and Death
Castillo, Chap. 9 “Sofia, Who Would Never Again Let Her Husband Have the Last Word…”
Chap. 10 “Wherein Sofia Discovers La Loca’s Playmate…”
Chap. 11 “The Marriage of Sofia’s Faithful Daughter to her Cousin”
Chap. 12 “Of the Hideous Crime of Francisco el Penitente…”
1. For all chapters, identify the four levels of analysis: 1) metaphoric/symbolic; 2) literary; 3) sociological; and 4) spiritual.
Chapter 9 “Sofia, Who Would Never Again Let Her Husband Have the Last Word…”
2. In this chapter, Sofia begins a transformation of her own. What is this transformation and what role does Esperanza play?
Chapter 10
3. In this chapter, we return to La Loca, reading from her point of view. What do we learn from this, the youngest of Sofi’s daughters?
4. As Fe leaves Sofia’s home we realize she has not come to terms with what she went through when Tom broke off the engagement. What is Fe like now? Has she also changed?
5. What about Esperanza, what news about her? And what about “La Llorona, Chicana international astral-traveler”?
Chapter 11
6. Much happens to Fe in this chapter. Be able to recount all of Fe's experiences and the relationship to big business, the U.S. government, and the medical profession.
Chapter 12
7. What to make of this last chapter in Caridad and Francisco's lives? What are the recurring themes, metaphors, symbols, etc.?
In preparation for the Opposing Viewpoints short paper due in Module Five, you will outline a position (thesis) on a topic of your choosing.
Using the Prewriting Template provided, outline two to three of your reasons for supporting your thesis and then also outline the objection’s position. Please note that the main purpose of this assignment is to formulate the strongest possible objection to your own position before responding to it.
You will be required to use at least four outside (i.e., other than the textbook) sources for this paper, two for each side of the issue. You do not need to do extensive research before completing the outline.
Possible topics: Affirmative Action, Abortion, State-Financed Health Care, Flat Tax...or anything you want. It is best to choose a position for which you can find reasonable arguments on both sides.
Click on the title above to turn in your outline.
First Paper (Opposing Viewpoints):
Rubric (Critical Elements rated Distinguished / Proficient / Emerging / Not Evident, with point values):

Main Elements (Value: 25)
- Distinguished (23-25): Includes almost all of the main elements and requirements and cites ample appropriate support to illustrate each element
- Proficient (20-22): Includes most of the main elements and requirements and cites appropriate support to illustrate each element
- Emerging (18-19): Includes some of the main elements and requirements
- Not Evident (0-17): Does not include any of the main elements and requirements

Inquiry and Analysis
- Distinguished (18-20): Explores multiple reasons and offers in-depth analysis of evidence to make informed conclusions about the issue
- Proficient: Explores so.
This book is written by LOIBANGUTI, BM; it is an online copy provided for free. No part of this book may be republished, but it can be used, stored as a soft-copy book, and shared accordingly.
Classical logic has a serious limitation: it cannot cope with the vagueness and uncertainty into which most modes of human reasoning fall. To provide a foundation for human knowledge representation and reasoning in the presence of vagueness, imprecision, and uncertainty, fuzzy logic should be able to deal with linguistic hedges, which play a very important role in the modification of fuzzy predicates. In this paper, we extend fuzzy logic in the narrow sense with graded syntax, introduced by Novák et al., with many hedge connectives. In one case, no hedge has a dual; in the other, each hedge can have its own dual. The resulting logics are shown to also have Pavelka-style completeness.
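As a concrete point of reference, the classic Zadeh hedges modify a membership function by concentration and dilation; the paper's graded hedge connectives are more general than this textbook sketch:

```python
# Zadeh's classic hedges as membership-function modifiers: "very"
# concentrates a fuzzy predicate, "somewhat" dilates it. A standard
# textbook illustration, not the paper's connectives.
def very(mu):
    return lambda x: mu(x) ** 2

def somewhat(mu):
    return lambda x: mu(x) ** 0.5

# Toy membership function for "tall" over height in cm.
def tall(h):
    return max(0.0, min(1.0, (h - 160) / 40))

print(tall(180), very(tall)(180), somewhat(tall)(180))
# 0.5 0.25 0.7071...
```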
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has
become paramount for enhancing defence capabilities. This research report delves into the applications of
big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering,
strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced
algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis,
and optimized resource allocation. However, their adoption also raises critical concerns regarding data
privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive
understanding of the current state of big data and ML in defence, while examining the challenges and
ethical considerations that must be addressed to ensure responsible and effective implementation.
Cloud computing, one of the most recent innovative developments in the IT world, has been instrumental not just in the success of SMEs but, through their productivity and innovative contribution to the economy, has also made a remarkable contribution to the economic growth of the United States. To
this end, the study focuses on how cloud computing technology has impacted economic growth through
SMEs in the United States. Relevant literature connected to the variables of interest in this study was
reviewed, and secondary data was generated and utilized in the analysis section of this paper. The findings
of this paper revealed that there have been meaningful contributions that the usage of virtualization has
made in the commercial dealings of small firms in the United States, and this has also been reflected in the
economic growth of the country. This paper further revealed that as important as cloud-based software is,
some SMEs are still skeptical about how it can help improve their business and increase their bottom line
and hence have failed to adopt it. Apart from the SMEs, some notable large firms in different industries,
including information and educational services, have adopted cloud computing technology and hence
contributed to the economic growth of the United States. Lastly, findings from our inferential statistics
revealed that no discernible change has occurred in innovation between small and big businesses in the
adoption of cloud computing. Both categories of businesses adopt cloud computing in the same way, and
their contribution to the American economy has no significant difference in the usage of virtualization.
Energy-constrained Wireless Sensor Networks (WSNs) have garnered significant research interest in recent years. Cooperative Multiple-Input Multiple-Output (MIMO) is a specialized application of MIMO technology within WSNs that operates effectively even in challenging and resource-constrained environments. By facilitating collaboration among sensor nodes, Cooperative MIMO enhances reliability, coverage, and energy efficiency in WSN deployments. Consequently, it finds application in diverse WSN scenarios spanning environmental monitoring, industrial automation, and healthcare.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to the fields of Computer Science and Information Systems. IJCSIT is an open-access, peer-reviewed scientific journal published in both electronic and print form. The mission of the journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication. IJCSIT publishes original research papers and review papers, as well as auxiliary material such as case studies and technical reports.
Demand for car parking grows with the number of car users, and with the increased use of smartphones and their applications, users prefer mobile-phone-based solutions. This paper proposes a Smart Parking Management System (SPMS) built on Arduino components, an Android application, and IoT. It gives the client the ability to check available parking spaces and reserve a parking spot. IR sensors are used to determine whether a parking space is occupied. The occupancy data are transmitted via a Wi-Fi module to the server and retrieved by the mobile application, which offers many options attractively and at no cost to users and lets the user check reservation details. With IoT technology, the smart parking system can be connected wirelessly to easily track available locations.
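A server-side sketch of the bookkeeping such a system needs: IR sensors report occupancy, and the mobile app queries free slots and reserves one. The class and method names are illustrative, not the paper's Arduino/Android protocol:

```python
# Minimal server-side model of the SPMS described above. Slot state is
# driven by two inputs: IR sensor reports and app reservations.
class ParkingLot:
    def __init__(self, n_slots):
        self.occupied = [False] * n_slots  # updated from IR sensor reports
        self.reserved = [False] * n_slots  # updated from the mobile app

    def sensor_update(self, slot, occupied):
        self.occupied[slot] = occupied

    def free_slots(self):
        return [i for i, (o, r) in enumerate(zip(self.occupied, self.reserved))
                if not o and not r]

    def reserve(self, slot):
        if slot in self.free_slots():
            self.reserved[slot] = True
            return True
        return False

lot = ParkingLot(4)
lot.sensor_update(0, True)   # IR sensor sees a car in slot 0
print(lot.free_slots())      # [1, 2, 3]
print(lot.reserve(2))        # True
print(lot.free_slots())      # [1, 3]
```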
Computer-Assisted Language Learning (CALL) systems are computer-based tutoring systems that address linguistic skills. Adding intelligence to such systems mainly relies on Natural Language Processing (NLP) tools to diagnose student errors, especially in grammar. However, most such systems do not model student competence in linguistic skills, especially for the Arabic language. In this paper, we deal with basic grammar concepts of the Arabic language taught in the fourth grade of elementary school in Egypt, through the Arabic Grammar Trainer (AGTrainer), an intelligent CALL system. The implemented system trains students through questions that cover different concepts at different difficulty levels. The constraint-based student modeling (CBSM) technique is used as a short-term student model: CBSM defines, at a fine-grained level, the different grammar skills through the defined skill structures. The main contribution of this paper is the hierarchical representation of the system's basic grammar skills as domain knowledge. That representation is used as a mechanism for efficiently checking constraints in order to model the student's knowledge, diagnose errors, and identify their causes. In addition, the satisfied constraints, the number of trials the student takes to answer each question, and a fuzzy-logic decision system are used to determine the student's learning level for each lesson as a long-term model. The evaluation showed the system's effectiveness in learning, as well as the satisfaction of students and teachers with its features and abilities.
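Constraint-based student modeling rests on pairs of conditions: a constraint is violated when its relevance condition holds but its satisfaction condition does not. A minimal sketch with a made-up grammar constraint (not one of AGTrainer's actual Arabic constraints):

```python
# CBSM in miniature: each constraint has a relevance condition and a
# satisfaction condition; an answer violates the constraint when it is
# relevant but not satisfied. The constraint below is illustrative.
def check(answer, constraints):
    return [c["name"] for c in constraints
            if c["relevant"](answer) and not c["satisfied"](answer)]

constraints = [{
    "name": "subject-verb agreement",
    "relevant":  lambda a: a["has_subject"] and a["has_verb"],
    "satisfied": lambda a: a["subject_number"] == a["verb_number"],
}]

good = {"has_subject": True, "has_verb": True,
        "subject_number": "sg", "verb_number": "sg"}
bad = dict(good, verb_number="pl")
print(check(good, constraints))  # []
print(check(bad, constraints))   # ['subject-verb agreement']
```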
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
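The kind of features such models typically consume can be computed from raw (x, y, t) mouse samples; the feature choice below is illustrative, not the paper's exact set:

```python
import math

# Per-segment speed and turning statistics from raw (x, y, t) mouse
# samples -- an illustrative feature set for behavioural authentication.
def mouse_features(samples):
    """samples: list of (x, y, t) tuples with t in seconds."""
    speeds, angles = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # drop duplicate or out-of-order timestamps
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        angles.append(math.atan2(y1 - y0, x1 - x0))
    # Total turning helps separate twitchy, high-intensity interaction
    # (e.g. an action game) from calm, low-intensity UI use.
    turning = sum(abs(a1 - a0) for a0, a1 in zip(angles, angles[1:]))
    return {"mean_speed": sum(speeds) / len(speeds), "turning": turning}

path = [(0, 0, 0.0), (3, 4, 0.1), (6, 8, 0.2)]
print(mouse_features(path))  # straight-line path: no turning
```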
This research aims to further understanding in the field of continuous authentication using behavioural
biometrics. We are contributing a novel dataset that encompasses the gesture data of 15 users playing
Minecraft with a Samsung Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, namely Random Forest (RF), K-Nearest Neighbors (KNN), and
Support Vector Classifier (SVC), to determine the authenticity of specific user actions. Our most robust
model was SVC, which achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed to make it a viable option
for authentication systems. Our dataset is available at the following
link: https://github.com/AuthenTech2023/authentech-repo
This paper discusses the capabilities and limitations of GPT-3, a state-of-the-art language model, in the
context of text understanding. We begin by describing the architecture and training process of GPT-3, and
provide an overview of its impressive performance across a wide range of natural language processing
tasks, such as language translation, question-answering, and text completion. Throughout this research
project, a summarizing tool was also created to help us retrieve content from any type of document,
specifically IELTS Reading Test data in this project. We also aimed to improve the accuracy of the
summarizing, as well as the question-answering capabilities of GPT-3, via long text
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various pre-processing and classification techniques in accurately segmenting and classifying different structures
within MRI and natural images. We have utilized both sample gradient and Canny edge detection
methods for pre-processing, and K-means clustering has been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training the models for image classification.
This work highlights transfer learning's effectiveness in image classification using CNNs and VGG-16, and
provides insights into the selection of pre-trained models and hyperparameters for optimal performance.
We have proposed a comprehensive approach for image segmentation and classification, incorporating pre-processing techniques, the K-means algorithm for segmentation, and deep learning models such as CNN and VGG-16 for classification.
The security of Electric Vehicle (EV) charging has gained attention after the increase in EV adoption
in the past few years. Mobile applications have been integrated into EV charging systems, which mainly use a
cloud-based platform to host their services and data. Like many complex systems, cloud systems are
susceptible to cyberattacks if proper measures are not taken by the organization to secure them. In this
paper, we explore the security of key components in the EV charging infrastructure, including the mobile
application and its cloud service. We conducted an experiment that initiated a Man-in-the-Middle attack
between an EV app and its cloud services. Our results showed that it is possible to launch attacks against
the connected infrastructure by taking advantage of vulnerabilities that may have substantial economic and
operational ramifications on the EV charging ecosystem. We conclude by providing mitigation suggestions
and future research directions.
This paper describes the outcome of an attempt to implement the same transitive closure (TC) algorithm
for Apache MapReduce running on different Apache Hadoop distributions. Apache MapReduce is a
software framework used with Apache Hadoop, which has become the de facto standard platform for
processing and storing large amounts of data in a distributed computing environment. The research
presented here focuses on the variations observed among the results of an efficient iterative transitive
closure algorithm when run against different distributed environments. The results from these comparisons
were validated against the benchmark results from OYSTER, an open source Entity Resolution system. The
experiment results highlighted the inconsistencies that can occur when using the same codebase with
different implementations of MapReduce.
International Journal of Computer Science & Information Technology (IJCSIT) Vol 9, No 2, April 2017
DOI: 10.5121/ijcsit.2017.9206
A NOTE ON GÖDEL´S THEOREM
J. Ulisses Ferreira
Trv Pirapora 36 Costa Azul, 41770-220, Salvador, Brazil
ABSTRACT
This short and informal article shows that, although Gödel's theorem is valid using classical logic, there
exists a four-valued logical system that is able to prove that arithmetic is both sound and complete.
This article also describes a four-valued Prolog in an informal, brief and intuitive manner.
KEYWORDS
Gödel, incompleteness theorem, four-valued logic, Hilbert's program
1. INTRODUCTION
In 1931, Kurt Gödel, with his revolutionary theorem, put an end to Hilbert's dream of
formalizing mathematics: he demonstrated that the programme would not work even for arithmetic [1].
On the other hand, in the nineties, I started inserting a third value called "unknown" into Plain [2],
a programming language that I was designing at that time. The unknown constant has theoretically
been referred to as uu since the end of the nineties, when I was doing my PhD abroad. In my
PhD thesis, I introduced a five-valued logic over the values in {tt, ff, uu, ii, kk} [3]. In 2004, I
not only published that logic as a journal article but also published a seven-valued logic at a
conference in San Diego [4], adding the values {fi, it} ("false or inconsistent" and "inconsistent or
true", respectively) so that the logic could be used together with the same uncertainty model proposed
during my Master's course in 1990. My seven-valued logic, which makes use of that uncertainty model,
permits variables to change their values during the computation as the system obtains new pieces of
information. An example of this is the paternity test: before the discovery of the DNA
test, one could judge whether a child was the daughter or son of a particular man by
hereditary physical characteristics. However, there was always uncertainty to some extent.
The uncertainty factor could be represented by uu (unknown), at least as an initial state of some
variable. Since the DNA test was discovered, all variables that represent the hypothesis of
being the child's father should change their states from uu to either kk, tt or ff.
As part of my previous contribution, the kk value means "knowable", and it is usable when
something is not yet known, but it is already known to be consistent: it can be either true
or false but not both. It can be known by God or by someone else or some machine, for instance, but
it is not yet known by the machine or person who is deductively reasoning, and it may remain
unknown forever; at least its consistency is guaranteed. This is the meaning of the kk value,
which fits into the referred uncertainty model when a variable's thresholds collapse: False = True,
which means that there is nothing strictly between the False and the True thresholds. In the above
example of the paternity test, uu used to represent the initial state before the discovery of the
DNA test, whereas kk represents the initial state given the existence of the DNA test, but before
knowing the result of a particular DNA test, either ff or tt.
Previously and independently, Kleene, Łukasiewicz and Priest proposed their three-valued logics.
In 1977, Nuel Belnap (b. 1930) proposed his four-valued logic [5]. What I observed several
years ago is that Gödel's proof may not work together with some logics that have more than three
values. The four necessary values mean "true", "false", "unknown" and "inconsistent", or similar
meanings; that is, at least these four values and meanings. The latter two values correspond to N
(none) and B (both) in Belnap's four-valued logic, respectively, and to uu and ii,
respectively, in both of my logics referred to above, as well as in the four-valued logic presented in this
article and in the four-valued Prolog also described here.
The problem Gödel introduced was due to self-references and paradoxes, which make some
propositions of arithmetic result in both true and false, together with the observation that any
proof about mathematics is itself a mathematical object. Boolean logics, however, clearly do not
permit formal systems written in them to capture the problem pointed out by Gödel in his theorem.
One condition for a four-valued formal system to be able to prove all true propositions, and only
the true propositions, is certainly that it yields the same results as classical logic, except
when one or both operands have values other than true and false. In any proof, the result here is the
true value only, and does not include values such as "unknown". Belnap's four-valued logic does
not fail this condition, for, although B ∨ N = T, where T represents the true value, the
operands are B and N, which are not Boolean.
In my PhD thesis, there was a kind of typo in the truth table for the specific case ff ↔ ff, which
results in tt in my logic but was written as ff instead: a mistake in only one of the two truth
tables for the equivalence operation. Taking this into account, and by using only one of
those equivalence operations, my five-valued logic satisfies that condition. Moreover, in 2007 I
checked, by using a program of mine which seems to be correct, whether such a four-valued
logic exists, and its computation resulted in several logics, one of them being Belnap's.
The 12th solution written by my program was the logic that pleased me the most, and I think
that other people might have done the same as I did, and even preferred the same logic, but
I have never seen one do so. In 2011, I could not claim authorship of that four-valued logic,
but the truth tables of my preference are the following:
Table 1. The present author's four-valued logic truth tables
Section 2 is dedicated to Gödel's theorem and his proof, where I informally describe a system for
capturing all possible results. In Section 3, I briefly describe a four-valued Prolog programming
language, whereas Section 4 contains my conclusions.
2. ON GÖDEL’S PROOF
The set of all propositions on arithmetical truths is written over the two Boolean values; that set
could be complete, but it cannot be sound, i.e. it is clearly inconsistent. However, I write an external
layer with an external view of that set. My layer is written with four or more values. It interprets that
set and, thus, both layers together form a formal system in which the two-valued system is the
server while the four-valued system is the client. The external layer makes use of the internal one.
The system with at least four values is quite simple and works in the following manner:

Whenever one attempts to prove that a proposition is true and the two-valued system succeeds, the
external layer still tries to prove that the proposition is false: if this second attempt also
succeeds, the external layer results in ii, the inconsistent value; if, on the other hand, it fails,
the external layer results in true.

Whenever one attempts to prove that a proposition is true and the two-valued system fails, the
external layer still tries to prove that the proposition is false: if this second attempt also fails,
the external layer results in uu, the unknown value; if, on the other hand, it succeeds, the external
layer results in false.
In other words, the meaning of a two-valued internal query is only the attempt to prove, which
either succeeds or fails. In this way, the whole formal system is clearly sound and complete. In
1997, I wrote a three-valued Prolog which I called Kleene at the time and, to be more modest,
renamed Globallog in the following year [6]; the same language is the subject of one of the
chapters of my PhD thesis.
The system described above in this section can be written more clearly in a Pascal-like style
as follows:

Algorithm 1. The four-valued query in a Pascal-like language

type
  proposition = string;
  QueryAnswers = (LocalFalse, LocalTrue, NotFound);
  FourValues = (uu, ff, tt, ii);

(* … *)

function TryProposition2v(p: proposition; q: QueryAnswers): boolean;
begin
  (*
    Any polynomial search algorithm with unification for checking
    whether the proposition p is true. Alternatively, this function
    also returns the information that the search algorithm has been
    unable to answer whether the proposition p is true or false with
    respect to the current state of the knowledge base. In this case,
    where no unification has been found, the result is false.
  *)
end;

function proposition4v(p: proposition): FourValues;
begin
  if TryProposition2v(p, LocalTrue) then
    if TryProposition2v(p, LocalFalse) then
      proposition4v := ii
    else
      proposition4v := tt
  else
    if TryProposition2v(p, LocalFalse) then
      proposition4v := ff
    else
      proposition4v := uu
end;
Clearly, such an algorithm captures all possibilities, and it can be adapted to extend from
propositional logic to a more sophisticated, even second-order, logic with propositions.
Certainly, we are unable to state all mathematical truths, for mathematics is a science and, as such,
new theorems, proofs and mathematical objects are being formulated all the time by
researchers. Thus, a proper four-valued formal system is able to state, by means of the uu value,
when a proposition is still unknown. On the other hand, such a formal system captures the notion of
paradoxes by means of the ii value. Therefore, it is sound and complete.
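The same scheme can be mirrored in Python. This is a hypothetical sketch: `prove` stands in for any two-valued search procedure over a knowledge base, and the toy knowledge base below is invented purely for illustration:

```python
# Four truth values of the external, four-valued layer.
UU, FF, TT, II = "uu", "ff", "tt", "ii"

def proposition_4v(prove):
    """Combine two two-valued proof attempts into a four-valued answer.

    `prove(negated)` is any two-valued search procedure: it returns
    True when it finds a proof of the goal (or of its negation, when
    `negated` is True) and False otherwise.
    """
    pos = prove(negated=False)   # try to prove the proposition
    neg = prove(negated=True)    # try to prove its negation
    if pos and neg:
        return II   # both provable: a paradox, i.e. inconsistent
    if pos:
        return TT
    if neg:
        return FF
    return UU       # neither provable: unknown

# Toy knowledge base: 'p' is provable both positively and negatively
# (a paradox), 'q' only positively, 's' only negatively, 'r' not at all.
KB = {("p", False), ("p", True), ("q", False), ("s", True)}

def prover(goal):
    return lambda negated: (goal, negated) in KB

assert proposition_4v(prover("p")) == II
assert proposition_4v(prover("q")) == TT
assert proposition_4v(prover("s")) == FF
assert proposition_4v(prover("r")) == UU
```

The four branches correspond one-to-one to the four branches of the Pascal-like proposition4v function of Algorithm 1.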
3. A FOUR-VALUED PROLOG
For further work, a four-valued Prolog can be formally defined, implemented and used. In this
section, I introduce a brief, informal and intuitive description of the adaptation of the three-valued
Prolog defined by the present author[6]. Let us call Prolog4v the sample programming language
whose interpreter is intended to be the four-valued formal system.
3.1 Syntactical and Semantic Definitions
Definition 1. A program in Prolog4v is a sequence S of clauses c1 … cn. Thus, it is said that a
computation by S proves a goal g if and only if there exists some ci in S such that g is an
immediate consequence of ci, assuming that the body of ci can be proven. The notions of clause
and body are given in the next definition.

Given S as a sequence of clauses c1 … cn, a program in Prolog4v corresponds to the disjunction
of all its clauses, that is, c1 ∨ … ∨ cn, where the disjunction operator ∨ is the same operator of
the four-valued logic in Table 1. Nonetheless, the interpreter, also called the formal system here,
carries out its computation "downwards", i.e. from the first to the last clause. The sequence of
clauses is usually written as a Prolog program is, i.e. one clause per line.
Definition 2. A clause is a language construct which has one of the forms below:

[not] p(t1, … , tn).

or

[not] p(t1, … , tn) ← [not] p1(t1,1, … , tr,1), … , [not] pm(t1,m, … , ts,m).
The first clause above corresponds to a fact, whereas the second clause corresponds to a rule. All
clauses end with a dot symbol. As usual in syntax definitions, the above brackets are not part of
the language; instead, they mean that the negation operator is optional in the clauses. Any rule
contains its head, which is on the left of the inference operator ←, and its body, which is on the
right of the same operator. At the lexical level of Prolog4v, there are two different inference
operators to be chosen by the programmer, either ":-" as in Prolog, or ":=". The former operator
obeys the Closed World Assumption [7] and makes use of Negation as Failure [8]. This means
that if the body of a rule results in uu, the ":-" operator makes the head of the same rule become
ff. Similarly, if the body of a rule results in ii, the ":-" operator also makes the head of the same
rule become ff. The latter operator ":=" is a contribution of mine, which obeys what I called the
Open World Assumption in my PhD thesis [6] and corresponds to the → operator described in
Table 1.
Briefly, if no clause unifies with some given goal g, the answer to the query for g is uu, the unknown
constant of Prolog4v.

The body of any rule is formed by a sequence of predicates with zero or more indexed parameters
t, separated by the comma symbol (","), which in turn corresponds to the ∧ operator of the
four-valued logic that I introduced in Table 1, in the introductory section. During the computation,
each predicate pj(t1,j, … , tu,j) corresponds to a new four-valued goal and to a new four-valued
query.
Definition 3. There exist four predefined constants in Prolog4v, namely ff, tt, uu and ii.

The above constants correspond to the four operands F, T, U and I, respectively, of the four-valued logic described in Table 1.
Note that, in accordance with Table 1, if any of those queries in a body results in ff, the
computation of the whole rule results in ff, regardless of any possible inconsistency or lack of
information in the other queries of the body of the rule in question. The queries are performed
from left to right as in Prolog, but it is easy to see that a Prolog4v interpreter can carry out the
computation in parallel and can even distribute it among a number of machines. Also from
Table 1, note that a rule results in tt if and only if all its queries result in tt; that is, the trivial
Boolean cases clearly must hold.
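Under these rules, the value of a rule body can be computed by folding a four-valued conjunction over its queries. The sketch below follows the behaviour just described (ff dominates; tt only when every query is tt); giving ii priority over uu in mixed bodies is my assumption, since Table 1 itself is not reproduced here:

```python
FF, TT, UU, II = "ff", "tt", "uu", "ii"

def body_value(query_results):
    """Four-valued conjunction over the queries of a rule body.

    ff dominates; tt results only when every query results in tt;
    otherwise uu or ii is propagated (the priority of ii over uu
    in mixed bodies is an assumption, not taken from Table 1).
    """
    results = list(query_results)
    if FF in results:
        return FF   # one false query falsifies the whole body
    if II in results:
        return II   # propagate inconsistency
    if UU in results:
        return UU   # propagate lack of information
    return TT       # every query proved true

assert body_value([TT, FF, UU]) == FF   # ff dominates
assert body_value([TT, TT]) == TT       # the Boolean case holds
assert body_value([TT, UU]) == UU       # T ∧ U gives U
```

Since this operation is commutative and associative, the queries can be evaluated in any order, which is why the interpreter may run them in parallel or distribute them across machines.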
With respect to the ":=" inference operator, which in turn corresponds to the → implication
operator of the introduced four-valued logic, but with the sides of the implication swapped,
I could have chosen any pairs of operands of the → table whose results are all tt. However, the
main diagonal of the → table is what makes sense in the real world, hence it is my choice.
That is to say, ff → ff, tt → tt, uu → uu and ii → ii all result in tt, and therefore the → operator
is not only sound but also makes sense in the real world. During the computation, if the body of a
rule results in ii, the query for the whole rule results in ii and, in this way, the inconsistency is
propagated, possibly up to the level of the user, such as a mathematician.
However, any query with the negation operator can be treated as a unit. That is, although the
four-valued logic introduced in Table 1 contains the "not" operator ¬, the system might not make
use of it. Instead, the not operator can be part of the query as well as part of the unification
algorithm, i.e. the system tries to unify the predicate including the "not" operator. Furthermore,
not uu does not result in ii, and not ii does not result in uu either. Instead, the system ought
to propagate uu and also ii. Thus, not uu results in uu whereas not ii results in ii. These are the
only two exceptions with respect to Table 1. In other words, there are two different forms of
negation.

In contrast with the negation in Table 1, let us call the not operator in Definition 2 the "abstract
negation", as I had also called it in the three-valued Prolog. Here, the negation is a four-valued
extension.
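The two forms of negation can be summarised as two small tables. The abstract negation follows the text exactly; the entries for Table 1's own ¬ on uu and ii are an inference from the remark that the abstract negation's behaviour on those two values is an exception with respect to Table 1:

```python
FF, TT, UU, II = "ff", "tt", "uu", "ii"

# Abstract negation, used during unification: it swaps the two
# Boolean values and propagates uu and ii unchanged.
ABSTRACT_NOT = {TT: FF, FF: TT, UU: UU, II: II}

# The "not" operator of Table 1, as the text implies
# (an inference: there, not uu = ii and not ii = uu).
TABLE1_NOT = {TT: FF, FF: TT, UU: II, II: UU}

# The two negations agree on the Boolean values...
assert all(ABSTRACT_NOT[v] == TABLE1_NOT[v] for v in (TT, FF))
# ...and differ exactly on uu and ii, the two stated exceptions.
assert ABSTRACT_NOT[UU] == UU and ABSTRACT_NOT[II] == II
assert TABLE1_NOT[UU] == II and TABLE1_NOT[II] == UU
```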
Finally, the ↔ operator in the above four-valued logic is simply not used by the system.
3.2 Examples
Consider the following example of a two-clause program in Prolog4v:
happy(ann).
not happy(ann).
Over the last thirty years, some proposals have been made for solving the inconsistency
problem [9], such as setting priorities, possibly in some implicit way, for all clauses. The
literature on inconsistency in deductive databases and logic programs is large [10], but I think
that there are few references on abstract negation.
In the above example, a query like happy(ann) clearly results in ii. Accordingly, a query like not
happy(ann) also results in ii. In both cases, the system tries to prove both forms and, in accordance with
Algorithm 1, implicitly makes two binary queries, for both the positive and the negative forms
of the predicate.
Now, consider the classical non-flying bird example:
fly(X) := bird(X), not penguin(X).
not fly(Y) := penguin(Y).
bird(tweety).
penguin(Z) :- bird(Z), polar(Z).
To answer the query fly(tweety), the system unifies the goal with the head of the first rule,
binding the variable X to the constant tweety. Then the system finds the subgoal bird(tweety),
which in turn unifies with the third clause, and that subquery results in tt. Then, in the body of
the first rule, not penguin(tweety) is the next subgoal to be explored. Note that, because of the
inference operator chosen, penguin is the head of a closed-world rule. The subgoal then unifies
with the fourth clause, binding Z to tweety. As the subgoal bird(tweety) has already been proven,
the next subgoal is polar(tweety). For this subgoal, the system does not unify any clause and,
because of this, the subquery results in uu. The body of the fourth clause results in uu, since T ∧
U results in U in Table 1. The subquery penguin(tweety) results in ff because of the closed-
world assumption made by using the “:-” operator. If one replaces “:-” by “:=” in the fourth
clause, the subquery penguin(tweety) results in uu instead.
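The two facts this derivation relies on can be sketched in Python. This is a hedged illustration under stated assumptions, not a reproduction of Table 1: it encodes only the conjunction entry used above (tt ∧ uu = uu) and the different treatment an unproven body receives under a closed-world “:-” head versus an open-world “:=” head.

```python
# Hedged sketch of the tweety derivation's last steps. CONJ encodes
# only the Table 1 entries the text uses; closed_world/open_world are
# illustrative names for the ":-" and ":=" head operators.

CONJ = {
    ("tt", "tt"): "tt",
    ("tt", "uu"): "uu",   # used for bird(tweety) ∧ polar(tweety)
    ("uu", "tt"): "uu",
    ("uu", "uu"): "uu",
}

def closed_world(body_value: str) -> str:
    # A ":-" rule: a body not proven true makes the head ff.
    return "tt" if body_value == "tt" else "ff"

def open_world(body_value: str) -> str:
    # A ":=" rule: an unproven body leaves the head unknown.
    return body_value if body_value == "tt" else "uu"

body = CONJ[("tt", "uu")]   # bird(tweety) = tt, polar(tweety) = uu
print(closed_world(body))   # ff -> penguin(tweety) under ":-"
print(open_world(body))     # uu -> penguin(tweety) under ":="
```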
Now, suppose that a new clause
polar(tweety).
is asserted and that the system places it at the end of the sequence of clauses. For the same query
International Journal of Computer Science & Information Technology (IJCSIT) Vol 9, No 2, April 2017
fly(tweety), the system now answers ff. That is, it learns. Compare this with a similar binary
Prolog program:
fly(X) :- bird(X), not penguin(X).
bird(tweety).
penguin(Z) :- bird(Z), polar(Z).
The same query fly(tweety) would have resulted in true, because the third clause alone ensures
that only a polar bird is a penguin. That is, until the knowledge base is complete, the system
sometimes gives wrong answers with respect to the real world. For instance, a query such as
fly(airplane) results in uu in the Prolog4v program above, whereas the same query results in
false in the three-clause Prolog program above. It follows that binary formal systems are not
appropriate for writing mathematical truths.
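The contrast for fly(airplane) can be sketched as follows. This deliberately elides the penguin rule and is only an illustration of the two negation regimes: negation as failure in binary Prolog answers false for anything unprovable, while the four-valued system keeps an unprovable goal at uu.

```python
# Hedged sketch: negation as failure vs. the four-valued answer for a
# goal about an individual the knowledge base knows nothing about.
# KNOWN_BIRDS stands in for what the program can actually prove.

KNOWN_BIRDS = {"tweety"}

def binary_fly(x: str) -> bool:
    # Negation as failure: anything unprovable is reported false.
    return x in KNOWN_BIRDS

def four_valued_fly(x: str) -> str:
    if x in KNOWN_BIRDS:
        return "tt"
    return "uu"   # airplane is unknown, not provably flightless

print(binary_fly("airplane"))        # False
print(four_valued_fly("airplane"))   # uu
```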
As another example, suppose that I know that Berne is the capital of Switzerland and that each
country has only one capital. In Prolog4v, one would write
capital(berne,switzerland).
not capital(X,Y) := capital(Z,Y), X <> Z.
where <> stands for the different-from (≠) operator. A non-ground query, i.e. a query with some
unbound variable, for instance capital(bern,X) (note the different spellings), would result
in X = uu, whereas a ground query such as capital(zurich,switzerland) would definitely result in
ff, as follows: the corresponding goal would not unify with the first clause but would unify with
the second one, because the presence of the abstract negation not in its head does not cause
failure during the computation of the unification algorithm. In this case, X is bound to zurich and
Y is bound to switzerland. Then the system tries to prove the subgoal capital(Z,switzerland) and
unifies it with the first clause, i.e. the fact capital(berne,switzerland), binding Z to berne. Now
the system evaluates the expression X <> Z, which in turn results in tt, as zurich is not berne.
Since all premises of the rule are true, the body results in tt and the system concludes that the
head not capital(zurich,switzerland) is tt and, hence, that capital(zurich,switzerland) is ff, and
that is the response to the query at the user’s level.
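The derivation above can be sketched in Python. This is a minimal illustration, not the system's resolution procedure: the fact base is a set, and the negative rule not capital(X,Y) := capital(Z,Y), X <> Z is hand-translated into a loop.

```python
# Sketch of the capital example: one fact plus the negative rule
# "not capital(X, Y) := capital(Z, Y), X <> Z", hand-compiled.
# (Illustrative only; the real system derives this by unification.)

FACTS = {("berne", "switzerland")}   # capital(berne, switzerland).

def query_capital(city: str, country: str) -> str:
    if (city, country) in FACTS:
        return "tt"                  # the fact itself is provable
    # Negative rule: another capital Z of the same country proves
    # the abstract negation of the goal, so the goal is ff.
    for (z, y) in FACTS:
        if y == country and z != city:
            return "ff"
    return "uu"                      # nothing provable either way

print(query_capital("zurich", "switzerland"))   # ff
print(query_capital("berne", "switzerland"))    # tt
print(query_capital("berne", "france"))         # uu
```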
4. CONCLUSIONS
There exists a four-valued formal system that is able to state all arithmetical truths. I think that
its description is comprehensible even for undergraduate students. Philosophy students can also
understand the present article.
The computation by the referred formal system takes roughly double the time of a typical
binary formal system: some time for trying to prove that a goal is true and some additional time
for trying to prove that the same goal is false. Since doubling does not change the asymptotic
class, whenever the corresponding binary computation is polynomial, the four-valued
computation is polynomial as well.
ACKNOWLEDGEMENTS
Many thanks to my Master's course supervisors, namely, Pedro Sérgio Nicolletti and Hélio
Menezes Silva.
REFERENCES
[1] Gödel, Kurt, (1931) Über formal unentscheidbare Sätze der Principia Mathematica und
verwandter Systeme I. Monatshefte für Mathematik und Physik, Vol. 38, pp 173-198.
[2] Ferreira, Ulisses (2000) uu for programming languages. ACM SIGPLAN Notices, 35(8): pp20-30.
[3] Ferreira, Ulisses (2004) A five-valued logic and a system. Journal of Computer Science and
Technology, 4(3): pp134-140, October.
[4] Ferreira, Ulisses (2004) Uncertainty and a 7-valued logic. In Pradip Peter Dey, Mohammad N. Amin,
and Thomas M. Gatton, editors, Proceedings of The 2nd International Conference on Computer
Science and its Applications, pp 170-173, National University, San Diego, CA, USA, June.
[5] Belnap Jr, Nuel, (1975) A useful four-valued logic. In J. Michael Dunn and George Epstein, editors,
Proceedings of the Fifth International Symposium on Multiple-Valued Logic, Modern Uses of
Multiple-Valued Logic, pp 8-37. Indiana University, D. Reidel Publishing Company.
[6] Ferreira, Ulisses (2004) A prolog-like language for the internet. In Veljko Milutinovic, editor,
Proceedings of IPSI CAITA-04, Purdue, Indiana, USA.
[7] Reiter, R. (1978) On Closed World Data Bases in Logic and Data Bases, Plenum Press, New York, pp
55-76.
[8] Clark, Keith (1978) Negation as Failure in Logic and Data Bases, Plenum Press, New York, pp 293-
322.
[9] Dung, P. M. & Mancarella P. (1996) Production Systems need Negation as Failure, Proceedings of
the XIII National Conference on Artificial Intelligence, vol 2, AAAI Press and the MIT Press, pp
1242-1247.
[10] Seipel, Dietmar (1998) Partial Evidential Stable Models for Disjunctive Deductive Databases, in
Logic Programming and Knowledge Representation, Third International Workshop / LPKR’97,
Lecture Notes in Artificial Intelligence, vol. 1471, Springer, New York, October, pp 66-83
AUTHOR
The present author studied as a Master student at the Universidade Federal da Paraíba in
Campina Grande, Brazil, and as a postgraduate student in the Department of Computer Science
at the University of Edinburgh, and did some further research work at Trinity College in Dublin
from 1998 until 2001.