This paper presents an algorithmic framework and a system for the discovery of non sequitur fallacies in legal argumentation. The model operates on formalised legal text implemented in Prolog. The algorithm assesses different parts of the formalised text involved in legal decision-making, such as the plaintiff's claim, the piece of law applied to the case, and the judge's decision, in order to detect fallacies in an argument. We provide a mechanism designed to assess the coherence of every premise of a claim, its logical structure, and its legal consistency with the corresponding piece of law at each stage of the argumentation. The modelled system checks the validity and soundness of a claim, as well as the sufficiency and necessity of the premises of arguments. We argue that addressing the challenges of validity, soundness, sufficiency, and necessity resolves fallacies in argumentation.
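As a rough illustration of the kind of check the abstract describes (the paper's own implementation is in Prolog; the rule and fact names below are invented for the example), a non sequitur can be flagged when a claim's conclusion is not derivable from its premises together with the applicable rule of law:

```python
# Hypothetical sketch: flag a non sequitur by checking whether the conclusion
# of a claim follows from its premises plus the applied rule of law.
# Rules are (premises, conclusion) pairs over simple atomic propositions.

def forward_chain(facts, rules):
    """Derive every conclusion reachable from `facts` using `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def is_non_sequitur(premises, law_rules, conclusion):
    """True if the conclusion does NOT follow from premises + law."""
    return conclusion not in forward_chain(premises, law_rules)

# Invented example rule of law: breach and damage together imply liability.
law = [({"breach_of_contract", "damage_suffered"}, "defendant_liable")]

# Valid argument: both premises are present, so the conclusion follows.
print(is_non_sequitur({"breach_of_contract", "damage_suffered"}, law,
                      "defendant_liable"))   # False

# Non sequitur: damage alone does not establish liability under this rule.
print(is_non_sequitur({"damage_suffered"}, law, "defendant_liable"))  # True
```

In the Prolog setting the same check corresponds to asking whether the conclusion is provable from the claim's premises conjoined with the formalised statute.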
This document provides an overview of information extraction. It defines information extraction as the process of automatically extracting structured information from unstructured documents. It describes the five main types of information extraction as named entity recognition, co-reference resolution, template element construction, template relation construction, and scenario template production. Finally, it discusses some applications of information extraction and distinguishes it from information retrieval.
MalayIK: An Ontological Approach to Knowledge Transformation in Malay Unstruc... (IJECEIAES)
The number of unstructured documents written in the Malay language available on the web and intranets is enormous. However, unstructured documents cannot be queried in simple ways; hence the knowledge contained in such documents can neither be used by automatic systems nor be understood easily and clearly by humans. This paper proposes a new approach to transform knowledge extracted from Malay unstructured documents using an ontology, by identifying, organizing, and structuring the documents into an interrogative structured form. A Malay knowledge base, the MalayIK corpus, is developed and used to test the MalayIK-Ontology against Ontos, an existing data extraction engine. The experimental results from MalayIK-Ontology show a significant improvement in knowledge extraction over the Ontos implementation. This shows that clear knowledge organization and structuring increases understanding, which can in turn make concepts more sharable and reusable within the community.
The Process of Information extraction through Natural Language Processing (Waqas Tariq)
Information Retrieval (IR) is the discipline that deals with retrieval of unstructured data, especially textual documents, in response to a query or topic statement, which may itself be unstructured, e.g., a sentence or even another document, or which may be structured, e.g., a boolean expression. The need for effective methods of automated IR has grown in importance because of the tremendous explosion in the amount of unstructured data, both in internal corporate document collections and in the immense and growing number of document sources on the Internet. The topics covered include: formulation of structured and unstructured queries and topic statements, indexing (including term weighting) of document collections, methods for computing the similarity of queries and documents, classification and routing of documents in an incoming stream to users on the basis of topic or need statements, clustering of document collections on the basis of language or topic, and statistical, probabilistic, and semantic methods of analyzing and retrieving documents. Information extraction from text has therefore been pursued actively as an attempt to present knowledge from published material in a computer-readable format. An automated extraction tool would not only save time and effort, but also pave the way to discovering hitherto unknown information implicitly conveyed in the text. Work in this area has focused on extracting a wide range of information such as chromosomal location of genes, protein functional information, associating genes by functional relevance, and relationships between entities of interest. While clinical records provide a semi-structured, technically rich data source for mining information, publications, in their unstructured format, pose a greater challenge, addressed by many approaches.
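The query-document similarity computation mentioned above is commonly implemented with term weighting and a cosine measure; a minimal term-frequency sketch (the query and documents are invented for the example):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts using raw term-frequency vectors."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    shared = set(ta) & set(tb)
    dot = sum(ta[t] * tb[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in ta.values()))
    norm_b = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

query = "gene protein function"
docs = ["protein functional information and gene location",
        "court cases and legal reasoning"]

# Rank documents by similarity to the query; the on-topic document wins.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, d), reverse=True)
print(ranked[0])
```

Production systems typically replace raw term frequency with TF-IDF weights, but the ranking mechanics are the same.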
This document discusses the state-of-the-art of Internet of Things (IoT) ontologies. It begins by defining ontology and describing important design criteria for ontologies including clarity, coherence, extendibility, and minimal encoding bias. It then discusses the challenges of IoT, including large scale networks, deep heterogeneity, and unknown topology. Several existing IoT ontologies are described, including SWAMO, MMI Device Ontology, and SSN. The document concludes that while no single global IoT ontology currently exists, ontologies are needed to address the semantic interoperability challenges of heterogeneous IoT devices and domains.
IRJET - Mobile Chatbot for Information Search (IRJET Journal)
This document summarizes a research paper on developing a mobile chatbot using IBM Watson services to allow students to search for their exam scores. The chatbot uses Watson Assistant for natural language processing, a SQL database as a knowledge base to store score information, and text-to-speech and speech-to-text for input and output. It was built with Android Studio and Java to provide an intuitive mobile interface for users to interact with the chatbot.
Concept integration using edit distance and n-gram match (IJDMS)
Information on the World Wide Web (WWW) is growing so rapidly that it has become necessary to make it available not only to people but also to machines. Ontologies and tokens are widely used to add semantics to data and information processing. Formally, a concept refers to the meaning of a specification encoded in a logic-based language; explicit means that the concepts and properties of the specification are machine readable; and a conceptualization models how people think about things in a particular subject area. In recent years, many ontologies have been developed on a variety of topics, resulting in increased heterogeneity of entities among ontologies. Concept integration has become vital over the last decade as a tool to minimize heterogeneity and empower data processing. There are various techniques for integrating concepts from different input sources based on semantic or syntactic match values. In this paper, an approach is proposed to integrate concepts (ontologies or tokens) using edit distance or n-gram match values between pairs of concepts, with concept frequency used to drive the integration process. The performance of the proposed techniques is compared with semantic-similarity-based integration techniques on quality parameters such as recall, precision, F-measure, and integration efficiency over concept sets of different sizes. The analysis indicates that edit-distance-based integration outperforms both n-gram integration and semantic similarity techniques.
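The two syntactic match values compared in the abstract can be sketched as follows: a standard Levenshtein edit distance and a character n-gram overlap (the example concept names are invented; the paper's own thresholds and frequency weighting are not reproduced here):

```python
def edit_distance(a, b):
    """Levenshtein distance via two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def ngram_match(a, b, n=2):
    """Jaccard overlap of character n-grams (1.0 means identical gram sets)."""
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

print(edit_distance("colour", "color"))               # 1
print(round(ngram_match("ontology", "ontologies"), 2))  # 0.6
```

Either score can then be thresholded to decide whether two concept labels from different ontologies should be merged.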
Deep learning and neural network converted (Janu Jahnavi)
Deep learning and neural networks can be used to solve complex tasks like image recognition, speech recognition, and predicting stock prices. Deep learning uses artificial neural networks that contain more than one hidden layer to analyze data in a way similar to how humans draw conclusions. Neural networks help computers recognize patterns through networks of artificial neurons that interpret sensory data and label or cluster inputs. Deep learning has many applications including self-driving cars, healthcare, voice assistants, adding sounds to silent movies, and generating text and images.
A prior case study of natural language processing on different domain (IJECEIAES)
This document summarizes a prior case study on natural language processing across different domains. It begins with an introduction to natural language processing, describing how it is a branch of artificial intelligence that allows computers to understand human language. It then reviews several existing studies that applied natural language processing techniques such as named entity recognition and text mining to tasks like identifying technical knowledge in resumes, enhancing reading skills for deaf students, and predicting student performance. The document concludes by highlighting some of the challenges in developing new natural language processing models.
https://www.learntek.org/blog/natural-language-processing-applications/
Learntek is a global online training provider offering courses on Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IoT, AI, Cloud Technology, DevOps, Digital Marketing, and other IT and management subjects.
Knowledge Based Reasoning: Agents, Facets of Knowledge. Logic and Inferences: Formal Logic, Propositional and First Order Logic, Resolution in Propositional and First Order Logic, Deductive Retrieval, Backward Chaining, Second Order Logic. Knowledge Representation: Conceptual Dependency, Frames, Semantic Nets.
1. Knowledge-based reasoning uses logic and facts stored in a knowledge base to draw conclusions. The knowledge base represents information about the world that an AI system can use to solve problems.
2. A knowledge-based agent is an AI system that maintains an internal state of knowledge in its knowledge base, reasons over that knowledge using inference rules, updates its knowledge base after observing new information, and takes actions.
3. A knowledge-based agent's architecture includes a knowledge base to store facts, an inference system to apply logical rules to deduce new information and update the knowledge base, and the ability to perceive information, reason over its knowledge, and perform actions.
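The architecture in points 1–3 can be sketched as a minimal knowledge-based agent with TELL/ASK operations (the facts, rules, and percepts below are invented placeholders):

```python
class KnowledgeBasedAgent:
    """Minimal TELL/ASK agent: store facts, derive new ones, answer queries."""

    def __init__(self, rules):
        self.kb = set()          # knowledge base of atomic facts
        self.rules = rules       # inference rules: (premises, conclusion) pairs

    def tell(self, percept):
        """Update the KB with a new observation, then run inference."""
        self.kb.add(percept)
        changed = True
        while changed:           # forward chaining until a fixpoint is reached
            changed = False
            for premises, conclusion in self.rules:
                if conclusion not in self.kb and premises <= self.kb:
                    self.kb.add(conclusion)
                    changed = True

    def ask(self, query):
        """Answer whether the query is entailed by the current KB."""
        return query in self.kb

# Invented rules: raining makes the ground wet; wet ground suggests an umbrella.
agent = KnowledgeBasedAgent([({"raining"}, "ground_wet"),
                             ({"ground_wet"}, "take_umbrella")])
agent.tell("raining")            # perceive new information
print(agent.ask("take_umbrella"))  # True: derived by chaining both rules
```

A real agent would add an action-selection step on top of `ask`, but the perceive/infer/query cycle is the core of the architecture described above.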
Conversational AI agents have become mainstream today due, first, to significant advancements in the methods required to build accurate models, such as machine learning and deep learning, and, second, because they are seen as a natural fit in a wide range of domains, such as healthcare, e-commerce, customer service, tourism, and education, that rely heavily on natural language conversations in day-to-day operations. This rapid increase in demand has been matched by an equally rapid rate of research and development, with new products being introduced on a daily basis.
Ontology is a formal explicit specification of a conceptualization that provides a shared understanding of a domain. An ontology for software engineering can help facilitate communication between distributed development teams by providing a common vocabulary and conceptualization of key software engineering concepts and their relationships. Such an ontology can be modeled using notations like UML class diagrams and activity diagrams to represent important software engineering concepts like classes, activities, and relationships. The software engineering ontology then provides an improved knowledge-sharing and communication framework among distributed development teams.
Computer Aided Development of Fuzzy, Neural and Neuro-Fuzzy Systems (IJEACS)
Development of an expert system is difficult because of two challenges. First, an expert system is a high-level system that deals with knowledge, which makes it difficult to handle. Second, expert system development is more art than science; hence there are few guidelines available about the development process. This paper describes computer-aided development of intelligent systems using modern artificial intelligence technology. It illustrates the design of a reusable generic framework to support friendly development of fuzzy, neural network, and hybrid systems such as neuro-fuzzy systems. Reusable component libraries for fuzzy-logic-based systems, neural-network-based systems, and hybrid systems such as neuro-fuzzy systems are developed and accommodated in this framework. The paper demonstrates code snippets, interface screens, and a class-library overview with the necessary technical details.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L... (IJwest)
An ontology is a conceptualization of a domain in a human-understandable, machine-readable format consisting of entities, attributes, relationships, and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract the entire family tree of a prominent personality from a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is useful for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction. The existing ontology learning process only produces the concept hierarchy; it does not produce the semantic relations between concepts. Here, we construct predicates and first-order logic formulae, and find inference and learning weights using a Markov Logic Network. To improve the relations for every input, and the relations between contents, we propose the ARSRE approach. This method finds frequent items between concepts and converts existing lightweight ontologies into formal ones. The experimental results produce good extraction of semantic relations compared to the state-of-the-art method.
This document provides an introduction to artificial intelligence (AI). It discusses the history and foundations of AI, including early philosophers who discussed the possibility of machine intelligence. It also defines key AI concepts like intelligence, rational thinking, and acting like humans. The document outlines different types of AI systems and why AI is powerful due to combining knowledge from many disciplines. It concludes with an overview of the history of AI from its beginnings in the 1940s to the growth of expert systems.
Text pre-processing of multilingual for sentiment analysis based on social ne... (IJECEIAES)
Sentiment analysis (SA) is an enduring area of research, especially in the field of text analysis. Text pre-processing is an important aspect of performing SA accurately. This paper presents a text-processing model for SA using natural language processing techniques on Twitter data. The basic phases for machine learning are text collection, text cleaning, pre-processing, and feature extraction, followed by categorizing the data according to SA techniques. Keeping the focus on Twitter data, the data is extracted in a domain-specific manner. In the data cleaning phase, noisy data, missing data, punctuation, tags, and emoticons are considered. For pre-processing, tokenization is performed, followed by stop word removal (SWR). The article provides an insight into the techniques used for text pre-processing and the impact of their presence on the dataset. The accuracy of classification techniques improves after applying text pre-processing, and dimensionality is reduced. The proposed corpus can be utilized in market analysis, customer behaviour, polling analysis, and brand monitoring. The text pre-processing pipeline can serve as a baseline for predictive analysis, machine learning, and deep learning algorithms, and can be extended according to the problem definition.
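The cleaning, tokenization, and stop word removal phases described above can be sketched as follows (the stop word list and tweet are invented placeholders; real pipelines use fuller lists, e.g. from NLTK):

```python
import re

STOP_WORDS = {"the", "is", "a", "an", "and", "to", "of"}  # tiny placeholder list

def preprocess(tweet):
    """Clean a tweet, tokenize it, and remove stop words."""
    text = tweet.lower()
    text = re.sub(r"https?://\S+", " ", text)      # strip URLs
    text = re.sub(r"[@#]\w+", " ", text)           # strip mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)          # strip punctuation/emoticons
    tokens = text.split()                          # tokenization
    return [t for t in tokens if t not in STOP_WORDS]  # stop word removal

print(preprocess("The battery life of this phone is GREAT!!! #phone @shop"))
# ['battery', 'life', 'this', 'phone', 'great']
```

The surviving tokens are what a downstream classifier would see as features, which is why this step reduces dimensionality as the abstract notes.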
ARTIFICIAL INTELLIGENT ( ITS / TASK 6 ) done by Wael Saad Hameedi / P71062 (Wael Alawsey)
This document provides an overview of artificial intelligence and several AI techniques. It discusses neural networks, genetic algorithms, expert systems, fuzzy logic, and the suitability of AI for solving transportation problems. Neural networks can be used to perform tasks like optical character recognition by analyzing images. Genetic algorithms use principles of natural selection to arrive at optimal solutions. Expert systems mimic human experts to provide advice. Fuzzy logic allows for gradual membership in sets rather than binary membership. Complexity and uncertainty make transportation well-suited for AI approaches.
Ajit Jaokar, Data Science for IoT professor at Oxford University "Enterprise ... (Dataconomy Media)
“Enterprise AI - Artificial Intelligence for the Enterprise."
AI is impacting many areas today. This talk discusses how AI will impact the Enterprise and what it means in the near future. The talk is based on my course I teach at the University of Oxford.
Defining privacy and related notions such as Personally Identifiable Information (PII) is a central concern in computer science and other fields. The theoretical, technological, and application aspects of PII require a framework that provides an overview and systematic structure for the discipline's topics. This paper develops a foundation for representing information privacy. It introduces a coherent conceptualization of the privacy senses built upon diagrammatic representation. A new framework is presented based on a flow-based model that includes generic operations performed on PII.
The document summarizes a presentation on artificial general intelligence (AGI) given at the IntelliFest 2012 conference. It discusses the limitations of narrow AI and the constructivist approach needed for AGI. This involves self-constructing systems that can learn new tasks and adapt. The presentation highlights the HUMANOBS project, which uses a new architecture and programming language called Replicode to develop humanoid robots that can learn social skills through observation. Attention and temporal grounding are also identified as important issues for developing practical AGI systems.
This document describes a chatbot project that was developed to answer queries from UT-Dallas students. The chatbot uses natural language processing and a domain-specific knowledge base about UT-Dallas and its library to analyze user queries and generate relevant responses. It was implemented as an Android application using speech recognition and contains modules for tokenization, sentiment analysis, and pattern matching to understand and respond to queries. The document outlines the architecture, knowledge representation, matching algorithm, and provides examples of conversations with the chatbot.
USING MACHINE LEARNING TO BUILD A SEMI-INTELLIGENT BOT (ECIJ)
Nowadays, real-time systems and intelligent systems increasingly offer control interfaces based on voice recognition or human language recognition. Robots and drones will soon be controlled mainly by voice. Other robots will integrate bots to interact with their users, which can be useful both in industry and in entertainment. At first, researchers were digging on the side of "ontology reasoning". Given all the technical constraints involved in processing ontologies, an interesting solution has emerged in recent years: the construction of a machine learning model that connects human language to a knowledge base (based, for example, on RDF). In this paper we present our contribution to building a bot that could be used on real-time systems and drones/robots, using recent machine learning technologies.
Seminar Proposal Presentation on "Web Based System for of Citation Court Case... (Omisore Olatunji)
This document is a literature review for a proposed M.Tech project on developing an expert system for monitoring and citing court cases using fuzzy logic. It summarizes 7 relevant research papers on using technology in the legal system. The papers covered topics like using the internet for legal jurisdictions, using courtroom technology to improve efficiency, using neural networks and fuzzy logic for legal reasoning, developing online dispute resolution systems, and previous systems developed for monitoring and citing court cases in Nigeria. The literature review provides context and motivation for the proposed project.
Review and Approaches to Develop Legal Assistance for Lawyers and Legal Profe... (IRJET Journal)
This document discusses using artificial intelligence and machine learning techniques to develop a legal assistant to help lawyers and legal professionals. It notes that the Indian legal system has a large backlog of pending cases. The proposed legal assistant would use techniques like text analysis, sentiment analysis, neural networks, decision trees and clustering to analyze legal documents and help lawyers prepare cases more efficiently. It summarizes several existing studies that have explored models using convolutional neural networks, k-nearest neighbors algorithms and decision trees for tasks like classifying cases, predicting verdicts and clustering similar judgments. The document concludes that while research is ongoing, AI and ML models have shown promise in assisting with case preparation and judicial decision-making.
Indiopedia: a System to Acquire Legal Knowledge from the Crowd Using Games Te... (Instituto i3G)
This document describes a system called Indiopedia that uses crowdsourcing games to acquire legal knowledge from non-experts to expand lightweight ontologies in specific domains. Indiopedia builds on an existing legal knowledge system called Ontojuris that uses ontologies to improve legal search, but ontology building is time-consuming. Indiopedia addresses this by developing crowdsourcing games that turn the ontology expansion process into fun games for non-experts to play and contribute knowledge. The games aim to efficiently acquire semantic structures from many users to develop the legal ontologies in a scalable way. Initial experiments involved users playing games to help expand ontologies related to indigenous legislation.
Temporal Information Processing is a subfield of Natural Language Processing that is valuable in many tasks such as Question Answering and Summarization. The field is broad, ranging from classical theories of time and language to current computational approaches to Temporal Information Extraction. This latter trend consists of the automatic extraction of events and temporal expressions. Such issues have attracted great attention, especially with the development of annotated corpora and annotation schemes, mainly TimeBank and TimeML. In this paper, we give a survey of Temporal Information Extraction from natural language texts.
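A very small illustration of the rule-based temporal expression extraction the survey covers (the patterns are a toy subset invented here; real annotators built around TimeML use far richer grammars and also normalize the expressions):

```python
import re

# Toy patterns for a few temporal expression types.
TIMEX_PATTERNS = [
    r"\b\d{1,2}\s+(?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+\d{4}\b",   # e.g. 5 March 1999
    r"\b(?:yesterday|today|tomorrow)\b",                        # deictic expressions
    r"\b(?:next|last)\s+(?:week|month|year)\b",                 # relative expressions
]

def extract_timex(text):
    """Return all substrings matching any toy temporal pattern."""
    hits = []
    for pattern in TIMEX_PATTERNS:
        hits += re.findall(pattern, text, flags=re.IGNORECASE)
    return hits

print(extract_timex("The summit held on 5 March 1999 resumes next week."))
# ['5 March 1999', 'next week']
```

Event extraction, the other half of the task, typically requires syntactic and semantic analysis rather than surface patterns alone.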
CITIZENSHIP ACT (591) OF GHANA AS A LOGIC THEOREM AND ITS SEMANTIC IMPLICATIONS (IJNLC)
This document discusses formalizing the Citizenship Act of Ghana as a logic theorem. It begins with an abstract that introduces formalizing excerpts of the Act in First Order Logic to allow for semantic analysis by machines. The results reveal semantic consequences of the natural construction of the legal text and address ambiguities. The paper then provides context on challenges with legal language complexity and a literature review on past research formalizing legal texts with logic. It proceeds to summarize sections of the Citizenship Act and discusses modeling it with a logic theorem to establish clarity and reveal any semantic errors or ambiguities in the natural language text.
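To make the formalization idea concrete, one clause of a citizenship law can be written in first-order logic and checked mechanically; the clause and the individuals below are invented for illustration and are not quoted from Act 591:

```python
# Invented illustrative clause (NOT a quotation from the Act):
#   forall x. bornInGhana(x) and exists p. (parent(p, x) and citizen(p))
#             -> citizen(x)

def is_citizen(person, born_in_ghana, parents, citizens):
    """Evaluate the illustrative clause for one person over finite facts."""
    if person in citizens:            # base case: already an established citizen
        return True
    return person in born_in_ghana and any(
        p in citizens for p in parents.get(person, []))

# Invented individuals and facts.
born_in_ghana = {"ama"}
parents = {"ama": ["kofi", "esi"]}
citizens = {"kofi"}                   # Kofi is a citizen by assumption

print(is_citizen("ama", born_in_ghana, parents, citizens))   # True
print(is_citizen("yaw", born_in_ghana, parents, citizens))   # False
```

Once a clause is in this form, ambiguities in the natural language text show up as choices the encoder is forced to make, which is exactly the kind of semantic consequence the paper examines.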
A prior case study of natural language processing on different domain IJECEIAES
This document summarizes a prior case study on natural language processing across different domains. It begins with an introduction to natural language processing, describing how it is a branch of artificial intelligence that allows computers to understand human language. It then reviews several existing studies that applied natural language processing techniques such as named entity recognition and text mining to tasks like identifying technical knowledge in resumes, enhancing reading skills for deaf students, and predicting student performance. The document concludes by highlighting some of the challenges in developing new natural language processing models.
https://www.learntek.org/blog/natural-language-processing-applications/
Learntek is global online training provider on Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IOT, AI, Cloud Technology, DEVOPS, Digital Marketing and other IT and Management courses.
Knowledge Based Reasoning: Agents, Facets of Knowledge. Logic and Inferences: Formal Logic,
Propositional and First Order Logic, Resolution in Propositional and First Order Logic, Deductive
Retrieval, Backward Chaining, Second order Logic. Knowledge Representation: Conceptual
Dependency, Frames, Semantic nets.
1. Knowledge based reasoning uses logic and facts stored in a knowledge base to draw conclusions. The knowledge base represents information about the world that an AI system can use to solve problems.
2. A knowledge based agent is an AI system that maintains an internal state of knowledge in its knowledge base, reasons over that knowledge using inference rules, updates its knowledge base after observing new information, and takes actions.
3. A knowledge based agent's architecture includes a knowledge base to store facts, an inference system to apply logical rules to deduce new information and update the knowledge base, and the ability to perceive information, reason over its knowledge, and perform actions.
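The agent loop summarized in the three points above can be sketched as a toy knowledge-based agent. This is an illustrative sketch only: the class name, the rule format, and the facts below are hypothetical, not drawn from any particular system.

```python
# Toy knowledge-based agent: a fact store plus forward-chaining inference.
# All names and facts here are illustrative, not from any real system.

class KnowledgeBasedAgent:
    def __init__(self, rules):
        self.kb = set()          # knowledge base: facts about the world
        self.rules = rules       # inference rules: (set of premises, conclusion)

    def tell(self, fact):
        """Update the knowledge base after observing new information."""
        self.kb.add(fact)
        self._infer()

    def _infer(self):
        """Forward chaining: apply rules until no new fact can be deduced."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.kb and conclusion not in self.kb:
                    self.kb.add(conclusion)
                    changed = True

    def ask(self, query):
        """Reason over stored knowledge to answer a query."""
        return query in self.kb

rules = [({"rainy"}, "wet_ground"), ({"wet_ground"}, "slippery")]
agent = KnowledgeBasedAgent(rules)
agent.tell("rainy")                # perceive new information
print(agent.ask("slippery"))       # True: deduced via two chained rules
```

The `tell`/`ask` split mirrors the architecture described above: perception updates the knowledge base, and the inference system deduces new facts before any query is answered.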
Conversational AI agents have become mainstream today, first because of significant advancements in the methods required to build accurate models, such as machine learning and deep learning, and second because they are seen as a natural fit in a wide range of domains, such as healthcare, e-commerce, customer service, tourism, and education, that rely heavily on natural language conversations in day-to-day operations. This rapid increase in demand has been matched by an equally rapid rate of research and development, with new products being introduced on a daily basis.
Ontology is a formal explicit specification of a conceptualization that provides a shared understanding of a domain. An ontology for software engineering can help facilitate communication between distributed development teams by providing a common vocabulary and conceptualization of key software engineering concepts and their relationships. Such an ontology can be modeled using notations like UML class diagrams and activity diagrams to represent important software engineering concepts like classes, activities, and relationships. The software engineering ontology then provides an improved knowledge-sharing and communication framework for distributed development teams.
Computer Aided Development of Fuzzy, Neural and Neuro-Fuzzy SystemsIJEACS
Development of an expert system is difficult because of two challenges involved in it. The first is that an expert system is a high-level system that deals with knowledge, which makes it difficult to handle. Second, systems development is more art and less science; hence there are few guidelines available about the development. This paper describes computer-aided development of intelligent systems using modern artificial intelligence technology. The paper illustrates the design of a reusable generic framework to support friendly development of fuzzy, neural network and hybrid systems such as neuro-fuzzy systems. The reusable component libraries for fuzzy logic based systems, neural network based systems and hybrid systems such as neuro-fuzzy systems are developed and accommodated in this framework. The paper demonstrates code snippets, interface screens and a class libraries overview with necessary technical details.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L...IJwest
An ontology is a conceptualization of a domain in a human-understandable but machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. With semantic relations it would be possible to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. The relations can be identified from the result of concept hierarchy extraction. The existing ontology learning process only produces the result of concept hierarchy extraction; it does not produce the semantic relations between the concepts. Here, we construct the predicates and the first order logic formulas, and find the inference and learning weights using a Markov Logic Network. To improve the relation of every input and also improve the relations between the contents, we propose the concept of ARSRE. This method can find the frequent items between concepts, converting existing lightweight ontologies into formal ones. The experimental results produce good extraction of semantic relations compared to state-of-the-art methods.
This document provides an introduction to artificial intelligence (AI). It discusses the history and foundations of AI, including early philosophers who discussed the possibility of machine intelligence. It also defines key AI concepts like intelligence, rational thinking, and acting like humans. The document outlines different types of AI systems and why AI is powerful due to combining knowledge from many disciplines. It concludes with an overview of the history of AI from its beginnings in the 1940s to the growth of expert systems.
Text pre-processing of multilingual for sentiment analysis based on social ne...IJECEIAES
Sentiment analysis (SA) is an enduring area for research especially in the field of text analysis. Text pre-processing is an important aspect to perform SA accurately. This paper presents a text processing model for SA, using natural language processing techniques for twitter data. The basic phases for machine learning are text collection, text cleaning, pre-processing, feature extractions in a text and then categorize the data according to the SA techniques. Keeping the focus on twitter data, the data is extracted in domain specific manner. In data cleaning phase, noisy data, missing data, punctuation, tags and emoticons have been considered. For pre-processing, tokenization is performed which is followed by stop word removal (SWR). The proposed article provides an insight of the techniques, that are used for text pre-processing, the impact of their presence on the dataset. The accuracy of classification techniques has been improved after applying text preprocessing and dimensionality has been reduced. The proposed corpus can be utilized in the area of market analysis, customer behaviour, polling analysis, and brand monitoring. The text pre-processing process can serve as the baseline to apply predictive analysis, machine learning and deep learning algorithms which can be extended according to problem definition.
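The pre-processing phases listed in this abstract (cleaning of noisy data, tags and emoticons, then tokenization followed by stop word removal) can be sketched with the Python standard library. The regular expressions and the tiny stop word list are illustrative simplifications, not the paper's actual pipeline.

```python
import re

# Illustrative stop word list; real pipelines use much larger ones.
STOP_WORDS = {"the", "is", "a", "an", "to", "of", "and", "in", "for"}

def preprocess(tweet):
    """Clean a tweet, tokenize it, then remove stop words (SWR)."""
    text = re.sub(r"https?://\S+", "", tweet.lower())   # strip URLs
    text = re.sub(r"[@#]\w+", "", text)                 # strip tags and mentions
    text = re.sub(r"[^a-z\s]", "", text)                # strip punctuation/emoticons
    tokens = text.split()                               # tokenization
    return [t for t in tokens if t not in STOP_WORDS]   # stop word removal

print(preprocess("The service @shop is GREAT for everyone! #happy http://t.co/x"))
# → ['service', 'great', 'everyone']
```

The surviving tokens are what a downstream classifier would see, which is why the abstract reports improved accuracy and reduced dimensionality after this step.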
ARTIFICIAL INTELLIGENT ( ITS / TASK 6 ) done by Wael Saad Hameedi / P71062Wael Alawsey
This document provides an overview of artificial intelligence and several AI techniques. It discusses neural networks, genetic algorithms, expert systems, fuzzy logic, and the suitability of AI for solving transportation problems. Neural networks can be used to perform tasks like optical character recognition by analyzing images. Genetic algorithms use principles of natural selection to arrive at optimal solutions. Expert systems mimic human experts to provide advice. Fuzzy logic allows for gradual membership in sets rather than binary membership. Complexity and uncertainty make transportation well-suited for AI approaches.
Ajit Jaokar, Data Science for IoT professor at Oxford University “Enterprise ...Dataconomy Media
“Enterprise AI - Artificial Intelligence for the Enterprise."
AI is impacting many areas today. This talk discusses how AI will impact the Enterprise and what it means in the near future. The talk is based on the course I teach at the University of Oxford.
Defining privacy and related notions such as Personal Identifiable Information (PII) is a central notion in computer science and other fields. The theoretical, technological, and application aspects of PII require a framework that provides an overview and systematic structure for the discipline’s topics. This paper develops a foundation for representing information privacy. It introduces a coherent conceptualization of the privacy senses built upon diagrammatic representation. A new framework is presented based on a flow-based model that includes generic operations performed on PII.
The document summarizes a presentation on artificial general intelligence (AGI) given at the IntelliFest 2012 conference. It discusses the limitations of narrow AI and the constructivist approach needed for AGI. This involves self-constructing systems that can learn new tasks and adapt. The presentation highlights the HUMANOBS project, which uses a new architecture and programming language called Replicode to develop humanoid robots that can learn social skills through observation. Attention and temporal grounding are also identified as important issues for developing practical AGI systems.
This document describes a chatbot project that was developed to answer queries from UT-Dallas students. The chatbot uses natural language processing and a domain-specific knowledge base about UT-Dallas and its library to analyze user queries and generate relevant responses. It was implemented as an Android application using speech recognition and contains modules for tokenization, sentiment analysis, and pattern matching to understand and respond to queries. The document outlines the architecture, knowledge representation, matching algorithm, and provides examples of conversations with the chatbot.
USING MACHINE LEARNING TO BUILD A SEMI-INTELLIGENT BOT ecij
Nowadays, real-time systems and intelligent systems offer more and more control interfaces based on voice recognition or human language recognition. Robots and drones will soon be mainly controlled by voice. Other robots will integrate bots to interact with their users; this can be useful both in industry and entertainment. At first, researchers were digging on the side of "ontology reasoning". Given all the technical constraints brought by the treatment of ontologies, an interesting solution has emerged in recent years: the construction of a model based on machine learning to connect a human language to a knowledge base (based for example on RDF). We present in this paper our contribution to build a bot that could be used on real-time systems and drones/robots, using recent machine learning technologies.
Seminar Proposal Presentation on "Web Based System for of Citation Court Case...Omisore Olatunji
This document is a literature review for a proposed M.Tech project on developing an expert system for monitoring and citing court cases using fuzzy logic. It summarizes 7 relevant research papers on using technology in the legal system. The papers covered topics like using the internet for legal jurisdictions, using courtroom technology to improve efficiency, using neural networks and fuzzy logic for legal reasoning, developing online dispute resolution systems, and previous systems developed for monitoring and citing court cases in Nigeria. The literature review provides context and motivation for the proposed project.
Review and Approaches to Develop Legal Assistance for Lawyers and Legal Profe...IRJET Journal
This document discusses using artificial intelligence and machine learning techniques to develop a legal assistant to help lawyers and legal professionals. It notes that the Indian legal system has a large backlog of pending cases. The proposed legal assistant would use techniques like text analysis, sentiment analysis, neural networks, decision trees and clustering to analyze legal documents and help lawyers prepare cases more efficiently. It summarizes several existing studies that have explored models using convolutional neural networks, k-nearest neighbors algorithms and decision trees for tasks like classifying cases, predicting verdicts and clustering similar judgments. The document concludes that while research is ongoing, AI and ML models have shown promise in assisting with case preparation and judicial decision-making.
Indiopedia: a System to Acquire Legal Knowledge from the Crowd Using Games Te...Instituto i3G
This document describes a system called Indiopedia that uses crowdsourcing games to acquire legal knowledge from non-experts to expand lightweight ontologies in specific domains. Indiopedia builds on an existing legal knowledge system called Ontojuris that uses ontologies to improve legal search, but ontology building is time-consuming. Indiopedia addresses this by developing crowdsourcing games that turn the ontology expansion process into fun games for non-experts to play and contribute knowledge. The games aim to efficiently acquire semantic structures from many users to develop the legal ontologies in a scalable way. Initial experiments involved users playing games to help expand ontologies related to indigenous legislation.
Temporal Information Processing is a subfield of Natural Language Processing, valuable in many tasks like Question Answering and Summarization. Temporal Information Processing is broad, ranging from classical theories of time and language to current computational approaches for Temporal Information Extraction. This latter trend consists of the automatic extraction of events and temporal expressions. Such issues have attracted great attention, especially with the development of annotated corpora and annotation schemes, mainly TimeBank and TimeML. In this paper, we give a survey of Temporal Information Extraction from Natural Language texts.
Great model a model for the automatic generation of semantic relations betwee...ijcsity
The large available amount of non-structured texts that belong to different domains such as healthcare (e.g. medical records), justice (e.g. laws, declarations), insurance (e.g. declarations), etc. increases the effort required for the analysis of information in a decision-making process. Different projects and tools have proposed strategies to reduce this complexity by classifying, summarizing or annotating the texts. Particularly, text summary strategies have proven to be very useful to provide a compact view of an original text. However, the available strategies to generate these summaries do not fit very well within domains that require taking into consideration the temporal dimension of the text (e.g. a recent piece of text in a medical record is more important than a previous one) and the profile of the person who requires the summary (e.g. the medical specialization). To cope with these limitations this paper presents "GReAT", a model for automatic summary generation that relies on natural language processing and text mining techniques to extract the most relevant information from narrative texts and discover new information from the detection of related information. The GReAT model was implemented in software to be validated in a health institution, where it has shown to be very useful to display a preview of the information in medical health records and discover new facts and hypotheses within the information. Several tests were executed, such as Functionality, Usability and Performance, regarding the implemented software. In addition, precision and recall measures were applied to the results obtained through the implemented tool, as well as to the loss of information caused by providing a text shorter than the original.
CITIZENSHIP ACT (591) OF GHANA AS A LOGIC THEOREM AND ITS SEMANTIC IMPLICATIONSkevig
This document summarizes a paper that presents excerpts from Ghana's Citizenship Act formalized as a logic theorem. The paper aims to analyze the semantics of the legal text by formalizing it in logic. It discusses challenges with ambiguity in legal language and how formalization can help reveal semantic implications. It presents the Citizenship Act formalized with logical connectives and analyzes how this preserves clarity and allows for deductive reasoning through the text. The formalization is said to establish the intended semantics while preventing wrong meanings from ambiguity.
Supreme court dialogue classification using machine learning models IJECEIAES
This study aimed to classify sentences from supreme court dialogues as being said by a justice or non-justice using machine learning models. Two models were tested - naïve Bayes and logistic regression. The models were tested on datasets from individual court cases and a combined dataset. The naïve Bayes model performed better than logistic regression on individual cases, achieving AUC scores of 88.54% and 83.74%. However, on the combined dataset both models performed equally poorly with an AUC score of 67.72%. The study showed that models trained on individual cases yielded better performance than ones trained on multiple cases, demonstrating the importance of case specificity for legal classification tasks.
This document is a thesis that proposes using word embeddings to improve information retrieval by addressing term mismatch issues. It discusses word2vec, a technique for learning word embeddings from large text corpora that capture semantic relationships between words. The thesis proposes two approaches: 1) incorporating word embedding similarities into a probabilistic language model for retrieval and 2) a vector space model. Due to time constraints, only the first approach is implemented, which integrates word embeddings into ALMasri and Chevallet's probabilistic language model. Experiments are conducted to evaluate the impact of using semantic features from word embeddings on retrieval effectiveness.
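The thesis's core idea, bridging term mismatch with embedding similarity, can be illustrated with plain cosine similarity. The three-dimensional vectors below are made-up toy values, not real word2vec embeddings, and the word list is hypothetical.

```python
import math

# Toy 3-dimensional "embeddings"; real word2vec vectors have hundreds of
# dimensions and are learned from large corpora. These values are made up.
EMBEDDINGS = {
    "car":        [0.90, 0.10, 0.00],
    "automobile": [0.85, 0.15, 0.05],
    "banana":     [0.00, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A query term can match a document term it never shares a string with,
# as long as their embeddings are close -- the term-mismatch fix.
print(cosine(EMBEDDINGS["car"], EMBEDDINGS["automobile"]) >
      cosine(EMBEDDINGS["car"], EMBEDDINGS["banana"]))   # True
```

In a retrieval model like the one the thesis extends, such similarity scores are what lets a document containing "automobile" score well against a query for "car".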
An overview of information extraction techniques for legal document analysis ...IJECEIAES
In the Indian law system, different courts publish their legal proceedings every month for future reference by legal experts and common people. Extensive manual labor and time are required to analyze and process the information stored in these lengthy, complex legal documents. Automatic legal document processing is the solution to overcome the drawbacks of manual processing and will be very helpful to the common man for a better understanding of the legal domain. In this paper, we explore the recent advances in the field of legal text processing and provide a comparative analysis of the approaches used for it. In this work, we have divided the approaches into three classes: NLP-based, deep learning-based, and KBP-based approaches. We have put special emphasis on the KBP approach, as we strongly believe that this approach can handle the complexities of the legal domain well. We finally discuss some possible future research directions for legal document analysis and processing.
Artificial-Intelligence--AI And ES Nowledge Base SystemsJim Webb
This document discusses teaching artificial intelligence concepts to students. It recommends using hands-on exercises and group work to effectively introduce topics. It also provides sample questions to test understanding of knowledge-based systems and expert systems, including their components, development, applications, benefits, and limitations.
ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS KNOWLEDGE-BASED SYSTEMS TEACHING ...Arlene Smith
This document discusses teaching artificial intelligence concepts to students. It recommends using hands-on exercises and group work to effectively introduce topics. It also provides sample questions to test understanding of knowledge-based systems and expert systems, including their components, development, applications, benefits, and limitations.
Here are the key points about using content-based filtering techniques:
- Content-based filtering relies on analyzing the content or description of items to recommend items similar to what the user has liked in the past. It looks for patterns and regularities in item attributes/descriptions to distinguish highly rated items.
- The item content/descriptions are analyzed automatically by extracting information from sources like web pages, or entered manually from product databases.
- It focuses on objective attributes about items that can be extracted algorithmically, like text analysis of documents.
- However, personal preferences and what makes an item appealing are often subjective qualities not easily extracted algorithmically, like writing style or taste.
- So while content-based filtering can
AI for Legal Research with applications, toolsmahaffeycheryld
AI applications in legal research include rapid document analysis, case law review, and statute interpretation. AI-powered tools can sift through vast legal databases to find relevant precedents and citations, enhancing research accuracy and speed. They assist in legal writing by drafting and proofreading documents. Predictive analytics help foresee case outcomes based on historical data, aiding in strategic decision-making. AI also automates routine tasks like contract review and due diligence, freeing up lawyers to focus on complex legal issues. These applications make legal research more efficient, cost-effective, and accessible.
A Survey on Speech Recognition with Language Specificationijtsrd
As a cross-disciplinary field, speech recognition takes speech itself as its object of study. Speech recognition allows the machine to convert the speech signal into text or commands via the process of identification and understanding. Speech recognition involves the fields of physiology, psychology, linguistics, computer science and signal processing, is even related to a person's body language, and its goal is to achieve natural language communication between man and machine. Speech recognition technology is gradually becoming a key technology of the IT man-machine interface. This paper describes the development of speech recognition technology and its basic principles and methods, reviews the classification of speech recognition systems, speech recognition approaches and voice recognition technology, and analyzes the problems faced by speech recognition. Dr. Preeti Savant | Lakshmi Sandhya H "A Survey on Speech Recognition with Language Specification" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-3, April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49370.pdf Paper URL: https://www.ijtsrd.com/computer-science/speech-recognition/49370/a-survey-on-speech-recognition-with-language-specification/dr-preeti-savant
Framework for developing algorithmic fairnessjournalBEEI
In a world where algorithms can control the lives of society, it is not surprising that specific complications in determining fairness in algorithmic decisions will arise at some point. Machine learning has been the de facto tool to forecast problems that humans cannot reliably predict without injecting some amount of subjectivity (i.e., eliminating the "irrational" nature of humans). In this paper, we proposed a framework for defining a fair algorithm metric by compiling information and propositions from various papers into a single summarized list of fairness requirements (guideline alike). Researchers can then adopt it as a foundation or reference to aid them in developing their interpretation of algorithmic fairness. Therefore, future work in this domain would have a more straightforward development process. While structuring this framework, we also found that developing a concept of fairness that everyone can accept would require collaboration with other domain expertise (e.g., social science, law, etc.) to avoid any misinformation or naivety that might occur from that particular subject. That is because this field of algorithmic fairness is far broader than one would initially think; various problems from multiple points of view could pass unnoticed by the novice's eye. In the real world, using active discriminator attributes such as religion, race, nation, tribe, and gender becomes the problem, but in the algorithm, it becomes the fairness reason.
The Role of AI Technology for Legal Research and Decision MakingIRJET Journal
The document discusses the evolving role of artificial intelligence (AI) technology in the legal field, specifically how AI can enhance legal research and decision making through tools like machine learning, natural language processing, and predictive analytics. It explores the potential benefits of AI, such as improved efficiency and accuracy, as well as challenges around transparency, bias, and replacing human lawyers. The document concludes that while AI holds promise, its use in the legal system must be carefully regulated to ensure it is applied responsibly and ethically.
This document discusses a case study involving a small-medium enterprise (SME) that has experienced anomalies in its accounting and product records. The SME has hired a digital forensic investigator to determine if any malicious activity has occurred and ensure its systems are free of malware. The investigator will conduct a malware investigation and digital forensic investigation following the four principles of the Association of Chief Police Officers (ACPO) guidelines. The investigator will use the Four Step Forensics Process (FSFP) model to identify, preserve, extract, and analyze evidence from the SME's systems to determine the cause of problems and make recommendations.
IRJET- Survey for Amazon Fine Food ReviewsIRJET Journal
This document discusses sentiment analysis and summarizes several papers on related topics. It begins with an abstract describing sentiment analysis and its importance. The introduction defines sentiment classification and analysis. The literature survey section summarizes 5 papers on natural language processing and machine learning algorithms for sentiment analysis, including K-means clustering, bag-of-words models, TF-IDF vectorization for document clustering, hierarchical clustering methods, and using naive bayes and SVM for sentiment analysis and text summarization. The conclusion discusses techniques for data processing, natural language processing, and machine learning algorithms covered.
Similar to AUTOMATED DISCOVERY OF LOGICAL FALLACIES IN LEGAL ARGUMENTATION (20)
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Gas agency management system project report.pdfKamal Acharya
The project entitled "Gas Agency" is done to make the manual process easier by making it a computerized system for billing and maintaining stock. The gas agencies get order requests through phone calls or in person from their customers and deliver the gas cylinders to their addresses based on demand and the previous delivery date. This process is made computerized, and the customer's name, address and stock details are stored in a database. Based on this, billing for a customer is made simple and easier, since a customer's order for gas can be accepted only after a certain period has passed since the previous delivery; this can be calculated and billed easily through the system. There are two types of delivery: domestic-use delivery and commercial-use delivery. The bill rate and capacity differ for both, and this can be easily maintained and charged accordingly.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
AUTOMATED DISCOVERY OF LOGICAL FALLACIES IN LEGAL ARGUMENTATION
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.11, No.2, March 2020
DOI: 10.5121/ijaia.2020.11203
AUTOMATED DISCOVERY OF LOGICAL
FALLACIES IN LEGAL ARGUMENTATION
Callistus Ireneous Nakpih1 and Simone Santini2
1 Mathematics and Computing Department, St. John Bosco's College of Education, Navrongo, Ghana
2 Computer Science Department, Universidad Autonoma De Madrid, Spain
ABSTRACT
This paper presents a model of an algorithmic framework and a system for the discovery of non sequitur fallacies in legal argumentation. The model operates on formalised legal text implemented in Prolog. The algorithm assesses the different parts of the formalised legal text involved in legal decision-making, such as the claim of a plaintiff, the piece of law applied to the case, and the decision of the judge, in order to detect fallacies in an argument. We provide a mechanism designed to assess the coherence of every premise of a claim, its logical structure, and its legal consistency with the corresponding piece of law at each stage of the argumentation. The modelled system checks the validity and soundness of a claim, as well as the sufficiency and necessity of the premises of arguments. We assert that dealing with the challenges of validity, soundness, sufficiency and necessity resolves fallacies in argumentation.
KEYWORDS
Artificial Intelligence, Natural Language Processing, Logic Fallacy, Legal Argumentation, Algorithms in
Prolog.
1. INTRODUCTION
Researchers from different disciplines in Computer Science, especially Natural Language Processing (NLP), have attempted to fit different theories of human reasoning into machine computation in order to resolve important semantic challenges. Various computational tools have been developed through NLP, a discipline focused on the translation of human languages into formalised machine language [1][2]. Some NLP tools are developed to synthesise the reasoning and logic of human expression, targeting the different semantic challenges or errors generated in discourse [3][4].
One of the important errors generated in human discourse is the fallacy. Even though fallacies form part of the natural way humans communicate, they are essentially its errors. Fallacies are generally presented as logical faults of statements, or faulty reasoning, or both, in an argument [5][6]. Aristotle was one of the earliest philosophers to discuss fallacies in a profound way [7]. Several other classical as well as contemporary discussions have helped us to understand the nature, and to some extent the variance of expression of human thought, that generates the different types of fallacies [8][9][10].
However, it is still apparent that the more the semantic gaps in natural language appear to be bridged, the more complex the errors generated in natural reasoning and linguistic expression become. There have been various forms of critique [11][12] and counter-critique [13] of how fallacies have been expressed, understood, classified and applied by various researchers. It is revealing, and yet interesting, that the form of the fallacy itself is not straightforward enough for experts to classify with compelling consistency what constitutes a fallacy and its types, and this makes it challenging for computer scientists to design computational systems that can capture the complete behaviour of fallacies as a structured formalism.
Even though philosophers and argumentation theorists have provided varied manual means of identifying fallacies, in natural language generally and in legal language, which partly depend on sound reasoning [14], the process sometimes becomes puzzling and deserves more robust computational approaches.
2. RELATED WORKS
The application of NLP tools has offered diverse solutions to different challenges in legal practice. The efficiency of legal practice hinges largely on the understanding and appropriate use of legal language, a practice that requires legal research, access to information, and the use of knowledge for legal decision-making. The information and knowledge that govern legal practice are usually composed into statutes, regulations, procedures and so forth, which can be on different themes but sometimes converge on account of their application to specific issues. The understanding and use of legal knowledge (especially across divergent legal subject issues) can sometimes be very tricky for practitioners. These important challenges have attracted technologies that focus on the efficient access, understanding and use of legal information for decision-making, while NLP tools specifically target the textual structure, knowledge extraction, semantics and retrieval systems of legal language.
Callistus Ireneous Nakpih, in his work “Citizenship Act (591) of Ghana as a Logic Theorem and Its Semantic Implications” [15], presents legal text in a formalism that allows for computational reasoning and the deduction of meaning from the text. The theorems developed guide, or rather compel, a pattern of reasoning through the legal text that conforms to domain-specific decision-making processes. We recognise in that research that ambiguities in the linguistic structure of the legal text are reduced, which therefore offers a good knowledgebase for the analysis and discovery of fallacies in legal systems. Following this idea, our model provides a repository containing such a formalised knowledgebase of legal text that can be made accessible to a fallacy-analysing mechanism.
Hiroaki Yamada, Simone Teufel and Takenobu Tokunaga developed an annotated model with algorithms for the automatic extraction of argument structure based on supervised learning [16]. Their research offers mechanisms that provide summaries or extracts from legal documents on individual issue topics. They highlighted the need for legal practitioners to consistently follow the principles of legal argument, an activity which can be cumbersome amid the realities of information overload. They provided a mechanism that makes the right piece of information readily available for a particular legal application. Our research highlights the importance of coherence in the use and application of legal information in decision-making, and thus considers several indications and concepts provided in their work. Our model in this paper requires the evaluation of a piece of law employed for decision-making, to ascertain whether it forms part of a legal premise or otherwise. The concept of having access to, and using, the right legal information or knowledge as part of efficient legal decision-making is thus implemented for the analysis of fallacies in our research.
Several other impressive systems have been modelled for decision-making, using different kinds of theories and drawing on other technologies that bring improvement to the automated world. One such impressive development is Detore and Suever's “Automated Decision-Making Arrangement” [17]. Their system executes several levels of decision operations by receiving input data, which is analysed by comparison with the knowledgebase data in the system. Data that matches the system's knowledgebase is used to execute a decision operation, whereas data that does not match is analysed for its degree of divergence, and a decision operation is executed based on the nature of the input data. The system is generally composed of input, output, processing and storage functions. We take inspiration from the composition and function of this system, which led us to model our system to receive formalised text as input to be transmitted into an analyser; we hold in a repository a knowledgebase of existing law or regulations, which is applied to the input data in the analyser in different ways and at different levels for fallacy discovery. We provide a pattern that indicates the state of an argument being analysed in terms of soundness, validity and sufficiency.
Computational mechanisms targeting the processing of natural language for knowledge and decision-making are becoming more impressive by the year. IBM's Project Debater is one of the state-of-the-art inventions that gives a very practical feel of how machines can be made to process discourse as humans do. In this invention, we see the first Artificial Intelligence (AI) machine debate complex topics without prior training on them. It has the capacity to listen to its opponent debater and give a rebuttal. The system combines complex mechanisms such as text segmentation, the modelling of human dilemmas, listening comprehension, and data-driven speech writing and delivery, among others. The results of Project Debater are quite remarkable, like those of several other systems that have yielded practical usefulness in various ways for resolving natural language problems.
While AI is growing and gaining the capacity to resolve various complex human problems, especially in NLP, we are yet to see robust computational mechanisms engineered to directly target the discovery of fallacies in natural language expression. In this research we therefore focus narrowly on the challenge of identifying fallacies in legal argumentation, drawing on existing reasoning theories and mechanisms.
3. FALLACIES IN LEGAL ARGUMENTATION
In legal argumentation, legal practitioners try to assess the validity and soundness of the reasoning employed in the argumentation against the facts of a piece of law [18][19]. This exercise can sometimes be difficult, as legal language itself can be very confusing [20]. When there is a fault in the logical structure or reasoning pattern of a legal argumentation, fallacies may occur. Argumentation theorists have tried several means of classifying these fallacies to capture their varied structure and complexity for identification. By these classifications, some forms of fallacy are referred to as logical fallacies, in which the rules of logic are said to be ignored in the course of making deductive conclusions [21].
Legal argumentation has been one domain of life that depends greatly on the use and interpretation of text for processing decisions. The interpretation of text is, however, subject to the complexity of legal expression and reasoning, and legal expressions are mostly made with different levels
of ambiguity, which sometimes lead to unsound reasoning [22][23] and may also result in the introduction of fallacies into the decisions taken through the process [24].
The discovery of fallacies in the course of legal discourse or argumentation is a difficult task, and the fallacies are mostly elusive. This is because the analysis of the logical implications of legal expressions at the level of discourse is not robust enough to reveal the faulty reasoning that leads to fallacies. Even though some researchers have attempted to use computational means to support legal processes, to the best of our knowledge they have not directly dealt with the problem of fallacies in legal argumentation.
More than a hundred types of fallacy have been classified by their nature and how they are presented in language. We admit that it is quite challenging to build one computational system that can adequately process all of them; we therefore focus only on logical fallacies that occur in legal language, and use some of the basic rules defining the non sequitur fallacy, a type of logical fallacy, for our model.
3.1. Non Sequitur Fallacies and Legal Argumentation
Non sequitur fallacies are generally described as fallacies whose conclusions do not follow from their premises; however, some researchers disagree with this opinion and hold that the use of non sequitur in relation to fallacies is more complex and spans beyond this definition [25]. In a broader sense, non sequitur fallacies occur when the logical structure of an argumentation is undermined; when the validity, soundness or sufficiency of statements is faulty [26][27][28].
Example 1:
All birds can fly.
A penguin is a bird.
Therefore a penguin can fly.

Example 2:
If 2020 is divisible by 5,
then 2020 is a leap year.

Example 3:
A leap year is exactly divisible by 4.
1900 is divisible by 4.
Therefore 1900 is a leap year.
Example 1 appears to be a valid argument: if in our real world it were really the case that all birds can fly, then the conclusion would be correct. The conclusion relates directly to the two premises, so this is not an error of validity. However, we recognise two dissimilar premises which, when synthesised, weaken the continuity of reasoning of the argument, resulting in a false conclusion [29]. In the real world not all birds can fly, and one such bird is the penguin; in the assertion, the first premise, which we know to be factually false, is related to a factually true premise for the deduction of the conclusion. Even though there is a semantic connection between them in the deduction process, the wrong assumption that all birds can fly poses an error of sound reasoning, since in real life not all birds can fly, and this unfortunately leads to the logically correct deduction that a penguin can fly. The deductive structure followed a false premise and hence resulted in a fallacious conclusion.
In Example 2, the year 2020 is indeed divisible by 5 and satisfies the condition of the premise, which superficially makes the conclusion true, because 2020 is a leap year. However, 2020 being a leap year has nothing to do with its being divisible by 5. In other words, divisibility by 5 is not a criterion for determining whether a year is a leap year, so the premise is irrelevant to the argument; even though the premise and the derived conclusion both happen to be true, the premise plays no part in the deduction and is not necessary for the conclusion.
In Example 3, we have two true premises for deducing the conclusion: it is true that a leap year is divisible by 4, and it is also true that 1900 is divisible by 4; however, the conclusion that 1900 is a leap year is false. This is because the premises given in the argument, though true, are not sufficient to deduce the conclusion; other conditions are needed in addition to the stated premises to decide correctly whether a year is a leap year. These examples typify the general form and structure of logical fallacies, which are generated by the different types of error shown.
Non sequitur fallacies are common in legal argumentation in the sense that some claims do not have sufficient or relevant premises to justify the conclusions alleged by the claimer, or the reasoning employed to state an allegation is faulty or invalid. The laws of the court require that any person making an allegation against an opponent has the onus to tender evidence of their claim to the court for examination and decision-making.
The exercise of assessing and deciding whether a claim is true or false typically follows the pattern of asserting a non sequitur fallacy in legal argumentation; in both processes we are generally interested in whether the conclusion made follows from its corresponding premises. This close relation between the processes allows us to narrow our solution to identifying non sequitur fallacies in legal argumentation, which are logical and deductive errors in nature, by establishing the disconnect between a premise and a conclusion. We are also interested in ensuring that claims are made with legal certainty [30].
4. FINDINGS
In this study, we provide a modelled system that examines all aspects of a legal statement using rules that establish validity, soundness, sufficiency and necessity, for the automated discovery of fallacies. We assert from the findings of this study that an argument which is valid, sound, and has sufficient and necessary propositions avoids fallacies. The computational mechanism provided by this study therefore requires that an argument be assessed in all these aspects.
Logical fallacies occur in the process of reasoning when the logic of an argument is undermined. We maintain only the generic structure of non sequitur fallacies for their deduction in legal text, without considering all of their specific forms and complexities. The generic structure maintained here is sufficient for the purpose and delimitation of this study. Non sequitur fallacies in legal argumentation can generally occur in the following:
1. A claim made against an opponent or a respondent.
2. Facts presented to a court by a respondent against a claim.
3. The law or legislative instrument employed for the judgement or decision-making of the court.
4. The conclusion or decision of a judge on a case.
The modelled system considers the areas listed above for assessment and the discovery of fallacies. Usually, in a court case, a claim follows an antecedent-consequent logic structure to challenge the real facts of a respondent on some legal backing [31]. In our model, a claim Cl has a set of premises Pc, where Pc holds a list of statements Pc1, Pc2, …, Pcn, and Cl is implied by Pc.

Therefore, Cl :- Pc.
Where Pc = [Pc1, Pc2, Pc3, …, Pcn].

Claims are truth-functional and are usually asserted using the corresponding or appropriate laws. We denote an existing piece of law employed to assert a claim as Lw. The law is also an argument composed with an antecedent-consequent logic structure, and sometimes as axiomatic facts. We denote the premises of a piece of law as PL, with atomic statements PL1, PL2, …, PLn.

Therefore, Lw :- PL.
Where PL = [PL1, PL2, …, PLn].

We denote the decision of the judge or the court as D, with a list of premises PD and atomic statements PD1, PD2, …, PDn, which are based on legal provision.

Therefore, D :- PD.
Where PD = [PD1, PD2, PD3, …, PDn].

The defined premises and implied conclusions are variables that hold atomic statements defined in the law and the claim, all of which are truth-functional. The mechanism for assessing the legal and logical requirements of an argument is provided in the model.
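The truth-functional reading of Cl :- Pc can be sketched as follows; this is an illustrative Python translation of the Prolog formalism (the function name and the premise labels are hypothetical):

```python
def resolve(premises: dict) -> bool:
    """Cl :- Pc — the conclusion holds only when every atomic
    statement Pc1..Pcn in the premise list holds."""
    return all(premises.values())

pc = {"Pc1": True, "Pc2": True, "Pc3": True}
assert resolve(pc)          # all premises true, so Cl resolves True
pc["Pc2"] = False
assert not resolve(pc)      # one false premise and Cl no longer follows
```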
4.1. Algorithmic Model for Automated Fallacy Discovery
The algorithm is designed to assess any claim, as well as the associated legislative instrument employed by a court. It also uses facts established by the court and performs deductive reasoning through the text to discover any logical fallacies that occur in the process.
The Algorithm

Fetch Cl.
Fetch Lw.
DO Validity(Cl).
    If FALSE PRINT “Fallacy Alert - invalid Claim” else GOTO NEXT
DO Lw =@= Cl.
    If FALSE PRINT “Fallacy Alert - legal inconsistency” else GOTO NEXT
DO same_length(Pc, PL).
    If FALSE PRINT “Fallacy Alert - insufficient Premise” else GOTO NEXT
DO subset(Pc, PL).
    If FALSE PRINT “Fallacy Alert - Not Necessary Premise” else DO ?- Cl.
STOP.

Prolog code for establishing a fallacy (a fallacy holds when any of the four checks fails; \+ denotes negation as failure):

fallacy(Cl, Lw, Pc, PL) :-
    (   \+ validity(Cl)            % the claim is not valid
    ;   \+ (Lw =@= Cl)             % the claim is not legally consistent with the law
    ;   \+ same_length(Pc, PL)     % the premises are insufficient
    ;   \+ subset(Pc, PL)          % a premise is not necessary per the law
    ).
We propose for our system an input mechanism that transmits a claim generated in Prolog to a fallacy analyser, which receives the claim together with the corresponding piece of law, as a Prolog program, from a knowledgebase. The analyser uses the pattern provided in the algorithm to assess the validity, soundness, sufficiency and necessity, as well as the legal backing, of an argument. After these assessments, the analyser can then process the facts of the respondent against whom the claim is made, to ascertain the truth values of the whole argument.
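The analyser's check sequence can be sketched in Python under stated assumptions: the function and field names are hypothetical, the =@= test is approximated here by comparing the legal provision a claim cites, and the alert strings follow the algorithm above. This is an illustrative translation, not the paper's Prolog implementation.

```python
def analyse(claim: dict, law: dict, court_facts: dict) -> str:
    # Validity(Cl): the conclusion may not be false while all premises hold.
    premises_true = all(court_facts.get(p, False) for p in claim["premises"])
    if premises_true and not claim["conclusion"]:
        return "Fallacy Alert - invalid Claim"
    # Lw =@= Cl (approximated): the claim must cite the law it relies on.
    if claim["provision"] != law["provision"]:
        return "Fallacy Alert - legal inconsistency"
    # same_length(Pc, PL): sufficiency of the premises.
    if len(claim["premises"]) != len(law["premises"]):
        return "Fallacy Alert - insufficient Premise"
    # subset(Pc, PL): necessity of the premises.
    if not set(claim["premises"]) <= set(law["premises"]):
        return "Fallacy Alert - Not Necessary Premise"
    return "no fallacy detected"

law = {"provision": "act_591", "premises": ["PL1", "PL2"]}
claim = {"provision": "act_591", "premises": ["PL1"], "conclusion": True}
print(analyse(claim, law, {"PL1": True}))  # insufficient Premise alert
```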
Figure 1. Nakpih’s Model of Automated Fallacy Discovery System
4.2. Validity and Soundness
Some researchers argue that the validity of logics, especially of first order, is undecidable or only semi-decidable [32][33]; however, the general rule for establishing validity suffices for the process in our model: an argument is valid if and only if its conclusion cannot be false while all the premises of the argument are true [34], which can be proven through various means including the contrapositive [35][36]. The model below ensures the validity of a claim:

If Validity(Cl):-
    (
        Pc = [Pc1, Pc2, Pc3, …, Pcn],
        Pc -> T
    )
Then Cl :- T.
AND
If Pc :- T
And Cl :- F   /* invalid */
For a claim to be in valid logical form, if all the atomic statements of the premises Pc1 to Pcn, ANDed together, evaluate to True (T), then the conclusion of the claim must resolve to True. However, if the ANDed atomic values of Pc resolve to True and the conclusion of the claim resolves to False (F), then the argument or claim is invalid.
One other step in the decision-making process on a court case is to examine the evidence for claims against the real facts of the respondent [37]. This is where the truth-functionality of propositions is tested. The model therefore allows the truth values of the propositions of claims to be evaluated against the certified real facts of a respondent, and thus maintains the soundness of an argument. A sound argument must first be valid and must have premises that are actually true [38]. While validity maintains that the structural composition of a claim guarantees a true conclusion as long as its premises are true, the soundness of a claim further enforces that the given premises are actually true [39]. The component of the model enforcing soundness also evaluates the content of a claim against the established facts of the court to ascertain the truth values of the claim (after its logical form is satisfied). This allows the evidence or facts of a claim to be evaluated through deductive reasoning; the important facts established by the court are given as input to the system for the evaluation process. The objective here is to ensure that claims are not only valid but sound, because sound arguments to a good extent resolve fallacies.
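The soundness test can be sketched as follows (an illustrative Python translation with hypothetical names; the paper's implementation is in Prolog): a claim is sound only if it is valid in form and each premise is actually true against the facts established by the court. Example 1 from Section 3.1 is valid in form but unsound:

```python
def sound(valid: bool, premises: list, court_facts: dict) -> bool:
    # Soundness = validity of form + factually true premises.
    return valid and all(court_facts.get(p, False) for p in premises)

facts = {"all_birds_fly": False, "penguin_is_bird": True}
# The penguin argument is structurally valid, yet its first premise is
# factually false, so the argument is unsound.
assert not sound(True, ["all_birds_fly", "penguin_is_bird"], facts)
```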
4.3. Sufficiency and Necessity
On the account of sufficiency for the avoidance of fallacy [40], we ensure that the minimal premises required to establish the conclusion of an argument are enforced. By this evaluation, claims made without the required premises to deduce a conclusion are detected. The sufficiency of the premises of a claim is evaluated against legal provision; every piece of law usually has a set of premises, or axiomatic facts, used to establish legal conclusions. The premises of a claim should therefore not be anything less than what is required by the corresponding piece of law for the deductive process. Thus, the obvious minimal premises for a legal conclusion must always be maintained. We therefore generate same_length(Pc, PL), where the premises of the claim are matched with the required premises of the piece of law, to ensure the sufficiency of an argumentation.
On the account of necessity [41], an evaluation of a claim and its associated law is also made; the evaluation ascertains whether the claim is made on valid legal grounds, even before the process of fact-finding is engaged. Claims must correspond to some legal provision to allow for legal reasoning. The atomic statements made in a claim should cohere with the atomic statements made in the declaration of the associated law; we generate Lw =@= Cl to assess the structural equivalence of a piece of law with the corresponding claim. We likewise generate subset(Pc, PL) to ensure that the premises of a claim relate to the premises of an existing law. This warrants that the premises of a claim are legal premises and are relevant or necessary for the reasoning and deduction of a legal consequent in the claim. By this evaluation, a premise which is found in a claim but is not relevant or necessary to the deduction process per the legal provision will be detected.
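The two Prolog goals used for sufficiency and necessity have direct Python analogues; the premise names below are hypothetical examples invented for illustration, not drawn from any actual statute:

```python
def sufficient(pc: list, pl: list) -> bool:
    return len(pc) == len(pl)          # same_length(Pc, PL)

def necessary(pc: list, pl: list) -> bool:
    return set(pc) <= set(pl)          # subset(Pc, PL)

pl = ["born_in_country", "parent_is_citizen"]       # premises of the law
pc_short = ["born_in_country"]                      # a required premise is missing
pc_extra = ["born_in_country", "owns_property"]     # an irrelevant premise is used
assert not sufficient(pc_short, pl)                 # insufficient premises
assert sufficient(pc_extra, pl) and not necessary(pc_extra, pl)
```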
The various aspects of the model ensure that the different logical and legal requirements of an argument are asserted, and their combined process guarantees the detection of fallacies.
5. FUTURE WORK
The system we have provided analyses formalised text for fallacies; however, the formalisation of natural text is not automatic. For future work, and for the improvement of our system, it would be interesting to provide a mechanism that can extract natural text and formalise it automatically into a knowledgebase, from which it can be retrieved for fallacy analysis. We also look forward to a mechanism that can automatically retrieve the piece of law related to a claim. In our system, identifying the piece of law appropriate for assessing the legal consistency of a claim is a human activity; an intelligent mechanism for retrieving a piece of law automatically, based on a claim, would augment the efficiency of our model. There are several information retrieval systems for legal practice which generally retrieve the required documents with their content for use; such retrieval mechanisms could be modelled to focus on understanding formalised claims and then retrieving the corresponding legislative instrument from a knowledgebase for fallacy analysis. The formalisation of fallacies is a grey area in NLP; other researchers can leverage existing theories and technologies to target this complex problem of natural language.
6. CONCLUSION
The quest to resolve fallacies in argumentation is not a simple task; fallacies have been controversial since discussion of them began, and remain so, and we do not claim to offer a computational solution that captures all the complexities of logical fallacies. Meanwhile, various research discussions have given credible insight into the general structure and form of fallacies. The main concern for language users in general, and legal practitioners especially, is how to identify these fallacies when they occur, or rather how to avoid them altogether. Through NLP, which is a branch of AI, researchers have provided various tools for addressing different semantic challenges; we therefore follow the insights on fallacies and the NLP technologies available to us to provide a system that discovers logical fallacies in legal argumentation automatically.
In this paper we provided a computational mechanism for assessing legal argumentation for the discovery of logical fallacies, particularly the non sequitur. We consider the problems of validity, soundness, sufficiency and necessity, among other factors, as underpinning the generation of logical fallacies in legal argumentation; we therefore provided a model that allows these aspects of any legal claim or argument to be evaluated. At each step of evaluation by the modelled system, we ensure that any error realised in a statement that amounts to a non sequitur fallacy can be detected. We generated an algorithm in Prolog, and a conceptual model of how the whole system is intended to work for the discovery of fallacies. We thus assert by this research that dealing with the problems of validity, soundness, sufficiency and necessity resolves fallacies in argumentation.
ACKNOWLEDGEMENTS
We acknowledge our friends in the legal fraternity who offered us very insightful guidance in understanding legal processes and argumentation.
REFERENCES
[1] T. Bell (2018) “What is Natural Language Processing? The Business Benefits of NLP Explained”, CIO from IDG, accessed 18th September 2019, from <https://www.cio.com/article/3258837/artificialintelligence/what-is-natural-language-processing-the-business-benefits-of-nlp-explained.html>
[2] U.S. Tiwary and T. Siddiqui (2008) “Natural Language Processing and Information Retrieval”, Oxford University Press Inc., New York, NY, USA.
[3] P. Blackburn, J. Bos (2003) “Computational Semantics”, THEORIA. An international Journal for
Theory, History and Foundation of Science, Vol. 18, No. 1, pp. 27-45.
[4] J.R. Smith (2007) “The Real Problem of Bridging the ‘Semantic Gap’ ”. In: Sebe N., Liu Y.,
Zhuang Y., Huang T.S. (eds) Multimedia Content Analysis and Mining. MCAM 2007. Lecture
Notes in Computer Science, Vol. 4577. Springer, Berlin, Heidelberg.
[5] H.J. Gensler (2010) “The A to Z of Logics”, Rowman & Littlefield, pp. 74.
[6] R. Paul and L. Elder (2008). “The thinker's guide to fallacies: The art of mental trickery and
manipulation.” Dillon Beach, CA: Foundation for Critical Thinking Press.
[7] Aristotle (1955) “On Sophistical Refutations”, Loeb Classical Library, Cambridge, Mass., Harvard University Press, accessed 18th September 2019, from <http://www.hup.harvard.edu/catalog.php?isbn=9780674994416>
[8] A. Pineau (2013) “Informal Logic” Vol. 33, No. 4, pp. 531-546, McMaster University, Canada.
[9] R.H Johnson and J.A. Blair (1993). “Dissent in fallacyland, Part 2: Problems with Willard”. In R.E.
McKerrow (Ed.), Argument and the postmodern challenge: Proceedings of the eighth NCA/AFA
conference on argumentation, pp. 191-193. Annandale, VA.
[10] R.H. Johnson, and J.A. Blair (1983) “Logical Self-Defense”, Second Edition, McGraw Hill Ryerson,
Toronto.
[11] C.L. Hamblin (1970) “Fallacies”, London: Methuen.
[12] R.H. Johnson (1990) “Acceptance is not Enough: A Critique of Hamblin”, Philosophy & Rhetoric,
Penn State University Press, Vol. 23, No. 4, pp. 271-287.
[13] D.N. Walton (1991) “Hamblin on the Standard Treatment of Fallacies”, Philosophy & Rhetoric, Penn
State University Press, Vol. 24, No. 4. pp. 353-361.
[14] H. Hansen (2019) “Fallacies”, The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
[15] N. I. Callistus (2018) “Citizenship Act (591) of Ghana As A Logic Theorem And Its Semantic
Implications”, International Journal on Natural Language Computing, Vol. 7, No. 5, pp. 11-26.
[16] H. Yamada, S. Teufel and T. Tokunaga (2017), "Designing an annotation scheme for summarizing
Japanese judgment documents," 9th International Conference on Knowledge and Systems
Engineering (KSE), Hue, pp. 275-280.
[17] W. A. Detore and R.D. Suever (1998) “Automated Decision-Making Arrangement”, United States National Risk Management, Inc. (Fort Wayne, IN), Patent: 5732397.
[18] N. Luhmann (1995) “Legal Argumentation: An Analysis of Its Form”, Modern Law Review, Vol. 58,
No. 3. pp. 285-298.
[19] J. Dickson (2010) “Interpretation and Coherence in Legal Reasoning”, The Stanford Encyclopedia of
Philosophy, Metaphysics Research Lab, Stanford University.
[20] V. Jacqueline (2009), “Speech acts in Legal Language: Introduction”, Journal of Pragmatics – J
PRAGMATICS, Vol. 41, pp. 393-400.
[21] W.S. Jevons (1872) “Elementary Lessons in Logic: Deductive and Inductive”, Macmillan and Co., London.
[22] A. Wagner and S. Cacciaguidi-Fahy (2006) “Legal Language and the search for Clarity: Practice and
tools”, Peter Langi.
[23] H. Singh (2013) “Language of Law – Ambiguities and Interpretation”, International Association of
Scientific Innovation and Research, USA, pp. 122-126.
[24] B. G. Slocum (2010) “The Importance of Being Ambiguous: Substantive Canons, Stare Decisis, and
the Central Role of Ambiguity Determinations in the Administrative State” Maryland Law Review,
Vol. 69, No. 4.
[25] J. Corcoran (2015) “Meaning of Non Sequitur”, State University of New York, Buffalo.
[26] J.A. Blair (1995), “The place of teaching informal fallacies in teaching reasoning skills or critical
thinking,” In Hansen and Pinto 1995, pp. 328–38.
[27] D.N. Walton (2011) “Defeasible reasoning and informal fallacies”, Synthese, Vol. 179, pp. 377-407.
[28] F. Rosen (2006) “The philosophy of error and liberty of thought: J. S. Mill on logical
fallacies,” Informal Logic, Vol. 26, pp.121–47.
[29] M. van Atten and D. van Dalen (2002) “Arguments for the Continuity Principle”, Bulletin of
Symbolic Logic, Vol. 8, pp. 329-347.
[30] R. Alexy (2015) “Legal Certainty and Correctness”, Ratio Juris: An International Journal of
Jurisprudence and Philosophy of Law, Vol. 28, No. 4, pp. 441-451.
[31] Justice (2017) “How to Start Proceedings – The Claim Form”, Ministry of Justice, UK, accessed 22nd
August 2019, from <https://www.justice.gov.uk/courts/procedure-rules/civil/rules/part07>
[32] J. Worrell (2016) “Undecidability of Validity and Satisfiability”, Logic and Proof lecture notes,
Hilary Term, University of Oxford.
[33] J. Hoenicke (2012) “Validity of FOL is Undecidable”, accessed 21st November 2019, from
<https://swt.informatik.uni-freiburg.de/teaching/SS2012/DecisionProcedures/resources/folIsUndecidable.pdf>
[34] Internet Encyclopedia of Philosophy (IEP) “Validity and Soundness”, accessed 21st November 2019,
from <https://www.iep.utm.edu/val-snd/>
[35] T. Peil (2006) “Logic Argument and Constructing Proofs”, Minnesota State University, accessed 21st
November 2019, from <http://web.mnstate.edu/peil/geometry/Logic/6logic.htm>
[36] G. Kemerling (2011) “Proof-Procedures for Propositional Logic”, Philosophy Pages, accessed 21st
November 2019, from <http://www.philosophypages.com/lg/e11c.htm>
[37] R.S. Summers (1999) “Formal Legal Truth and Substantive Truth in Judicial Fact-Finding: Their
Justified Divergence in Some Particular Cases”, Law and Philosophy, Springer, Vol. 18, No. 5, pp.
497-511.
[38] P. Girard “Sound and Cogent Argument”, Logic and Critical Thinking, University of Auckland,
accessed 13th November 2019, from <https://www.futurelearn.com/courses/logical-and-critical-
thinking/0/steps/9152>
[39] J.Y.F. Lau (2011), “An Introduction to Critical Thinking and Creativity: Think more, Think better”,
Hoboken, N.J. Wiley.
[40] M. Walker (2011) “Critical Thinking by Example”, accessed 21st November 2019, from
<http://www.criticalthinkingbyexample.com/Thecompletebook/1.1%20CTBE%20Book%20June%202011.pdf>
[41] N. Swartz (1997) “The Concept of Necessary Conditions and Sufficient Conditions”, Simon Fraser
University, Department of Philosophy, accessed 20th November 2019, from
<https://www.sfu.ca/~swartz/conditions1.htm>