Object Automation Software Solutions Pvt Ltd, in collaboration with SRM Ramapuram, delivered a Workshop for Skill Development on Artificial Intelligence.
Uncertain Knowledge and Reasoning by Mr. Abhishek Sharma, Research Scholar at Object Automation.
Knowledge Representation in Artificial Intelligence by Ramla Sheikh
Knowledge: facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject.
Knowledge = information + rules
Example: doctors, managers.
This document discusses weak slot-and-filler knowledge representation structures. It describes how slots represent attributes and fillers represent values. Semantic networks are provided as an example where nodes represent objects/values and links represent relationships. Property inheritance allows subclasses to inherit attributes from more general superclasses. Frames are also discussed as a type of weak structure where each frame contains slots and associated values describing an entity. The document notes challenges with tangled hierarchies and provides examples of how to resolve conflicts through inferential distance in the property inheritance algorithm.
This document discusses knowledge-based agents in artificial intelligence. It defines knowledge-based agents as agents that maintain an internal state of knowledge, reason over that knowledge, update their knowledge based on observations, and take actions. Knowledge-based agents have two main components: a knowledge base that stores facts about the world, and an inference system that applies logical rules to deduce new information from the knowledge base. The document also describes the architecture of knowledge-based agents and different approaches to designing them.
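The two components named above (a knowledge base of facts plus an inference system that derives new facts) can be sketched in a few lines. This is a minimal illustrative sketch, not the architecture from the slides; all names and rules here are invented:

```python
# Minimal knowledge-based agent sketch: a KB of facts plus a forward-chaining
# inference loop that derives new facts from simple premise->conclusion rules.
class KnowledgeBase:
    def __init__(self):
        self.facts = set()
        self.rules = []                     # list of (premises, conclusion)

    def tell_fact(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def ask(self, query):
        # Forward-chain to a fixpoint, then test the query against the KB.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return query in self.facts

kb = KnowledgeBase()
kb.tell_fact("rain")
kb.tell_rule({"rain"}, "wet_grass")
kb.tell_rule({"wet_grass"}, "slippery")
print(kb.ask("slippery"))  # True: derived via two rule applications
```

The TELL/ASK split mirrors the standard knowledge-based agent interface: the agent tells the KB what it observes and asks it what to conclude.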
Uncertain Knowledge and Reasoning in Artificial Intelligence by Experfy
Learn how to make informed decisions based on probabilities and expert knowledge.
Understand and explore one of the most exciting advances in AI of recent decades.
Many hands-on examples, including Python code.
Check it out: https://www.experfy.com/training/courses/uncertain-knowledge-and-reasoning-in-artificial-intelligence
The document provides an overview of Truth Maintenance Systems (TMS) in artificial intelligence. It discusses key aspects of TMS including:
1. Enforcing logical relations among beliefs by maintaining and updating relations when assumptions change.
2. Generating explanations for conclusions by using cached inferences to avoid re-deriving inferences.
3. Finding solutions to search problems by representing problems as sets of variables, domains, and constraints.
The document also covers justification-based and assumption-based TMS, and how a TMS interacts with a problem solver to add and retract assumptions, detect contradictions, and perform belief revision.
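The interaction described above (assumptions added and retracted, beliefs revised accordingly) can be illustrated with a toy justification-based TMS. This is a highly simplified sketch with invented names, not a faithful JTMS implementation:

```python
# Toy justification-based TMS sketch: a belief holds if it is a current
# assumption or has at least one justification whose supports all hold.
class JTMS:
    def __init__(self):
        self.justifications = {}        # belief -> list of support sets
        self.assumptions = set()

    def assume(self, belief):
        self.assumptions.add(belief)

    def retract(self, belief):
        self.assumptions.discard(belief)

    def justify(self, belief, support):
        self.justifications.setdefault(belief, []).append(frozenset(support))

    def believed(self):
        # Recompute the believed set to a fixpoint; retracting an
        # assumption automatically withdraws everything resting on it.
        held = set(self.assumptions)
        changed = True
        while changed:
            changed = False
            for belief, supports in self.justifications.items():
                if belief not in held and any(s <= held for s in supports):
                    held.add(belief)
                    changed = True
        return held

tms = JTMS()
tms.assume("battery_ok")
tms.justify("engine_starts", {"battery_ok"})
```

Retracting `battery_ok` makes `engine_starts` disappear from the believed set on the next query, which is the belief-revision behaviour the summary describes.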
This document provides an overview of predicate logic and various techniques for representing knowledge and drawing inferences using predicate logic, including:
- Representing facts as logical statements using predicates, variables, and quantifiers.
- Distinguishing between propositional logic and predicate logic and their abilities to represent objects and relationships.
- Techniques like resolution and Skolem functions that allow inferring new statements from existing ones in a logical and systematic way.
- How computable functions and predicates allow representing relationships that have infinitely many instances, like greater-than, in a computable way.
The document discusses these topics at a high-level and provides examples to illustrate key concepts in predicate logic and automated reasoning.
Reasoning is the process of deriving logical conclusions from facts or premises. There are several types of reasoning including deductive, inductive, abductive, analogical, and formal reasoning. Reasoning is a core component of artificial intelligence as AI systems must be able to reason about what they know to solve problems and draw new inferences. Formal logic provides the foundation for building reasoning systems through symbolic representations and inference rules.
This document discusses knowledge-based systems (KBS), including:
- KBS deal with unstructured knowledge and can justify decisions and learn.
- Developing KBS is difficult due to high costs, limited expert availability, and risky investments.
- A common KBS development model involves requirements, design, implementation, testing, and knowledge acquisition in multiple rounds.
- Knowledge acquisition involves eliciting, representing, and updating knowledge from domain experts.
This document provides an overview of Chapter 14 on probabilistic reasoning and Bayesian networks from an artificial intelligence textbook. It introduces Bayesian networks as a way to represent knowledge over uncertain domains using directed graphs. Each node corresponds to a variable and arrows represent conditional dependencies between variables. The document explains how Bayesian networks can encode a joint probability distribution and represent conditional independence relationships. It also discusses techniques for efficiently representing conditional distributions in Bayesian networks, including noisy logical relationships and continuous variables. The chapter covers exact and approximate inference methods for Bayesian networks.
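The claim that a Bayesian network encodes a joint distribution can be checked by hand on a two-node network. The numbers below are made up for illustration:

```python
# Two-node Bayesian network: Rain -> WetGrass, with hand-coded CPTs.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    # Chain rule over the network: P(rain, wet) = P(rain) * P(wet | rain)
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# The four joint entries must sum to 1 if the CPTs are consistent.
total = sum(joint(r, w) for r in (True, False) for w in (True, False))
```

Each node contributes one factor conditioned on its parents; for larger networks the product simply gains one factor per node.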
Tweening and morphing are techniques used in animation to generate intermediate frames between key frames. Tweening uses linear interpolation to create smooth transitions between frames by interpolating point positions. Morphing transitions between full color images by simultaneously warping and dissolving regions of images using tweening techniques applied to mesh grids overlaid on images. Both tweening and morphing require careful setup by artists and are used in hand-drawn animation as well as digital effects in movies.
The document discusses predicate logic and its use in representing knowledge in artificial intelligence. It introduces several key concepts:
- Predicate logic uses predicates, constants, variables, functions and quantifiers to represent objects and their relations in a knowledge base.
- Well-formed formulas in predicate logic can be used to represent facts about the world. Logical inference rules like resolution and unification can be used to derive new facts or answer queries.
- Knowledge bases can be represented as sets of clauses in conjunctive normal form to apply inference rules like resolution and forward/backward chaining. Converting to clausal form involves techniques like Skolemization.
Knowledge representation techniques face several issues including representing important attributes of objects, relationships between attributes, choosing the level of detail in representations, depicting sets of multiple objects, and determining appropriate structures as needed.
Slot and filler structures represent knowledge through attributes (slots) and their associated values (fillers). Weak slot and filler structures provide little domain knowledge. Frames are a type of weak structure where a frame contains slots describing an entity. Semantic networks also represent knowledge with nodes and labeled links, allowing inheritance of properties through generalization hierarchies. Both frames and semantic networks enable quick retrieval of attribute values and easy description of object relations, but semantic networks additionally allow representation of non-binary predicates and partitioned reasoning about quantified statements.
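Property inheritance through a generalization hierarchy, as described above, amounts to walking "isa" links until a slot's filler is found. A minimal sketch with invented example data:

```python
# Semantic-net style property inheritance: slots (properties) are looked up
# on a node first, then inherited up the isa hierarchy.
isa = {"tweety": "canary", "canary": "bird", "bird": "animal"}
properties = {
    "canary": {"color": "yellow"},
    "bird":   {"can_fly": True},
    "animal": {"alive": True},
}

def lookup(node, prop):
    # Walk up the isa chain until the slot is found or the chain ends.
    while node is not None:
        if prop in properties.get(node, {}):
            return properties[node][prop]
        node = isa.get(node)
    return None

print(lookup("tweety", "can_fly"))  # True, inherited from "bird"
```

Overriding works for free: a filler stored on a more specific node shadows the one higher up, which is the usual way exceptions (e.g. penguins) are handled.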
The document discusses various algorithms for visible surface detection, which is the identification and removal of surfaces that are not visible to the user from a given perspective. It describes the Z-buffer algorithm, BSP algorithm, A-buffer algorithm, scan-line algorithm, and painter's/depth-sorting algorithm. For the Z-buffer algorithm, it explains how two buffers (a Z-buffer and a refresh buffer) are used to compare depth values of overlapping pixels and determine which surfaces are visible, and it discusses considerations for different viewing directions. The BSP algorithm sorts polygons from back to front using a binary space partitioning tree. The A-buffer improves on the Z-buffer for transparent surfaces by using linked lists at each pixel. The scan-line
The document discusses inference rules for quantifiers in first-order logic. It describes the rules of universal instantiation and existential instantiation. Universal instantiation allows inferring sentences by substituting terms for variables, while existential instantiation replaces a variable with a new constant symbol. The document also introduces unification, which finds substitutions to make logical expressions identical. Generalized modus ponens is presented as a rule that lifts modus ponens to first-order logic by using unification to substitute variables.
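Unification, as described above, finds a substitution that makes two expressions identical. A textbook-style sketch (variables are strings starting with `?`; the occurs check is omitted for brevity):

```python
# Unification over nested tuples; returns a substitution dict, or False
# if the two expressions cannot be made identical.
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(x, y, subst=None):
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if x == y:
        return subst
    if is_var(x):
        return unify_var(x, y, subst)
    if is_var(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False

def unify_var(var, val, subst):
    # Follow an existing binding if there is one; otherwise bind the variable.
    if var in subst:
        return unify(subst[var], val, subst)
    return {**subst, var: val}

print(unify(("Knows", "John", "?x"), ("Knows", "John", "Jane")))
```

This is exactly the substitution step that generalized modus ponens relies on to lift modus ponens to first-order logic.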
A situated planning agent treats planning and acting as a single process rather than separate processes. It uses conditional planning to construct plans that account for possible contingencies by including sensing actions. The agent resolves any flaws in the conditional plan before executing actions when their conditions are met. When facing uncertainty, the agent must have preferences between outcomes to make decisions using utility theory and represent probabilities using a joint probability distribution over variables in the domain.
The document discusses knowledge representation in cognitive science and artificial intelligence. It describes several ways of representing knowledge, including predicate logic, semantic networks, frames, and conceptual dependency networks. Semantic networks represent knowledge through interconnected nodes and labeled arcs, allowing for inheritance of properties up hierarchical structures. They provide an intuitive way to represent taxonomically structured knowledge but have limitations representing logical statements.
1. Planning involves finding a sequence of actions that achieves a goal starting from an initial state. It uses a set of operators that define the possible actions and their effects.
2. A plan is a sequence of operator instances that transforms the initial state into a goal state. Classical planning assumes fully observable, deterministic environments.
3. Planning problems can be represented using a logical language that describes states, goals, actions and their preconditions and effects. This representation allows planning algorithms to operate over problems.
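The three points above (states, goals, and operators with preconditions and effects) are enough to write a tiny forward state-space planner. This is an illustrative sketch over a made-up two-operator domain, not a serious planning algorithm:

```python
# Minimal forward breadth-first planner over STRIPS-like operators.
# Each operator is (preconditions, add_list, delete_list), all frozensets.
from collections import deque

def plan(initial, goal, operators):
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # goal literals all satisfied
            return steps
        for name, (pre, add, delete) in operators.items():
            if pre <= state:                    # operator is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                 # no plan exists

ops = {
    "open_door": (frozenset({"at_door"}), frozenset({"door_open"}), frozenset()),
    "walk_in":   (frozenset({"door_open"}), frozenset({"inside"}),
                  frozenset({"at_door"})),
}
print(plan({"at_door"}, frozenset({"inside"}), ops))  # ['open_door', 'walk_in']
```

Applying an operator deletes its delete-list and adds its add-list, which is the classical-planning state-transition model the summary describes.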
Neural networks can be used for machine learning tasks like classification. They consist of interconnected nodes that update their weight values during a training process using examples. Neural networks have been applied successfully to tasks like handwritten character recognition, autonomous vehicle control by observing human drivers, and text-to-speech pronunciation generation. Their architecture is inspired by the human brain but neural networks are trained using computational methods while the brain uses biological processes.
Search techniques in AI. Uninformed search: Breadth-First Search and Depth-First Search. Informed search strategies: A*, Best-First Search. Constraint Satisfaction Problem: cryptarithmetic.
Weak Slot and Filler Structures
Representation in a Semantic Net
Frames can also be regarded as an extension of semantic nets; indeed, it is not clear where the distinction between a semantic net and a frame ends. Semantic nets were initially used to represent labelled connections between objects. As tasks became more complex, the representation needed to be more structured, and the more structured the system, the more beneficial it becomes to use frames. A frame is a collection of attributes (slots) and associated values that describe some real-world entity. Frames on their own are not particularly helpful, but frame systems are a powerful way of encoding information to support reasoning. Set theory provides a good basis for understanding frame systems. Each frame represents:
a class (set), or
an instance (an element of a class).
Frame Knowledge Representation
We have already met this type of structure when discussing inheritance in the last lecture. We will now study this in more detail.
- The document discusses logic concepts including propositional calculus, propositional logic, and natural deduction systems.
- Propositional calculus uses logical operators like conjunction, disjunction, negation, implication, and biconditional to combine atomic propositions into compound propositions. Truth tables are used to determine the truth values of propositions.
- Propositional logic represents statements using propositional variables and logical connectives. It has limitations and cannot represent relations. Natural deduction systems and axiomatic systems provide formal rules for deducing conclusions.
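Truth tables, mentioned above as the way to determine truth values of compound propositions, can be generated mechanically by enumerating all assignments. A small sketch (the formula encoding is an invented convention for this example):

```python
# Brute-force truth tables: enumerate every assignment of the variables
# and evaluate the formula, given as a Python function of booleans.
from itertools import product

def truth_table(variables, formula):
    rows = []
    for values in product((False, True), repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(**env)))
    return rows

def is_tautology(variables, formula):
    # A tautology is true in every row of its truth table.
    return all(result for _, result in truth_table(variables, formula))

# Modus ponens as a formula: ((p -> q) and p) -> q, with p -> q as (not p or q).
mp = lambda p, q: not ((not p or q) and p) or q
print(is_tautology(["p", "q"], mp))  # True
```

The same enumeration checks satisfiability or equivalence; it is exponential in the number of variables, which is why propositional inference does not scale to large knowledge bases.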
Artificial Intelligence (AI) | Propositional logic (PL) and first order predic... by Ashish Duggal
This presentation covers Propositional Logic (PL) and First-Order Predicate Logic (FOPL), which are used for knowledge representation in artificial intelligence (AI).
Sub-topics include logical connectives, atomic sentences, complex sentences, and quantifiers.
This PPT is very helpful for Computer Science and Computer Engineering students (B.C.A., M.C.A., B.Tech., M.Tech.).
This document provides an overview of artificial intelligence (AI) including definitions of AI, different approaches to AI (strong/weak, applied, cognitive), goals of AI, the history of AI, and comparisons of human and artificial intelligence. Specifically:
1) AI is defined as the science and engineering of making intelligent machines, and involves building systems that think and act rationally.
2) The main approaches to AI are strong/weak, applied, and cognitive AI. Strong AI aims to build human-level intelligence while weak AI focuses on specific tasks.
3) The goals of AI include replicating human intelligence, solving complex problems, and enhancing human-computer interaction.
4) The history of AI
This document discusses various knowledge representation methods used in expert systems, including rules, semantic networks, frames, and constraints. It provides examples and explanations of each method. Procedural and declarative programming techniques are also covered. Forward and backward chaining for rule-based inference engines are explained through examples. Propositional and predicate logic are discussed as mathematical methods for representing knowledge.
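Backward chaining, one of the two rule-based inference strategies named above, works goal-first: to prove a goal, find a rule concluding it and recursively prove the premises. A toy sketch with an invented rule base (cycles in the rules are not handled):

```python
# Backward chaining over Horn-style rules: head -> list of alternative
# premise lists. A goal is proved if it is a known fact, or if all
# premises of some rule for it can be proved.
rules = {
    "slippery":  [["wet_grass"]],
    "wet_grass": [["rain"], ["sprinkler"]],   # two alternative rules
}
facts = {"sprinkler"}

def prove(goal):
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("slippery"))  # True, via sprinkler -> wet_grass -> slippery
```

Forward chaining would instead start from the facts and fire every applicable rule; backward chaining only explores rules relevant to the query, which is why rule-based expert systems often prefer it for answering specific questions.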
- Weak slot and filler structures for knowledge representation lack rules, while strong structures like Conceptual Dependency (CD) and scripts overcome this.
- CD represents knowledge as a graphical representation of high-level events, using symbols for actions, objects, and modifiers. It facilitates inference and is language-independent.
- Scripts represent commonly occurring experiences through structured sequences of roles, props, scenes, and results to predict related events. Both CD and scripts decompose knowledge into primitives for fewer inference rules.
The document discusses inference in first-order logic. It provides a brief history of reasoning and logic. It then discusses reducing first-order inference to propositional inference using techniques like universal instantiation and existential instantiation. It introduces the concepts of unification and generalized modus ponens to perform inference in first-order logic. Forward chaining and resolution are also discussed as algorithms for performing inference in first-order logic.
The document discusses probability and acting under uncertainty in artificial intelligence. It covers several key concepts:
1) Agents must often act under uncertainty due to partial observability or non-determinism. They rely on belief states representing possible world states and generating contingency plans, but these can become large and unwieldy.
2) Probabilistic reasoning uses probability distributions over possible world states to represent an agent's beliefs. Bayes' rule allows computing conditional probabilities given evidence to update these beliefs.
3) Independence assumptions allow factoring full joint probability distributions over all variables, making computation more tractable when variables are conditionally independent.
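The Bayes' rule update in point 2 is just arithmetic once the conditional probabilities are fixed. A worked sketch with made-up numbers (a rare disease and an imperfect test):

```python
# Bayes' rule: P(disease | positive) = P(positive | disease) P(disease) / P(positive)
p_disease = 0.01                 # prior
p_pos_given_disease = 0.95       # test sensitivity
p_pos_given_healthy = 0.05       # false-positive rate

# Total probability of a positive test, summing over both hypotheses.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # about 0.161
```

Despite a 95% accurate test, the posterior is only about 16%, because the prior is so low; this is the kind of belief update the summary refers to.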
Mathematical Background for Artificial Intelligence by ananth
Mathematical background is essential for understanding and developing AI and Machine Learning applications. In this presentation we give a brief tutorial that encompasses basic probability theory, distributions, mixture models, anomaly detection, graphical representations such as Bayesian Networks, etc.
This document provides an introduction to probabilistic and stochastic models in machine learning. It discusses key concepts like probabilistic modeling, importance of probabilistic ML models, Bayesian inference, basics of probability theory, Bayes' rule, and examples of how to apply Bayes' theorem in machine learning. The document covers conditional probability, prior and posterior probability, and how a Naive Bayes classifier uses Bayes' theorem for classification tasks.
This document discusses handling uncertainty through probabilistic reasoning and machine learning techniques. It covers sources of uncertainty like incomplete data, probabilistic effects, and uncertain outputs from inference. Approaches covered include Bayesian networks, Bayes' theorem, conditional probability, joint probability distributions, and Dempster-Shafer theory. It provides examples of calculating conditional probabilities and using Bayes' theorem. Bayesian networks are defined as directed acyclic graphs representing probabilistic dependencies between variables, and examples show how to represent domains of uncertainty and perform probabilistic reasoning using a Bayesian network.
This document discusses different types of data and statistical concepts. It begins by describing the major types of data: numerical, categorical, and ordinal. Numerical data represents quantitative measurements, categorical data has no inherent mathematical meaning, and ordinal data has categorical categories with a mathematical order. It then discusses statistical measures like the mean, median, mode, standard deviation, variance, percentiles, moments, covariance, correlation, conditional probability, and Bayes' theorem. Examples are provided to help explain each concept.
Dr. Abhay Pratap Pandey introduces statistical inference and its key concepts. Statistical inference allows making conclusions about a population based on a sample. It involves estimation and hypothesis testing. Estimation determines population parameters using sample statistics. Hypothesis testing determines if sample data provides sufficient evidence to reject claims about population parameters. The document defines key terms like population, sample, parameter, statistic, and discusses properties of estimators like unbiasedness and consistency. It also explains hypothesis testing concepts like null and alternative hypotheses, types of errors, and steps to conduct hypothesis tests on a population mean. An example demonstrates hypothesis testing for a population mean using a z-test.
Informed search algorithms use heuristics to more efficiently find goal nodes in large search spaces. Heuristics estimate how close a state is to the goal and help guide the search. The heuristic function must be admissible, meaning its estimated cost must be less than or equal to the actual cost. Bayes' theorem allows calculating conditional probabilities and is fundamental to probabilistic reasoning, which represents knowledge with uncertainty using probabilities. Fuzzy set theory introduces vagueness by assigning membership degrees between 0 and 1 to represent how well something belongs to a set, like how sunny a day is based on cloud cover.
The document discusses discrete and continuous probability distributions, explaining that a discrete distribution applies to variables that can take on countable values while a continuous distribution is used for variables that can take any value within a range. It provides examples of discrete variables like coin flips and continuous variables like weights. The document also outlines the differences between discrete and continuous probability distributions in how they are represented and calculated.
This presentation contains my one day lectures which introduces fuzzy set theory, operations on fuzzy sets, some engineering control applications using Mamdamn model.
STSTISTICS AND PROBABILITY THEORY .pptxVenuKumar65
The document discusses key concepts in probability theory including probability, random experiments, sample spaces, events, random variables, probability distributions, and Bayes' theorem. It covers the binomial, Poisson, and normal distributions and their characteristics and applications. Decision theory is introduced as analyzing choices under uncertainty involving defining problems, identifying outcomes, assessing criteria, and evaluating alternatives to make optimal decisions.
CHAPTER 1 THEORY OF PROBABILITY AND STATISTICS.pptxanshujain54751
Probability theory is a branch of mathematics that uses concepts like sample space, probability distributions, and random variables to assign numerical likelihoods to the chances of outcomes occurring in random phenomena. It involves both theoretical and experimental approaches. Key aspects of probability theory include defining events and random variables, understanding independent and dependent events, and using formulas to calculate probabilities. Probability theory has various applications, like in finance to model markets, in product design to reduce failure probabilities, and in casinos to shape games of chance.
Statistical hypothesis testing is an important tool for scientists to critically evaluate hypotheses using empirical data. It helps keep scientists honest by requiring them to statistically test their ideas rather than accepting them uncritically. One should be skeptical of any paper that claims an alternative hypothesis is supported without providing a statistical test. A key statistical test is the chi-square test, which compares observed data to expected frequencies under the null hypothesis. It calculates a test statistic and compares it to critical values in tables to determine if the null hypothesis can be rejected in favor of the alternative hypothesis. Proper use of statistical testing is part of the scientific method and moral imperative for scientists.
This document provides an overview of hypothesis testing in the context of regression analysis with a single regressor. It discusses stating the population parameter of interest, using sample data to estimate this parameter, determining the standard error of the estimator, and conducting hypothesis tests by comparing the test statistic to critical values. Examples are provided to illustrate testing whether the slope coefficient is statistically different from zero, and interpreting results based on p-values and significance levels.
Chapter8 Introduction to Estimation Hypothesis Testing.pdfmekkimekki5
1. AT&T argues its rates are similar to competitors, with a mean of $17.09. It sampled 100 customers and recalculated bills based on competitors' rates.
2. The null hypothesis is that the mean is equal to AT&T's $17.09. The alternative hypothesis is that the mean is not equal to $17.09.
3. Using a two-tailed test at a 5% significance level, if the calculated p-value is less than 0.05 we would reject the null hypothesis, concluding the mean is likely not equal to $17.09.
A set of practical strategies and techniques for tackling vagueness in data modeling and creating models that are semantically more accurate and interoperable.
Sample Space and Event,Probability,The Axioms of Probability,Bayes TheoremBharath kumar Karanam
The document discusses key concepts in statistics including:
- A sample space contains all possible outcomes of an experiment and events are subsets of the sample space.
- Probability is a branch of mathematics that quantifies the likelihood of events based on the sample space.
- The axioms of probability establish rules like probabilities being between 0 and 1 and the probability of the entire sample space being 1.
- Bayes' theorem calculates conditional probabilities and allows updating probabilities as new evidence becomes available.
This document provides an overview of the Naive Bayes classifier and K-Nearest Neighbors (KNN) algorithms. It begins with use cases for building classifiers like spam detection and sentiment analysis. It then covers the key concepts of Naive Bayes, including probabilities, Bayes' theorem, and the Naive Bayes approach. The document also discusses how the Naive Bayes algorithm works using an example, and reviews the pros and cons which include its ability to perform well with less training data but potential for zero probabilities. Finally, it briefly mentions KNN and provides an exercise to apply both algorithms to a unified problem for comparison.
This document outlines the syllabus for a course titled "Predictive Analytics" taught by K. Mohanasundaram. The syllabus covers topics such as introduction to business analytics, mathematical modelling, data prediction techniques, regression analysis methods like simple linear regression, logistic regression, and forecasting techniques. It recommends textbooks and references for the course and provides an introduction to concepts like uncertainty modelling using probability distributions and random variables.
Resolving e commerce challenges with probabilistic programmingLogicAI
This summer, during the third edition of Data Science Summit in Warsaw, Magdalena Wójcik (Senior Data Scientist at LogicAI) presented how we used Bayesian models in one of our projects.
3. Agenda
Uncertain Knowledge and Reasoning
Causes of uncertainty
Probabilistic Reasoning
Bayes' Rule and use cases
Axioms of probability
Use case of Bayes' Theorem in AI
Project – Signature Forgery Detection using a CNN Siamese network
4. Knowledge Representation and Reasoning in AI
• Humans are best at understanding, reasoning, and interpreting knowledge. A human knows things (knowledge) and performs actions in the real world based on that knowledge. How machines do the same comes under knowledge representation and reasoning.
• Knowledge representation and reasoning (KR, or KRR) is the part of AI concerned with how AI agents think and how that thinking contributes to their intelligent behavior.
• It is responsible for representing information about the real world so that a computer can understand it and use this knowledge to solve AI problems.
• It also describes how we can represent knowledge in AI.
• Knowledge representation is not just storing data in a database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently, like a human.
5. knowledge needs to be represented in AI systems
• Object: All the facts about objects. E.g., guitars contain strings; trumpets are brass instruments.
• Events: Events are the actions that occur in our world.
• Performance: It describes behavior that involves knowledge about how to do things.
• Meta-knowledge: It is knowledge about what we know.
• Facts: Facts are the truths about the real world and what we represent.
Knowledge-Base: The central component of a knowledge-based agent is the knowledge base, represented as KB. A knowledge base is a set of sentences (here "sentence" is a technical term, not identical to a sentence in English).
Knowledge: Knowledge is awareness or familiarity gained by experiences of facts, data, and situations.
Following are the types of knowledge in artificial intelligence
6. Issues in knowledge representation
• Relationships among attributes: The attributes used to describe objects are themselves entities. However, the relationships among an object's attributes do not depend on the specific knowledge being encoded.
• Choosing the granularity of representation: While deciding the granularity, it is necessary to know:
• i. What are the primitives, and at what level should the knowledge be represented? ii. Should the number of low-level primitives or high-level facts be small or large? High-level facts may be insufficient to draw conclusions, while low-level primitives may require a lot of storage.
• Representing sets of objects: Some properties of objects hold for a set as a whole but not for its individual members. Example: consider the assertions "There are more sheep than people in Australia" and "English speakers can be found all over the world." These facts are best described by attaching assertions to the sets representing people, sheep, and English speakers.
• Finding the right structure as needed: To describe a particular situation, it is important to find and access the right structure. This can be done by selecting an initial structure and then revising the choice as needed.
7. Uncertainty
• If knowledge representation is certain, we are sure about the predicates. With such knowledge we might write A→B, which means if A is true then B is true.
• But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.
• So to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Causes of uncertainty
• Information obtained from unreliable sources.
• Experimental errors.
• Equipment faults.
• Temperature variation.
• Climate change.
8. Uncertain Knowledge and Reasoning
• Uncertain knowledge: When the available knowledge has multiple causes leading to multiple effects, or causality in the domain is incompletely known.
• Uncertain data: Data that is missing, unreliable, inconsistent, or noisy.
• Uncertain knowledge representation: A representation that provides only a restricted model of the real system, or has limited expressiveness.
• Inference: With incomplete knowledge or default reasoning methods, the conclusions drawn might not be completely accurate.
9. Uncertainty can be dealt with using
• Probability theory
• Truth Maintenance systems
• Fuzzy logic.
10. Probabilistic reasoning
• It is a way of knowledge representation where we apply the concept of probability to
indicate the uncertainty in knowledge.
• We combine probability theory with logic to handle the uncertainty.
• We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that results from laziness and ignorance.
• In the real world, there are many scenarios where the certainty of something is not confirmed, such as "It will rain today," "how someone will behave in a given situation," or "the outcome of a match between two teams or two players."
• These are probable sentences for which we can assume that it will happen but not sure
about it, so here we use probabilistic reasoning.
11. Need of probabilistic reasoning in AI
• When there are unpredictable outcomes.
• When the specifications or possibilities of predicates become too large to handle.
• When an unknown error occurs during an experiment.
• In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
• Bayes' rule
• Bayesian Statistics
12. Probability
• Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1, which represent the two ideal certainties.
• 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
• P(A) = 0 indicates that event A will certainly not occur.
• P(A) = 1 indicates that event A is certain to occur.
13. Probability
• P(¬A) = probability of event A not happening.
• P(¬A) + P(A) = 1.
• Event: Each possible outcome of a variable is called an event.
• Sample space: The collection of all possible events is called sample space.
• Random variables: Random variables are used to represent the events and objects in the
real world.
• Prior probability: The prior probability of an event is probability computed before
observing new information.
• Posterior probability: The probability calculated after all evidence or information has been taken into account. It is a combination of the prior probability and the new information.
14. Axioms of Probability
• There are three axioms of probability that make the foundation of probability theory-
• Axiom 1: Probability of Event
• The first is that the probability of an event is always between 0 and 1: 1 indicates that some outcome of the event is certain to occur, and 0 indicates that no outcome of the event is possible.
• Axiom 2: Probability of Sample Space
• For sample space, the probability of the entire sample space is 1.
• Axiom 3: Mutually Exclusive Events
• And the third is that the probability of the union of two mutually exclusive (disjoint) events is the sum of their individual probabilities.
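The three axioms can be checked mechanically on a small sample space. The sketch below uses a fair six-sided die as a toy distribution; the die and the events chosen are illustrative assumptions, not from the slides:

```python
from fractions import Fraction

# A fair six-sided die as a toy distribution (illustrative assumption).
sample_space = {face: Fraction(1, 6) for face in range(1, 7)}

def P(event):
    """Probability of an event, modeled as a set of outcomes."""
    return sum(sample_space[o] for o in event)

even = {2, 4, 6}
odd_pair = {1, 3}              # disjoint from `even`

# Axiom 1: every event probability lies between 0 and 1.
assert 0 <= P(even) <= 1

# Axiom 2: the probability of the entire sample space is 1.
assert P(set(sample_space)) == 1

# Axiom 3: for mutually exclusive events, probabilities add.
assert P(even | odd_pair) == P(even) + P(odd_pair)

print(P(even))   # 1/2
```

Using exact fractions rather than floats makes the axiom checks exact instead of approximate.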
15. Probability of Event
• The first axiom of probability is that the probability of any event is between 0 and 1.
• As we know, the formula of probability divides the number of outcomes in the event by the total number of outcomes in the sample space.
• And the event is a subset of the sample space, so the event cannot have more outcomes than the sample space.
• Clearly, this value lies between 0 and 1, since the denominator is always greater than or equal to the numerator.
16. Probability of Sample Space
• The second axiom is that the probability for the entire sample space equals 1.
17. Mutually Exclusive Event
• When there is nothing common between events A and B, they are called mutually exclusive events.
• Mutually exclusive events cannot occur together; in other words, they have no common outcomes, i.e., their intersection is empty. We can represent such events as follows:
• P(A ⋀ B) = 0
18. Conditional probability
• Conditional probability is the probability of an event occurring given that another event has already happened.
• Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the condition B". It can be written as:
• P(A|B) = P(A ⋀ B) / P(B)
• Where P(A ⋀ B) = joint probability of A and B, and P(B) = marginal probability of B.
• If the probability of A is given and we need to find the probability of B, then it will be given as:
• P(B|A) = P(A ⋀ B) / P(A)
20. Example
• In a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of the students who like English also like Mathematics?
• Let A be the event that a student likes Mathematics, and B the event that a student likes English.
• P(A|B) = P(A ⋀ B) / P(B) = 0.40 / 0.70 ≈ 0.57
• Hence, 57% of the students who like English also like Mathematics.
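The same calculation can be done directly in Python; the variable names are illustrative:

```python
# Worked example: P(B) = 0.70 (likes English), P(A ⋀ B) = 0.40 (likes both),
# as stated in the problem. Conditional probability: P(A|B) = P(A ⋀ B) / P(B).
p_B = 0.70
p_A_and_B = 0.40

p_A_given_B = p_A_and_B / p_B
print(round(p_A_given_B, 2))   # 0.57, i.e. about 57%
```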
22. Bayes' theorem
• Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning, which
determines the probability of an event with uncertain knowledge.
• In probability theory, it relates the conditional probability and marginal probabilities of
two random events.
• Bayes' theorem was named after the British mathematician Thomas Bayes. The Bayesian
inference is an application of Bayes' theorem, which is fundamental to Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
• Bayes' theorem allows updating the probability prediction of an event by observing new
information of the real world.
23. Example
• If the risk of cancer is related to a person's age, then by using Bayes' theorem we can determine the probability of cancer more accurately with the help of age.
• Bayes' theorem can be derived using the product rule and the conditional probability of event A with known event B:
• From the product rule we can write: P(A ⋀ B) = P(A|B) P(B)
• Similarly, for the probability of event B with known event A: P(A ⋀ B) = P(B|A) P(A)
• Equating the right-hand sides of both equations, we get:
• P(A|B) = P(B|A) P(A) / P(B) ...(a)
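The derivation can be sanity-checked numerically on a toy joint distribution over two binary events. The joint probabilities below are made up for illustration; exact fractions avoid floating-point noise:

```python
from fractions import Fraction

# Hypothetical joint distribution over two binary events A and B.
joint = {
    (True,  True):  Fraction(3, 10),   # P(A ⋀ B)
    (True,  False): Fraction(2, 10),
    (False, True):  Fraction(1, 10),
    (False, False): Fraction(4, 10),
}

p_A = sum(p for (a, _), p in joint.items() if a)   # marginal P(A)
p_B = sum(p for (_, b), p in joint.items() if b)   # marginal P(B)
p_A_and_B = joint[(True, True)]

p_A_given_B = p_A_and_B / p_B
p_B_given_A = p_A_and_B / p_A

# Both forms of the product rule recover the same joint probability:
assert p_A_given_B * p_B == p_A_and_B
assert p_B_given_A * p_A == p_A_and_B

# Equating them yields Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
assert p_A_given_B == p_B_given_A * p_A / p_B
print(p_A_given_B)   # 3/4
```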
24. Example
• The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most modern AI systems for probabilistic inference.
• It shows the simple relationship between joint and conditional probabilities. Here,
• P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of hypothesis A given that evidence B has been observed.
• P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the probability of the evidence.
• P(A) is called the prior probability: the probability of the hypothesis before considering the evidence.
• P(B) is called the marginal probability: the pure probability of the evidence.
• In equation (a), in general we can write P(B) = Σi P(Ai) P(B|Ai); hence Bayes' rule can be written as:
• P(Ai|B) = P(B|Ai) P(Ai) / Σk P(Ak) P(B|Ak)
• Where A1, A2, A3, ..., An is a set of mutually exclusive and exhaustive events.
25. Applying Bayes' rule
• Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A).
• This is very useful in cases where we have good estimates of three of these terms and want to determine the fourth one.
• Suppose we want to perceive the effect of some unknown cause and to compute that cause; then Bayes' rule becomes:
• P(cause|effect) = P(effect|cause) P(cause) / P(effect)
26. Example-1
• A doctor is aware that disease meningitis causes a patient to have a stiff neck, and it occurs 80% of the
time. He is also aware of some more facts, which are given as follows:
• The Known probability that a patient has meningitis disease is 1/30,000.
• The Known probability that a patient has a stiff neck is 2%.
• Let a be the proposition that the patient has a stiff neck and b the proposition that the patient has meningitis, so we can calculate the following:
• P(a|b) = 0.8
• P(b) = 1/30000
• P(a) = 0.02
• P(b|a) = P(a|b) P(b) / P(a) = (0.8 × 1/30000) / 0.02 = 1/750 ≈ 0.0013
• Hence, we can assume that 1 in 750 patients with a stiff neck has meningitis.
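The meningitis numbers above can be plugged into Bayes' rule directly; this short sketch mirrors the hand calculation:

```python
# Bayes' rule applied to the meningitis example on this slide.
p_stiff_given_men = 0.8      # P(a|b): stiff neck given meningitis
p_men = 1 / 30000            # P(b): prior probability of meningitis
p_stiff = 0.02               # P(a): probability of a stiff neck

# P(b|a) = P(a|b) * P(b) / P(a)
p_men_given_stiff = p_stiff_given_men * p_men / p_stiff
print(round(1 / p_men_given_stiff))   # 750, i.e. about 1 patient in 750
```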
27. Question
From a standard deck of playing cards, a single card is drawn. The probability that the card is a king is 4/52. Calculate the posterior probability P(King|Face), i.e., the probability that a drawn face card is a king.
• P(King): probability that the card is a king = 4/52 = 1/13
• P(Face): probability that the card is a face card = 12/52 = 3/13
• P(Face|King): probability that the card is a face card given that it is a king = 1
• Putting all the values into Bayes' rule, we get:
• P(King|Face) = P(Face|King) × P(King) / P(Face) = (1 × 1/13) / (3/13) = 1/3
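The card question can be verified with exact fractions:

```python
from fractions import Fraction

# The card question above, computed with exact fractions.
p_king = Fraction(4, 52)            # P(King) = 1/13
p_face = Fraction(12, 52)           # P(Face) = 3/13 (J, Q, K in four suits)
p_face_given_king = Fraction(1)     # every king is a face card

# Bayes' rule: P(King|Face) = P(Face|King) * P(King) / P(Face)
p_king_given_face = p_face_given_king * p_king / p_face
print(p_king_given_face)            # 1/3
```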
28. Application of Bayes' theorem in Artificial intelligence
• Bayes' theorem is also widely used in the field of machine learning and AI,
• including its use in a probability framework for fitting a model to a training dataset, referred to as maximum a posteriori (MAP) estimation, and in developing models for classification predictive modeling problems such as the Bayes optimal classifier and Naive Bayes.
• It is used to calculate a robot's next step, given the steps already executed.
• Bayes' theorem is helpful in weather forecasting.
• It can solve the Monty Hall problem.
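As one concrete illustration of the Naive Bayes application mentioned above, here is a minimal from-scratch text classifier. The training sentences, labels, and function names are all made up for this sketch; Laplace smoothing (adding 1 to every count) keeps unseen words from zeroing out a class:

```python
from collections import Counter
import math

# Toy training data (hypothetical): short messages labeled spam or ham.
train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Pick the class with the highest log posterior: prior * likelihoods."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            # Laplace-smoothed likelihood P(word | class)
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("free money"))     # spam
print(predict("noon meeting"))   # ham
```

Working in log space avoids numeric underflow when many word likelihoods are multiplied together.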