Quantifying Uncertainty
Team Members :
1-Ahmed Talaat
2-Eman Mostafa
3-Haron Shihab
4-Hesham Hamdy
Cairo University
Faculty of Engineering
4th year Computer Department
1
 Acting under uncertainty
 Basic probability notation
 Inference using the full joint distribution
 Independence
 Bayes' Rule
 The Wumpus world
2
Acting under uncertainty
3
 Why use uncertainty ?
 In logical agents, we have to consider all
possible explanations of the observations
 This leads to very large and complex belief-state
representations
 Uncertainty provides a solution
4
 Uncertainty arises because of laziness and
ignorance.
 Our main tool for dealing with degrees of
belief is probability theory.
 The probability statements are made with
respect to a knowledge state, not with
respect to the real world.
5
 For the agent to make good choices, it
must have preferences between the
different possible outcomes
 Such choices are based on the agent's
utility
6
 Probability theory and utility theory
combined are called decision theory
 Decision Theory = Probability + Utility
7
Basic probability notation
8
 Here are some probability notations that
we’re going to use
 Sample space : the set of all possible worlds
 The Greek letter Ω (Omega) represents the
sample space
 The lowercase ω (omega) represents elements
of the sample space
9
 Unconditional probabilities (priors)
 When rolling two dice, assume that each die is
fair and the rolls don’t interfere with each
other
 The set of possible worlds is
 (1,1) , (1,2) , (1,3) , … , (2,1) , (2,2) , … , (6,5) , (6,6)
 P(ω) = 1/36 for each possible world ω
10
 Conditional probability
 The probability of a certain event happening,
given that another event (called the
evidence) has occurred
 For example, the first die may already be
showing 5 and we are waiting for the other die to
settle down
 In that case, we are interested in the probability
of rolling doubles given that the first die is 5
 P(Doubles | Die1 = 5)
11
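The two dice examples above can be checked by enumerating the sample space directly; here is a minimal Python sketch (the event definitions are just illustrations of the slide's example):

```python
from itertools import product

# Sample space for two fair dice: 36 equally likely worlds.
omega = list(product(range(1, 7), repeat=2))
p = {w: 1 / 36 for w in omega}  # P(w) = 1/36 for each world w

# Prior probability of rolling doubles.
p_doubles = sum(p[w] for w in omega if w[0] == w[1])

# Conditional probability P(Doubles | Die1 = 5): restrict to the worlds
# consistent with the evidence, then renormalize.
evidence = [w for w in omega if w[0] == 5]
p_doubles_given_5 = (sum(p[w] for w in evidence if w[0] == w[1])
                     / sum(p[w] for w in evidence))

print(p_doubles)          # 1/6
print(p_doubles_given_5)  # also 1/6: this evidence happens not to change it
```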
 Semantics of a proposition
 The probability model is determined by the joint
distribution for all the random variables: the full
joint probability distribution
 For the Cavity, Toothache, Weather domain, the
notation is:
 P(Cavity, Toothache, Weather)
 This can be represented as a 2×2×4 table
 Given the definition of the probability of a
proposition as a sum over possible worlds, the
full joint distribution allows calculating the
probability of any proposition over its variables
by summing entries in the FJD
12
 The basic axioms of probability
1. 0 ≤ P(ω) ≤ 1 for every ω
2. Σω∈Ω P(ω) = 1
From these we can prove that P(⌐a) = 1 − P(a):
P(⌐a) = Σω∈⌐a P(ω)
= Σω∈⌐a P(ω) + Σω∈a P(ω) − Σω∈a P(ω)
= Σω∈Ω P(ω) − Σω∈a P(ω)
= 1 − P(a)
13
 Inclusion-Exclusion principle
P(a˅b) = P(a) + P(b) − P(a˄b)
 Kolmogorov’s axioms
Andrei Kolmogorov showed how to build up the rest of
probability theory from these basic axioms
For example, it follows from his rules that a logical agent
cannot simultaneously believe a, b, and ⌐(a˄b), because
there is no possible world in which all three are true
14
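The inclusion-exclusion principle can be verified by brute-force enumeration over a small sample space; a sketch using the two-dice world and two illustrative events of my own choosing:

```python
from itertools import product

# Two fair dice: 36 equally likely worlds.
omega = list(product(range(1, 7), repeat=2))

def P(event):
    """Probability of an event = fraction of worlds where it holds."""
    return sum(1 for w in omega if event(w)) / len(omega)

a = lambda w: w[0] == 6          # illustrative event: first die shows 6
b = lambda w: w[0] + w[1] >= 10  # illustrative event: total is at least 10

lhs = P(lambda w: a(w) or b(w))
rhs = P(a) + P(b) - P(lambda w: a(w) and b(w))
print(abs(lhs - rhs) < 1e-12)  # True: P(a˅b) = P(a) + P(b) - P(a˄b)
```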
 De Finetti's example: Agent 1 announces degrees of
belief, and Agent 2 bets against any set of beliefs that
violates the axioms
15
• Agent 1's beliefs violate the inclusion-exclusion principle:
P(a) + P(b) − P(a˄b) = 0.4 + 0.3 − 0 = 0.7 ≠ 0.8 = P(a˅b)
Agent 1's probabilities:
P(a) = 0.4 P(a˅b) = 0.8
P(b) = 0.3 P(a˄b) = 0
16
Agent 2 is the winner
Agent 1          Agent 2            Payoffs to Agent 1
belief           bet      stakes    (a,b)  (a,⌐b)  (⌐a,b)  (⌐a,⌐b)
a     0.4        a        4 to 6     -6     -6       4       4
b     0.3        b        3 to 7     -7      3      -7       3
a˅b   0.8        ⌐(a˅b)   2 to 8      2      2       2      -8
Total                                -11     -1      -1      -1
17
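The payoff table can be reproduced in a few lines. Each bet is fair according to Agent 1's stated beliefs (e.g. 0.4 × (−6) + 0.6 × 4 = 0 for the bet on a), yet Agent 1 loses in every outcome; the tuple encoding of the bets is just one way to write it down:

```python
# Each bet: (proposition, payoff to Agent 1 if it holds, payoff if it doesn't).
bets = [
    (lambda a, b: a,      -6,  4),  # Agent 1 stated P(a)   = 0.4
    (lambda a, b: b,      -7,  3),  # Agent 1 stated P(b)   = 0.3
    (lambda a, b: a or b,  2, -8),  # Agent 1 stated P(a˅b) = 0.8
]

totals = []
for a in (True, False):
    for b in (True, False):
        totals.append(sum(win if prop(a, b) else lose
                          for prop, win, lose in bets))

print(totals)  # [-11, -1, -1, -1]: Agent 1 loses whatever happens
```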
Inference using full joint
distribution
18
toothache ⌐toothache
catch ⌐catch catch ⌐catch
cavity 0.108 0.012 0.072 0.008
⌐cavity 0.016 0.064 0.144 0.576
A full joint distribution for the Toothache , Cavity, Catch world
The probabilities in the joint distribution sum to 1
P(cavity ˅ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
19
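Queries like those above can be sketched by storing the full joint distribution as a dictionary keyed by (cavity, toothache, catch) worlds:

```python
# Full joint distribution for the Toothache, Cavity, Catch world.
fjd = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def P(event):
    """Probability of a proposition = sum over the worlds where it holds."""
    return sum(pr for (cav, tooth, catch), pr in fjd.items()
               if event(cav, tooth, catch))

print(round(sum(fjd.values()), 10))          # 1.0
print(round(P(lambda c, t, k: c), 10))       # P(cavity) = 0.2
print(round(P(lambda c, t, k: c or t), 10))  # P(cavity ˅ toothache) = 0.28
```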
toothache ⌐toothache
catch ⌐catch catch ⌐catch
cavity 0.108 0.012 0.072 0.008
⌐cavity 0.016 0.064 0.144 0.576
A full joint distribution for the Toothache , Cavity, Catch world
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008
This is called marginalization or summing out: we sum
up the probabilities for each possible value of the other
variables.
20
 Marginalization rule for any sets of variables
Y and Z :
P(Y) = Σz∈Z P(Y, z)
where Σz∈Z means to sum over all the possible
combinations of values of the set of variables Z;
we abbreviate this as Σz , leaving Z implicit
P(Cavity) = Σz∈{Catch,Toothache} P(Cavity, z)
21
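Summing out can be sketched as accumulating table entries over every value of the other variables (reusing the joint table from the slides):

```python
from collections import defaultdict

# Full joint distribution over (Cavity, Toothache, Catch).
fjd = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# P(Cavity) = sum over z in {Toothache, Catch} of P(Cavity, z)
marginal = defaultdict(float)
for (cavity, toothache, catch), pr in fjd.items():
    marginal[cavity] += pr  # Toothache and Catch are summed out

print({k: round(v, 10) for k, v in marginal.items()})  # {True: 0.2, False: 0.8}
```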
 A variant of this rule involves conditional
probabilities instead of joint distributions,
using the product rule:
P(Y) = Σz P(Y | z) P(z)
22
toothache ⌐toothache
catch ⌐catch catch ⌐catch
cavity 0.108 0.012 0.072 0.008
⌐cavity 0.016 0.064 0.144 0.576
A full joint distribution for the Toothache , Cavity, Catch world
P(cavity | toothache) = P(cavity ˄ toothache) / P(toothache)
= (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064)
= 0.6
23
toothache ⌐toothache
catch ⌐catch catch ⌐catch
cavity 0.108 0.012 0.072 0.008
⌐cavity 0.016 0.064 0.144 0.576
A full joint distribution for the Toothache , Cavity, Catch world
P(⌐cavity | toothache) = P(⌐cavity ˄ toothache) / P(toothache)
= (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064)
= 0.4
24
 P(cavity | toothache) = P(cavity ˄ toothache) / P(toothache)
• P(⌐cavity | toothache) = P(⌐cavity ˄ toothache) / P(toothache)
• 1/P(toothache) is a constant α that normalizes the
distribution P(Cavity | toothache) over the two values
cavity and ⌐cavity
25
toothache ⌐toothache
catch ⌐catch catch ⌐catch
cavity 0.108 0.012 0.072 0.008
⌐cavity 0.016 0.064 0.144 0.576
A full joint distribution for the Toothache , Cavity, Catch world
P(Cavity | toothache) = α P(Cavity, toothache)
= α [P(Cavity, toothache, catch) + P(Cavity, toothache, ⌐catch)]
= α [<0.108, 0.016> + <0.012, 0.064>]
= α <0.12, 0.08> = <0.6, 0.4>
26
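The normalization trick can be sketched directly: sum out Catch for each value of Cavity over the worlds consistent with the evidence toothache, then rescale so the two numbers sum to 1:

```python
# Full joint distribution over (Cavity, Toothache, Catch).
fjd = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Unnormalized values: P(Cavity, toothache), with Catch summed out.
unnorm = {True: 0.0, False: 0.0}
for (cavity, toothache, catch), pr in fjd.items():
    if toothache:             # keep only worlds consistent with the evidence
        unnorm[cavity] += pr  # sum out Catch

alpha = 1 / sum(unnorm.values())  # alpha = 1 / P(toothache)
posterior = {cavity: alpha * v for cavity, v in unnorm.items()}

print({k: round(v, 3) for k, v in posterior.items()})  # {True: 0.6, False: 0.4}
```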
 We begin with the case in which the
query involves a single variable
X (Cavity). Let E be the list of evidence
variables (just Toothache), let e be the
list of observed values for them, and let Y
be the list of unobserved variables (just
Catch). The query is P(X | e) and can be
evaluated as
P(X | e) = α P(X, e) = α Σy P(X, e, y)
27
 Full joint distribution requires an input table
of size O(2^n) and takes O(2^n) time to
process the table. In a realistic problem we
could easily have n>100, making O(2^n)
impractical.
28
Independence
29
 Add a new variable, Weather, to the
Toothache, Catch, Cavity problem.
 Now P(Toothache, Catch, Cavity, Weather)
has 2 × 2 × 2 × 4 = 32 entries if Weather has 4
values
 Is P(Toothache, Catch, Cavity, Weather = cloudy)
related to P(Toothache, Catch, Cavity) ?
30
 Weather has nothing to do with one’s dental
problems.
 Then:
 P(Toothache, Catch, Cavity, Weather)
= P(Weather) P(Toothache, Catch, Cavity)
 The same holds for coin flips: each flip is
independent of the others.
 P(a|b) = P(a) and P(b|a) = P(b) for
independent events a and b
31
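Independence can be checked numerically: with the factored model, conditioning on Weather leaves the dental distribution unchanged. A sketch, where the weather prior values are made up for illustration:

```python
# Dental joint (Cavity, Toothache, Catch) and an illustrative weather prior.
dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
weather = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}

# Independence: the 32-entry joint is the product of the two factors.
joint = {(d, w): pd * pw
         for d, pd in dental.items() for w, pw in weather.items()}

# P(dental-world | Weather = cloudy) equals the unconditional P(dental-world).
p_cloudy = sum(pr for (d, w), pr in joint.items() if w == "cloudy")
world = (True, True, True)
cond = joint[(world, "cloudy")] / p_cloudy
print(abs(cond - dental[world]) < 1e-12)  # True
```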
Figure: the 32-entry joint distribution over Cavity, Toothache,
Catch, and Weather decomposes into an 8-entry distribution over
Cavity, Toothache, Catch and a 4-entry distribution over Weather;
similarly, the joint distribution over n coin flips Coin1, …, Coinn
decomposes into n single-coin distributions.
32
Bayes’ Rule
33
By the product rule:
P(a ˄ b) = P(a | b) P(b)
P(a ˄ b) = P(b | a) P(a)
Equating the two right-hand sides gives Bayes’ rule:
P(b | a) = P(a | b) P(b) / P(a)
(posterior = likelihood × prior / evidence)
34
 We use this when we observe the effect of some
unknown cause and would like to
determine the cause.
 Bayes’ rule becomes:
P(cause | effect) = P(effect | cause) P(cause) / P(effect)
• P(effect|cause) describes the Causal
relationship
• P(cause|effect) describes the Diagnostic
relationship
35
 In medical diagnosis:
 The Doctor knows P(Symptoms|disease)
 Wants to derive a diagnosis P(disease|symptoms)
P(disease | symptom) = P(symptom | disease) P(disease) / P(symptom)
36
 Patient has Stiff Neck (Symptom)
 Doctor tries to relate it to meningitis
 P(stiff neck|meningitis) = 0.7
 P(meningitis) = 1/50000
 P(stiff neck) = 0.01
37
 With the given data, we expect almost 1 in
700 patients with neck stiffness alone to have
meningitis:
P(meningitis | stiff neck)
= P(stiff neck | meningitis) P(meningitis) / P(stiff neck)
= (0.7 × 1/50000) / 0.01
= 0.0014
38
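The arithmetic above is a one-liner:

```python
# Bayes' rule for the meningitis example.
p_stiff_given_men = 0.7        # P(stiff neck | meningitis)
p_men = 1 / 50000              # P(meningitis)
p_stiff = 0.01                 # P(stiff neck)

p_men_given_stiff = p_stiff_given_men * p_men / p_stiff
print(p_men_given_stiff)  # ≈ 0.0014, about 1 in 700
```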
 We can avoid assessing the evidence term by using
a normalization factor α that makes the entries of
P(Y | x) sum to 1:
P(Y | x) = P(x | Y) P(Y) / P(x) = α P(x | Y) P(Y)
39
 What happens when the dentist’s probe
catches the aching tooth of a patient?
P(cavity | toothache ˄ catch)
= α P(toothache ˄ catch | cavity) P(cavity)
 Might be feasible for just 2 pieces of evidence, but
with n evidence variables there are 2^n possible
combinations of evidence
40
 When the probe catches in the tooth, the patient
probably has a cavity, and the cavity causes
the toothache.
 Then Catch and Toothache are not absolutely
independent, but they are independent given
the presence or the absence of the cavity.
41
P(toothache ˄ catch | Cavity)
= P(toothache | Cavity) P(catch | Cavity)
P(Cavity | toothache ˄ catch)
= α P(toothache | Cavity) P(catch | Cavity) P(Cavity)
42
If effect1 , … , effectn are conditionally
independent given cause:
 P(effect1 , effect2 , … , effectn | cause) =
P(effect1 | cause) P(effect2 | cause) …
P(effectn | cause)
 P(effect1 | effect2 , cause) = P(effect1 | cause)
 P(effect2 | effect1 , cause) = P(effect2 | cause)
P(Cause, Effect1 , … , EffectN)
= P(Cause) Πi P(Effecti | Cause)
43
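Under the conditional-independence assumption, the full joint never needs to be built. The conditionals below are read off the dental table used earlier (e.g. P(toothache | cavity) = 0.12/0.2 = 0.6), and the naive-Bayes combination reproduces the exact posterior because that table was constructed to satisfy the assumption:

```python
# Model parameters, derivable from the full joint table used earlier.
p_cavity = 0.2
p_tooth_given = {True: 0.6, False: 0.1}  # P(toothache | Cavity)
p_catch_given = {True: 0.9, False: 0.2}  # P(catch | Cavity)

# P(Cavity | toothache, catch) ∝ P(Cavity) P(toothache|Cavity) P(catch|Cavity)
unnorm = {c: (p_cavity if c else 1 - p_cavity)
             * p_tooth_given[c] * p_catch_given[c]
          for c in (True, False)}
alpha = 1 / sum(unnorm.values())
posterior = {c: alpha * v for c, v in unnorm.items()}

print(round(posterior[True], 3))  # 0.871
```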
The Wumpus World
Revisited
44
45
 The Wumpus world is uncertain because the
agent’s sensors give only partial information
about the world
46
 How would the logical agent tackle the
wumpus world ?
47
 The logical agent gets stuck after finding a
breeze in both [1,2] and [2,1]
 There is no provably safe square left to
explore, so it has to choose randomly!
48
 A probabilistic agent chooses the square with
the highest likelihood of being safe
 We will see soon that a probabilistic agent
can do much better than a logical agent
49
 Let’s define some random variables:
 Pi,j : true iff square [i,j] contains a pit
 Bi,j : true iff square [i,j] is breezy
 Facts:
 b = ⌐b1,1 ˄ b1,2 ˄ b2,1
 known = ⌐p1,1 ˄ ⌐p1,2 ˄ ⌐p2,1
 Our goal:
 Answer queries like P(P1,3 | known, b)
 That is, how likely is it that [1,3] contains a pit
given the observations so far
50
 The next step is to specify the full joint
probability distribution
 That is P(P1,1 , … , P4,4 , B1,1 , B1,2 , B2,1)
 Applying the product rule, we have
P(P1,1 , … , P4,4 , B1,1 , B1,2 , B2,1) =
P(B1,1 , B1,2 , B2,1 | P1,1 , … , P4,4)P(P1,1 , … , P4,4)
 P(B1,1 , B1,2 , B2,1 | P1,1 , … , P4,4) = 1 if breezes
are adjacent to pits and 0 otherwise.
51
 Assume that each square contains a pit with
probability p = 0.2, independently of the other squares
 So P(P1,1 , … , P4,4) = Πi,j P(Pi,j)
 For a configuration with n pits:
P(P1,1 , … , P4,4) = 0.2^n × 0.8^(16−n)
52
 P(P1,3 | known, b) = α Σunknown P(P1,3 , known, unknown, b)
 We seem to have reached our goal, but
there is a big problem
 There are 12 unknown squares, hence the
summation contains 2^12 = 4096 terms
 The summation grows exponentially with the
number of squares
53
 Intuition: other squares are irrelevant
 [4,4] does NOT affect whether [1,3] has a pit
or not
 This intuition helps in reducing the number of
summation terms
 Remember that:
the frontier is the set of pit variables, other than
the query variable, that are adjacent to visited
squares
54
P(P1,3 | known, b)
= α Σunknown P(P1,3 , known, unknown, b)
= α Σunknown P(b | P1,3 , known, unknown) P(P1,3 , known, unknown)
(by the product rule)
= α Σfrontier Σother P(b | P1,3 , known, frontier, other)
P(P1,3 , known, frontier, other)
 Since b is independent of other given known, P1,3 , and
frontier:
= α Σfrontier Σother P(b | P1,3 , known, frontier)
P(P1,3 , known, frontier, other)
55
P(P1,3 | known, b)
= α Σfrontier P(b | known, P1,3 , frontier)
Σother P(P1,3 , known, frontier, other)
= α Σfrontier P(b | known, P1,3 , frontier)
Σother P(P1,3) P(known) P(frontier) P(other)
= α P(known) P(P1,3) Σfrontier P(b | known, P1,3 , frontier) P(frontier)
Σother P(other)
= α′ P(P1,3) Σfrontier P(b | known, P1,3 , frontier) P(frontier)
56
 P(P1,3 | known, b) = α′ <0.2 (0.04 + 0.16 + 0.16), 0.8 (0.04 + 0.16)>
≈ <0.31, 0.69>
57
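The final step can be sketched by enumerating only the frontier configurations (pits in [2,2] and [3,1]) that are consistent with the observed breezes; the two consistent sets below follow from the consistency argument in the text:

```python
p = 0.2  # prior probability of a pit in any square

# Frontier configurations (pit in [2,2], pit in [3,1]) consistent with the
# breezes, for each value of P1,3.
consistent = {
    True:  [(True, True), (True, False), (False, True)],
    False: [(True, True), (True, False)],
}

def weight(configs):
    """Total prior probability of a set of frontier configurations."""
    return sum((p if p22 else 1 - p) * (p if p31 else 1 - p)
               for p22, p31 in configs)

unnorm = {pit13: (p if pit13 else 1 - p) * weight(cfgs)
          for pit13, cfgs in consistent.items()}
alpha = 1 / sum(unnorm.values())
posterior = {k: alpha * v for k, v in unnorm.items()}

print(round(posterior[True], 2))  # 0.31
```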
 That is, [1,3] contains a pit with roughly 31%
probability
 Similarly, [2,2] contains a pit with roughly
86% probability.
 The agent should definitely avoid [2,2]
 That’s why a probabilistic agent does much better
than a logical agent in the wumpus world
58
More Related Content

What's hot

ProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) IntroductionProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) Introductionwahab khan
 
Bayesian networks in AI
Bayesian networks in AIBayesian networks in AI
Bayesian networks in AIByoung-Hee Kim
 
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VEC
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VECUnit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VEC
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VECsundarKanagaraj1
 
Learning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOILLearning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOILPavithra Thippanaik
 
Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)SHUBHAM KUMAR GUPTA
 
Bayesian Networks - A Brief Introduction
Bayesian Networks - A Brief IntroductionBayesian Networks - A Brief Introduction
Bayesian Networks - A Brief IntroductionAdnan Masood
 
First order logic in knowledge representation
First order logic in knowledge representationFirst order logic in knowledge representation
First order logic in knowledge representationSabaragamuwa University
 
Cyrus beck line clipping algorithm
Cyrus beck line clipping algorithmCyrus beck line clipping algorithm
Cyrus beck line clipping algorithmPooja Dixit
 
Analytical learning
Analytical learningAnalytical learning
Analytical learningswapnac12
 
Quadric surfaces
Quadric surfacesQuadric surfaces
Quadric surfacesAnkur Kumar
 
AI_Session 18 Cryptoarithmetic problem.pptx
AI_Session 18 Cryptoarithmetic problem.pptxAI_Session 18 Cryptoarithmetic problem.pptx
AI_Session 18 Cryptoarithmetic problem.pptxAsst.prof M.Gokilavani
 
Daa:Dynamic Programing
Daa:Dynamic ProgramingDaa:Dynamic Programing
Daa:Dynamic Programingrupali_2bonde
 

What's hot (20)

ProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) IntroductionProLog (Artificial Intelligence) Introduction
ProLog (Artificial Intelligence) Introduction
 
Bayesian networks in AI
Bayesian networks in AIBayesian networks in AI
Bayesian networks in AI
 
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VEC
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VECUnit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VEC
Unit IV UNCERTAINITY AND STATISTICAL REASONING in AI K.Sundar,AP/CSE,VEC
 
Learning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOILLearning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOIL
 
Problems, Problem spaces and Search
Problems, Problem spaces and SearchProblems, Problem spaces and Search
Problems, Problem spaces and Search
 
Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)
 
Bayesian Networks - A Brief Introduction
Bayesian Networks - A Brief IntroductionBayesian Networks - A Brief Introduction
Bayesian Networks - A Brief Introduction
 
Bayesian networks
Bayesian networksBayesian networks
Bayesian networks
 
First order logic in knowledge representation
First order logic in knowledge representationFirst order logic in knowledge representation
First order logic in knowledge representation
 
Structured Knowledge Representation
Structured Knowledge RepresentationStructured Knowledge Representation
Structured Knowledge Representation
 
Cyrus beck line clipping algorithm
Cyrus beck line clipping algorithmCyrus beck line clipping algorithm
Cyrus beck line clipping algorithm
 
Dempster shafer theory
Dempster shafer theoryDempster shafer theory
Dempster shafer theory
 
Kr using rules
Kr using rulesKr using rules
Kr using rules
 
Analytical learning
Analytical learningAnalytical learning
Analytical learning
 
Uncertainty in AI
Uncertainty in AIUncertainty in AI
Uncertainty in AI
 
Quadric surfaces
Quadric surfacesQuadric surfaces
Quadric surfaces
 
AI_Session 18 Cryptoarithmetic problem.pptx
AI_Session 18 Cryptoarithmetic problem.pptxAI_Session 18 Cryptoarithmetic problem.pptx
AI_Session 18 Cryptoarithmetic problem.pptx
 
Spline representations
Spline representationsSpline representations
Spline representations
 
Daa:Dynamic Programing
Daa:Dynamic ProgramingDaa:Dynamic Programing
Daa:Dynamic Programing
 
Three dimensional concepts - Computer Graphics
Three dimensional concepts - Computer GraphicsThree dimensional concepts - Computer Graphics
Three dimensional concepts - Computer Graphics
 

Similar to Chapter 13

Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
A new axisymmetric finite element
A new axisymmetric finite elementA new axisymmetric finite element
A new axisymmetric finite elementStefan Duprey
 
Data mining assignment 2
Data mining assignment 2Data mining assignment 2
Data mining assignment 2BarryK88
 
Lecture 3 qualtifed rules of inference
Lecture 3 qualtifed rules of inferenceLecture 3 qualtifed rules of inference
Lecture 3 qualtifed rules of inferenceasimnawaz54
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsAlexander Litvinenko
 
Probability cheatsheet
Probability cheatsheetProbability cheatsheet
Probability cheatsheetSuvrat Mishra
 
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...IJERA Editor
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Valentin De Bortoli
 
Uncertain knowledge and reasoning
Uncertain knowledge and reasoningUncertain knowledge and reasoning
Uncertain knowledge and reasoningShiwani Gupta
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
Probability cheatsheet
Probability cheatsheetProbability cheatsheet
Probability cheatsheetJoachim Gwoke
 
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert SpacesApproximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert SpacesLisa Garcia
 
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)jemille6
 

Similar to Chapter 13 (20)

Uncertainity
Uncertainity Uncertainity
Uncertainity
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
A new axisymmetric finite element
A new axisymmetric finite elementA new axisymmetric finite element
A new axisymmetric finite element
 
Data mining assignment 2
Data mining assignment 2Data mining assignment 2
Data mining assignment 2
 
Probability Cheatsheet.pdf
Probability Cheatsheet.pdfProbability Cheatsheet.pdf
Probability Cheatsheet.pdf
 
Ch7
Ch7Ch7
Ch7
 
Microeconomics Theory Exam Help
Microeconomics Theory Exam HelpMicroeconomics Theory Exam Help
Microeconomics Theory Exam Help
 
pattern recognition
pattern recognition pattern recognition
pattern recognition
 
Uncertainty
UncertaintyUncertainty
Uncertainty
 
Lecture 3 qualtifed rules of inference
Lecture 3 qualtifed rules of inferenceLecture 3 qualtifed rules of inference
Lecture 3 qualtifed rules of inference
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
Probability cheatsheet
Probability cheatsheetProbability cheatsheet
Probability cheatsheet
 
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
Uncertain knowledge and reasoning
Uncertain knowledge and reasoningUncertain knowledge and reasoning
Uncertain knowledge and reasoning
 
NUMERICAL METHODS
NUMERICAL METHODSNUMERICAL METHODS
NUMERICAL METHODS
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
Probability cheatsheet
Probability cheatsheetProbability cheatsheet
Probability cheatsheet
 
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert SpacesApproximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
 
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
 

Recently uploaded

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupJonathanParaisoCruz
 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxRaymartEstabillo3
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
MICROBIOLOGY biochemical test detailed.pptx
MICROBIOLOGY biochemical test detailed.pptxMICROBIOLOGY biochemical test detailed.pptx
MICROBIOLOGY biochemical test detailed.pptxabhijeetpadhi001
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 

Recently uploaded (20)

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
What is Model Inheritance in Odoo 17 ERP

Chapter 13

  • 1. Quantifying Uncertainty Team Members : 1-Ahmed Talaat 2-Eman Mostafa 3-Haron Shihab 4-Hesham Hamdy Cairo University Faculty of Engineering 4th year Computer Department 1
  • 2.  Acting under uncertainty  Basic probability notation  Inference using the full joint distribution  Independence  Bayes’ rule  Wumpus world 2
  • 4.  Why model uncertainty?  A logical agent has to consider every logically possible explanation of its observations  This leads to very large and complex belief-state representations  Probability provides a compact alternative 4
  • 5.  Uncertainty arises because of laziness and ignorance.  Our main tool for dealing with degrees of belief is probability theory.  The probability statements are made with respect to a knowledge state, not with respect to the real world. 5
  • 6.  To make good choices, an agent must have preferences between the different possible outcomes of its actions  Such choices are based on the agent’s utility 6
  • 7.  Probability theory and utility theory combined give decision theory:  Decision Theory = Probability + Utility 7
  • 9.  Some probability notation we are going to use  Sample space: the set of all possible worlds  The uppercase Greek letter Ω represents the sample space  The lowercase ω represents an element (a single possible world) of the sample space 9
  • 10.  Unconditional probabilities (priors)  When rolling two dice, assume each die is fair and the rolls don’t interfere with each other  The set of possible worlds: (1,1), (1,2), (1,3), … (2,1), (2,2), … (6,5), (6,6)  P(ω) = 1/36 for each of the 36 worlds 10
  • 11.  Conditional probability  The probability of an event happening given that another event (called the evidence) has occurred  For example, the first die may already be showing 5 while we wait for the other die to settle  We are then interested in probabilities conditioned on the first die, e.g. P(doubles | Die1 = 5) 11
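The prior and conditional probabilities above can be checked by brute-force enumeration of the 36 worlds. A minimal sketch; the `doubles` event and the function names are illustrative choices, not from the slides:

```python
from itertools import product
from fractions import Fraction

# Sample space for two fair dice: 36 equally likely possible worlds.
omega = list(product(range(1, 7), repeat=2))
p = {w: Fraction(1, 36) for w in omega}  # prior probability of each world

def prob(event):
    """Probability of a proposition = sum over the worlds where it holds."""
    return sum(p[w] for w in omega if event(w))

def doubles(w):
    return w[0] == w[1]

def die1_is_5(w):
    return w[0] == 5

# Unconditional (prior) probability of rolling doubles: 6/36 = 1/6.
prior_doubles = prob(doubles)

# Conditional probability P(doubles | Die1 = 5) = P(doubles and Die1=5) / P(Die1=5).
cond = prob(lambda w: doubles(w) and die1_is_5(w)) / prob(die1_is_5)
print(prior_doubles, cond)  # 1/6 1/6
```

Using exact `Fraction` arithmetic avoids any floating-point noise in the small example.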
  • 12.  Semantics of a proposition  The probability model is determined by the joint distribution for all the random variables: the full joint probability distribution  For the Cavity, Toothache, Weather domain, the notation is P(Cavity, Toothache, Weather)  This can be represented as a 2 × 2 × 4 table  Given the definition of the probability of a proposition as a sum over possible worlds, the full joint distribution allows calculating the probability of any proposition over its variables by summing entries in the FJD 12
  • 13.  The basic axioms of probability
 1. 0 ≤ P(ω) ≤ 1 for every possible world ω, and Σ_ω∈Ω P(ω) = 1
 2. From these we can prove P(¬a) = 1 − P(a):
 P(¬a) = Σ_ω∈¬a P(ω) = Σ_ω∈Ω P(ω) − Σ_ω∈a P(ω) = 1 − P(a) 13
  • 14.  Inclusion–exclusion principle: P(a∨b) = P(a) + P(b) − P(a∧b)  Kolmogorov’s axioms: named after Andrei Kolmogorov, who showed how to build up the rest of probability theory from them  For example, they imply that an agent cannot simultaneously believe a, b, and ¬(a∧b), because there is no possible world in which all three are true 14
  • 15.  De Finetti example: Agent 1 Agent 2 15
  • 16.  Agent 1’s probabilities: P(a) = 0.4, P(b) = 0.3, P(a∨b) = 0.8, P(a∧b) = 0  Checking them against inclusion–exclusion: P(a) + P(b) − P(a∧b) = 0.4 + 0.3 − 0 = 0.7 ≠ 0.8 = P(a∨b), so Agent 1’s beliefs violate the axioms 16
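Agent 1’s inconsistency can be checked mechanically. A small sketch; the variable names are illustrative:

```python
# Agent 1's stated degrees of belief from the slide.
beliefs = {"a": 0.4, "b": 0.3, "a_or_b": 0.8, "a_and_b": 0.0}

# Inclusion-exclusion: P(a or b) must equal P(a) + P(b) - P(a and b).
implied = beliefs["a"] + beliefs["b"] - beliefs["a_and_b"]
consistent = abs(implied - beliefs["a_or_b"]) < 1e-9
print(implied, consistent)  # 0.7 ... False: the beliefs violate the axioms
```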
  • 17. Agent 2 is the winner. Payoffs to Agent 1 for each outcome:

 Agent 1 belief      Agent 2 bet          a,b    a,¬b   ¬a,b   ¬a,¬b
 a      0.4          a       4 to 6       -6     -6      4      4
 b      0.3          b       3 to 7       -7      3     -7      3
 a∨b    0.8          ¬(a∨b)  2 to 8        2      2      2     -8
                     Total               -11     -1     -1     -1
 17
  • 18. Inference using full joint distribution 18
  • 19. A full joint distribution for the Toothache, Cavity, Catch world:

                toothache              ¬toothache
              catch    ¬catch        catch    ¬catch
  cavity      0.108    0.012         0.072    0.008
  ¬cavity     0.016    0.064         0.144    0.576

 The probabilities in the joint distribution sum to 1
 P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
 P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2 19
  • 20. Using the same full joint distribution for the Toothache, Cavity, Catch world: P(cavity) = 0.108 + 0.012 + 0.072 + 0.008  This is marginalization, or summing out: we sum the probabilities over each possible value of the other variables 20
  • 21.  Marginalization rule: for any sets of variables Y and Z: P(Y) = Σ_{z∈Z} P(Y, z), where Σ_{z∈Z} means summing over all possible combinations of values of the variables in Z (abbreviated Σ_z, leaving Z implicit)  Example: P(Cavity) = Σ_{z∈{Catch,Toothache}} P(Cavity, z) 21
  • 22.  A variant of this rule uses conditional probabilities instead of the joint distribution, via the product rule: P(Y) = Σ_z P(Y | z) P(z) 22
  • 23. Using the same full joint distribution for the Toothache, Cavity, Catch world: P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache) = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.12 / 0.2 = 0.6 23
  • 24. Using the same full joint distribution for the Toothache, Cavity, Catch world: P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.08 / 0.2 = 0.4 24
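The table lookups above can be reproduced by storing the full joint distribution as a dictionary keyed on world tuples. A minimal sketch; the `(cavity, toothache, catch)` boolean encoding is an illustrative choice:

```python
# Full joint distribution P(Cavity, Toothache, Catch) from the slide's table.
# Keys are (cavity, toothache, catch) truth values.
fjd = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    """Probability of a proposition = sum of FJD entries where it holds."""
    return sum(pw for w, pw in fjd.items() if event(w))

p_toothache = prob(lambda w: w[1])                         # 0.2
p_cavity_given_tooth = prob(lambda w: w[0] and w[1]) / p_toothache
print(round(p_cavity_given_tooth, 2))  # 0.6
```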
  • 25.  P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache)  P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache)  The factor 1/P(toothache) is the same constant in both; we denote it α, the normalization constant that makes the distribution over Cavity sum to 1 25
  • 26. Using the same full joint distribution for the Toothache, Cavity, Catch world: P(Cavity | toothache) = α P(Cavity, toothache) = α [P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch)] = α [⟨0.108, 0.016⟩ + ⟨0.012, 0.064⟩] = α ⟨0.12, 0.08⟩ = ⟨0.6, 0.4⟩ 26
  • 27.  The general case: the query involves a single variable X (Cavity). Let E be the list of evidence variables (just Toothache), e the list of observed values for them, and Y the remaining unobserved variables (just Catch). The query P(X | e) can be evaluated as P(X | e) = α P(X, e) = α Σ_y P(X, e, y) 27
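The formula P(X | e) = α Σ_y P(X, e, y) can be sketched as a small enumeration routine over the FJD; the function name and world encoding below are illustrative, not from the slides:

```python
def enumerate_query(fjd, var_index, evidence):
    """P(X | e) by summing FJD entries consistent with the evidence over the
    hidden variables, then normalizing (the alpha on the slide).
    fjd maps world tuples to probabilities; evidence maps index -> value."""
    dist = {}
    for world, pw in fjd.items():
        if all(world[i] == v for i, v in evidence.items()):
            dist[world[var_index]] = dist.get(world[var_index], 0.0) + pw
    alpha = 1.0 / sum(dist.values())
    return {val: alpha * pw for val, pw in dist.items()}

# Dental FJD with variable order (Cavity, Toothache, Catch):
fjd = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
print(enumerate_query(fjd, 0, {1: True}))  # approx {True: 0.6, False: 0.4}
```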
  • 28.  Full joint distribution requires an input table of size O(2^n) and takes O(2^n) time to process the table. In a realistic problem we could easily have n>100, making O(2^n) impractical. 28
  • 30.  Add a new variable, Weather, to the Toothache, Catch, Cavity problem  Now P(Toothache, Catch, Cavity, Weather) has 2 × 2 × 2 × 4 = 32 entries if Weather has 4 values  How is P(Toothache, Catch, Cavity, Weather = cloudy) related to P(Toothache, Catch, Cavity)? 30
  • 31.  Weather has nothing to do with one’s dental problems, so: P(Toothache, Catch, Cavity, Weather) = P(Weather) P(Toothache, Catch, Cavity)  The same holds for coin flips: each flip is independent of the others  For independent events a and b: P(a | b) = P(a) and P(b | a) = P(b) 31
  • 32. The 32-entry table for Cavity, Toothache, Catch, Weather decomposes into an 8-entry table for Cavity, Toothache, Catch and a 4-entry table for Weather; similarly, n independent coin flips Coin1 … Coinn decompose into n separate one-variable tables 32
  • 34. Product rule: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a)  Equating the two right-hand sides gives Bayes’ rule: P(b | a) = P(a | b) P(b) / P(a), i.e. posterior = likelihood × prior / evidence 34
  • 35.  We use this when we observe the effect of some unknown cause and want to determine the cause. Bayes’ rule becomes: P(cause | effect) = P(effect | cause) P(cause) / P(effect)  P(effect | cause) describes the causal relationship; P(cause | effect) describes the diagnostic relationship 35
  • 36.  In medical diagnosis: the doctor knows P(symptoms | disease) and wants to derive a diagnosis P(disease | symptoms): P(disease | symptoms) = P(symptoms | disease) P(disease) / P(symptoms) 36
  • 37.  Patient has Stiff Neck (Symptom)  Doctor tries to relate it to meningitis  P(stiff neck|meningitis) = 0.7  P(meningitis) = 1/50000  P(stiff neck) = 0.01 37
  • 38. P(meningitis | stiff neck) = P(stiff neck | meningitis) P(meningitis) / P(stiff neck) = 0.7 × (1/50000) / 0.01 = 0.0014  So with the given data we expect about 1 in 700 patients with a stiff neck alone to have meningitis 38
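Plugging the slide’s numbers into Bayes’ rule:

```python
# Bayes' rule with the meningitis numbers from the slides.
p_stiff_given_m = 0.7       # P(stiff neck | meningitis)
p_m = 1 / 50000             # P(meningitis)
p_stiff = 0.01              # P(stiff neck)

p_m_given_stiff = p_stiff_given_m * p_m / p_stiff
print(p_m_given_stiff)  # approx 0.0014, i.e. about 1 in 700
```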
  • 39. P(x | y) = P(y | x) P(x) / P(y) = α P(y | x) P(x), where α is the normalization factor that makes P(x | y) + P(¬x | y) = 1 39
  • 40.  What happens when the dentist’s probe catches the aching tooth of a patient? P(cavity | toothache ∧ catch) = α P(toothache ∧ catch | cavity) P(cavity)  This might be feasible for just 2 evidence variables, but with n evidence variables there are 2^n possible combinations of evidence 40
  • 41.  When the probe catches in the tooth, the patient probably has a cavity, and that cavity also causes the toothache  So Catch and Toothache are not absolutely independent, but they are independent given the presence or absence of the cavity 41
  • 43. If effect1, …, effectn are conditionally independent given the cause:  P(effect1, effect2, …, effectn | cause) = P(effect1 | cause) P(effect2 | cause) … P(effectn | cause)  P(effect1 | effect2, cause) = P(effect1 | cause) and P(effect2 | effect1, cause) = P(effect2 | cause)  Hence P(Cause, Effect1, …, EffectN) = P(Cause) Π_i P(Effect_i | Cause) 43
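This factorization is the naive Bayes model. A minimal sketch; the function name is illustrative, and the conditional probabilities below (e.g. P(toothache | cavity) = 0.6, P(catch | cavity) = 0.9) follow from the dental full joint table shown earlier:

```python
import math

def naive_bayes_posterior(prior, likelihoods, observed):
    """P(Cause | effects) under conditional independence:
    P(cause, e1..en) = P(cause) * prod_i P(ei | cause), then normalize.
    prior: {cause: P(cause)};
    likelihoods: {cause: {effect: P(effect | cause)}};
    observed: list of effects observed to be true."""
    joint = {cause: p * math.prod(likelihoods[cause][e] for e in observed)
             for cause, p in prior.items()}
    alpha = 1.0 / sum(joint.values())
    return {c: alpha * v for c, v in joint.items()}

prior = {"cavity": 0.2, "no_cavity": 0.8}
likelihoods = {
    "cavity":    {"toothache": 0.6, "catch": 0.9},
    "no_cavity": {"toothache": 0.1, "catch": 0.2},
}
post = naive_bayes_posterior(prior, likelihoods, ["toothache", "catch"])
print(post)  # cavity has probability 0.108 / (0.108 + 0.016), about 0.87
```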
  • 45. [Wumpus world figure] 45
  • 46.  The Wumpus world is uncertain because the agent’s sensors give only partial information about the world 46
  • 47.  How would the logical agent tackle the wumpus world ? 47
  • 48.  The logical agent gets stuck after finding a breeze in both [1,2] and [2,1]  There is no provably safe square left to explore, so it has to choose randomly! 48
  • 49.  A probabilistic agent instead chooses the square with the highest likelihood of being safe  We will soon see that a probabilistic agent can do much better than a logical agent 49
  • 50.  Let’s define some random variables:  Pi,j true iff square [i,j] contains a pit  Bi,j true iff square [i,j] is breezy  Facts:  b = ¬b1,1 ∧ b1,2 ∧ b2,1  known = ¬p1,1 ∧ ¬p1,2 ∧ ¬p2,1  Our goal: answer queries like P(P1,3 | known, b)  That is, how likely is it that [1,3] contains a pit given the observations so far 50
  • 51.  The next step is to specify the full joint probability distribution  That is P(P1,1 , … , P4,4 , B1,1 , B1,2 , B2,1)  Applying the product rule, we have P(P1,1 , … , P4,4 , B1,1 , B1,2 , B2,1) = P(B1,1 , B1,2 , B2,1 | P1,1 , … , P4,4)P(P1,1 , … , P4,4)  P(B1,1 , B1,2 , B2,1 | P1,1 , … , P4,4) = 1 if breezes are adjacent to pits and 0 otherwise. 51
  • 52.  Assume each square contains a pit with probability 0.2, independently of the other squares  So P(P1,1, …, P4,4) = Π_{i,j} P(Pi,j)  In general, a configuration with n pits has probability P(p1,1, …, p4,4) = 0.2^n × 0.8^(16−n) 52
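The prior over pit configurations can be written directly; the function name is illustrative:

```python
# Each of the 16 squares independently contains a pit with probability 0.2,
# so a configuration with n pits has prior 0.2**n * 0.8**(16 - n).
def config_prior(n_pits, p=0.2, squares=16):
    return p**n_pits * (1 - p)**(squares - n_pits)

print(config_prior(3))  # prior of any particular 3-pit configuration
```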
  • 53.  P(P1,3 | known, b) = α Σ_unknown P(P1,3, known, unknown, b)  We seem to have reached our goal, but there is a big problem  There are 12 unknown squares, hence the summation contains 2^12 = 4096 terms  In general the summation grows exponentially with the number of squares 53
  • 54.  Intuition: Other squares are irrelevant  [4,4] does NOT affect whether [1,3] has a pit or not  This intuition helps in reducing the summation terms  Remember that: frontier is the pit variables other than the query variable that are adjacent to visited squares 54
  • 55. P(P1,3 | known, b)
 = α Σ_unknown P(P1,3, known, unknown, b)
 = α Σ_unknown P(b | P1,3, known, unknown) P(P1,3, known, unknown)   (by the product rule)
 = α Σ_frontier Σ_other P(b | P1,3, known, frontier, other) P(P1,3, known, frontier, other)
 Since b is independent of other given known, P1,3, and frontier:
 = α Σ_frontier Σ_other P(b | P1,3, known, frontier) P(P1,3, known, frontier, other) 55
  • 56. P(P1,3 | known, b)
 = α Σ_frontier P(b | known, P1,3, frontier) Σ_other P(P1,3, known, frontier, other)
 = α Σ_frontier P(b | known, P1,3, frontier) Σ_other P(P1,3) P(known) P(frontier) P(other)
 = α P(known) P(P1,3) Σ_frontier P(b | known, P1,3, frontier) P(frontier) Σ_other P(other)
 = α′ P(P1,3) Σ_frontier P(b | known, P1,3, frontier) P(frontier) 56
  • 57.  P(P1,3 | known, b) = α′ ⟨0.2 (0.04 + 0.16 + 0.16), 0.8 (0.04 + 0.16)⟩ ≈ ⟨0.31, 0.69⟩ 57
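Evaluating and normalizing these two frontier sums:

```python
# Unnormalized weights from the slide: frontier models consistent with the
# breezes when [1,3] has a pit, and when it does not.
pit_weight    = 0.2 * (0.04 + 0.16 + 0.16)   # P(P1,3) times frontier sum
no_pit_weight = 0.8 * (0.04 + 0.16)

alpha = 1.0 / (pit_weight + no_pit_weight)   # the alpha' normalization constant
p_pit, p_no_pit = alpha * pit_weight, alpha * no_pit_weight
print(round(p_pit, 2), round(p_no_pit, 2))   # approx 0.31 0.69
```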
  • 58.  That is, [1,3] contains a pit with roughly 31% probability  Similarly, [2,2] contains a pit with roughly 86% probability, so the agent should definitely avoid [2,2]  That is why a probabilistic agent does much better than a logical agent in the wumpus world 58