- Bayesian networks can model conditional independencies between variables based on the network structure. Each variable is conditionally independent of its non-descendants given its parents.
- The d-separation algorithm allows determining if two variables are conditionally independent given some evidence by checking if all paths between them are "blocked".
- For trees/forests where each node has at most one parent, inference can be done efficiently in linear time by decomposing probabilities and passing messages between nodes.
Inference in Bayesian Networks
1. Bayesian Networks: Independencies and Inference
Scott Davies and Andrew Moore
Note to other teachers and users of these slides. Andrew and Scott would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew’s tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.
What Independencies does a Bayes Net Model?
• In order for a Bayesian network to model a probability distribution, the following must be true by definition: each variable is conditionally independent of all its non-descendants in the graph given the value of all its parents.
• This implies
$$P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{parents}(X_i))$$
• But what else does it imply?
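As a concrete illustration of this factorization, the joint probability of a full assignment is just the product of each variable's local conditional probability. The three-variable network and all CPT numbers below are invented for this sketch, not taken from the slides:

```python
# Sketch of P(X1..Xn) = prod_i P(Xi | parents(Xi)) on a toy network A -> B, A -> C.
parents = {"A": [], "B": ["A"], "C": ["A"]}
cpt = {
    "A": {(): {True: 0.3, False: 0.7}},
    "B": {(True,): {True: 0.9, False: 0.1},
          (False,): {True: 0.2, False: 0.8}},
    "C": {(True,): {True: 0.5, False: 0.5},
          (False,): {True: 0.1, False: 0.9}},
}

def joint(assign):
    """P(full assignment) as a product of local conditionals."""
    p = 1.0
    for var, pars in parents.items():
        key = tuple(assign[q] for q in pars)
        p *= cpt[var][key][assign[var]]
    return p

print(joint({"A": True, "B": True, "C": False}))  # 0.3 * 0.9 * 0.5 = 0.135
```

Summing `joint` over all eight assignments gives 1, as a valid distribution must.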
2. What Independencies does a Bayes Net Model?
• Example (chain Z → Y → X): given Y, does learning the value of Z tell us nothing new about X? I.e., is P(X | Y, Z) equal to P(X | Y)?
• Yes. Since we know the value of all of X’s parents (namely, Y), and Z is not a descendant of X, X is conditionally independent of Z.
• Also, since independence is symmetric, P(Z | Y, X) = P(Z | Y).
Quick proof that independence is symmetric
• Assume: P(X | Y, Z) = P(X | Y)
• Then:
$$P(Z \mid X, Y) = \frac{P(X, Y \mid Z)\, P(Z)}{P(X, Y)} \quad \text{(Bayes' rule)}$$
$$= \frac{P(Y \mid Z)\, P(X \mid Y, Z)\, P(Z)}{P(X \mid Y)\, P(Y)} \quad \text{(chain rule)}$$
$$= \frac{P(Y \mid Z)\, P(X \mid Y)\, P(Z)}{P(X \mid Y)\, P(Y)} \quad \text{(by assumption)}$$
$$= \frac{P(Y \mid Z)\, P(Z)}{P(Y)} = P(Z \mid Y) \quad \text{(Bayes' rule)}$$
3. What Independencies does a Bayes Net Model?
• Let I<X, Y, Z> represent X and Z being conditionally independent given Y.
• Example (X ← Y → Z): I<X, Y, Z>? Yes, just as in the previous example: all of X’s parents are given, and Z is not a descendant of X.

What Independencies does a Bayes Net Model?
• Example (Z → U → X and Z → V → X):
• I<X, {U}, Z>? No.
• I<X, {U, V}, Z>? Yes.
• Maybe I<X, S, Z> iff S acts as a cutset between X and Z in an undirected version of the graph…?
4. Things get a little more confusing
• Example (X → Y ← Z):
• X has no parents, so we know all its parents’ values trivially.
• Z is not a descendant of X.
• So I<X, {}, Z>, even though there’s an undirected path from X to Z through an unknown variable Y.
• What if we do know the value of Y, though? Or one of its descendants?

The “Burglar Alarm” example
(Network: Burglar → Alarm ← Earthquake, Alarm → Phone Call)
• Your house has a twitchy burglar alarm that is also sometimes triggered by earthquakes.
• Earth arguably doesn’t care whether your house is currently being burgled.
• While you are on vacation, one of your neighbors calls and tells you your home’s burglar alarm is ringing. Uh oh!
5. Things get a lot more confusing
(Same network: Burglar → Alarm ← Earthquake, Alarm → Phone Call)
• But now suppose you learn that there was a medium-sized earthquake in your neighborhood. Oh, whew! Probably not a burglar after all.
• The earthquake “explains away” the hypothetical burglar.
• But then it must not be the case that I<Burglar, {Phone Call}, Earthquake>, even though I<Burglar, {}, Earthquake>!
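Explaining away can be checked numerically by brute-force enumeration of the alarm network. The CPT numbers below are illustrative assumptions; the point is only the direction of the change, P(Burglar | call, earthquake) < P(Burglar | call):

```python
# Numeric "explaining away" sketch for the burglar-alarm network (made-up CPTs).
from itertools import product

P_B = 0.01          # prior probability of a burglary
P_E = 0.02          # prior probability of an earthquake
P_A = {             # P(alarm | burglary, earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}
P_C = {True: 0.90, False: 0.05}   # P(phone call | alarm)

def joint(b, e, a, c):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_C[a] if c else 1 - P_C[a]
    return p

def prob_burglar(**evidence):
    """P(Burglar | phone call, extra evidence) by summing out the joint."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"b": b, "e": e, "a": a}
        if all(world[k] == v for k, v in evidence.items()):
            p = joint(b, e, a, True)      # phone call is always observed
            den += p
            if b:
                num += p
    return num / den

print(prob_burglar())            # P(Burglar | call): well above the 0.01 prior
print(prob_burglar(e=True))      # P(Burglar | call, earthquake): much lower
```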
d-separation to the rescue
• Fortunately, there is a relatively simple algorithm for determining whether two variables in a Bayesian network are conditionally independent: d-separation.
• Definition: X and Z are d-separated by a set of evidence variables E iff every undirected path from X to Z is “blocked”, where a path is “blocked” iff one or more of the following conditions is true: ...
6. A path is “blocked” when...
• There exists a variable V on the path such that
  • it is in the evidence set E, and
  • the arcs putting V in the path are “tail-to-tail”: ← V →
• Or, there exists a variable V on the path such that
  • it is in the evidence set E, and
  • the arcs putting V in the path are “tail-to-head”: → V →
• Or, ...

A path is “blocked” when… (the funky case)
• … Or, there exists a variable V on the path such that
  • it is NOT in the evidence set E,
  • neither are any of its descendants, and
  • the arcs putting V on the path are “head-to-head”: → V ←
7. d-separation to the rescue, cont’d
• Theorem [Verma & Pearl, 1988]: if a set of evidence variables E d-separates X and Z in a Bayesian network’s graph, then I<X, E, Z>.
• d-separation can be computed in linear time using a depth-first-search-like algorithm.
• Great! We now have a fast algorithm for automatically inferring whether learning the value of one variable might give us any additional hints about some other variable, given what we already know.
• “Might”: variables may actually be independent even when they’re not d-separated, depending on the actual probabilities involved.
d-separation example
(A ten-node example graph over the variables A–J; the arc structure is shown only in the slide’s figure.)
• I<C, {}, D>?
• I<C, {A}, D>?
• I<C, {A, B}, D>?
• I<C, {A, B, J}, D>?
• I<C, {A, B, E, J}, D>?
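The blocked-path definition can be turned directly into code. This is a brute-force sketch that enumerates every undirected path and applies the three blocking rules, not the linear-time algorithm mentioned above, and since the example slide's arcs are not reproduced here, it is demonstrated on the burglar-alarm network from the earlier slides:

```python
def d_separated(dag, x, z, evidence):
    """dag maps each node to the list of its parents; evidence is a set."""
    children = {v: set() for v in dag}
    for v, pars in dag.items():
        for p in pars:
            children[p].add(v)

    def descendants(v):
        out, stack = set(), [v]
        while stack:
            for c in children[stack.pop()]:
                if c not in out:
                    out.add(c)
                    stack.append(c)
        return out

    def paths(a, path):
        # all simple undirected paths from a to z
        if a == z:
            yield path
            return
        for n in set(dag[a]) | children[a]:
            if n not in path:
                yield from paths(n, path + [n])

    def blocked(path):
        for i in range(1, len(path) - 1):
            prev, v, nxt = path[i - 1], path[i], path[i + 1]
            if prev in dag[v] and nxt in dag[v]:          # head-to-head: -> V <-
                if v not in evidence and not (descendants(v) & evidence):
                    return True
            elif v in evidence:                           # chain or common cause
                return True
        return False

    return all(blocked(p) for p in paths(x, [x]))

# The burglar-alarm network from the previous slides:
dag = {"Burglar": [], "Earthquake": [],
       "Alarm": ["Burglar", "Earthquake"], "Call": ["Alarm"]}
print(d_separated(dag, "Burglar", "Earthquake", set()))       # True
print(d_separated(dag, "Burglar", "Earthquake", {"Call"}))    # False: explaining away
```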
8. Bayesian Network Inference
• Inference: calculating P(X | Y) for some variables or sets of variables X and Y.
• Inference in Bayesian networks is #P-hard!
• Reduction from counting satisfying assignments (#SAT): give each input I1, …, I5 a prior probability of 0.5 and let a deterministic output node O compute the formula. Then P(O) must be (#satisfying assignments) × (0.5 ^ #inputs), so computing P(O) answers “how many satisfying assignments?”
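The reduction is easy to see numerically: with uniform 0.5 priors on the inputs and a deterministic output O, exact inference for P(O = true) and model counting compute the same quantity. The formula below is an arbitrary stand-in:

```python
# Sketch: P(O = true) = (#satisfying assignments) * 0.5 ** (#inputs).
from itertools import product

def formula(i1, i2, i3):
    # Arbitrary example formula standing in for the output node O.
    return (i1 or i2) and not i3

assignments = list(product([False, True], repeat=3))
n_sat = sum(formula(*a) for a in assignments)              # model counting
p_o = sum(0.5 ** 3 for a in assignments if formula(*a))    # exact inference
print(n_sat, p_o)  # 3 0.375
```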
Bayesian Network Inference
• But… inference is still tractable in some cases.
• Let’s look at a special class of networks: trees / forests in which each node has at most one parent.
9. Decomposing the probabilities
• Suppose we want P(Xi | E) where E is some set of evidence variables.
• Let’s split E into two parts:
  • Ei− is the part consisting of assignments to variables in the subtree rooted at Xi
  • Ei+ is the rest of it

Decomposing the probabilities, cont’d
$$P(X_i \mid E) = P(X_i \mid E_i^-, E_i^+)$$
10. Decomposing the probabilities, cont’d
$$P(X_i \mid E) = P(X_i \mid E_i^-, E_i^+)$$
$$= \frac{P(E_i^- \mid X_i, E_i^+)\, P(X_i \mid E_i^+)}{P(E_i^- \mid E_i^+)}$$
$$= \frac{P(E_i^- \mid X_i)\, P(X_i \mid E_i^+)}{P(E_i^- \mid E_i^+)}$$
$$= \alpha\, \pi(X_i)\, \lambda(X_i)$$
Where:
• α is a constant independent of Xi
• π(Xi) = P(Xi | Ei+)
• λ(Xi) = P(Ei− | Xi)
Using the decomposition for inference
• We can use this decomposition to do inference as follows. First, compute λ(Xi) = P(Ei− | Xi) for all Xi recursively, using the leaves of the tree as the base case.
• If Xi is a leaf:
  • If Xi is in E: λ(Xi) = 1 if Xi matches E, 0 otherwise
  • If Xi is not in E: Ei− is the null set, so P(Ei− | Xi) = 1 (constant)
12. Quick aside: “Virtual evidence”
• For theoretical simplicity, but without loss of generality, let’s assume that all variables in E (the evidence set) are leaves in the tree.
• Why we can do this WLOG: observing an internal node Xi is equivalent to attaching a new leaf child Xi′ with P(Xi′ | Xi) = 1 if Xi′ = Xi and 0 otherwise, and observing Xi′ instead.
Calculating λ(Xi) for non-leaves
• Suppose Xi has one child, Xc. Then:
$$\lambda(X_i) = P(E_i^- \mid X_i) = \sum_j P(E_i^-, X_c = j \mid X_i)$$
$$= \sum_j P(X_c = j \mid X_i)\, P(E_i^- \mid X_i, X_c = j)$$
$$= \sum_j P(X_c = j \mid X_i)\, P(E_i^- \mid X_c = j)$$
$$= \sum_j P(X_c = j \mid X_i)\, \lambda(X_c = j)$$
Calculating λ(Xi) for non-leaves
• Now, suppose Xi has a set of children, C.
• Since Xi d-separates each of its subtrees, the contribution of each subtree to λ(Xi) is independent:
$$\lambda(X_i) = P(E_i^- \mid X_i) = \prod_{X_j \in C} \lambda_j(X_i) = \prod_{X_j \in C} \sum_{x_j} P(X_j = x_j \mid X_i)\, \lambda(X_j = x_j)$$
where λj(Xi) is the contribution to P(Ei− | Xi) of the part of the evidence lying in the subtree rooted at Xi’s child Xj.
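This λ recursion can be sketched numerically on a three-node chain X1 → X2 → X3 with evidence X3 = true; all CPT numbers below are invented for illustration:

```python
# λ messages on the chain X1 -> X2 -> X3, evidence X3 = true.
# P_X3_given_X2[v][j] is P(X3 = j | X2 = v), and likewise for P_X2_given_X1.
P_X3_given_X2 = {True: {True: 0.8, False: 0.2}, False: {True: 0.3, False: 0.7}}
P_X2_given_X1 = {True: {True: 0.9, False: 0.1}, False: {True: 0.4, False: 0.6}}

lam_X3 = {True: 1.0, False: 0.0}   # evidence leaf: 1 if it matches E, else 0

def lam_message(cpt_child_given_v, lam_child):
    # λ(v) = Σ_j P(child = j | v) λ(child = j)
    return {v: sum(cpt_child_given_v[v][j] * lam_child[j] for j in (True, False))
            for v in (True, False)}

lam_X2 = lam_message(P_X3_given_X2, lam_X3)
lam_X1 = lam_message(P_X2_given_X1, lam_X2)
print(lam_X2)   # {True: 0.8, False: 0.3}
print(lam_X1)   # {True: 0.75, False: 0.5}
```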
15. We are now λ-happy
• So now we have a way to recursively compute all the λ(Xi)’s, starting from the root and using the leaves as the base case.
• If we want, we can think of each node in the network as an autonomous processor that passes a little “λ message” to its parent.
(Diagram: λ messages flowing up the tree toward the root.)
The other half of the problem
• Remember, P(Xi | E) = α π(Xi) λ(Xi). Now that we have all the λ(Xi)’s, what about the π(Xi)’s? π(Xi) = P(Xi | Ei+).
• What about the root of the tree, Xr? In that case, Er+ is the null set, so π(Xr) = P(Xr). No sweat. Since we also know λ(Xr), we can compute the final P(Xr | E).
• So for an arbitrary Xi with parent Xp, let’s inductively assume we know π(Xp) and/or P(Xp | E). How do we get π(Xi)?
16. Computing π(Xi)
$$\pi(X_i) = P(X_i \mid E_i^+) = \sum_j P(X_i, X_p = j \mid E_i^+)$$
$$= \sum_j P(X_i \mid X_p = j, E_i^+)\, P(X_p = j \mid E_i^+)$$
$$= \sum_j P(X_i \mid X_p = j)\, P(X_p = j \mid E_i^+)$$
$$= \sum_j P(X_i \mid X_p = j)\, \frac{P(X_p = j \mid E)}{\lambda_i(X_p = j)}$$
$$= \sum_j P(X_i \mid X_p = j)\, \pi_i(X_p = j)$$
Where πi(Xp) is defined as P(Xp | E) / λi(Xp).
19. We’re done. Yay!
• Thus we can compute all the π(Xi)’s, and, in turn, all the P(Xi | E)’s.
• Can think of nodes as autonomous processors passing λ and π messages to their neighbors.
(Diagram: λ messages flowing up the tree, π messages flowing down.)
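The whole scheme can be sketched end-to-end on a chain X1 → X2 → X3 with evidence X3 = true: an upward λ pass, a downward π pass, and a combination P(Xi | E) = α π(Xi) λ(Xi), checked against brute-force enumeration. All CPT numbers are illustrative assumptions:

```python
# End-to-end λ/π message passing on X1 -> X2 -> X3 with evidence X3 = true.
P_X1 = {True: 0.6, False: 0.4}                                          # root prior
P_X2 = {True: {True: 0.9, False: 0.1}, False: {True: 0.4, False: 0.6}}  # P(X2 | X1)
P_X3 = {True: {True: 0.8, False: 0.2}, False: {True: 0.3, False: 0.7}}  # P(X3 | X2)
VALS = (True, False)

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# Upward λ pass; the evidence leaf X3 = true is the base case.
lam3 = {v: (1.0 if v else 0.0) for v in VALS}
lam2 = {v: sum(P_X3[v][j] * lam3[j] for j in VALS) for v in VALS}
lam1 = {v: sum(P_X2[v][j] * lam2[j] for j in VALS) for v in VALS}

# Root: π(X1) = P(X1), so P(X1 | E) = α π(X1) λ(X1).
post1 = normalize({v: P_X1[v] * lam1[v] for v in VALS})

# Downward π pass: π(X2 = k) = Σ_j P(X2 = k | X1 = j) · P(X1 = j | E) / λ2(X1 = j).
# X2 is X1's only child here, so λ2(X1) is just lam1.
pi2 = {k: sum(P_X2[j][k] * post1[j] / lam1[j] for j in VALS) for k in VALS}
post2 = normalize({v: pi2[v] * lam2[v] for v in VALS})

# Brute-force check by enumerating the joint directly.
def joint(x1, x2, x3):
    return P_X1[x1] * P_X2[x1][x2] * P_X3[x2][x3]

brute = normalize({v: sum(joint(x1, v, True) for x1 in VALS) for v in VALS})
print(post2[True], brute[True])   # both 0.56/0.65, about 0.8615
```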
Conjunctive queries
• What if we want, e.g., P(A, B | C) instead of just marginal distributions P(A | C) and P(B | C)?
• Just use the chain rule: P(A, B | C) = P(A | C) P(B | A, C)
• Each of the latter probabilities can be computed using the technique just discussed.
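The chain-rule identity behind conjunctive queries can be verified on any joint distribution; here it is checked on an arbitrary made-up joint over three booleans:

```python
# P(A, B | C) = P(A | C) P(B | A, C), checked by enumeration of a toy joint.
from itertools import product

weights = [3, 1, 2, 2, 1, 4, 5, 2]                 # arbitrary positive weights
worlds = list(product([True, False], repeat=3))    # (A, B, C) value tuples
joint = {w: wt / sum(weights) for w, wt in zip(worlds, weights)}

def p(query, given):
    """P(query | given); both are dicts from index -> value (0=A, 1=B, 2=C)."""
    match = lambda vals, cond: all(vals[i] == v for i, v in cond.items())
    den = sum(pr for vals, pr in joint.items() if match(vals, given))
    num = sum(pr for vals, pr in joint.items()
              if match(vals, given) and match(vals, query))
    return num / den

lhs = p({0: True, 1: True}, {2: True})                            # P(A, B | C)
rhs = p({0: True}, {2: True}) * p({1: True}, {0: True, 2: True})  # chain rule
print(lhs, rhs)   # equal, by the chain rule
```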
20. Polytrees
• The technique can be generalized to polytrees: undirected versions of the graphs are still trees, but nodes can have more than one parent.

Dealing with cycles
• Can deal with undirected cycles in the graph by:
  • Clustering variables together (e.g., in the diamond A → B, A → C, B → D, C → D, merge B and C into a single node BC, giving the chain A → BC → D)
  • Conditioning (instantiate a cutset variable to each of its values in turn — set to 1, set to 0 — and solve the resulting cycle-free problems)
21. Join trees
• An arbitrary Bayesian network can be transformed via some evil graph-theoretic magic into a join tree in which a similar method can be employed.
(Diagram: a network over A–G transformed into a join tree with cluster nodes such as ABC, BCD, and DF.)
• In the worst case the join tree nodes must take on exponentially many combinations of values, but it often works well in practice.