This Naive Bayes tutorial from Edureka will help you understand the concepts of the Naive Bayes classifier, its use cases, and how it is used in industry. The tutorial suits both beginners and professionals who want to learn or brush up on their Data Science and Machine Learning concepts through Naive Bayes. Below are the topics covered in this tutorial:
1. What is Machine Learning?
2. Introduction to Classification
3. Classification Algorithms
4. What is Naive Bayes?
5. Use Cases of Naive Bayes
6. Demo – Employee Salary Prediction in R
Uncertainty & Probability
Bayes' rule
Choosing Hypotheses – Maximum A Posteriori
Maximum Likelihood – Bayes concept learning
Maximum Likelihood of real-valued functions
Bayes Optimal Classifier
Joint distributions
Naive Bayes Classifier
Naive Bayes is a classifier built on Bayes' theorem. It predicts membership probabilities for each class, i.e. the probability that a given record or data point belongs to a particular class.
This presentation guides you through Bayes' theorem, the Bayesian classifier, Naive Bayes, uses of Naive Bayes classification, why text classification, examples of text classification, the Naive Bayes approach, and a text classification algorithm.
For more topics stay tuned with Learnbay.
2. Bayesian Methods
Our focus this lecture: learning and classification methods based on probability theory.
Bayes' theorem plays a critical role in probabilistic learning and classification.
We use the prior probability of each category, given no information about an item.
Categorization produces a posterior probability distribution over the possible categories, given a description of an item.
3. Basic Probability Formulas
Product rule: P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
Sum rule: P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Bayes' theorem: P(h|D) = P(D|h) P(h) / P(D)
Theorem of total probability: if events A_1, …, A_n are mutually exclusive and their probabilities sum to 1, then
P(B) = Σ_{i=1}^{n} P(B|A_i) P(A_i)
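These rules can be checked numerically on a small joint distribution. The 2×2 table below is made up for illustration; the assertions verify the product rule, Bayes' theorem, and the theorem of total probability:

```python
# Tiny made-up joint distribution over two binary events A and B.
joint = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.4}

p_a = sum(p for (a, b), p in joint.items() if a)   # P(A) = 0.5
p_b = sum(p for (a, b), p in joint.items() if b)   # P(B) = 0.3
p_ab = joint[(True, True)]                         # P(A ∧ B) = 0.2

# Product rule: P(A ∧ B) = P(A|B) P(B)
p_a_given_b = p_ab / p_b
assert abs(p_a_given_b * p_b - p_ab) < 1e-12

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
assert abs(p_b_given_a - p_ab / p_a) < 1e-12

# Total probability: P(B) = P(B|A) P(A) + P(B|¬A) P(¬A)
p_b_given_not_a = joint[(False, True)] / (1 - p_a)
assert abs(p_b_given_a * p_a + p_b_given_not_a * (1 - p_a) - p_b) < 1e-12
```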
4. Bayes' Theorem
Given a hypothesis h and data D which bears on the hypothesis:
P(h): prior probability of h, independent of the data
P(D): probability of observing D, independent of any hypothesis
P(D|h): conditional probability of D given h: the likelihood
P(h|D): conditional probability of h given D: the posterior probability
P(h|D) = P(D|h) P(h) / P(D)
5. Does the patient have cancer or not?
A patient takes a lab test and the result comes back positive. It is known that the test returns a correct positive result in only 99% of the cases and a correct negative result in only 95% of the cases. Furthermore, only 0.03 of the entire population has this disease.
1. What is the probability that this patient has cancer?
2. What is the probability that he does not have cancer?
3. What is the diagnosis?
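The questions above follow directly from Bayes' theorem and the theorem of total probability. A minimal sketch, using only the figures given on the slide (sensitivity 0.99, specificity 0.95, prior 0.03):

```python
p_cancer = 0.03
p_pos_given_cancer = 0.99        # correct positive rate (sensitivity)
p_pos_given_healthy = 1 - 0.95   # false positive rate (1 - specificity)

# Total probability of a positive test result
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Posterior: P(cancer | positive test)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))      # ≈ 0.38
print(round(1 - p_cancer_given_pos, 3))  # ≈ 0.62
```

Since P(¬cancer | positive) > P(cancer | positive), the MAP diagnosis is that the patient does not have cancer, despite the positive test.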
6. Maximum A Posteriori
Based on Bayes' theorem, we can compute the Maximum A Posteriori (MAP) hypothesis for the data.
We are interested in the best hypothesis from some space H given observed training data D.
h_MAP = argmax_{h ∈ H} P(h|D)
      = argmax_{h ∈ H} P(D|h) P(h) / P(D)
      = argmax_{h ∈ H} P(D|h) P(h)
H: the set of all hypotheses.
Note that we can drop P(D), as the probability of the data is constant (and independent of the hypothesis).
7. Maximum Likelihood
Now assume that all hypotheses are equally probable a priori, i.e., P(h_i) = P(h_j) for all h_i, h_j in H.
This is called assuming a uniform prior. It simplifies computing the posterior:
h_ML = argmax_{h ∈ H} P(D|h)
This hypothesis is called the maximum likelihood hypothesis.
8. Desirable Properties of the Bayes Classifier
Incrementality: with each training example, the prior and the likelihood can be updated dynamically: flexible and robust to errors.
Combines prior knowledge and observed data: the prior probability of a hypothesis is multiplied by the likelihood of the observed data given that hypothesis.
Probabilistic hypothesis: outputs not only a classification, but a probability distribution over all classes.
9. Bayes Classifiers
Assumption: the training set consists of instances of different classes c_j, described as conjunctions of attribute values.
Task: classify a new instance d, given as a tuple of attribute values (x_1, x_2, …, x_n), into one of the classes c_j ∈ C.
Key idea: assign the most probable class c_MAP using Bayes' theorem.
c_MAP = argmax_{c_j ∈ C} P(c_j | x_1, x_2, …, x_n)
      = argmax_{c_j ∈ C} P(x_1, x_2, …, x_n | c_j) P(c_j) / P(x_1, x_2, …, x_n)
      = argmax_{c_j ∈ C} P(x_1, x_2, …, x_n | c_j) P(c_j)
10. Parameter Estimation
P(c_j): can be estimated from the frequency of classes in the training examples.
P(x_1, x_2, …, x_n | c_j): O(|X|^n · |C|) parameters; could only be estimated if a very, very large number of training examples were available.
Independence assumption: attribute values are conditionally independent given the target value: naïve Bayes.
P(x_1, x_2, …, x_n | c_j) = Π_i P(x_i | c_j)
c_NB = argmax_{c_j ∈ C} P(c_j) Π_i P(x_i | c_j)
11. Properties
Estimating P(x_i | c_j) instead of P(x_1, x_2, …, x_n | c_j) greatly reduces the number of parameters (and the data sparseness).
The learning step in Naïve Bayes consists of estimating P(c_j) and P(x_i | c_j) based on the frequencies in the training data.
An unseen instance is classified by computing the class that maximizes the posterior.
When conditional independence is satisfied, Naïve Bayes corresponds to MAP classification.
12. Example. ‘Play Tennis’ data
Day Outlook Temperature Humidity Wind PlayTennis
Day1 Sunny Hot High Weak No
Day2 Sunny Hot High Strong No
Day3 Overcast Hot High Weak Yes
Day4 Rain Mild High Weak Yes
Day5 Rain Cool Normal Weak Yes
Day6 Rain Cool Normal Strong No
Day7 Overcast Cool Normal Strong Yes
Day8 Sunny Mild High Weak No
Day9 Sunny Cool Normal Weak Yes
Day10 Rain Mild Normal Weak Yes
Day11 Sunny Mild Normal Strong Yes
Day12 Overcast Mild High Strong Yes
Day13 Overcast Hot Normal Weak Yes
Day14 Rain Mild High Strong No
Question: For the day <sunny, cool, high, strong>, what’s
the play prediction?
13. Naive Bayes Solution
Classify any new datum instance x = (a_1, …, a_T) as:
h_NB = argmax_h P(h) P(x|h) = argmax_h P(h) Π_t P(a_t | h)
To do this based on training examples, we need to estimate the parameters from the training examples:
For each target value (hypothesis) h: estimate P̂(h).
For each attribute value a_t of each datum instance: estimate P̂(a_t | h).
14. Based on the examples in the table, classify the following datum x:
x = (Outlook=Sunny, Temp=Cool, Humidity=High, Wind=Strong)
That means: play tennis or not?
Working:
h_NB = argmax_{h ∈ [yes, no]} P(h) Π_t P(a_t | h)
     = argmax_{h ∈ [yes, no]} P(h) P(Outlook=sunny | h) P(Temp=cool | h) P(Humidity=high | h) P(Wind=strong | h)
First estimate the parameters from the table, e.g.:
P(PlayTennis=yes) = 9/14 = 0.64
P(PlayTennis=no) = 5/14 = 0.36
P(Wind=strong | PlayTennis=yes) = 3/9 = 0.33
P(Wind=strong | PlayTennis=no) = 3/5 = 0.60
etc.
Then compute:
P(yes) P(sunny|yes) P(cool|yes) P(high|yes) P(strong|yes) = 0.0053
P(no) P(sunny|no) P(cool|no) P(high|no) P(strong|no) = 0.0206
Answer: x is classified as PlayTennis = no.
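The working above can be sketched end to end in a few lines. This is a minimal frequency-counting implementation over the Play Tennis table (`posterior_score` is a hypothetical helper name, not from the slides); it reproduces the two un-normalized scores:

```python
# Rows: (Outlook, Temperature, Humidity, Wind, PlayTennis)
data = [
    ("Sunny","Hot","High","Weak","No"), ("Sunny","Hot","High","Strong","No"),
    ("Overcast","Hot","High","Weak","Yes"), ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"), ("Rain","Cool","Normal","Strong","No"),
    ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"), ("Rain","Mild","Normal","Weak","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
    ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
]

def posterior_score(x, label):
    """Un-normalized naive Bayes score: P(c) * prod_i P(x_i | c)."""
    rows = [r for r in data if r[-1] == label]
    score = len(rows) / len(data)                  # P(c) from class frequency
    for i, value in enumerate(x):                  # P(x_i | c) from attribute frequency
        score *= sum(r[i] == value for r in rows) / len(rows)
    return score

x = ("Sunny", "Cool", "High", "Strong")
scores = {c: posterior_score(x, c) for c in ("Yes", "No")}
print(scores)                         # Yes ≈ 0.0053, No ≈ 0.0206
print(max(scores, key=scores.get))    # 'No'
```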
15. Underflow Prevention
Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow.
Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.
The class with the highest final un-normalized log probability score is still the most probable:
c_NB = argmax_{c_j ∈ C} [ log P(c_j) + Σ_{i ∈ positions} log P(x_i | c_j) ]
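The log-space decision rule can be sketched as follows, reusing the Play Tennis estimates from slide 14 (the dict layout here is an illustrative assumption, not part of the slides):

```python
import math

# Per-class prior and per-attribute likelihoods P(x_i | c) for
# x = (sunny, cool, high, strong), taken from the worked example.
priors = {"yes": 9/14, "no": 5/14}
likelihoods = {
    "yes": [2/9, 3/9, 3/9, 3/9],
    "no":  [3/5, 1/5, 4/5, 3/5],
}

def log_score(c):
    # Sum of logs replaces the product of probabilities, avoiding underflow.
    return math.log(priors[c]) + sum(math.log(p) for p in likelihoods[c])

scores = {c: log_score(c) for c in priors}
# argmax is unchanged: log is monotone, so the class ranking is preserved.
print(max(scores, key=scores.get))   # 'no'
```

For long documents in text classification, with hundreds of per-position factors, this summation is what keeps the scores representable at all.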