Naive Bayes 
Md Enamul Haque Chowdhury 
ID : CSE013083972D 
University of Luxembourg 
(Based on presentations by Ke Chen and Ashraf Uddin)
Contents 
 Background 
 Bayes Theorem 
 Bayesian Classifier 
 Naive Bayes 
 Uses of Naive Bayes classification 
 Relevant Issues 
 Advantages and Disadvantages 
 Some NBC Applications 
 Conclusions 
Background 
 There are three methods to establish a classifier 
a) Model a classification rule directly 
Examples: k-NN, decision trees, perceptron, SVM 
b) Model the probability of class memberships given input data 
Example: perceptron with the cross-entropy cost 
c) Make a probabilistic model of data within each class 
Examples: Naive Bayes, Model based classifiers 
 a) and b) are examples of discriminative classification 
 c) is an example of generative classification 
 b) and c) are both examples of probabilistic classification 
Bayes Theorem 
 Given a hypothesis h and data D which bears on the hypothesis, Bayes' theorem relates the two:

$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$$

 P(h): prior probability of h, before seeing the data
 P(D): probability of the data D, regardless of any hypothesis
 P(D|h): conditional probability of D given h: the likelihood
 P(h|D): conditional probability of h given D: the posterior probability
Maximum A Posteriori
 Based on Bayes' theorem, we can compute the Maximum A Posteriori (MAP) hypothesis for the data.
 We are interested in the best hypothesis from the hypothesis space H (the set of all hypotheses) given the observed training data D:
$$h_{MAP} = \underset{h \in H}{\arg\max}\; P(h \mid D) = \underset{h \in H}{\arg\max}\; \frac{P(D \mid h)\,P(h)}{P(D)} = \underset{h \in H}{\arg\max}\; P(D \mid h)\,P(h)$$

 Note that we can drop P(D), as the probability of the data is constant (and independent of the hypothesis).
Maximum Likelihood 
 Now assume that all hypotheses are equally probable a priori, i.e. P(h_i) = P(h_j) for all h_i, h_j ∈ H.
 This is called assuming a uniform prior. It simplifies computing the posterior:
$$h_{ML} = \underset{h \in H}{\arg\max}\; P(D \mid h)$$
 This hypothesis is called the maximum likelihood hypothesis. 
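To make the two rules concrete, here is a minimal Python sketch of selecting the MAP and ML hypotheses over a small discrete hypothesis space; the priors and likelihoods are made up purely for illustration.

```python
# MAP vs. ML hypothesis selection over a discrete hypothesis space H.
# The priors P(h) and likelihoods P(D|h) below are illustrative only.
priors = {"h1": 0.7, "h2": 0.2, "h3": 0.1}          # P(h)
likelihoods = {"h1": 0.05, "h2": 0.30, "h3": 0.40}  # P(D | h)

# MAP: maximize P(D|h) * P(h); P(D) is constant and can be dropped.
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])

# ML: assume a uniform prior, so only the likelihood matters.
h_ml = max(likelihoods, key=likelihoods.get)

print(h_map)  # 'h2' (0.30 * 0.2 = 0.060 beats 0.035 and 0.040)
print(h_ml)   # 'h3'
```

Note how a strong prior can pull the MAP choice away from the hypothesis that merely fits the data best.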
Bayesian Classifier 
 The classification problem may be formalized using a posteriori probabilities:
 P(C|X) = probability that the sample tuple X = <x1,…,xk> is of class C
 E.g. P(class=N | outlook=sunny, windy=true,…)
 Idea: assign to sample X the class label C such that P(C|X) is maximal
Estimating a posteriori probabilities
 Bayes theorem: 
P(C|X) = P(X|C)·P(C) / P(X) 
 P(X) is constant for all classes 
 P(C) = relative freq of class C samples 
 C such that P(C|X) is maximum = C such that P(X|C)·P(C) is maximum 
 Problem: computing P(X|C) directly is infeasible, since it requires estimating the joint distribution over all attributes!
Naive Bayes 
 Bayes classification 
$$P(C \mid \mathbf{X}) \propto P(\mathbf{X} \mid C)\,P(C) = P(X_1, \dots, X_n \mid C)\,P(C)$$

Difficulty: learning the joint probability $P(X_1, \dots, X_n \mid C)$
 Naive Bayes classification
- Assumption: all input features are conditionally independent given the class!

$$P(X_1, X_2, \dots, X_n \mid C) = P(X_1 \mid X_2, \dots, X_n, C)\,P(X_2, \dots, X_n \mid C) = P(X_1 \mid C)\,P(X_2 \mid C) \cdots P(X_n \mid C)$$

- MAP classification rule: for $\mathbf{x} = (x_1, x_2, \dots, x_n)$, assign the label $c^*$ if

$$[P(x_1 \mid c^*) \cdots P(x_n \mid c^*)]\,P(c^*) > [P(x_1 \mid c) \cdots P(x_n \mid c)]\,P(c), \quad c \neq c^*,\; c \in \{c_1, \dots, c_L\}$$
Naive Bayes 
 Algorithm: Discrete-Valued Features 
- Learning Phase: Given a training set S,
  For each target value $c_i$ $(c_i = c_1, \dots, c_L)$:
    $\hat{P}(C = c_i) \leftarrow$ estimate $P(C = c_i)$ with examples in S;
  For every feature value $x_{jk}$ of each feature $X_j$ $(j = 1, \dots, n;\; k = 1, \dots, N_j)$:
    $\hat{P}(X_j = x_{jk} \mid C = c_i) \leftarrow$ estimate $P(X_j = x_{jk} \mid C = c_i)$ with examples in S;
  Output: conditional probability tables; for $X_j$, $N_j \times L$ elements
- Test Phase: Given an unknown instance $\mathbf{X}' = (a'_1, \dots, a'_n)$,
  look up the tables to assign the label $c^*$ to $\mathbf{X}'$ if

$$[\hat{P}(a'_1 \mid c^*) \cdots \hat{P}(a'_n \mid c^*)]\,\hat{P}(c^*) > [\hat{P}(a'_1 \mid c) \cdots \hat{P}(a'_n \mid c)]\,\hat{P}(c), \quad c \neq c^*,\; c \in \{c_1, \dots, c_L\}$$
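A minimal Python sketch of the two phases above, assuming categorical string-valued features; probabilities are estimated as plain relative frequencies, with no smoothing (the zero-probability issue is discussed on a later slide).

```python
from collections import Counter, defaultdict

def train(examples, labels):
    """Learning phase: estimate P(C=c) and P(X_j=x | C=c) by relative frequency."""
    class_counts = Counter(labels)
    priors = {c: n / len(labels) for c, n in class_counts.items()}
    cond = defaultdict(float)  # cond[(j, value, c)] ~ P(X_j = value | C = c)
    for x, c in zip(examples, labels):
        for j, value in enumerate(x):
            cond[(j, value, c)] += 1 / class_counts[c]
    return priors, cond

def predict(x, priors, cond):
    """Test phase: MAP rule over the estimated probabilities."""
    def score(c):
        s = priors[c]
        for j, value in enumerate(x):
            s *= cond[(j, value, c)]  # 0.0 for unseen (value, class) pairs
        return s
    return max(priors, key=score)

# Tiny illustrative usage:
priors, cond = train([("Sunny", "Hot"), ("Rain", "Cool")], ["No", "Yes"])
print(predict(("Rain", "Cool"), priors, cond))  # 'Yes'
```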
Example
[Training data, shown as a table in the original slides: the 14-example "PlayTennis" weather dataset with features Outlook, Temperature, Humidity, Wind and target Play, from which the tables below are estimated.]
Example 
Learning Phase:

P(Play=Yes) = 9/14    P(Play=No) = 5/14

Outlook  | Play=Yes | Play=No
Sunny    |   2/9    |   3/5
Overcast |   4/9    |   0/5
Rain     |   3/9    |   2/5

Temperature | Play=Yes | Play=No
Hot         |   2/9    |   2/5
Mild        |   4/9    |   2/5
Cool        |   3/9    |   1/5

Humidity | Play=Yes | Play=No
High     |   3/9    |   4/5
Normal   |   6/9    |   1/5

Wind   | Play=Yes | Play=No
Strong |   3/9    |   3/5
Weak   |   6/9    |   2/5
Example 
 Test Phase:
- Given a new instance, predict its label:
x´ = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong)
- Look up the tables obtained in the learning phase:
P(Outlook=Sunny|Play=Yes) = 2/9        P(Outlook=Sunny|Play=No) = 3/5
P(Temperature=Cool|Play=Yes) = 3/9     P(Temperature=Cool|Play=No) = 1/5
P(Humidity=High|Play=Yes) = 3/9        P(Humidity=High|Play=No) = 4/5
P(Wind=Strong|Play=Yes) = 3/9          P(Wind=Strong|Play=No) = 3/5
P(Play=Yes) = 9/14                     P(Play=No) = 5/14
- Decision making with the MAP rule:
P(Yes|x´) ∝ [ P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes) ] P(Play=Yes) = 0.0053
P(No|x´) ∝ [ P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No) ] P(Play=No) = 0.0206
Given that P(Yes|x´) < P(No|x´), we label x´ "No".
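The arithmetic is easy to check directly; a quick sketch using exact fractions:

```python
from fractions import Fraction as F

# Unnormalized posteriors for x' = (Sunny, Cool, High, Strong),
# read off the conditional-probability tables above.
p_yes = F(2, 9) * F(3, 9) * F(3, 9) * F(3, 9) * F(9, 14)
p_no  = F(3, 5) * F(1, 5) * F(4, 5) * F(3, 5) * F(5, 14)

print(float(p_yes))  # ~0.0053
print(float(p_no))   # ~0.0206  -> x' is labeled "No"
```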
Naive Bayes 
 Algorithm: Continuous-Valued Features
- A continuous feature has infinitely many possible values, so probability tables cannot be used
- The conditional probability is instead often modeled with the normal (Gaussian) distribution:

$$\hat{P}(X_j \mid C = c_i) = \frac{1}{\sqrt{2\pi}\,\sigma_{ji}} \exp\!\left(-\frac{(X_j - \mu_{ji})^2}{2\sigma_{ji}^2}\right)$$

$\mu_{ji}$: mean (average) of the feature values $X_j$ of examples for which $C = c_i$
$\sigma_{ji}$: standard deviation of the feature values $X_j$ of examples for which $C = c_i$
- Learning Phase: for $\mathbf{X} = (X_1, \dots, X_n)$ and $C = c_1, \dots, c_L$
  Output: $n \times L$ normal distributions, and $P(C = c_i)$ for $i = 1, \dots, L$
- Test Phase: Given an unknown instance $\mathbf{X}' = (a'_1, \dots, a'_n)$,
  instead of looking up tables, calculate the conditional probabilities with the normal distributions obtained in the learning phase,
  then apply the MAP rule to make a decision
Naive Bayes 
 Example: Continuous-Valued Features
- Temperature is naturally a continuous value.
Yes: 25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8
No: 27.3, 30.1, 17.4, 29.5, 15.1
- Estimate the mean and variance for each class:

$$\hat{\mu} = \frac{1}{N}\sum_{n=1}^{N} x_n, \qquad \hat{\sigma}^2 = \frac{1}{N-1}\sum_{n=1}^{N} (x_n - \hat{\mu})^2$$

$\mu_{Yes} = 21.64,\; \sigma_{Yes} = 2.35 \qquad \mu_{No} = 23.88,\; \sigma_{No} = 7.09$
- Learning Phase: output two Gaussian models for P(temp|C):

$$\hat{P}(x \mid Yes) = \frac{1}{2.35\sqrt{2\pi}} \exp\!\left(-\frac{(x - 21.64)^2}{2 \cdot 2.35^2}\right), \qquad \hat{P}(x \mid No) = \frac{1}{7.09\sqrt{2\pi}} \exp\!\left(-\frac{(x - 23.88)^2}{2 \cdot 7.09^2}\right)$$
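These estimates are straightforward to reproduce; a short sketch that fits the two Gaussians and evaluates them at an example test temperature:

```python
import math

yes = [25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8]
no  = [27.3, 30.1, 17.4, 29.5, 15.1]

def fit_gaussian(xs):
    # Mean, and standard deviation with the unbiased (N-1) variance
    # estimator, matching the formulas on the slide above.
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)
    return mu, math.sqrt(var)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

mu_y, sd_y = fit_gaussian(yes)  # ~21.64, ~2.35
mu_n, sd_n = fit_gaussian(no)   # ~23.88, ~7.09

# Class-conditional densities for, e.g., temp = 20:
print(gaussian_pdf(20.0, mu_y, sd_y), gaussian_pdf(20.0, mu_n, sd_n))
```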
Uses of Naive Bayes classification 
 Text Classification 
 Spam Filtering 
 Hybrid Recommender System 
- Recommender Systems apply machine learning and data mining techniques for 
filtering unseen information and can predict whether a user would like a given 
resource 
 Online Application 
- Simple Emotion Modeling 
Why text classification? 
 Learning which articles are of interest 
 Classify web pages by topic 
 Information extraction 
 Internet filters 
Examples of Text Classification 
 CLASSES=BINARY 
 “spam” / “not spam” 
 CLASSES =TOPICS 
 “finance” / “sports” / “politics” 
 CLASSES =OPINION 
 “like” / “hate” / “neutral” 
 CLASSES =TOPICS 
 “AI” / “Theory” / “Graphics” 
 CLASSES =AUTHOR 
 “Shakespeare” / “Marlowe” / “Ben Jonson” 
Naive Bayes Approach 
 Build the Vocabulary as the list of all distinct words that appear in all the documents 
of the training set. 
 Remove stop words and markings 
 The words in the vocabulary become the attributes, assuming that classification is 
independent of the positions of the words 
 Each document in the training set becomes a record with frequencies for each word 
in the Vocabulary. 
 Train the classifier on the training data set by computing the prior probabilities for each class and attribute.
 Evaluate the results on test data (a code sketch of the preprocessing steps follows).
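A minimal sketch of the preprocessing above: build the vocabulary from all training documents, drop stop words and markings, and turn each document into a word-frequency record. The stop-word list here is a tiny illustrative stand-in for a real one.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is"}  # illustrative only

def tokenize(doc):
    # Lowercase, strip markings/punctuation, drop stop words.
    return [w for w in re.findall(r"[a-z']+", doc.lower()) if w not in STOP_WORDS]

def build_vocabulary(docs):
    # The vocabulary: all distinct words across the training documents.
    return sorted({w for doc in docs for w in tokenize(doc)})

def to_record(doc, vocabulary):
    # Each document becomes a record of per-word frequencies (positions ignored).
    counts = Counter(tokenize(doc))
    return [counts[w] for w in vocabulary]

docs = ["The match was great", "Stocks fell in the market"]
vocab = build_vocabulary(docs)
print(vocab)
print(to_record(docs[0], vocab))
```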
Text Classification Algorithm: Naive Bayes 
 T_ct – the number of occurrences of word t in the documents of class c
 Σ_t′ T_ct′ – the total number of words in the documents of class c
 B – the number of distinct words across all classes (the vocabulary size)
These counts feed the smoothed word-probability estimate (see the Laplace correction below):

$$\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t'} T_{ct'} + B}$$
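A compact sketch of this estimator, assuming documents are already tokenized as in the preprocessing step above:

```python
from collections import Counter, defaultdict

def train_word_probs(docs, labels):
    """docs: list of token lists; labels: one class per document.
    Returns P(t|c) with add-one smoothing: (T_ct + 1) / (sum_t' T_ct' + B)."""
    word_counts = defaultdict(Counter)  # word_counts[c][t] = T_ct
    for tokens, c in zip(docs, labels):
        word_counts[c].update(tokens)
    vocab = {t for counter in word_counts.values() for t in counter}
    B = len(vocab)
    totals = {c: sum(counter.values()) for c, counter in word_counts.items()}
    return lambda t, c: (word_counts[c][t] + 1) / (totals[c] + B)

p = train_word_probs([["cheap", "pills"], ["meeting", "agenda"]], ["spam", "ham"])
print(p("cheap", "spam"), p("cheap", "ham"))  # smoothed: never exactly zero
```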
Relevant Issues 
 Violation of Independence Assumption 
 Zero conditional probability Problem 
Violation of Independence Assumption 
 Naive Bayesian classifiers assume that the effect of an attribute value on a given 
class is independent of the values of the other attributes. This assumption is called 
class conditional independence. It is made to simplify the computations involved and, 
in this sense, is considered “naive.” 
Improvement 
 Bayesian belief networks are graphical models which, unlike naive Bayesian classifiers, allow the representation of dependencies among subsets of attributes.
 Bayesian belief networks can also be used for classification.
Zero conditional probability Problem 
 If a given class and feature value never occur together in the training set then the 
frequency-based probability estimate will be zero. 
 This is problematic since it will wipe out all information in the other probabilities when 
they are multiplied. 
 It is therefore often desirable to incorporate a small-sample correction in all 
probability estimates such that no probability is ever set to be exactly zero. 
Naive Bayes Laplace Correction 
 To eliminate zeros, we use add-one or Laplace smoothing, which simply adds one to 
each count 
Example 
 Suppose that the class buys_computer = yes in some training database D contains 1000 tuples:
 we have 0 tuples with income = low,
 990 tuples with income = medium, and
 10 tuples with income = high.
 The probabilities of these events, without the Laplacian correction, are 0, 0.990 (from 990/1000), and 0.010 (from 10/1000), respectively.
 Using the Laplacian correction for the three quantities, we pretend that we have 1 more tuple for each income value. In this way, we instead obtain the probabilities 1/1003 ≈ 0.001, 991/1003 ≈ 0.988, and 11/1003 ≈ 0.011, respectively. The "corrected" probability estimates are close to their "uncorrected" counterparts, yet the zero probability value is avoided.
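The same arithmetic in a few lines:

```python
# Laplacian correction on the income counts above (1000 tuples, 3 values).
counts = {"low": 0, "medium": 990, "high": 10}

uncorrected = {k: v / 1000 for k, v in counts.items()}
corrected = {k: (v + 1) / (1000 + 3) for k, v in counts.items()}  # one extra tuple per value

print(uncorrected)  # {'low': 0.0, 'medium': 0.99, 'high': 0.01}
print(corrected)    # low ~0.001, medium ~0.988, high ~0.011 -- no zeros
```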
Advantages 
 Easy to implement
 Requires a small amount of training data to estimate the parameters
 Good results obtained in most of the cases
Disadvantages 
 Assumption of class conditional independence, and therefore a loss of accuracy
 In practice, dependencies exist among variables
- E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
 Dependencies among these cannot be modelled by a naive Bayesian classifier
Some NBC Applications 
 Credit scoring 
 Marketing applications 
 Employee selection 
 Image processing 
 Speech recognition 
 Search engines… 
Conclusions 
 Naive Bayes is: 
- Really easy to implement and often works well 
- Often a good first thing to try 
- Commonly used as a “punching bag” for smarter algorithms 
References 
 http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/mlbook/ch6.pdf
 Han, Kamber & Pei, Data Mining: Concepts and Techniques, 3rd Edition, ISBN 9780123814791
 http://en.wikipedia.org/wiki/Naive_Bayes_classifier
 http://www.slideshare.net/ashrafmath/naive-bayes-15644818
 http://www.slideshare.net/gladysCJ/lesson-71-naive-bayes-classifier
Questions?