Naive Bayes with Conditionally Dependent Data
Examination of Naive Bayes with conditionally dependent data sets.

    Naive Bayes with Conditionally Dependent Data: Presentation Transcript

    • Why does Naïve Bayesian Classification work so well amidst known conditional dependencies in the data structure?
      • Part 1: Written critique of Zhang 2004, “The Optimality of Naïve Bayes”
      • Part 2: Experiments with Naïve Bayes in the presence of different forms of synthetic conditional dependency, and with synthetic conditional dependency mixed into a benchmark data set, to demonstrate the principles outlined in Zhang 2004
      • Part 3: Summary presentation of the results of the above, along with training in the use of an “R” Naïve Bayes package
    • Naïve Bayesian Classification: a form of machine learning that avoids complicated conditional-dependency models and the requirement to specify much of the conditional dependency structure in your data. Why does it work so well amidst conditional dependency? Tim Hare
    • Naïve Bayes (naïvely, hence the name) assumes no conditional dependence, but this simplification comes at a potential cost of misclassification
      • Joint probability = likelihood * prior
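      Written out (notation is ours, not from the slide): for a class C and attribute vector (A_1, …, A_n), the classifier multiplies the prior by the class-conditional likelihood, and Naïve Bayes replaces that joint likelihood with a product of per-attribute terms:
      \[
      P(C \mid A_1,\dots,A_n) \;\propto\; \underbrace{P(C)}_{\text{prior}} \, \underbrace{P(A_1,\dots,A_n \mid C)}_{\text{likelihood}} \;\approx\; P(C) \prod_{i=1}^{n} P(A_i \mid C)
      \]
      The approximation is exact only when the attributes are conditionally independent given the class; that is the assumption the rest of the talk probes.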
    • NB performance is at odds with past theory: there is evidence in the primary literature that Naïve Bayes works better than would be anticipated given known conditional dependence in the data
      • Zhang 2004: “The Optimality of Naïve Bayes”
        • Closed-form analytical investigation (argument by proof) in support of NB being able to classify reliably despite conditional dependence, IF the dependence is of the same form across all classes
        • Contention: NB works well if the conditional dependence is of the same type in all classes within an attribute, or, when it is not of the same type, if the misclassification “cancels out” across attributes
    • Zhang 2004: Factoring a general form of Bayes into two parts: [NB] * [“something else”]
      • That more general framework can be factored into [NB] x [“something else”]
      • [“something else”] → 1 IF the conditional dependence is distributed evenly in all classes, in which case NB = the general Bayesian model
      • This is one way in which NB can perform like FB
      Take-home message: the factorization indicates that FB = NB under certain data structures, but not others (sketched below).
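      A minimal sketch of that factorization, in our own notation rather than Zhang's exact symbols: write the class-conditional joint as the NB product times a residual dependence factor D, so that the two-class posterior ratio splits into an NB part and a “something else” part:
      \[
      P(A_1,\dots,A_n \mid c) \;=\; \Big[\prod_{i=1}^{n} P(A_i \mid c)\Big] \times D(A_1,\dots,A_n \mid c)
      \]
      \[
      \frac{P(+ \mid A_1,\dots,A_n)}{P(- \mid A_1,\dots,A_n)} \;=\; \underbrace{\frac{P(+)\prod_i P(A_i \mid +)}{P(-)\prod_i P(A_i \mid -)}}_{\text{Naïve Bayes}} \;\times\; \underbrace{\frac{D(A_1,\dots,A_n \mid +)}{D(A_1,\dots,A_n \mid -)}}_{\text{``something else''}}
      \]
      If the dependence factor D is distributed the same way in both classes, the right-hand ratio is 1 and NB produces the same classification as full Bayes, however strong the dependencies are; if it differs between the classes, that ratio can flip the decision.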
    • Full Bayes (FB) and Naïve Bayes (NB) classification carried out by hand on one synthetic data vector, <1,0>. When the conditional dependence is of different types in the two classes (C1: “if A then A”; C2: “if A then B”; upper-left data grid, which you may recognize as XOR), NB fails to classify correctly, and the information is “lost” because equal probabilities cancel within each classification estimate. When the conditional dependence is of the same type in both classes (C1 = C2: “if A then B”; lower-left data grid), NB may still classify the data correctly. FB classifies correctly in BOTH instances. The NB posterior probability may be biased, but in many cases that still nets out to correct classification (the full analysis is too complex to present here); a worked sketch follows. Figure labels: “Loss (ratio is just 1) but no bias”; “Bias but no loss”.
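      As a worked sketch of the “cancellation” (assuming the canonical XOR grid with equal class priors; the slide's grids may differ in counts): each attribute's marginal is identical in the two classes, P(A_1 = 1 \mid C_1) = P(A_1 = 1 \mid C_2) = 1/2 and likewise for A_2, so for the test vector <1,0>
      \[
      \frac{P_{NB}(C_1 \mid \langle 1,0\rangle)}{P_{NB}(C_2 \mid \langle 1,0\rangle)} \;=\; \frac{P(C_1)\cdot\tfrac12\cdot\tfrac12}{P(C_2)\cdot\tfrac12\cdot\tfrac12} \;=\; \frac{P(C_1)}{P(C_2)} \;=\; 1,
      \]
      a tie, whereas the full joint table puts <1,0> in C_2 with probability 1: the class information cancels exactly as described above.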
    • Naïve Bayes in R on the synthetic conditionally dependent data we analyzed in Excel for vector <1,0> gives the same misclassification for the MIXED conditional dependence, and the correct (“Democrat”) classification in the case of “even” conditional dependence (a runnable sketch follows).
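      A runnable sketch of the same experiment (the data-generation grids here are ours, chosen to reproduce the two dependence patterns, not the exact spreadsheets from the slides):
      • library(e1071)
      • # "Mixed" dependence: class C1 follows "if A then A" (A2 equals A1),
      • # class C2 follows "if A then B" (A2 is the complement of A1); together this is the XOR pattern
      • mixed <- data.frame(
      •   Class = factor(rep(c("C1", "C2"), each = 4)),
      •   A1    = factor(rep(c(0, 0, 1, 1), times = 2)),
      •   A2    = factor(c(0, 0, 1, 1,   # C1: A2 equals A1
      •                    1, 1, 0, 0))  # C2: A2 is the complement of A1
      • )
      • # "Even" dependence: both classes follow the same rule "if A then B" (A2 is the complement of A1);
      • # the classes differ only in how often A1 = 1
      • even <- data.frame(
      •   Class = factor(rep(c("C1", "C2"), each = 4)),
      •   A1    = factor(c(1, 1, 1, 0,   # C1: mostly A1 = 1
      •                    0, 0, 0, 1)), # C2: mostly A1 = 0
      •   A2    = factor(c(0, 0, 0, 1,
      •                    1, 1, 1, 0))
      • )
      • test <- data.frame(A1 = factor(1, levels = c(0, 1)), A2 = factor(0, levels = c(0, 1)))
      • # XOR-like case: both classes get equal posteriors for <1,0>, so the class information has "cancelled"
      • predict(naiveBayes(Class ~ ., data = mixed), test, type = "raw")
      • # Even case: <1,0> is still assigned to the correct class (C1 here), though the posterior is biased
      • predict(naiveBayes(Class ~ ., data = even), test, type = "raw")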
    • Real data: House of Representatives 1984 voting record on 17 congressional bills (columns)
      • Two classes: C = (Democrat, Republican) = column 1
      • Binary attribute values are the “Yes”/“No” votes on each of the 17 bills
      • Each row is the voting record of one Congress-person on all 17 bills (a quick inspection sketch follows this list)
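      Before fitting anything, a quick sanity check on the loaded data frame can be worthwhile (a sketch; it assumes the file has been read into HV84_data as in the control-run slide below):
      • dim(HV84_data)          # expect one row per Congress-person and 18 columns (Class + 17 votes)
      • str(HV84_data)          # all columns should be factors so naiveBayes builds categorical tables
      • table(HV84_data$Class)  # class balance between Democrat and Republican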
    • Use “R” for NB classification on HV84 +/- augmentation with conditional dependence via synthetic data
      • Control run: Use Naïve Bayes to classify the unmodified voting-record data as having been cast by either a Democrat or a Republican (i.e., class 1 vs. class 2)
      • Experiment 1: Add “mixed” (“if A then A” to one class, “if A then B” the other class) conditional dependence synthetic data to the HV84 data set, and repeat the analysis of NB classification
      • Experiment 2: Add “consistent”, evenly distributed across classes (“if A then B” to both classes) conditional dependence synthetic data to the HV84 data set, and repeat the analysis of NB classification.
      • Our hand analysis (done above in Excel), as well as Zhang 2004, suggests we may not see much difference in classification.
    • Control analysis for synthetic augmentation experiments #1 and #2 (to follow): NB analysis of the HV84 real data, unmodified by synthetic data
      • #use the install packages GUI option to search for and install package 'e1071'
      • library(e1071)
      • #read the voting-record data; column 1 ("Class") holds the party label
      • HV84_data <- read.table("C:/HV84.csv", header=T, sep=",")
      • HV84_data
      • #fit Naive Bayes with Class as the response and all other columns as attributes
      • HV84_model <- naiveBayes(Class ~ ., data = HV84_data)
      • #HV84_pred_raw <- predict(HV84_model, HV84_data[1:5,-1], type = "raw")
      • #predict raw posterior probabilities and class assignments, dropping the Class column
      • HV84_pred_raw <- predict(HV84_model, HV84_data[,-1], type = "raw")
      • HV84_pred_class <- predict(HV84_model, HV84_data[,-1])
      • #confusion matrix of predicted vs. actual party; export the raw posteriors for inspection
      • table(HV84_pred_class, HV84_data$Class)
      • write.csv(HV84_pred_raw, file = "c:/HV84_pred_raw.csv")
    • Augmentation with synthetic data -- experiment 1: NB analysis on HV84 augmented by the conditionally dependent synthetic data, with the conditional dependence of the different types (“mixed”) in the two classes
      • #use the install packages GUI option to search for and install package 'e1071'
      • library(e1071)
      • HV84_MIXEDCD_data <- read.table("C:/HV84_MIXEDCD.csv", header=T, sep=",")
      • HV84_MIXEDCD_data
      • HV84_MIXEDCD_model <- naiveBayes(Class ~ ., data = HV84_MIXEDCD_data)
      • #HV84_MIXEDCD_pred_raw <- predict(HV84_MIXEDCD_model, HV84_MIXEDCD_data[1:5,-1], type = "raw")
      • HV84_MIXEDCD_pred_raw <- predict(HV84_MIXEDCD_model, HV84_MIXEDCD_data[,-1], type = "raw")
      • HV84_MIXEDCD_pred_class <- predict(HV84_MIXEDCD_model, HV84_MIXEDCD_data[,-1])
      • table(HV84_MIXEDCD_pred_class, HV84_MIXEDCD_data$Class)
      • write.csv(HV84_MIXEDCD_pred_raw, file = "c:/HV84_MIXEDCD_pred_raw.csv")
    • Augmentation with synthetic data -- experiment 2: NB analysis on HV84 augmented by the conditionally dependent synthetic data, with the conditional dependence of the same type (“even”) in the two classes
      • #use the install packages GUI option to search for and install package 'e1071'
      • library(e1071)
      • HV84_EVENCD_data <- read.table("C:/HV84_EVENCD.csv", header=T, sep=",")
      • HV84_EVENCD_data
      • HV84_EVENCD_model <- naiveBayes(Class ~ ., data = HV84_EVENCD_data)
      • #HV84_EVENCD_pred_raw <- predict(HV84_EVENCD_model, HV84_EVENCD_data[1:5,-1], type = "raw")
      • HV84_EVENCD_pred_raw <- predict(HV84_EVENCD_model, HV84_EVENCD_data[,-1], type = "raw")
      • HV84_EVENCD_pred_class <- predict(HV84_EVENCD_model, HV84_EVENCD_data[,-1])
      • table(HV84_EVENCD_pred_class, HV84_EVENCD_data$Class)
      • write.csv(HV84_EVENCD_pred_raw, file = "c:/HV84_EVENCD_pred_raw.csv")
    • Matrices of classification outcomes for the control (top matrix), “mixed” (middle matrix), and “even” (bottom matrix) runs: no adverse impact on classification. The same assignments are made in each experiment, indicating that augmenting the real data with these two types of conditional dependence does not influence classification, at least with this HV84 data set.
    • The raw probabilities, however, show that even though the class assignments did not change across CONTROL, EXPT #1, and EXPT #2, differences (in this case slight) are imparted to the probability estimates, as expected. It is important to note that we added only 2 attributes (columns) to the original 17, so the percentage of “contamination” by synthetic data is small. Additional exploration could be done with increasing percentages of conditional dependence added into the original HV84 data set (one way to script this is sketched below).
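      One way such an exploration could be scripted (a sketch; the helper name, the synthetic column names, and the generation scheme are ours, and HV84_data is assumed to be loaded as in the control run):
      • # Append n_pairs of synthetic attribute pairs with the same "if A then B" dependence in both
      • # classes: B is always the complement of A, and A is generated independently of Class
      • add_even_cd <- function(df, n_pairs) {
      •   for (k in seq_len(n_pairs)) {
      •     A <- factor(sample(c("y", "n"), nrow(df), replace = TRUE))
      •     B <- factor(ifelse(A == "y", "n", "y"))
      •     df[[paste0("SynA", k)]] <- A
      •     df[[paste0("SynB", k)]] <- B
      •   }
      •   df
      • }
      • # Example: raise the synthetic share from 2 of 19 attribute columns to 10 of 27
      • HV84_more_cd <- add_even_cd(HV84_data, n_pairs = 5)
      • more_cd_model <- naiveBayes(Class ~ ., data = HV84_more_cd)
      • table(predict(more_cd_model, HV84_more_cd[, -1]), HV84_more_cd$Class)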
    • Knowledge check: FB or NB?
        • You have 3 million potential pro-drug compounds to evaluate given their chemical features.
        • Each compound is described by a feature vector of 20 chemical attributes
        • 3,000 (0.1%, random sample) of these have been run through the assay, so you can classify these as “active” or “inactive”
        • What would be the pros and cons of building a Bayesian model using FB vs NB, to predict which of the un-assayed compounds might be potentially attractive to evaluate further?
        • 1) hard to know conditional dependencies and can tolerate some inaccuracy? → NB
        • 2) use the 3000 we know to assess conditional dependence → FB
        • 3) need very accurate probability estimates in classification → FB
        • 4) We can afford false positives or false negatives → NB
    • References
      • Zhang, 2004, “The Optimality of Naïve Bayes”. In: Proceedings of the 17th International FLAIRS Conference, Florida, USA.
      • Zhang & Ling, 2001, “Learnability of Augmented Naïve Bayes in Nominal Domains”. In: Proceedings of the Eighteenth International Conference on Machine Learning, 617–623.
      • Friedman & Fayyad, 1997, “On Bias, Variance, 0/1-Loss, and the Curse of Dimensionality”. Data Mining and Knowledge Discovery 1, 55–77.
      • Domingos & Pazzani, 1997, “Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier”. Machine Learning 41(1):5–15.
    • Q & A