week05.ppt
  • From: http://www.norsys.com/tutorials/netica/secA/tut_A1.htm

    As your clinic grows and you handle hundreds of patient cases, you learn that while the textbooks may have described the North American situation, the reality of your clinic and its population of patients is very different. This is what your data collection efforts reveal: 50% of your patients smoke, 1% have TB, 5.5% have lung cancer, and 45% have some form of mild or chronic bronchitis. You enter these new figures into your net, and now you have a practical Bayes net, one that really describes the kind of patient you typically deal with. So, let us see how we would use this net in our daily medical practice.

    The first thing we should note is that the above net describes a new patient, one who has just been referred to us, and for whom we have no knowledge whatsoever, other than that they are from our target population. As we acquire knowledge specific to each particular patient, the probabilities in the net will automatically adjust. This is the great beauty and power of Bayesian inference in action. And the great strength of the Bayes net approach is that the probabilities that result at each stage of knowledge buildup are mathematically and scientifically sound. In other words, given whatever knowledge we have about our patient, then based on the best mathematical and statistical knowledge to date, the net will tell us what we can legitimately conclude. This is a very powerful tool. You as a doctor are not just relying on hunches, or an intuitive sense of the likelihood of illness, as you may have in the past, but rather on a scientifically and provably accurate estimate of the likelihood of illness, one that gets more and more accurate as you gain knowledge about the particular patient, or about the particular population that the patient comes from.

    So, let us see how adding knowledge about a particular patient adjusts the probabilities. Say a woman walks in, a new patient, and we begin talking to her. She tells us that she is often short of breath (dyspnea), so we enter that finding into our net. With Netica this is as simple as pointing your mouse at a node and clicking on it once, whereupon a list of available states pops up; you then click on the correct item in the list. After doing that, the Dyspnea box is greyed, indicating that we have evidence for it being in one of its states. In this case, because our patient appears trustworthy, we say we are 100% certain that our patient has dyspnea. (It is easy with Netica to enter an uncertain finding, also called a likelihood finding, say of 90% Present, but let's keep things simple for now.)

    Observe how, with this new finding that our patient has dyspnea, the probabilities for all three illnesses have increased. Why is this? Since all those illnesses have dyspnea as a symptom, and our patient is indeed exhibiting this symptom, it only makes sense that our belief in the possible presence of those illnesses should increase. Basically, the presence of the symptom has increased our belief that she might be seriously ill. Let's look at those inferences more closely.

    1. The most significant jump is for Bronchitis, from 45% to 83.4%. Why such a large jump? Bronchitis is far more common than cancer or TB, so once we have evidence for serious lung illness, it becomes our most likely candidate diagnosis.
    2. The chances that our patient is a smoker have increased substantially, from 50% to 63.4%.
    3. The chances that she recently visited Asia have increased very slightly, from 1% to 1.03%, which is insignificant.
    4. The chances of our getting an abnormal X-ray from our patient have also gone up marginally, from 11% to 16%.

    If you think about this expansion of our knowledge, it is truly quite helpful. We have only entered one finding, the presence of Dyspnea, and this knowledge has "propagated", or spread its influence, around the net, accurately updating all the other beliefs. Some of our beliefs are increased substantially, others hardly at all, and the beauty of it is that the amounts are precisely quantified.

    We still do not know precisely what is ailing our patient. Our current best belief is that she suffers from Bronchitis (probability of Present = 83.4%). However, we would like to increase our chances of a correct diagnosis: if we stop here, diagnose her with Bronchitis, and she really has cancer, we would be a poor doctor indeed. We need more information. So, being thorough, we run through our standard checklist of questions. We ask her if she has been to Asia recently, and surprisingly she answers "yes". With that finding entered, the chances of tuberculosis increase substantially, from 2% to 9%. Note, interestingly, that the chances of lung cancer, of bronchitis, and of our patient being a smoker have all decreased. Why is this? Because the dyspnea is now more strongly explained by tuberculosis than before (although bronchitis still remains the best candidate diagnosis), and because cancer and bronchitis are now less probable, so is smoking. This phenomenon is called "explaining away" in Bayes net circles: when you have competing possible causes for some event, and the chances of one of those causes increase, the chances of the other causes must decline, since they are being "explained away" by the first explanation.

    To continue with our example, suppose we ask more questions and find out that our patient is indeed a smoker. Our current best hypothesis still remains that the patient is suffering from Bronchitis, and not TB or lung cancer. But to be sure, we order a diagnostic X-ray. If the X-ray turns out normal, this more strongly confirms Bronchitis and disconfirms TB or lung cancer. But suppose the X-ray were abnormal. Then TB or Lung Cancer shoots up enormously in probability: Bronchitis is still the most probable of the three separate illnesses, but it is now less probable than the combined hypothesis of TB or Lung Cancer. We would then decide to perform further tests: blood tests, lung tissue biopsies, and so forth. Our current Bayes net does not cover those tests, but it would be easy to extend it by simply adding extra nodes as we acquire new statistics for those diagnostic procedures. And we do not need to throw away any part of the previous net. This is another powerful feature of Bayes nets: they are easily extended (or reduced and simplified) to suit your changing needs and your changing knowledge.
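As a concrete companion to the tutorial's walkthrough, here is a minimal sketch (in Python) of exact Bayes-net inference by brute-force enumeration over a simplified chest-clinic network. The structure follows the tutorial, but the CPT numbers are illustrative assumptions rather than Norsys's actual tables, so the posteriors will not match the tutorial's figures; the point is that entering the Dyspnea finding raises the belief in Bronchitis, exactly as described above.

    # Exact inference by enumeration over a simplified chest-clinic network.
    # CPT values below are illustrative assumptions, not the tutorial's tables.
    from itertools import product

    parents = {
        "Smoker": (), "VisitAsia": (),
        "Bronchitis": ("Smoker",), "LungCancer": ("Smoker",), "TB": ("VisitAsia",),
        "Dyspnea": ("Bronchitis", "LungCancer", "TB"),
    }
    cpt = {  # parent-value tuple -> P(node = True | parents)
        "Smoker": {(): 0.5}, "VisitAsia": {(): 0.01},
        "Bronchitis": {(True,): 0.6, (False,): 0.3},
        "LungCancer": {(True,): 0.1, (False,): 0.01},
        "TB": {(True,): 0.05, (False,): 0.01},
        # dyspnea is much more likely if any of the three illnesses is present
        "Dyspnea": {pv: 0.8 if any(pv) else 0.1
                    for pv in product([True, False], repeat=3)},
    }
    order = ["Smoker", "VisitAsia", "Bronchitis", "LungCancer", "TB", "Dyspnea"]

    def joint(assign):
        """P(full assignment) = product over nodes of P(node | its parents)."""
        p = 1.0
        for v in order:
            p_true = cpt[v][tuple(assign[u] for u in parents[v])]
            p *= p_true if assign[v] else 1.0 - p_true
        return p

    def posterior(query, evidence):
        """P(query = True | evidence), summing the joint over all assignments."""
        num = den = 0.0
        for values in product([True, False], repeat=len(order)):
            assign = dict(zip(order, values))
            if any(assign[k] != v for k, v in evidence.items()):
                continue
            p = joint(assign)
            den += p
            num += p if assign[query] else 0.0
        return num / den

    print(posterior("Bronchitis", {}))                 # prior belief
    print(posterior("Bronchitis", {"Dyspnea": True}))  # belief after the finding

Enumeration is exponential in the number of variables; tools like Netica use far more efficient propagation algorithms, but the updated beliefs are the same.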

week05.ppt: Presentation Transcript

  • Data Mining - CSE5230: Bayesian Classification and Bayesian Networks (CSE5230/DMS/2004/5)
  • Lecture Outline
    • What is Classification?
    • Measuring Classifier Performance
    • Bayesian Classifiers
    • Bayes Theorem
    • Naïve Bayesian Classification
      • Example application: spam filtering
    • Bayesian Belief Networks
    • Training Bayesian Belief Networks
    • Why use Bayesian Classifiers?
    • Example Software: Netica
  • Lecture Objectives
    • By the end of this lecture you should be able to:
      • Explain what classification is
      • Give examples of several different types of classification algorithms and models used in data mining
      • State some basic performance measures that sophisticated classifiers should be able to beat if they are to be considered useful
      • Explain how Bayes rule relates the probabilities of evidence and class membership hypotheses
      • Explain why the Naïve Bayes classifier is called “naïve” – i.e. the assumption on which it is based
      • Explain how Bayesian networks can produce classifiers that are not “naïve”, and can exploit expert knowledge about relationships between variables
  • What is classification?
    • A classifier assigns items to classes
      • In data mining, the items are typically records from a database
      • The classes are defined by the person doing the data mining
        • each class has a class label, or name
          • this is the major difference between classification and clustering: clusters are not predefined, and are not labelled
    • A classifier “decides” which class an item belongs in on the basis of the values of its attributes
    • Often the class label corresponds to the value of one of the items’ attributes
      • The aim is to create a classifier that can predict the value of this attribute when the other attributes are known for a new data item
      • the attributes are divided into input attributes and the output attribute (or class attribute)
  • Learning Classifiers from Data
    • Classifiers are typically “learnt” from a training data set
        • “learning” means determining the values of the parameters of the model
        • some learning algorithms also determine the structure of the model
      • This is often called training the classifier
    • The training data consists of many examples
      • each example is made up of a set of input attribute values and the desired output attribute value for that input
    • This is an example of supervised learning
    • Once a classifier is learnt, it should be tested on data not used during training – a test data set
      • Techniques exist for making use of the same data set for both training and testing, e.g. cross-validation (see the sketch below)
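As a concrete illustration of cross-validation, here is a minimal sketch using scikit-learn (a library the slides do not mention; it is assumed here purely for demonstration). Each of the 5 folds serves once as test data for a model trained on the other 4.

    # 5-fold cross-validation of a naive Bayes classifier on a toy data set.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(GaussianNB(), X, y, cv=5)  # one accuracy per fold
    print(scores.mean())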
  • Classifiers used in Data Mining
    • There are many different classifiers and classification models used in data mining
      • most have their origins in machine learning and artificial intelligence research
      • Some come from classic statistics
    • Examples of classification models and classifiers:
      • Discriminant Analysis (http://www.statsoftinc.com/textbook/stdiscan.html)
      • K-Nearest-Neighbour Classifier
      • Naïve Bayes Classifier
      • Decision Trees (CART, CHAID, ID3, C4.5, etc.)
      • Feedforward Neural Networks (typically trained using the Backpropagation algorithm)
      • Support Vector Machines
      • and more…
  • Measuring Classifier Performance (1)
    • It is important to know if a classifier does a good job
      • a classifier does a good job if it assigns new data items to the correct classes
    • Classification performance is often expressed as a “percent correct” measure
      • classification performance of 70% correct means that 70% of items in the test data set are classified correctly
    • It is important to note that 100% correct performance is often impossible to achieve – even on the training data set
      • it is usually possible to get better performance on the training data by increasing the model complexity…
      • … but this often causes worse performance on the test data. This is called over-fitting the data.
  • Measuring Classifier Performance (2)
    • “Percentage correct” measures can be misleading, and must be treated with caution
      • It is important to know the prior probabilities of the classes
    • We really want to know whether the classifier is doing better than we could do by assigning items to classes at random, e.g.
      • imagine that we want to classify all the chickens in a shed as either male or female. In this shed, 90% of chickens are female
      • We would get “90% correct” performance just by classifying all chickens as female, without even looking at them!
    • This is the first thing to check when evaluating a classifier: is it better than chance?
  • Measuring Classifier Performance (3)
    • Whenever you are presented with a measure of classification performance, you should consider whether it is doing better than chance
    • A measure exists that indicates how much better than chance the performance actually is. This is known as Cohen’s κ-statistic (kappa statistic) [Dun1989]:
    • κ = (p_o - p_e) / (1 - p_e), where p_o is the observed frequency of correct classification, p_e is the frequency of correct classification expected by chance, and both frequencies are in [0,1] (a small worked example follows this slide)
    • There are also other, more sophisticated measures of classifier performance. Good data mining packages, such as Weka, report several
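As a worked example of the kappa statistic (a minimal sketch; the 90% figure is the chicken-shed example from the earlier slide, while the 95% figure is invented for contrast):

    def kappa(p_observed, p_chance):
        """kappa = (p_o - p_e) / (1 - p_e), with frequencies in [0, 1]."""
        return (p_observed - p_chance) / (1.0 - p_chance)

    # Classifying every chicken as female scores 90% correct, but always
    # guessing the majority class also scores 90% by chance, so kappa = 0.
    print(kappa(0.90, 0.90))  # 0.0 -- no better than chance
    print(kappa(0.95, 0.90))  # 0.5 -- halfway between chance and perfect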
  • Bayesian Classifiers
    • Bayesian Classifiers are statistical classifiers
      • based on Bayes Theorem (see following slides)
    • They can predict the probability that a data item is a member of a particular class
    • Perhaps the simplest Bayesian Classifier is known as the Naïve Bayesian Classifier
      • based on an independence assumption (which is usually incorrect)
      • performance is still often comparable to Decision Trees and Neural Network classifiers
      • First introduced as a “straw man” against which the performance of more sophisticated classifiers could be compared [ClN1989]
        • This is the second thing you should check when evaluating classifier performance: is it better than Naïve Bayes?
    • Consider the Venn diagram at right. The area of the rectangle is 1, and the area of each region gives the probability of the event(s) associated with that region
    • P(A|B) means “the probability of observing event A given that event B has already been observed”, i.e.
      • how much of the time that we see B do we also see A? (i.e. the ratio of the purple region to the magenta region)
    Bayes Theorem - 1: P(A|B) = P(A ∩ B)/P(B), and also P(B|A) = P(A ∩ B)/P(A); therefore P(A|B) = P(B|A)P(A)/P(B) (Bayes formula for two events). [Venn diagram: a unit rectangle with regions of area P(A) and P(B) overlapping in a region of area P(A ∩ B)]
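A quick numeric check of the two-event formula (the probabilities here are arbitrary illustrative values):

    # Verify that Bayes' formula recovers P(A|B) from P(B|A), P(A) and P(B).
    p_a, p_b, p_a_and_b = 0.30, 0.20, 0.10

    p_a_given_b = p_a_and_b / p_b   # P(A|B) = P(A ∩ B)/P(B) = 0.5
    p_b_given_a = p_a_and_b / p_a   # P(B|A) = P(A ∩ B)/P(A) = 1/3
    assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
    print(p_a_given_b)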
  • Bayes Theorem - 2
    • More formally,
    • Let X be the data (evidence)
    • Let H be a hypothesis that X belongs to class C
    • In classification problems we wish to determine the probability that H holds given the observed data X
    • i.e. we seek P(H|X), which is known as the posterior probability of H conditioned on X
      • e.g. The probability that X is a kangaroo given that X jumps and is nocturnal
  • Bayes Theorem - 3
    • P(H) is the prior probability
      • i.e. the probability that any given data item is a kangaroo regardless of its method of locomotion or night-time behaviour, i.e. before we know anything about X
    • Similarly, P(X|H) is the posterior probability of X conditioned on H
      • i.e. the probability that X is a jumper and is nocturnal given that we know X is a kangaroo
    • Bayes Theorem (from the earlier slide) is then P(H|X) = P(X|H)P(H)/P(X)
  • Naïve Bayesian Classification - 1
    • Assumes that the effect of an attribute value on a given class is independent of the values of other attributes. This assumption is known as class conditional independence
      • This makes the calculations involved easier, but makes a simplistic assumption - hence the term “naïve”
    • Can you think of a real-life example where the class conditional independence assumption would break down?
  • Naïve Bayesian Classification - 2
    • Consider each data instance to be an n-dimensional vector of attribute values (i.e. features), X = (x1, x2, …, xn)
    • Given m classes C1, C2, …, Cm, a data instance X is assigned to the class for which it has the greatest posterior probability conditioned on X, i.e. X is assigned to Ci if and only if P(Ci|X) > P(Cj|X) for all 1 ≤ j ≤ m, j ≠ i
  • Naïve Bayesian Classification - 3
    • According to Bayes Theorem: P(Ci|X) = P(X|Ci)P(Ci)/P(X)
    • Since P(X) is constant for all classes, only the numerator P(X|Ci)P(Ci) needs to be maximized
    • If the class probabilities P(Ci) are not known, they can be assumed to be equal, so that we need only maximize P(X|Ci)
    • Alternatively (and preferably) we can estimate the P(Ci) from the proportions in some training sample
  • Naïve Bayesian Classification - 4
    • It can be very expensive to compute the P(X|Ci)
      • if each component xk can take one of c values, there are c^n possible values of X to consider
    • Consequently, the (naïve) assumption of class conditional independence is often made, giving P(X|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci)
    • The P(x1|Ci), …, P(xn|Ci) can be estimated from a training sample (using the proportions if the variable is categorical; using a normal distribution with the calculated mean and standard deviation of each class if it is continuous); a small sketch follows this slide
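To make these estimates concrete, here is a minimal categorical naïve Bayes sketch in Python. The weather-style training data is invented for illustration, and a real implementation would add Laplace smoothing to avoid zero counts.

    # Naive Bayes for categorical attributes: estimate P(Ci) and P(xk|Ci) as
    # proportions of a training sample, then maximize P(Ci) * prod_k P(xk|Ci).
    from collections import Counter, defaultdict

    train = [  # (attribute vector, class label) -- toy data
        (("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
        (("rainy", "mild"), "yes"), (("rainy", "hot"), "yes"),
        (("sunny", "mild"), "yes"),
    ]

    class_counts = Counter(label for _, label in train)
    feat_counts = defaultdict(Counter)  # (class, position) -> value counts
    for x, label in train:
        for k, value in enumerate(x):
            feat_counts[(label, k)][value] += 1

    def predict(x):
        def score(c):
            p = class_counts[c] / len(train)  # P(Ci)
            for k, value in enumerate(x):     # naive independence assumption
                p *= feat_counts[(c, k)][value] / class_counts[c]  # P(xk|Ci)
            return p
        return max(class_counts, key=score)

    print(predict(("rainy", "mild")))  # -> "yes"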
  • Naïve Bayesian Classification - 5
    • Fully computed Bayesian classifiers are provably optimal
      • i.e. under the assumptions given, no other classifier can give better performance
    • In practice, assumptions are made to simplify calculations (e.g. class conditional independence), so optimal performance is not achieved
      • sub-optimal performance is due to inaccuracies in the assumptions made
    • Nevertheless, the performance of the Naïve Bayes Classifier is often comparable to that of decision trees and neural networks [p. 299, HaK2000], and it has been shown to be optimal under conditions somewhat broader than class conditional independence [DoP1996]
  • Application: Spam Filtering (1)
    • You are all almost certainly aware of the problem of “spam”, or junk email
      • Almost every email user receives unwanted, unsolicited email every day:
        • Advertising (often offensive, e.g. pornographic)
        • Get-rich-quick schemes
        • Attempts to defraud (e.g. the Nigerian 419 scam)
    • Spam exists because sending email is extremely cheap, and vast lists of email addresses harvested from the internet are easily available
    • Spam is a big problem. It costs users time, causes stress, and costs money (the download cost)
  • Application: Spam Filtering (2)
    • There are several approaches to stopping spam:
      • Black lists
        • banned sites and/or emailers
      • White lists
        • allowed sites and/or emailers
      • Filtering
        • deciding whether or not an email is spam based on its content
    • “Bayesian filtering” for spam has received a lot of press recently, e.g.
      • “How to spot and stop spam”, BBC News, 26/5/2003, http://news.bbc.co.uk/2/hi/technology/3014029.stm
      • “Sorting the ham from the spam”, Sydney Morning Herald, 24/6/2003, http://www.smh.com.au/articles/2003/06/23/1056220528960.html
    • The “Bayesian filtering” they are talking about is actually Naïve Bayes Classification
  • Application: Spam Filtering (3)
    • Spam filtering is really a classification problem
      • Each email needs to be classified as either spam or not spam (“ham”)
    • To do classification, we need to choose a classifier model (e.g. neural network, decision tree, naïve Bayes) and features
    • For spam filtering, the features can be
      • Words
      • combinations of (consecutive) words
      • words tagged with positional information
        • e.g. body of email, subject line, etc.
    • Early Bayesian spam filters achieved good accuracy:
      • Pantel and Lim [PaL1998]: 98% true positive, 1.16% false positive
    • More recent ones (with improved features) do even better
      • Graham [Gra2003]: 99.75% true positive, 0.06% false positive
      • This is good enough for use in production systems (e.g. Mozilla) – it’s moving out of the lab and into products
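To make the classification framing concrete, here is a toy bag-of-words spam filter using scikit-learn's multinomial naïve Bayes; the emails, labels and test message are invented placeholders, whereas real filters train on thousands of labelled messages.

    # Word-count features + multinomial naive Bayes = a minimal spam filter.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = ["cheap pills buy now", "meeting agenda attached",
              "win money now", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    vec = CountVectorizer()
    X = vec.fit_transform(emails)         # bag-of-words feature vectors
    clf = MultinomialNB().fit(X, labels)

    print(clf.predict(vec.transform(["buy cheap pills"])))  # -> ['spam']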
  • Bayesian Belief Networks - 1
    • Problem with the naïve Bayesian classifier: dependencies do exist between attributes
    • Bayesian Belief Networks (BBNs) allow for the specification of the joint conditional probability distributions: the class conditional dependencies can be defined between subsets of attributes
      • i.e. we can make use of prior knowledge
    • A BBN consists of two components. The first is a directed acyclic graph where
      • each node represents a variable; variables may correspond to actual data attributes or to “hidden variables”
      • each arc represents a probabilistic dependence
      • each variable is conditionally independent of its non-descendants, given its parents
  • Bayesian Belief Networks - 2
    • A simple BBN (from [HaK2000]). Nodes have binary values. Arcs allow a representation of causal knowledge
    [Figure: the network's six binary nodes are FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay and Dyspnea]
  • Bayesian Belief Networks - 3
    • The second component of a BBN is a conditional probability table (CPT) for each variable Z, which gives the conditional distribution P(Z|Parents(Z))
      • i.e. the conditional probability of each value of Z for each possible combination of values of its parents
    • e.g. for the node LungCancer we may have P(LungCancer = “True” | FamilyHistory = “True” ∧ Smoker = “True”) = 0.8 and P(LungCancer = “False” | FamilyHistory = “False” ∧ Smoker = “False”) = 0.9, …
    • The joint probability of any tuple (z1, …, zn) corresponding to variables Z1, …, Zn is P(z1, …, zn) = P(z1|Parents(Z1)) × … × P(zn|Parents(Zn))
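A small numeric sketch of this product rule, using a three-node fragment of the slide's network: P(LungCancer = True | FamilyHistory = True ∧ Smoker = True) = 0.8 comes from the CPT above, while the root priors and the remaining entries are assumed for illustration.

    # P(z1,...,zn) = product over i of P(zi | Parents(Zi)).
    p_fh = 0.10  # assumed P(FamilyHistory = True)
    p_s = 0.50   # assumed P(Smoker = True)
    p_lc = {(True, True): 0.8,                       # from the slide's CPT
            (True, False): 0.5, (False, True): 0.3,  # assumed
            (False, False): 0.1}                     # = 1 - 0.9, from the slide

    # P(FH=T, S=T, LC=T) = P(FH=T) * P(S=T) * P(LC=T | FH=T, S=T)
    print(p_fh * p_s * p_lc[(True, True)])  # 0.04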
  • Bayesian Belief Networks - 4
    • A node within the BBN can be selected as an output node
      • output nodes represent class label attributes
      • there may be more than one output node
    • The classification process, rather than returning a single class label (as many classifiers, e.g. decision trees, do) can return a probability distribution for the class labels
      • i.e. an estimate of the probability that the data instance belongs to each class
    • A machine learning algorithm is needed to find the CPTs, and possibly the network structure
  • Training BBNs - 1
    • If the network structure is known and all the variables are observable, then training the network simply requires the calculation of the conditional probability tables (as in naïve Bayesian classification); a counting sketch follows this slide
    • When the network structure is given but some of the variables are hidden (variables believed to exert influence but not themselves observable), a gradient descent method can be used to train the BBN on the training data. The aim is to learn the values of the CPT entries
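For the fully observable case, each CPT entry is just a conditional relative frequency, as the following minimal sketch shows (the records are invented toy data):

    # Estimate P(LungCancer = True | FamilyHistory, Smoker) by counting.
    from collections import Counter

    records = [  # (family_history, smoker, lung_cancer) -- toy data
        (True, True, True), (True, True, True), (True, True, False),
        (False, True, False), (False, False, False), (True, False, False),
    ]

    parent_counts = Counter((fh, s) for fh, s, _ in records)
    cancer_counts = Counter((fh, s) for fh, s, lc in records if lc)

    for pv in parent_counts:  # CPT entry = count(lc & parents) / count(parents)
        print(pv, cancer_counts[pv] / parent_counts[pv])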
  • Training BBNs - 2
    • Let S be a set of s training examples X_1, …, X_s
    • Let w_ijk be a CPT entry for the variable Y_i = y_ij having parents U_i = u_ik
      • e.g. from our example, Y_i may be LungCancer, y_ij its value “True”, U_i the list of parents of Y_i, e.g. {FamilyHistory, Smoker}, and u_ik the values of those parent nodes, e.g. {“True”, “True”}
    • The w_ijk are analogous to weights in a neural network, and can be optimized using gradient descent (the same learning technique that backpropagation is based on). See [HaK2000] for details
    • An important advance in the training of BBNs was the development of Markov Chain Monte Carlo methods [Nea1993]
  • Training BBNs - 3
    • Algorithms also exist for learning the network structure from the training data given observable variables (this is a discrete optimization problem)
    • In this sense they are an unsupervised technique for discovery of knowledge
    • A tutorial on Bayesian AI, including Bayesian networks, is available at http://www.csse.monash.edu.au/~korb/bai/bai.html
  • Why use Bayesian Classifiers?
    • No classification method has been found to be superior to all others in every case (i.e. for every data set drawn from a particular domain of interest)
      • indeed it can be shown that no such classifier can exist (see “No Free Lunch” theorem [p. 454, DHS2000])
    • Methods can be compared based on:
      • accuracy
      • interpretability of the results
      • robustness of the method with different datasets
      • training time
      • scalability
    • e.g. neural networks are more computationally intensive than decision trees
    • BBNs offer advantages based upon a number of these criteria (all of them in certain domains)
  • Example application – Netica (1)
    • Netica is an Application for Belief Networks and Influence Diagrams from Norsys Software Corp., Canada
    • http://www.norsys.com/
    • Can build, learn, modify, transform and store networks and find optimal solutions using an inference engine
    • A free demonstration version is available for download
    • There is also a useful tutorial on Bayes nets: http://www.norsys.com/tutorials/netica/nt_toc_A.htm
  • Example application – Netica (2)
    • Netica screenshots (from their tutorial) [images not reproduced in this transcript]
  • References
    • [ClN1989] Peter Clark and Tim Niblett, The CN2 Induction Algorithm , Machine Learning 3(4), pp. 261-283, 1989.
    • [DHS2000] Richard O. Duda, Peter E. Hart and David G. Stork, Pattern Classification (2nd Edn), Wiley, New York, NY, 2000
    • [DoP1996] Pedro Domingos and Michael Pazzani, Beyond independence: Conditions for the optimality of the simple Bayesian classifier, In Proceedings of the 13th International Conference on Machine Learning, pp. 105-112, 1996.
    • [Dun1989] Graham Dunn, Design and Analysis of Reliability Studies: The Statistical Evaluation of Measurement Errors, Edward Arnold, London, 1989.
    • [Gra2003] Paul Graham, Better Bayesian Filtering , In Proceedings of the 2003 Spam Conference, Cambridge, MA, USA, January 17 2003
    • [HaK2000] Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques , The Morgan Kaufmann Series in Data Management Systems, Jim Gray, Series (Ed.), Morgan Kaufmann Publishers, August 2000
    • [Nea2001] Radford Neal, What is Bayesian Learning?, in comp.ai.neural-nets FAQ, Part 3 of 7: Generalization, on-line resource, accessed September 2001, http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-7.html
    • [Nea1993] Radford Neal, Probabilistic inference using Markov chain Monte Carlo methods . Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993
    • [PaL1998] Patrick Pantel and Dekang Lin, SpamCop: A Spam Classification & Organization Program , In AAAI Workshop on Learning for Text Categorization, Madison, Wisconsin, July 1998.
    • [SDH1998] Mehran Sahami, Susan Dumais, David Heckerman and Eric Horvitz, A Bayesian Approach to Filtering Junk E-mail , In AAAI Workshop on Learning for Text Categorization, Madison, Wisconsin, July 1998.