# Bayesian Decision Theory


1. 1. Lecture 2: Bayesian Decision Theory. Topics: Bayes decision rule; loss function; decision surface; multivariate normal density and discriminant functions.
2. 2. Bayes Decision. Bayes decision theory covers decision making when all the underlying probability distributions are known, and it is optimal given those distributions. For two classes ω1 and ω2, the prior probabilities for an unknown new observation are P(ω1), that the observation belongs to class 1, and P(ω2), that it belongs to class 2, with P(ω1) + P(ω2) = 1. The priors reflect our prior knowledge and give the decision rule when no feature of the new object is available: classify as class 1 if P(ω1) > P(ω2).
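As a toy illustration, the prior-only rule on this slide can be sketched in Python (the priors 0.7/0.3 below are made-up values, not from the lecture):

```python
def classify_by_prior(p_w1, p_w2):
    """With no features observed, always choose the class with the larger prior."""
    assert abs(p_w1 + p_w2 - 1.0) < 1e-9  # the two priors must sum to one
    return 1 if p_w1 > p_w2 else 2

# With P(w1) = 0.7 > P(w2) = 0.3, every new object is assigned to class 1.
print(classify_by_prior(0.7, 0.3))  # -> 1
```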
3. 3. Bayes Decision. We observe features x on each object. $p(x \mid \omega_1)$ and $p(x \mid \omega_2)$ are the class-conditional densities. The Bayes rule: $P(\omega_j \mid x) = \dfrac{p(x \mid \omega_j) P(\omega_j)}{p(x)}$, where $p(x) = \sum_j p(x \mid \omega_j) P(\omega_j)$.
4. 4. Bayes Decision. $p(x \mid \omega_j)$: the likelihood of observing x given the class label.
5. 5. Bayes Decision. $P(\omega_j \mid x)$: the posterior probabilities.
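The posterior computation on the last two slides can be sketched numerically; the 1-D Gaussian class-conditional densities and the equal priors below are illustrative assumptions, not part of the lecture:

```python
import math

def normal_pdf(x, mu, sigma):
    """1-D normal density, used here as a stand-in class-conditional p(x | w_j)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posteriors(x, priors, params):
    """Bayes rule: P(w_j | x) = p(x | w_j) P(w_j) / p(x)."""
    numerators = [normal_pdf(x, mu, s) * p for (mu, s), p in zip(params, priors)]
    evidence = sum(numerators)  # p(x), the normalizing constant
    return [n / evidence for n in numerators]

# Two classes with means -1 and +1; at x = 0 the posteriors are equal by symmetry.
post = posteriors(0.0, [0.5, 0.5], [(-1.0, 1.0), (1.0, 1.0)])
```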
6. 6. Loss function. The loss function turns a probability statement into a decision: some classification mistakes can be more costly than others. The set of c classes: $\{\omega_1, \ldots, \omega_c\}$. The set of possible actions: $\{\alpha_1, \ldots, \alpha_a\}$, where $\alpha_i$ is deciding that an observation belongs to $\omega_i$. The loss when taking action $\alpha_i$ given that the observation belongs to hidden class $\omega_j$: $\lambda(\alpha_i \mid \omega_j)$.
7. 7. Loss function. The expected loss: given an observation with feature vector x, the conditional risk is $R(\alpha_i \mid x) = \sum_{j=1}^{c} \lambda(\alpha_i \mid \omega_j) P(\omega_j \mid x)$. Our final goal is to minimize the total risk over all x.
8. 8. Loss function. The zero-one loss: $\lambda(\alpha_i \mid \omega_j) = 0$ if $i = j$ and $1$ if $i \neq j$, for $i, j = 1, \ldots, c$. All errors are equally costly. The conditional risk is $R(\alpha_i \mid x) = \sum_{j \neq i} P(\omega_j \mid x) = 1 - P(\omega_i \mid x)$. The risk corresponding to this loss function is the average probability of error.
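A small sketch of the conditional-risk computation, using the zero-one loss matrix from this slide and made-up posteriors:

```python
def conditional_risk(i, post, loss):
    """R(a_i | x) = sum_j loss[i][j] * P(w_j | x)."""
    return sum(loss[i][j] * p for j, p in enumerate(post))

zero_one = [[0, 1], [1, 0]]  # lambda(a_i | w_j) = 0 if i == j else 1
post = [0.8, 0.2]            # hypothetical posteriors P(w_1 | x), P(w_2 | x)
risks = [conditional_risk(i, post, zero_one) for i in range(2)]
# Under zero-one loss R(a_i | x) = 1 - P(w_i | x), so picking the class with
# the largest posterior (class 1 here) minimizes the conditional risk.
```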
9. 9. Loss function. Let $\lambda_{ij}$ denote the loss for deciding class i when the true class is j. In minimizing the risk, we decide class 1 if $(\lambda_{21} - \lambda_{11}) P(\omega_1 \mid x) > (\lambda_{12} - \lambda_{22}) P(\omega_2 \mid x)$. Rearranging, we decide class 1 if $\dfrac{p(x \mid \omega_1)}{p(x \mid \omega_2)} > \dfrac{(\lambda_{12} - \lambda_{22}) P(\omega_2)}{(\lambda_{21} - \lambda_{11}) P(\omega_1)}$.
10. 10. Loss function. Let $\theta_\lambda = \dfrac{(\lambda_{12} - \lambda_{22}) P(\omega_2)}{(\lambda_{21} - \lambda_{11}) P(\omega_1)}$; then decide $\omega_1$ if $\dfrac{p(x \mid \omega_1)}{p(x \mid \omega_2)} > \theta_\lambda$. Example: $\lambda = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ gives $\theta_\lambda = \dfrac{P(\omega_2)}{P(\omega_1)} = \theta_a$; $\lambda = \begin{pmatrix} 0 & 2 \\ 1 & 0 \end{pmatrix}$ gives $\theta_\lambda = \dfrac{2 P(\omega_2)}{P(\omega_1)} = \theta_b$.
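The two example thresholds $\theta_a$ and $\theta_b$ can be reproduced in a few lines of Python (equal priors are assumed here for concreteness):

```python
def likelihood_ratio_threshold(loss, p_w1, p_w2):
    """theta = (l12 - l22) P(w2) / ((l21 - l11) P(w1))."""
    (l11, l12), (l21, l22) = loss
    return (l12 - l22) * p_w2 / ((l21 - l11) * p_w1)

# Zero-one loss gives threshold P(w2)/P(w1); doubling the cost of
# misclassifying w2 doubles the threshold.
theta_a = likelihood_ratio_threshold([[0, 1], [1, 0]], 0.5, 0.5)  # -> 1.0
theta_b = likelihood_ratio_threshold([[0, 2], [1, 0]], 0.5, 0.5)  # -> 2.0
```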
11. 11. Loss function. The likelihood ratio $p(x \mid \omega_1)/p(x \mid \omega_2)$ is compared against the threshold: $\theta_a$ under the zero-one loss function, and the larger $\theta_b$ if misclassifying ω2 is penalized more, which shrinks the region where we decide ω1.
12. 12. Discriminant function & decision surface. Features → discriminant functions $g_i(x)$, $i = 1, \ldots, c$. Assign class i if $g_i(x) > g_j(x)$ for all $j \neq i$. The decision surface is defined by $g_i(x) = g_j(x)$.
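The assignment rule above is just an argmax over the discriminants; a minimal sketch with two hypothetical 1-D discriminant functions:

```python
def decide(x, discriminants):
    """Assign the class whose discriminant g_i(x) is largest (0-based index)."""
    scores = [g(x) for g in discriminants]
    return max(range(len(scores)), key=scores.__getitem__)

# Two made-up linear discriminants; the decision surface is where they
# are equal, i.e. x = 0.
g = [lambda x: -x, lambda x: x]
```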
13. 13. Decision surface. The discriminant functions partition the feature space into c decision regions (not necessarily contiguous). Our interest is to estimate the boundaries between the regions.
14. 14. Minimax. Minimax means minimizing the maximum possible loss: choose the decision rule whose worst-case risk over the possible priors is smallest. What happens when the priors change?
15. 15. Normal density. Reminder: the covariance matrix is symmetric and positive semidefinite. Entropy is a measure of uncertainty; the normal distribution has the maximum entropy among all distributions with a given mean and variance.
16. 16. Reminder of some results for random vectors. Let Σ be a k×k symmetric matrix; then it has k pairs of eigenvalues and eigenvectors, and Σ can be decomposed as $\Sigma = \lambda_1 e_1 e_1' + \lambda_2 e_2 e_2' + \cdots + \lambda_k e_k e_k' = P \Lambda P'$. For a positive-definite matrix, $x' \Sigma x > 0$ for all $x \neq 0$ and $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k > 0$. Note: $x' \Sigma x = \lambda_1 (x' e_1)^2 + \cdots + \lambda_k (x' e_k)^2$.
17. 17. Normal density. Whitening transform: with P the eigenvector matrix and Λ the diagonal eigenvalue matrix of $\Sigma = P \Lambda P'$, let $A_w = P \Lambda^{-1/2}$. Then $A_w^t \Sigma A_w = \Lambda^{-1/2} P^t \Sigma P \Lambda^{-1/2} = \Lambda^{-1/2} P^t P \Lambda P^t P \Lambda^{-1/2} = I$.
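The whitening transform can be checked numerically with NumPy (the 2×2 covariance below is an arbitrary example):

```python
import numpy as np

def whitening_transform(cov):
    """A_w = P Lambda^{-1/2}, so that A_w' Sigma A_w = I."""
    eigvals, P = np.linalg.eigh(cov)  # eigh: eigendecomposition for symmetric matrices
    return P @ np.diag(eigvals ** -0.5)

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
Aw = whitening_transform(sigma)
# Aw.T @ sigma @ Aw is (numerically) the 2x2 identity matrix.
```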
18. 18. Normal density. To make a minimum-error-rate classification (zero-one loss), we use the discriminant functions $g_i(x) = \ln p(x \mid \omega_i) + \ln P(\omega_i)$. This is the log of the numerator in the Bayes formula, so the log posterior probability is proportional to it; the log is used because we only compare the $g_i$'s and the log is monotone. When a normal density $p(x \mid \omega_i) \sim N(\mu_i, \Sigma_i)$ is assumed, we have $g_i(x) = -\tfrac{1}{2}(x - \mu_i)^t \Sigma_i^{-1} (x - \mu_i) - \tfrac{d}{2} \ln 2\pi - \tfrac{1}{2} \ln |\Sigma_i| + \ln P(\omega_i)$.
19. 19. Discriminant function for normal density. (1) $\Sigma_i = \sigma^2 I$. Dropping the terms that are the same for all classes (the blue boxes in the original slide) leaves a linear discriminant function: $g_i(x) = w_i^t x + w_{i0}$ with $w_i = \mu_i / \sigma^2$ and $w_{i0} = -\mu_i^t \mu_i / (2\sigma^2) + \ln P(\omega_i)$.
20. 20. Discriminant function for normal density. The decision surface is where $g_i(x) = g_j(x)$: $w^t (x - x_0) = 0$ with $w = \mu_i - \mu_j$ and $x_0 = \tfrac{1}{2}(\mu_i + \mu_j) - \dfrac{\sigma^2}{\|\mu_i - \mu_j\|^2} \ln \dfrac{P(\omega_i)}{P(\omega_j)} \, (\mu_i - \mu_j)$. With equal priors, $x_0$ is the middle point between the two means, and the decision surface is a hyperplane perpendicular to the line between the means.
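A numerical check of the $\Sigma_i = \sigma^2 I$ case: with equal priors the two discriminants agree at the midpoint of the means (the means, $\sigma^2$, and priors below are made-up):

```python
import numpy as np

def linear_discriminant(mu, sigma2, prior):
    """g_i(x) = (mu_i / sigma^2)' x - mu_i'mu_i / (2 sigma^2) + ln P(w_i)."""
    w = mu / sigma2
    w0 = -(mu @ mu) / (2.0 * sigma2) + np.log(prior)
    return lambda x: w @ x + w0

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
g1 = linear_discriminant(mu1, 1.0, 0.5)
g2 = linear_discriminant(mu2, 1.0, 0.5)
x0 = 0.5 * (mu1 + mu2)  # equal priors: the boundary passes through the midpoint
```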
21. 21. Discriminant function for normal density. "Linear machine": the decision surfaces are hyperplanes.
22. 22. Discriminant function for normal density. With unequal prior probabilities, the decision boundary shifts toward the less likely mean.
23. 23. Discriminant function for normal density. (2) $\Sigma_i = \Sigma$: all classes share one covariance matrix. The discriminant function is again linear: $g_i(x) = w_i^t x + w_{i0}$ with $w_i = \Sigma^{-1} \mu_i$ and $w_{i0} = -\tfrac{1}{2} \mu_i^t \Sigma^{-1} \mu_i + \ln P(\omega_i)$.
24. 24. Discriminant function for normal density. Setting $g_i(x) = g_j(x)$, the decision boundary is $w^t (x - x_0) = 0$ with $w = \Sigma^{-1} (\mu_i - \mu_j)$ and $x_0 = \tfrac{1}{2}(\mu_i + \mu_j) - \dfrac{\ln [P(\omega_i) / P(\omega_j)]}{(\mu_i - \mu_j)^t \Sigma^{-1} (\mu_i - \mu_j)} \, (\mu_i - \mu_j)$.
25. 25. Discriminant function for normal density. The hyperplane is generally not perpendicular to the line between the means.
26. 26. Discriminant function for normal density. (3) $\Sigma_i$ arbitrary. The decision boundaries are hyperquadrics (hyperplanes, pairs of hyperplanes, hyperspheres, hyperellipsoids, hyperparaboloids, hyperhyperboloids): $g_i(x) = x^t W_i x + w_i^t x + w_{i0}$, where $W_i = -\tfrac{1}{2} \Sigma_i^{-1}$, $w_i = \Sigma_i^{-1} \mu_i$, and $w_{i0} = -\tfrac{1}{2} \mu_i^t \Sigma_i^{-1} \mu_i - \tfrac{1}{2} \ln |\Sigma_i| + \ln P(\omega_i)$.
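A sketch of the arbitrary-$\Sigma_i$ discriminant; the parameters below are made-up for one class. Note that $g_i(x)$ differs from the full log posterior numerator $\ln p(x \mid \omega_i) + \ln P(\omega_i)$ only by the constant $-\tfrac{d}{2}\ln 2\pi$, which is the same for every class:

```python
import numpy as np

def quadratic_discriminant(mu, cov, prior):
    """g_i(x) = x'W_i x + w_i'x + w_i0 for an arbitrary class covariance."""
    inv = np.linalg.inv(cov)
    W = -0.5 * inv
    w = inv @ mu
    w0 = -0.5 * (mu @ inv @ mu) - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior)
    return lambda x: x @ W @ x + w @ x + w0

# Hypothetical class parameters.
mu = np.array([1.0, 0.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
g = quadratic_discriminant(mu, cov, 0.5)
```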
27. 27. Discriminant function for normal density (figure: example decision boundaries).
28. 28. Discriminant function for normal density (figure: more example decision boundaries).
29. 29. Discriminant function for normal density. Extension to the multi-class case.
30. 30. Discriminant function for discrete features. Discrete features: $x = [x_1, x_2, \ldots, x_d]^t$, $x_i \in \{0, 1\}$, with $p_i = P(x_i = 1 \mid \omega_1)$ and $q_i = P(x_i = 1 \mid \omega_2)$. Assuming the features are conditionally independent, the likelihood is $P(x \mid \omega_1) = \prod_{i=1}^{d} p_i^{x_i} (1 - p_i)^{1 - x_i}$, and similarly for $\omega_2$ with $q_i$.
31. 31. Discriminant function for discrete features. The discriminant function is built from the likelihood ratio: $g(x) = \ln \dfrac{P(x \mid \omega_1)}{P(x \mid \omega_2)} + \ln \dfrac{P(\omega_1)}{P(\omega_2)}$.
32. 32. Discriminant function for discrete features. $g(x) = \sum_{i=1}^{d} w_i x_i + w_0$, where $w_i = \ln \dfrac{p_i (1 - q_i)}{q_i (1 - p_i)}$, $i = 1, \ldots, d$, and $w_0 = \sum_{i=1}^{d} \ln \dfrac{1 - p_i}{1 - q_i} + \ln \dfrac{P(\omega_1)}{P(\omega_2)}$. So the decision surface is again a hyperplane.
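The hyperplane weights above can be verified against the direct log likelihood ratio; the Bernoulli parameters and the equal priors below are arbitrary examples:

```python
import math

def discrete_weights(p, q, prior1, prior2):
    """w_i = ln[p_i(1-q_i) / (q_i(1-p_i))]; w_0 = sum ln[(1-p_i)/(1-q_i)] + ln[P(w1)/P(w2)]."""
    w = [math.log(pi * (1.0 - qi) / (qi * (1.0 - pi))) for pi, qi in zip(p, q)]
    w0 = (sum(math.log((1.0 - pi) / (1.0 - qi)) for pi, qi in zip(p, q))
          + math.log(prior1 / prior2))
    return w, w0

def g(x, w, w0):
    """Decide w1 if g(x) > 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + w0

p, q = [0.8, 0.6], [0.3, 0.4]  # made-up P(x_i = 1 | w_1) and P(x_i = 1 | w_2)
w, w0 = discrete_weights(p, q, 0.5, 0.5)
```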
33. 33. Optimality. Consider a two-class case. There are two ways to make a mistake in the classification: misclassifying an observation from class 2 as class 1, or misclassifying an observation from class 1 as class 2. Any classifier partitions the feature space into two regions, R1 and R2.
34. 34. Optimality. $P(\text{error}) = P(x \in R_2, \omega_1) + P(x \in R_1, \omega_2) = \int_{R_2} p(x \mid \omega_1) P(\omega_1)\, dx + \int_{R_1} p(x \mid \omega_2) P(\omega_2)\, dx$.
35. 35. Optimality. In the multi-class case there are many more ways to make mistakes, so it is easier to calculate the probability of correct classification. The Bayes classifier maximizes P(correct); any other partitioning yields a higher probability of error. This result does not depend on the form of the underlying distributions.
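The optimality claim can be illustrated with a quick Monte Carlo experiment on a hypothetical two-Gaussian problem (means ±1, unit variance, equal priors, all assumptions of this sketch): the Bayes boundary at 0 beats a shifted boundary.

```python
import random

def estimate_p_correct(classify, n=100_000, seed=0):
    """Sample (class, x) from the joint model and count correct decisions."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        label = 1 if rng.random() < 0.5 else 2            # equal priors
        x = rng.gauss(-1.0 if label == 1 else 1.0, 1.0)   # draw from p(x | w_label)
        if classify(x) == label:
            correct += 1
    return correct / n

bayes   = lambda x: 1 if x < 0.0 else 2  # Bayes boundary: midpoint of the means
shifted = lambda x: 1 if x < 0.5 else 2  # any other boundary does worse
```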