  1. Medical Decision-Support Systems: Probabilistic Reasoning in Diagnostic Systems. Yuval Shahar, M.D., Ph.D.
  2. Reasoning Under Uncertainty in Medicine <ul><li>Uncertainty is inherent to medical reasoning </li></ul><ul><ul><li>Relation of diseases to clinical and laboratory findings is probabilistic </li></ul></ul><ul><ul><li>Patient data itself is often uncertain with respect to value and time </li></ul></ul><ul><ul><li>Patient preferences regarding outcomes vary </li></ul></ul><ul><ul><li>Cost of interventions and therapy can change </li></ul></ul>
  3. Probability: A Quick Introduction <ul><li>Probability function, range: [0, 1] </li></ul><ul><li>Prior probability of A, P(A): with no new information (e.g., no patient information) </li></ul><ul><li>Posterior probability of A: P(A) given certain information (e.g., laboratory tests) </li></ul><ul><li>Conditional probability : P(B|A) </li></ul><ul><li>Independence of A, B: P(B) = P(B|A) </li></ul><ul><li>Conditional independence of B, C, given A: P(B|A) = P(B|A & C) </li></ul><ul><ul><li>(e.g., two symptoms, given a specific disease) </li></ul></ul>
  4. Probabilistic Calculus <ul><li>P(not(A)) = 1-P(A) </li></ul><ul><li>In general: </li></ul><ul><ul><li>P(A & B) = P(A) * P(B|A) </li></ul></ul><ul><li>If A, B are independent: </li></ul><ul><ul><li>P(A & B) = P(A) * P(B) </li></ul></ul><ul><li>If A, B are mutually exclusive: </li></ul><ul><ul><li>P(A or B) = P(A) + P(B) </li></ul></ul><ul><li>If A, B are not mutually exclusive, but independent: </li></ul><ul><ul><li>P(A or B) = 1-P(not(A) & not(B)) = 1-(1-P(A))(1-P(B)) </li></ul></ul>
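The rules on this slide can be checked with a short sketch (Python, not part of the original deck; the 0.3 probabilities are made up for illustration):

```python
def p_not(p_a):
    """Complement rule: P(not A) = 1 - P(A)."""
    return 1 - p_a

def p_and_independent(p_a, p_b):
    """P(A & B) = P(A) * P(B) when A and B are independent."""
    return p_a * p_b

def p_or_independent(p_a, p_b):
    """P(A or B) = 1 - P(not A & not B) = 1 - (1-P(A))(1-P(B)) for independent A, B."""
    return 1 - (1 - p_a) * (1 - p_b)

# Illustration: two independent findings, each with probability 0.3
print(round(p_or_independent(0.3, 0.3), 2))  # 0.51
```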
  5. Test Characteristics <table><tr><th>Test result</th><th>Disease present</th><th>Disease absent</th><th>Total</th></tr><tr><td>Positive</td><td>True positive (TP)</td><td>False positive (FP)</td><td>TP+FP</td></tr><tr><td>Negative</td><td>False negative (FN)</td><td>True negative (TN)</td><td>FN+TN</td></tr><tr><td>Total</td><td>TP+FN</td><td>FP+TN</td><td></td></tr></table>
  6. Test Performance Measures <ul><li>The gold standard test: the procedure that defines presence or absence of a disease (often, very costly) </li></ul><ul><li>The index test : the test whose performance is examined </li></ul><ul><li>True positive rate ( TPR ) = Sensitivity : </li></ul><ul><ul><li>P(Test is positive|patient has disease) = P(T+|D+) </li></ul></ul><ul><ul><li>Ratio of the number of diseased patients with positive tests to the total number of diseased patients: TP/(TP+FN) </li></ul></ul><ul><li>True negative rate ( TNR ) = Specificity </li></ul><ul><ul><li>P(Test is negative|patient has no disease) = P(T-|D-) </li></ul></ul><ul><ul><li>Ratio of the number of nondiseased patients with negative tests to the total number of nondiseased patients: TN/(TN+FP) </li></ul></ul>
  7. Test Predictive Values <ul><li>Positive predictive value (PV+) = P(D|T+) = TP/(TP+FP) </li></ul><ul><li>Negative predictive value (PV-) = P(D-|T-) = TN/(TN+FN) </li></ul>
  8. Lab Tests: What is “Abnormal”?
  9. The Cut-off Value Trade-off <ul><li>Sensitivity and specificity depend on the cut-off value between what we define as normal and abnormal </li></ul><ul><li>Assume high test values are abnormal; then, moving the cut-off value higher increases FN results and decreases FP results (i.e., makes the test more specific), and vice versa </li></ul><ul><li>There is always a trade-off in setting the cut-off point </li></ul>
  10. Receiver Operating Characteristic (ROC) Curves: Examples
  11. Receiver Operating Characteristic (ROC) Curves: Interpretation <ul><li>ROC curves summarize the trade-off between the TPR (sensitivity) and the false positive rate (FPR) (1-specificity) for a particular test, as we vary the cut-off threshold </li></ul><ul><li>The greater the area under the ROC curve, the better the test (more sensitive, more specific) </li></ul>
  12. Bayes Theorem
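The formula on this slide is an image in the original; in the notation of the surrounding slides it reads:

```latex
P(D \mid T^{+}) \;=\;
\frac{P(T^{+}\mid D)\,P(D)}
     {P(T^{+}\mid D)\,P(D) \;+\; P(T^{+}\mid \neg D)\,P(\neg D)}
```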
  13. Odds-Likelihood (Odds Ratio) Form of Bayes Theorem <ul><li>Odds = P(A)/(1-P(A)); P = Odds/(1+Odds) </li></ul><ul><li>Post-test odds = pre-test odds * likelihood ratio </li></ul>
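A sketch of the odds-likelihood update for a positive test (Python, not part of the original deck; LR+ = sensitivity/(1-specificity) is the standard definition, not stated on the slide):

```python
def post_test_probability(pretest_p, sensitivity, specificity):
    """Post-test probability after a positive result, via the odds-likelihood form:
    post-test odds = pre-test odds * LR+."""
    pretest_odds = pretest_p / (1 - pretest_p)
    lr_pos = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
    post_odds = pretest_odds * lr_pos
    return post_odds / (1 + post_odds)        # convert odds back to probability

# 2% prior, 99% sensitive / 99% specific test
print(round(post_test_probability(0.02, 0.99, 0.99), 3))  # 0.669
```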
  14. Application of Bayes Theorem <ul><li>Needs reliable pre-test probabilities </li></ul><ul><li>Needs reliable post-test likelihood ratios </li></ul><ul><li>Assumes one disease only (mutual exclusivity of diseases) </li></ul><ul><li>Can be used in sequence for several tests, but only if they are conditionally independent given the disease; then we use the post-test probability of T i as the pre-test probability for T i+1 (Simple, or Naïve, Bayes) </li></ul>
  15. Relation of Pre-Test and Post-Test Probabilities
  16. Example: Computing Predictive Values <ul><li>Assume P(Down syndrome): </li></ul><ul><ul><li>(A) 0.1% (age 30) </li></ul></ul><ul><ul><li>(B) 2% (age 45) </li></ul></ul><ul><li>Assume amniocentesis with sensitivity of 99% and specificity of 99% for Down syndrome </li></ul><ul><li>PV+ = P(DS|Amnio+) </li></ul><ul><li>PV- = P(DS-|Amnio-) = 99.999% </li></ul>
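The predictive values for the two priors can be worked out directly from Bayes theorem (a sketch, not part of the original deck, using the slide's own sensitivity, specificity, and priors):

```python
def pv_pos(prior, sens, spec):
    """PV+ = P(D|T+) by Bayes theorem."""
    return sens * prior / (sens * prior + (1 - spec) * (1 - prior))

def pv_neg(prior, sens, spec):
    """PV- = P(no D|T-) by Bayes theorem."""
    return spec * (1 - prior) / (spec * (1 - prior) + (1 - sens) * prior)

for prior in (0.001, 0.02):  # age 30 vs. age 45
    print(round(pv_pos(prior, 0.99, 0.99), 3))
# PV+ is about 0.09 at age 30 but about 0.669 at age 45:
# the same excellent test, very different predictive values
```

The point of the example: predictive values depend on the prior (disease prevalence), not only on the test's sensitivity and specificity.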
  17. Predictive Values: Down Syndrome
  18. Example: de Dombal’s System (1972) <ul><li>Domain : Acute abdominal pain (7 possible diagnoses) </li></ul><ul><li>Input : Signs and symptoms of the patient </li></ul><ul><li>Output : Probability distribution of diagnoses </li></ul><ul><li>Method : Naïve Bayesian classification </li></ul><ul><li>Evaluation : an eight-center study involving 250 physicians and 16,737 patients </li></ul><ul><li>Results : </li></ul><ul><ul><li>Diagnostic accuracy rose from 46% to 65% </li></ul></ul><ul><ul><li>The negative laparotomy rate fell by almost half </li></ul></ul><ul><ul><li>The perforation rate among patients with appendicitis fell by half </li></ul></ul><ul><ul><li>The mortality rate fell by 22% </li></ul></ul><ul><li>Results using survey data were consistently better than the clinicians’ opinions, and even better than the results using human probability estimates! </li></ul>
  19. Decision Trees <ul><li>A convenient way to explicitly show the order and relationships of possible decisions, uncertain outcomes of decisions , and outcome utilities </li></ul><ul><li>Enable computation of the decision that maximizes expected utility </li></ul>
  20. Decision Trees Conventions Decision node Chance node Information link Influence link
  21. A Generic Decision Tree
  22. Decision Trees: an HIV Example Decision node Chance node
  23. Computation With Decision Trees <ul><li>Decision trees are “folded back” to the topmost (leftmost, or initial) decision </li></ul><ul><li>Computation is performed by averaging expected utility recursively over tree branches from right to left (bottom up), maximizing utility at every decision node and taking that maximum as the expected utility of the subtree that follows the computed decision </li></ul>
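The fold-back procedure can be sketched recursively (Python, not part of the original deck; the dictionary encoding of nodes and the utilities are invented for illustration):

```python
def expected_utility(node):
    """Fold back a decision tree: average at chance nodes, maximize at decision nodes."""
    if node["type"] == "outcome":
        return node["utility"]
    if node["type"] == "chance":
        # probability-weighted average over uncertain outcomes
        return sum(p * expected_utility(child) for p, child in node["branches"])
    if node["type"] == "decision":
        # the decision maker picks the option with maximal expected utility
        return max(expected_utility(option) for option in node["options"])

# Hypothetical: a sure outcome (utility 0.8) vs. a gamble
tree = {"type": "decision", "options": [
    {"type": "outcome", "utility": 0.8},
    {"type": "chance", "branches": [
        (0.7, {"type": "outcome", "utility": 1.0}),
        (0.3, {"type": "outcome", "utility": 0.4}),
    ]},
]}
print(round(expected_utility(tree), 2))  # 0.82: the gamble beats the sure option
```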
  24. Influence Diagrams: Node Conventions Chance node Decision node Utility node
  25. Link Semantics in Influence Diagrams Dependence link Information link Influence link
  26. Influence Diagrams: An HIV Example
  27. The Structure of Influence Diagram Links
  28. Belief Networks (Bayesian/Causal Probabilistic/Probabilistic Networks, etc.): influence diagrams without decision and utility nodes. Example network nodes: Gender, Disease, Fever, Sinusitis, Runny nose, Headache
  29. Link Semantics in Belief Networks Dependence Independence Conditional independence of B and C, given A
  30. Advantages of Influence Diagrams and Belief Networks <ul><li>Excellent modeling tool that supports acquisition from domain experts </li></ul><ul><ul><li>Intuitive semantics (e.g., information and influence links) </li></ul></ul><ul><ul><li>Explicit representation of dependencies </li></ul></ul><ul><ul><li>Very concise representation of large decision models </li></ul></ul><ul><li>“Anytime” algorithms available (using probability theory) to compute the distribution of values at any node given the values of any subset of the nodes (e.g., at any stage of information gathering) </li></ul><ul><li>Explicit support for value-of-information computations </li></ul>
  31. Disadvantages of Influence Diagrams and Belief Networks <ul><li>Explicit representation of dependencies often requires acquisition of joint probability distributions (P(A|B,C)) </li></ul><ul><li>Computation is in general intractable (NP-hard) </li></ul><ul><li>Order of decisions and relations between decisions and available information might be obscured </li></ul>
  32. Value of Information (VI) <ul><li>We often need to decide what would be the next best piece of information to gather (e.g., within a diagnostic process); that is, what is the best next question to ask (e.g., what would be the result of a urine culture?) </li></ul><ul><li>The Value of Information ( VI ) of feature f is the marginal expected utility of an optimal decision made knowing f , compared to making it without knowing f </li></ul><ul><li>The net value of information ( NVI ) of f = VI (f ) - cost( f ) </li></ul><ul><li>NVI is highly useful for a hypothetico-deductive diagnostic approach to decide what would be the next information item, if any, to investigate </li></ul>
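A minimal numeric sketch of VI for a single binary test (Python, not part of the original deck; the action set, utilities, prior, and test characteristics are all invented for illustration):

```python
def best_eu(p, utilities):
    """Best expected utility over actions; utilities[a] = (u if diseased, u if healthy)."""
    return max(p * u_d + (1 - p) * u_h for u_d, u_h in utilities.values())

def value_of_information(p, sens, spec, utilities):
    """VI of one test result: expected best EU after seeing it, minus best EU now."""
    p_pos = sens * p + (1 - spec) * (1 - p)   # P(T+)
    p_d_pos = sens * p / p_pos                # posterior given T+
    p_d_neg = (1 - sens) * p / (1 - p_pos)    # posterior given T-
    eu_after = (p_pos * best_eu(p_d_pos, utilities)
                + (1 - p_pos) * best_eu(p_d_neg, utilities))
    return eu_after - best_eu(p, utilities)

# Treat vs. wait, 20% prior, 90% sensitive / 95% specific test
acts = {"treat": (0.9, 0.6), "wait": (0.1, 1.0)}
vi = value_of_information(0.2, 0.90, 0.95, acts)
# NVI = vi - cost(test); gather the information only if NVI > 0
```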
  33. Examples of Successful Belief-Network Applications <ul><li>In clinical medicine: </li></ul><ul><ul><li>Pathological diagnosis at the level of a subspecialized medical expert ( Pathfinder ) </li></ul></ul><ul><ul><li>Endocrinological diagnosis (NESTOR) </li></ul></ul><ul><li>In bioinformatics: </li></ul><ul><ul><li>Recognition of meaningful sites and features in DNA sequences </li></ul></ul><ul><ul><li>Educated guesses of the tertiary structure of proteins </li></ul></ul>
  34. The Pathfinder Project (Heckerman, Horvitz, Nathwani 1992) <ul><li>Task and domain: Diagnosis from lymph-node biopsies, an important medical problem </li></ul><ul><ul><li>Large difference between expert and general pathologist opinions (almost 65%!) </li></ul></ul><ul><li>Problems in the domain include </li></ul><ul><ul><li>Misrecognition of features (information gathering) </li></ul></ul><ul><ul><li>Misintegration of evidence (information processing) </li></ul></ul><ul><li>The Pathfinder project focused mainly on assistance in information processing </li></ul><ul><li>A Stanford/USC collaboration; eventually commercialized as Intellipath, marketed by the ACP, used as early as 1992 by at least 200 pathology sites </li></ul>
  35. Pathfinder Domain <ul><li>More than 60 diseases </li></ul><ul><li>More than 130 findings, such as: </li></ul><ul><ul><li>Microscopic </li></ul></ul><ul><ul><li>Immunological </li></ul></ul><ul><ul><li>Molecular biology </li></ul></ul><ul><ul><li>Laboratory </li></ul></ul><ul><ul><li>Clinical </li></ul></ul><ul><li>Commercial product extended to at least 10 more medical domains </li></ul>
  36. Pathfinder I/O Behavior <ul><li>Input: set of <Feature, Instance> (<F i , I i >) pairs (e.g., <NECROSIS, ABSENT>) </li></ul><ul><ul><li>Instances are mutually exclusive values of each feature </li></ul></ul><ul><ul><li>Prior probability of each disease D k is known </li></ul></ul><ul><ul><li>P(F 1 I 1 , F 2 I 2 …F t I t | D k , ξ) is in the acquired knowledge base </li></ul></ul><ul><li>Output: P(D k |F 1 I 1 , F 2 I 2 …F m I m , ξ) </li></ul><ul><ul><li>ξ = background knowledge (context) </li></ul></ul><ul><li>User can ask what is the next best (cost-effective) feature to investigate or enter </li></ul><ul><ul><li>Probabilistic (decision-theoretic) hypothetico-deductive approach </li></ul></ul><ul><li>Distribution of each D k is updated dynamically </li></ul>
  37. Pathfinder Methodology: Probabilities and Utilities <ul><li>Decision-theoretic computation </li></ul><ul><li>Bayesian approach: Probabilities represent beliefs of experts (data can update beliefs) </li></ul><ul><li>Utilities represented as a matrix over all pairs of diseases </li></ul><ul><li>A matrix entry <D j , D k > encodes the (patient) utility of diagnosing D k when the patient really has D j </li></ul><ul><li>Since no therapeutic recommendations are made, the model can use one representative patient (the expert), expressed in micromorts and willingness-to-pay to avoid risk of each outcome </li></ul>
  38. Pathfinder Computation <ul><li>Normally we would use the general form of Bayes Theorem </li></ul><ul><li>But that involves an exponential number of probabilities to be acquired and represented </li></ul>
  39. Pathfinder 1: The Simple Bayes Version <ul><li>Assuming conditional independence of features (Simple or Naïve Bayes): </li></ul><ul><li>Assuming mutual exclusivity and exhaustiveness of diseases, the overall computation is tractable: </li></ul>
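The Simple-Bayes formulas referenced on this slide are images in the original; under the stated assumptions (conditional independence of features given the disease; mutually exclusive, exhaustive diseases) the computation reduces to multiply-and-normalize. A sketch with invented numbers, not the actual Pathfinder knowledge base:

```python
def naive_bayes_posterior(priors, likelihoods, findings):
    """P(D_k | findings): multiply each prior by the per-finding likelihoods,
    then normalize. Valid only under conditional independence of findings
    given the disease, with mutually exclusive and exhaustive diseases."""
    scores = dict(priors)
    for d in scores:
        for f in findings:
            scores[d] *= likelihoods[d][f]
    z = sum(scores.values())  # normalizing constant
    return {d: s / z for d, s in scores.items()}

# Hypothetical two-disease, one-finding example
priors = {"D1": 0.7, "D2": 0.3}
like = {"D1": {"necrosis_present": 0.1}, "D2": {"necrosis_present": 0.8}}
post = naive_bayes_posterior(priors, like, ["necrosis_present"])
print(round(post["D2"], 2))  # 0.77: the finding flips the ranking
```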
  40. Pathfinder 2: The Belief Network Version <ul><li>Mutual exclusivity and exhaustiveness of diseases are reasonable assumptions in lymph-node pathology </li></ul><ul><ul><li>Single disease per examined lymph node </li></ul></ul><ul><ul><li>Large, exhaustive knowledge base </li></ul></ul><ul><li>Conditional independence is less reasonable and can lead to erroneous conclusions </li></ul><ul><li>The simple Bayes representation of Pathfinder 1 was therefore enhanced to a belief network in Pathfinder 2, which included explicit dependencies between different features while still taking advantage of any explicit global and conditional independencies </li></ul>
  41. Decision-Theoretic Diagnosis <ul><li>Using the utility matrix and given observations ξ, the expected diagnostic utility of diagnosis D k is averaged over all diagnoses: </li></ul><ul><ul><li>EU(D k (ξ)) = Σ j P(D j |ξ)U(D j ,D k ) </li></ul></ul><ul><li>Thus, Dx(ξ) = ARGMAX k [EU(D k (ξ))] </li></ul><ul><li>However, since the diagnosis is sensitive to the utility model, Pathfinder does not recommend it, only the probabilities P(D k |ξ) </li></ul>
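The expected-utility computation on this slide can be sketched directly (Python, not part of the original deck; the two-disease posterior and utility matrix are invented for illustration):

```python
def diagnostic_eu(posteriors, utility, d_k):
    """EU(D_k(xi)) = sum_j P(D_j | xi) * U(D_j, D_k)."""
    return sum(p * utility[(d_j, d_k)] for d_j, p in posteriors.items())

def best_diagnosis(posteriors, utility):
    """Dx(xi) = argmax_k EU(D_k(xi))."""
    return max(posteriors, key=lambda d_k: diagnostic_eu(posteriors, utility, d_k))

# Hypothetical: utility 1 for same-category diagnoses, 0 across benign/malignant
post = {"benign": 0.6, "malignant": 0.4}
u = {("benign", "benign"): 1.0, ("benign", "malignant"): 0.0,
     ("malignant", "benign"): 0.0, ("malignant", "malignant"): 1.0}
print(best_diagnosis(post, u))  # benign
```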
  42. Pathfinder: Gathering Information <ul><li>The next best feature to observe is recommended using a myopic approximation, which considers only up to one single feature to be observed </li></ul><ul><li>The feature chosen maximizes EU given that a diagnosis would be made after observing it </li></ul><ul><li>The feature f chosen is the one that maximizes NVI( f ) </li></ul><ul><li>Although the myopic approximation could backfire, in practice it works well </li></ul><ul><ul><li>especially when U(D j ,D k ) is set to 0 if one of the diseases is malignant and the other benign, and set to 1 if they are both malignant or both benign </li></ul></ul>
  43. Pathfinder 2: Knowledge Acquisition <ul><li>To facilitate acquisition of multiple probabilities, a Similarity Network model was developed </li></ul><ul><li>Using similarity networks, an expert creates multiple small belief networks, each representing 2 or more diseases that are difficult to distinguish </li></ul><ul><li>The local belief networks are then unified into a global belief network, preserving soundness </li></ul><ul><li>The graphical interface also allows partitioning of diseases into sets such that, relative to each set, some feature is conditionally independent, further assisting in the construction </li></ul>
  44. Pathfinder 1 and 2: Evaluation <ul><li>Pathfinder 1 was compared to Pathfinder 2 using 53 cases, a new user, and a thorough analysis of each case </li></ul><ul><ul><li>Diagnostic accuracy of PF2 is greater than that of PF1 (gold standard: the main domain expert’s distribution and his assessment on a scale of 1 to 10) </li></ul></ul><ul><ul><li>The difference is due to better probabilistic representation (better acquisition and inference) </li></ul></ul><ul><ul><li>The cost of constructing PF2 rather than PF1 is justified by the improvements (measure: the utility of the diagnosis) </li></ul></ul><ul><ul><li>PF2 is at least as good as the main domain expert with respect to diagnostic accuracy </li></ul></ul>