DETECTION THEORY CHAPTER 1


  1. The Yes-No Experiment: Sensitivity
     CHAPTER 1, Detection Theory: A User's Guide
  2. Outline of the Book
     Part 1: Basic Detection Theory and One-Interval Designs
       Chapter 1: The Yes-No Experiment: Sensitivity
     Part 2: Multidimensional Detection Theory and Multi-Interval Discrimination Designs
     Part 3: Stimulus Factors
  3. Outline
     Understanding Yes-No Data
     Implied ROCs
     The Signal Detection Model
     Calculational Methods
     Essay: The Provenance of Detection Theory
  4. Understanding Yes-No Data
     Example 1: Face Recognition
     Summarizing the Data
     Measuring Sensitivity
     Two Simple Solutions
     A Detection Theory Solution
  5. Basic Detection Theory and One-Interval Designs
     One-interval designs: a single stimulus is presented on each trial.
     Detection theory: the methods we apply to measure the ability of subjects to discriminate two stimuli.
  6. The Yes-No Experiment
     We analyze experiments that measure the ability to distinguish between stimuli.
     Characteristic: observers can be more or less accurate.
     Measurement: sensitivity measures.
     High sensitivity → better discrimination; low sensitivity → poorer discrimination.
  9. Example 1: Face Recognition
     Understanding Yes-No Data
  10. The face-recognition experiment
  11. Summarizing the Data
     Hit: correctly recognizing an Old item.
     Miss: failing to recognize an Old item.
     False alarm: mistakenly recognizing a New item as Old.
     Correct rejection: correctly responding "no" to a New item.
  12. Only 2 of the 4 numbers in the table provide independent information about the participant's performance:
     The hit rate: H = P("yes" | S2)
     The false-alarm rate: F = P("yes" | S1)
     Performance is denoted by the (false-alarm, hit) pair: (F, H) = (.8, .4)
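As a concrete illustration, a minimal sketch of turning raw yes/no counts into the (F, H) pair. The counts are hypothetical, chosen only for illustration; they are not the table from Example 1, and the function name is mine.

```python
# Minimal sketch: computing the (false-alarm, hit) pair from raw yes/no counts.
# The counts below are hypothetical and only illustrative.

def rates(yes_old, no_old, yes_new, no_new):
    """Return (F, H): false-alarm rate and hit rate."""
    hit_rate = yes_old / (yes_old + no_old)          # H = P("yes" | S2, i.e., Old)
    false_alarm_rate = yes_new / (yes_new + no_new)  # F = P("yes" | S1, i.e., New)
    return false_alarm_rate, hit_rate

F, H = rates(yes_old=45, no_old=15, yes_new=15, no_new=45)
print(F, H)  # 0.25 0.75
```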
  13. Measuring Sensitivity
     The simplest possibility is to ignore one of our two response rates and use H alone to measure performance.
     → clearly inadequate.
     Two important characteristics:
     Sensitivity can only be measured between two alternative stimuli and must therefore depend on both H and F.
     S1 and S2 trials should have equal importance.
  14. Therefore…
     Sensitivity measure / index / statistic:
     A function of H and F that attempts to capture this ability of the observer.
  15. Two Simple Solutions
     We look for a measure that goes up when H goes up, goes down when F goes up, and assigns equal importance to these statistics:
     Simply subtract F from H:
       sensitivity = H − F
     The proportion of correct responses, p(c):
       p(c) = 0.5[H + (1 − F)] = 0.5(H − F) + 0.5
       p*(c) = (hits + correct rejections) / total trials, used when the numbers of the two trial types are unequal.
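A minimal sketch of these two simple measures (function names and rates are mine, not the book's):

```python
# Minimal sketch of the two simple sensitivity measures described above.
# Function names and rates are illustrative, not from the book.

def h_minus_f(H, F):
    """Difference between hit and false-alarm rates."""
    return H - F

def p_correct(H, F):
    """Proportion correct, assuming equal numbers of S1 and S2 trials."""
    return 0.5 * (H + (1 - F))   # equals 0.5*(H - F) + 0.5

H, F = 0.8, 0.4
print(h_minus_f(H, F))  # 0.4
print(p_correct(H, F))  # 0.7
```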
  16. Which is better?
     Neither, because p(c) depends directly on H − F, and not on either H or F separately.
     Two measures that are monotonically related in this way are said to be equivalent measures of accuracy.
  17. A Detection Theory Solution
     The most widely used sensitivity measure of detection theory: d′ ("dee-prime")
       d′ = z(H) − z(F)
     It bears an obvious family resemblance to p(c).
     The z transformation:
       A symmetry property: z(1 − p) = −z(p)
     We often refer to normal-distribution models by the abbreviation SDT.
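A minimal sketch of computing d′ from (F, H), with the z transformation implemented as the inverse of the standard normal CDF (scipy.stats.norm.ppf); the rates are illustrative:

```python
# Minimal sketch: d' = z(H) - z(F), with z as the inverse standard normal CDF.
# The rates passed in are illustrative.
from scipy.stats import norm

def d_prime(H, F):
    """Sensitivity index d' for a yes-no experiment."""
    return norm.ppf(H) - norm.ppf(F)

print(round(d_prime(0.8, 0.4), 2))    # ~1.09
print(round(d_prime(0.99, 0.01), 2))  # ~4.65, the effective ceiling mentioned below
```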
  18. Calculating d′ from p(c)
     It is sometimes important to calculate d′ when only p(c) is known.
     Assumption → the hit rate equals the correct rejection rate, so H = 1 − F.
     As a result, d′ = 2 z[p(c)]
     This calculation is not correct in general.
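A sketch of that shortcut, under the stated assumption H = 1 − F (values illustrative):

```python
# Sketch of the shortcut d' = 2*z(p(c)), valid only when H = 1 - F.
from scipy.stats import norm

def d_prime_from_pc(pc):
    return 2 * norm.ppf(pc)

# With H = 0.8 and F = 0.2 (so H = 1 - F), p(c) = 0.8 and both routes agree:
print(round(d_prime_from_pc(0.8), 2))           # ~1.68
print(round(norm.ppf(0.8) - norm.ppf(0.2), 2))  # ~1.68
```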
  19. About d′…
     When observers cannot discriminate at all, H = F and d′ = 0.
     The largest possible finite value of d′ depends on the number of decimal places to which H and F are carried.
     When H = .99 and F = .01, d′ = 4.65; many experimenters consider this an effective ceiling.
     Perfect accuracy implies an infinite d′.
  20. To avoid infinite d′ values…
     Two adjustments:
     Convert proportions of 0 and 1 to 1/(2N) and 1 − 1/(2N).
     Add 0.5 to all data cells, regardless of whether zeroes are present.
     Most experiments avoid chance and perfect performance.
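A minimal sketch of the first adjustment (the 1/(2N) convention), where N is the number of trials behind the proportion; the second adjustment would instead add 0.5 to each cell count before computing the rates:

```python
# Minimal sketch of the 1/(2N) adjustment for extreme proportions.
from scipy.stats import norm

def adjust(p, n_trials):
    """Replace proportions of 0 and 1 so that z(p) stays finite."""
    if p == 0.0:
        return 1.0 / (2 * n_trials)
    if p == 1.0:
        return 1.0 - 1.0 / (2 * n_trials)
    return p

# Perfect performance on 50 trials of each type no longer yields an infinite d':
H, F = adjust(1.0, 50), adjust(0.0, 50)
print(round(norm.ppf(H) - norm.ppf(F), 2))  # ~4.65
```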
  21. Implied ROCs
     ROC Space and Isosensitivity Curves
  22. ROC Space and Isosensitivity Curves
     A good sensitivity measure should be invariant when factors other than sensitivity change.
     Detection theory assumes that sensitivity is fixed even under different response biases.
     When response bias differs, participants can produce various performance pairs that all lead to the same fixed d′.
  23. Isosensitivity Curve
     Isosensitivity curve: the locus of (false-alarm, hit) pairs yielding a constant d′.
     All points on the curve have the same sensitivity.
     We plot it in the unit square called ROC space.
  24. ROC Space
     ROC: Receiver (or Relative) Operating Characteristic
     ROC space: the region in which ROCs must lie; it is the unit square.
  25. ROC Curves
     Different curves represent different values of d′.
     When performance is at chance (d′ = 0), the ROC is the major diagonal → the chance line.
     The curves shift toward the upper left corner, where accuracy is perfect, as d′ increases.
     These ROC curves summarize the predictions of detection theory.
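A minimal sketch of tracing an isosensitivity curve: for a fixed d′, each false-alarm rate F implies a hit rate H = Φ(z(F) + d′), where Φ is the standard normal CDF (this anticipates the zROC form given two slides below); the d′ values are illustrative.

```python
# Minimal sketch: points on the isosensitivity (ROC) curve for a fixed d'.
# For each false-alarm rate F, the implied hit rate is H = Phi(z(F) + d'),
# where Phi is the standard normal CDF and z its inverse.
import numpy as np
from scipy.stats import norm

def roc_points(d_prime, n_points=9):
    F = np.linspace(0.1, 0.9, n_points)
    H = norm.cdf(norm.ppf(F) + d_prime)
    return list(zip(F.round(2), H.round(2)))

print(roc_points(1.0))  # d' = 1: curve bowed toward the upper-left corner
print(roc_points(0.0))  # d' = 0: H = F, the chance line (major diagonal)
```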
  26. Characteristics of Isosensitivity Curves
     The price of complete success in recognizing one stimulus class is complete failure in recognizing the other.
     The slope of these curves decreases as the tendency to respond "yes" increases.
  27. ROCs in Transformed Coordinates
     To find an algebraic expression for the ROC, we solve d′ = z(H) − z(F) for H as a function of F.
     The transformed ROC, or zROC: z(H) = z(F) + d′
     The range of values in these new units runs from minus to plus infinity.
  28. Shape of the zROC: a straight line with unit slope
     The linearity of zROCs can be used to predict how much the false-alarm rate will rise if the hit rate increases.
     d′ is the intercept of the straight-line zROC.
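A short sketch verifying the unit-slope, intercept-d′ property numerically (the d′ value and grid of points are illustrative):

```python
# Sketch: in z-coordinates the ROC is a straight line with unit slope and
# intercept d', i.e. z(H) = z(F) + d'. The values used are illustrative.
import numpy as np
from scipy.stats import norm

d = 1.0
F = np.linspace(0.1, 0.9, 5)
H = norm.cdf(norm.ppf(F) + d)                        # points on the d' = 1 ROC
slope, intercept = np.polyfit(norm.ppf(F), norm.ppf(H), 1)
print(round(slope, 2), round(intercept, 2))          # 1.0 1.0
```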
  29. The ROC Implied by p(c)
     Use the equation: H = F + [2p(c) − 1]
     This is a straight line of unit slope, in probability coordinates, with intercept equal to 2p(c) − 1.
     It also predicts how much the false-alarm rate will go up if the hit rate increases,
     but the prediction is different from that of detection theory.
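A minimal sketch contrasting the two implied ROCs through the same illustrative starting point: the p(c)-implied ROC is linear in (F, H) coordinates, while the detection-theory ROC is linear only in z-coordinates.

```python
# Sketch: the ROC implied by p(c) (linear in probability coordinates) versus
# the ROC implied by d' (curved in probability coordinates). The starting
# point (F, H) = (0.4, 0.8) is illustrative.
import numpy as np
from scipy.stats import norm

F0, H0 = 0.4, 0.8
pc = 0.5 * (H0 + (1 - F0))             # proportion correct at the starting point
d  = norm.ppf(H0) - norm.ppf(F0)       # d' at the starting point

F = np.linspace(0.1, 0.6, 6)
H_pc  = F + (2 * pc - 1)               # p(c)-implied ROC: H = F + [2p(c) - 1]
H_sdt = norm.cdf(norm.ppf(F) + d)      # detection-theory ROC

print(np.round(H_pc, 2))               # straight line in (F, H) space
print(np.round(H_sdt, 2))              # bowed curve through the same point
```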
  30. Which Implied ROCs Are Correct?
     The detection theory curves do a much better job than those for p(c).
     However, the unit slope of zROCs is not always observed experimentally; d′ may change with response bias.
     The unit-slope property reflects the equal importance of S1 and S2 trials to the corresponding sensitivity measure.
  31. When ROCs do have unit slope, they are symmetrical around the minor diagonal:
       d′(1 − H, 1 − F) = d′(F, H)
  32. Sensitivity as Perceptual Distance
     d′ has the mathematical properties of a distance measure:
     The distance between an object and itself is 0.
     All distances are positive (positivity).
     The distance between objects x and y is the same as between y and x (symmetry).
     Triangle inequality: d′(x, w) ≤ d′(x, y) + d′(y, w)
     It also has ratio-scaling properties and is unbounded.
  33. The triangle inequality can be replaced by:
       d′(x, w)^n = d′(x, y)^n + d′(y, w)^n
     When n = 2, this is the Euclidean distance formula.
     When n = 1, this is the "city-block" metric.
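A minimal sketch of combining component d′ values under this rule (the component values and function name are illustrative):

```python
# Sketch: combining d' values as distances, d'(x, w)^n = d'(x, y)^n + d'(y, w)^n.
# The component d' values below are illustrative.

def combine(d_xy, d_yw, n):
    """Combine two d' 'distances' under the rule above with exponent n."""
    return (d_xy ** n + d_yw ** n) ** (1.0 / n)

print(combine(1.0, 1.0, n=1))            # 2.0  -> city-block metric
print(round(combine(1.0, 1.0, n=2), 2))  # 1.41 -> Euclidean combination
```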
  34. The Signal Detection Model
     Underlying Distributions and the Decision Space
  35. Underlying Distributions and the Decision Space
     Detection theory assumes that a participant in our memory experiment judges each item's familiarity.
     Repeated presentations generate a distribution of familiarity values rather than the same value every time.
  37. The two distributions together comprise the decision space: the internal, or underlying, problem facing the observer.
     The participant can assess the familiarity value of the stimulus, but does not know which distribution led to that value.
     What is the best strategy for deciding on a response?
  38. Response Selection in the Decision Space
     The optimal rule is to establish a criterion that divides the familiarity dimension into two parts.
  39. The decision space provides an interpretation of how ROCs are produced.
     The participant can change the proportion of "yes" responses, and thus generate different points on an ROC, by moving the criterion.
     The distributions most often used satisfy this requirement by assuming that any value on the decision axis can arise from either distribution.
  40. Sensitivity in the Decision Space
     The difference between the means of the two distributions is a measure of sensitivity.
     This distance is in fact identical to d′.
     Distance along a line can be measured from any zero point, so we measure the means relative to the criterion k.
     The mean difference = (M2 − k) − (M1 − k)
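A minimal sketch of the decision space as an equal-variance Gaussian model: familiarity is sampled from N(0, 1) on New trials and N(d′, 1) on Old trials, the observer says "yes" above a criterion k, and d′ estimated from the resulting rates recovers the mean separation. All parameter values here are illustrative assumptions.

```python
# Sketch: simulating the equal-variance Gaussian decision space.
# New (S1) trials draw familiarity from N(0, 1), Old (S2) trials from N(d', 1);
# the observer responds "yes" whenever familiarity exceeds the criterion k.
# Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true_d, k, n = 1.5, 0.75, 100_000

old_familiarity = rng.normal(true_d, 1.0, n)   # S2 distribution, mean M2 = d'
new_familiarity = rng.normal(0.0,    1.0, n)   # S1 distribution, mean M1 = 0

H = np.mean(old_familiarity > k)               # hit rate
F = np.mean(new_familiarity > k)               # false-alarm rate
print(round(norm.ppf(H) - norm.ppf(F), 2))     # ~1.5, recovering the mean separation
```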
  41. Underlying Distributions and Transformations
     The "yes" rate is given by the cumulative distribution function.
     When M − k = 0, the yes rate is 50%; large positive differences correspond to high "yes" rates and large negative differences to low ones.
  42. The Cumulative Distribution Function
     d′ is the distance between these abscissa points: z(H) − z(F).
     The z transformation converts a proportion into a distance.
  43. Thus, we conclude…
     The definition of detection theory: a theory relating choice behavior to a psychological decision space.
     An observer's choices are determined by sensitivities and response biases.
  44. Calculational Methods
  45. You can find d′ using the "inverse normal" functions of a spreadsheet program.
     A more complex program is d′ plus.
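For example, in Excel-style spreadsheets the inverse-normal function is typically NORM.S.INV (legacy name NORMSINV), so with H in cell A1 and F in cell B1 a formula along the lines of =NORM.S.INV(A1)-NORM.S.INV(B1) computes d′; the exact function name depends on the spreadsheet.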
  46. Essay: The Provenance of Detection Theory
     The term "psychophysics" was invented by Gustav Fechner.
     He was the first to take a mathematical approach to relating the internal and external worlds on the basis of experimental data.
     Fechner developed experimental methods for estimating the difference threshold, or jnd (just noticeable difference).
  47. Attempts to measure jnds led to two complications:
     The threshold appeared not to be a fixed quantity.
     Different methods produced different values for the jnd.
  48. Redefinitions to survive the first problem:
     Some retained the literal notion of a sensory threshold, building mechanical and mathematical models to explain the gradual nature of the observed functions.
     Others substituted a continuum of experience for the discrete processes of the threshold.
  49. Statistical Decision Theory
     Purpose: to address the problem that knowing stimulus environments are noisy does not, by itself, tell an observer how best to cope with them.
     Statistical decision theory pointed out that information derived from noisy signals could lead to action only when evaluated against well-defined goals.
     Decisions should depend not only on the stimulus, but on the expected outcomes of action.
  50. By separating the world of stimuli and their perturbations from that of the decision process, detection theory was able to offer measures of performance that were not specific to procedure and that were independent of motivation.
     Procedure and motivation could still influence the data, but they affected only the decision process.
