1. ROC CURVE AND
ANALYSIS
SUBMITTED TO:
PROF. SOMEN SAHU
DEPT. OF FES
SUBMITTED BY –
AGNIVA PRADHAN
M.F.Sc 2ND SEMESTER
DEPT. OF FNT
M/F/2021/03
2. A receiver operating characteristic curve, or ROC
curve, is a graphical plot that illustrates the
diagnostic ability of a binary classifier system as its
discrimination threshold is varied. The method was
originally developed for operators of military radar
receivers starting in 1941, which led to its name.
The ROC curve is created by plotting the
true positive rate (TPR) against the
false positive rate (FPR) at various threshold
settings.
3. The term “Receiver Operating Characteristic” has its roots in World War II. ROC
curves were originally developed by the British as part of the “Chain Home” radar
system. ROC analysis was used to analyze radar data to differentiate between
enemy aircraft and signal noise (e.g. flocks of geese). As the sensitivity of the
receiver increased, so did the number of false positives (in other words, specificity
went down).
Note: The plot actually shows sensitivity vs (1 − specificity), and is therefore
sometimes called a sensitivity vs (1 − specificity) plot. The logic behind it is this: if
a test has zero diagnostic capability, it would be equally likely to produce a false
positive or a true positive, which is the same as:
Sensitivity = 1 – specificity.
4. True Positive Rate (TPR) is a synonym for recall and is
therefore defined as follows:
TPR = TP / (TP + FN)
False Positive Rate (FPR) is defined as follows:
FPR = FP / (FP + TN)
An ROC curve plots TPR vs. FPR at different classification
thresholds. Lowering the classification threshold classifies
more items as positive, thus increasing both False Positives
and True Positives. The following figure shows a typical
ROC curve.
Figure 4. TP vs. FP rate at different classification
thresholds.
To compute the points in an ROC curve, we could evaluate a
logistic regression model many times with different
classification thresholds, but this would be inefficient.
Fortunately, there is an efficient, sorting-based algorithm
that can provide this information for us; the resulting
summary measure is called AUC (Area Under the ROC Curve).
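A minimal sketch of this sorting-based idea in Python (the function name `roc_points` and the 0/1 label convention are my own assumptions, not from the slides): sorting the examples once by score lets every threshold's TPR and FPR be read off in a single pass, instead of re-scoring the model per threshold.

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) points by sweeping thresholds over sorted scores.

    scores: classifier scores; labels: 1 = positive, 0 = negative.
    Sorting once avoids re-evaluating the model at every threshold.
    """
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])  # descending score
    P = sum(labels)               # total positives
    N = len(labels) - P           # total negatives
    tp = fp = 0
    points = [(0.0, 0.0)]         # strictest threshold: nothing called positive
    for i, (score, label) in enumerate(pairs):
        tp += label
        fp += 1 - label
        # emit a point only between distinct scores (tied scores share a threshold)
        if i == len(pairs) - 1 or pairs[i + 1][0] != score:
            points.append((fp / N, tp / P))
    return points

# A perfectly separable toy example: the curve goes straight up, then across.
print(roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))
```

The AUC can then be obtained from these points with the trapezoidal rule; for the toy example above it is 1.0.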
5. Consider an example of 70 patients with solitary pulmonary nodules who underwent plain chest
radiography to determine whether the nodules were benign or malignant.
According to the biopsy results and/or follow-up evaluations, 34 patients actually had
malignancies and 36 patients had benign lesions.
Chest radiographs were interpreted according to a five-point scale:
1 (definitely benign),
2 (probably benign),
3 (possibly malignant),
4 (probably malignant), and
5 (definitely malignant).
In this example, one can choose from four different cutoff levels to define a positive test for
malignancy on the chest radiographs: viz. ≥2 (i.e., the most liberal criterion), ≥3, ≥4, and 5 (i.e., the
most stringent criterion). Therefore, there are four pairs of sensitivity and specificity values, one
pair for each cutoff level, and the sensitivities and specificities depend on the cutoff levels that are
used to define the positive and negative test results.
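How the four cutoffs yield four sensitivity/specificity pairs can be illustrated with a short Python sketch. Note the per-rating counts below are hypothetical (the slide gives only the totals of 34 malignant and 36 benign cases, not their breakdown across the five ratings):

```python
# Hypothetical rating counts (illustration only; the slide does not give
# the per-rating breakdown of the 34 malignant and 36 benign cases).
malignant = {1: 2, 2: 3, 3: 5, 4: 10, 5: 14}   # sums to 34
benign    = {1: 15, 2: 10, 3: 6, 4: 3, 5: 2}   # sums to 36

def sens_spec(cutoff):
    """Sensitivity and specificity when rating >= cutoff is called positive."""
    tp = sum(n for r, n in malignant.items() if r >= cutoff)
    tn = sum(n for r, n in benign.items() if r < cutoff)
    return tp / sum(malignant.values()), tn / sum(benign.values())

for cutoff in (2, 3, 4, 5):   # the four possible positivity criteria
    sens, spec = sens_spec(cutoff)
    print(f">= {cutoff}: sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```

As the criterion moves from the most liberal (≥2) to the most stringent (5), sensitivity falls while specificity rises, tracing out points along the ROC curve.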
6. The following data show the depression score (the test result
variable) and the disease outcome for each of 32 subjects:
Depression Score Outcome
40 Negative
49 Negative
47 Positive
46 Negative
31 Negative
32 Negative
46 Negative
66 Positive
48 Negative
46 Negative
46 Negative
38 Negative
64 Positive
41 Negative
45 Positive
32 Negative
34 Negative
60 Positive
57 Positive
55 Positive
50 Positive
41 Negative
65 Positive
47 Positive
49 Negative
63 Positive
47 Negative
46 Negative
56 Negative
40 Positive
49 Negative
60 Positive
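From these data the AUC can be computed directly as the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted half (the Mann-Whitney interpretation of AUC). A Python sketch using the 32 scores from the table above:

```python
# Depression scores from the table above, split by outcome.
positives = [47, 66, 64, 45, 60, 57, 55, 50, 65, 47, 63, 40, 60]   # 13 cases
negatives = [40, 49, 46, 31, 32, 46, 48, 46, 46, 38, 41, 32, 34,
             41, 49, 47, 46, 56, 49]                               # 19 cases

def auc(pos, neg):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney interpretation."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(round(auc(positives, negatives), 3))   # prints 0.848
```

The pairwise loop here is O(P*N) for clarity; the sorting-based method from slide 4 computes the same value in O(n log n).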
7.–9. (image-only slides)
10. The test result variable(s):
DepressionScore has at least one
tie between the positive actual
state group and the negative
actual state group.
The smallest cutoff value is
the minimum observed test
value minus 1, and the largest
cutoff value is the maximum
observed test value plus 1. All
the other cutoff values are the
averages of two consecutive
ordered observed test values.
Coordinates of the Curve
Test Result Variable(s): DepressionScore
Positive if Greater Than or Equal Toᵃ Sensitivity 1 − Specificity
30.00 1.000 1.000
31.50 1.000 .947
33.00 1.000 .842
36.00 1.000 .789
39.00 1.000 .737
40.50 .923 .684
43.00 .923 .579
45.50 .846 .579
46.50 .846 .316
47.50 .692 .263
48.50 .692 .211
49.50 .692 .053
52.50 .615 .053
55.50 .538 .053
56.50 .538 .000
58.50 .462 .000
61.50 .308 .000
63.50 .231 .000
64.50 .154 .000
65.50 .077 .000
67.00 .000 .000
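The cutoff rule quoted in slide 10 (minimum minus 1, midpoints of consecutive ordered observed values, maximum plus 1) can be checked against this coordinate table with a short Python sketch using the slide-6 data:

```python
# Scores from the slide-6 table, in order; 1 = Positive outcome, 0 = Negative.
scores = [40, 49, 47, 46, 31, 32, 46, 66, 48, 46, 46, 38, 64, 41, 45, 32,
          34, 60, 57, 55, 50, 41, 65, 47, 49, 63, 47, 46, 56, 40, 49, 60]
labels = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0,
          0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1]

# Cutoff rule from slide 10: min - 1, midpoints of consecutive distinct
# ordered values, max + 1.
vals = sorted(set(scores))
cutoffs = ([vals[0] - 1]
           + [(a + b) / 2 for a, b in zip(vals, vals[1:])]
           + [vals[-1] + 1])

P = sum(labels)               # 13 positives
N = len(labels) - P           # 19 negatives
for c in cutoffs:
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= c)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= c)
    print(f"{c:5.2f}  {tp / P:.3f}  {fp / N:.3f}")
```

Running this reproduces the 21 rows of the table, from cutoff 30.00 (everything positive: sensitivity 1.000, 1 − specificity 1.000) down to 67.00 (nothing positive: both 0.000).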