Kappa
Brief account of the kappa statistic for measuring agreement

Kappa Presentation Transcript

  • 1. Kappa is a form of correlation for measuring agreement on two or more diagnostic categories by two or more clinicians or methods. Why not use % agreement? Because just by chance there could be lots of agreement. Kappa can be defined as the proportion of agreements remaining after chance agreement is removed. A kappa of 0 occurs when agreement is no better than chance.
  • 2. A kappa of 1 indicates perfect agreement. A negative kappa means that there is less agreement than you'd expect by chance (very rare). Categories may be ordinal or nominal.
  • 3. How is it calculated?

        Patient ID   Psychiatrist   Psychologist
        1            1              2
        2            2              2
        3            2              2
        4            3              3
        5            3              3
        6            3              3
        7            3              3
        8            3              4
        9            3              4
        10           4              4
        11           4              4
        12           4              3
  • 4. Cross-tabulate the two sets of ratings (number of patients in each cell):

                        Psychiatrist
        Psychologist    1    2    3    4
        1               0    0    0    0
        2               1    2    0    0
        3               0    0    4    1
        4               0    0    2    2
  • 5. Steps:
        1. Add the agreements on the diagonal: 2 + 4 + 2 = 8
        2. For each category, multiply the number of times each judge used it: (1 x 0) + (2 x 3) + (6 x 5) + (3 x 4)
        3. Add them up = 48
        4. Apply the formula
  • 6. Kappa = (N x agreements - the total from step 3) / (N² - the total from step 3), which gives (12 x 8 - 48) / (144 - 48) = (96 - 48) / 96 = 48 / 96 = 0.50. (A worked code sketch follows the transcript.)
  • 7. How large should kappa be? Landis & Koch (1977) suggested:
        0.0 – 0.20 = no or slight agreement
        0.21 – 0.40 = fair
        0.41 – 0.60 = moderate
        0.61 – 0.80 = good
        > 0.80 = very good
  • 8. Weighted kappa: in ordinary kappa, all disagreements are treated equally. Weighted kappa takes the magnitude of the discrepancy into account (often most useful) and is often higher than unweighted kappa (see the sketch after the transcript).
  • 9. N.B. Be careful with kappa if the prevalence of one of the categories is very low (< 10%); this will underestimate the level of agreement. Example: if 2 judges are very accurate (95%), a kappa of 0.61 at a prevalence of 10% will drop to 0.45 if prevalence is 5% and to 0.14 if prevalence is 1% (reproduced in the last sketch below).
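
A minimal Python sketch of the calculation in slides 3–6, using the twelve patients from slide 3. The list and function names (psychiatrist, psychologist, cohen_kappa) are illustrative, not from the slides.

    from collections import Counter

    # Ratings of the 12 patients on slide 3: category assigned by each rater.
    psychiatrist = [1, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4]
    psychologist = [2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3]

    def cohen_kappa(rater_a, rater_b):
        """Unweighted kappa, following the counting steps on slide 5."""
        n = len(rater_a)
        categories = sorted(set(rater_a) | set(rater_b))

        # Step 1: count exact agreements (the diagonal of the table on slide 4).
        agreements = sum(a == b for a, b in zip(rater_a, rater_b))

        # Steps 2-3: for each category, multiply how often each judge used it,
        # then add the products: (1 x 0) + (2 x 3) + (6 x 5) + (3 x 4) = 48.
        count_a, count_b = Counter(rater_a), Counter(rater_b)
        chance_sum = sum(count_a[c] * count_b[c] for c in categories)

        # Step 4: kappa = (N x agreements - chance_sum) / (N² - chance_sum).
        return (n * agreements - chance_sum) / (n * n - chance_sum)

    print(cohen_kappa(psychiatrist, psychologist))  # 0.5, as on slide 6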
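
Slide 8 gives no formula for weighted kappa; one standard version, for ordinal categories coded as integers, penalises each disagreement by how far apart the two ratings are, either linearly or quadratically. A sketch under that assumption, reusing the slide 3 data (weighted_kappa is an illustrative name):

    from collections import Counter

    psychiatrist = [1, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4]  # same data as slide 3
    psychologist = [2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3]

    def weighted_kappa(rater_a, rater_b, weight="linear"):
        """Weighted kappa: disagreements are penalised by their size,
        |i - j| for linear weights or (i - j)**2 for quadratic weights."""
        n = len(rater_a)
        categories = sorted(set(rater_a) | set(rater_b))
        count_a, count_b = Counter(rater_a), Counter(rater_b)
        observed = Counter(zip(rater_a, rater_b))

        def w(i, j):
            return abs(i - j) if weight == "linear" else (i - j) ** 2

        # Weighted observed disagreement versus weighted chance disagreement.
        obs = sum(w(i, j) * observed[(i, j)]
                  for i in categories for j in categories)
        exp = sum(w(i, j) * count_a[i] * count_b[j] / n
                  for i in categories for j in categories)
        return 1 - obs / exp

    print(weighted_kappa(psychiatrist, psychologist))  # about 0.62

All the observed disagreements here are only one category apart, while the chance disagreements include bigger gaps that carry more weight, so linear weights lift kappa from 0.50 to about 0.62, illustrating the slide's remark that weighted kappa is often higher than unweighted kappa.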
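
The drop described on slide 9 can be reproduced with a simple model: assume each judge independently classifies a binary condition correctly 95% of the time, for both the rare and the common category (assumptions the slide does not spell out), and compute the kappa those agreement rates imply. The function name expected_kappa is illustrative.

    def expected_kappa(prevalence, accuracy=0.95):
        """Kappa implied by two independent judges who each classify a
        binary condition correctly with probability `accuracy`."""
        p, a = prevalence, accuracy
        # How often either judge labels a case as positive.
        q = p * a + (1 - p) * (1 - a)
        # Observed agreement: both judges right, or both wrong, on a case.
        p_o = a ** 2 + (1 - a) ** 2
        # Chance agreement implied by the marginal labelling rates.
        p_e = q ** 2 + (1 - q) ** 2
        return (p_o - p_e) / (1 - p_e)

    for prevalence in (0.10, 0.05, 0.01):
        print(f"prevalence {prevalence:.0%}: kappa = {expected_kappa(prevalence):.2f}")
    # prevalence 10%: kappa = 0.61
    # prevalence 5%: kappa = 0.45
    # prevalence 1%: kappa = 0.14

Under these assumptions the output matches the slide, 0.61 at 10% prevalence, 0.45 at 5% and 0.14 at 1%, even though the judges' accuracy never changes.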