3. Who’s this guy?
● Dan Slimmon
● Senior Platform Engineer at Exosite
● Previously Operations Team Manager at
Blue State Digital
● https://twitter.com/danslimmon
4. Sensitivity & Specificity
● Medical testing concepts
● “How good is your test?”
● Medical concepts often work great for ops
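The two rates are just ratios over a confusion matrix. A minimal sketch in Python (the function and argument names are mine, not from the talk):

```python
def sensitivity(tp, fn):
    """Fraction of actual positives the test catches (true-positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives the test correctly clears (true-negative rate)."""
    return tn / (tn + fp)

# e.g. a test that flags 90 of 100 sick patients and clears 80 of 100 healthy ones
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```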
7. A word problem
● If a paper contains plagiarism, you have a
90% chance of a positive result.
● If a paper doesn’t contain plagiarism, you
still have a 20% chance of a positive result.
● Jerkwad kids plagiarize 30% of the time
8. Question 1
Given a random paper, what’s the probability
that you’ll get a negative result?
● Plagiarism: 90% chance of positive
● No plagiarism: 20% chance of positive
● 30% chance of plagiarism
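A negative result can come from either case, so the answer falls out of the law of total probability. A quick check with the slide's numbers (variable names are mine):

```python
p_plag = 0.30          # base rate: 30% of papers are plagiarized
p_pos_if_plag = 0.90   # sensitivity: 90% chance of a positive given plagiarism
p_pos_if_clean = 0.20  # false-positive rate: 20% chance of a positive anyway

# Law of total probability: a negative can come from either kind of paper
p_neg = p_plag * (1 - p_pos_if_plag) + (1 - p_plag) * (1 - p_pos_if_clean)
print(round(p_neg, 2))  # 0.59 — a 59% chance of a negative result
```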
9. Question 2
If there’s plagiarism, what’s the probability you’ll
detect it?
● Plagiarism: 90% chance of positive
● No plagiarism: 20% chance of positive
● 30% chance of plagiarism
11. Question 3
If you get a positive result, what’s the
probability that the paper is plagiarized?
● Plagiarism: 90% chance of positive
● No plagiarism: 20% chance of positive
● 30% chance of plagiarism
21. Question 3
If you get a positive result, what’s the
probability that the paper was plagiarized?
Dark Green / [(Dark Blue) + (Dark Green)]
22. Question 3
If you get a positive result, what’s the
probability that the paper was plagiarized?
27 / (14 + 27)
23. Question 3
If you get a positive result, what’s the
probability that the paper was plagiarized?
≈ 65.9%
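The fraction on the previous slide is just Bayes' rule over the two shaded regions. Verifying with the word problem's numbers:

```python
p_tp = 0.30 * 0.90   # plagiarized AND flagged -> 0.27 (the "dark green" region)
p_fp = 0.70 * 0.20   # clean AND flagged       -> 0.14 (the "dark blue" region)

# Positive predictive value: P(plagiarism | positive result)
ppv = p_tp / (p_tp + p_fp)
print(round(ppv, 3))  # 0.659
```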
37. The true-positive probability
Let’s calculate the probability that any given
probe run will produce a true positive.
P(TP) = (prob. of service failure) * (sensitivity)
P(TP) = 0.1% * 99%
P(TP) = 0.099%
39. The false-positive probability
P(FP) = (prob. working) * (100% - specificity)
P(FP) = 99.9% * 1%
P(FP) = 0.999%
So roughly 1 in every 100 checks will be a false
positive.
41. Positive predictive value
PPV = P(TP) / [P(TP) + P(FP)]
PPV = 0.099% / (0.099% + 0.999%)
PPV = 9.0%
If you get a positive, there’s only about a 1 in 11
chance that something’s actually wrong.
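Putting the last few slides together: the failure rate, sensitivity, and specificity below are the talk's numbers, and the PPV follows mechanically.

```python
p_fail = 0.001   # the service is actually broken 0.1% of the time
sens   = 0.99    # probe catches 99% of real failures (sensitivity)
spec   = 0.99    # probe passes 99% of healthy checks (specificity)

p_tp = p_fail * sens               # true-positive probability per check
p_fp = (1 - p_fail) * (1 - spec)   # false-positive probability per check

# Positive predictive value: P(actually broken | alert fired)
ppv = p_tp / (p_tp + p_fp)
print(round(ppv, 2))  # 0.09 — about 9 in 10 alerts are false alarms
```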
54. A Pony I Want
Something like Nagios, but which
● Is SNR-aware
● Helps you separate detection from diagnosis
55. Other useful stuff
● Medical paper with a nice visualization:
http://tinyurl.com/specsens
● Blog post with some algebra:
http://tinyurl.com/carsmoke
● Base rate fallacy:
http://tinyurl.com/brfallacy
● Differential diagnosis:
http://tinyurl.com/sbddx