Dealing with Uncertainty: What the reverend Bayes can teach us.
By Jurgen Van Gael - http://jvangael.github.io/ - @jvangael

As data scientists and decision makers, we are surrounded by uncertainty: data is noisy, missing, wrong, or inherently uncertain. Statistics offers a wide set of theories and tools to deal with this uncertainty, yet most people are unaware of a unifying theory of uncertainty. In this talk I want to introduce the audience to a branch of statistics called Bayesian reasoning: a unifying, consistent, logical, and, most importantly, successful way of dealing with uncertainty.

Over the past two centuries there have been many proposals for dealing with uncertainty (e.g. frequentist probabilities, fuzzy logic, ...). Under the influence of early-20th-century statisticians, the Bayesian formalism was somewhat pushed into the background of the statistical scene. More recently, though, partly to the credit of computer science, Bayesian thinking has seen a revival. So what, and how much, should a data scientist or decision maker know about Bayesian thinking?

My talk will consist of four different parts. In the first part, I will explain the central dogma of Bayesian thinking: Bayes' rule. This simple equation (four variables, one multiplication and one division!) describes how we should update our beliefs about the world in light of new data. I will discuss evidence from neuroscience and psychology that the brain uses Bayesian mechanisms to reason about the world. Unfortunately, sometimes the brain fails miserably at taking all the variables of Bayes' rule into account.
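
To make the updating concrete, here is a minimal sketch in Python (my illustration, not the talk's; in particular, taking "biased" to mean a 75% chance of heads is an assumed value):

    # Bayes' rule as belief updating: posterior is proportional to likelihood x prior.
    # Assumption (not from the talk): a "biased" coin lands heads 75% of the time.
    prior = {'fair': 0.5, 'biased': 0.5}         # original belief
    likelihood = {'fair': 0.5, 'biased': 0.75}   # p(heads | hypothesis)

    def update(prior, likelihood, n_heads):
        unnorm = {h: (likelihood[h] ** n_heads) * prior[h] for h in prior}
        evidence = sum(unnorm.values())          # p(data), the denominator
        return {h: p / evidence for h, p in unnorm.items()}

    # After seeing two heads, belief shifts towards the biased hypothesis:
    print(update(prior, likelihood, n_heads=2))  # {'fair': ~0.31, 'biased': ~0.69}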

This leads to the second part of the talk, where I will illustrate Bayes' rule as a tool for decision makers to reason about uncertainty.

In the third part of the talk I will give an example of how we can build machine learning systems around Bayes' rule. The key idea here is that Bayes' rule allows us to keep track of uncertainty about the world. In this part I will show a Bayesian machine learning system in action.
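
For a flavour of what such a system looks like, here is a toy, naive-Bayes-style spam score in plain Python (my sketch; the word frequencies and the 40% spam prior are invented numbers, and real filters are far more elaborate):

    import math

    # Invented training estimates of p(word | class).
    p_word_given_spam = {'viagra': 0.02, 'meeting': 0.001}
    p_word_given_ham = {'viagra': 0.0001, 'meeting': 0.05}
    p_spam = 0.4  # prior belief that an incoming mail is spam

    def p_spam_given_content(words):
        # Accumulate log p(Content | class) + log p(class) for both classes.
        log_spam, log_ham = math.log(p_spam), math.log(1 - p_spam)
        for w in words:
            log_spam += math.log(p_word_given_spam.get(w, 1e-6))
            log_ham += math.log(p_word_given_ham.get(w, 1e-6))
        # Normalising gives p(Spam | Content) via Bayes' rule.
        return 1.0 / (1.0 + math.exp(log_ham - log_spam))

    print(p_spam_given_content(['viagra']))  # close to 1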

In the final part of the talk I will introduce the concept of “Probabilistic Programming”. Probabilistic programming is an embryonic programming paradigm that introduces “uncertain variables” as first-class citizens of a programming language and then uses Bayes' rule to execute the programs.
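
In miniature, and assuming a PyMC3-style API (slide 25 in the transcript below shows the talk's own, fuller version of this model): the uncertain variable sick is declared like any other value, an observed effect is attached, and sampling executes the program backwards from effect to cause.

    import pymc3 as pm

    with pm.Model():
        sick = pm.Bernoulli('sick', p=0.01)   # uncertain variable: 1% prior
        test = pm.Bernoulli('test', p=sick * 0.9 + (1 - sick) * 0.1,
                            observed=1)       # a positive 90%-accurate test
        trace = pm.sample(2000)
    print(trace['sick'].mean())               # ~0.08: still probably healthy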

The Bayesian framework has been prominent at machine learning conferences over the last few years. In this talk I want to help the audience understand how the Bayesian framework can help them in their data mining and decision making processes. If people leave the talk thinking Bayes' rule is the E = mc^2 of data science, I will consider the presentation a success.

Transcript

  • 1. Dealing With Uncertainty: What the reverend Bayes can teach us
  • 2. Probability – Bernoulli, de Moivre §  Fair coin: 50% heads, 50% tails. What is the probability of two consecutive heads? Each of the four possible outcomes (HH, HT, TH, TT) is equally likely at 25%, so two heads has probability 25%.
  • 3. 1701 (the year in which Thomas Bayes is thought to have been born)
  • 4. Inverse Probability (Bayes) §  Given a coin, we are not sure whether it is biased. §  If two flips turn up heads, is the coin biased or not? Original Belief + Observation → New Belief
  • 5. BAYESIAN PROBABILITY
  • 6. Cox Axioms §  The plausibility of a statement is a real number and is dependent on information we have related to the statement. §  Plausibilities should vary sensibly with the assessment of plausibilities in the model. §  If the plausibility of a statement can be derived in many ways, all the results must be equal. Outcome: §  If A is true then p(A) = 1 §  p(A) + p(not A) = 1 §  p(A and B) = p(A|B) × p(B)
  • 7. Original Belief + Observation → New Belief: p(“cause” | “effect”) = p(“effect” | “cause”) × p(“cause”) / p(“effect”)
  • 8. What is the probability that the person behind the screen is a girl? 50% What is the probability that the person called Charlie behind the screen is a girl?
  • 9. The probability of the name Charlie §  Girls: 32 / 22989 = 0.13% §  Boys: 89 / 22070 = 0.4%
  • 10. What is the probability that the person called Charlie behind the screen is a girl? p(Girl | “Charlie”) = p(“Charlie” | Girl) × p(Girl) / (p(“Charlie” | Girl) × p(Girl) + p(“Charlie” | Boy) × p(Boy)) = (0.13% × 50%) / (0.13% × 50% + 0.4% × 50%) ≈ 25% (a numeric check follows the transcript)
  • 11. BAYESIAN MACHINE LEARNING
  • 12. p(Spam | Content) = p(Content | Spam) × p(Spam) / p(Content)
  • 13. TrueSkill: p(Skill | Match Outcomes) = p(Match Outcomes | Skill) × p(Skill) / p(Match Outcomes)
  • 14. p(Road_t+1 | Image_t) = p(Image_t | Road_t) × p(Road_t) / p(Image_t)
  • 15. Bayesian Sick People Experiment §  1 in 100 has a health issue. §  The test is 90% accurate. §  You test positive; what are the odds that you need a treatment?
  • 16. What is the probability of being sick? A.  ≈ 95% B.  ≈ 90% C.  ≈ 50% D.  ≈ 10%
  • 17–19. (progressive build) §  1000 people in our sample. §  We expect 10 people to be sick (give or take). §  Imagine testing all individuals: → 9 out of 10 sick people test positive. → 99 out of 990 healthy people test positive! §  In other words, if you test positive it is actually not very likely that you are sick (a numeric check follows the transcript).
  • 20. PROBABILISTIC PROGRAMMING
  • 21. Cause → Effect: Input → Output. Effect → Cause: Output → Input.
  • 22. §  Imagine a timeline of sales per day for a particular product. §  Did the sales rate for this product change over time?
  • 23. Thinking From Cause to Effect §  In: the sales rate for period 1, the sales rate for period 2, and the switchover point between periods 1 and 2. §  Output: unit sales over periods 1 and 2. The slide's PyMC model (a usage sketch follows the transcript):

        import numpy as np
        import pymc3 as pymc  # assuming the PyMC3 API, aliased to match the slide

        # data: the observed unit sales per day, an array of length 70
        model = pymc.Model()
        with model:
            # day on which the sales rate switches from period 1 to period 2
            switch = pymc.DiscreteUniform('switch', lower=0, upper=70)
            # sales rates for the two periods
            rate_1 = pymc.Exponential('rate_1', 1.0)
            rate_2 = pymc.Exponential('rate_2', 1.0)
            # each day uses rate_1 up to the switchover and rate_2 afterwards
            rates = pymc.math.switch(switch >= np.arange(70), rate_1, rate_2)
            unit_sales = pymc.Poisson('unit_sales', rates, observed=data)
  • 24. References §  Bayesian vs. Frequentist Statistics -  http://www.stat.ufl.edu/~casella/Talks/BayesRefresher.pdf §  Probabilistic Programming & Bayesian Methods for Hackers -  https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers §  Bayesian Methods -  http://www.gatsby.ucl.ac.uk/~zoubin/tmp/tutorial.pdf §  “The Theory That Would Not Die”, Sharon Bertsch McGrayne -  http://www.amazon.co.uk/dp/0300188226
  • 25. Medical Example using PyMC

        model = pymc.Model()
        with model:
            # prior: 1 in 100 people is sick
            sick = pymc.Bernoulli('sick', p=0.01)
            # a 90%-accurate test, observed to come back positive
            test_result = pymc.Bernoulli('test_result',
                                         p=sick * 0.9 + (1 - sick) * 0.1,
                                         observed=[1])
            algorithm = pymc.Metropolis()
            trace = pymc.sample(1000, algorithm)
        print("Pr(Sick | Test) = %f" % trace['sick'].mean())
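
As a quick numeric check of slide 10 (my sketch, plain Python):

    p_charlie_given_girl = 32 / 22989   # p("Charlie" | Girl), the slide's ~0.13%
    p_charlie_given_boy = 89 / 22070    # p("Charlie" | Boy), the slide's ~0.4%
    p_girl = p_boy = 0.5                # prior: 50/50
    posterior = (p_charlie_given_girl * p_girl) / (
        p_charlie_given_girl * p_girl + p_charlie_given_boy * p_boy)
    print(posterior)                    # ~0.26, the slide's ~25%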
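
And the counting argument of slides 17–19:

    population = 1000
    sick = population * 0.01                      # expect 10 sick people
    true_positives = sick * 0.9                   # 9 sick people test positive
    false_positives = (population - sick) * 0.1   # 99 healthy people test positive
    print(true_positives / (true_positives + false_positives))  # ~0.083, answer D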
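
Finally, a hedged usage sketch for slide 23's change-point model (reusing its model variable and assuming PyMC3's Metropolis/sample API, which slide 25 also uses; this inference step is not on the slide):

    with model:
        step = pymc.Metropolis()          # MCMC over the uncertain variables
        trace = pymc.sample(10000, step)
    print(trace['switch'].mean())         # posterior mean of the switchover day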
