
Bayesian data analysis


It is hard to live without generating and using data. Bayesian Data Analysis builds upon basic probability theory and gives us the necessary tools to deal with realistic data. In this session, I will make no assumptions about your knowledge of probability or random processes. Using the very popular case of coin tosses, I introduce the concepts of Prior, Likelihood and Posterior. This gives me the platform to discuss Bayes' Rule and the Bernoulli Distribution. Further, the algebraic limitations of such an analysis lead us to appreciate the purpose and properties of Beta Distributions. If you have heard of "Weak Priors", "Strong Priors" and "Conjugates", but never quite got a chance to study them, you will find this talk very interesting. Ever since joining the PhD program at IIIT, it has been a journey through Randomized Algorithms, Information Retrieval, Intelligent Systems and Statistical Computation, all of which deal with related perspectives on probability theory. General wisdom claims that discussing fundamentals is harder than discussing advanced material. I look forward to this discussion and hope that you will find value in it.


Bayesian data analysis

  1. Bayesian Data Analysis and Beta Distribution. Venkatesh Vinayakarao, IIIT Delhi, India. Thomas Bayes, 1701 to 1761. IIIT Delhi Technical Talk, Ketchup Series, 12th Nov 2014.
  2. Agenda: Updating Beliefs using Probability Theory; Role of Beta Distributions. Will Discuss: Concepts, Illustrations, Intuitions, Purpose, Properties. Will not Discuss: Details, Definitions, Formalism, Derivations, Proofs.
  3. The Case of Coin Flips. General Assumption: if a coin is fair, Heads (H) and Tails (T) are equally likely. But a coin need not be fair. Experiment with Coin 1: HHHTTTHTHT. Experiment with Coin 2: HHHHHHHHHT. Coin 1 is more likely to be fair than Coin 2.
  4. Our Beliefs. Can we find a structured way to determine a coin's nature? Coin 1: data observed None, H, T, H, T; belief goes Fair (prior belief), Skewed, Fair, Skewed, Fair. Coin 2: data observed None, H, H, H, H; belief goes Fair (prior belief), Skewed, More Skewed, Even More Skewed, Even Even More Skewed. Each new observation is a belief update.
  5. Priors. Priors can be strong or weak. Strong Prior: the coin is lab tested for 1 million tosses, with 50% H and 50% T observed; one more observation will not change our belief significantly. Weak Prior: a new coin; a few observations are sufficient to change our belief significantly.
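A quick way to make "strong" versus "weak" concrete is the pseudo-count view of the Beta prior introduced later in the deck (slides 17 and 20). The specific numbers below (a lab-tested coin summarized as Beta(500001, 500001), a new coin as Beta(2, 2)) are illustrative assumptions, not from the slides; the point is only that one extra Head barely moves the strong prior, but visibly moves the weak one.

    from scipy.stats import beta

    # Assumed pseudo-counts: strong prior ~ 1 million lab tosses, weak prior ~ a fresh coin.
    strong = (500_001, 500_001)   # Beta(a, b) after ~500k Heads and ~500k Tails
    weak = (2, 2)                 # Beta(a, b) after one Head and one Tail

    for name, (a, b) in [("strong", strong), ("weak", weak)]:
        before = beta.mean(a, b)
        after = beta.mean(a + 1, b)   # belief about P(Heads) after observing one more Head
        print(name, "prior: mean shifts by", after - before)

Running this shows the strong prior's mean moves by roughly 5e-7, while the weak prior's mean jumps from 0.5 to 0.6.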
  6. HyperParameter. The prior probability (of Heads) could be anything: 0.5 → Fair Coin; 0.25 → Skewed towards Tails; 0.75 → Skewed towards Heads; 1 → Heads is guaranteed; 0 → Both sides are Tails. We use θ as a hyperparameter to visualize what happens for different values.
  7. World of Distributions. A discrete distribution for the prior: since I typically perceive coins as fair, the prior belief peaks at 0.5.
  8. Another Possibility. I may also choose to be unbiased, i.e., θ may take any value with equal likelihood: a continuous uniform distribution.
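Both prior choices on slides 7 and 8 can be written down directly as a grid over θ. This is a minimal sketch; the 11-point grid and the triangular shape of the "peaked at 0.5" prior are assumptions for illustration, since the slides only say that one prior peaks at 0.5 and the other is uniform.

    import numpy as np

    # Discretize the hyperparameter theta = P(Heads) on a grid from 0 to 1.
    theta = np.linspace(0, 1, 11)

    # Prior 1: peaked at 0.5 (the triangular shape is an assumption).
    peaked = 1 - np.abs(theta - 0.5)
    peaked /= peaked.sum()                        # normalize so the probabilities sum to 1

    # Prior 2: uniform -- every value of theta is equally likely.
    uniform = np.ones_like(theta) / len(theta)

    for t, p1, p2 in zip(theta, peaked, uniform):
        print(f"theta={t:.1f}  peaked prior={p1:.3f}  uniform prior={p2:.3f}")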
  9. Observations. Let's flip the coin N = 5 times. We observe z = 3 Heads.
  10. Impact of Data. Belief is influenced by observations, but note that belief ≠ observation. Bayes' Rule: p(θ|D) = p(D|θ) p(θ) / p(D). Eeks… what's in the denominator?
  11. The Numerator is Easy. p(θ) was uniform, so there is nothing to calculate there. How do we calculate p(D|θ)? Jacob Bernoulli, 1655 to 1705. The Bernoulli likelihood is p(D|θ) = θ^z (1 - θ)^(N - z). If the observed D is HHHTT and θ is 0.5, we have p(D|θ) = (0.5)^3 (1 - 0.5)^(5-3). Remember two things: 1) we are interested in the distribution; 2) the order of H and T does not matter.
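A minimal sketch of the likelihood on slide 11: for z Heads in N flips, p(D|θ) = θ^z (1 - θ)^(N - z) once the order of Heads and Tails is ignored. The helper name is ours, not from the slides.

    def likelihood(theta, z, n):
        """Bernoulli likelihood of z Heads in n flips, ignoring order: theta^z * (1 - theta)^(n - z)."""
        return theta**z * (1 - theta)**(n - z)

    # The slide's example: D = HHHTT, so z = 3 Heads in n = 5 flips, evaluated at theta = 0.5.
    print(likelihood(0.5, z=3, n=5))   # 0.5**3 * 0.5**2 = 0.03125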
  12. Bayesian Update.
  13. The Painful Denominator. Recall that for discrete distributions, p(D) = Σ_θ p(D|θ) p(θ), and for continuous distributions, p(D) = ∫ p(D|θ) p(θ) dθ.
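The denominator p(D) is just the likelihood averaged over the prior: a sum over a discrete grid, or an integral over a continuous prior. A sketch under the uniform-prior, N = 5, z = 3 setup from the earlier slides (the 11-point grid is an assumption):

    import numpy as np
    from scipy.integrate import quad

    z, n = 3, 5   # 3 Heads in 5 flips, as on slide 9

    def likelihood(theta):
        return theta**z * (1 - theta)**(n - z)

    # Discrete version: p(D) = sum over theta of p(D|theta) * p(theta), with a uniform grid prior.
    theta_grid = np.linspace(0, 1, 11)
    prior = np.ones_like(theta_grid) / len(theta_grid)
    p_data_discrete = np.sum(likelihood(theta_grid) * prior)

    # Continuous version: p(D) = integral of p(D|theta) * p(theta) d(theta), with p(theta) = 1 on [0, 1].
    p_data_continuous, _ = quad(likelihood, 0, 1)

    # Bayes' rule then gives the posterior by dividing the numerator by p(D).
    posterior = likelihood(theta_grid) * prior / p_data_discrete
    print(p_data_discrete, p_data_continuous)
    print(posterior.round(3))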
  14. A Simpler Way: Form, Functions and Distributions. Normal (or Gaussian). Poisson.
  15. What form will suit us?
  16. Beta Distribution. Vadivelu, a famous Tamil comedian; this is one of his great expressions, terrified and confused.
  17. Beta Distribution. It takes two parameters: Beta(a, b). For now, assume a = #H + 1 and b = #T + 1. After 10 flips, we may end up in one of these three:
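With the rule a = #H + 1 and b = #T + 1, each possible outcome of the 10 flips picks out a specific Beta density. The slide's three example outcomes are not in the transcript, so the three cases below (all Heads, an even split, all Tails) are assumptions for illustration:

    import numpy as np
    from scipy.stats import beta

    n = 10
    theta = np.linspace(0.01, 0.99, 5)

    # Assumed example outcomes of 10 flips: 10 Heads, 5 Heads, 0 Heads.
    for heads in (10, 5, 0):
        a, b = heads + 1, (n - heads) + 1        # a = #H + 1, b = #T + 1
        print(f"Beta({a},{b}) density at theta = {theta.round(2)}:")
        print(" ", beta.pdf(theta, a, b).round(3))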
  18. Prior and Posterior. Let's say we have a strong prior, Beta(11, 11). What should happen if we see 10 more observations with 5 H and 5 T?
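A minimal sketch of the update on slide 18, using the conjugate rule stated two slides later: the Beta(11, 11) prior combined with 5 Heads and 5 Tails gives a Beta(16, 16) posterior, with the same mean of 0.5 but slightly less spread.

    from scipy.stats import beta

    a_prior, b_prior = 11, 11          # strong prior: Beta(11, 11)
    z, n = 5, 10                       # new data: 5 Heads in 10 flips

    a_post, b_post = a_prior + z, b_prior + (n - z)   # posterior: Beta(16, 16)

    print("prior     mean %.3f, sd %.3f" % (beta.mean(a_prior, b_prior), beta.std(a_prior, b_prior)))
    print("posterior mean %.3f, sd %.3f" % (beta.mean(a_post, b_post), beta.std(a_post, b_post)))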
  19. Prior and Posterior (continued). What if we have not seen any data?
  20. Conjugate Prior. So we see that a Beta(a, b) prior combined with z Heads in N trials gives a Beta(a + z, b + N - z) posterior. Priors that have the same form as the posterior are called Conjugate Priors.
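The conjugate update on slide 20 is a one-line function: a Beta(a, b) prior plus z Heads in N trials yields a Beta(a + z, b + N - z) posterior. A sketch (the helper name is ours, not from the slides):

    def update_beta(a, b, z, n):
        """Conjugate update: Beta(a, b) prior + z Heads in n trials -> Beta(a + z, b + n - z) posterior."""
        return a + z, b + (n - z)

    print(update_beta(1, 1, 3, 5))     # uniform Beta(1, 1) prior, 3 Heads in 5 flips -> Beta(4, 3)
    print(update_beta(11, 11, 5, 10))  # strong Beta(11, 11) prior, 5 H and 5 T -> Beta(16, 16)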
  21. Summary: Prior, Likelihood, Posterior, Bayes' Rule, Bernoulli Distribution, Beta Distribution.
  22. References: Doing Bayesian Data Analysis, John K. Kruschke (book); Statistics 101: The Binomial Distribution, Brandon Foltz (YouTube video).
