Presented at AnSci 875 Linear Models with Applications in Biology and Agriculture. University of Wisconsin-Madison.


- 1. Chapter 12: Application of Gibbs Sampling in Variance Component Estimation and Prediction of Breeding Values. From *Linear Models for the Prediction of Animal Breeding Values*, R. A. Mrode. Presented by Gota Morota, May 6, 2010.
- 2. Outline: Pre-Gibbs Sampling Era; Gibbs Sampling; Gibbs Sampling vs. REML and BLUP.
- 5. Controversy over REML.
  - REML gives joint modes of the variance components rather than marginal modes (under quadratic loss, $\sum_{ij} (y_{ij} - \hat{y}_{ij})^2$).
  - Not all variance components are of equal importance. ⇒ Variance components of no interest (nuisance parameters) should be integrated out. ⇒ Only the parameters of interest should be maximized in the likelihood.
- 6. VEIL (1990): Variance Estimation from Integrated Likelihood. Gianola D, Foulley JL (1990) Variance estimation from integrated likelihoods (VEIL). Genet Sel Evol 22, 403-417. Inference is based on the marginal posterior distribution of each of the variance components. ⇒ Approximations to the marginal distributions were proposed.
- 7. Gibbs Sampling: from conditional distributions to marginal distributions.
  - Difficult: integrating out nuisance parameters directly, $p(\theta_a) = \int \int \int p(\theta_a, \theta_b, \theta_c, \theta_d)\, d\theta_b\, d\theta_c\, d\theta_d$
  - Easy: sampling from the full conditional $p(\theta_a \mid \theta_b, \theta_c, \theta_d)$
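The idea on this slide can be sketched with a toy example: for a bivariate normal with correlation $\rho$, each full conditional is univariate normal, so alternately sampling from the two conditionals yields draws whose retained values approximate the marginal. This is an illustrative sketch, not part of the original slides; the correlation value and iteration counts are arbitrary.

```python
import numpy as np

# Toy Gibbs sampler for a standard bivariate normal with correlation rho.
# The full conditionals x|y ~ N(rho*y, 1-rho^2) and y|x ~ N(rho*x, 1-rho^2)
# are easy to sample; the chain's x-draws approximate the marginal N(0, 1).
rng = np.random.default_rng(0)
rho = 0.8
n_iter = 20000
x, y = 0.0, 0.0
xs = np.empty(n_iter)
for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw from x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw from y | x
    xs[t] = x

# After discarding burn-in, sample mean and sd should be near 0 and 1.
print(xs[1000:].mean(), xs[1000:].std())
```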
- 8. History of Bayesian Analyses Coupled with Gibbs Sampling.
  - Gelfand AE and Smith AFM (1990): introduced Gibbs sampling into statistics
  - Gelfand et al. (1990): one-way random effects model
  - C. S. Wang et al. (1993): univariate mixed linear model using simulated data
  - C. S. Wang et al. (1994): univariate mixed linear model using field data (litter size)
  - D. A. Sorensen et al. (1995): threshold models
  - Jamrozik and Schaeffer (1997): random regression
  Gibbs sampling has since been applied to a wide range of animal breeding problems.
- 9. Likelihood. Bayes' theorem: joint posterior distribution ∝ likelihood × priors.
  - Consider the linear model $y = XB + Zu + e$
  - Further, $e \mid \sigma^2_e \sim N(0, I\sigma^2_e)$
  - The conditional distribution which generates the data (the likelihood):
    $y \mid B, u, \sigma^2_e \sim N(XB + Zu,\ I\sigma^2_e) \propto (\sigma^2_e)^{-n/2} \exp\left[-\frac{(y - XB - Zu)'(y - XB - Zu)}{2\sigma^2_e}\right]$ (1)
- 10. Prior Distribution for Location Parameters.
  - Prior for B: $p(B) \propto \text{constant}$ (2)
  - Prior for u:
    $u \mid A\sigma^2_u \sim N(0, A\sigma^2_u) \propto (\sigma^2_u)^{-q/2} \exp\left[-\frac{(u - 0)'A^{-1}(u - 0)}{2\sigma^2_u}\right] \propto (\sigma^2_u)^{-q/2} \exp\left[-\frac{u'A^{-1}u}{2\sigma^2_u}\right]$ (3)
- 11. Prior Distribution for Scale Parameters.
  - Prior for $\sigma^2_u$: $p(\sigma^2_u \mid s^2_u, \nu_u) \propto (\sigma^2_u)^{-(\nu_u + 2)/2} \exp\left[-\frac{\nu_u s^2_u}{2\sigma^2_u}\right]$ (4)
  - Prior for $\sigma^2_e$: $p(\sigma^2_e \mid s^2_e, \nu_e) \propto (\sigma^2_e)^{-(\nu_e + 2)/2} \exp\left[-\frac{\nu_e s^2_e}{2\sigma^2_e}\right]$ (5)
  These are scaled inverted $\chi^2$ distributions, commonly used as priors for variance components in Bayesian analyses.
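Draws from the scaled inverted chi-square prior above can be generated from an ordinary chi-square: if $x \sim \chi^2_\nu$, then $\nu s^2 / x$ follows the scaled inverted $\chi^2(\nu, s^2)$ distribution. The sketch below is illustrative (the hyperparameter values $\nu$ and $s^2$ are not from the slides) and checks the draws against the known mean $\nu s^2/(\nu - 2)$ for $\nu > 2$.

```python
import numpy as np

# Sampling from a scaled inverted chi-square prior via an ordinary
# chi-square draw: nu * s2 / chi2(nu). Hyperparameters are illustrative.
rng = np.random.default_rng(1)
nu, s2 = 10.0, 0.5
draws = nu * s2 / rng.chisquare(nu, size=200_000)

# Theoretical mean of Scale-inv-chi2(nu, s2) is nu * s2 / (nu - 2).
print(draws.mean(), nu * s2 / (nu - 2))
```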
- 12. Joint Posterior Distribution: multiplication of the likelihood (1) and the priors (2) to (5).
  $p(B, u, \sigma^2_u, \sigma^2_e \mid y, s^2_u, \nu_u, s^2_e, \nu_e) \propto p(y \mid B, u, \sigma^2_e)\, p(B)\, p(u \mid \sigma^2_u)\, p(\sigma^2_u \mid s^2_u, \nu_u)\, p(\sigma^2_e \mid s^2_e, \nu_e)$
  $\propto (\sigma^2_u)^{-(q + \nu_u + 2)/2} \exp\left[-\frac{u'A^{-1}u + \nu_u s^2_u}{2\sigma^2_u}\right] (\sigma^2_e)^{-(n + \nu_e + 2)/2} \exp\left[-\frac{(y - XB - Zu)'(y - XB - Zu) + \nu_e s^2_e}{2\sigma^2_e}\right]$ (6)
- 13. Fully Conditional Distribution for Location Parameters. The fully conditional distribution of each parameter is obtained by regarding all other parameters in (6) as known.
  - B: $p(B \mid u, \sigma^2_u, \sigma^2_e, y) \propto \exp\left[-\frac{(y - XB - Zu)'(y - XB - Zu)}{2\sigma^2_e}\right]$, i.e.
    $B \mid u, \sigma^2_u, \sigma^2_e, y \sim N\left((X'X)^{-1}X'(y - Zu),\ (X'X)^{-1}\sigma^2_e\right)$ (7)
  - u: $u_i \mid B, u_{-i}, \sigma^2_u, \sigma^2_e, y \sim N\left(\tilde{u}_i,\ \left(Z_i'Z_i + A^{-1}_{ii}\frac{\sigma^2_e}{\sigma^2_u}\right)^{-1}\sigma^2_e\right)$ (8)
    where $\tilde{u}_i = \left(Z_i'Z_i + A^{-1}_{ii}\frac{\sigma^2_e}{\sigma^2_u}\right)^{-1} Z_i'\left(y - XB - \sum_{j=1, j \neq i}^{q} Z_j u_j\right)$
- 14. Fully Conditional Distribution for Scale Parameters. The fully conditional distribution of each parameter is obtained by regarding all other parameters in (6) as known.
  - $\sigma^2_e$: $p(\sigma^2_e \mid B, u, \sigma^2_u, y) \propto (\sigma^2_e)^{-(n + \nu_e + 2)/2} \exp\left[-\frac{(y - XB - Zu)'(y - XB - Zu) + \nu_e s^2_e}{2\sigma^2_e}\right]$ (9)
    with $\tilde{\nu}_e = n + \nu_e$ and $\tilde{s}^2_e = \left[(y - XB - Zu)'(y - XB - Zu) + \nu_e s^2_e\right]/\tilde{\nu}_e$
  - $\sigma^2_u$: $p(\sigma^2_u \mid B, u, \sigma^2_e, y) \propto (\sigma^2_u)^{-(q + \nu_u + 2)/2} \exp\left[-\frac{u'A^{-1}u + \nu_u s^2_u}{2\sigma^2_u}\right]$ (10)
    with $\tilde{\nu}_u = q + \nu_u$ and $\tilde{s}^2_u = \left[u'A^{-1}u + \nu_u s^2_u\right]/\tilde{\nu}_u$
- 15. Sampling. Consider the mixed model equations
  $\begin{bmatrix} X'X & X'Z \\ Z'X & Z'Z + A^{-1}\alpha \end{bmatrix} \begin{bmatrix} B \\ u \end{bmatrix} = \begin{bmatrix} X'y \\ Z'y \end{bmatrix}$, written compactly as $\text{LHS} \cdot C = \text{RHS}$.
  - Iteration:
    $C_i \mid \text{ELSE} \sim N\left(\frac{\text{RHS}[i] - \sum_{j \neq i} \text{LHS}[i,j]\, C_j}{\text{LHS}[i,i]},\ \frac{\sigma^2_e}{\text{LHS}[i,i]}\right)$
    $\sigma^2_e \mid \text{ELSE} \sim \left[(y - XB - Zu)'(y - XB - Zu) + \nu_e s^2_e\right] \chi^{-2}_{n + \nu_e}$
    $\sigma^2_u \mid \text{ELSE} \sim \left[u'A^{-1}u + \nu_u s^2_u\right] \chi^{-2}_{q + \nu_u}$
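The iteration scheme above can be sketched in code. This is a minimal illustrative implementation on a tiny invented data set, not an example from Mrode: the records, design matrices, and hyperparameter values are assumptions, and the relationship matrix A is taken as an identity (unrelated animals) for simplicity.

```python
import numpy as np

# Single-site Gibbs sampler on the mixed model equations LHS * C = RHS.
# Data and hyperparameters are illustrative; A = I assumes unrelated animals.
rng = np.random.default_rng(2)

y = np.array([4.5, 2.9, 3.9, 3.5, 5.0])
X = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.], [1., 0.]])
Z = np.eye(5)                      # one record per animal
Ainv = np.eye(5)                   # inverse relationship matrix (assumed I)
p, q, n = X.shape[1], Z.shape[1], len(y)

nu_e, s2_e = 4.0, 0.4              # prior hyperparameters (illustrative)
nu_u, s2_u = 4.0, 0.2
sig2_e, sig2_u = s2_e, s2_u        # starting values
C = np.zeros(p + q)                # current solutions [B; u]

W = np.hstack([X, Z])
rhs = W.T @ y
samples = []
for it in range(3000):
    # Rebuild LHS each round, since alpha = sig2_e / sig2_u changes.
    alpha = sig2_e / sig2_u
    lhs = W.T @ W
    lhs[p:, p:] += Ainv * alpha
    # Location parameters: draw C_i | ELSE one coefficient at a time.
    for i in range(p + q):
        off = lhs[i] @ C - lhs[i, i] * C[i]
        mean = (rhs[i] - off) / lhs[i, i]
        C[i] = rng.normal(mean, np.sqrt(sig2_e / lhs[i, i]))
    b, u = C[:p], C[p:]
    # Scale parameters: scaled inverted chi-square full conditionals.
    e = y - X @ b - Z @ u
    sig2_e = (e @ e + nu_e * s2_e) / rng.chisquare(n + nu_e)
    sig2_u = (u @ Ainv @ u + nu_u * s2_u) / rng.chisquare(q + nu_u)
    if it >= 1000:                 # discard burn-in
        samples.append([sig2_e, sig2_u])

post = np.mean(samples, axis=0)
print("posterior means (sig2_e, sig2_u):", post)
```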
- 16. Inferences from the Gibbs Sampling Output. Given samples $\sigma^{2(1)}_u, \sigma^{2(2)}_u, \ldots, \sigma^{2(k)}_u$:
  - Direct inference from the samples:
    posterior mean $= \frac{1}{k}\sum_{i=1}^{k} \sigma^{2(i)}_u$, posterior variance $= \frac{1}{k}\sum_{i=1}^{k} \left(\sigma^{2(i)}_u - \text{post mean}\right)^2$
  - Density estimation
  - Kernel density estimation
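These summaries can be computed directly from a stored chain. The sketch below uses a simulated stand-in chain (Gaussian noise around 0.2, not real sampler output) and adds a plain Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth as one concrete choice.

```python
import numpy as np

# Posterior summaries from a chain of variance-component draws.
# `chain` is a simulated stand-in for sampler output, for illustration.
rng = np.random.default_rng(3)
chain = 0.2 + 0.05 * rng.standard_normal(5000)

post_mean = chain.mean()
post_var = chain.var()

# Gaussian kernel density estimate on a grid; bandwidth h from
# Silverman's rule of thumb: h = 1.06 * sd * k^(-1/5).
grid = np.linspace(chain.min(), chain.max(), 200)
h = 1.06 * chain.std() * len(chain) ** (-1 / 5)
kernels = np.exp(-0.5 * ((grid[:, None] - chain[None, :]) / h) ** 2)
density = kernels.sum(axis=1) / (len(chain) * h * np.sqrt(2 * np.pi))

print(post_mean, post_var)
```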
- 17. Gibbs Sampling vs. REML and BLUP. In practice, we do not know the variance components. The REML → BLUP procedure:
  - does not take into account the uncertainty in estimating the variance components
  - errors in estimating the variance components are ignored when predicting breeding values
  - BLUP from the MME is then no longer true BLUP (it is empirical BLUP)
  Gibbs sampling, in contrast, is able to estimate the location parameters and scale parameters jointly.
- 18. Summary: Bayesian is great!