Hedibert Lopes' talk at BigMC


Published on

"Parsimonious Bayesian factor analysis when
the number of factors is unknown", talk by Hedibert Lopes at the BigMC seminar, 9th June 2011

  1. Parsimonious Bayesian factor analysis when the number of factors is unknown

     Hedibert Freitas Lopes
     Booth School of Business, University of Chicago

     Séminaire BIG'MC: Méthodes de Monte Carlo en grande dimension
     Institut Henri Poincaré, March 11th 2011

     Joint work with Sylvia Frühwirth-Schnatter.
  2. Outline

     1 Example 1
     2 Normal factor model
         Variance structure
         Identification issues
         Number of parameters
         Ordering of the variables
     3 Example 2
         Reduced-rank loading matrix
     4 Structured loadings
     5 Parsimonious BFA
         New identification conditions
         Theorem
     6 Example 3
     7 Example 4: simulating large data
     8 Example 1: revisited
     9 Conclusion
  3. Example 1. Early origins of health

     Conti, Heckman, Lopes and Piatek (2011) Constructing economically
     justified aggregates: an application to the early origins of health.
     Journal of Econometrics (to appear).

     Here we focus on a subset of the British Cohort Study started in 1970:
     7 continuous cognitive tests (Picture Language Comprehension, Friendly
     Math, Reading, Matrices, Recall Digits, Similarities, Word Definition)
     and 10 continuous noncognitive measurements (Child Developmental Scale).

     A total of m = 17 measurements on T = 2397 10-year-old individuals.
  4. Correlation matrix (rounded)

      1  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0
      0  1  1  1  0  0  1  0  0  0  0  0  0  0  0  0  0
      0  1  1  1  0  1  1  0  0  0  0  0  0  0  0  0  0
      0  1  1  1  0  0  0  0  0  0  0  0  0  0  0  0  0
      0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0
      0  0  1  0  0  1  1  0  0  0  0  0  0  0  0  0  0
      1  1  1  0  0  1  1  0  0  0  0  0  0  0  0  0  0
      0  0  0  0  0  0  0  1  1  1  0  1  0  0  0  0  1
      0  0  0  0  0  0  0  1  1  1  0  0  0  0  0  0  0
      0  0  0  0  0  0  0  1  1  1  0  1  0  0  0  0  1
      0  0  0  0  0  0  0  0  0  0  1  1  0  1  0  0  0
      0  0  0  0  0  0  0  1  0  1  1  1  0  0  0  0  1
      0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0  0
      0  0  0  0  0  0  0  0  0  0  1  0  0  1  0  0  0
      0  0  0  0  0  0  0  0  0  0  0  0  0  0  1 -1  0
      0  0  0  0  0  0  0  0  0  0  0  0  0  0 -1  1  0
      0  0  0  0  0  0  0  1  0  1  0  1  0  0  0  0  1
  5. Normal factor model

     For any specified positive integer k ≤ m, the standard k-factor model
     relates each yt to an underlying k-vector of random variables ft, the
     common factors, via

        yt | ft ~ N(β ft, Σ)

     where

        ft ~ N(0, Ik)  and  Σ = diag(σ1², ..., σm²).

     Unconditional covariance matrix:

        V(yt | β, Σ) = Ω = ββ′ + Σ
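As a sanity check on the model just stated, the following minimal numpy sketch (toy dimensions m, k, T of my choosing, not the talk's data) simulates yt | ft ~ N(β ft, Σ) and confirms that the sample covariance of the simulated data approaches Ω = ββ′ + Σ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions for illustration.
m, k, T = 6, 2, 50_000

# A fixed loading matrix beta (m x k) and idiosyncratic variances.
beta = rng.normal(size=(m, k))
sigma2 = rng.uniform(0.2, 1.0, size=m)

# Generate y_t | f_t ~ N(beta f_t, Sigma) with f_t ~ N(0, I_k).
f = rng.normal(size=(T, k))                      # common factors
eps = rng.normal(size=(T, m)) * np.sqrt(sigma2)  # idiosyncratic errors
y = f @ beta.T + eps

# The implied unconditional covariance is Omega = beta beta' + Sigma.
Omega = beta @ beta.T + np.diag(sigma2)
S = np.cov(y, rowvar=False)
print(np.abs(S - Omega).max())  # small for large T
```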
  6. Variance structure

     Conditional:     var(yit | f) = σi²,                      cov(yit, yjt | f) = 0
     Unconditional:   var(yit) = Σ_{l=1}^k βil² + σi²,         cov(yit, yjt) = Σ_{l=1}^k βil βjl

     Common factors explain all the dependence structure among the m variables.
  7. Identification issues

     A rather trivial non-identifiability problem is sign-switching.

     A more serious problem is factor rotation: invariance under any
     transformation of the form

        β* = βP′  and  ft* = P ft,

     where P is any orthogonal k × k matrix.
  8. Solutions

     1. β′Σ⁻¹β = I.
        Pretty hard to impose!

     2. β is block lower triangular (Geweke and Zhou, 1996; Lopes and West, 2004):

            β = [ β11      0       0      ···     0
                  β21     β22      0      ···     0
                  β31     β32     β33     ···     0
                   ⋮        ⋮        ⋮       ⋱      ⋮
                  βk1     βk2     βk3     ···    βkk
                  βk+1,1  βk+1,2  βk+1,3  ···  βk+1,k
                   ⋮        ⋮        ⋮       ⋱      ⋮
                  βm1     βm2     βm3     ···    βmk ]

        Somewhat restrictive, but useful for estimation.
  9. Number of parameters

     The resulting factor form of Ω has

        m(k + 1) − k(k − 1)/2

     parameters, compared with the total

        m(m + 1)/2

     in an unconstrained (or k = m) model, leading to the constraint that

        k ≤ m + 3/2 − sqrt(2m + 9/4).

     For example,
     • m = 6 implies k ≤ 3,
     • m = 7 implies k ≤ 4,
     • m = 10 implies k ≤ 6,
     • m = 17 implies k ≤ 12.
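The constraint above is easy to check numerically; this small helper (an illustration of mine, not code from the talk) implements the slide's bound k ≤ m + 3/2 − sqrt(2m + 9/4) and reproduces the four examples listed:

```python
import math

def max_factors(m: int) -> int:
    """Largest integer k allowed by the slide's bound
    k <= m + 3/2 - sqrt(2m + 9/4)."""
    return math.floor(m + 1.5 - math.sqrt(2 * m + 2.25))

# Reproduce the examples from the slide.
for m in (6, 7, 10, 17):
    print(m, max_factors(m))  # prints 6 3, 7 4, 10 6, 17 12
```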
  10. Ordering of the variables

      Alternative orderings are trivially produced via

         yt* = A yt

      for some switching matrix A.

      The new rotation has the same latent factors but transformed loadings
      matrix Aβ:

         yt* = Aβ ft + εt = β* ft + εt

      Problem: β* is not necessarily block lower triangular.
  11. Solution

      We can always find an orthonormal matrix P such that

         β̃ = β*P = AβP

      is block lower triangular and the common factors

         f̃t = P′ ft

      are still N(0, Ik) (Lopes and West, 2004).

      The order of the variables in yt is immaterial when k is properly
      chosen, i.e. when β is full rank.
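One concrete way to realize such a P (a sketch of mine using a QR decomposition; the slide does not prescribe a particular construction) is to factorize β*′ = QR with Q orthonormal and R upper triangular, so that β*Q = R′ is lower triangular and P = Q:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 6, 3

# A full-rank loading matrix whose rows have been reordered (beta* = A beta),
# so it is no longer block lower triangular.
beta_star = rng.normal(size=(m, k))

# QR of beta*' gives beta*' = Q R with Q orthonormal (k x k) and R upper
# triangular, hence beta* Q = R' is lower triangular: take P = Q.
Q, R = np.linalg.qr(beta_star.T)
beta_tilde = beta_star @ Q   # = R', block lower triangular
# f~_t = Q' f_t is still N(0, I_k) since Q is orthonormal.

# Loadings above the diagonal are (numerically) zero ...
print(np.triu(beta_tilde, 1))
# ... and the implied covariance beta beta' is unchanged.
print(np.allclose(beta_tilde @ beta_tilde.T, beta_star @ beta_star.T))
```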
  12. Example 2. Exchange rate data

      • Monthly international exchange rates.
      • The data span the period from 1/1975 to 12/1986 inclusive.
      • Time series are the exchange rates in British pounds of
        • US dollar (US)
        • Canadian dollar (CAN)
        • Japanese yen (JAP)
        • French franc (FRA)
        • Italian lira (ITA)
        • (West) German (Deutsch)mark (GER)
      • Example taken from Lopes and West (2004)
  13. Posterior inference of (β, Σ)

      1st ordering

                   E(β|y)        E(Σ|y) = diag(·)
         US     0.99   0.00          0.05
         CAN    0.95   0.05          0.13
         JAP    0.46   0.42          0.62
         FRA    0.39   0.91          0.04
         ITA    0.41   0.77          0.25
         GER    0.40   0.77          0.28

      2nd ordering

                   E(β|y)        E(Σ|y) = diag(·)
         US     0.98   0.00          0.06
         JAP    0.45   0.42          0.62
         CAN    0.95   0.03          0.12
         FRA    0.39   0.91          0.04
         ITA    0.41   0.77          0.25
         GER    0.40   0.77          0.26
  14. Posterior inference of ft

      [Figure: posterior inference of the common factors ft.]
  15. Reduced-rank loading matrix

      Geweke and Singleton (1980) show that, if β has rank r < k, then there
      exists a matrix Q such that

         βQ = 0  and  Q′Q = I

      and, for any orthogonal matrix M,

         ββ′ + Σ = (β + MQ′)(β + MQ′)′ + (Σ − MM′).

      This translation invariance of Ω under the factor model implies lack of
      identification and, in application, induces symmetries and potential
      multimodalities in resulting likelihood functions.

      This issue relates intimately to the question of uncertainty about the
      number of factors.
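The Geweke–Singleton identity can be verified directly. The sketch below (toy sizes of my choosing) builds a rank-deficient β, takes Q as an orthonormal basis of its null space via the SVD, and checks ββ′ + Σ = (β + MQ′)(β + MQ′)′ + (Σ − MM′):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, r = 8, 3, 2

# A loading matrix of rank r < k: its last column is a combination of the others.
B = rng.normal(size=(m, r))
beta = np.column_stack([B, B @ rng.normal(size=r)])   # m x k, rank r

# Q: orthonormal basis of the null space of beta (beta Q = 0, Q'Q = I),
# taken from the right singular vectors with (numerically) zero singular values.
_, s, Vt = np.linalg.svd(beta)
Q = Vt[np.sum(s > 1e-10):].T                          # k x (k - r)

M = 0.1 * rng.normal(size=(m, k - r))                 # any conformable M (scaled small)
Sigma = np.diag(rng.uniform(0.5, 1.0, size=m))

lhs = beta @ beta.T + Sigma
rhs = (beta + M @ Q.T) @ (beta + M @ Q.T).T + (Sigma - M @ M.T)
print(np.allclose(lhs, rhs))  # prints True
```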
  16. Example 2. Multimodality

      [Figure: multimodality in the exchange rate example.]
  17. Structured loadings

      Proposed by Kaiser (1958), the varimax method of orthogonal rotation
      aims at providing axes with as few large loadings and as many near-zero
      loadings as possible.

      They are also known as dedicated factors.

      Notice that the method can be applied to classical or Bayesian
      estimates in order to obtain more interpretable factors.
  18. Modern structured factor analysis

      • Time-varying factor loadings: dynamic correlations
        Lopes and Carvalho (2007)
      • Spatial dynamic factor analysis
        Lopes, Salazar and Gamerman (2008)
      • Spatial hierarchical factors: ranking vulnerability
        Lopes, Schmidt, Salazar, Gomez and Achkar (2010)
      • Sparse factor models
        Frühwirth-Schnatter and Lopes (2010)
  19. Example 1: ML estimates

        i   Measurement          β̂i1     β̂i2    σ̂i²
        1   Picture Language     0.32    0.46   0.68
        2   Friendly Math        0.57    0.59   0.34
        3   Reading              0.56    0.62   0.30
        4   Matrices             0.41    0.51   0.57
        5   Recall digits        0.31    0.29   0.82
        6   Similarities         0.39    0.53   0.57
        7   Word definition      0.43    0.55   0.52
        8   Child Dev 2         -0.67    0.18   0.52
        9   Child Dev 17        -0.69   -0.11   0.51
       10   Child Dev 19        -0.74    0.32   0.35
       11   Child Dev 20        -0.45    0.33   0.69
       12   Child Dev 23        -0.75    0.42   0.26
       13   Child Dev 30        -0.52    0.28   0.66
       14   Child Dev 31        -0.45    0.25   0.73
       15   Child Dev 33        -0.42    0.31   0.73
       16   Child Dev 52         0.41   -0.28   0.76
       17   Child Dev 53        -0.69    0.41   0.36
  20. Example 1: varimax rotation

        i   Measurement          β̂i1     β̂i2    σ̂i²
        1   Picture Language     0.00    0.56   0.68
        2   Friendly Math        0.12    0.81   0.34
        3   Reading              0.10    0.83   0.30
        4   Matrices             0.04    0.66   0.57
        5   Recall digits        0.09    0.41   0.82
        6   Similarities         0.01    0.66   0.57
        7   Word definition      0.03    0.69   0.52
        8   Child Dev 2         -0.65   -0.25   0.52
        9   Child Dev 17        -0.50   -0.49   0.51
       10   Child Dev 19        -0.79   -0.17   0.35
       11   Child Dev 20        -0.56    0.01   0.69
       12   Child Dev 23        -0.85   -0.09   0.26
       13   Child Dev 30        -0.58   -0.07   0.66
       14   Child Dev 31        -0.51   -0.05   0.73
       15   Child Dev 33        -0.52    0.02   0.73
       16   Child Dev 52         0.49    0.01   0.76
       17   Child Dev 53        -0.80   -0.06   0.36
  21. Example 1: Parsimonious BFA

        i   Measurement          β̃i1     β̃i2    σ̃i²
        1   Picture Language     0       0.57   0.68
        2   Friendly Math        0       0.81   0.34
        3   Reading              0       0.83   0.31
        4   Matrices             0       0.66   0.56
        5   Recall digits        0       0.42   0.83
        6   Similarities         0       0.66   0.56
        7   Word definition      0       0.70   0.51
        8   Child Dev 2         -0.68    0      0.54
        9   Child Dev 17        -0.56    0      0.68
       10   Child Dev 19        -0.81    0      0.35
       11   Child Dev 20        -0.55    0      0.70
       12   Child Dev 23        -0.86    0      0.27
       13   Child Dev 30        -0.59    0      0.66
       14   Child Dev 31        -0.51    0      0.74
       15   Child Dev 33        -0.51    0      0.74
       16   Child Dev 52         0.49    0      0.76
       17   Child Dev 53        -0.80    0      0.36
  22. Parsimonious BFA

      We introduce a more general set of identifiability conditions for the
      basic model

         yt = Λft + εt,   εt ~ Nm(0, Σ0),

      which handles the ordering problem in a more flexible way:

      C1. Λ has full column-rank, i.e. r = rank(Λ).
      C2. Λ is a generalized lower triangular matrix, i.e. l1 < ... < lr,
          where lj denotes for j = 1, ..., r the row index of the top
          non-zero entry in the jth column of Λ, i.e. Λlj,j ≠ 0 and
          Λij = 0 for all i < lj.
      C3. Λ does not contain any column j where Λlj,j is the only non-zero
          element in column j.
  23. The regression-type representation

      Assume that the data y = {y1, ..., yT} were generated by the previous
      model and that the number of factors r, as well as Λ and Σ0, should be
      estimated.

      The usual procedure is to fit a model with k factors,

         yt = βft + εt,   εt ~ Nm(0, Σ),

      where β is an m × k coefficient matrix with elements βij and Σ is a
      diagonal matrix.
  24. Theorem

      Theorem. Assume that the data were generated by a basic factor model
      obeying conditions C1−C3 and that a regression-type representation with
      k ≥ r potential factors is fitted. Assume that the following condition
      holds for β:

      B1. The row indices of the top non-zero entry in each non-zero column
          of β are different.

      Then (r, Λ, Σ0) are related in the following way to (β, Σ):

      (a) r columns of β are identical to the r columns of Λ.
      (b) If rank(β) = r, then the remaining k − r columns of β are zero
          columns. The number of factors is equal to r and Σ0 = Σ.
      (c) If rank(β) > r, then only k − rank(β) of the remaining k − r
          columns are zero columns, while s = rank(β) − r columns with column
          indices j1, ..., js differ from a zero column for a single element
          lying in s different rows r1, ..., rs. The number of factors is
          equal to r = rank(β) − s, while Σ0 = Σ + D, where D is an m × m
          diagonal matrix of rank s with non-zero diagonal elements
          D_rl,rl = (β_rl,jl)² for l = 1, ..., s.
  25. Indicator matrix

      Since the identification of r and Λ from β by Theorem 1 relies on
      identifying zero and non-zero elements in β, we follow previous work on
      parsimonious factor modeling and consider the selection of the elements
      of β as a variable selection problem.

      We introduce for each element βij a binary indicator δij and define βij
      to be 0 if δij = 0, and leave βij unconstrained otherwise.

      In this way, we obtain an indicator matrix δ of the same dimension as β.
  26. Number of factors

      Theorem 1 allows the identification of the true number of factors r
      directly from the indicator matrix δ:

         r = Σ_{j=1}^k I{ Σ_{i=1}^m δij > 1 },

      where I{·} is the indicator function, by taking spurious factors into
      account.

      This expression is invariant to permuting the columns of δ, which is
      helpful for MCMC-based inference with respect to r.

      Our approach provides a principled way for inference on r, as opposed
      to previous work based on rather heuristic procedures to infer this
      quantity (Carvalho et al., 2008; Bhattacharya and Dunson, 2009).
  27. Model size

      The model size d is defined as the number of non-zero elements in Λ,
      i.e.

         d = Σ_{j=1}^k dj I{dj > 1},   dj = Σ_{i=1}^m δij.

      Model size could be estimated by the posterior mode d̃ or the posterior
      mean E(d|y) of p(d|y).
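Both the number-of-factors formula and the model-size formula reduce to column sums of δ. A minimal implementation (mine, for illustration, with a toy indicator matrix):

```python
import numpy as np

def number_of_factors(delta: np.ndarray) -> int:
    """r = sum_j 1{ sum_i delta_ij > 1 }: columns with a single non-zero
    element are spurious and do not count as factors."""
    col_sums = delta.sum(axis=0)
    return int(np.sum(col_sums > 1))

def model_size(delta: np.ndarray) -> int:
    """d = sum_j d_j 1{ d_j > 1 } with d_j = sum_i delta_ij."""
    col_sums = delta.sum(axis=0)
    return int(col_sums[col_sums > 1].sum())

# Toy indicator matrix (m = 5, k = 3): column 2 is spurious (one entry).
delta = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [0, 0, 1]])
print(number_of_factors(delta), model_size(delta))  # prints 2 5
```

Note that both quantities depend only on column sums, so they are invariant to permuting the columns of δ, as the slide emphasizes.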
  28. The Prior of the Indicator Matrix

      To define p(δ), we use a hierarchical prior which allows different
      occurrence probabilities τ = (τ1, ..., τk) of non-zero elements in the
      different columns of β and assume that all indicators are independent
      a priori given τ:

         Pr(δij = 1 | τj) = τj,   τj ~ B(a0, b0).

      A priori, r may be represented as r = Σ_{j=1}^k I{Xj > 1}, where
      X1, ..., Xk are independent random variables and Xj follows a
      Beta-binomial distribution with parameters Nj = m − j + 1, a0 and b0.

      We recommend tuning the hyperparameters a0 and b0, for a given data set
      with known values m and k, in such a way that p(r | a0, b0, m, k) is in
      accordance with prior expectations concerning the number of factors.
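The implied prior p(r | a0, b0, m, k) is easy to explore by Monte Carlo, which is one way to carry out the recommended tuning of a0 and b0. A sketch (hypothetical hyperparameter values of my choosing, not the talk's):

```python
import numpy as np

def prior_r_probs(m, k, a0, b0, draws=100_000, seed=3):
    """Monte Carlo approximation of p(r | a0, b0, m, k):
    X_j ~ Beta-binomial(N_j = m - j + 1, a0, b0) independently,
    r = sum_j 1{X_j > 1}.
    Sampled hierarchically: tau_j ~ Beta(a0, b0), X_j | tau_j ~ Binomial."""
    rng = np.random.default_rng(seed)
    N = m - np.arange(1, k + 1) + 1          # N_j = m - j + 1
    tau = rng.beta(a0, b0, size=(draws, k))
    X = rng.binomial(N, tau)                 # broadcasts to (draws, k)
    r = (X > 1).sum(axis=1)
    return np.bincount(r, minlength=k + 1) / draws

# Illustrative call: m = 17 as in Example 1, with hypothetical k, a0, b0.
probs = prior_r_probs(m=17, k=5, a0=1, b0=1)
print(probs)   # prior probabilities of r = 0, 1, ..., k
```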
  29. The Prior on the Idiosyncratic Variances

      Heywood problems typically occur if the constraint

         1/σi² ≥ (Ω⁻¹)ii  ⇔  σi² ≤ 1/(Ω⁻¹)ii

      is violated, where Ω is the covariance matrix of yt.

      Improper priors on the idiosyncratic variances, such as

         p(σi²) ∝ 1/σi²,

      are not able to prevent Heywood problems.
  30. We assume instead a proper inverted Gamma prior

         σi² ~ G⁻¹(c0, Ci0)

      for each of the idiosyncratic variances σi².

      c0 is large enough to bound the prior away from 0, typically c0 = 2.5.

      Ci0 is chosen by controlling the probability of a Heywood problem,

         Pr(X ≤ Ci0 (Ω⁻¹)ii),

      where X ~ G(c0, 1). So Ci0 is taken as the largest value for which

         Ci0/(c0 − 1) ≤ 1/(S_y⁻¹)ii,

      i.e.

         σi² ~ G⁻¹(c0, (c0 − 1)/(S_y⁻¹)ii).
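The resulting prior scales Ci0 = (c0 − 1)/(S_y⁻¹)ii can be computed directly from a sample covariance matrix. A minimal sketch (the toy S_y below is mine, for illustration):

```python
import numpy as np

def ig_prior_scales(S_y: np.ndarray, c0: float = 2.5) -> np.ndarray:
    """C_i0 = (c0 - 1) / (S_y^{-1})_ii: the largest scale with
    C_i0 / (c0 - 1) <= 1 / (S_y^{-1})_ii, so that the prior mean of
    sigma_i^2 ~ G^{-1}(c0, C_i0) respects the Heywood bound."""
    precision_diag = np.diag(np.linalg.inv(S_y))
    return (c0 - 1.0) / precision_diag

# Toy sample covariance matrix for illustration.
S_y = np.array([[1.0, 0.3],
                [0.3, 1.0]])
print(ig_prior_scales(S_y))
```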
  31. The Prior on the Factor Loadings

      We assume that the rows of the coefficient matrix β are independent
      a priori given the factors f1, ..., fT.

      Let βδ_i· be the vector of unconstrained elements in the ith row of β
      corresponding to δ. For each i = 1, ..., m, we assume that

         βδ_i· | σi² ~ N(bδ_i0, Bδ_i0 σi²).

      The prior moments are either chosen as in Lopes and West (2004) or as
      in Ghosh and Dunson (2009), who considered a "unit scale" prior where
      bδ_i0 = 0 and Bδ_i0 = I.
  32. Fractional prior

      We use a fractional prior (O'Hagan, 1995), which was applied by several
      authors for variable selection in latent variable models (Smith & Kohn,
      2002; Frühwirth-Schnatter & Tüchler, 2008; Tüchler, 2008).

      The fractional prior can be interpreted as the posterior of a
      non-informative prior and a fraction b of the data.

      We consider a conditionally fractional prior for the "regression model"

         ỹi = Xδ_i βδ_i· + ε̃i,

      where ỹi = (yi1 ··· yiT)′ and ε̃i = (εi1 ··· εiT)′. Xδ_i is a regressor
      matrix for βδ_i· constructed from the latent factor matrix
      F = (f1 ··· fT)′.
  33. Using

         p(βδ_i· | σi²) ∝ p(ỹi | βδ_i·, σi²)^b,

      we obtain from the regression model:

         βδ_i· | σi² ~ N(biT, BiT σi²/b),

      where biT and BiT are the posterior moments under a non-informative
      prior:

         BiT = ((Xδ_i)′ Xδ_i)⁻¹,   biT = BiT (Xδ_i)′ ỹi.

      It is not entirely clear how to choose the fraction b for a factor
      model. If the regressors f1, ..., fT were observed, then we would deal
      with m independent regression models, for each of which T observations
      are available, and the choice b = 1/T would be appropriate.
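Since biT and BiT are the ordinary least-squares quantities, the fractional prior's moments can be sketched in a few lines (a hypothetical X and yi stand in for the factor-based regressors and the ith measurement; b = 1/T as in the observed-regressor case just discussed):

```python
import numpy as np

rng = np.random.default_rng(4)
T, q = 200, 2   # T observations, q unconstrained loadings in row i (toy sizes)

# Hypothetical regressor matrix X (the columns of F selected by delta)
# and response y_i (the i-th measurement across time).
X = rng.normal(size=(T, q))
y_i = X @ np.array([0.8, -0.5]) + 0.3 * rng.normal(size=T)

# Posterior moments under a non-informative prior:
#   B_iT = (X'X)^{-1},  b_iT = B_iT X' y_i  (the OLS quantities).
B_iT = np.linalg.inv(X.T @ X)
b_iT = B_iT @ X.T @ y_i

# Fractional prior: beta_i | sigma_i^2 ~ N(b_iT, B_iT sigma_i^2 / b),
# here with b = 1/T, i.e. prior covariance T * B_iT * sigma_i^2.
b = 1.0 / T
prior_cov_scale = B_iT / b
print(b_iT)
```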
  34. The factors, however, are latent and are estimated together with the
      other parameters. This ties the m regression models together.

      If we consider the multivariate regression model as a whole, then the
      total number N = mT of observations has to be taken into account, which
      motivates choosing bN = 1/(Tm).

      In cases where the number of regressors d is of the same magnitude as
      the number of observations, Ley & Steel (2009) recommend choosing
      instead the risk inflation criterion bR = 1/d² suggested by Foster &
      George (1994), because bN implies a fairly small penalty for model size
      and may lead to overfitting models.

      In the present context this implies choosing bR = 1/d(k, m)², where
      d(k, m) = km − k(k − 1)/2 is the number of free elements in the
      coefficient matrix β.
  35. Most visited configuration

      Let l = (l1, ..., l_rM) be the most visited configuration.

      We may then estimate for each indicator the marginal inclusion
      probability

         Pr(δΛij = 1 | y, l)

      under l as the average over the elements of the draws of δΛ.

      Note that Pr(δΛ_lj,j = 1 | y, l) = 1 for j = 1, ..., rM.

      Following Scott and Berger (2006), we determine the median probability
      model (MPM) by setting the indicators δij in δΛM to 1 iff
      Pr(δΛij = 1 | y, l) ≥ 0.5.
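The MPM rule is a simple threshold on the marginal inclusion probabilities; a minimal sketch (the toy probability matrix is mine, for illustration):

```python
import numpy as np

def median_probability_model(incl_probs: np.ndarray) -> np.ndarray:
    """Median probability model: set delta_ij = 1 iff the marginal
    inclusion probability Pr(delta_ij = 1 | y, l) is >= 0.5."""
    return (incl_probs >= 0.5).astype(int)

# Toy inclusion probabilities for m = 4, k = 2; the top elements of the
# configuration l have probability exactly 1 by construction.
p = np.array([[1.00, 0.10],
              [0.70, 1.00],
              [0.45, 0.80],
              [0.55, 0.49]])
print(median_probability_model(p))
```

Entries near 0.5 (such as 0.45, 0.49 and 0.55 above) are exactly the ones that can make the MPM differ from the HPM, as the next slide notes.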
36. The number r_M of non-zero top elements in the identifiability constraint l is a third estimator of the number of factors, while the number d_M of non-zero elements in the MPM is yet another estimator of model size.

A discrepancy between the various estimators of the number of factors r is often a sign of a weakly informative likelihood, and it might help to use a more informative prior for p(r) by choosing the hyperparameters a_0 and b_0 accordingly.

Also the structure of the indicator matrices δ^Λ_H and δ^Λ_M corresponding, respectively, to the HPM and the MPM may differ, in particular if the frequency p_H is small and some of the inclusion probabilities Pr(δ^Λ_ij = 1 | y, l) are close to 0.5.
37. MCMC

Highly efficient MCMC scheme:

(a) Sample from p(δ | f_1, ..., f_T, τ, y):
    (a-1) Try to turn all non-zero columns of δ into zero columns.
    (a-2) Update the indicators jointly for each of the remaining non-zero columns j of δ:
          (a-2-1) try to move the top non-zero element l_j;
          (a-2-2) update the indicators in the rows l_j + 1, ..., m.
    (a-3) Try to turn all (or at least some) zero columns of δ into a non-zero column.
(b) Sample from p(β, σ²_1, ..., σ²_m | δ, f_1, ..., f_T, y).
(c) Sample from p(f_1, ..., f_T | β, σ²_1, ..., σ²_m, y).
(d) Perform an acceleration step (expanded factor model).
(e) For each j = 1, ..., k, perform a random sign switch: substitute the draws of {f_jt}_{t=1}^T and {β_ij}_{i=j}^m with probability 0.5 by {−f_jt}_{t=1}^T and {−β_ij}_{i=j}^m, otherwise leave these draws unchanged.
(f) Sample τ_j for j = 1, ..., k from τ_j | δ ∼ B(a_0 + d_j, b_0 + p_j), where p_j is the number of free elements and d_j = Σ_{i=1}^m δ_ij is the number of non-zero elements in column j.

To generate sensible values for the latent factors in the initial model specification, we found it useful to run the first few steps without variable selection.
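Step (e) can be sketched as follows (a schematic NumPy version, not the authors' code). The point of the move is that flipping the signs of column j of β together with row j of the factors leaves the product βf, and hence the likelihood, unchanged, while letting the sampler visit both sign-reflected modes of the posterior:

```python
import numpy as np

def random_sign_switch(beta, f, rng):
    """For each factor j, flip the signs of column j of beta (m x k) and of
    row j of the factor draws f (k x T) with probability 0.5."""
    beta, f = beta.copy(), f.copy()
    for j in range(beta.shape[1]):
        if rng.random() < 0.5:
            beta[:, j] *= -1.0
            f[j, :] *= -1.0
    return beta, f

rng = np.random.default_rng(1)
beta = rng.standard_normal((5, 2))   # toy loadings, m = 5, k = 2
f = rng.standard_normal((2, 10))     # toy factor draws, T = 10
beta2, f2 = random_sign_switch(beta, f, rng)
```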
38. Adding a new factor

    X 0 0          X 0 0 0
    X 0 0          X 0 0 0
    X 0 X          X 0 X 0
    X X X   =⇒     X X X 0
    X X X          X X X X
    X X X          X X X 0
    X X X          X X X X
39. Removing a factor

    X 0 0 0        X 0 0 0        X 0 0
    X 0 0 0        X 0 0 0        X 0 0
    X 0 X 0        X 0 X 0        X 0 X
    X X X 0   ⇒    X X X 0   ⇒    X X X
    X X X X        X X X X        X X X
    X X X 0        X X X 0        X X X
    X X X X        X X X 0        X X X
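The removal move can be sketched as a two-step edit of the indicator matrix (the helper name is hypothetical): zero out the chosen column of δ, then delete the resulting zero column.

```python
import numpy as np

def remove_factor(delta, j):
    """Removing-a-factor move: zero out column j of the indicator matrix
    delta, then delete the resulting zero column."""
    delta = delta.copy()
    delta[:, j] = 0
    return np.delete(delta, j, axis=1)

# The 7 x 4 pattern on the left of the slide (1 = X, 0 = 0):
delta = np.array([[1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1]])
smaller = remove_factor(delta, 3)   # drop the fourth factor column
```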
40. Example 3: Maxwell's Data

Scores on 10 tests for a sample of T = 148 children attending a psychiatric clinic as well as a sample of T = 810 normal children.

The first five tests are cognitive tests: (1) verbal ability, (2) spatial ability, (3) reasoning, (4) numerical ability and (5) verbal fluency. The remaining tests are inventories for assessing orectic tendencies, namely (6) neurotic questionnaire, (7) ways to be different, (8) worries and anxieties, (9) interests and (10) annoyances.

While psychological theory suggests that a 2-factor model is sufficient to account for the variation between the test scores, the significance test considered in Maxwell (1961) suggested fitting a 3-factor model to the first and a 4-factor model to the second data set.
41. Table: Maxwell's Children Data – neurotic children; posterior distribution p(r|y) of the number r of factors (bold number corresponding to the posterior mode r̃) and highest posterior identifiability constraint l with corresponding posterior probability p(l|y) for various priors; number of visited models Nv; frequency pH, number of factors rH, and model size dH of the HPM; number of factors rM and model size dM of the MPM corresponding to l; inefficiency factor τd of the posterior draws of the model size d.

                            p(r|y)
    Prior            2      3      4      5-6    l          p(l|y)
    b = 10⁻³         0.755  0.231  0.014  0      (1,6)      0.532
    bN = 6.8·10⁻⁴    0.828  0.160  0.006  0      (1,6)      0.623
    bR = 4.9·10⁻⁴    0.871  0.127  0.001  0      (1,6)      0.589
    b = 10⁻⁴         0.897  0.098  0.005  0      (1,6)      0.802
    GD               0.269  0.482  0.246  0.003  (1,2,3)    0.174
    LW               0.027  0.199  0.752  0.023  (1,2,3,6)  0.249

    Prior            Nv     pH     rH   dH   rM   dM   τd
    b = 10⁻³         1472   0.20   2    12   2    12   30.9
    bN = 6.8·10⁻⁴    976    0.27   2    12   2    12   27.5
    bR = 4.9·10⁻⁴    768    0.34   2    12   2    12   22.6
    b = 10⁻⁴         421    0.45   2    12   2    12   18.1
    GD               4694   0.06   2    15   3    19   40.6
    LW               7253   0.01   4    24   4    24   32.5
42. Table: Maxwell's Children Data – normal children; posterior distribution p(r|y) of the number r of factors (bold number corresponding to the posterior mode r̃) and highest posterior identifiability constraint l with corresponding posterior probability p(l|y) for various priors; number of visited models Nv; frequency pH, number of factors rH, and model size dH of the HPM; number of factors rM and model size dM of the MPM corresponding to l; inefficiency factor τd of the posterior draws of the model size d.

                            p(r|y)
    Prior            3   4      5      6      l            p(l|y)
    b = 10⁻³         0   0.391  0.604  0.005  (1,2,4,5,6)  0.254
    bN = 1.2·10⁻⁴    0   0.884  0.116  0      (1,2,4,5)    0.366
    b = 10⁻⁴         0   0.891  0.104  0.005  (1,2,4,5)    0.484
    GD               0   0.396  0.594  0      (1,2,4,5,6)  0.229
    LW               0   0.262  0.727  0.011  (1,2,4,5,6)  0.259

    Prior            Nv     pH     rH   dH   rM   dM   τd
    b = 10⁻³         4045   1.79   5    26   5    27   30.1
    bN = 1.2·10⁻⁴    1272   11.41  4    23   4    23   28.5
    b = 10⁻⁴         1296   12.17  4    23   4    23   29.1
    GD               4568   1.46   5    29   5    28   32.7
    LW               5387   1.56   5    28   5    28   30.7
43. Example 4: simulating large data

[Figure: heatmap of the true loading structure; rows: item1-item130, columns: factors f1-f10; color scale 0.0-1.0.]

Continuous (100), binary (30), 2K observations and 10 factors.
44. Example 4: Posterior probabilities

[Figure: heatmap of posterior inclusion probabilities; rows: item1-item130, columns: factors f1-f20; color scale 0.0-1.0.]
45. Example 4: Estimated loadings

[Figure: heatmap of estimated loadings; rows: item1-item130, columns: factors f1-f20; color scale 0.0-1.1.]
46. Example 1: revisited

Outcome system:
• Education: A-level or above (i.e., Diploma or Degree)
• Health (age 30): self-reported poor health, obesity, daily smoking
• Log hourly wage (age 30)

Measurement system (age 10):
• Cognitive tests: 7 scales
• Personality traits: 5 sub-scales
=⇒ see next slide

Control variables: mother's age and education at birth, father's high social class at birth, total gross family income and number of siblings at age 10, broken family, number of previous livebirths.

Exclusions in choice equation: deviation of the gender-specific county-level unemployment rate and of gross weekly wages from their long-run averages, for the years 1987-88.
47. Measurement system: 126 items (age 10)

Cognitive tests (continuous scales):
• Picture Language Comprehension Test
• Friendly Math Test
• Shortened Edinburgh Reading Test
• British Ability Scales (BAS): BAS - Matrices, BAS - Recall Digits, BAS - Similarities, BAS - Word Definition

Personality trait scales:
• Rutter Parental 'A' Scale of Behavioral Disorder: 19 continuous items
• Conners Hyperactivity Scale: 19 continuous items
• Child Developmental Scale [CDS]: 53 continuous items
• Locus of Control Scale [LoC]: 16 binary items
• Self-Esteem Scale [S-E]: 12 binary items
48. Model ingredients

[Figure: slide content not recoverable from the extraction.]
52. Conclusion

• Summary
    • New take on factor model identifiability
    • Generalized triangular parametrization
    • Customized MCMC scheme
    • Highly efficient model search strategy
    • Posterior distribution for the number of common factors

• Extensions
    • Nonlinear (discrete choice) factor models
    • Non-normal (mixture of) factor models
    • Dynamic factor models
53. Thank you!