Perhaps one day we will figure out how to ask perfect survey questions. In the meantime, survey analyses are biased by random and correlated measurement errors, and evaluating the extent of such errors is therefore essential, both to remove the bias and to improve our question design.
When there is no gold standard, these errors are often estimated using multitrait-multimethod (MTMM) experiments or longitudinal data by applying linear or ordinal factor models, which assume that (latent) measurement is linear and that the only type of method bias is one that pushes the answers monotonically in a particular direction—that of acquiescence, for example. However, not all measurement is linear and not all method bias is monotone. Extreme response tendencies, for example, are nonmonotone, as are primacy and recency effects, which act on just one category. Like monotone method effects, these nonmonotone effects also lead to spurious dependencies among different survey questions, distorting their true relationships. Diagnosing, preventing, or correcting for such distortions therefore calls for a model that can account for them.
For this purpose I will discuss the latent class MTMM model (Oberski 2011). This model combines a latent loglinear modeling approach with the MTMM design, yielding detailed information about the measurement quality of survey questions while also accommodating nonmonotone method biases. I will discuss the method's assumptions and demonstrate it on a few often-used survey questions. Standard software for latent class analysis can be used to estimate this model, so that evaluating the extent of nonlinear random and correlated measurement errors is now a reasonably user-friendly experience for survey researchers.
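To give a flavor of the kind of estimation involved, the sketch below fits a plain (unrestricted) latent class model to simulated dichotomous survey items via EM. This is only a toy illustration of the latent class machinery that the MTMM model builds on, not the latent class MTMM model itself; all names and parameter values (`true_pi`, `true_p`, the number of items and classes) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate responses to 3 dichotomous survey items from 2 latent classes
# (illustrative values only).
n, n_items, n_classes = 500, 3, 2
true_pi = np.array([0.6, 0.4])          # latent class sizes
true_p = np.array([[0.9, 0.8, 0.7],     # P(positive answer | class 0)
                   [0.2, 0.3, 0.1]])    # P(positive answer | class 1)
z = rng.choice(n_classes, size=n, p=true_pi)
y = (rng.random((n, n_items)) < true_p[z]).astype(int)

# EM algorithm for the unrestricted latent class model
pi = np.full(n_classes, 1.0 / n_classes)
p = rng.uniform(0.3, 0.7, size=(n_classes, n_items))
for _ in range(200):
    p = np.clip(p, 1e-6, 1 - 1e-6)  # keep logs finite
    # E-step: posterior class membership given the observed responses
    log_lik = (y[:, None, :] * np.log(p)
               + (1 - y[:, None, :]) * np.log(1 - p)).sum(axis=2)
    post = pi * np.exp(log_lik)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update class sizes and conditional response probabilities
    pi = post.mean(axis=0)
    p = (post.T @ y) / post.sum(axis=0)[:, None]

print("estimated class sizes:", np.round(np.sort(pi), 2))
```

In practice one would use dedicated latent class software rather than hand-rolled EM, and the MTMM version additionally imposes a loglinear structure linking trait and method factors to the responses; the point here is only that the E- and M-steps are simple closed-form updates.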