Avoiding A Grand Failure



This is a presentation on situational judgment tests (SJTs), created for a discussion on selection planning.

  1. SJTs: Avoiding a Grand Failure
     David DeGeest
     06J:278 Staffing
     April 13, 2010
  2. Why are people so interested?
     The big picture of the SJT literature
  3. The claims about SJTs
     • "They seem to represent psychometric alchemy" (Landy, 2007).
     • Adverse impact is down, validity is up.
     • Assessees like them.
     • They seem to address relevant KSAOs.
     • They assess soft skills and tacit knowledge.
     • They provide incremental validity above GMA and personality for predicting college GPA (Oswald et al., 2004).
     • Some SJTs have demonstrated criterion-related validities as high as r = .36 (Pereira & Harvey, 1999).
     • They measure tacit knowledge and "non-academic intelligence" (Sternberg et al., 1995).
  4. What is an SJT?
     • Situational Judgment Tests (SJTs), or Inventories (SJIs), are psychological tests that present respondents with realistic, hypothetical scenarios and ask for an appropriate response.
     • SJTs are often identified as a type of low-fidelity simulation (Motowidlo, 1990).
     • SJTs can be designed to predict job performance, managerial ability, integrity, personality, and apparently other measures or constructs.
  5. Example of an SJT item from Becker (2005)
     11. You're retiring from a successful business that you started, and must now decide who will replace you. Two of your children want the position and would probably do a fine job. However, three non-family employees are more qualified. Who would you most likely put in charge?
     A. The best-performing non-family member, because the most qualified person deserves the job.
     B. The lowest-performing non-family member, because this won't hurt your children's feelings.
     C. The highest-performing child, because you have the right to do what is best for your kids.
     D. The child you love the most, as long as he or she is able to do the job.
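Scoring for items like this is typically keyed rather than objectively right or wrong: a pick-best item is scored against an expert key, while a rate-the-effectiveness item (one of the response formats the deck discusses) can be scored by closeness to the mean expert rating. A minimal sketch, with the item names, key, and ratings all invented for illustration:

```python
# Hypothetical scoring sketch. The item names, the expert key, and the
# expert ratings below are invented for illustration only.
expert_key = {"item11": "A"}                  # "pick the best response" item
expert_ratings = {"item12": [5, 4, 5, 4]}     # experts' effectiveness ratings

def score_pick_best(item, response):
    """1 point if the response matches the expert key, else 0."""
    return 1 if expert_key[item] == response else 0

def score_rating(item, rating):
    """Score a rate-the-effectiveness response by closeness to the expert mean."""
    mean_expert = sum(expert_ratings[item]) / len(expert_ratings[item])
    return -abs(rating - mean_expert)          # closer to consensus = higher score

print(score_pick_best("item11", "A"))          # matches the key
print(score_rating("item12", 3))               # distance from expert consensus
```

Real SJT keys are built from subject-matter-expert judgments or empirical keying; this sketch only shows the mechanics.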
  6. History of SJTs
     • First recorded SJT: the George Washington University Social Intelligence Test (1926)
     • Some usage during WWII by military psychologists
     • 1990: Motowidlo's research resurrected interest in SJTs with the idea of the "low-fidelity simulation"
     • Now commonly used in industry as a "customized" tool for organizations, consultants, etc.
     Takeaway: there is a lot of perceived promise and sunk cost in SJT research.
  7. What the heck is an SJT?
     Construct validity and the development of SJTs
  8. Item Characteristics
     McDaniel et al. (2005) claim that SJTs have eight differentiating characteristics:
     • Test fidelity
     • Stem length
     • Stem complexity
     • Stem comprehensibility
     • Nested stems
     • Nature of responses
     • Response instructions
     • Degree of item heterogeneity
     There are no prescribed standards for developing an SJT.
  9. Item Characteristics
     Examples of response instructions:
     • What is the best answer?
     • What would you most likely do?
     • Rate each response for effectiveness
     • Rate each response on the likelihood that you would engage in the behavior
     Two broader issues: knowledge vs. behavioral-tendency instructions, and the dichotomization issue.
  10. Item Characteristics and Construct Validity
      • Construct heterogeneity: most items tend to correlate with GMA, Agreeableness, Conscientiousness, or Emotional Stability (McDaniel, 2005)
      • Ployhart and Ehrhart (2003) note that the multiple constructs measured by SJTs make it hard to measure differences across studies
      Takeaway: SJTs are best described as a method, not a construct (Schmitt & Chan, 2006)
  11. Exciting findings from research on SJTs
      The promise of SJTs
  12. Generalizability
      • McDaniel et al. (2007) meta-analytically demonstrated that SJTs have incremental validity of:
        – .03 to .05 over GMA
        – .06 to .07 over the Big Five
        – .01 to .02 over a GMA/Big Five composite
      • McDaniel et al. (2001) showed that SJTs are generalizable predictors of job performance
        – The 90% credibility value did not contain zero in the meta-analysis
      • Potosky and Bobko (2004) showed a .84 score-equivalence correlation between an SJT administered via paper-and-pencil and via the Internet
        – No effects based on beliefs about computer efficacy
      Takeaway: multiple meta-analyses have demonstrated the generalizability of SJTs in predicting job performance.
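Incremental validity of the kind McDaniel et al. report is the gain in the multiple correlation R when the SJT score is added to a regression that already contains GMA. A minimal sketch with simulated data; the loadings below are assumptions chosen for illustration, not the meta-analytic estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated scores; the coefficients are illustrative assumptions only.
gma = rng.normal(size=n)                           # general mental ability
sjt = 0.5 * gma + rng.normal(scale=0.9, size=n)    # SJT score with a g load
perf = 0.5 * gma + 0.2 * sjt + rng.normal(size=n)  # job performance criterion

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_gma = r_squared([gma], perf)
r2_both = r_squared([gma, sjt], perf)
delta_r = np.sqrt(r2_both) - np.sqrt(r2_gma)       # increment in multiple R

print(f"R (GMA only):  {np.sqrt(r2_gma):.3f}")
print(f"R (GMA + SJT): {np.sqrt(r2_both):.3f}")
print(f"Incremental R: {delta_r:.3f}")
```

The same hierarchical comparison, with the Big Five added to the baseline model, would reproduce the other increments on the slide.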
  13. Variability in SJTs
      • Lievens and Sackett (2006) showed that video-based SJTs for interpersonal skills have higher validity than written SJTs.
      • McDaniel et al. (2007) showed that reliabilities for SJTs can range from .63 to .88
        – The meta-analysis reports coefficient alpha, but other reliability measures matter
      Takeaway: the effects of variations in level of fidelity offer interesting possibilities for research.
  14. Assessment Reactions and Face Validity
      • Chan & Schmitt (1997) showed that Black-White differences in test performance and face-validity reactions were lower for video-based SJTs than for paper-and-pencil tests
        – The race × method interaction was attributable to subgroup differences in reading comprehension
        – Increasing fidelity increased mean performance on the SJT
      • Chan (1997) showed that paper-and-pencil SJTs are more consistent with the beliefs, values, and expectations of whites.
        – Moving to a video-based SJT increased validity perceptions for both whites and blacks
      • Bauer and Truxillo (2006) assert that SJTs always have better face validity than cognitive and personality measures.
      Takeaway: SJTs are useful in terms of face validity and justice perceptions, particularly high-fidelity (video) simulations.
  15. The trouble with…
      Problems with SJTs
  16. The problem of g
      • Nguyen (2005) found that SJT scores under knowledge instructions correlated .56 with GMA, while scores under behavioral-tendency instructions correlated .38 with GMA.
      • Peeters and Lievens (2005) found that faking-good instructions produced differences in means and criterion-related validities across subgroups.
        – "Specifically, the fakability of SJTs might depend on their correlation with one particular construct, namely, cognitive ability (see Nguyen & McDaniel, 2001)." (p. 73)
      • Schmidt and Hunter (2003) found low discriminant validity between SJTs and job knowledge tests.
      • The retesting issue:
        – "If subgroup differences on a test exist, policies that permit retests by candidates who were unsuccessful on the test might inflate calculations of adverse impact." (Lievens et al., 2005, p. 1005)
      Takeaway: if the degree of fakability of an SJT depends on its GMA load, SJTs might just be contaminated g tests or low-reliability job knowledge tests.
  22. Faking
      • Nguyen et al. (2005) found d = .34 under honest instructions and d = .15 under faking instructions
      • Ployhart and Ehrhart (2003) also note that behavioral-tendency response instructions are both more prone to faking and have more problematic reliability issues
      • Hooper et al. (2006) note that the fragmentation of the literature has made a meta-analytic study of this issue impossible.
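The d values quoted above are standardized mean differences (Cohen's d): the gap between two group means divided by the pooled standard deviation. A minimal sketch of the computation, using simulated scores rather than Nguyen et al.'s data; the 0.34 shift is an assumption chosen only to echo the magnitude on the slide:

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Simulated score distributions (illustrative, not Nguyen et al.'s data).
rng = np.random.default_rng(42)
group_a = rng.normal(0.34, 1.0, 10_000)   # mean shifted by an assumed 0.34 SD
group_b = rng.normal(0.00, 1.0, 10_000)
print(f"d = {cohens_d(group_a, group_b):.2f}")
```

With equal unit variances the recovered d simply approximates the simulated mean shift.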
  23. Response Instructions
      • Ployhart and Ehrhart (2003) found that response instructions had dramatic effects on validity, reliability, and performance for SJTs
        – They showed that the dimensionality of an SJT is crucial to determining which reliability estimate to use.
      • McDaniel et al. (2007) found meaningful differences between means for tests with different behavioral and knowledge instructions.
      • Lievens and Sackett (2009) found no meaningful differences between means in a high-stakes testing environment with medical school applicants.
      • The last two studies found that knowledge instructions for an SJT increased the scores' correlation with a GMA measure
      Takeaway: meta-analytic integration of these results is needed, but the primary research has yet to support it.
  24. What is the reliability of an SJT?
      • Bess (2001) points out the elephant in the room:
        – "SJTs by definition are multidimensional and therefore internal consistency is not an appropriate measure of reliability" (p. 29)
        – Schmitt and Chan (1997) also note this problem.
      • Examples of reliability estimates:
        – Ployhart and Ehrhart (2003) used split-half estimates to get reliabilities of .67 and .68.
        – Lievens and Sackett (2009) found low alphas for their SJT (.55-.56)
      • Lievens and Sackett (2007) noted that generating alternate forms is difficult for SJTs, given the contextual specificity of items.
        – This makes parallel-forms reliability impractical.
      Takeaway: no one is quite sure how to systematically assess reliability for SJT measures.
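The two estimates named above are mechanical to compute: coefficient alpha from the item variances, and an odd-even split-half correlation stepped up with the Spearman-Brown formula. A minimal sketch on simulated single-factor data, which is the best case for alpha; for a genuinely multidimensional SJT, alpha would understate reliability, which is exactly Bess's objection:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half(items):
    """Odd-even split-half reliability, stepped up with Spearman-Brown."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulated 12-item test whose items share one common factor (an assumption).
rng = np.random.default_rng(7)
n, k = 2000, 12
factor = rng.normal(size=(n, 1))
items = 0.6 * factor + rng.normal(size=(n, k))

print(f"alpha      = {cronbach_alpha(items):.2f}")
print(f"split-half = {split_half(items):.2f}")
```

On unidimensional data the two estimates roughly agree; replacing the single factor with several uncorrelated ones would drive alpha down while the test could still predict consistently.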
  25. What do we know about SJTs?
      Conclusions
  26. Things we know fairly clearly
      • SJTs are primarily a method, not a construct.
      • SJTs have demonstrated generalizable meta-analytic incremental validity over GMA and Big Five measures, alone and in composite, in predicting job performance
      • Most SJTs correlate with GMA to a varying extent and share some benefits and disadvantages with GMA
      • SJTs often correlate with the "Big 3" (Agreeableness, Conscientiousness, and Emotional Stability)
  27. McDaniel et al.'s (2006) integrated model for SJTs
      [Model diagram not reproduced in this export]
  28. Where do researchers go from here? Practitioners?
      Future directions for SJTs
  29. Ployhart & Weekley (2006) Agenda for Research
      • Construct validity
        – Correlates are known, but the nomological net is uncertain
        – SJTs targeted to specific constructs: the "holy grail" (p. 348)
        – What exactly is "judgment"?
      • Understanding SJT structure
        – How do we build SJTs to achieve construct homogeneity?
        – How do we enhance the reliability of these measures?
      • More experimentation and micro-level research
        – Correlational studies and meta-analyses show generalizability
        – Experimental studies can enhance understanding
  30. Ployhart & Weekley (2006) Agenda for Research (continued)
      • Need for theoretical development
        – Will help integrate SJTs into mainline I/O research
        – A theory of situation perception and judgment research
      • The limits of SJTs
        – Little is known about applicant conditions for SJTs
        – Generalizability in international contexts?
      • Expansion of organizational contexts for SJTs
        – Possible use in training and development contexts (Fritzsche et al., 2006)
        – Use in team contexts (Mumford et al., 2006)
  31. Other possibilities
      • Personality and self-reports
        – Ployhart and Ryan (2004) proposed integrating personality measures with an SJT to predict customer service orientation
        – Hogan's (2005) work suggests it may be possible to build a conscientiousness measure via an SJT that is more resistant to faking than current self-reports.
        – Bledow and Frese's (2009) work also supports using SJTs to create non-self-report measures of constructs like personal drive or initiative.
      • Teams
        – Mumford et al.'s (2006) work suggests building an SJT that could ably predict a "team player" mentality.
  32. References
  33. Bauer, T. N., & Truxillo, D. M. (2006). Applicant reactions to situational judgment tests: Research and related practical issues. In Situational Judgment Tests: Theory, Measurement, and Application.
      Becker, T. E. (2005). Development and validation of a situational judgment test of employee integrity. International Journal of Selection and Assessment, 13(3), 225-232.
      Bess, T. L. (2001). Exploring the dimensionality of situational judgment: Task and contextual knowledge. Unpublished master's thesis. Accessed 4/08/10 at: http://scholar.lib.vt.edu/theses/available/etd-04122001-183219/unrestricted/sjtdimensionality.pdf
      Bledow, R., & Frese, M. (2009). A situational judgment test of personal initiative and its relation to performance. Personnel Psychology, 229-258.
      Chan, D. (1997). Racial subgroup differences in predictive validity perceptions on personality and cognitive ability tests. Journal of Applied Psychology, 82(2), 311-320.
      Fritzsche, B. A., Stagl, K. C., Salas, E., & Burke, C. S. (2006). Enhancing the design, delivery, and evaluation of scenario-based training: Can situational judgment tests contribute? In Situational Judgment Tests: Theory, Measurement, and Application.
      Hogan, R. (2005). In defense of personality measurement: New wine for old whiners. Human Performance, 18(4), 331-341.
      Hooper, A., Cullen, M., & Sackett, P. (2006). Operational threats to the use of SJTs: Faking, coaching, and retesting issues. In Situational Judgment Tests: Theory, Measurement, and Application.
      Landy, F. J. (2007). The validation of personnel decisions in the twenty first century: Back to the future. In S. M. McPhail (Ed.), Alternate validation strategies: Developing and leveraging existing validity evidence (pp. 409-426). San Francisco: Jossey-Bass.
      Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181-1188.
      Lievens, F., Sackett, P., & Buyse, T. (2009). The effects of response instructions on situational judgment test performance and validity in a high-stakes context. Journal of Applied Psychology, 94(4), 1095-1101.
      McDaniel, M., & Nguyen, N. (2001). Situational judgment tests: A review of practice and constructs assessed. International Journal of Selection and Assessment, 9(1/2), 103-113.
      McDaniel, M., Finnegan, E., Morgeson, F., Campion, M., & Braverman, E. (2001). Use of situational judgment tests to predict job performance: A clarification of the literature. Journal of Applied Psychology, 86(4), 730-740.
  34. McDaniel, M., Whetzel, D., Hartman, N., Nguyen, N., & Grubb, W. L., III. (2006). Situational judgment tests: Validity and an integrative model. In Situational Judgment Tests: Theory, Measurement, and Application.
      McDaniel, M., Hartman, N. S., Whetzel, D., & Grubb, W. L., III. (2007). Situational judgment tests, response instructions, and validity: A meta-analysis. Personnel Psychology, 60, 63-91.
      Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75, 640-647.
      Mumford, T. V., Campion, M., & Morgeson, F. P. (2006). Situational judgment tests in work teams: A team role typology. In Situational Judgment Tests: Theory, Measurement, and Application.
      Nguyen, N. T., Biderman, M. D., & McDaniel, M. A. (2005). Effects of response instructions on faking a situational judgment test. International Journal of Selection and Assessment, 13, 250-260.
      Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89, 187-207.
      Peeters, H., & Lievens, F. (2005). Situational judgment tests and their predictiveness of college students' success: The influence of faking. Educational and Psychological Measurement, 65, 70-89.
      Ployhart, R. E., & Ryan, A. M. (2004, April). Integrating personality tests with situational judgment tests for the prediction of customer service performance. Paper presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
      Ployhart, R. E., & Weekley, J. (2003). Web-based and paper-and-pencil testing of applicants in a proctored setting: Are personality tests, biodata, and situational judgment tests comparable? Personnel Psychology, 56, 733-752.
      Potosky, D., & Bobko, P. (2004). Selection testing via the Internet: Practical considerations and exploratory empirical findings. Personnel Psychology, 57, 1003-1034.
      Schmidt, F., & Hunter, J. E. (2003). Tacit knowledge, practical intelligence, general mental ability, and job knowledge. Current Directions in Psychological Science, 2, 8-9.