Chapter 7: Measurement in Selection


  1. Part 4 Staffing Activities: Selection
     - Chapter 7: Measurement
     - Chapter 8: External Selection I
     - Chapter 9: External Selection II
     - Chapter 10: Internal Selection
     McGraw-Hill/Irwin. Copyright © 2012 by The McGraw-Hill Companies, Inc. All rights reserved.
  2. Part 4 Staffing Activities: Selection. Chapter 7: Measurement
  3. Staffing Organizations Model (overview diagram)
     - Organization: Mission, Goals and Objectives; Organization Strategy; HR and Staffing Strategy
     - Staffing Policies and Programs
       - Support Activities: Legal compliance, Planning, Job analysis
       - Core Staffing Activities: Recruitment (external, internal), Selection (measurement, external, internal), Employment (decision making, final match)
     - Staffing System and Retention Management
  4. Chapter Outline
     - Importance and Use of Measures
     - Key Concepts
       - Measurement
       - Scores
       - Correlation Between Scores
     - Quality of Measures
       - Reliability of Measures
       - Validity of Measures
       - Validation of Measures in Staffing
       - Validity Generalization
       - Staffing Metrics and Benchmarks
     - Collection of Assessment Data
       - Testing Procedures
       - Acquisition of Tests and Test Manuals
       - Professional Standards
     - Legal Issues
       - Disparate Impact Statistics
       - Standardization and Validation
  5. Learning Objectives for This Chapter
     - Define measurement and understand its use and importance in staffing decisions
     - Understand the concept of reliability and review the different ways the reliability of measures can be assessed
     - Define validity and consider the relationship between reliability and validity
     - Compare and contrast the two types of validation studies typically conducted
     - Consider how validity generalization affects and informs the validation of measures in staffing
     - Review the primary ways assessment data can be collected
  6. Key Concepts
     - Measurement: the process of assigning numbers to objects to represent quantities of an attribute of the objects
     - Scores: the amount of the attribute being assessed, expressed numerically
     - Correlation between scores: a statistical measure of the relationship between two sets of scores
  7. Importance and Use of Measures
     - Measures: methods or techniques for describing and assessing attributes of objects
     - Examples
       - Tests of applicant KSAOs
       - Job performance ratings of employees
       - Applicants' ratings of their preferences for various types of job rewards
  8. Importance and Use of Measures (continued)
     - Summary of the measurement process
       (a) Choose an attribute of interest
       (b) Develop an operational definition of the attribute
       (c) Construct a measure of the attribute as operationally defined
       (d) Use the measure to actually gauge the attribute
     - Results of the measurement process
       - Scores become indicators of the attribute
       - The initial attribute and its operational definition are transformed into a numerical expression of the attribute
  9. Measurement: Definition
     - The process of assigning numbers to objects to represent quantities of an attribute of the objects
       - Attribute/construct: knowledge of mechanical principles
       - Objects: job applicants
  10. Exh. 7.1: Use of Measures in Staffing
  11. Measurement: Standardization
     - Standardization involves
       - Controlling the influence of extraneous factors on the scores generated by a measure
       - Ensuring that the scores obtained reflect the attribute measured
     - Properties of a standardized measure
       - Content is identical for all objects measured
       - Administration of the measure is identical for all objects
       - Rules for assigning numbers are clearly specified and agreed on in advance
  12. Measurement: Levels
     - Nominal: a given attribute is categorized and numbers are assigned to the categories; no order or level is implied among categories
     - Ordinal: objects are rank-ordered according to how much of the attribute they possess; represents relative differences among objects
     - Interval: objects are rank-ordered, and differences between adjacent points on the measurement scale are equal in terms of the attribute
     - Ratio: similar to interval scales (equal differences between scale points), with a logical or absolute zero point (all four levels are illustrated in the sketch below)
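As a quick illustration of the four levels, here is a minimal sketch using invented applicant fields; every name and value below is hypothetical rather than taken from the text.

```python
# Hypothetical applicant records illustrating the four levels of measurement.
# All field names and values are invented for illustration only.
applicants = [
    {"id": 1, "department_code": 3, "interview_rank": 2, "test_score": 78, "years_experience": 4.0},
    {"id": 2, "department_code": 1, "interview_rank": 1, "test_score": 85, "years_experience": 0.0},
]

# Nominal:  department_code labels categories; code 3 is not "more" than code 1.
# Ordinal:  interview_rank orders applicants, but the gap between ranks is unknown.
# Interval: test_score has equal intervals, but 0 does not mean "no knowledge at all".
# Ratio:    years_experience has a true zero, so 4.0 years is twice 2.0 years.
```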
  13. Measurement: Differences in Objective and Subjective Measures
     - Objective measures: the rules used to assign numbers to the attribute are predetermined, communicated, and applied through a system
     - Subjective measures: the scoring system is more elusive, often involving a rater who assigns the numbers
     - Research results
  14. Scores
     - Definition
       - Measures provide scores to represent the amount of the attribute being assessed
       - Scores are the numerical indicator of the attribute
     - Central tendency and variability (Exh. 7.2: Central Tendency and Variability: Summary Statistics)
     - Percentiles: the percentage of people scoring below an individual in a distribution of scores
     - Standard scores: raw scores expressed as distances from the mean in standard deviation units (see the sketch below)
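A minimal sketch, using invented scores and Python's standard library, of how the central tendency, variability, percentile, and standard-score ideas named above can be computed; none of the numbers come from the text.

```python
from statistics import mean, pstdev

scores = [12, 15, 15, 17, 18, 20, 22, 25]   # hypothetical test scores

m = mean(scores)      # central tendency
sd = pstdev(scores)   # variability (population standard deviation)

def percentile_rank(x, data):
    """Percentage of people scoring below x in the distribution."""
    return 100 * sum(s < x for s in data) / len(data)

def z_score(x, m, sd):
    """Standard score: distance from the mean in standard-deviation units."""
    return (x - m) / sd

print(f"mean = {m:.2f}, sd = {sd:.2f}")
print(f"percentile rank of 22: {percentile_rank(22, scores):.0f}")
print(f"standard (z) score of 22: {z_score(22, m, sd):.2f}")
```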
  15. Correlation Between Scores
     - Scatter diagrams: used to plot the joint distribution of two sets of scores (Exh. 7.3: Scatter Diagrams and Corresponding Correlations)
     - Correlation coefficient
       - The value of r summarizes both the strength and the direction of the relationship between two sets of scores
       - Values can range from r = -1.0 to r = 1.0
       - Interpretation: correlation between two variables does not imply causation
       - Exh. 7.4: Calculation of Product-Moment Correlation Coefficient (a hand calculation is sketched below)
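The product-moment correlation referenced in Exh. 7.4 is the standard Pearson r; here is a minimal hand calculation on two invented sets of scores (the exhibit's own numbers are not reproduced).

```python
from math import sqrt

# Hypothetical paired scores, e.g., a test score (x) and a performance rating (y).
x = [3, 5, 6, 8, 9]
y = [2, 4, 5, 7, 9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Pearson (product-moment) r: co-variation scaled by the variation in each variable.
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
var_x = sum((a - mx) ** 2 for a in x)
var_y = sum((b - my) ** 2 for b in y)

r = cov / sqrt(var_x * var_y)
print(f"r = {r:.2f}")   # always falls between -1.0 and 1.0
```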
  16. Exh. 7.3: Scatter Diagrams and Corresponding Correlations
  17. Exh. 7.3: Scatter Diagrams and Corresponding Correlations (continued)
  18. Exh. 7.3: Scatter Diagrams and Corresponding Correlations (continued)
  19. Significance of the Correlation Coefficient
     - Practical significance
       - Refers to the size of the correlation coefficient
       - The greater the degree of common variation between two variables, the more one variable can be used to understand the other
     - Statistical significance
       - Refers to the likelihood that a correlation exists in a population, based on knowledge of the actual value of r in a sample from that population
       - The significance level is expressed as p < some value
       - Interpretation: if p < .05, there are fewer than 5 chances in 100 of concluding there is a relationship in the population when, in fact, there is not (a sample calculation follows below)
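A minimal check of both kinds of significance on invented data; the choice of SciPy's pearsonr is mine, not something the slide prescribes.

```python
from scipy.stats import pearsonr

predictor = [3, 5, 6, 8, 9, 11, 12, 14]    # hypothetical test scores
criterion = [2, 4, 5, 7, 9, 10, 13, 15]    # hypothetical performance ratings

r, p = pearsonr(predictor, criterion)

print(f"r   = {r:.2f}")      # practical significance: the size of r
print(f"r^2 = {r**2:.2f}")   # proportion of common variation between the variables
print(f"p   = {p:.4f}")      # statistical significance: compare against .05
```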
  20. Quality of Measures
     - Reliability of measures
     - Validity of measures
     - Validity of measures in staffing
     - Validity generalization
     - Staffing metrics and benchmarks
  21. Quality of Measures: Reliability
     - Definition: consistency of measurement of an attribute
       - A measure is reliable to the extent that it provides a consistent set of scores to represent the attribute
     - Reliability of measurement is of concern
       - Both within a single time period and between time periods
       - For both objective and subjective measures
     - Exh. 7.6: Summary of Types of Reliability
  22. Exh. 7.6: Summary of Types of Reliability
  23. Quality of Measures: Reliability
     - Measurement error
       - Actual score = true score + error (simulated briefly below)
       - Deficiency error: occurs when there is a failure to measure some aspect of the attribute assessed
       - Contamination error: represents the occurrence of unwanted or undesirable influences on the measure and on the individuals being measured
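A minimal simulation of the actual score = true score + error idea, with an arbitrary true score and error spread; it only illustrates why repeated measurements of the same person scatter.

```python
import random

random.seed(1)

true_score = 50   # hypothetical true standing on the attribute
# Six repeated measurements, each distorted by random error (SD = 5, arbitrary).
actual_scores = [true_score + random.gauss(0, 5) for _ in range(6)]

print([round(s, 1) for s in actual_scores])
# The scatter of these actual scores around 50 is what reliability quantifies.
```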
  24. Exh. 7.7: Sources of Contamination Error and Suggestions for Control
  25. Quality of Measures: Reliability
     - Procedures to calculate reliability estimates
       - Coefficient alpha: should be at least .80 for a measure to have an acceptable degree of reliability (a sample calculation follows below)
       - Interrater agreement: a minimum level of 75% agreement or higher
       - Test-retest reliability: concerned with the stability of measurement; r should range between .50 and .90
       - Intrarater agreement: for short time intervals between measures, a fairly high relationship is expected (r = .80, or 90% agreement)
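A minimal sketch of coefficient alpha computed from invented item-level responses; the formula is the standard Cronbach's alpha, and the .80 comparison mirrors the guideline above.

```python
from statistics import pvariance

def coefficient_alpha(responses):
    """Cronbach's alpha for a list of respondents, each a list of item scores."""
    k = len(responses[0])                            # number of items
    items = list(zip(*responses))                    # scores grouped by item
    item_var_sum = sum(pvariance(i) for i in items)  # sum of item variances
    total_var = pvariance([sum(r) for r in responses])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 5 applicants x 4 test items.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

print(f"coefficient alpha = {coefficient_alpha(responses):.2f}")  # compare with .80
```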
  26. Quality of Measures: Reliability
     - Implications of reliability
       - Standard error of measurement: since only one score is obtained from an applicant, the critical issue is how accurate that score is as an indicator of the applicant's true level of knowledge (estimated in the sketch below)
       - Relationship to validity
         - The reliability of a measure places an upper limit on its possible validity
         - A highly reliable measure is not necessarily valid
         - Reliability does not guarantee validity; it only makes validity possible
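The standard error of measurement is conventionally estimated as the score standard deviation times the square root of one minus the reliability; the sketch below applies that formula to invented numbers.

```python
from math import sqrt

sd = 10.0           # hypothetical standard deviation of test scores
reliability = 0.80  # hypothetical reliability estimate (e.g., coefficient alpha)

sem = sd * sqrt(1 - reliability)   # standard error of measurement
observed = 75                      # one applicant's observed score

print(f"SEM = {sem:.2f}")
# A rough band of +/- 1 SEM around the observed score for the true score.
print(f"true score likely near {observed - sem:.1f} to {observed + sem:.1f}")
```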
  27. Quality of Measures: Validity
     - Definition: the degree to which a measure truly measures the attribute it is intended to measure
     - Accuracy of measurement (Exh. 7.9: Accuracy of Measurement)
     - Accuracy of prediction (Exh. 7.10: Accuracy of Prediction)
  28. Exh. 7.9: Accuracy of Measurement
  29. Discussion Questions
     - Give examples of when you would want the following for a written job knowledge test:
       - A low coefficient alpha (e.g., α = .35)
       - A low test-retest reliability
  30. Exh. 7.10: Accuracy of Prediction
  31. Exh. 7.10: Accuracy of Prediction (continued)
  32. Validity of Measures in Staffing
     - Importance of validity to the staffing process
       - Predictors must be accurate representations of the KSAOs to be measured
       - Predictors must be accurate in predicting job success
     - The validity of predictors is explored through validation studies
     - Two types of validation studies
       - Criterion-related validation
       - Content validation
  33. Exh. 7.11: Criterion-Related Validation
     - Criterion measures: measures of performance on tasks and task dimensions
     - Predictor measure: taps into one or more of the KSAOs identified in job analysis
     - Predictor-criterion scores: must be gathered from a sample of current employees or job applicants
     - Predictor-criterion relationship: the correlation between predictor and criterion scores must be calculated (see the sketch below)
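A minimal concurrent-style sketch of the predictor-criterion relationship: invented test scores and performance ratings from the same sample, with their correlation treated as the validity coefficient (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation   # Python 3.10+

test_scores = [62, 70, 75, 80, 84, 90, 93]          # predictor: hypothetical KSAO test
performance = [3.1, 3.0, 3.6, 3.8, 4.0, 4.4, 4.6]   # criterion: hypothetical ratings

validity_coefficient = correlation(test_scores, performance)
print(f"validity coefficient r = {validity_coefficient:.2f}")
```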
  34. Exh. 7.12: Concurrent and Predictive Validation Designs
  35. Exh. 7.12: Concurrent and Predictive Validation Designs (continued)
  36. Content Validation
     - Content validation involves demonstrating that the questions or problems on the predictor are a representative sample of the kinds of situations occurring on the job
     - Criterion measures are not used; a judgment is made about the probable correlation between the predictor and criterion measures
     - Used in two situations
       - When there are too few people to form a sample for criterion-related validation
       - When criterion measures are not available
     - Exh. 7.14: Content Validation
  37. Validity Generalization
     - The degree to which validity can be extended to other contexts
       - Contexts include different situations, samples of people, and time periods
     - Situation-specific validity vs. validity generalization
       - Exh. 7.16: Hypothetical Validity Generalization Example
       - The distinction is important because
         - Validity generalization allows greater latitude than situation specificity
         - It is more convenient and less costly not to have to conduct a separate validation study for every situation
  38. Staffing Metrics and Benchmarks
     - Metrics: quantifiable measures that demonstrate the effectiveness (or ineffectiveness) of a particular practice or procedure
     - Staffing metrics
       - Job analysis
       - Validation
       - Measurement
     - Benchmarking as a means of developing metrics
  39. Collection of Assessment Data
     - Testing procedures
       - Paper-and-pencil measures
       - PC- and Web-based approaches
     - Applicant reactions
     - Acquisition of tests and test manuals
       - Paper-and-pencil measures
       - PC- and Web-based approaches
     - Professional standards
  40. Legal Issues
     - Disparate impact statistics (see the sketch after this list)
       - Applicant flow statistics
       - Applicant stock statistics
     - Standardization
       - Lack of consistency in the treatment of applicants is a major factor contributing to discrimination
         - Example: gathering different types of background information from protected vs. non-protected groups
         - Example: evaluating information differently for protected vs. non-protected groups
     - Validation
       - If adverse impact exists, a company must either eliminate it or justify that it exists for job-related reasons (validity evidence)
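A minimal applicant-flow sketch with invented counts: selection rates are compared across groups, and the four-fifths (80%) comparison used as a benchmark here is a common rule of thumb that the slide itself does not mention.

```python
# Hypothetical applicant-flow counts for two groups.
groups = {
    "group_a": {"applicants": 120, "hires": 30},
    "group_b": {"applicants": 80,  "hires": 12},
}

rates = {g: d["hires"] / d["applicants"] for g, d in groups.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    print(f"{g}: selection rate = {rate:.2f}, ratio to highest rate = {ratio:.2f}")
    # Ratios well below 0.80 are often read as a signal of possible disparate impact
    # (the 0.80 benchmark is an assumption here, not part of the slide).
```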
