Scale development

  1. Scale Development: An Overview. Michael Sony (emailofsony@gmail.com). Workshop on Research Methods at GUDMS.
  2. Measurement: a fundamental activity in science
      • Physical measures
      • Examples: weight, pressure, voltage, temperature, speed
  3. Measurement Scales
  4. Measurement: social measures
      • Examples: job satisfaction, stigma perceived by people with HIV, happiness, quality of life, motivation, emotional intelligence, meaning in life
  5. Meaning in Life scale
      _____ 1. I understand my life’s meaning.
      _____ 2. I am looking for something that makes my life feel meaningful.
      _____ 3. I am always looking to find my life’s purpose.
      _____ 4. My life has a clear sense of purpose.
      _____ 5. I have a good sense of what makes my life meaningful.
      _____ 6. I have discovered a satisfying life purpose.
      _____ 7. I am always searching for something that makes my life feel significant.
      _____ 8. I am seeking a purpose or mission for my life.
      _____ 9. My life has no clear purpose.
      _____ 10. I am searching for meaning in my life.
      Steger, M. F., Frazier, P., Oishi, S., & Kaler, M. (2006). The meaning in life questionnaire: Assessing the presence of and search for meaning in life. Journal of Counseling Psychology, 53(1), 80.
  6. Steps in Scale Development
      Construct Definition → Item Generation → Content Validity & Pretesting → Measurement Purification → Verification of Dimensionality → Nomological Validity → Criterion-related Validity → Accounting for Known Issues in Measurement Scales
  7. Construct Definition
  8. Steps in Scale Development
      • Construct definition
      • Specify the domain of the construct
      • What is included and what is excluded
      • Theory as an aid to clarity
  9. Item Generation
  10. Steps in Scale Development: Number of Items
      • How many items? Impossible to answer precisely
      • Rule of thumb, DeVellis (2003): generate an initial pool 3 to 4 times larger than the final scale
      • Rule of thumb for the final scale: a minimum of 3 items per dimension
  11. An Example: SERVQUAL
      • Initial dimensions: 10
      • Initial items generated: 97
      • Final dimensions: 5
      • Number of items: 22
        – Tangibles: 4
        – Reliability: 5
        – Responsiveness: 4
        – Assurance: 4
        – Empathy: 5
      • Reduction in items: (97 - 22)/97 ≈ 77%
      Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL. Journal of Retailing, 64(1), 12-40. (Citations: 16,407)
  12. Steps in Scale Development: Characteristics of Good and Bad Items
      a) Exceptionally lengthy items
         • It is fair to say that one of the things I seem to have a problem with much of the time is getting my point across to other people.
         • I often have difficulty making a point.
      b) Reading difficulty
  13. Steps in Scale Development
      c) Redundancy
         – I will do almost anything to ensure my child’s success.
         – No sacrifice is too great if it helps my child achieve success.
  14. Steps in Scale Development
      c) Redundancy (continued)
         – In my opinion, pet lovers are kind people.
         – In my estimation, pet lovers are kind people.
      • Redundancy is tolerated in the initial pool of items
  15. Steps in Scale Development
      d) Multiple negatives
         • I am not in favor of corporations stopping funding for antinuclear groups.
         • I favor continued private support of groups advocating a nuclear ban.
  16. Steps in Scale Development
      e) Double-barreled items
         • I support civil rights because discrimination is a crime against God.
         • If a person supports civil rights for reasons other than its affront to a deity (e.g., because it is a crime against humanity), how should he or she answer?
  17. Steps in Scale Development
      f) Positively and negatively worded items
         • The intent of wording items both positively and negatively within the same scale is usually to avoid an acquiescence, affirmation, or agreement bias. Negatively worded items are then reverse-scored before analysis, as sketched below.
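Where a scale mixes positively and negatively worded items, the negatively worded ones are typically reverse-scored before totals or reliability are computed. A minimal sketch in Python; the item names, the 5-point response format, and the data are illustrative assumptions, not part of the original slides:

```python
import pandas as pd

# Illustrative 5-point Likert responses; item_3 is negatively worded.
df = pd.DataFrame({
    "item_1": [4, 5, 3, 4],
    "item_2": [5, 4, 4, 5],
    "item_3": [2, 1, 3, 2],   # negatively worded
})

SCALE_MIN, SCALE_MAX = 1, 5

# Reverse-score: a response x becomes (min + max) - x,
# so 1 <-> 5, 2 <-> 4, and 3 stays 3 on a 5-point scale.
df["item_3_rev"] = (SCALE_MIN + SCALE_MAX) - df["item_3"]

# Total score uses the reversed version of the negative item.
df["total"] = df[["item_1", "item_2", "item_3_rev"]].sum(axis=1)
print(df)
```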
  18. Content Validity
  19. Content validity: the degree to which an instrument has an appropriate sample of items for the construct being measured
  20. Content Validity
      1. Initial screening of items;
      2. Expert assessment of the applicability of the items to each dimension; and
      3. Expert assessment of the representativeness of the items to each dimension.
  21. Applicability of the items to each dimension
      • Judges are given the definition of each dimension, a related explanation and an example item.
      • The judges are asked to allocate each item to one of the dimensions or to a “not applicable” category.
      • Items are eliminated if they are not allocated to the appropriate dimension by at least (n - 1) of the n judges (see the sketch below).
      • Calculate correlations among judges and inter-judge reliability.
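The (n - 1)-of-n retention rule on this slide can be applied mechanically once each judge's allocations are tabulated. A rough sketch in Python; the judge names, item names, and dimension labels are hypothetical:

```python
import pandas as pd

# Illustrative judge allocations: each cell is the dimension a judge
# assigned the item to ("NA" = not applicable).
assignments = pd.DataFrame(
    {
        "judge_1": ["Tangibles", "Empathy", "Tangibles"],
        "judge_2": ["Tangibles", "Empathy", "NA"],
        "judge_3": ["Tangibles", "Reliability", "Reliability"],
    },
    index=["item_1", "item_2", "item_3"],
)
intended = pd.Series(
    {"item_1": "Tangibles", "item_2": "Empathy", "item_3": "Tangibles"}
)

n_judges = assignments.shape[1]

# Count, per item, how many judges placed it in its intended dimension.
hits = assignments.eq(intended, axis=0).sum(axis=1)

# Keep an item only if at least (n - 1) of the n judges agreed.
retained = hits >= (n_judges - 1)
print(pd.DataFrame({"hits": hits, "retained": retained}))
```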
  23. Technique to spot aberrant judges
      • Intentional introduction of lie items
      • Random
      • Camouflage
      • Lie-item detection rate: 80-100% Excellent; 60-80% Good; 40-60% Average; < 40% Poor
  24. Representativeness of the items to each dimension
      • Each item is judged for relevance, simplicity and clarity.
      • Calculate the content validity index (CVI).
  25. Rating Form
  26. CVI (Source: Polit and Beck, 2006)
      Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497.
  27. Guidelines
      • Lynn’s (1986) criteria: I-CVI = 1.00 with 3 to 5 experts, and a minimum I-CVI of .78 for 6 to 10 experts; the scale should also have an S-CVI/Ave of .90 or higher (see the sketch below).
      Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35, 382–385.
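A sketch of how I-CVI and S-CVI/Ave are computed from expert relevance ratings, following the Polit and Beck (2006) convention of counting ratings of 3 or 4 as relevant; the expert names and rating values are illustrative assumptions:

```python
import pandas as pd

# Illustrative relevance ratings (1-4 scale) from five experts.
ratings = pd.DataFrame(
    {
        "expert_1": [4, 3, 2, 4],
        "expert_2": [4, 4, 2, 3],
        "expert_3": [3, 4, 3, 4],
        "expert_4": [4, 3, 1, 4],
        "expert_5": [4, 4, 2, 3],
    },
    index=["item_1", "item_2", "item_3", "item_4"],
)

# I-CVI: proportion of experts rating the item 3 or 4.
i_cvi = (ratings >= 3).mean(axis=1)

# S-CVI/Ave: mean of the item-level I-CVIs.
s_cvi_ave = i_cvi.mean()

print(i_cvi)
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")

# Lynn's (1986) rule of thumb: with 3-5 experts every I-CVI should be 1.00;
# with 6-10 experts a minimum I-CVI of .78; S-CVI/Ave of .90 or higher.
flagged = i_cvi[i_cvi < 1.00]
print("Items needing revision:", list(flagged.index))
```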
  28. Guidelines
      • Two rounds of expert review are recommended if the initial I-CVIs suggest the need for substantial item improvements, or if the reviewers identify aspects of the construct not adequately covered by the initial pool of items (Polit et al., 2006).
      Polit, D. F., Beck, C. T., & Hungler, B. D. (2006). Essentials of Nursing Research: Methods, Appraisal and Utilization.
  29. Face Validity
      • Face validity is a post hoc assessment of whether the items in a scale appear to measure the construct.
      • Experts from the field may be asked to comment on the scale, and appropriate suggestions may be incorporated.
  30. Pre-testing / Pilot Testing
      • Use representative target groups.
      • Pre-testing is the administration of the data collection instrument to a small set of respondents from the population.
      • The purpose of pre-testing is to identify problems with the data collection instrument and find possible solutions.
      • Recommended sample size: 10% of the final sample size.
  31. Measurement Purification
  32. Measurement Purification
      • Exploratory factor analysis (EFA)
      • Conducted to provide insight into the dimensionality of the scale and for item reduction (a sketch follows).
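A rough sketch of an exploratory factor analysis pass for purification, assuming the third-party factor_analyzer package and purely simulated data; the three-factor solution, the varimax rotation, and the 0.40 loading cut-off are assumptions for illustration, not prescriptions from the slides:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

# Simulate 200 respondents on 12 items driven by three latent factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
structure = np.repeat(np.eye(3), 4, axis=0)          # each block of 4 items -> one factor
noise = rng.normal(scale=0.8, size=(200, 12))
items = pd.DataFrame(latent @ structure.T + noise,
                     columns=[f"item_{i}" for i in range(1, 13)])

# Exploratory factor analysis with varimax rotation to inspect dimensionality.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3"])
print(loadings.round(2))

# Items with low loadings on every factor (e.g., all below .40)
# are candidates for deletion during purification.
weak = loadings[(loadings.abs() < 0.40).all(axis=1)]
print("Candidate items to drop:", list(weak.index))
```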
  33. Measurement Purification: Guidelines
      • Compute coefficient alpha (Cronbach’s alpha) and item-to-total correlations.
      • Delete items whose item-to-total correlation is low and whose deletion increases coefficient alpha (see the sketch below).
      • Use factor analysis for data reduction and to uncover the underlying factor structure.
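A minimal sketch of the coefficient-alpha and item-to-total computations described above; the data are simulated and the column names are hypothetical:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (columns) scored in the same direction."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses that share a common component.
rng = np.random.default_rng(1)
base = rng.normal(size=200)
data = pd.DataFrame({f"item_{i}": base + rng.normal(scale=1.0, size=200)
                     for i in range(1, 6)})

alpha = cronbach_alpha(data)
print(f"alpha = {alpha:.3f}")

for col in data.columns:
    rest = data.drop(columns=col)
    # Corrected item-to-total correlation: item vs. total of the remaining items.
    r = data[col].corr(rest.sum(axis=1))
    # Alpha if this item were deleted.
    alpha_without = cronbach_alpha(rest)
    print(f"{col}: item-total r = {r:.2f}, alpha if deleted = {alpha_without:.3f}")
```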
  37. Verification of Dimensionality
  38. Verification of Dimensionality
      Construct validity: the degree to which a scale measures what it claims, or purports, to be measuring.
      1. Convergent validity
      2. Discriminant validity
  39. Verification of Dimensionality
      • Convergent validity: items that are indicators of the same construct should converge, i.e. share a high proportion of variance.
      • Guideline: average factor loadings > 0.7.
  41. • Average factor loading for Attitude towards Co-workers = 0.87, VE = 0.75
      • Average factor loading for Environmental Perception = 0.835, VE = 0.70
      • Average factor loading for Job Satisfaction = 0.825, VE = 0.68
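A small sketch showing how variance extracted (VE) follows from standardized loadings, computed here as the mean of the squared loadings; the individual loading values are assumptions chosen to be consistent with the averages quoted above:

```python
import numpy as np

# Standardized factor loadings for one construct (illustrative values).
loadings = np.array([0.90, 0.88, 0.85, 0.85])

avg_loading = loadings.mean()
# Variance extracted (AVE): average of the squared standardized loadings.
ave = (loadings ** 2).mean()

print(f"average loading = {avg_loading:.3f}")   # ~0.87
print(f"variance extracted = {ave:.2f}")        # ~0.76
```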
  42. Verification of Dimensionality
      • Discriminant validity: the extent to which a construct is truly distinct from other constructs.
      • Guideline: the variance extracted for any two constructs should exceed the square of the correlation estimate between those two constructs.
  44. Discriminant Validity (example)
      • Average VE for AC and EP = (0.75 + 0.70)/2 ≈ 0.72
      • Correlation between them = 0.225; squared correlation = 0.225² ≈ 0.05
      • Since 0.72 > 0.05, discriminant validity is supported (see the sketch below).
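The comparison above can be written as a one-line check of the guideline from slide 42 (often associated with the Fornell-Larcker criterion); the numbers are taken from the slide:

```python
# Discriminant validity check (sketch): variance extracted should exceed
# the squared correlation between the two constructs.
ve_ac, ve_ep = 0.75, 0.70        # variance extracted, values from the slide
r_ac_ep = 0.225                  # estimated correlation between the constructs

avg_ve = (ve_ac + ve_ep) / 2     # ~0.72
shared_variance = r_ac_ep ** 2   # ~0.05

print(f"average VE = {avg_ve:.3f}, squared correlation = {shared_variance:.3f}")
print("discriminant validity supported:", min(ve_ac, ve_ep) > shared_variance)
```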
  45. Assessment of Reliability
  46. Internal Consistency
      • Internal consistency reliability analysis is a parametric procedure used to evaluate the consistency of results across items within a single scale (i.e., instrument) or subscale composed of multiple items.
      • Guideline: Cronbach’s α > 0.7.
  47. Assessment of Reliability: Test-Retest
      • Correlate scale scores from the same respondents at two different time points (e.g., 4 weeks apart); guideline: r > 0.7 (see the sketch below).
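A minimal sketch of the test-retest check: correlate scores from the same respondents at two time points; the data here are simulated purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative total scale scores from the same respondents,
# measured at two time points about four weeks apart.
rng = np.random.default_rng(2)
time_1 = rng.normal(loc=50, scale=10, size=60)
time_2 = time_1 + rng.normal(scale=4, size=60)   # stable scores plus noise

r, p = pearsonr(time_1, time_2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
print("acceptable stability (r > .70):", r > 0.70)
```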
  48. Nomological Validity
  49. [Diagram: constructs at the theoretical level (concepts, ideas) linked to one another]
  50. Nomological Validity
      • Examines whether the correlations among related constructs and their consequences make sense.
      • An example: the relationship between relationship marketing orientation and business performance.
      • There should be well-grounded theoretical reasons for the expected relationship.
  51. Sin, L. Y., Tse, A. C., Yau, O. H., Chow, R. P., Lee, J. S., & Lau, L. B. (2005). Relationship marketing orientation: Scale development and cross-cultural validation. Journal of Business Research, 58(2), 185-194.
  52. Criterion-related Validity
      • Concurrent validity
      • Predictive validity
  53. Criterion-related Validity
      • Concurrent validity: measures the relationship between the new measure and measures made with existing tests; the existing test is thus the criterion.
      • Concurrent validity is demonstrated when a test correlates well with a measure that has previously been validated (bivariate correlation).
      • E.g., scores on a written first-aid exam are highly correlated with scores assigned by raters during a hands-on measure in which examinees demonstrate those skills.*
      *Salkind, N. J. (Ed.). (2010). Encyclopedia of Research Design. Sage.
  54. Criterion-related Validity
      • Predictive validity: measures the extent to which a future level of a variable can be predicted from a current measurement.
      • This includes correlation with measurements made with different instruments (regression analysis); a sketch of both criterion-related checks follows.
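A rough sketch of both criterion-related checks: a bivariate correlation for concurrent validity and a simple regression for predictive validity; all variable names and data are illustrative assumptions:

```python
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(3)

# Concurrent validity (sketch): correlate the new scale with an
# already-validated criterion measured at the same time.
new_scale = rng.normal(loc=30, scale=5, size=100)
validated_measure = 0.8 * new_scale + rng.normal(scale=3, size=100)
r, p = pearsonr(new_scale, validated_measure)
print(f"concurrent validity: r = {r:.2f}")

# Predictive validity (sketch): regress a future outcome on the
# current scale score and inspect the slope and R-squared.
future_outcome = 1.5 * new_scale + rng.normal(scale=8, size=100)
result = linregress(new_scale, future_outcome)
print(f"predictive validity: slope = {result.slope:.2f}, R^2 = {result.rvalue**2:.2f}")
```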
  55. Known Issues in Scale Development
  56. Social Desirability Bias
      • Social desirability bias is a social science research term that describes the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others.
      • It can take the form of over-reporting "good" behavior or under-reporting "bad" or undesirable behavior.
  57. Common Method Bias
      • Common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent".
  58. Thank you
  59. Construct
      • A construct is the abstract idea, underlying theme, or subject matter that one wishes to measure.
  60. Further reading
      • DeVellis, R. F. (2011). Scale Development: Theory and Applications (Vol. 26). Sage Publications.
      • Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497.
      • Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967-988.
      • Hair, J. F., Tatham, R. L., Anderson, R. E., & Black, W. (2006). Multivariate Data Analysis (Vol. 6). Upper Saddle River, NJ: Pearson Prentice Hall.
