Information literacy as a measurable construct: A need for more freely available, validated, and wide-ranging instruments - Hollis

Presented at LILAC 2018

1. Information literacy as a measurable construct: A need for more freely available, validated, and wide-ranging instruments.
   Helena Hollis, Academic and Reader Services Coordinator, Regent’s University London Library
2. Aims in examining IL measures
   This presentation reviews existing Information Literacy (IL) measures, treating them as psychometric tests, and aims to address two questions:
   – Do existing IL tests sufficiently meet the needs of researchers?
   – What do the tests tell us about whether there truly is a single variable identifiable as IL (i.e. a measurable construct)?
3. IL as a measurable construct
   If IL is a singular construct, then it is a common underlying determinant that can manifest itself in many different observable instances.
   [Slide diagram: a central “IL” factor linked to three observable instances: finding information for school work; writing illuminating blog posts; being critical about media sources.]
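To make the idea concrete, here is a minimal Python sketch (not from the original slides; all numbers and variable names are invented) of what a “common underlying determinant” means statistically: one unobserved IL score generates the three observable instances, which therefore correlate with one another.

```python
# A minimal sketch (not from the original slides) of the latent-variable idea:
# one unobserved IL score drives several observable behaviours, so the
# observables end up correlated with one another. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical latent IL level for each person (never observed directly).
il = rng.normal(0.0, 1.0, n)

# Three observable manifestations, each = (loading * latent IL) + task noise.
finding_info   = 0.80 * il + rng.normal(0.0, 0.6, n)   # finding information for school work
blog_quality   = 0.70 * il + rng.normal(0.0, 0.7, n)   # writing illuminating blog posts
media_critique = 0.75 * il + rng.normal(0.0, 0.65, n)  # being critical about media sources

# If a single construct is at work, the observables share variance:
# every off-diagonal correlation should be clearly above zero.
print(np.corrcoef([finding_info, blog_quality, media_critique]).round(2))
```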
4. Construct Validity
   “…this problem of developing evidence to support an inferential leap from an observed consistency to a construct that accounts for that consistency is a generic concern of all science.” (Messick, 1974)
   – The literature provides ample theory to underpin this inferential leap. In fact, IL is considered so important it has been called a human right (Sturges & Gastinger, 2010)
   – Empirical evidence from rigorous quantitative investigation is lacking
   – Construct validity requires both theory and evidence
5. Exclusion of Measures
6. Validity
   – Face Validity: whether the test, on inspection, appears to measure what it is intended to measure
   – Criterion Validity:
     › how comparable is the test to others measuring the same construct?
     › how predictive is it of outcomes associated with that construct?
   (A simulated illustration of both criterion-validity checks follows.)
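The following sketch, using simulated data and invented variable names, illustrates the two criterion-validity checks listed above: convergent agreement with an established test, and prediction of an outcome the construct should influence.

```python
# A hedged illustration (simulated data, invented names) of the two
# criterion-validity checks: convergent agreement with an established test,
# and prediction of an outcome the construct should affect.
import numpy as np

rng = np.random.default_rng(0)
n = 200

true_il = rng.normal(size=n)                        # unobserved "true" IL
new_test = true_il + rng.normal(0.0, 0.5, n)        # scores on a new IL test
established = true_il + rng.normal(0.0, 0.5, n)     # scores on an existing validated test
outcome = 0.6 * true_il + rng.normal(0.0, 0.8, n)   # e.g. quality of source evaluation

convergent = np.corrcoef(new_test, established)[0, 1]
predictive = np.corrcoef(new_test, outcome)[0, 1]
print(f"convergent r = {convergent:.2f}, predictive r = {predictive:.2f}")
```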
7. Self-Report
   Self-report measures will be excluded, and only tests of IL considered:
   – Memory: participants don’t always correctly remember what they struggled with or what they did well at (Tourangeau, 2009)
   – Dunning-Kruger Effect: participants with very low ability often have over-inflated self-perceptions (Kruger & Dunning, 1999)
   – Imposter Syndrome: the effect can also run the other way, where high levels of knowledge in a subject area lead a person to under-estimate their ability
8. Cost
   Existing measures that require payment for use will be excluded:
   – As library budgets come under increasing strain, few will have access to funding for costly research projects
   – If we are to treat IL as a human right, we should aim to measure it in all populations, not merely where funding is greatest
9. Evaluation of Existing Measures
10. Context and Domain Dependency

| Measure | Source | Context Dependency | Domain Dependency |
|---|---|---|---|
| B-TILED | (Beile O'Neil, 2005), (Robertson, 2018), (Jesse, 2012) | HE | Education, Nursing, Seminary |
| IL Test for Chemistry Students | (Emmett & Emde, 2007) | HE | Chemistry |
| Information Evaluation Pre- and Post-Test | (Catalano, 2015) | HE | No |
| Information Literacy (Psychology) | (Leichner, Peter, Mayer, & Krampen, 2013) | HE | Psychology |
| Information Literacy Survey | (Ferguson, Neely, & Sullivan, 2006) | HE | No |
| Information Literacy Test for Higher Education | (Boh Podgornik et al., 2016) | HE | No |
| Information Search Tasks | (Leichner, Peter, Mayer, & Krampen, 2014) | HE | Psychology |
| Information Skills Survey | (Clark & Catts, 2007) | HE | Law, Social Sciences, Medicine |
| Locally Developed IL Test | (Mery, Newby, Peng, Bowler, & MacMillan, 2013) | HE | No |
| Project TRAILS | (Schloman & Gedeon, 2007) | High School (USA) | No |
| VOILA | (Ondrusek, Dent, Bonadie‐Joseph, & Williams, 2013) | HE | No |
11. Further specificities

| Measure | Classification scheme | Resources | Library policy | Referencing style | Country |
|---|---|---|---|---|---|
| B-TILED | No | Yes | No | No | Yes |
| IL Test for Chemistry Students | Yes | Yes | Yes | No | Yes |
| Information Evaluation Pre- and Post-Test | No | No | No | No | Yes |
| Information Literacy (Psychology) | No | Yes | Yes | No | No |
| Information Literacy Survey | No | No | No | No | Yes |
| Information Literacy Test for Higher Education | No | No | No | Yes | Yes |
| Information Search Tasks | No | No | No | No | No |
| Information Skills Survey | Full test not available | | | | |
| Locally Developed IL Test | No | Yes | Yes | Yes | No |
| Project TRAILS | Full test not available | | | | |
| VOILA | Yes | Yes | Yes | No | No |
12. Construct representation
    Does the test capture all important aspects of the construct?
    “Knowing when and why you need information, where to find it, and how to evaluate, use and communicate it in an ethical manner” (CILIP, 2018)
13. Construct representation
    IL aspects, grouped on the slide under knowledge and ability:

| Measure | When and why you need information | Where to find it | Information ethics | How to evaluate information | How to use and communicate it |
|---|---|---|---|---|---|
| B-TILED | Yes | Yes | Yes | Yes | No |
| IL Test for Chemistry Students | Yes | Yes | Yes | Yes | No |
| Information Evaluation Pre- and Post-Test | Yes | No | No | Yes | No |
| Information Literacy (Psychology) | Yes | Yes | No | Yes | No |
| Information Literacy Survey | Yes | Yes | Yes | Yes | Yes |
| Information Literacy Test for Higher Education | Yes | Yes | Yes | Yes | Yes |
| Information Search Tasks | Yes | Yes | No | Yes | No |
| Information Skills Survey | Full test not available | | | | |
| Locally Developed IL Test | Yes | Yes | Yes | Yes | No |
| Project TRAILS | Full test not available | | | | |
| VOILA | No | Yes | No | No | No |
14. Summary
    – HE dominates! All existing measures are for educational contexts
    – Measures are often designed to test the outcomes of specific IL programmes, not necessarily to test IL overall
    – Specificity is often built into IL measures, limiting their use beyond the institution they were designed in
    – Not all facets of IL are represented in every measure
15. IL as measured by current tests
    Current tests measure student skills, not necessarily general IL.
    [Slide diagram: a central “student skills” factor linked to three observable instances: using subject databases; finding sources for essays; referencing sources correctly.]
16. Recommendations
17. 1) IL measures for other contexts
    Context and domain dependency are not necessarily negatives; context- and domain-dependent measures could:
    – Allow for IL testing to support intervention design, and the evaluation of interventions, for specific populations’ needs
    – Allow for comparisons between different institutions
    – Demonstrate that IL exists as a measurable construct outside of educational contexts
18. 2) A General IL Measure
    The main recommendation of this paper is that a general measure of IL is needed: one that is neither context nor domain dependent, has minimum specificity, and tests all facets of IL. Such a measure would:
    – Allow for comparisons across different populations
    – Allow for broader and more far-reaching research into IL and its implications
    – Demonstrate that IL is a measurable construct in the population overall
19. Creating a General IL Measure
    › Balance of all IL aspects, covering knowledge and ability
    › Avoiding specificities (e.g. classification scheme)
    › Face validity tested by library and information professionals from as many sectors as possible
    › Comparison with existing measures
    › Comparison to other factors IL should be predictive of
    › Meet rigorous standards for psychometric tests (see the sketch after this list)
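As one example of the final point, here is a minimal sketch, assuming scored item-level responses, of a routine psychometric standard such a measure would need to meet: internal consistency, estimated with Cronbach’s alpha. The data and the 0.7 threshold convention are illustrative, not from the talk.

```python
# A minimal sketch, assuming scored item-level responses, of one routine
# psychometric standard a general IL measure would need to meet: internal
# consistency, estimated here with Cronbach's alpha. Data are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored answers (e.g. 0/1)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 300 test-takers, 10 dichotomous items, all driven by
# one underlying ability (purely illustrative, not real IL test data).
rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
items = (ability + rng.normal(0.0, 1.0, (300, 10)) > 0).astype(float)

print(f"alpha = {cronbach_alpha(items):.2f}")  # >= 0.7 is a common rule of thumb
```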
20. Using a General IL Measure
    A wealth of research questions could be explored, for example:
    – Are there regional differences in IL? Do these correlate with local library provision?
    – Do prison inmates have lower IL levels than the general population? Are there differences between prisons with more or less IL training provision?
    – Do businesses with more information literate employees perform better?
    – Do university students and vocational apprentices have different IL levels?
    …and many, many more. (A sketch of how the last comparison could be run follows.)
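Purely as a hypothetical sketch, the snippet below shows how the last question could be tested once a general IL measure exists. Group names, score scales, and effect sizes are all invented for the example.

```python
# A hypothetical sketch (simulated scores, invented group means) of answering
# one research question once a general IL measure exists: comparing mean IL
# scores of university students and vocational apprentices with Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
students = rng.normal(100, 15, 150)    # simulated IL scores, group 1
apprentices = rng.normal(97, 15, 150)  # simulated IL scores, group 2

t, p = stats.ttest_ind(students, apprentices, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```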
21. IL as a measurable construct
    As things stand, we can argue from the empirical evidence that there is such a thing as “IL in education”. However, IL beyond this remains untested through validated instruments.
    – We have ample theory backing the notion of IL
    – We need to support this with empirical evidence
22. Thank you!
23. References…
    Beile O'Neil, P. (2005). Development and validation of the Beile test of information literacy for education (B-TILED). University of Central Florida.
    Boh Podgornik, B., Dolničar, D., Šorgo, A., & Bartol, T. (2016). Development, testing, and validation of an information literacy test (ILT) for higher education. Journal of the Association for Information Science and Technology, 67(10), 2420-2436. doi: 10.1002/asi.23586
    Cameron, L., Wise, S. L., & Lottridge, S. M. (2007). The development and validation of the information literacy test. College & Research Libraries, 68(3), 229-237.
    Catalano, A. (2015). The effect of a situated learning environment in a distance education information literacy course. The Journal of Academic Librarianship, 41(5), 653-659. doi: 10.1016/j.acalib.2015.06.008
    Catalano, A. (2016). Streamlining LIS research: A compendium of tried and true tests, measurements, and other instruments. Santa Barbara, CA: Libraries Unlimited.
    CILIP. (2018). What is information literacy? Retrieved from https://infolit.org.uk/
    Clark, C., & Catts, R. (2007). Information Skills Survey: Its application to a medical course. Evidence Based Library and Information Practice, 2(3), 3-26.
    Ferguson, J. E., Neely, T. Y., & Sullivan, K. (2006). A baseline information literacy assessment of biology students. Reference & User Services Quarterly, 46(2), 61-71.
    Jesse, M. (2012). Subject specific information literacy curriculum and assessment. The Christian Librarian, 55(1), 2-16.
    Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
    Leichner, N., Peter, J., Mayer, A.-K., & Krampen, G. (2013). Assessing information literacy among German psychology students. Reference Services Review, 41(4), 660-674.
24. …References
    Leichner, N., Peter, J., Mayer, A.-K., & Krampen, G. (2014). Assessing information literacy programmes using information search tasks. Journal of Information Literacy, 8(1), 3.
    Mery, Y., Newby, J., Peng, K., Bowler, M., & MacMillan, M. (2013). Assessing the reliability and validity of locally developed information literacy test items. Reference Services Review, 39(1), 98-122. doi: 10.1108/00907321111108141
    Messick, S. (1974). The standard problem: Meaning and values in measurement and evaluation. ETS Research Report Series, 1974(2), i-37. doi: 10.1002/j.2333-8504.1974.tb01034.x
    Ondrusek, A., Dent, V. F., Bonadie‐Joseph, I., & Williams, C. (2013). A longitudinal study of the development and evaluation of an information literacy test. Reference Services Review, 33(4), 388-417.
    Robertson, D. S., & Felicilda-Reynaldo, R. F. D. (2018). Evaluation of graduate nursing students’ information literacy self-efficacy and applied skills. Journal of Nursing Education, 54(3), S26-S30. doi: 10.3928/01484834-20150218-03
    Salem, J. A., & Radcliff, C. J. (2006). Using the SAILS test to assess information literacy. Paper presented at Building Effective, Sustainable, Practical Assessment: Proceedings of the Second Library Assessment Conference, Charlottesville, VA.
    Schloman, B. F., & Gedeon, J. A. (2007). Creating TRAILS. Knowledge Quest, 35(5), 44-47.
    Sturges, P., & Gastinger, A. (2010). Information literacy as a human right. Libri, 60(3), 195-202.
    Tourangeau, R. (2009). Remembering what happened: Memory errors and survey reports. In A. A. Stone, C. A. Bachrach, J. B. Jobe, H. S. Kurtzman, & V. S. Cain (Eds.), The science of self-report: Implications for research and practice (pp. 29-48). Mahwah, NJ: Lawrence Erlbaum Associates.
