SAIL away: comparing the cohort test to build your own test version of Project SAILS in a Canadian context - Eva & Graham

Presented at LILAC 2017


1. SAIL AWAY: COMPARING THE COHORT TEST TO THE BYOT VERSIONS OF PROJECT SAILS IN A CANADIAN CONTEXT
   Rumi Graham, Nicole Eva & Sandra Cowan
   University of Lethbridge, Alberta, Canada
2. CONTEXT
3. WHAT IS SAILS?
   Skills tested:
   • Developing a research strategy
   • Selecting finding tools
   • Searching
   • Using finding tool features
   • Retrieving sources
   • Evaluating sources
   • Documenting sources
   • Understanding economic, legal, and social issues
   Test versions:
   • Individual scores test (U.S.)
   • Cohort test (U.S.)
   • International cohort test (our only option in 2015)
   Based on the older ACRL "competency standards"
4. WHY SAILS?
   • Luck of the draw – literally!
   • Teaching Development Fund
   • After the fact, few alternatives for Canadian institutions
   [Image: Yosemite~commonswiki, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=520257]
5. THE RESEARCH: 2015
   Research purpose:
   • reliable, objective data on information literacy (IL) levels of first-year undergrads before and after librarian-created IL instruction
   Key questions:
   • What levels of IL do incoming first-year students possess?
   • Do students' IL abilities improve after IL instruction?
   • Do students' IL attainment levels correlate with
     • their year of study?
     • the number or format of IL instruction sessions?
   [Image: https://pixabay.com/en/question-speech-bubbles-speech-1828268/]
6. ACCESSING THE TEST
   • Web-based; custom link in the CMS (Moodle); custom web consent page
   • Unique SAILS ID# for each consenting student
   • No identifying data gathered or stored on the SAILS server – total student anonymity
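
A minimal sketch of the kind of ID issuance this setup implies, assuming random hex tokens; the deck does not say how SAILS actually formats or generates its IDs:

```python
# Sketch only: mint unique, anonymous test IDs for consenting students.
# The 8-character hex format is an assumption; the deck says only that each
# consenting student received a unique SAILS ID# and that no identifying
# data was gathered or stored on the SAILS server.
import secrets

def issue_ids(n_students: int) -> set[str]:
    """Return n_students unique random IDs; any mapping from ID to student
    would be kept locally (with the consent records), never on the server."""
    ids: set[str] = set()
    while len(ids) < n_students:
        ids.add(secrets.token_hex(4).upper())  # e.g. '9F2C41AB'
    return ids

print(sorted(issue_ids(5)))
```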
7. THE SAMPLE
   • Academic Writing (10 sections) = 250 students
   • Liberal Education 1000 = 87 students
   THE INTERVENTIONS
   • In-class instruction
   • Online modules
   THE INCENTIVES
   • Writing: draw for 1 of 2 $100 bookstore gift cards (must write both tests)
   • Liberal Education: draw + 3% bonus (1% pre-test + 2% post-test)
8. THE RESULTS
   • 71.3% of respondents came from LBED 1000
   • 68% first-year students
   • Approx. 80% wrote both pre- and post-tests
   THE COHORT
   • 14 institutions; 6,370 students
   • Doctoral cohort
9. U OF L RESULTS AGAINST BENCHMARK
   (skill areas listed in order of how well students performed)

                   Pre-test                         Post-test
   Above           Retrieving sources               Developing a research strategy
                   Evaluating sources               Searching
                   Developing a research strategy
                   Searching
   At benchmark    Selecting finding tools          Documenting sources
                   Documenting sources              Retrieving sources
                   Using finding tool features      Using finding tool features
                                                    Evaluating sources
   Below           (none)                           Selecting finding tools
10. [Chart: U of L results against benchmark (doctorate institutions) – pre-test, benchmark, and post-test scores for each of the seven skill areas, on a scale of 420 to 580]
11. THE LESSONS LEARNED
    Limitations of the cohort test:
    • We were the only international participant
    • Our own benchmark; comparing ourselves to ourselves
    • Swamped by large institution results (Ashford U 43%)
    • Cannot track individual results from pre-test to post-test
    • Cannot choose which questions students get
    Other lessons:
    • Limitations of separating sections of Academic Writing in the cohort report
    • Importance of incentives!
    • Time of semester – competing obligations
12. 2016: NEW SAILS TESTING OPTION
    SAILS Build Your Own Test (BYOT) launched January 2016:
    • Customizable version of the "individual scores test"
    • Test scores tracked for each test-taker
    • Hand-picked test questions
    • No minimum number of test questions; maximum of 50
    • Available to international institutions
13. 2016: THE RESEARCH
    Ran the study again using BYOT and a modified set of courses.
    THE SAMPLE
    • Library Science (LBSC) 0520 ~ 30 students
      First Nations Transition Program course taught by a librarian
    • Library Science (LBSC) 2000 ~ 30 students
      Arts & Science full-credit course taught by a librarian
    • Liberal Education (LBED) 1000 ~ 95 students
      4 labs taught by a librarian; embedded in a full-credit course
14. 2016: INCENTIVES & INTERVENTIONS
    THE INCENTIVES
    • LBSC 0520 and LBSC 2000:
      In-class time to write the pre-test and post-test
      Bonus marks: 2% pre-test; 3% post-test
      Draw for a $100 gift card (one per course)
    • LBED 1000:
      Bonus marks: 1% pre-test; 2% post-test
      Draw for a $100 gift card
    THE INTERVENTIONS
    • In-class instruction; online modules (LBED 1000 only)
    [Image: Alan O'Rourke, http://bit.ly/2mjCzBQ, CC BY 2.0]
15. 2016: BUILDING THE BYOTS
    • IL instructors identified questions on topics not covered in the courses (eliminating 50 questions)
    • From the remaining 112 questions, researchers not involved in grading any coursework selected 2 matching, non-overlapping sets of questions
    • Both sets had the same number of questions in each skill area, reflecting the same range of difficulty (see the sketch after the question mix below)
    • Pre-test and post-test each contained 26 SAILS questions (42% shorter than the 45-question cohort test)
    [Image: http://bit.ly/2mx4iOu, CC0]
16. 2016: TEST QUESTION MIX

    Skill area                       # Questions  Easy  Moderate  Difficult
    Developing Research Strategy          4         2      1         1
    Selecting Finding Tools               3         1      1         1
    Searching                             4         1      2         1
    Using Finding Tool Features           3         1      1         1
    Retrieving Sources                    3         1      1         1
    Evaluating Sources                    4         1      2         1
    Documenting Sources                   3         1      1         1
    Economic, Legal, Social Issues        2         1      1         –
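
The two matched test forms described on the previous slide amount to a stratified split of the 112-question pool, with the table above fixing the per-stratum counts. A sketch under those assumptions; the question IDs and pool layout are hypothetical, and SAILS's own BYOT builder works through its web interface, not code:

```python
# Sketch: split a question pool into two matched, non-overlapping 26-question
# sets (pre-test and post-test) stratified by skill area and difficulty,
# mirroring the mix in the table above. The pool format is hypothetical.
import random

# (skill area, difficulty) -> questions needed per test; abridged here,
# the full mix follows the 8-row table above and sums to 26 per test.
MIX = {
    ("Developing Research Strategy", "easy"): 2,
    ("Developing Research Strategy", "moderate"): 1,
    ("Developing Research Strategy", "difficult"): 1,
    ("Searching", "easy"): 1,
    ("Searching", "moderate"): 2,
    ("Searching", "difficult"): 1,
    # ... remaining strata omitted for brevity
}

def build_matched_tests(pool, mix, seed=2016):
    """pool: iterable of (question_id, skill, difficulty) tuples.
    Returns two disjoint question lists with identical per-stratum counts."""
    rng = random.Random(seed)
    pre, post = [], []
    for (skill, diff), n in mix.items():
        candidates = [q for q in pool if q[1:] == (skill, diff)]
        picked = rng.sample(candidates, 2 * n)  # n questions for each test
        pre += picked[:n]
        post += picked[n:]
    return pre, post

# Tiny demo with a fabricated pool covering only the "Searching" strata:
pool = [(f"Q{i}", "Searching", d) for i, d in enumerate(
    ["easy"] * 2 + ["moderate"] * 4 + ["difficult"] * 2)]
pre, post = build_matched_tests(pool, {("Searching", "easy"): 1,
                                       ("Searching", "moderate"): 2,
                                       ("Searching", "difficult"): 1})
print(pre, post, sep="\n")
```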
17. 2016: PARTICIPATION RATE

                 # Enrolled  % Enrolled  Wrote Pre-test or Post-test  Participated in Study
    LBED 1000        95        60.5%                80                       84.2%
    LBSC 0520        32        20.4%                31                       96.9%
    LBSC 2000        30        19.1%                30                      100%
    All Students    157       100%                 141                       89.8%
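
The percentages in this table are straightforward ratios of the counts; a quick check of the slide's own numbers:

```python
# Recomputing the participation table from its raw counts:
# % Enrolled = course enrolment / total enrolment (157);
# Participated = students who wrote a test / course enrolment.
courses = {"LBED 1000": (95, 80), "LBSC 0520": (32, 31), "LBSC 2000": (30, 30)}
total_enrolled = sum(e for e, _ in courses.values())   # 157
total_wrote = sum(w for _, w in courses.values())      # 141

for name, (enrolled, wrote) in courses.items():
    print(f"{name}: {enrolled / total_enrolled:.1%} of enrolment, "
          f"{wrote / enrolled:.1%} participated")
print(f"All Students: {total_wrote / total_enrolled:.1%} participated")
```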
18. 2016: PRE-TEST QUESTION ON PRIOR IL INSTRUCTION (n=124)
19. 2016: MEAN SCORES IMPROVED IN ALL COURSES?

                        LBED 1000  LBSC 0520  LBSC 2000
    Pre-test lowest       15.4%      23.1%      23.1%
    Pre-test highest      80.8%      61.5%      80.8%
    Pre-test mean         56.4%      41.7%      56.9%
    Post-test lowest      15.4%      15.4%      23.1%
    Post-test highest     84.6%      73.1%      80.8%
    Post-test mean        59.9%      48.2%      59.8%

    * Pre-test: n=124; post-test: n=126
20. 2016: MEAN SCORES IMPROVED IN ALL YEARS?

                        1st Year  2nd Year  3rd Year+
    Pre-test lowest       23.1%     15.4%     30.8%
    Pre-test highest      80.8%     80.8%     80.8%
    Pre-test mean         51.8%     56.4%     59.3%
    Post-test lowest      15.4%     26.9%     23.1%
    Post-test …

    * Pre-test: n=124; post-test: n=126
21. 2016: INDIVIDUAL STUDENTS IMPROVED?
    107 students wrote both the pre- and post-tests (68%):
    • Mean pre-test score = 53.95%
    • Mean post-test score = 58.16%
    • Mean difference = +4.21%
    • Margin of error = ±2.9%
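
For readers who want to reproduce this kind of summary, here is a minimal sketch of a paired pre/post analysis with a 95% margin of error on the mean difference. The score arrays are simulated stand-ins, since the per-student data behind the slide is not published, and "margin of error" is read here as a 95% confidence half-width:

```python
# Sketch: paired pre/post analysis of matched scores. The arrays below are
# simulated stand-ins (the deck reports only the summary statistics).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
pre = rng.normal(54, 15, size=107).clip(0, 100)        # 107 matched students
post = (pre + rng.normal(4.2, 14, size=107)).clip(0, 100)

diff = post - pre
# 95% margin of error on the mean paired difference (t distribution)
moe = stats.t.ppf(0.975, df=diff.size - 1) * stats.sem(diff)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean difference = {diff.mean():+.2f}% ± {moe:.2f}% "
      f"(paired t: p = {p_value:.3f})")
```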
22. 2016: STUDENTS IMPROVED IN ALL COURSES? (n=107)

                                    LBED 1000  LBSC 0520  LBSC 2000
    Difference between pre-test
    and post-test mean scores          5.8%       4.9%      0.004%
23. 2016: STUDENTS IMPROVED IN ALL YEARS? (n=107)

                                    1st Year  2nd Year  3rd Year+
    Difference between pre-test
    and post-test mean scores          6.30%     4.26%    -5.28%
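
The two per-group tables above are the same computation sliced two ways; a pandas sketch with hypothetical per-student records, since the deck does not publish them:

```python
# Sketch: slicing matched pre/post gains by course and by year of study,
# as in the two tables above. Records are fabricated examples; the real
# per-student data stays with the researchers.
import pandas as pd

df = pd.DataFrame({
    "course": ["LBED 1000", "LBED 1000", "LBSC 0520", "LBSC 2000", "LBSC 2000"],
    "year":   ["1st", "2nd", "1st", "3rd+", "2nd"],
    "pre":    [50.0, 57.7, 42.3, 65.4, 53.8],
    "post":   [57.7, 61.5, 46.2, 57.7, 57.7],
})
df["gain"] = df["post"] - df["pre"]

print(df.groupby("course")["gain"].mean())  # per-course mean gain
print(df.groupby("year")["gain"].mean())    # per-year mean gain
```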
24. 2016: POST-TEST SELF-ASSESSMENT (n=126)
25. 2016: LESSONS LEARNED
    BYOT
    • Mean time to complete the post-test ~ 12 to 15 min. (LBSC courses), suggesting the 26-question BYOT is not overly demanding
    • Greater likelihood of statistically significant results with larger classes
    • Mean scores were all well below the Proficiency level (70% or better), but 31 students reached Proficiency and 3 reached the Mastery level (85% or better) on the post-test
    INCENTIVES
    • Bonus marks were a large incentive, but in-class time to write the tests was even more effective
    • Were upper-level students more pragmatic in their participation efforts?
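
A small sketch of the score banding this slide refers to, assuming only the two cut-offs it names (70% for Proficiency, 85% for Mastery); the band names follow the deck's usage, and the example scores are made up:

```python
# Sketch: band post-test scores using the thresholds named on the slide
# (Proficiency: 70% or better; Mastery: 85% or better).
def band(score: float) -> str:
    if score >= 85:
        return "Mastery"
    if score >= 70:
        return "Proficiency"
    return "Below Proficiency"

for s in (58.16, 73.1, 88.5):
    print(f"{s:5.2f}% -> {band(s)}")
```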
26. CONCLUSIONS
    BYOT ADVANTAGES
    • You determine which questions are included and the overall test length
    • Permits a singular focus on your own students' test results
    • Permits tracking individual students' scores
    • Affords a wide range of statistical analyses
    COHORT TEST ADVANTAGES
    • Easier to prepare for (no need to select questions)
    • Useful for institutions committed to large-scale, longitudinal testing
    • No data analysis! (just interpretation)
    • Slightly less expensive than the individual scores/BYOT options
27. SOURCES
    Project SAILS website: https://www.projectsails.org/
    • International Cohort Assessment: https://www.projectsails.org/International
    • Build Your Own Test: https://www.projectsails.org/BYOT
    Cowan, S., Graham, R., & Eva, N. (2016). How information literate are they? A SAILS study of (mostly) first-year students at the U of L. Light on Teaching, 2016-17, 17-20. Retrieved from http://bit.ly/2dlOTi6
    Questions?
