1. SAIL AWAY: COMPARING THE COHORT TEST TO THE BYOT VERSIONS OF PROJECT SAILS IN A CANADIAN CONTEXT
Rumi Graham, Nicole Eva & Sandra Cowan
University of Lethbridge
Alberta, Canada
3. WHAT IS SAILS?
Skills tested:
• Developing a research strategy
• Selecting finding tools
• Searching
• Using finding tool features
• Retrieving sources
• Evaluating sources
• Documenting sources
• Understanding economic, legal, and social issues
Test versions:
• Individual scores test (U.S.)
• Cohort test (U.S.)
• International cohort test (our only option in 2015)
Based on the older ACRL ‘competency standards’
4. WHY SAILS?
Luck of the draw – literally!
Teaching Development Fund
After the fact, few alternatives for Canadian institutions
Yosemite~commonswiki, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=520257
5. THE RESEARCH: 2015
Research purpose: to gather reliable, objective data on the information literacy (IL) levels of first-year undergraduates before and after librarian-created IL instruction
Key questions:
• What levels of IL do incoming first-year students possess?
• Do students’ IL abilities improve after IL instruction?
• Do students’ IL attainment levels correlate with their year of study, or with the number or format of IL instruction sessions? (See the sketch below.)
https://pixabay.com/en/question-speech-bubbles-speech-1828268/
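The correlation question can be answered with standard rank statistics once per-student scores are available (which, as later slides note, requires the BYOT rather than the cohort test). A minimal sketch, assuming a hypothetical list of (year of study, post-test score) pairs; the variable names and sample values are illustrative, not data from the study:

```python
# Spearman rank correlation between year of study (ordinal) and
# IL post-test score (percent). All values below are hypothetical.
from scipy.stats import spearmanr

# (year_of_study, post_test_score_percent) -- illustrative only
records = [(1, 53.8), (1, 61.5), (2, 57.7), (2, 65.4), (3, 50.0), (4, 69.2)]

years = [r[0] for r in records]
scores = [r[1] for r in records]

rho, p_value = spearmanr(years, scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```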
6. ACCESSING THE TEST
Web-based; custom link for the CMS (Moodle); custom web consent page
Unique SAILS ID# for each consenting student
No identifying data gathered or stored on the SAILS server – total student anonymity (see the sketch below)
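SAILS issues these IDs itself; the following is only a sketch of the general technique of handing out random, non-identifying tokens to consenting students. It does not show any actual SAILS API:

```python
# Generate unique, non-identifying test IDs for consenting students.
# No names or student numbers are stored with the tokens, so results
# stay anonymous. Illustrative only; SAILS assigns its own IDs.
import secrets

def new_test_id(issued: set) -> str:
    """Return a fresh 8-character hex token not issued before."""
    while True:
        token = secrets.token_hex(4)  # e.g. 'a3f09c21'
        if token not in issued:
            issued.add(token)
            return token

issued_ids: set = set()
for _ in range(3):  # one per consenting student
    print(new_test_id(issued_ids))
```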
7. THE SAMPLE
Academic Writing (10 sections) = 250 students
Liberal Education 1000 = 87 students
THE INTERVENTIONS
In-class instruction
Online modules
THE INCENTIVES
Writing: draw for 1 of 2 $100 bookstore gift cards (must complete both tests)
Liberal Education: draw + 3% bonus (1% pre-test + 2% post-test)
8. THE RESULTS
71.3% of respondents came from LBED 1000
68% were first-year students
Approx. 80% completed both the pre-test and the post-test
THE COHORT
14 institutions; 6,370 students
Benchmarked against the doctoral cohort
9. U OF L RESULTS AGAINST BENCHMARK
(Skill areas listed in order of how well students performed)

                  Pre-test                          Post-test
Above benchmark   Retrieving sources                Developing a research strategy
                  Evaluating sources                Searching
                  Developing a research strategy
                  Searching
At benchmark      Selecting finding tools           Documenting sources
                  Documenting sources               Retrieving sources
                  Using finding tool features       Using finding tool features
                                                    Evaluating sources
Below benchmark   (none)                            Selecting finding tools
10. [Chart: U of L results against benchmark (doctorate institutions) – pre-test, benchmark, and post-test mean scores (scale 420–580) for each skill area: developing a research strategy, selecting finding tools, searching, using finding tool features, retrieving sources, evaluating sources, documenting sources.]
11. THE LESSONS LEARNED
Limitations of the cohort test:
• We were the only international participant
• Our own benchmark; comparing ourselves to ourselves
• Swamped by large-institution results (Ashford U alone contributed 43%)
• Cannot track individual results from pre-test to post-test
• Cannot choose which questions students get
• Limitations of separating sections of Academic Writing in the cohort report
Importance of incentives!
Time of semester – competing obligations
12. 2016: NEW SAILS TESTING OPTION
SAILS Build Your Own Test (BYOT), launched January 2016:
• Customizable version of the “individual scores test”
• Test scores tracked for each test-taker
• Hand-picked test questions
• No minimum number of test questions; maximum of 50
• Available to international institutions
13. 2016: THE RESEARCH
Ran the study again using the BYOT and a modified set of courses.
THE SAMPLE
• Library Science (LBSC) 0520 ~ 30 students
  First Nations Transition Program course taught by a librarian
• Library Science (LBSC) 2000 ~ 30 students
  Arts & Science full-credit course taught by a librarian
• Liberal Education (LBED) 1000 ~ 95 students
  4 labs taught by a librarian; embedded in a full-credit course
14. 2016: INCENTIVES & INTERVENTIONS
THE INCENTIVES
LBSC 0520 and LBSC 2000:
• In-class time to write the pre-test and post-test
• Bonus marks: 2% pre-test; 3% post-test
• Draw for a $100 gift card (one per course)
LBED 1000:
• Bonus marks: 1% pre-test; 2% post-test
• Draw for a $100 gift card
THE INTERVENTIONS
In-class instruction; online modules (LBED 1000 only)
Alan O’Rourke, http://bit.ly/2mjCzBQ, CC BY 2.0
15. 2016: BUILDING THE BYOTS
• IL instructors identified questions on topics not covered in the courses (eliminating 50 questions)
• From the remaining 112 questions, researchers not involved in grading any coursework selected 2 matching, non-overlapping sets of questions (sketched below)
• Both sets had the same number of questions in each skill area, reflecting the same range of difficulty
• Pre-test and post-test each contained 26 SAILS questions (42% shorter than the 45-question cohort test)
http://bit.ly/2mx4iOu, CC0
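A minimal sketch of the matching step, assuming a hypothetical question pool in which each question carries a skill area and a difficulty band; the pool contents and grouping are illustrative, not the actual SAILS item bank:

```python
# Split a question pool into two non-overlapping sets matched on
# skill area and difficulty band, as described above. All data here
# is hypothetical; the real SAILS item bank is not public.
import random
from collections import defaultdict

# (question_id, skill_area, difficulty_band) -- illustrative pool
pool = [
    (i, skill, band)
    for i, (skill, band) in enumerate(
        [(s, b) for s in ["searching", "evaluating", "documenting"]
                for b in ["easy", "medium", "hard"] for _ in range(4)]
    )
]

# Group questions by (skill area, difficulty band) stratum.
strata = defaultdict(list)
for q in pool:
    strata[(q[1], q[2])].append(q)

pre_test, post_test = [], []
rng = random.Random(42)  # fixed seed so the split is reproducible
for stratum in strata.values():
    rng.shuffle(stratum)
    half = len(stratum) // 2
    # Give each test the same number of questions from every stratum,
    # so both sets match on skill coverage and difficulty range.
    pre_test.extend(stratum[:half])
    post_test.extend(stratum[half:2 * half])

assert not set(q[0] for q in pre_test) & set(q[0] for q in post_test)
print(len(pre_test), len(post_test))  # matched, non-overlapping sets
```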
19. 2016: MEAN SCORES IMPROVED IN ALL COURSES?

                    LBED 1000   LBSC 0520   LBSC 2000
Pre-test lowest       15.4%       23.1%       23.1%
Pre-test highest      80.8%       61.5%       80.8%
Pre-test mean         56.4%       41.7%       56.9%
Post-test lowest      15.4%       15.4%       23.1%
Post-test highest     84.6%       73.1%       80.8%
Post-test mean        59.9%       48.2%       59.8%

*Pre-test: n=124; post-test: n=126
20. 2016: MEAN SCORES IMPROVED IN ALL YEARS?

                    1st Year   2nd Year   3rd Year+
Pre-test lowest       23.1%      15.4%      30.8%
Pre-test highest      80.8%      80.8%      80.8%
Pre-test mean         51.8%      56.4%      59.3%
Post-test lowest      15.4%      26.9%      23.1%

*Pre-test: n=124; post-test: n=126
21. 2016: INDIVIDUAL STUDENTS IMPROVED?
107 students (68%) wrote both the pre-test and the post-test
Mean pre-test score = 53.95%
Mean post-test score = 58.16%
Mean difference = +4.21%
Margin of error = ±2.9%
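For readers reproducing this kind of analysis: the mean difference and margin of error come from a standard paired comparison. A minimal sketch, assuming a hypothetical list of per-student (post minus pre) score differences; the numbers are illustrative, not the study data:

```python
# 95% confidence margin of error for a paired mean difference,
# as reported above. Data below is illustrative, not the study's.
import math
from statistics import mean, stdev
from scipy import stats

# Per-student (post-test - pre-test) differences, in percentage points.
diffs = [4.2, -3.8, 7.7, 0.0, 11.5, 3.8, -7.7, 15.4, 3.8, 7.7]

n = len(diffs)
md = mean(diffs)
se = stdev(diffs) / math.sqrt(n)       # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
margin = t_crit * se

print(f"mean difference = {md:+.2f} pp, margin of error = ±{margin:.2f} pp")
```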
22. 2016: STUDENTS IMPROVED IN ALL COURSES? (n=107)

                           LBED 1000   LBSC 0520   LBSC 2000
Difference between
pre- and post-test means      5.8%        4.9%       0.004%
23. 2016: STUDENTS IMPROVED IN ALL YEARS? (n=107)

                           1st Year   2nd Year   3rd Year+
Difference between
pre- and post-test means     6.30%      4.26%      -5.28%
25. 2016: LESSONS LEARNED
BYOT
• Mean time to complete the post-test was ~12 to 15 min. (LBSC courses), suggesting the 26-question BYOT is not overly demanding
• Greater likelihood of statistically significant results with larger classes (see the sketch after this list)
• Mean scores were all well below the Proficiency level (70% or better), but 31 students reached Proficiency and 3 reached the Mastery level (85% or better) on the post-test
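The class-size point is the usual statistical-power argument: the standard error of a mean difference shrinks with the square root of the number of paired test-takers. A minimal sketch using statsmodels to estimate the sample size needed to detect a small pre/post effect; the effect size and power targets are illustrative assumptions, not figures from the study:

```python
# How many paired test-takers are needed to detect a given effect?
# Effect size and targets below are illustrative assumptions.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t-test power analysis
n_required = analysis.solve_power(
    effect_size=0.3,  # assumed standardized mean difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"~{n_required:.0f} paired test-takers needed")  # roughly 90
```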
INCENTIVES
• Bonus marks were a large incentive, but in-class time to write the tests was even more effective
• Were upper-level students more pragmatic in their participation efforts?
26. CONCLUSIONS
BYOT ADVANTAGES
• You determine which questions are included and the overall test length
• Permits a singular focus on your own students’ test results
• Permits tracking individual students’ scores
• Affords a wide range of statistical analyses
COHORT TEST ADVANTAGES
• Easier to prepare for (no need to select questions)
• Useful for institutions committed to large-scale, longitudinal testing
• No data analysis! (just interpretation)
• Slightly less expensive than the individual scores test/BYOT
27. SOURCES
Project SAILS website: https://www.projectsails.org/
• International Cohort Assessment: https://www.projectsails.org/International
• Build Your Own Test: https://www.projectsails.org/BYOT
Cowan, S., Graham, R., & Eva, N. (2016). How information literate are they? A SAILS study of (mostly) first-year students at the U of L. Light on Teaching, 2016-17, 17-20. Retrieved from http://bit.ly/2dlOTi6
Questions?
Editor's Notes
The University of Lethbridge is a mid-sized research university (about 8,000 students) with a growing graduate program and a focus on Liberal Education in southern Alberta. Founded in 1967 – our 50th anniversary this year.
The international cohort test officially became available for use in June 2014.
Students were mainly first-year students enrolled in first-year courses.
Writing students watched online modules and had one in-class session with a librarian; LBED students watched the modules and had four labs with a librarian (the rest of the course is taught by Arts and Science faculty).
The incentive for the students, apart from knowing that they were contributing to research, was a chance to win a draw for one of two $100 gift certificates from the U of L bookstore. The Liberal Education students were also given a 3% bonus for completing both tests. This worked especially well, as we saw a very good participation rate among the students in this class – 61 out of 87 students completed the pre-test, and 61 out of 84 completed the post-test. The draw alone did not seem to be sufficient incentive, as out of 10 participating sections of Writing 1000 (potentially 250 students), only 26 students completed the pre-test, and 22 the post-test.