van der Sluis, H. (2009) 'From Pen and Paper to Computer Based Assessment'

van der Sluis, H. (2009) 'From Pen and Paper to Computer Based Assessment'.
Conference presentation at the Annual Learning and Teaching Conference, January 2009, Kingston University.

  • Three generations of large-scale educational assessment (Bennett, 1998):

    First-Generation Computer-Based Tests
    1. Primarily serve institutional needs
    2. Measure traditional skills and use test designs and item formats closely resembling paper-based tests, except that tests are given adaptively
    3. Administered in dedicated test centers as a "one-time" measurement
    4. Take limited advantage of technology

    Next-Generation Electronic Tests
    1. Primarily serve institutional needs
    2. Use new item formats (including multimedia and constructed response), automatic item generation, automatic scoring, and electronic networks to make performance assessment an integral program component; measure some new constructs
    3. Administered in dedicated test centers as a "one-time" measurement
    4. Allow customers to interact with testing companies entirely electronically

    Generation "R" Tests
    1. Serve both institutional and individual purposes
    2. Integrated with instruction via electronic tools so that performance is sampled repeatedly over time; designed according to cognitive principles
    3. Use complex simulations, including virtual reality, that model real environments and allow more natural interaction with computers
    4. Administered at a distance
    5. Assess new skills
  • Literature review on objective tests:

    Up to 1989
    - Cost and hardware issues. PPA ≈ CBA -> different outcomes: 3 studies showed superiority of the CBA test, 11 studies showed no difference, 9 studies showed superiority of the PPA test. (Bunderson, C.V., Inouye, D.K., & Olsen, J.B., 1989)

    1990-1999
    - Hardware problems: breakdowns, slowness, screen glare. (Bugbee Jr., A.C., & Bernt, F.M., 1990)
    - Difference in completion time: no difference found. A difference had been expected: computer-literate and younger students, and male students, were considered more likely to do better on a computer-administered test. (Bugbee Jr., A.C., & Bernt, F.M., 1990)
    - CBT improved performance through repetition, immediate timed feedback and self-regulated learning. (Sly, L., 1999)
    - Students prefer computerised exams: they can take them when they are ready and receive grades immediately. (Bugbee Jr., A.C., & Bernt, F.M., 1990)
    - Worries about differences in gender, computer anxiety, background and age. (Sly, L., 1999; Bugbee Jr., A.C., & Bernt, F.M., 1990)
    - In general, formative testing is useful for student performance: closing the gap between actual and desired levels of performance, instructions on how to improve, time left to act on feedback, timely feedback. (Black, P., & Wiliam, D., 1998)

    2000-2004
    - Limitations of objective tests: presentation skills, collaborative learning. (Buchanan, T., 2000)
    - Students benefit from learning by objective tests with CBA: they like repetition, useful feedback, unsupervised and self-regulated learning, taking the test when they are ready, and receiving grades immediately. Lower-attaining students benefit more than higher-attaining students (formatively); higher-attaining students preferred PPA. (Buchanan, T., 2000; Russell, M., & Haney, W., 2000; Peat, M., & Franklin, S., 2003; Clariana, R., & Wallace, P., 2002)
    - Writing on computer: students benefit from answering open-ended questions on computer, but underestimate their ability to write with a computer. (Russell, M., & Haney, W., 2000)
    - Test mode influences test behaviour: screen problems; risk behaviour (take a chance, less personal, have a go and start again, snap and grab). (Johnson, M., & Green, S., 2004)
    - No difference found in gender, competitiveness or computer familiarity (age); the influence of expectations on test mode should disappear. (Clariana, R., & Wallace, P., 2002)
    - Formative assessment has either no impact or a positive impact on the final mark. (Peat, M., & Franklin, S., 2003; Noyes, J., Garland, K., & Robbins, L., 2004)

    2005-2008
    - Studies looked at learning styles and the use of formative assessment: different learning styles benefit differently from formative assessment. (Wang, K.H., Wang, T.H., Wang, W.L., & Huang, S.C., 2006; Wang, T.H., 2007)
    - No difference found in large comparisons for gender (formatively and summatively), computer familiarity or possible high-risk behaviour, and no difference in large comparisons between PPA and CBA results for reading, writing and mathematics. A change in mode is less meaningful today. (Fill, K., & Brailsford, S., 2005; Hardré, P.L., Crowson, H.M., Xie, K., & Ly, C., 2007; Horne, J., 2007; Puhan, P., Boughton, K., & Kim, S., 2007; Kim, D.-H., & Huynh, H., 2007)
    - Objective tests can show a relationship with general scholastic aptitude; short-answer tests do not (essays more reliable?). (Bleske-Rechek, A., Zeug, N., & Webb, R.M., 2007)
    - Student writing on open-ended items is significantly better with CBA than with PPA. (Kim, D.-H., & Huynh, H., 2007)
  • Objective tests
    - summative (MCQ, drag and drop, matching, short answer)
    - formative (MCQ, drag and drop, matching, short answer)
    Subjective tests
    - restricted response (analysis, criticism, cause and effect, comparison, reorganisation, application, summaries)
    - extended response (reflection, evaluation, discussion)
    Portfolio
    - portfolio for assessment, learning, showcase, transition
    - journal
    - simulations
  • Used literature:
    Bennett, R.E. (1998). Reinventing Assessment: Speculations on the future of large-scale educational testing. Available from: www.ets.org/Media/Research/pdf/PICREINVENT.pdf
    Black, P., & Wiliam, D. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Phi Delta Kappan, 80(2), 139-144.
    Bleske-Rechek, A., Zeug, N., & Webb, R.M. (2007). Discrepant performance on multiple-choice and short answer assessments and the relation of performance to general scholastic aptitude. Assessment & Evaluation in Higher Education, 32(2), 89-105.
    Buchanan, T. (2000). The Efficacy of a World-Wide Web Mediated Formative Assessment. Journal of Computer Assisted Learning, 16, 193-200.
    Bugbee Jr., A.C., & Bernt, F.M. (1990). Testing by computer: Findings in six years of use 1982-1988. Journal of Research on Computing in Education, 23(1).
    Bunderson, C.V., Inouye, D.K., & Olsen, J.B. (1989). The four generations of computerized educational measurement. In R.L. Linn (Ed.), Educational Measurement (pp. 367-407). Washington, DC: American Council on Education.
    Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
    Fill, K., & Brailsford, S. (2005). Investigating Gender Bias in Formative and Summative CAA. Available from: http://eprints.soton.ac.uk/16256/01/Fill_and_Brailsford.html (Retrieved: 05/01/09)
    Hardré, P.L., Crowson, H.M., Xie, K., & Ly, C. (2007). Testing differential effects of computer-based, web-based and paper-based administration of questionnaire research instruments. British Journal of Educational Technology, 38(1), 5-22.
    Horne, J. (2007). Gender differences in computerised and conventional educational tests. Journal of Computer Assisted Learning, 23, 47-55.
    JISC (2006). Effective Practice with e-Assessment. Available from: http://www.jisc.ac.uk/media/documents/themes/elearning/effpraceassess.pdf
    Johnson, M., & Green, S. (2004). On-Line Assessment: The Impact of Mode on Student Performance. Paper presented at the Annual Conference of the British Educational Research Association, Manchester, UK.
    Kim, D.-H., & Huynh, H. (2007). Comparability of Computer and Paper-and-Pencil Versions of Algebra and Biology Assessments. Journal of Technology, Learning, and Assessment, 6(4). Available from: http://www.jtla.org
    Mead, A.D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: a meta-analysis. Psychological Bulletin, 114, 449-458.
    Noyes, J., Garland, K., & Robbins, L. (2004). Paper-based versus computer-based assessment: is workload another test mode effect? British Journal of Educational Technology, 35(1), 111-113.
    Peat, M., & Franklin, S. (2003). Has Student Learning been Improved by the Use of Online and Offline Formative Assessment Opportunities? Australian Journal of Educational Technology, 19(1), 87-99.
    Poggio, J., Glasnapp, D.R., Yang, X., & Poggio, A.J. (2005). A comparative evaluation of score results from computerized and paper and pencil mathematics testing in a large scale state assessment program. Journal of Technology, Learning, and Assessment, 3(6). Available from: http://www.jtla.org
    Puhan, P., Boughton, K., & Kim, S. (2007). Examining Differences in Examinee Performance in Paper and Pencil and Computerized Testing. Journal of Technology, Learning, and Assessment, 6(3). Available from: http://www.jtla.org
    Russell, M., & Haney, W. (2000). Bridging the Gap between Testing and Technology in Schools. Education Policy Analysis Archives, 8(19). Also available from: http://epaa.asu.edu/epaa/v8n19.html
    Sly, L. (1999). Practise Tests as Formative Assessment Improve Student Performance on Computer-Managed Learning Assessments. Assessment & Evaluation in Higher Education, 24(3), 339-343.
    Topping, K.J., & Fisher, A.M. (2003). Computerised Formative Assessment of Reading Comprehension: Field Trials in the UK. Journal of Research in Reading, 26(3), 267-279.
    Wang, K.H., Wang, T.H., Wang, W.L., & Huang, S.C. (2006). Learning Styles and Formative Assessment Strategy: Enhancing Student Achievement in Web-Based Learning. Journal of Computer Assisted Learning, 22, 207-217.
    Wang, T.H. (2007). What Strategies are Effective for Formative Assessment in an E-Learning Environment? Journal of Computer Assisted Learning, 23, 171-186.

    1. From Pen and Paper to Computer Based Assessment. “… technology can add value to assessment practice in a variety of ways… e-assessment in fact is much more than just an alternative way of doing what we already do.” (JISC, 2006) Hendrik van der Sluis, Academic E-learning Developer, [email_address]
    2. Definitions of PPA, CBA and CAA
       - Pen and paper assessment (PPA): assessment delivered on paper and marked manually
         - objective paper tests
         - written assignments (essays)
       - Computer-based assessment (CBA): assessments delivered and marked by computer
         - objective tests delivered by Blackboard Test Manager, Questionmark Perception, Respondus
       - Computer-assisted assessment (CAA): practice that relies in part on computers
         - online discussion forums for peer-assessment
         - completion and submission of work electronically
         - storage of work in an e-portfolio
         - self- or peer-assessments in the form of diaries, blogs or wikis
         - audience response systems in group work
    3. Brainstorm: e-assessments
       - higher order, lower order, transitional model, social constructive model
       - formative, summative, objective, subjective
       - reliability, flexibility, efficiency, security
       - complexity, marking, validation, fairness
       - expectations, familiarity, behaviour
    4. Randy Elliot Bennett, Reinventing Assessment (1998): three generations of large-scale educational assessments
       - First-Generation Computer-Based Tests (Infrastructure Building)
       - Next-Generation Electronic Tests (Qualitative Change)
       - Generation “R” Tests (Reinvention)
    5. Small literature review on objective tests
       - Up to 1989: cost and hardware issues; PPA ≈ CBA -> different outcomes
       - 1990-1999: hardware problems; worries about gender, minorities and familiarity; benefits for performance and feedback
       - 2000-2004: limitations of objective tests; more benefits; writing on computer; test behaviour; no difference found in outcomes for minorities and familiarity; influence of formative assessment on the final mark
       - 2005-2008: relationship with learning style; large summative assessments found no influence of gender and familiarity; PPA ≈ CBA -> no different outcomes; writing on computer and outcome?
    6. Computer Based Assessment tools (at Kingston), mapped from formative to summative across objective tests, subjective tests (restricted-response and extended-response) and portfolios: Test Manager (Blackboard), Questionmark Perception (Blackboard), Respondus, Electronic Voting System, Wiki (Blackboard), Google Docs, Blog (Blackboard), Google Presentation, Social Bookmarking (Delicious), Discussion Board (Blackboard), Google Chat, Podcast (Blackboard), Social Networking (One Community blog), Virtual World (Second Life), simulations
    7. References
       - JISC (2006). Effective Practice with e-Assessment. Available from: http://www.jisc.ac.uk/media/documents/themes/elearning/effpraceassess.pdf
       - Used literature for this presentation is available in the hand-out.
       - Images: Stock.XCHNG: http://www.sxc.hu/
