Comparability of Computer and Paper-and-Pencil Versions of Algebra and Biology Assessments . . . . . . . . . . Slides 3-5
Examining the Relationship Between Students’ Mathematics Test Scores and Computer Use at Home and at School . . . . . . . . . . Slides 6-8
Students’ Experiences with an Automated Essay Scorer . . . . . . . . . . Slides 9-11
Conclusion . . . . . . . . . . Slide 12
This article was written by Do-Hong Kim & Huynh Huynh and was published in The Journal of Technology, Learning, and Assessment.
Computerized testing began in the early 1970s. This article compares differences between computerized testing results and paper and pencil results for various tests.
Computerized testing was not widely used when it first began due to high costs. It then became more widely used as technological advances were made.
A study conducted by Russell and Haney in 1997 found that students who were accustomed to writing on a computer performed better on open-ended writing tests than they did using paper and pencil.
There were two comparability studies conducted on the NAEP. These studies showed that the paper group outperformed the computer group in the eighth-grade NAEP mathematics test, but no mode effect was found for the eighth-grade NAEP essay test.
Kim, D.-H., & Huynh, H. (2007). Comparability of computer and paper-and-pencil versions of algebra and biology assessments. The Journal of Technology, Learning, and Assessment, 6(4). Retrieved from http://www.jtla.org
A study was conducted in 2003 of the Kansas state assessment in seventh-grade mathematics. This study did not find any statistically significant differences between paper and computer versions of the test.
In 2002, Choi and Tinkler found that the computerized Oregon statewide reading and mathematics tests were more difficult for third-graders, while the paper version of the test was more difficult for tenth-graders.
In 2006, Way, Davis, and Fitzpatrick conducted a study showing that eighth- and eleventh-graders struggled more with the online versions of the Texas statewide tests in mathematics, reading/English language arts, science, and social studies than with the paper versions.
After reading this article, I came to the conclusion that more research is needed in order for a widely accepted consensus to be reached on the issue of computerized and paper testing.
There is one quote from the text that I found debatable. This quote is, “The NAEP studies found that students’ familiarity with computers was related to their performance.”
The quote is debatable because it brings up an important question. Will computerized tests accurately measure how much a student actually knows? If student “A” knows more than student “B”, but student “B” scored higher on a computerized test due to his or her familiarity with computers, is this fair?
Personally, I think students should always be tested on their knowledge of the subject matter and not on how well they can use a specific technology.
This article was written by Laura M. O’Dwyer, Michael Russell, Damian Bebell, & Kevon Seeley and it was published in the JTLA.
The push to use technology to increase standardized test scores has been fueled by large investments at the federal, state, and local levels, as well as by the mandates of No Child Left Behind for improving student achievement.
In 1998, Wenglinsky used fourth- and eighth-grade scores from the 1996 NAEP to examine technology use and achievement. He employed two measures of technology use: use of simulation and higher-order thinking software, and a measure of more general technology use. He found that both fourth- and eighth-grade students who used simulation and higher-order thinking software had statistically significantly higher mathematics achievement scores.
However, when considering general student technology use, Wenglinsky found that computer use was negatively related to mathematics achievement for grades four and eight.
In 2002, Angrist and Lavy conducted their own study of technology use and student achievement. They compared students from both “high” and “low” technology environments.
They classified environments based on access to technology rather than on use of technology.
Their studies showed weak and, in some cases, negative relationships between technology and student test scores.
O'Dwyer, L. M., Russell, M., Bebell, D., & Seeley, K. (2008). Examining the relationship between students’ mathematics test scores and computer use at home and at school. The Journal of Technology, Learning, and Assessment, 6(5). Retrieved from http://www.jtla.org
After reading this article, I came to the conclusion that the amount of technology in a school could potentially affect how well students perform on standardized tests.
One quote from the article that I found interesting is, “However, when considering general student technology-use, Wenglinsky found that computer use was negatively related to mathematics achievement for grades four and eight.”
I found this quote interesting because it shows that technological advances may not produce the intended results.
Maybe it’s better to just crack open a book once in a while.
This article was written by Cassandra Scharber, Sara Dexter, and Eric Riedel, and was published in the JTLA.
Formative assessment evaluates student work as part of a continuum of growth toward increasing quality or degree of expertise rather than on a dichotomous, right-or-wrong basis (Sadler, 1989).
The literature makes a strong case for the importance of formative assessment because this type of assessment has been shown to produce specific gains in learning.
Black and Wiliam argue that formative assessment is “at the heart of effective teaching.”
One application of technology for assessment purposes has been the use of automated essay scoring software, which is usually employed for summative rather than formative purposes and is designed to reduce costs and increase reliability in writing assessments.
Scharber, C., Dexter, S., & Riedel, E. (2008). Students’ experiences with an automated essay scorer. The Journal of Technology, Learning, and Assessment, 7(1). Retrieved from http://www.jtla.org
Despite the growing corpus of AES literature, one area that has been overlooked is users’ perceptions of and responses to automated essay scoring and feedback. A notable exception is the work of Grimes & Warschauer (2006), which explored the attitudes and usage patterns of secondary students and teachers using two different AES systems.
In-depth interviews with four students, along with software logs showing the series of revisions each student made and how the AES scored each draft, provide further insight into the post-survey results.
How students write their essays depends largely on how they feel about the scorer.
After reading this article, I came to the conclusion that I like the idea of an automated essay scorer, but I would like to read up more on some of the possible drawbacks and benefits.
I found the following quote interesting, “Research on these and other available automated essay scorers primarily focuses on the analysis approaches, accuracy of the scores as compared to human scorers, and reliability of automated essay scoring systems rather than on students’ responses to and experiences with automated essay scorers.”
I found this quote interesting because it shows that research has largely overlooked how students feel about and respond to the scorer, even though those reactions may affect how they write.
The three articles I chose to write about are Comparability of Computer and Paper-and-Pencil Versions of Algebra and Biology Assessments, Examining the Relationship Between Students’ Mathematics Test Scores and Computer Use at Home and at School, and Students’ Experiences with an Automated Essay Scorer.
They are related because they all deal with technology and education; each article involves some aspect of computer use in education.
They are important to me because I am going into the field of education. Also, throughout my years as a student I have used computers to aid my education, and it is interesting to read about research conducted to find out whether or not computer use is really effective.