Comprehension of Electronic Text: The Effects of Reading On-screen and of <br />Electronic Formatting on Student Comprehension<br />Kathryn Patrick<br />Emporia State University<br />Abstract<br />This research proposal outlines an experimental research project on the effects of electronic format and formatting on the reading comprehension of undergraduate college students. Students’ comprehension of a sample text read on screen versus in a textbook will be measured, as well as their comprehension of the electronic text sample formatted in various ways. If approved, this study could inform practices in traditional and online classes, as well as support further research into internet literacy and other topics. <br />Comprehension of Electronic Text: The Effects of Reading On-screen <br />and of Electronic Formatting on Student Comprehension<br />Introduction <br />Anyone who has taught or taken an online class will tell you that the experience is very different from—and often more challenging than—a traditional, face-to-face class. There are many factors behind this, such as the difficulty of creating social presence and the lack of nonverbal communication. A more basic problem, however, is that many students seem to have trouble comprehending the text materials posted in the class’s course management system or sent to them by email. A professor will receive numerous emails about things that, they thought, they had clearly explained in their lecture or announcement. As a Graduate Teaching Assistant, I often send out step-by-step instructions by email, and many times the recipients’ replies will leave me wondering, “Did they even read what I just sent them?” <br />Does simply viewing a text on a computer screen, rather than in print, affect readers’ ability to understand it? Does the formatting of electronic text affect comprehension? In this study, I intend to investigate this problem experimentally.<br />Hypotheses<br />
Students who read a given text on a computer screen will score lower on a standardized comprehension test than students who read the same text from a textbook.
Students who read the text on the computer will score higher on the comprehension test when the formatting mimics the published print version in layout, font, and consistency than when these elements are haphazardly modified.
Students who read the text on the computer will score higher on the comprehension test when the text is visually divided into short (one to three paragraph) sections than when it is visually presented as a single “block” of text.
Definitions<br />Comprehension—<br />The RAND Reading Study Group defined comprehension as “The process of simultaneously extracting and constructing meaning through interaction and involvement with written language” (2002). It is important to note that for this study, I will be focusing on ‘extracting and constructing meaning,’ which is comprehension, rather than on ‘interaction and involvement,’ which is literacy. The subjects’ interaction and involvement with text will be controlled, while their ability to extract and construct meaning is measured. This will be discussed further in the Literature Review section of this proposal.<br />Formatting—<br />For the purposes of this study, I define formatting as the visual appearance of a document, particularly of a document created on a computer.<br />Within formatting, I will examine the following variables:<br />
Layout: including margins, line spacing, text alignment, and indentation
Division of the text into sections which are readily and visually apparent.
When testing the effect of division, I will separate sections using centered, bolded, underlined, all-capital headings.<br />There are certainly other elements of formatting. However, I believe these are sufficient and useful independent variables for the purposes of this study. Other elements of formatting will be held constant between all versions of the electronic text, and will be reproduced as accurately as possible from the published text.<br />Literature review <br />In my review, I located a large body of work on on-screen comprehension and internet literacy, and the challenges thereof, rather than on comprehension alone. These are very closely linked subjects. Coiro points out that “The Internet, in particular, provides new text formats, new purposes for reading, and new ways to interact with information that can confuse and overwhelm people taught to extract meaning from only conventional print” (2003). A low level of internet literacy will hamper a reader’s ability to comprehend basic on-screen text, since they are so likely to be confused and overwhelmed once they find it. However, even this basic comprehension does seem to have its own problems:<br />“Traditionally, we have tended to assume that online reading comprehension is isomorphic with offline reading comprehension (Coiro, 2003; Leu, Zawilinski, Castek, Bannerjee, Housand, Liu, & O'Neil, 2007). Data are appearing, however, to question this assumption; online reading comprehension appears to require additional and perhaps somewhat different comprehension skills and strategies.” (Leu et al., 2007, emphasis mine)<br />Online literacy is a broad and complicated topic. 
The New Literacies Research Lab and others are doing excellent work in that direction. However, I believe that on-screen comprehension can be isolated from internet literacy: this study will determine whether comprehension is affected by the format itself, rather than by the online environment (and Coiro’s new purposes and new ways to interact), and whether mimicking print formatting can reduce that effect. This knowledge could inform work in internet literacy by providing a baseline for on-screen comprehension against which larger comprehension and literacy concerns could be compared. <br />Population and setting <br />The population of this study will be students of Emporia State University (ESU) who meet the following criteria:<br />
Are in their second year of study
Have attended ESU for at least one semester prior to the study
Are traditional-aged college students (age 18-24)
While Fall 2010 numbers are not yet available, in Fall 2009 there were 561 students who met these criteria, according to the Institutional Research department of ESU. <br />Using Creative Research Systems’ Sample Size Calculator, and assuming a population of 560 students, I determined that a sample size of 228 will be sufficient for a 95% confidence level and a confidence interval of 5. To allow for a potential increase in the population size for Fall 2010, and for selected students who decline to participate, I will begin with a sample size of 250 students. This sample will be selected randomly from a list of eligible students and divided into six random sub-samples. Selected students will be contacted both by email and by standard mail, in order to reduce sample bias towards students who are more or less comfortable with computers and internet technology. Potential subjects will also be asked to self-identify whether they are proficient at reading English. <br />Each sub-sample should have around 38 participants, and the sub-samples will be kept as close to even in size as possible. A separate sample of 38 participants will be randomly selected for the pilot study.<br />Research Instruments<br />The primary research instrument will be a comprehension test. This will be a purpose-built test of 25-50 multiple-choice questions, to be answered on machine-readable answer sheets. The same test will be used in each part of the study, and the questions will be asked in the same order for each subject. Students will be allowed to take as much time as they want on the test, and may reference their version of the text during the test.<br />I will also need to select a sample text for use in the study. This text should be moderately difficult and found in a textbook from a reputable publisher. The beginning and the end of the selection will be clearly marked, with any text or graphics on those pages that fall outside the selection covered with adhesive paper. 
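The sample-size figure cited above (228 subjects, from a population of 560 at a 95% confidence level and a confidence interval of 5) can be reproduced with Cochran’s sample-size formula plus a finite-population correction, which appears to be what the Creative Research Systems calculator computes for a proportion at p = 0.5. This is a minimal sketch; the function name and defaults are illustrative, not part of the proposal:

```python
def sample_size(population, z=1.96, interval=0.05, p=0.5):
    """Cochran's sample size with a finite-population correction.

    z=1.96 corresponds to a 95% confidence level; interval=0.05 is a
    confidence interval of 5 percentage points; p=0.5 is the most
    conservative assumed proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / interval ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)         # finite-population correction
    return round(n)

print(sample_size(560))  # 228, matching the figure in this proposal
```

For very large populations the correction vanishes and the familiar n = 384 for 95%/±5 emerges, which is a useful sanity check on the formula.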
The text should take about 10-15 minutes to read. If a suitable selection can be found in a subject area in which ESU does not offer a major, it will be used; this minimizes the portion of the population that has had significantly more exposure to the subject than the rest of the subjects.<br />The unmodified electronic version of the text will be a PDF created from a Microsoft Word document, which will mimic the print version’s formatting as closely as possible in all areas. The four variant texts will also be PDFs, created from modified versions of the Microsoft Word document. Each variant text will contain significant changes made to one category of formatting (layout, font, consistency, division). <br />A final standard instrument will be a script for providing instructions to students, which will be used for each portion of the study. This script, as well as the comprehension test, electronic versions of the sample text, and a citation for the print version of the sample text, will be made available when the study’s findings are released, to aid future researchers in duplicating the study. <br />Pilot Study<br />The purpose of the pilot is primarily to test and measure the comprehension test as a research instrument, as well as to test the script and sample text. Using the standardized script, subjects will be asked to read the print version of the sample text, and then to complete the comprehension test. The subjects’ scores will be examined for any apparent problems in the comprehension test—for example, if a certain question is answered incorrectly by most of the subjects, that question will be revised or removed; if most of the subjects receive a perfect score on the test, it is probably too easy and will have to be revised as a whole. 
Similarly, if the subjects seem confused, the script will be adjusted.<br />If the comprehension test is revised, the pilot study will be repeated to ensure that the revisions are successful.<br />Research Design <br />The first part of the study will involve two subgroups, called Groups 1.A and 1.B. These groups will meet and complete their portions of the study in the same room, at the same time of day, on the same day of the week, in consecutive weeks. I believe that this will minimize differences in the environment, maturation effects, and other confounds, while still allowing the groups to meet separately (both because observation of each other may skew results and for space concerns).<br />Group 1.A will read the print version of the sample text from the textbook. I have chosen to use physical books rather than copies or printouts so that Group 1.A, my control, will have a very “traditional” reading experience, and because of the high standards that book publishers have for formatting. Group 1.B will read the same selection of text, with the formatting reproduced as accurately as possible from the textbook, on laptop screens. The laptops will be, as much as possible, identical—especially in screen size and resolution. The text will already be open, and set to full screen. <br />Each group will complete the comprehension test detailed in the Research Instruments section. The data will be used to answer my first question—regarding the comprehension challenges of on-screen versus print text—and the Group 1.B data will also be used to measure the effects of formatting in the second experiment.<br />For the second part of the study, the remaining four subgroups (Groups 2.A through 2.D) will each be assigned one of the variant texts. Students will, again, read these texts on laptops, under the same conditions as Group 1.B, and then complete the comprehension test. This data should illustrate the effect of formatting on comprehension. 
These groups will meet in the same room, at the same time, on the same day as the first two groups, either individually or in pairs as space allows. There may be some maturation between Groups 1.A and 2.D, but condensing the study further would introduce other confounds. <br />Data analysis <br />If a subject leaves more than half of the test questions unanswered, or if it is apparent from the answer sheet that the subject was not applying him- or herself to the test (e.g., if the answer bubbles are filled in to form a straight line or an obvious pattern), those scores will be excluded from calculations. Otherwise, all scores will be compiled in the following table, with the individual-percents row expanded to include one row per subject. For privacy reasons, the subjects’ names will be omitted. <br />| | Group 1.A | Group 1.B | Group 2.A | Group 2.B | Group 2.C | Group 2.D |<br />| Version | Print | Unmodified | Layout | Font | Consistency | Division |<br />| Individual %s correct | | | | | | |<br />| Range of scores | | | | | | |<br />| Mean score | | | | | | |<br />| Standard deviation | | | | | | |<br />| Combined range (Groups 2.A-2.D) | | | | | | |<br />| Combined mean (Groups 2.A-2.D) | | | | | | |<br />| Combined standard deviation (Groups 2.A-2.D) | | | | | | |<br />To support (or refute) my hypotheses directly, I will compile the following data sets.<br />Mean scores and ranges of scores from the Pilot vs. Group 1.A: This data will be used to determine the instrument’s level of test-retest reliability.<br />Mean scores and ranges of scores of Group 1.A vs. Group 1.B: This data will be used to answer hypothesis 1—that students who read a given text on a computer screen will score lower than students who read the text from a textbook.<br />Mean scores and ranges of scores of Group 1.B and Groups 2.A-2.D: This data will be used to answer hypotheses 2 and 3—that students will score higher when the formatting mimics the published print version in layout, font, and consistency than when these elements are haphazardly modified, and that students will score higher when the text is visually divided into short sections than when it is a “block” of text.<br />| | Group 1.B | Group 2.A | Group 2.B | Group 2.C | Group 2.D |<br />| Version | Unmodified | Layout | Font | Consistency | Division |<br />| Mean score | Baseline | Lower? | Lower? | Lower? | Higher? |<br />| Range of scores | | | | | |<br />
Mean score of Group 1.B vs. the combined mean score of Groups 2.A through 2.D: This data will also be used to answer hypothesis 2.
Limitations of the study <br />Due to time considerations and the potential difficulty of recruiting a large enough volunteer population from my randomly selected sample, this study will not examine the difference (if any) in comprehension between differently-formatted text samples in print, nor will it measure the effect of “distractions” in the online reading environment—a surrounding page frame, a background image, or other programs being used in the background. <br />Also, the text sample will be a selection from a textbook—this study will not compare print vs. electronic versions of instructions, narratives, or other types of text, though the results may be applicable to those types as well. Other types of print samples, such as computer-generated text printed on a standard ink-jet printer, may have different comprehension levels, but all of these concerns will have to be left to other studies. <br />Furthermore, qualitative aspects, such as reader experience, preferences, and concerns, though important, are outside the scope of this experiment. This experiment is only concerned with providing solid, reproducible evidence for (or against) the effect of displaying text on-screen, and of formatting text in a certain way. <br />Role of the researcher and review board <br />Because this study uses human subjects, I will need to obtain approval from the Emporia State University review board. I will also need to obtain the list of students who meet the selection criteria from the University, to develop the comprehension test and instructions script, and to select the sample text. I will create the electronic text and its four variants. <br />When these preparations are complete, I will select and recruit the research sample, select and reserve a room for the groups to meet in, and proctor each group as they complete their part of the study. 
Finally, I will compile and analyze the resulting data.<br />Due to the complexity of these tasks, I am seeking a research partner who will be able to take over some of these responsibilities. <br />Schedule<br />August-September:<br />Create research instrument<br />Select sample text<br />Obtain population data<br />September: <br />Select and recruit subjects<br />Conduct pilot study<br />October-November:<br />Conduct the study<br />It is important that all of the groups be able to meet before the end of classes and the beginning of finals. Also, because so many students leave campus, it will be impossible to schedule one of the study groups during fall break. If multiple pilots are conducted, and not enough weeks are left for all of the groups to meet before the end of classes, the study will be postponed, and all of the groups will meet in the first six weeks of the spring semester. This will allow time for maturation between the final pilot and the study, but that is preferable to allowing maturation between the study groups themselves. 
$57 average textbook cost in 2007-08 (National Association of College Stores)
This cost could be reduced if the university bookstore could be convinced to loan the textbooks to the study. Even if this could not be arranged, a portion of the cost could be recouped by selling the textbooks back. Despite the cost, it is important to use new textbooks, so that each subject will have an identical text sample and an identical experience. <br />Other materials<br />
Incentives total: $882<br />Total budget: $3,131<br />References<br />Beck, S. E., & Manuel, K. (2008). Practical research methods for librarians and information professionals. New York, NY: Neal-Schuman Publishers, Inc.<br />Coiro, J. (2003). Reading comprehension on the Internet: Expanding our understanding of reading comprehension to encompass new literacies [Exploring Literacy on the Internet department]. The Reading Teacher, 56(5), 458–464. Retrieved July 16, 2010, from www.readingonline.org/electronic/elec_index.asp?HREF=/electronic/rt/2-03_Column/index.html<br />Creative Research Systems (2007-2010). Sample Size Calculator. Retrieved July 16, 2010, from http://surveysystem.com/sscalc.htm<br />Leu, D. J., Jr., Reinking, D., Carter, A., Castek, J., Coiro, J., Henry, L. A., et al. (2007, April 9). Defining online reading comprehension: Using think aloud verbal protocols to refine a preliminary model of Internet reading comprehension processes. Paper presented at the American Educational Research Association, Chicago, IL. Retrieved July 16, 2010 from docs.google.com/Doc?id=dcbjhrtq_10djqrhz<br />Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., & O’Neil, M. (2007). What is new about the new literacies of online reading comprehension? In A. Berger, L. Rush, & J. Eakle (Eds.), Secondary school reading and writing: What research reveals for classroom practices. National Council of Teachers of English/National Conference of Research on Language and Literacy: Chicago, IL. <br />Mokhtari, K., Kymes, A., & Edwards, P. (2008). Assessing the new literacies of online reading comprehension: An informative interview with W. Ian O’Byrne, Lisa Zawilinski, J. Greg McVerry, and Donald J. Leu at the University of Connecticut. The Reading Teacher, 62(4), 354–357.<br />RAND Reading Study Group. (2002). Reading for understanding: Towards an R&D program in reading comprehension. 
Retrieved July 16, 2010, from http://www.rand.org/multi/achievementforall/reading/readreport.html<br />