Rough draft of LRA/NRC paper on strategy use and critical evaluation of websites.

Running Head: CRITICAL EVALUATION STRATEGY USE

Critical Evaluation Strategy Use: Think-Aloud Results

J. Gregory McVerry
Heidi Everett-Cacopardo
University of Connecticut

The largest and most recent review of reading comprehension research concluded that reading on the Internet may require novel skills beyond those necessary for reading printed material (RAND Reading Study Group, 2002). One of the most challenging of these new skills is the ability to critically evaluate information from websites (Burbules & Callister, 2002; Leu, Kinzer, Coiro, & Cammack, 2004). Readers now have to wade through the market-driven information they encounter (Tate & Alexander, 1996) and can quickly become entangled in websites that lack quality control and traditional markers of credibility (Rieh & Belkin, 1998).

In fact, recent evidence from self-report studies (Metzger, Flanagin, & Zwarun, 2003; Princeton Survey Research Associates, 2005), think-aloud research (Coiro & Dobler, 2007; Damico & Baildon, 2006), and classroom research (Graesser et al., 2008; Sanchez, Wiley, & Goldman, 2006; Zhang & Duke, 2007) has provided insight into the types of critical evaluation skills students need and the low frequency of strategy use. Yet we still do not have a clear understanding of the relationship between offline and online evaluation skills. There were two purposes for this verbal protocol study: 1) to compare the comprehension strategies used by students who were successful on a critical evaluation task to those used by students who were less successful; and 2) to compare the strategies noted in verbal protocol tasks as students evaluated online information to the results of previous research that examined sourcing and evaluation in offline reading tasks.

Theoretical Perspective

Understanding the complexities of meaning making has never been easy. These complexities have been captured in a heuristic created by the RAND Reading Study Group (2002).
The authors defined reading comprehension as "the process of simultaneously extracting and constructing meaning through interaction and involvement with written language" (RAND Reading Study Group, 2002, p. 11), which, according to the group, involves three elements: the reader, the text, and the activity. This study examines differences in the reader as she engaged with literacy texts shifting from page to pixels (Hartman, Morsink, & Zheng, in press). As noted by the RRSG (2002), this shift "requires skills and abilities beyond those required for the comprehension of conventional, linear print" (p. 14).

New frameworks are emerging to investigate this shift and to understand the novel skills meaning makers need. New literacies of online reading comprehension (Leu et al., 2004; Leu, O'Byrne, Zawilinski, McVerry, & Everett-Cacopardo, 2009) is a theoretical perspective that seeks to understand how these skills and abilities change as we move online. This perspective defines online reading comprehension as a process involving:

"…the skills, strategies, and dispositions necessary to successfully use and adapt to the rapidly changing information and communication technologies and contexts that continuously emerge in our world and influence all areas of our personal and professional lives. These new literacies allow us to use the Internet and other ICT to identify important questions, locate information, critically evaluate the usefulness of that information, synthesize information to answer those questions, and then communicate the answers to others." (Leu et al., 2004, p. 1570)

In keeping with Afflerbach, Pearson, and Paris (2008), the difference between online reading comprehension skills and strategies was defined on a continuum of automaticity and deliberate goal setting. Skills are automatic actions requiring no conscious awareness, whereas strategies are deliberate, goal-directed actions.
Pressley and Afflerbach (1995) developed a taxonomy of active comprehension processes, composed of skills and strategies that fall into three categories: learning text content, monitoring understanding, and evaluation. This study was concerned with the latter. More specifically, it sought to understand the processes necessary to critically evaluate websites.

The skills and strategies labeled as critical evaluation of websites have different definitions (Coiro, 2003). These definitions can draw on traditions in critical reading (Coiro, 2007), critical literacy (Fabos, 2008), and critical thinking (Brem, Russell, & Weems, 2001; Kiili et al., 2008; Sanchez, Wiley, & Goldman, 2007). Kiili, Laurinen, and Marttunen (2008), in their study of students evaluating Internet sources, concluded that evaluation skills fall on two dimensions: credibility, "paying attention to distinguishing reliable from unreliable evidence," and relevancy, "paying attention to distinguishing essential and nonessential information" (p. 77). This study explored only the contextual processes associated with judging the credibility of a website.

Prior Research

Early research on evaluating websites developed and measured readers' skills based on principles of judging printed texts: accuracy, authority, objectivity, currency, and coverage (Tate & Alexander, 1996). Researchers, often using these text features as a launching pad, employed self-report surveys and frequency questionnaires (Flanagin & Metzger, 2000; Fogg et al., 2001; Fox & Rainie, 2002; Fox & Rainie, 2005) to explore the types of skills and strategies readers use to evaluate websites.

Skills and strategies common across the results of self-report studies include: a) identifying the author or sponsor, b) examining the URL, c) using the format or appearance of the website, d) checking the currency of information, e) checking accuracy against a secondary source, f) examining bias, and g) using content to judge the website.
Researchers concluded that there were differences between expert and novice users as they read websites in specific content areas. Second, participants in self-report studies rarely used more than one strategy to judge a site, and, most telling, participants usually used superficial content to evaluate the usefulness and truthfulness of websites.

Another strain of research exploring the critical evaluation of websites has employed qualitative methods such as interviews (Rieh & Belkin, 1998), case studies (Damico & Baildon, 2007), and verbal protocol analysis (Coiro & Dobler, 2007; Damico & Baildon, 2006; Rieh & Belkin, 1998; Zawilinski et al., 2007). Studies using these procedures support many of the conclusions of the earlier self-report results. Conclusions across the studies using a critical thinking or critical reading lens (Coiro & Dobler, 2007; Rieh, 2002; Rieh & Belkin, 1998; Zawilinski et al., 2007) suggest that there may be common markers of reliability and relevancy, such as verifying the author, accuracy, and currency. Results from studies using a more socio-cultural or critical literacy lens (Brem, Russell, & Weems, 2001; Damico & Baildon, 2007) highlight the role worldviews and beliefs play as judgments are made during online reading comprehension.

This study recognizes the strong influence beliefs have on judgments (cite stromso), but focuses on the differences in strategy use based on student success on a critical evaluation task. Specifically, the goal of the study was to compare instances of critical evaluation strategy use across groups of students to explore any patterns or themes that emerged.

Method

Protocols for this study were selected from an existing database of preliminary research designed to develop a taxonomy of online reading comprehension (Zawilinski et al., 2007).
In the earlier study, students completed three verbal protocol tasks to elicit strategies used during online reading comprehension.

Participants

Participants in this study included 10 students who completed a specific critical evaluation task. They were drawn from a sample of 53 adolescent online readers selected from a population of 1,025 seventh graders schooled in economically challenged districts in South Carolina and Connecticut. The larger sample of students was selected based on a variety of criteria that included: a) scoring in the top 25% on a measure of online reading comprehension; b) frequency of Internet use; and c) comfort in explaining strategy use to adults.

Materials

The original database (Zawilinski et al., 2007) included think-alouds from three specific verbal protocol tasks. These tasks varied in the amount of teacher direction the student received and in the teacher's ability to interrupt the student during the task. The verbal protocols analyzed in this study asked students to explicitly evaluate the reliability of two websites. The students had to locate Dog Island, a hoax website, and the World Wildlife Fund website, and answer the following questions: (1) Tell us if you consider this site very reliable, somewhat reliable, or not at all reliable. (2) Prove your choice. What information can you find that tells you that you are right? Explain. (3) What would you tell other students about how best to determine if a site is reliable or not? (4) Please tell us: Who created this site? (5) Why did they create it?

Procedures

The protocol for this task required the student to think aloud at structured interruptions.
Specifically, if students did not voluntarily describe their thinking, the researcher would interrupt the student and ask, "Can you tell me what you are thinking?" at three pre-determined junctures during the online reading comprehension task: (1) when students were reading a web page, (2) when students clicked on a link or navigation button, and (3) when students entered key words into a search engine.

Following the think-aloud protocols, transcripts were developed. These included a transcription of the full audio recording of the session, video of students' onscreen actions captured using Camtasia, and descriptions of all on-screen movements made by each informant as they read online.

Data Analysis

The analysis of data used abductive coding methods (Onwuegbuzie & Leech, 2006) followed by a constant-comparative analysis (Bogdan & Biklen, 2003; Merriam, 1988) of instances of codes and transcripts after students engaged in think-alouds.

Abductive coding methods (Onwuegbuzie & Leech, 2006) employ both inductive and deductive coding procedures. Deductive coding began with a preliminary taxonomy developed in previous research (Zawilinski et al., 2007). In the previous research, initial codes were deduced from the theory base of online reading comprehension developed by Leu et al. (2004), which included asking questions, locating information, evaluating information, synthesizing information, and communicating information. Branches were then inductively added to the taxonomy. As the researchers conducted a close read of selected protocols, they met to discuss the developing taxonomy and eliminated overlapping codes or established new codes. This initial preliminary study resulted in a taxonomy of 37 separate codes.

In this follow-up study, similar abductive methods (Onwuegbuzie & Leech, 2006) were used to expand the taxonomy. Two researchers met and coded three verbal protocols together.
First, the transcriptions were parsed into idea units (Anderson, Reynolds, Schallert, & Goetz, 1977). These idea units were defined as beginning with student action or talk directed toward a specific goal. All related talk and repetitive actions were included in an idea unit. The idea unit ended when talk or action clearly changed, such as clicking on "Go/Search/Enter" or the back arrow in the browser, or minimizing/maximizing windows. Then, the two coders began to code each idea unit. All discrepancies and disagreements were discussed until the coders reached 100% agreement. Codes were also added to the taxonomy if new behaviors were noted, and other codes were revised or deleted if they were not unique. Only one code could be used to categorize a behavior.

Next, the original two coders trained three additional researchers. The process began by coding a verbal protocol together. Each individual coder was then assigned a separate verbal protocol and coded it separately. The group then met and discussed the protocols. All disagreements were discussed until a consensus was reached, and the taxonomy was revised. This process was repeated for each of the three verbal protocol tasks. Each sequential verbal protocol task would begin with two coders meeting to revise the taxonomy. Then five coders would score and discuss an anchor verbal protocol together, followed by coding another anchor verbal protocol alone. All disagreements were discussed until 100% agreement was reached, and revisions to the taxonomy were made if necessary. This process resulted in a taxonomy consisting of 233 codes (see Appendix A).

To test inter-rater reliability, a series of agreement tests was developed for each of the three verbal protocol tasks. Two coders would meet and develop a master-coded think-aloud task. These test developers were chosen because they had exceeded 95% agreement on their individual coding of the anchor sets. The test developers would meet and code the think-aloud.
All disagreements were discussed until 100% resolution. This same think-aloud task would then be administered to the other raters, and a simple percentage score was used to calculate agreement.

A simple percentage was chosen over Fleiss' kappa (1971) for a variety of reasons. First, calculating kappa requires building matrices listing every possible code for each instance, and with 150 protocols containing 30-80 idea units each, these matrices would be unmanageable. Second, kappa, whether Cohen's or Fleiss', adjusts for chance agreement, and with over 200 codes the risk of agreement by chance seems minimal. Therefore, a simple percentage was chosen.

In order to pass the inter-rater test, each coder had to reach a 90% threshold for agreement. If a coder did not meet the 90% threshold, they would meet with the test developers to discuss and resolve any disagreements. They would then be administered another test. If the coder did not reach 90% agreement a second time, they would code three anchor verbal protocols with the test developers and then receive a third test. This process was repeated for all three verbal protocol tasks, and all coders reached the 90% agreement threshold by the second test embedded in each task.

Determining Success

The reader's success on the critical evaluation task was determined through their response to the question, "Tell us if you consider this site very reliable, somewhat reliable, or not at all reliable?" Each reader's response was coded for each of the websites. For the Dog Island website, an answer of very reliable (incorrect) was coded as a zero, somewhat reliable was coded as a one, and not at all reliable (correct) was coded as a two. For the World Wildlife Fund website, very reliable (correct) was coded as a two, somewhat reliable was coded as a one, and not at all reliable (incorrect) was coded as a zero.
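The response scoring just described, together with the simple percentage agreement used for inter-rater reliability, can be sketched as follows. This is a hypothetical illustration only; the function names, code labels, and sample data are invented and are not the study's actual tooling or taxonomy.

```python
# Hypothetical sketch of the 0-2 response scoring and the simple
# percentage agreement described above (all names and data invented).

# Correct answers differ by site: Dog Island is a hoax, the WWF is real.
SCORING = {
    "dog_island": {"very": 0, "somewhat": 1, "not_at_all": 2},
    "wwf":        {"very": 2, "somewhat": 1, "not_at_all": 0},
}

def score_response(site, answer):
    """Return 0-2 credit for a reliability rating of the given site."""
    return SCORING[site][answer]

def percent_agreement(master, rater):
    """Share of idea units on which a rater matches the master coding."""
    matches = sum(m == r for m, r in zip(master, rater))
    return matches / len(master)

print(score_response("dog_island", "not_at_all"))  # 2 (full credit)
# Four of five idea-unit codes agree: 0.8, below the 90% threshold.
print(percent_agreement(list("AABBC"), list("AABCC")))  # 0.8
```

A rater scoring 0.8 here would, per the procedure above, meet with the test developers and retest.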
In verbal protocol task one, the students had been exposed to hoax websites and therefore knew the teacher might include unreliable sites in the Internet inquiry tasks.

Then, using NVivo 8, a query was conducted based on the success of students on the critical evaluation task. We collected all student critical evaluation codes (see Table 1) related to students' level of success in ranking the websites. First, we ran a matrix analysis displaying all students and strategies. Then we ran queries asking for a matrix of specific critical evaluation codes for each group. The matrices showed different strategy use based on success on the task. These differences were mainly in the use of prior knowledge and content to judge a website. Therefore, we ran coding queries for both: 1) using prior knowledge to judge accuracy, and 2) using content to judge reliability. Then, using constant-comparative methods (Bogdan & Biklen, 2003; Merriam, 1988), we analyzed the results until recurring patterns emerged.

To investigate these patterns, we chose to write a summary of each student's Internet inquiry. The summaries included the following information: the student's definition of reliability, a description of critical evaluation strategy use, and the skills they would recommend to other students.

Then we gave each of the eleven summaries a close read and created memos of the students' use of prior knowledge and content. Returning to constant-comparative methods (Bogdan & Biklen, 2003; Merriam, 1988), we read across all of the summaries, memos, and coding queries until themes emerged from the patterns noted in the matrices.

Results

There was a difference in the types of strategies coded by researchers based on the success of students (see Table 2). The matrix of overall strategy use reveals that two behaviors were most often noted: using prior knowledge to verify accuracy and using the content of the page to judge reliability.
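The kind of strategy-by-group matrix produced by such coding queries can be sketched with a simple tally. This is a hypothetical illustration; the counts, group names, and code labels below are invented and are not the study's data or NVivo's output.

```python
# Hypothetical sketch of a strategy-use matrix tallied by success group
# (invented instances, not the study's coded data).
from collections import Counter

# One (success_group, critical-evaluation code) pair per coded instance.
instances = [
    ("successful", "prior_knowledge"), ("successful", "prior_knowledge"),
    ("successful", "content"),
    ("somewhat",   "content"), ("somewhat", "content"),
    ("somewhat",   "prior_knowledge"),
]

matrix = Counter(instances)
for group in ("successful", "somewhat"):
    row = {code: matrix[(group, code)]
           for code in ("prior_knowledge", "content")}
    print(group, row)
```

Reading across the rows of such a matrix is what surfaces patterns like "more successful students lean on prior knowledge" for closer constant-comparative analysis.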
The matrices for individual groups, based on success in identifying Dog Island as unreliable, revealed that students more successful on the task relied more on prior knowledge to judge accuracy, while students who were somewhat successful used page content to judge reliability. A close reading of these instances revealed that students who had knowledge of or experience with dogs or geography were more likely to label Dog Island as not reliable or somewhat reliable. The same theme held true for the World Wildlife Fund: students who had knowledge of, or experience with, the WWF were more likely to rate the website as reliable.

Ranking Dog Island as Not Reliable

Students who correctly identified Dog Island as not reliable generally relied on prior knowledge or past experiences to make their overall judgment. Student 07, for example, used his knowledge of dogs and the state of Florida to make his judgment. He questioned Dog Island's method of feeding the dogs through the hunting of animals and fish. He then takes issue with the claim that dogs on Dog Island are "free from…stress and hardship" by saying:

I don't see, like, they don't really work or most dogs I know. They're usually wait [garbled] their owners or if they're on the street they usually are in the yard or out, out back using the bathroom or, or eating so they really don't have, like, the hard life

He then questions how an island can have limitless space and enough room for 2,500 dogs. This prompts him to scan through the pages of the website looking for the location of the island. On one of the pages he notes that the headquarters is listed in Tallahassee, Florida. He states, "Tallahassee is the capital of Florida, and its on land, and there is no lakes there so I don't know how there could be a dog island."

Student 01 wrapped her judgment of Dog Island up in her prior experiences with, and feelings toward, dogs.
She begins by saying she has to automatically assume the site is unreliable because it has yet to be checked against a secondary source. After locating the website, Student 01 scans the site and says it sounds like a dog retirement home. She then goes on to read the copyright information at the bottom of the website. She looks at the information but does not make any judgment of the website. She then clicks on the photos tab and comments on how cute the dogs are and how they are sent in a box. When she reads on the rates page that the dogs are sent there permanently, she states, "Send your dog…I don't want to send my dog! Great it's permanent. But I'm not sending my dog to this place." She concludes that the entire site seems "silly."

Students who correctly identified the Dog Island site as unreliable also used their prior knowledge to identify missing content. Student 11 begins his evaluation by first looking for a specific link that describes the location of the island. He notes that the website gives information about the dogs and humans but that he cannot find a specific link about the exact location of the island. He then decides to find a secondary source about Dog Island because, based on the task, maybe he didn't find the right one. After looking at the search results, he decides he had the correct website and returns. After looking for information about the dogs, he concludes, "Overall I think this site isn't reliable because it doesn't give a lot of information about the island and the dogs on it."

Students who identified Dog Island as not reliable also had a better understanding of reliability in the context of websites. Before the think-aloud task began, most of the students were asked to define reliability and give an example. Students who performed well on the task understood reliability to have two meanings. First, they defined it as being dependable, with statements such as, "something you can depend on.
If you go on vacation and have a friend watch your dog, you can depend on them." Unlike many students in the other groups, students successful on the task understood that reliability also had a second meaning, as Student 07 noted, "in terms of a reliable site." To illustrate, Student 01 added to her definition of a dependable friend: "but if you're on the computer and you're at a website, a reliable source is true information that's been compared and has been researched by a oh I forgot what they are called (pause)… people like you!"

Students more successful at identifying Dog Island as an unreliable website also had a greater awareness of the critical evaluation strategies taught by school curricula. This is evident in both their think-aloud data and the skills they would recommend to other students. Student 01, once she found the Dog Island website, started off by commenting that she had to assume the website was unreliable because it hadn't yet been checked by an expert or secondary source, and she made the following recommendation in her final email: "to read everything carefully and try to set aside facts that sound ridiculous like 'the dogs will never want to leave.' Then check the copyright information then check the date time and year the site was made or posted." Student 11, who did not include any tips in her email, looked for specific links to key information she felt was missing. Student 07 recommended three steps for checking the reliability of a website:

"Step 1: Scan through the article and see if the information makes any sense.
(Ties into the information given.)
Step 2: Search for any hyperlinks that could lead you to any more information.
Step 3: Search for a copyright sign because the copyright could mean that is where the location is."

Even though students relied on prior knowledge and past experiences to judge the reliability of Dog Island, they had knowledge of strategies such as checking a secondary source, judging coverage or comprehensiveness, judging currency, and looking for a copyright to find out information about the author.

Ranking Dog Island as Somewhat Reliable

The students who ranked Dog Island as somewhat reliable also used prior knowledge to judge the accuracy of specific claims. Student 19, after spending a minute reading through the website, questioned the reliability of the entire site. She then states it is somewhat reliable because the website "doesn't even tell you where Dog Island is." She was looking for a specific detail about the location of the island. Her judgment was also influenced by the overall content of the page:

"I'm kind of confused, I was actually a little bit confused when I started reading it. I mean, it seems kind of strange that there would be an island with a bunch of dogs to have and all it has is like rabbits around and fish and all. It seems a little strange to me, so I'm going to say it's somewhat reliable. I'll say, the site Dog Island is somewhat reliable."

She then continues to look for information about Dog Island to complete the task, and when looking for the location of the island she labels a specific webpage within Dog Island as not reliable because it was too hard to find. Overall, it seems that Student 19 felt the site was not credible but still labeled it as somewhat reliable.

Student 29 began with a very deliberate process to judge the reliability of the website. He began by saying, "Who. Why. What. How. Okay. Why did they create it? Who created it?
Who?" He clearly had a plan to question the author. He then scans the copyright info and states, "I just looked at the bottom cause usually it always says who made it and stuff. But it only says Dog Island. And it's either that it's them who made it or they just advertising or something." Then he decides it is somewhat reliable because they only show dogs and no humans. As he looks for pictures of humans, he clicks on the facilities page and gets upset that dogs are organized by size, because his dog would be put in the smallest group. He then decides that because there are people, maybe it is reliable. Then he clicks on the FAQ page and decides the site is somewhat reliable: "I don't exactly trust other people to take care of my dog. My dog. Now, he is mine. So, it's not exactly my type of thing to send to an Island by themselves. I'd go with him." Then, after reading about how the dogs hunt, he decides it is somewhat reliable because dogs do not eat those "things."

Student 31, even though he ranked the website as somewhat reliable, did not believe Dog Island to be real. He became immediately skeptical when reading about dogs escaping their "tough urban lives." In his final email message, he attacks three claims: why would you get a dog and give it away; how can an island have unlimited space; and, finally, he reminds us that not all dogs get along.

Students who ranked Dog Island as somewhat reliable generally reached the same conclusions as students who ranked the website not reliable. However, students who ranked Dog Island as somewhat reliable did not have a clear definition of the word reliable in the context of website evaluation. More often than not, they simply defined reliable in terms of dependability. More telling were their examples of reliability. Student 05 described a friend: "Um, like, a friend, like, a true friend is reliable. They're there for you when you're, like, around when you need them and stuff." Student 17 identified, "Something you can trust.
That you know won't hurt you." Student 29 states that Google is reliable because you find things. Only Student 31 defined reliable as something that can be proven.

Students who ranked the website as somewhat reliable were more likely to recommend that students judge the overall content of a website rather than use specific critical evaluation strategies. Student 19 recommended strategies specific to Dog Island by detailing how he found information about the island's location. Student 29 states, "You can tell if the information is false by you knowing what they're saying is not true." Both Students 31 and 17 state that you should check the overall content and then judge the reliability by checking a secondary source, but neither student chose to do so.

Ranking Dog Island as Reliable

Only one student ranked Dog Island as reliable, so looking for patterns across the group is impossible. However, comparing Student 03 to the rest of the participants provides some insights. First of all, Student 03 judged the site as reliable because he defined reliability as any website that cannot be edited. He would judge the overall credibility of a website simply by whether it allowed user participation. He also recommended this as a skill to other people. Student 03 represents a stark example of errors of oversimplification (Spiro, Coulson, Feltovich, & Anderson, 1988). This was common among many of the students who judged the website somewhat reliable. They had a general awareness of specific strategies, but often used them incorrectly or out of context, or weighted superficial information such as copyright information too heavily.

Ranking of the World Wildlife Fund

Looking for patterns across the rankings of the World Wildlife Fund website is not really possible. Of the ten students who completed this protocol task, only six completed this portion of the think-aloud. Each think-aloud session was designed to take one class period.
Many students spent the entire session on the Dog Island portion. Compounding the issue, two students ranked the site as unreliable as a direct consequence of the wording in the protocol. In the task, the students had to find the World Wildlife Fund website with a black and white panda. Student 11 judged the website as unreliable because she could not find the logo, even though she visited over ten webpages on the WWF website. Another part of the task, designed to parse out the detection of bias, had students search for hunting information. Students commented that they could not find this information and that therefore maybe the site was unreliable.

Those students who did rank the WWF as reliable relied on prior knowledge and past experiences or on the comprehensive coverage of the website's content. Student 31 had direct experience with the WWF and wrote in his email response:

I consider this site very reliable because of my background with the WWF, I know that the WWF exists, and I have done some work with them, I donated to save some otters, I think. So from my background knowledge (schema) I know that this site is reliable. 1. I have been to this site before on my own Internet Excursions (spelling?) Also from my background experience with the WWF.

Student 17 also uses prior knowledge to judge the WWF as reliable, stating that she knows pandas are endangered and therefore the WWF must be reliable because it says it helps endangered animals. Student 19 ranks the website as reliable because "it really says a lot."

On the WWF task, students did put an emphasis on the author. It must be noted that the protocol asked students to locate the author on both Dog Island and the WWF. When using the hoax site, students often located the author and explained their purpose as profit driven. In judging the WWF website, a few students put greater emphasis on the author's role in shaping perspectives.
Student 05 mistakenly rejected the site as unreliable because the front page did not tell you enough about the author; thus, again, we see errors of oversimplification. Student 29, on the other hand, commented, "The people that made this website all put in different info about animals that they….specialized in. I could not find info about animals because they are trying to protect specific species and not trying to kill them."

Discussion

Under the premise that ten cases may not have been adequate for data saturation, we do not attempt to draw broad themes about similarities and differences between offline sourcing skills and the critical evaluation of websites. Instead, in order to explore connections, we frame the patterns found in these think-alouds within the existing literature on both offline and online texts.

The results of this study suggest that it is connections to texts, rather than the dissection of texts, that lead to success on a teacher-defined critical evaluation task. Across all the participants, the most frequent judgments were based on checking the accuracy of claims against what was already known. Students also identified missing information by checking content against their knowledge.

Interestingly, the students who met the criteria for success on the task, even though they used connections to texts in their decisions, also seemed to have greater awareness of critical evaluation strategies. The students stating that Dog Island was unreliable often verbally noted or recommended critical evaluation strategies taught in schools (Eisenberg & Berkowitz, 2000). At the same time, however, students who met the criteria for success did not always apply the strategies they suggested. This phenomenon was an interesting theme that emerged from the study. It seems that neither connections to text nor a taxonomy of strategies can stand alone in a pedagogy for teaching the critical evaluation of websites.
As noted in offline studies (Stahl, Hynd, Britton, McNish, & Bosquet, 1996; Perfetti, Britt, & Georgi, 1995), these differences cannot be explained by content knowledge alone. Similarly, a taxonomy of comprehension strategies could never account for the multilayered meanings students make (Fabos, 2008).<br />This intertwined nature of knowledge and strategy use highlights the multidimensional nature of comprehension. DeSchryver and Spiro (in press) point out that learning on the web requires “advanced web exploration techniques and opening mindsets.” Tierney (2009) described meaning making with digital texts as a process that takes both “artistry and agency.” Leu et al. (2004) noted that online reading comprehension requires not only skills and strategies but also dispositions. Thus, the transition from page to pixel may be increasing the multifaceted nature of comprehension (Hartman et al., in press). The artistry, skills, and exploration techniques that emerged from patterns in this study included the use of prior knowledge, content, and source information to make a judgment about a website. <br />Prior Knowledge<br />Prior knowledge is a central difference among readers that affects comprehension (RRSG, 2002). Readers use what they already know to construct meaning from texts (Afflerbach, 1986; Anderson & Pearson, 1984). Many elements noted in previous research were evident in the student think-alouds. Students applied general world knowledge (Bartlett, 1995) as they thought about domesticated dogs and geography. Students applied topic knowledge (Alexander & Jetton, 2002) as they evaluated the WWF website. They used knowledge of text structure (Goldman & Rakestraw, 2000) to look for copyright information or expected hyperlinks. <br />Similar to offline sourcing, prior knowledge affects critical evaluation of websites. Coiro and Dobler (2007), using think-aloud data, found that more successful online readers use topical information to find relevant sources. 
Yet, the Internet may be changing how readers use prior knowledge. Coiro (2007), using hierarchical regression analyses, found a significant interaction between prior knowledge and student performance on a measure of online reading comprehension. Prior knowledge had a smaller effect on students who performed well on the measure and the greatest effect on students who performed poorly. In this study, the students who had the most knowledge of strategies used prior knowledge more effectively to judge both specific claims and websites. <br />Content<br />Studies that investigated offline evaluation using multiple documents in history have found differences between experts and novices (Wineburg, 1991; Britt & Aglinskas, 2002). These studies found that evaluation of usefulness varied based on document type and expertise (Rouet et al., 1997). Furthermore, when students are asked to give their own opinion, they rarely integrate content (Stahl et al., 1996). Finally, students often use the amount of information to judge a source (Goldman, 2004).<br />Studies investigating the critical evaluation of websites have reached similar conclusions. First, people place different levels of trust in the content of different media and view websites as the least credible (Flanagin & Metzger, 2000; Princeton Survey Research, 2002). College-age participants (Tillotson, 2002) and middle school students (Coiro & Dobler, 2007) often rely on superficial content to reach their decisions. Tillotson (2002) interviewed 499 college students and found over half relied solely on content to evaluate a website. Coiro and Dobler (2007) conducted think-aloud protocols with 11 sixth graders and found that they used the amount of content to judge a website.<br />Research using both online and offline texts has found that use of content varies between experts and novices. The patterns in our study were similar. 
Students who met the criteria for success were more apt to identify missing content that would establish reliability, while less successful students relied on the overall amount of content. Another interesting pattern that emerged is the relationship between the use of content and the text itself. Students were more inclined to use the amount of text to judge the WWF site reliable and to judge the accuracy of claims on the hoax website. Therefore, using content to judge a website is not necessarily the mark of a neophyte. Content judgments can be correct. <br />It is important to note that a major difference between this verbal protocol task and previous studies was the role of multiple texts. This study relied on two texts that did not need to be integrated, in an effort to examine critical evaluation skills. Future research should investigate how students judge content when they have to read multiple websites to answer a question.<br />Author<br />Researchers seem to be in agreement that attending to the author is an important strategy for multiple document reading. Wineburg (1991) found that expert readers evaluate source characteristics before reading a document. Bråten, Strømsø, and Britt (2009), using hierarchical regression, found that sourcing knowledge may explain additional variance after accounting for prior knowledge. Even with the importance of sourcing as a domain skill and its possible relationship with comprehension, there is little evidence that students use sourcing skills when reading online texts.<br />The primary purpose and audience the creator had in mind when creating a website is another critical piece of information that can provide evidence of bias when evaluating the accuracy of online information (Damico & Baildon, 2007). Therefore, locating the creator of a website and their biographical information is one crucial piece of evaluating the accuracy of online information (Tilmann, 2003). 
Yet there is little evidence that common pedagogies such as checklists encourage students to look at the author of a website (Metzger, 2007; Zhang & Duke, 2007). This may indicate that author sourcing differs online for two very important reasons. First, when using printed texts, students assume some level of credibility (Flanagin & Metzger, 2000). Second, there is no standard format for websites; the author’s name cannot simply be found on the cover. Therefore, sourcing online requires location skills.<br />In this study the students were prompted to search out author information and decide why that author created the website. Therefore, conclusions about students’ naturalistic strategy use cannot be made. However, patterns in the data revealed some interesting findings. On the Dog Island page, most students accepted the founders as the creators of the website. These students listed either love for dogs or making money as motivating the authors. Even when the students judged the website as not reliable or “silly,” they still identified and accepted the authors’ reason for creating the website. On the WWF site, many students struggled to understand who authored the webpage. Only one student noted that it was probably a group of experts specializing in animals. Most telling, only two students considered how an author’s bias may influence the content on a page. They noted that a website created by an organization to help animals would not be the place to search for hunting information. No student turned to a search engine to find out more information about an author. Overall, the students’ lack of sourcing skills mirrors research using multiple offline texts. <br />Conclusion<br />Across the patterns of this study, it seemed students’ judgment of texts depended more on their connections to texts than on the dissection of texts. The students who were more successful on this task used their prior knowledge either to check the accuracy of claims or to highlight missing content. 
Yet, at the same time, even though students relied on prior knowledge, the more successful students had a greater awareness of critical evaluation skills. This paradox provides evidence that reading comprehension has evolved with the Internet.<br />The patterns in this study reflect results from investigations of offline reading. Specifically, studies examining the reading of multiple documents and author sourcing found that experts read with a more critical eye. Second, this study with seventh graders also shows similarities in critical evaluation strategy use when compared to high school and college-age readers. However, even though the findings from this study share common patterns with offline studies, the patterns also add evidence to a growing body of research suggesting that reading comprehension has changed.<br />In fact, this study adds to a growing consensus (Tierney, 2009) that reading online requires novel skills (RRSG, 2002). This study examined the use of knowledge and regulation (Paris, Wasik, & Turner, 1991) to critically evaluate websites. Hartman et al. (in press), in a review of reading comprehension, expanded on the notion of declarative, procedural, and conditional knowledge (Paris et al., 1991) to include specific knowledge necessary for online reading comprehension. They suggest that three types of knowledge have grown in importance for actively constructing meaning online: identity, location, and goal knowledge. Patterns that emerged from this study support these claims.<br />Identity Knowledge<br />Identity knowledge is defined as metacognitive knowledge about how authors “construct, represent, and project online identities” (Hartman et al., in press, p. 22). While attending to author credibility has always been key to offline reading (Stahl et al., 1996; Wineburg, 1991), rapidly changing forms of texts have made this issue more central. 
In this study students were often able to locate authors, but very few examined perspectives critically. Most telling is the number of errors of oversimplification (Spiro et al., 1988) evident as students searched for and discussed authors. For example, students may have incorrectly rejected a website as unreliable because they could not locate copyright information. They also simply accepted authors at face value or assigned authorship to any name they could find on a website. Very few students recognized that websites such as the WWF may have multiple authors united by an “institutional voice.” The connections students made to the text to judge a website also highlight the importance of considering how one’s own identity shapes the meaning of texts.<br />Location Knowledge<br />Location knowledge is both the ability to find information using an Internet browser and the ability to orient oneself within the architecture of the Internet (Hartman et al., in press). This requires both retrospective knowledge of where students have been and prospective knowledge of where they might go (Hartman et al., in press). This dual nature of location knowledge requires forward inferencing (Coiro & Dobler, 2007) as students choose links. The patterns in this study reveal that location knowledge is central to critical evaluation strategy use. Students more familiar with web architecture knew, for instance, that author information may be found by looking for frequently asked questions. Students more successful on the task were able to draw quicker inferences by scanning headings, links, and buttons on the website. Finally, one student did not succeed on the task because he could not find the Dog Island website, providing further evidence that locating information is a “bottleneck skill” (Henry, 2006) when reading online.<br />Goal Knowledge<br />Goal knowledge refers to the “sustained purpose for comprehending online” (Hartman et al., in press). 
Students who maintain reading goals and adjust those goals based on what they read will be better at critical evaluation tasks. Even in the context of this study, with a teacher-directed goal and decontextualized texts, there is evidence of goal knowledge. Because the Internet provides unlimited freedom (Burbules & Callister, 2000) and too many distractors and choices (Coiro, 2003), students in this study often lost track of the goal. This was evident in the amount of time students spent looking at “cute” pictures of dogs. Successful students, however, would refer back to the task and constantly make sure they reached their goals. <br />Implications for Practice<br />All ten participants in this study, who were drawn from a larger sample of good online readers, would benefit from additional instruction in the critical evaluation of websites. The students did not have a common vocabulary for questioning websites, never looked for a secondary source, and were prone to errors of oversimplification. In most language arts and English classrooms, students are rarely provided instruction in online reading comprehension (Coiro, 2003; Leu, McVerry, et al., 2008). More students are turning to the Internet as their primary source of information. Therefore, critical evaluation is an essential reading skill that must be taught in the curriculum.<br />In order to teach these skills, educators need to include opportunities for strategy exchange in the classroom. There is little evidence that metacognitive checklists improve critical evaluation strategy use (Metzger, 2007). This could provide further evidence that strategy exchange during online reading comprehension requires a greater level of collaboration and communication (Zawilinski, 2008). 
Teachers will need experience in using pedagogical practices such as Internet Reciprocal Teaching (Leu, Coiro, et al., 2008) or metacognitive software (Damico & Baildon, 2007).<br />Future Directions for Research<br />The relationship between prior knowledge and online reading comprehension skills and strategies needs to be explored. In this study, students who were aware of more comprehension strategies were more likely to use prior knowledge to judge the reliability of a website. Prior knowledge, when entered first in hierarchical regressions in document sourcing studies (Bråten, Strømsø, & Britt, 2009; Strømsø, Bråten, & Britt, 2009b), explained the greatest amount of variance among the predictor variables. In this study the connections students made to texts were critical to success. Studies should also be conducted that explore, rather than control for, prior knowledge. <br />Readers link all kinds of texts across time and space (Hartman, 1995), and the Internet has transformed the scope and speed of these links (Coiro, Lankshear, Knobel, & Leu, 2008). Efforts should be made to understand these links through verbal protocol analysis. These efforts should also include collecting raw data from existing protocol databases to look across age groups from a variety of theoretical lenses. Future verbal protocol analysis studies should also investigate the methodological impacts of different protocol designs. Some students in this study relied so heavily on the task directions that they could not complete the task successfully. The protocol heavily influenced some student responses; this was a deliberate effort to elicit self-reports of critical evaluation strategies. 
Methodological reviews should be conducted across online reading comprehension protocols to explore optimal conditions for increasing participants’ responses.<br />As efforts to conduct verbal protocol analysis in online environments continue, we must also be wary of creating false dichotomies (Moje, 2009) between offline and online reading. The Internet has shifted our literacy practices (Coiro et al., 2008) and created a multiplicity of text forms (Cope & Kalantzis, 2000). Efforts to investigate critical evaluation strategy use must also include case studies of ongoing inquiry in order to understand the interplay and differences of reading in all of its multiple forms.<br /> <br />References<br />Afflerbach, P. (1986). The influence of prior knowledge on expert readers' importance assignment process. In J. A. Niles & R. V. Lalik (Eds.), National reading conference yearbook, 35, Solving problems in literacy: Learners, teachers and researchers (pp. 30-40). Rochester, NY: National Reading Conference.<br />Afflerbach, P., Pearson, P., & Paris, S. G. (2008). Clarifying differences between reading skills and reading strategies. The Reading Teacher, 61(5), 364-373.<br />Agosto, D. (2002). Bounded rationality and satisficing in young people’s web-based decision making. Journal of the American Society for Information Science and Technology.<br />Alexander, P. A., & Jetton, T. L. (2002). Learning from text: A multidimensional and developmental perspective. In M. L. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research, Volume III (pp. 285-310). Mahwah, NJ: Erlbaum.<br />Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading comprehension. In P. D. Pearson (Ed.), Handbook of reading research (pp. 255-291). New York: Longman.<br />Anderson, R. C., Reynolds, R. E., Schallert, D. L., & Goetz, E. T. (1977). Frameworks for comprehending discourse. American Educational Research Journal, 14, 367-381. <br />Bartlett, F. C. 
(1995). Remembering: A study in experimental and social psychology. Cambridge, UK: Cambridge University Press. (Original work published 1932).<br />Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters: Examining the role of source evaluation in students’ construction of meaning within and across multiple texts. Reading Research Quarterly, 44(1), 6–28.<br />Britt, M. A., & Aglinskas, C. (2002). Improving students’ abilities to identify and use source information. Cognition and Instruction, 20(4), 485-522. <br />Burbules, N. C., & Callister, T. A., Jr. (2000). Watch IT: The risks and promises of information technologies for education. Boulder, CO: Westview Press.<br />Bogdan, R. C., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods (5th ed.). Boston: Allyn & Bacon.<br />Brem, S., Russell, J., & Weems, L. (2001). Science on the Web: Student evaluations of scientific arguments. Discourse Processes, 32(2).<br />Coiro, J. (2003). Rethinking comprehension strategies to better prepare students for critically evaluating content on the Internet. The NERA Journal, 39, 29-34.<br />Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42, 214-257.<br />Cope, B., & Kalantzis, M. (2000). Multiliteracies: Literacy learning and the design of social futures. New York: Routledge.<br />Damico, J., & Baildon, M. (2007a). Reading web sites in an inquiry-based social studies classroom. In D. Row, R. Jimenez, D. Compton, D. Dickenson, Y. Kim, K. Leander, & V. Risko (Eds.), 56th Yearbook of the National Reading Conference (pp. 204-217). Oak Creek, WI: National Reading Conference.<br />Damico, J., & Baildon, M. (2007b). Examining ways readers engage with websites during think-aloud sessions. Journal of Adolescent & Adult Literacy, 51(3), 254–263.<br />DeSchryver, M., & Spiro, R. J. 
(in press). New forms of deep learning on the Web: Meeting the challenge of cognitive load in conditions of unfettered exploration in online multimedia environments. To appear in R. Zheng (Ed.), Cognitive effects of multimedia learning. Hershey, PA: IGI Global.<br />Eisenberg, M. B., & Berkowitz, R. E. (1999). Teaching information and technology literacy skills: The Big 6 in elementary schools. Worthington, OH: Linworth.<br />Fabos, B. (2008). The price of information: Critical literacy, education, and today’s Internet. In J. Coiro, M. Knobel, D. Leu, & C. Lankshear (Eds.), Handbook of research on new literacies (pp. 839-870). Mahwah, NJ: Erlbaum.<br />Flanagin, A. J., & Metzger, M. J. (2000). Perceptions of Internet information credibility. Journalism & Mass Communication Quarterly, 77(3), 515–540.<br />Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382. <br />Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., et al. (2001). What makes web sites credible? A report on a large quantitative study. Paper presented at the Computer-Human Interaction Conference, Seattle, WA.<br />Fox, S. (2005). Health information online. Washington, DC: Pew Internet & American Life Project. Retrieved September 1, 2007.<br />Fox, S., & Rainie, L. (2002). Vital decisions: How Internet users decide what information to trust when they or their loved ones are sick. Washington, DC: Pew Internet & American Life Project. Retrieved June 5, 2008.<br />Fraser, H. (2004). Doing narrative research: Analyzing personal stories line by line. Qualitative Social Work, 3(2), 179–201.<br />Goldman, S. R. (2004). Cognitive aspects of constructing meaning through and across multiple texts. In N. Shuart-Ferris & D. M. Bloome (Eds.), Uses of intertextuality in classroom and educational research (pp. 313-347). Greenwich, CT: Information Age Publishing. <br />Goldman, S. R., & Rakestraw, J. A., Jr. (2000). 
Structural aspects of constructing meaning from text. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 311-336). Mahwah, NJ: Erlbaum.<br />Graesser, A. C., Wiley, J., Goldman, S. R., O’Reilly, T., Jeon, M., & McDaniel, B. (2007). SEEK Web tutor: Fostering a critical stance while exploring the causes of volcanic eruption. Metacognition and Learning, 2, 89-105.<br />Hartman, D., Morsink, P. M., & Zheng, J. (in press). From print to pixels: The evolution of cognitive conceptions of reading comprehension. In B. Baker (Ed.), The new literacies. New York: Guilford.<br />Henry, L. A. (2006). SEARCHing for an answer: The critical role of new literacies while reading on the Internet. The Reading Teacher.<br />Leu, D. J., Coiro, J., Castek, J., Hartman, D., Henry, L. A., & Reinking, D. (in press). Research on instruction and assessment in the new literacies of online reading comprehension. To appear in C. C. Block, S. Parris, & P. Afflerbach (Eds.), Comprehension instruction: Research-based best practices. New York: Guilford Press.<br />Leu, D. J., McVerry, J. G., O’Byrne, W. I., Zawilinski, L., Castek, J., & Hartman, D. K. (2008). The new literacies of online reading comprehension and the irony of No Child Left Behind: Students who require our assistance the most, actually receive it the least. In L. Morrow, R. Rueda, & D. Lapp (Eds.), Handbook of research on literacy instruction: Issues of diversity, policy, and equity (pp. 173-194). New York: Guilford. <br />Leu, D. J., O’Byrne, W. I., Zawilinski, L., McVerry, J. G., & Everett-Cacopardo, H. (2009). Expanding the new literacies conversation. Educational Researcher, 38, 264-269.<br />Merriam, S. B. (1988). Case study research in education: A qualitative approach. San Francisco: Jossey-Bass.<br />Metzger, M. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. 
Journal of the American Society for Information Science and Technology, 58.<br />Metzger, M. J., Flanagin, A. J., & Zwarun, L. (2003). College student Web use, perceptions of information credibility, and verification behavior. Computers & Education, 41, 271-290.<br />Moje, E. B. (2009). A call for new research on new and multi-literacies. Research in the Teaching of English, 43, 348-362.<br />Paris, S. G., Wasik, B. A., & Turner, J. C. (1991). The development of strategic readers. In R. Barr, M. L. Kamil, P. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (Vol. 2, pp. 609-640). New York: Longman.<br />Perfetti, C. A., Britt, M. A., & Georgi, M. C. (1995). Text-based learning and reasoning: Studies in history. Mahwah, NJ: Erlbaum.<br />Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Erlbaum.<br />Princeton Survey Research Associates International. (2005). Leap of faith: Using the Internet despite the dangers. Princeton, NJ: Author.<br />RAND Reading Study Group. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: Author.<br />Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the Web. Journal of the American Society for Information Science and Technology, 53(2), 145–161.<br />Rieh, S. Y., & Belkin, N. J. (1998). Understanding judgment of information quality and cognitive authority in the WWW. In C. M. Preston (Ed.), Proceedings of the 61st ASIS annual meeting (pp. 279–289). Silver Spring, MD: American Society for Information Science.<br />Rouet, J. F., Favart, M., Britt, M. A., & Perfetti, C. A. (1997). Studying and using multiple documents in history: Effects of discipline expertise. Cognition and Instruction, 15(1), 85-106. <br />Sanchez, C. A., Wiley, J., & Goldman, S. R. (2006). Teaching students to evaluate source reliability during Internet research tasks. 
Proceedings of the Seventh International Conference on the Learning Sciences (pp. 662-666). Bloomington, IN.<br />Spiro, R. J., Coulson, R. L., Feltovich, P. J., & Anderson, D. K. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains (Tech. Rep. No. 441). Urbana-Champaign, IL: University of Illinois, Center for the Study of Reading. <br />Stahl, S. A., Hynd, C. R., Britton, B. K., McNish, M. M., & Bosquet, D. (1996). What happens when students read multiple source documents in history? Reading Research Quarterly, 31(4), 430–456.<br />Strømsø, H. I., & Bråten, I. (2009a). Beliefs about knowledge and knowing and multiple-text comprehension among upper secondary students. Educational Psychology, 29, 425-445.<br />Strømsø, H. I., Bråten, I., & Britt, M. A. (2009b). Reading multiple texts about climate change: The relationship between memory for sources and text comprehension. Learning and Instruction.<br />Tate, M., & Alexander, J. (1996). Teaching critical evaluation skills for World Wide Web resources. [Electronic version] Computers in Libraries.<br />Tierney, R. J. (2009). The agency and artistry of meaning makers within and across digital spaces. In S. E. Israel & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 261–288). New York: Routledge.<br />Tillotson, J. (2002). Website evaluation: A survey of undergrads. Online Information Review, 26(6).<br />Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83, 73-87. <br />Zawilinski, L., Carter, A., O’Byrne, I. W., McVerry, G., Nierlich, T., & Leu, D. (2007). Toward a taxonomy of online reading comprehension strategies. Paper presented at the 57th Annual National Reading Conference, Austin, TX.<br />Zhang, S., & Duke, N. K. (2007). Instruction in the WWWDOT approach to improving websites: An experimental study with 4th and 5th graders. 
Paper presented November 28, 2007, at the National Reading Conference, Austin, TX.<br /> <br />Student | Dog Island Judgment | Frequency of Content | Frequency of Prior Knowledge<br />Student 1 | Not reliable | 0 | 3<br />Student 7 | Not reliable | 0 | 5<br />Student 11 | Not reliable | 3 | 1<br />Student 5 | Somewhat reliable | 1 | 0<br />Student 17 | Somewhat reliable | 1 | 1<br />Student 19 | Somewhat reliable | 8 | 0<br />Student 29 | Somewhat reliable | 2 | 3<br />Student 31 | Somewhat reliable | 1 | 4<br />Student 3 | Reliable | 3 | 0<br />