Trustworthiness Evaluation, Controversy, and Reading Ability:
A Study of Eighth Graders Evaluating Multiple Conflicting Sources
Angela K. Johnson and Ralph T. Putnam
Abstract
The present study examined how eighth graders evaluate the trustworthiness of sources
presenting opposing views on a controversy. Data included students’ prior opinions on the issue,
trustworthiness ratings of five offline web articles, justifications for those ratings, and a reading
comprehension measure. Students differentiated sources by trustworthiness, but reading ability
correlated with greater differentiation. Prior opinion influenced ratings; this was somewhat
mediated by reading level. Students attended to content factors most frequently, with high
readers attending to authorship more than other readers. Data suggest distraction from the
evaluation task among average and low readers. Implications point to the benefits of teaching
attention to source authorship and providing scaffolds to mediate the complexity of critical
evaluation in the context of reading about controversial issues.
Eighth Graders’ Critical Evaluation of Sources about a Controversial Issue
With large numbers of schools providing “one to one” and “bring your own device”
access, the web has become the front door to information in the classroom. In exchange for
access to vast quantities of information, however, comes the responsibility to determine what
information is reliable and who can be trusted in this immense and free domain. No longer can
the public depend solely on publishing companies, editors, journalists, or librarians to vet
sources; users must also learn to critically evaluate information themselves.
Against this backdrop, educational researchers have scrambled to understand the extent to
which students are capable of and willing to critically evaluate web sources (e.g., Hargittai,
Fullerton, Menchen-Trevino, & Thomas, 2010; Head & Eisenberg, 2010; Metzger & Flanagin,
2013). Studies suggest that students have difficulty determining and applying effective criteria
for evaluating their trustworthiness (Coombes, 2008; Kiili, Laurinen, & Marttunen, 2007; Kim &
Sin, 2011; Kuiper, Volman, & Terwel, 2005). Indeed, evaluation seems to be one of the more
challenging aspects of online reading (Colwell, Hunt-Barron, & Reinking, 2013; Walraven,
Brandgruwel, & Boshuizen, 2009).
At the same time, the Common Core State Standards (Common Core State Standards for
English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects,
2010), still adhered to by 38 states (Norton, 2017), prioritize the rigorous expectation that
students assess and construct arguments with sound claims, evidence, and reasoning (Key Shifts
in English Language Arts, 2010). In pursuit of these goals, students frequently research
controversial issues on the web, and are expected to process and evaluate arguments while also
evaluating trustworthiness. Little is known about how the cognitive challenges of reading
multiple texts with conflicting views may affect middle-grade students’ ability to evaluate the
trustworthiness of sources. If prior opinions significantly affect their ability to do so, students
may be particularly vulnerable to bias and misinformation about controversial issues. Since an
effective democracy rests on an informed electorate, it is important to understand the challenges
inherent in evaluating sources about controversial issues and to teach students developmentally
appropriate methods for overcoming them. This study examines if and how eighth graders’ prior
opinions and reading abilities affect the criteria they apply when evaluating the trustworthiness
of sources about a controversial issue—whether school personnel should be allowed to carry
concealed weapons in schools.
Theoretical Framework
Several theoretical constructs inform the present study. Theories of online text processing
are grounded in theories of offline reading, but apply these to learning on the web, where readers
are more likely to confront multiple conflicting texts of varying trustworthiness. These theories clarify the specific skillsets needed to meet such challenges. Research on persuasion also informs our understanding of how readers process multiple conflicting texts and is therefore pertinent to the present study.
Finally, theoretical constructs of metacognition bear on our understanding of a reader’s capacity
to manage the challenges of both the learning task and the learning environment. An overview of
these constructs follows.
Online Text Processing
Hartman, Morsink, and Zheng (2010) asserted that a complication of online reading
resides in the “multiple plurals” (p. 140) of online texts. The various elements that combine to
establish meaning—for example, reader, author, task, context, and so forth—are themselves
plural and continually shifting, and therefore confound the act of meaning construction.
Hartman and colleagues proposed that a reader must integrate three types of knowledge in
comprehending online text: (a) knowledge of identity—knowing who wrote a text and how
authors “construct, represent, and project online identities” (p. 146); (b) knowledge of location—
knowing how to “orient oneself in a website” and “in cyberspace” (p. 148); and (c) knowledge of
one’s own goal—knowing why one is reading and remaining focused on that goal. Application
of the first may involve assessment of an author's expertise and trustworthiness, while application of the third may involve assessment of a site's match to reading goals.
Studies have shown that, when evaluating sources, students attend to information
relevance more than other criteria (Kuiper et al., 2005; Mothe & Sahut, 2011), signifying that
they do evaluate sources with relevance to reading goals. This is in accordance with Hartman,
Morsink, and Zheng’s (2010) third type of knowledge, knowledge of goal. Other studies have
also shown readers to be task-oriented while reading online, suggesting they are capable of
remaining focused on broader goals (Kiili et al., 2007; Ladbrook & Probert, 2011). Of the three
categories of knowledge, students often lack—or fail to apply—knowledge of identity, or
authorship (Bråten, Strømsø, & Britt, 2009; Coiro, 2007; Zawilinski et al., 2007). Lack of
attention to authorship is particularly problematic because sourcing—defined by Rouet (2006) as
“identifying a number of parameters that characterize the author and conditions of production of
the information” (p. 177)—has been found to improve students’ comprehension of multiple
conflicting online texts (Strømsø, Bråten, & Britt, 2010).
In fact, research on the processing of multiple texts suggests that sourcing is an important
element of comprehension. Rouet (2006) posited that the mental representation an expert reader
creates while reading multiple texts includes two components, or nodes. The content node
comprises a mental representation of the content of a single source, integrating the information
encoded in the text and the prior knowledge of the reader. The source node is a mental
representation of the source and its author, including identification, affiliations, expertise, bias,
and prior knowledge of these. The source representations of individual texts combine to create a
source model, which would reflect, for example, whether two sources agreed or disagreed, and
whether one author was more expert than another. Content and source representations combine
to create a situation model, a synthesized understanding of the topic. To successfully construct a
situation model, the reader identifies the source of each individual text, compares information in
one text to that of another, and maintains a connection between source nodes and content nodes.
Studies show that expert readers successfully attend to source characteristics to help them
synthesize multiple conflicting texts (Wineburg, 1991), but that younger readers have difficulty
keeping track of connections between source and content nodes (Golder & Rouet, 2000, in
Rouet, 2006). Other studies found that readers overlook author and source information as they
evaluate web sources (Bråten et al., 2009; Coiro, 2007; Zawilinski et al., 2007).
Findings suggest, therefore, that enabling readers to attend to the relationship between the author,
purpose, and content of a message is an essential step toward effective evaluation, a claim
reflected in Coiro’s (2014) recommendation that students evaluate web sources based on four
criteria: (a) content relevance (the extent to which information and its presentation meet the
needs of the reader); (b) content accuracy (the extent to which information can be viewed as
factual and accurate); (c) author reliability (the extent to which the author can be trusted to
provide reliable information); and (d) author stance (the perspective or bias of the author, which
may influence his or her message).
Persuasion Theory
Another body of literature informing the present study involves theories of persuasion. In
studies of persuasive text comprehension, readers consistently demonstrate biased assimilation,
the evaluation of arguments in favor of their personal views (Kobayashi, 2010; Lord, Ross, &
Lepper, 1979). Misinformed readers will also be resistant to correction, a phenomenon known as
the continued influence effect (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012).
According to Lewandowsky et al., the effect may result from the strength of preliminary mental
models constructed during the initial exposure to a series of events, facts, or processes. If a
mental model was initially constructed with misinformation, corrections will create gaps in the
model, and such gaps may be less desirable than retaining a complete, albeit misinformed,
model. In addition, easy-to-process information is accepted more readily than difficult-to-process
information. This would include ideas that are expressed simply, but also ideas familiar to the
reader (Lewandowsky et al., 2012). In the context of reading persuasive texts, it would follow
that more easily understood arguments would carry greater weight than those that are more
difficult to understand. In fact, since the structure of argumentative text makes it inherently more
challenging to process than other types of text (Haria & Midgette, 2014; Larson, Britt, & Larson,
2004), one would expect evaluation of argumentative texts in general to be difficult. Wiley and
Bailey (2006) found little evidence of student dyads using evaluative strategies when reading
argumentative texts, and Hsieh and Tsai (2013) found that cognitive load affected the ability of
readers to apply advanced evaluation strategies.
In sum, readers may approach controversial texts with conflicting purposes: On the one
hand a reader instinctively seeks affirmation for prior beliefs; on the other, the evaluation of
trustworthiness requires a more objective assessment. If the source presents an opinion consistent
with one’s own, the reader may accept its argument uncritically; alternatively, if the source
presents an opposing opinion, he or she may attend more closely in order to counterargue, a form of assimilation bias known as disconfirmation bias. A second assimilation bias, the confirmation
bias, in which participants seek out information to confirm their existing views and dismiss
information that conflicts with their views, has also been found (Kobayashi, 2010b; Taber &
Lodge, 2006; van Strien et al., 2014; Winkielman, Huber, Kavanagh, & Schwarz, 2012). In
either case, the reader’s assessment of trustworthiness is affected by prior opinion. As
counterintuitive as it may be, readers of persuasive texts would do well to bracket off their
personal views to evaluate trustworthiness from as objective a stance as possible. In one study, students who were instructed to use scientific standards of evidentiary support to
critically evaluate sources for and against human-induced climate change showed significant
changes in their perceptions of the issue (Lombardi, Sinatra, & Nussbaum, 2013), suggesting
they drew effective conclusions regarding the reliability of the texts they examined. The
implication is that rational objectivity is one important precursor for effectively evaluating texts
about controversial issues.
Metacognitive Processes in Evaluation
Metacognition involves the ability to monitor and self-regulate one’s thinking and
learning, and has been likened to a toolbox: The skilled user knows which tools to use in
particular circumstances, and alleviates some of his or her workload by efficient selection and
application of those tools (Ford & Yore, 2012). In the cognitive workspace, metacognition is an
executive function allowing for “planning, monitoring, and regulating actions and command of
materials to respect the spatial limitations” of memory, thereby offloading certain cognitive
demands to allow greater cognitive space for message processing (Ford & Yore, 2012, p. 258).
In addition, metacognition is generally considered to be a “significant path to critical
thinking” (Magno, 2010, p. 137). According to Kuhn and Dean (2004), critical thinking requires
“meta-level operations” (p. 270) that consist of both separate and integrated metacognitive skills
functioning at the executive level. The executive operations that serve metacognition include
declarative, conditional, and procedural knowledge; planning, monitoring, and debugging
strategies; information management; and evaluation functions (Magno, 2010). To exemplify,
Magno offers the following:
a meta-level connection occurs when the individual evaluates an argument, . . . makes
sure that they are well informed about the content (declarative knowledge), plans how to
make the argument (planning and procedural knowledge), monitors whether they
understood well the content to be evaluated (monitoring), and potently evaluates the
tasks. (p. 149)
In the context of evaluating sources that present arguments, executive function operates on
multiple levels: on one level the argument itself must be evaluated for cogency; on another level
the source must be evaluated for both relevance and trustworthiness. At the same time, these
functions must occur while bracketing off the reader’s personal bias from his or her evaluation to
retain an objective perspective (Haria & Midgette, 2014), another layer of complexity requiring
metacognitive skill and cognitive resources.
The objectivity that source evaluation requires appears to be facilitated by metacognitive
scaffolds. Lombardi et al. (2013) implemented a successful intervention for supporting the
evaluation of arguments in which students constructed evidence maps for opposing viewpoints.
Metacognition is also associated with the evaluation of source trustworthiness in general. Mason,
Boldrin, and Ariasi (2009) examined the influence of epistemic metacognition, defined as “a
reflective activity about knowledge and knowing” (p. 67), on the evaluation of web sources by
middle-school students. They found epistemic metacognition to be modestly associated with
higher-level reflection on the justification of knowledge on the web. Epistemic metacognition
also predicted the students’ critical comparison of information from several sources. Similarly,
Kiili, Laurinen, and Marttunen (2009) found metacognitive skills essential for effective
comprehension on the web, despite the fact that even competent readers evaluated information
relevance more often than they evaluated credibility. Recognizing the complexities of online
inquiry, Zhang and Quintana (2012) designed a digital support system for scaffolding students’
metacognitive processes, which they found to facilitate a “fewer-but-deeper pattern” of reading
(p. 194) in which students read fewer sources but spent more time on each. Although Zhang and
Quintana did not specifically examine evaluation behaviors, their results suggest that
metacognitive supports allow for deeper processing, a likely prerequisite for evaluation.
Therefore, although metacognition may not necessarily lead to evaluation, evaluation requires
some level of metacognition.
Summary
Taken in sum, the theoretical framework outlined above suggests that evaluating the
trustworthiness of sources presenting multiple views on controversial issues presents several
complexities, complicated even more by a reader’s prior opinions on the topic. These are
outlined as follows:
1. The processing of multiple texts, an inherent element of reading on the web, is more
challenging than the processing of single texts, requiring the reader to maintain an active
link between source and content models to construct an overall situation model.
2. The processing of persuasive texts is more challenging than the processing of narrative
and simple expository texts, requiring more cognitive effort to establish intratextual
connections.
3. The evaluation of multiple conflicting texts presents specific challenges to readers with
prior opinions on the topic, requiring strong executive function skills to evaluate
argumentative and one-sided texts objectively.
4. The process of evaluating sources is itself a higher-order skill, requiring metacognition in
the form of executive function processes.
It becomes clear that evaluating sources in the context of research on controversial issues
requires considerable metacognitive proficiency to coordinate and monitor executive function
processes. Despite this fact, it is common for students to conduct web research on controversial
issues of interest to them, because such activities heighten student engagement, serve the
learning goals of the Common Core State Standards, and build critical thinking capacity.
Research Questions
The purpose of this study was to examine how eighth-grade students grappled with the
challenges of evaluating multiple conflicting sources on a controversial issue. Although
numerous studies have examined students’ ability to evaluate the trustworthiness of web sites
(e.g., Goldman, 2011; Wopereis & van Merriënboer, 2011) and others have examined how
students who hold prior opinions synthesize texts presenting conflicting viewpoints (Bråten et
al., 2009; van Strien et al., 2014), few have examined how students evaluate the trustworthiness
of sources presenting conflicting viewpoints on controversial issues about which students have
prior opinions. Against the theoretical backdrop outlined above, the present study was designed
to examine these questions:
1. Are eighth-grade students able to effectively rate the trustworthiness of sources that differ
in quality with regard to author expertise and information referencing?
2. Are eighth-grade students able to set aside their personal opinions on an issue to
objectively evaluate sources that differ in quality with regard to author expertise and
information referencing?
3. By what criteria do eighth-grade students decide what sources to trust while examining
sources that differ in quality with regard to author expertise and information referencing?
4. Do eighth-grade students with differing abilities in reading comprehension apply
evaluation criteria differently?
Method
This study was a descriptive, task-based analysis to determine how students evaluate the
trustworthiness of sources presenting conflicting sides of a controversial issue. Study
participants included 81 eighth graders (42 males and 39 females) in a required language arts
course taught by the first author in a rural-suburban public school in the Midwestern United
States. As measured by qualification for free and reduced school lunch, 29% of the school
population was low income. All students spoke English as a first language.
Data Collection
Students engaged in a task to determine the trustworthiness of articles from the Web that
presented opposing viewpoints on the issue of allowing school employees to carry concealed
weapons in schools. Students examined five articles (see Table 1) in random order: two articles
in favor of allowing school personnel to possess concealed guns in schools (Lott, 2012; Johnston,
n.d.), two articles against (Gorman-Smith & McLaughlin, 2012; Karoli, 2012), and one neutral
article (FactCheck.org, 2012). Of the two pro and two con articles, one was a high-quality source
and one was a low-quality source, based on author expertise and information referencing.
Specifically, authors of the two low-quality sources were individuals without credentials that
would qualify them as experts on the topic of guns in schools. The professional identity of one of
those authors was not revealed, and the profession of the other (a minister and medical doctor)
reflected no expertise on the general topic of gun control or on the specific topic of guns in
schools. Both sources considered low quality contained strongly emotional appeals but listed no
references. In contrast, the high-quality sources clearly revealed the authors’ educational and
professional backgrounds, which in both cases provided some indication of expertise on the
issue. These included a former employee of the U.S. Sentencing Commission and a professor
from the University of Chicago’s School of Social Service Administration. In addition, both
high-quality sites contained references for information and were measured in tone. It should be
noted, however, that even the sources considered high quality in the study leaned strongly toward
a particular stance. In the context of researching controversial issues, it is typical for sources to
be partial. Our concern was whether, under such conditions, students would use author and
referencing information that was readily available on the page to help ascertain the
trustworthiness of the sources. Several web site genres, including newspaper opinion columns,
blogs written by religious and special interest groups, and the myth-debunking page of a
nonpartisan research foundation, were included. The sites were presented as screen shots in their
original web form but not linked to the Internet while students viewed them; although this made
the task less authentic in that students were unable to search or click on links, it allowed for the
measurement of students’ attention to the most obvious authorship and referencing features—
those that required no further navigation or investigation beyond the first page. The high-quality
sites were chosen in part because they provided ample information about the authors to inform
students’ evaluations of trustworthiness, whereas the low-quality sites provided little or no such
information. Removing the complexity and possible distractions of web space navigation
allowed for a controlled examination of whether students would consider authorship and
referencing features when given easily accessible information and adequate time to do so.
Data collection spanned three class sessions. On the first day, the teacher read aloud, while students read silently, a two-paragraph summary presenting opposing sides of the issue, and answered questions regarding vocabulary to clarify the meaning of the passage. Students then
circled a statement indicating whether they agreed, disagreed, or felt neutral about the assertion:
“Schools would be safer if school employees, including teachers, were allowed to carry
concealed weapons in school.” Students did not discuss their opinions on the issue, and surveys
were collected immediately.
The teacher then conducted a 15-minute class discussion on web site trustworthiness. She
first asked students to define trustworthiness in general, and then asked how they would define it
in describing web sites. Students defined trustworthiness in terms of people who are “able to be
trusted” or who “you can count on.” Students responded that trustworthy web sites “tell the
truth,” “don’t make up lies to sell you things,” and “have facts, not just stuff people made up.” It
was agreed that trustworthy sites should be used for school assignments while less trustworthy
sites should be avoided.
The next day, students completed the evaluation task in the school computer lab. In one
50-minute period, students examined and rated the trustworthiness of five sources on a 5-point
Likert scale from very trustworthy to not at all (see Appendix A). Trustworthiness was defined in
the directions as “able to be relied on as honest and truthful.” After rating each site, students
listed all the reasons for their rating in a bulleted list.
Data Analysis
Data for quantitative analysis included each student’s (a) prior opinion rating;
(b) trustworthiness rating of the two pro and two con sites; and (c) reading level as determined
by scores on the reading comprehension portion of the Measures of Academic Progress (MAP) test, administered online by the Northwest Evaluation Association (NWEA). Ratings of
the neutral stance site were excluded. Students were divided into reading groups using divisions
suggested by the NWEA (Comparative Data to Inform Instructional Decisions, 2011), whereby
students scoring at least .5 standard deviation above the grade-level norm were labeled “higher achievement,” and those scoring at least .5 standard deviation below the norm were labeled “lower achievement.” Students falling in the range between were considered “average-achieving readers.” We ran a repeated-
measures ANOVA with Site Quality (high or low) and Site Stance (pro or con) as within-
subjects factors, and Reading Level (low, average, high) and Prior Opinion (con, neutral, pro) as
between-subjects factors.
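To make the design concrete, the following sketch illustrates how such a mixed within- and between-subjects analysis could be set up in Python with the pandas and pingouin libraries. It is a minimal illustration rather than the analysis script used in the study: the data file and column names are hypothetical, and the ANOVA call is simplified to one within-subjects factor (site quality) and one between-subjects factor (reading level).

```python
# Illustrative sketch only, not the analysis script used in the study.
# Assumes a long-format table (hypothetical file and column names):
# one row per student x source, with columns student, reading_level,
# prior_opinion, site_quality, site_stance, and rating (1-5).
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("ratings_long.csv")

# Cell means analogous to those reported in the Results section
print(ratings.groupby(["reading_level", "site_quality"])["rating"].mean())

# Simplified mixed ANOVA: one within-subjects factor (site quality) and
# one between-subjects factor (reading level), averaging over stance.
# The full design reported here crosses two within- and two
# between-subjects factors, which calls for a more general linear model.
per_cell = (ratings
            .groupby(["student", "reading_level", "site_quality"], as_index=False)
            ["rating"].mean())
aov = pg.mixed_anova(data=per_cell, dv="rating", within="site_quality",
                     subject="student", between="reading_level")
print(aov)
```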
Qualitative analysis data drew on the pre-task survey of students’ opinion and the
evaluation criteria listed by students to justify their ratings of the five sites, including the neutral
stance site. Analysis of evaluation criteria students listed involved a grounded coding process
(Glaser & Strauss, 2009). Codes emerged through constant comparative analysis, and responses
were attributed to existing codes until the need for new codes was exhausted. A second rater
coded 15% of the student responses for evaluation criteria, with interrater agreement of 77%.
Appendix B contains a list of final codes, with a description and examples of each.
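As a point of reference for the agreement figure reported above, the sketch below shows how simple percent agreement (and a chance-corrected alternative) could be computed over paired code assignments. The code labels and ratings shown are hypothetical examples, not the study data.

```python
# Illustrative sketch of interrater agreement on evaluation-criteria codes;
# the paired code assignments below are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["content_accuracy", "author_reliability", "task_goal_minus", "other"]
rater_2 = ["content_accuracy", "author_stance",      "task_goal_minus", "other"]

# Simple percent agreement, the measure reported in the study (77%)
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"Percent agreement: {agreement:.0%}")

# A chance-corrected alternative (not reported in the study)
print("Cohen's kappa:", cohen_kappa_score(rater_1, rater_2))
```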
Results
The first two research questions concerned the students’ ability to determine the
trustworthiness of web sites about controversial issues, and were addressed by the quantitative
analysis. The subsequent qualitative analysis presents results relevant to the questions regarding
specific criteria students considered while rating the sites’ trustworthiness.
Source Quality
The ANOVA revealed a significant main effect of source quality on source ratings, F(1, 72) = 38.15, p < .001, with a mean trustworthiness rating of 3.79 for high-quality sources and 3.06
for low-quality sources. Overall, students distinguished between the more and less trustworthy
sources. Their ability to distinguish between them, however, was influenced by reading ability,
evidenced by a significant interaction between source quality and reading level, F(2, 72) = 4.45, p = .015 (see Figure 1). High-level readers rated the low-quality sources lower (M = 2.72)
than did the low-level readers (M = 3.19), and rated the high-quality sources higher (M = 3.91)
than did the low-level readers (M = 3.59). Thus, all subgroups differentiated correctly between
high- and low-quality sources, with stronger readers differentiating them more clearly.
--------------------------------------------------
Insert Figure 1 About Here
--------------------------------------------------
Source Stance
There was an unexpected main effect of source stance on source ratings, F(1, 72) = 17.03, p < .001, with a mean trustworthiness rating of 3.14 for the con sources and 3.71 for the
pro sources. Students rated the low-quality pro source (M = 3.56) a full point higher than the
low-quality con source (M = 2.56), but rated the high-quality pro source (M = 3.90) only slightly
higher than the high-quality con source (M = 3.79). One explanation for this may be that the
low-quality pro source was written by a Christian minister and medical doctor identified as “Dr.
Johnston,” whom students may have viewed as more trustworthy due to his vocation as minister,
physician, or both. Conversely, the author of the low-quality con article was identified only by
the name “Karoli,” a “card-carrying member of We the People.” While the minister-doctor’s
profession does not guarantee his expertise on the topic of guns in schools, students may have
viewed him more favorably due to his religious affiliations and/or his medical training, even
though Dr. Johnston’s formal education and credentials were not revealed on the page. The rate
of opinion change from pre- to post-task also supports the supposition that students found pro
sources to be generally more convincing than con sources, as more students’ opinions moved
toward a pro stance (n = 30) than moved toward a con stance (n = 18) after completing the task.
Prior Opinion
Students’ evaluations of trustworthiness were influenced by a confirmation bias,
indicated by a significant interaction between source stance and prior opinion, F(2, 72) = 4.977, p =
.009. Overall, students rated sources with which they agreed (M = 3.74) higher than sources with
which they disagreed (M = 3.24). As seen in Figure 2, however, this was not true across all
groups. Students who were against guns in schools prior to the task rated the pro-gun sources slightly higher than they rated the con sources.
--------------------------------------------------
Insert Figure 2 About Here
--------------------------------------------------
A closer look at the mean ratings for each site revealed that the low-quality pro-gun
source was rated higher on average than the low-quality con source by all students regardless of
opinion. The ratings of the low-quality pro-gun source were apparently inflated across all groups,
which may explain the unexpectedly high rating of the low-quality pro source by con stance
students. Taking this inflation into account, Figure 2 does show trends reflecting a stronger trust
of sources that align with student opinions and a stronger distrust of those that do not. Reading
level played a role, evidenced by a significant three-way interaction among source stance, prior opinion, and reading level, F(4, 72) = 4.866, p = .002. Low readers’ mean ratings of sources with which they
agreed were .75 higher than ratings of sources with which they disagreed. The difference in
mean ratings was .63 for average readers and .02 for high readers. As student reading level
increased, so did students’ ability to separate prior opinion from the evaluation of a source’s
trustworthiness.
Evaluation Criteria
Through our qualitative analysis we examined the criteria students used to judge the
trustworthiness of the five sources. Of the categories of evaluation criteria that emerged, four
aligned with Coiro’s (2014) criteria for evaluating web sites: (a) author reliability, (b) author
stance, (c) content accuracy, and (d) content relevance. Three additional categories emerged: (e)
task goal—for criteria revealing the students’ ability or inability to focus on the task goal of site
evaluation; (f) other—for criteria which did not fit into any of the five previous categories; and
(g) not able to code—for comments that could not be understood well enough to code.
--------------------------------------------------
Insert Table 1 About Here
--------------------------------------------------
Table 1 presents the criteria students used to judge the websites, grouped by category of evaluation criteria and listed in order of the percentage of students referring to
criteria in that category. For example, 95% of students listed one of the content accuracy criteria
at least once; 73% listed one of the author reliability criteria at least once. Within each category,
the specific criteria are similarly listed in order of the percentage of students listing the criterion
at least once. Thus, 84% of students recorded at least one criterion coded as evidence/facts/data.
Note this measure of frequency is based on whether a student included a particular criterion at
least once, not how many times the student listed the criterion. Another indicator of criteria use is
how many times a criterion was listed by students, which is captured by the average number of
times a criterion was cited per student (third column in Table 1). Thus, across all students,
content accuracy criteria were listed an average of 7.96 times per student; evidence/facts/data
was listed an average of 2.81 times per student. For this initial consideration of the criteria
students used, we focus on the percentage of students citing a criterion or category of criteria at
least once.
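For clarity, the two frequency measures can be computed as follows from a long-format table of coded justifications (one row per application of a code to a student response). The miniature data frame below is hypothetical and serves only to illustrate the arithmetic.

```python
# Illustrative sketch of the two frequency measures described above,
# computed from a hypothetical table of coded justifications.
import pandas as pd

coded = pd.DataFrame({
    "student":   ["s1", "s1", "s1", "s2", "s2", "s3"],
    "criterion": ["evidence/facts/data", "reasoning/commentary",
                  "evidence/facts/data", "evidence/facts/data",
                  "balance/bias", "reasoning/commentary"],
})
n_students = coded["student"].nunique()

# Percentage of students listing each criterion at least once
pct_at_least_once = (
    coded.drop_duplicates(["student", "criterion"])
         .groupby("criterion")["student"].size() / n_students * 100
)

# Average number of times each criterion was cited per student
mean_citations = coded.groupby("criterion").size() / n_students

print(pct_at_least_once)
print(mean_citations)
```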
Overall, content accuracy criteria were cited by the most students, with 95% of students
listing at least one content accuracy criterion. The top two criteria cited by students were content
accuracy criteria: facts/evidence/data—references to evidence, facts, proof, data, statistics,
research, or scientific studies (e.g., “gives facts about gun control and homicide rates,” or
“backed up with examples and facts”); and reasoning/commentary—references to reasoning,
explanation, commentary, logic, or examples (e.g., “gives good reasoning,” “gives reasons why,”
or “shows examples of why he’s right”). Students, regardless of reading level, based their
judgments of a site’s trustworthiness most often on whether the site provided factual evidence for
its opinion, and whether it provided adequate reasoning for that opinion. The third most common
criterion was logical/makes sense—statements reflecting the reader’s assessment of whether an
argument was logical, reasonable, sensible, sound, or convincing (e.g., “has unrealistic ideas
about what could happen,” and “I don’t know if all this will help violence go down”). Students
clearly attended to the content of arguments presented when evaluating the sources, a finding
inconsistent with prior research indicating that students tend to base evaluations of
trustworthiness on surface features rather than on content (Kiili et al., 2007; Kuiper et al., 2005;
Ladbrook & Probert, 2011; Mothe & Sahut, 2011). However, our design removed the
complexities of web navigation and narrowed the evaluation process by limiting students to the
examination of one page per source. Within this more controlled design students took the time to
read more deeply and to judge the validity of arguments presented.
The students’ strong emphasis on site content may be informed by research on student
engagement and comprehension. Argumentative tasks have been shown to prompt greater
engagement than simple fact-finding tasks (Nussbaum & Sinatra, 2003), and situational interest
may prompt deeper text comprehension (Schraw & Lehman, 2001). Furthermore, students asked
to construct arguments comprehend sources more deeply than when asked to construct narrative
or expository accounts of source texts (Wiley & Voss, 1999). The students in the present study
were not asked to compose arguments, but because their opinions were assessed prior to the task,
they may have viewed reading as a formative step in constructing a revised opinion on the issue.
The second-most cited category of trustworthiness criteria was content relevance, with
84% of students listing these criteria. Codes in this category conveyed a source’s usefulness in regard to the information it provided or the way information was presented. The most-cited code
within content relevance was focused, applied when students judged the trustworthiness of a
source based on whether it was focused clearly on the issue. Because clear focus fulfills the
needs of the reader by making information easier to locate and understand, it was considered a
type of relevance. Comments about the text veering off topic were assigned this code, as in
“going too much out of topic” or “stays on topic throughout the reading.” The code was also
applied in situations in which students had difficulty making sense of an author’s stance, as in “I
can’t tell what they’re trying to tell the reader” or “not very sure what they’re trying to prove.”
Prevalence of this code among all levels of readers is consistent with prior research indicating
that information relevance (in this case, relevance characterized by usability) is a priority with
readers. However, since relevance should not be directly equated with trustworthiness—they are
essentially separate constructs—the finding is somewhat problematic.
The next criteria categories dealt with authors. Author reliability (73%) concerns qualities
of the author that suggest whether he or she can be trusted to provide reliable information on a
topic, including whether information about the author is available and adequate, as well as
whether the information suggests reliability and trustworthiness. Author stance (57%) criteria
reflect whether the author seeks to persuade by presenting information in a biased way, or whether
the author’s purpose may influence his or her objectivity. The only author-related code in the top
five individual criteria was balance/bias (54%), indicating that many students considered
whether sites presented a one-sided or balanced view of the issue. Examples included comments
such as “doesn’t take other people’s opinion into consideration,” “isn’t trying to be persuasive,
just factual,” “it gave opinions on people who want guns and people who do not” and “keeps on
one subject only, rejecting guns.” Again, this supports the finding that students at least attempted
to comprehend the arguments presented on the sites and considered whether the authors were
biased or balanced. Moreover, the fact that this was the only author-related code to appear in the
top five criteria suggests that students were more strongly focused on argument content than on
identifying author reliability. The remaining author-related categories were generally less
frequent, indicating that many students judged trustworthiness without attending to the identity
and expertise of the site authors. This is despite the fact that the education and experience of the
high-quality site authors were clearly noted on the sites, whereas details on the education and
topic-relevant experience of the lower quality site authors were largely absent. This finding is in
line with prior research suggesting that students do not typically examine author expertise and
author stance in their determination of trustworthiness (Britt & Aglinskas, 2002; Walraven et al.,
2009).
The Influence of Reading Ability
Columns 4-6 of Table 1 present the average number of times each evaluation criterion
was cited by low, average, and high readers. These data suggest differing competence in
evaluating sources, especially for high readers, whose evaluation criteria were markedly different
from those of average and low readers.
Author-related Criteria. High readers focused more frequently on author reliability and
author stance (6.91 and 2.38 times per student) than did average readers (3.72 and 1.03 times per
student) and low readers (1.15 and 0.85 times per student). When high readers did refer to
authorship, their justifications were more articulate and specific than those of low readers, for
example: “obvious use of heavy persuasion,” “gives counter-arguments,” and “doesn’t give us
background info on the author.” In contrast, when low-level readers referenced authorship, their
comments tended to be less specific, for example, “it’s opinion,” or “just opinions” to indicate
bias, and “informal talk” or “joking around” to indicate language that called the author’s
expertise or character into question. Thus, high readers not only attended more to authorship, but
when they did, more capably articulated their justifications.
Prior Opinion. Frequencies of particular criteria used by students supported quantitative
results showing a significant interaction among prior opinion, reading level, and site stance. The
code agree or disagree indicated that students evaluated sources based on whether they agreed
with them or not, for example: “I think I can trust it because some of the things here sound like
something I would have said;” “I trust what he has to say because he’s right;” “I like this site
because it agrees with my opinions;” and “I think it has a good viewpoint of what I believe, told
from my point of view.” This code was applied an average of .55 times per low reader and 1.00
times per average reader. Conversely, the code was applied only .03 times per high reader,
appearing only once among this group; high readers were more capable of considering
trustworthiness independent of their opinions on the topic, whereas average readers struggled to
set aside their personal biases while evaluating. The bracketing off of biases described by Haria
and Midgette (2014) requires the reader to separate his or her personal stance on an issue from
the task goal of evaluating an argument and from the task goal of evaluating the source. The
ability to bracket off biases to consider an argument and its source objectively is a complex
cognitive skill that challenged both average and low readers. While one might expect average
readers to more capably bracket off biases than low readers, the higher incidence of this code
among average readers may be explained by the lower readers’ inability to clearly comprehend
the texts, in which case their understanding of the arguments presented was not sufficient to
recognize their agreement or disagreement with the sites.
Focus on Task Goal. A second trend observed in the frequencies of criteria parallels the
students’ ability to bracket off biases and reiterates the ability of high readers to retain an
objective stance. Consistent with research linking metacognition and evaluation, low and average
readers showed difficulty retaining focus on the task goal of source evaluation while reading
about the controversy. These students conflated justifying an evaluation of the source’s
trustworthiness with justification of a stance on the issue, as evidenced by high frequency of the
justifies stance code for low readers (1.80 times per student) and average readers (1.69 times per
student). This code was applied when students justified their source rating with an argument
supporting or refuting a stance on the issue, for example, “Guns don’t kill people, people kill
people,” “gun free zones are the safest for murderers,” and “teachers would have a hard time
protecting the class if it was too big.” At other times, students recorded information from the site
(records random fact from site code), similarly reflecting a failure to focus on the task goal of
evaluating trustworthiness. The task goal-(minus) code signified this loss of focus on source
evaluation and was common among both low readers (2.45 times per student) and average
readers (2.22 times per student), but rare among high readers (.27 times per student). Capable
readers retained focus on the task goal of source evaluation more successfully than did average
and low readers, likely a function of stronger metacognitive skill. Conversely, there was a low
incidence of the task goal+(plus) code across all students (applied by 2.47% of all students).
This code was applied when students made a clear distinction between evaluation of source
trustworthiness and consideration of the argument, an indication that metacognitive skill was at
work, for example: “I understand their point and I agree but for a report this won’t be a site I
would refer to,” or “This article is very trustworthy, but I don’t agree that guns should be in
schools.” The code was applied three times by two students, one high and one average reader. Though it is likely other students made this distinction mentally, the fact that so many average and low readers struggled to retain focus on the task goal of source evaluation, and so few high readers
articulated a detachment of their opinion on the issue from their evaluation of trustworthiness,
indicates this was a challenging task for most readers.
Overall, the comparison of low, average, and high readers supports prior research
indicating that argumentative text comprehension is highly complex and challenging. Readers
must attend to authors’ intent, divorce personal opinions from their evaluations of arguments,
question authors’ biases while setting their own aside, and critically analyze argument validity
(Haria & Midgette, 2014). In this study, many low and average readers seemed preoccupied
with text processing and evaluating arguments, presumably leaving little cognitive space to
consider factors such as author expertise and bias in evaluations of trustworthiness.
Conclusion
This study fills a gap in the literature on source evaluation by examining the criteria
eighth graders applied in evaluating sources presenting arguments on a controversial issue. It
examines the effects of prior opinion, reading ability, and source stance on source evaluation,
showing interesting patterns in the evaluation criteria applied by students. However, the study is
not without limitations. Although it reveals the criteria students applied in their evaluations of
source trustworthiness, the use of those criteria does not necessarily suggest their skillful
application. Criteria indicate only that a student is considering a particular rationale, not that he
or she is doing so proficiently. Even if students sometimes misapplied criteria, their use may still be viewed as evidence of progress from the absence of any criteria toward an attempt at thoughtful, if clumsy, evaluation.
Students’ evaluations were influenced by their prior opinions on the issue, confirming the
existence of an assimilation bias. This suggests that methods for debiasing such as those
suggested by Lewandowsky et al. (2012) might be necessary to guard against such tendencies.
For example, fostering healthy skepticism about a source can reduce the influence of
misinformation. Weaker readers especially may benefit from instruction specifically targeted
toward debiasing. However, what specific debiasing techniques would be most effective with
young adolescents in the context of source evaluation is largely unknown. Methods for
effectively reducing assimilation bias in these circumstances, and especially among struggling
readers, is a topic for future research.
That this task was difficult not only for low ability readers, but also for average readers,
deserves closer examination. Cognitive load theory (Sweller, 2011), in which cognitive load comprises intrinsic, germane, and extraneous load (Paas, Renkl, & Sweller, 2004), may be relevant. Intrinsic load depends on the cognitive work required to process the material to be
learned, and is therefore dependent on text difficulty or concept complexity, while germane load
is dependent on the skills and capacities of the individual, consisting of the prior knowledge and
skillsets applied during the learning process. Germane load might theoretically be reduced by
improving one’s metacognitive skill, thereby facilitating the processing of intrinsic load more
efficiently or effectively (Antonenko & Niederhauser, 2010). Both intrinsic and germane load
contribute to understanding. Extraneous load, however, refers to cognitive work that does not
contribute to an individual’s understanding and therefore detracts from learning. In cognitive
load theory, a goal of instructional design is to reduce extraneous load as much as possible to
allow more mental space for intrinsic and germane loads. Cognitive load can also be mediated by
teaching strategies for more efficient processing (thereby reducing germane load) or by scaling
back the complexity of the task to a level within the learner’s zone of proximal development
(Vygotsky, 1978), thereby reducing intrinsic load. If even average readers struggle to apply the
metacognitive skills required to retain focus on source evaluation while processing
argumentative text, it follows that instructional scaffolds could support them. But what are
those? In a task as complex as the one studied here, how might cognitive load be reduced, and to
what extent? Which scaffolds serve to reduce germane load rather than to increase extraneous
load? These are important questions for future research that touch not just on source evaluation,
but on instructional design and delivery for critical thinking in general.
Since struggling readers in this study also attended less frequently to evaluation criteria
related to authorship than did stronger readers, explicitly teaching students to attend more
specifically to authorship may be a promising practical solution to strengthen all students’
evaluation skills. To ignore authorship is to ignore an issue of central importance in evaluating
the trustworthiness of sources about controversial issues. Encouraging attention to it may be a
simple, efficient, and effective way to shift attention away from a reader’s personal opinion
while at the same time encouraging his or her attention to potential biases, conflicts of interest,
and issues of author expertise.
In sum, the present study points to the importance of instructional approaches that
scaffold evaluations of a source’s trustworthiness for average as well as struggling readers, that
encourage attention to authorship, that help readers to retain the metacognitive stance required to
separate personal opinion from determination of a source’s trustworthiness, and that support
readers in managing the simultaneous demands of multiple text processing, argumentative text
processing, and source evaluation.
References
Antonenko, P. D., & Niederhauser, D. S. (2010). The influence of leads on cognitive load and
learning in a hypertext environment. Computers in Human Behavior, 26(2), 140–150.
doi:10.1016/j.chb.2009.10.014
Bråten, I., Ferguson, L. E., Strømsø, H. I., & Anmarkrud, Ø. (2012). Justification beliefs and
multiple-documents comprehension. European Journal of Psychology of Education, (July).
doi:10.1007/s10212-012-0145-2
Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters: Examining the role of source
evaluation in students’ construction of meaning within and across multiple texts. Reading
Research Quarterly, 44(1), 6–28. doi:10.1598/RRQ.44.1.1
Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source
information. Cognition and Instruction, 20(4), 485–522. doi:10.1207/S1532690XCI2004_2
Coiro, J. (2014, April 7). Teaching adolescents how to evaluate the quality of online information.
[Web log post]. Edutopia. Retrieved from http://www.edutopia.org/blog/evaluating-quality-
of-online-info-julie-coiro
Coiro, J. (2003). Reading comprehension on the Internet: Expanding our understanding of
reading comprehension to encompass new literacies. The Reading Teacher, 56(5), 458–464.
Coiro, J. (2007). Exploring changes to reading comprehension on the Internet: Paradoxes and
possibilities for diverse adolescent readers. (Doctoral dissertation, University of
Connecticut). Retrieved from http://digitalcommons.uconn.edu/dissertations/AAI3270969/
Colwell, J., Hunt-Barron, S., & Reinking, D. (2013). Obstacles to developing digital literacy on
the Internet in middle school science instruction. Journal of Literacy Research, 45(3), 295–
324. doi:10.1177/1086296X13493273
Common Core State Standards Initiative. (2010) Common core state standards for English
language arts & literacy in history/social studies, science, and technical subjects. Retrieved
from http://www.corestandards.org/wp-content/uploads/ELA_Standards.pdf
Common Core State Standards Initiative. (2010). Key shifts in English language arts. Retrieved
from http://www.corestandards.org/other-resources/key-shifts-in-english-language-arts/
Coombes, B. (2008). Generation Y: Are they really digital natives or more like digital refugees?
Voices, 7(1), 31–40.
Duke, N. K., Schmar-Dobler, E., & Zhang, S. (2006). Comprehension and technology. In M. C.
McKenna, L. D. Labbo, R. D. Kieffer, & D. Reinking (Eds.), International handbook of
literacy and technology: Volume two (pp. 317–326). Mahwah, NJ: Lawrence Erlbaum
Associates, Publishers.
Ford, C. L., & Yore, L. D. (2012). Toward convergence of critical thinking, metacognition, and
reflection: Illustration from natural and social sciences, teacher education, and classroom
practice. In A. Zohar & Y. J. Dori (Eds.), Metacognition in science education (Vol. 40, pp.
251–271). Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-2132-6
Franco, G. M., Muis, K. R., Kendeou, P., Ranellucci, J., Sampasivam, L., & Wang, X. (2012).
Examining the influences of epistemic beliefs and knowledge representations on cognitive
processing and conceptual change when learning physics. Learning and Instruction, 22(1),
62–77. doi:10.1016/j.learninstruc.2011.06.003
Goldman, S. R. (2011). Choosing and using multiple information sources: Some new findings
and emergent issues. Learning and Instruction, 21(2), 238–242.
doi:10.1016/j.learninstruc.2010.02.006
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young
adults’ evaluation of Web content. International Journal of Communication, 4, 468–494.
Haria, P. D., & Midgette, E. (2014). A genre-specific reading comprehension strategy to enhance
struggling fifth-grade readers’ ability to critically analyze argumentative text. Reading &
Writing Quarterly, 30(4), 297–327. doi:10.1080/10573569.2013.818908
Hartman, D. K., Morsink, P. M., & Zheng, J. (2010). From print to pixels: The evolution of
cognitive conceptions of reading comprehension. In E. A. Baker (Ed.), The new literacies:
Multiple perspectives on research and practice (pp. 131–164). New York: Guilford Press.
Head, A., & Eisenberg, M. (2010). Truth be told: How college students evaluate and use
information in the digital age. Literacy, 53, 1–72. The Information School, University of
Washington. Retrieved from
http://projectinfolit.org/pdfs/PIL_Fall2010_Survey_FullReport1.pdf
Hsieh, Y.-H., & Tsai, C.-C. (2013). Students’ scientific epistemic beliefs, online evaluative
standards, and online searching strategies for science information: The moderating role of
cognitive load experience. Journal of Science Education and Technology, 23(3), 299–308.
doi:10.1007/s10956-013-9464-6
Kiili, C., Laurinen, L., & Marttunen, M. (2007). How students evaluate credibility and relevance
of information on the Internet? IADIS International Conference on Cognition and
Exploratory Learning in the Digital Age (CELDA 2007), 155–162.
Kiili, C., Laurinen, L., & Marttunen, M. (2009). Skillful reader is metacognitively competent. In
L. T. W. Hin & R. Subramaniam (Eds.), Handbook of research on new media literacy at the
K-12 level (pp. 654–668). Hershey, PA: Information Science Reference.
Kim, K. S., & Sin, S. C. J. (2011). Selecting quality sources: Bridging the gap between the
perception and use of information sources. Journal of Information Science, 37(2), 178–188.
doi:10.1177/0165551511400958
Kobayashi, K. (2010a). Critical integration of multiple texts. Japanese Journal of Educational
Psychology, 58(4), 503–516.
Kobayashi, K. (2010b). Strategic use of multiple texts for the evaluation of arguments. Reading
Psychology, 31(2), 121–149. doi:10.1080/02702710902754192
Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science,
9(5), 178–181. doi:10.1111/1467-8721.00088
Kuhn, D., & Dean, D. (2004). Metacognition: A bridge between cognitive psychology and
educational practice. Theory Into Practice, 43(4), 268–274.
Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12
education: Strategies for supporting students in searching and processing information.
Review of Educational Research, 75(3), 285–328.
Ladbrook, J., & Probert, E. (2011). Information skills and critical literacy: Where are our
digikids at with online searching and are their teachers helping? Australasian Journal of
Educational Technology, 27(1), 105–121.
Larson, M., Britt, M. A., & Larson, A. A. (2004). Disfluencies in comprehending argumentative
texts. Reading Psychology, 25(3), 205–224. doi:10.1080/02702710490489908
Leu, D. J., Kinzer, C. K., Coiro, J. L., & Cammack, D. W. (2004). Toward a theory of new
literacies emerging from the Internet and other ICT. In R. B. Ruddell & N. Unrau (Eds.),
Theoretical models and processes of reading, fifth edition. (pp. 1568-1611). Newark, DE:
International Reading Association.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012).
Misinformation and its correction: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13(3), 106–131.
doi:10.1177/1529100612451018
Lombardi, D., Sinatra, G. M., & Nussbaum, E. M. (2013). Plausibility reappraisals and shifts in
middle school students’ climate change conceptions. Learning and Instruction, 27, 50–62.
doi:10.1016/j.learninstruc.2013.03.001
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The
effects of prior theories on subsequently considered evidence. Journal of Personality and
Social Psychology, 37(11), 2098–2109. doi:10.1037//0022-3514.37.11.2098
Magno, C. (2010). The role of metacognitive skills in developing critical thinking.
Metacognition and Learning, 5(2), 137–156. doi:10.1007/s11409-010-9054-4
Mason, L., Boldrin, A., & Ariasi, N. (2009). Epistemic metacognition in context: Evaluating and
learning online information. Metacognition and Learning, 5(1), 67–90. doi:10.1007/s11409-
009-9048-2
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online
environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220.
doi:10.1016/j.pragma.2013.07.012
Mothe, J., & Sahut, G. (2011). Is a relevant piece of information a valid one? Teaching critical
evaluation of online information. In E. Efthimiadis, J. M. Fernández-Luna, J. F. Huete, & A.
MacFarlane (Eds.), Teaching and learning in information retrieval (pp. 153–168).
Heidelberg: Springer.
Muis, K. R., & Franco, G. M. (2009). Epistemic beliefs: Setting the standards for self-regulated
learning. Contemporary Educational Psychology, 34(4), 306–318.
doi:10.1016/j.cedpsych.2009.06.005
Northwest Evaluation Association. (2011). Comparative data to inform instructional decisions.
Retrieved from https://www.nwea.org/content/uploads/2014/07/NWEA-Comparative-Data-
One-Sheet.pdf
Norton, J. (2017, Feb. 15). Common Core revisions: What are states really changing? [Web log
post]. Edtechtimes. Retrieved from https://edtechtimes.com/2017/02/15/common-core-
revisions-what-are-states-really-changing/
Nussbaum, E. M., & Sinatra, G. M. (2003). Argument and conceptual engagement.
Contemporary Educational Psychology, 28, 384–395. doi:10.1016/S0361-476X(02)00038-
3
Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory: Instructional implications of the
interaction between information structures and cognitive architecture. Instructional Science,
32, 1–8.
Rouet, J.-F. (2006). The skills of document use: From text comprehension to Web-based
learning. Mahwah, NJ: Erlbaum.
Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of
surface and deep cues on primary and secondary school students’ assessment of relevance
in Web menus. Learning and Instruction, 21(2), 205–219.
doi:10.1016/j.learninstruc.2010.02.007
Schmar-Dobler, E. (2003). The Internet: The link between literacy and technology. Journal of
Adolescent & Adult Literacy, 47(1), 80-85.
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension.
Journal of Educational Psychology, 82(3), 498–504. doi:10.1037//0022-0663.82.3.498
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions
for future research. Educational Psychology Review, 13(1), 23–52.
Spiro, R. J., Feltovich, P. J., & Coulson, R. L. (1996). Two epistemic world views: Prefigurative
schemas and learning in complex domains. Applied Cognitive Psychology, 10(Special Issue:
Reasoning Processes), S51–S61.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility,
constructivism, and hypertext: Random access instruction for advanced knowledge
acquisition in ill-structured domains. In T. M. Duffy & D. H. Jonassen
(Eds.), Constructivism and the technology of instruction: A conversation (pp. 57–75).
Hillsdale, NJ: Lawrence Erlbaum Associates.
Strømsø, H. I., Bråten, I., & Britt, M. A. (2010). Reading multiple texts about climate change:
The relationship between memory for sources and text comprehension. Learning and
Instruction, 20(3), 192–204. doi:10.1016/j.learninstruc.2009.02.001
Strømsø, H. I., Bråten, I., & Samuelstuen, M. S. (2008). Dimensions of topic-specific
epistemological beliefs as predictors of multiple text understanding. Learning and
Instruction, 18(6), 513–527. doi:10.1016/j.learninstruc.2007.11.001
Sutherland-Smith, W. (2002). Weaving the literacy Web: Changes in reading from page to
screen. The Reading Teacher, 55(7), 662–669.
Sweller, J. (2011). Cognitive load theory. In J. P. Mestre & B. H. Ross (Eds.), The psychology of
learning and motivation (pp. 37–76). Elsevier Inc. doi:10.1016/B978-0-12-387691-
1.00002-8
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50(3), 755–769.
Tsai, C.-C. (2004). Beyond cognitive and metacognitive tools: the use of the Internet as an
“epistemological” tool for instruction. British Journal of Educational Technology, 35(5),
525–536. doi:10.1111/j.0007-1013.2004.00411.x
Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2014). Dealing with conflicting
information from multiple nonlinear texts: Effects of prior attitudes. Computers in Human
Behavior, 32, 101–111. doi:10.1016/j.chb.2013.11.021
Vygotsky, L. S. (1978). Interaction between learning and development. In M. Gauvain & M.
Cole (Eds.), Readings on the development of children (2nd ed., pp. 29–36). New York: W. H.
Freeman and Company. Retrieved from http://www.psy.cmu.edu/~siegler/vygotsky78.pdf
Walraven, A., Brandgruwel, S., & Boshuizen, H. (2009). How students evaluate information and
sources when searching the World Wide Web for information. Computers & Education,
52(1), 234–246. doi:10.1016/j.compedu.2008.08.003
Wiley, J., & Bailey, J. (2006). Effects of collaboration and argumentation on learning from Web
pages. In A. M. O’Donnell, C. E. Hmelo-Silver, & G. Erkens (Eds.), Collaborative
learning, reasoning, and technology (pp. 297–321). Mahwah, NJ: Lawrence Erlbaum
Associates.
Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that
promote understanding and not just memory for text. Journal of Educational Psychology,
91(2), 301–311. doi:10.1037//0022-0663.91.2.301
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in
the evaluation of documentary and pictorial evidence. Journal of Educational Psychology,
83(1), 73–87.
Winkielman, P., Huber, D. E., Kavanagh, L., & Schwarz, N. (2012). Fluency of consistency:
When thoughts fit nicely and flow smoothly. In B. Gawronski & F. Strack (Eds.), Cognitive
consistency: A fundamental principle in social cognition (pp. 89–111). New York: Guilford
Press.
Wopereis, I. G. J. H., & van Merriënboer, J. J. G. (2011). Evaluating text-based information on
the World Wide Web. Learning and Instruction, 21(2), 232–237.
doi:10.1016/j.learninstruc.2010.02.003
Zawilinski, L., Carter, A., O’Byrne, I., McVerry, G., Nierlich, T., & Leu, D. J. (2007,
November). Toward a taxonomy of online reading comprehension strategies. Paper
presented at the National Reading Conference, Austin, TX. Retrieved from
https://scholar.google.com/citations?view_op=view_citation&hl=en&user=QMTW9n4AA
AAJ&citation_for_view=QMTW9n4AAAAJ:W7OEmFMy1HYC
Zhang, M., & Quintana, C. (2012). Scaffolding strategies for supporting middle school students’
online inquiry processes. Computers & Education, 58(1), 181–196.
doi:10.1016/j.compedu.2011.07.016
Table 1
Evaluation Criteria Frequencies

| Category/Code | Students Referring to Criteria (%) | All Students (n=81) | Low Readers (n=20) | Average Readers (n=32) | High Readers (n=29) |
| Content Accuracy | 95.06% | 7.96 | 7.10 | 7.36 | 9 |
| evidence/facts/data | 83.95% | 2.81 | 2.30 | 2.53 | 3 |
| reasoning/commentary | 69.14% | 2.58 | 2.60 | 2.34 | 2 |
| logical/makes sense | 58.02% | 1.44 | 1.25 | 1.66 | 1 |
| quotes other sources | 30.86% | 0.48 | 0.40 | 0.22 | 0 |
| corroborating source | 17.28% | 0.23 | 0.15 | 0.16 | 0 |
| pictures/graphs | 12.35% | 0.14 | 0.05 | 0.13 | 0 |
| date/age | 11.11% | 0.21 | 0.30 | 0.19 | 0 |
| prior knowledge of topic | 6.17% | 0.07 | 0.05 | 0.13 | 0 |
| Content Relevance | 83.95% | 4.97 | 5.45 | 5.12 | 4 |
| focused | 43.21% | 0.80 | 0.80 | 0.94 | 0 |
| graphs/charts/pictures | 41.98% | 0.69 | 0.50 | 0.81 | 0 |
| length/amount/detail of info | 37.04% | 0.70 | 1.20 | 0.47 | 0 |
| info relevance | 35.80% | 0.86 | 1.00 | 1.03 | 0 |
| organization/rhetoric | 32.10% | 0.58 | 0.85 | 0.53 | 0 |
| solutions | 18.52% | 0.31 | 0.15 | 0.53 | 0 |
| language access | 16.05% | 0.28 | 0.45 | 0.09 | 0 |
| ads/links/sidebar content | 12.35% | 0.27 | 0.00 | 0.28 | 0 |
| links to additional info | 11.11% | 0.19 | 0.15 | 0.13 | 0 |
| interesting | 9.88% | 0.14 | 0.35 | 0.03 | 0 |
| interactive | 4.94% | 0.15 | 0.00 | 0.28 | 0 |
| Author Reliability | 72.84% | 4.22 | 1.15 | 3.72 | 6 |
| language | 30.86% | 0.52 | 0.35 | 0.72 | 0 |
| refers to specific source/type | 29.63% | 0.41 | 0.30 | 0.28 | 0 |
| author character | 28.40% | 0.47 | 0.35 | 0.50 | 0 |
| source/author info available | 24.69% | 0.88 | 0.05 | 0.41 | 1 |
| published/vetted | 22.22% | 0.33 | 0.00 | 0.25 | 0 |
| author expertise | 20.99% | 0.30 | 0.00 | 0.34 | 0 |
| number of authors/sources | 14.81% | 0.26 | 0.00 | 0.09 | 0 |
| familiar with site | 13.58% | 0.17 | 0.00 | 0.16 | 0 |
| appearance | 9.88% | 0.23 | 0.00 | 0.09 | 0 |
| title | 9.88% | 0.10 | 0.10 | 0.16 | 0 |
| ad quality | 8.64% | 0.12 | 0.00 | 0.19 | 0 |
| ad quantity | 8.64% | 0.15 | 0.00 | 0.13 | 0 |
| popularity | 7.41% | 0.17 | 0.00 | 0.25 | 0 |
| pictures | 6.17% | 0.07 | 0.00 | 0.06 | 0 |
| writer has own blog or site | 3.70% | 0.04 | 0.00 | 0.09 | 0 |
| Author Slant | 56.79% | 1.47 | 0.85 | 1.03 | 2 |
| balance/bias | 54.32% | 1.28 | 0.80 | 0.78 | 2 |
| author purpose | 7.41% | 0.19 | 0.05 | 0.25 | 0 |
| Other | 46.91% | 0.87 | 0.95 | 1.28 | 0 |
| agree/disagree | 27.16% | 0.54 | 0.55 | 1.00 | 0 |
| general statement of trust | 27.16% | 0.33 | 0.40 | 0.28 | 0 |
| Task Goal – | 33.33% | 1.58 | 2.45 | 2.22 | 0 |
| justifies stance | 27.16% | 1.15 | 1.80 | 1.69 | 0 |
| records random fact from site | 16.05% | 0.43 | 0.65 | 0.53 | 0 |
| Task Goal + (explicitly focused on goal) | 2.47% | 0.04 | 0.00 | 0.06 | 0 |
| NC (not able to code) | 33.33% | 0.53 | 0.60 | 0.53 | 0 |

Note. The All Students, Low Readers, Average Readers, and High Readers columns report the average times each criterion was cited per student.
Figure 1. Average site ratings of high and low quality sites by student reading level. [Line graph; x-axis: site quality (low, high); y-axis: average trustworthiness rating, scaled from 2.00 to 4.50; separate lines plotted for low readers, average readers, and high readers.]
Figure 2. Average site ratings of pro and con stance sites by students’ prior opinion (for, against, and neutral to allowing school personnel to carry concealed weapons in schools). [Line graph; x-axis: site stance (Con Guns, Pro Guns); y-axis: average trustworthiness rating, scaled from 2.00 to 4.50; separate lines plotted for students whose prior opinion was for, neutral, and against.]
Appendix A

Table A1
Sources Included in Evaluation Task

| Title of Article | URL Address of Static Site in PDF Format | Source Description | Source Stance |
| Viewpoint: Arming Teachers Isn’t the Answer | http://goo.gl/SEPeiM | TIME Magazine op ed written by a professor at the University of Chicago’s School of Social Service Administration and senior research fellow with the nonpartisan Coalition for Evidence-Based Policy; and an education policy consultant to The American Federation of Teachers, Teach for America and Senator Tom Harkin. | Con (Against Guns in Schools) |
| Opposing View: Guns in Schools Can Save Lives | http://goo.gl/OjP3Or | USA Today op ed written by a former chief economist for the U.S. Sentencing Commission and author of More Guns, Less Crime. | Pro (For Guns in Schools) |
| Arming Teachers Is Not an Answer | http://goo.gl/Dn4nDS | Crooks and Liars blog written by an author identified as “Karoli,” a “card-carrying member of We the People.” No other author information available on the page. | Con (Against Guns in Schools) |
| Stop School Shootings by Letting Teachers and Principals Carry Guns | http://goo.gl/CdXeXI | Right Remedy blog written by Dr. Patrick Johnston, a minister and medical doctor. No other author information available on page. | Pro (For Guns in Schools) |
| Gun Rhetoric vs. Gun Facts | http://goo.gl/NwYpKt | FactCheck.org site “offering facts and context as the national gun control debate intensifies.” | Neutral |
Appendix B
Task Instructions Day 1
Name_________________________
In December of 2012, in Newtown, Connecticut, an armed gunman forced his way into an elementary school. Before he was finally
stopped, he took the lives of 20 children and 6 adults. Since then, legislators in several states have introduced bills to allow school
employees to carry guns on school property. Those legislators believe that if school employees were armed, they would be better able
to protect children in situations like Newtown. Those who disagree believe that allowing school employees to carry guns would not
make those schools safer.
Having read the introduction, what is your opinion about the following statement?
Allowing school employees, including teachers, to carry concealed weapons in school would make my school a safer place.
Circle one of the following:
I strongly agree.
I tend to agree.
I’m not sure.
I tend to disagree.
I strongly disagree.
Appendix C
Task Instructions Day 2
Name_________________________
DIRECTIONS:
1. Open Internet Explorer and go to the home page of the media center. Click on the RESEARCH TASK link in the yellow
sidebar.
The Task
Imagine that your state has recently passed a law allowing adults who have a permit to carry concealed weapons in schools—including
school employees like principals and teachers. However, the law also states that individual districts have the right to outlaw concealed
weapons in their schools if they wish to do so.
In light of recent school shootings in which numerous students and teachers lost their lives, your school board is discussing whether or
not to allow adult employees to carry concealed weapons in your school. You are doing research to learn more about this issue. These
are some web sites you have found:
2. Read and examine each of the sites. You may visit them in any order, and you may return to previous sites whenever you wish.
As you read and examine the sites, complete the following chart in as much detail as you can. You will have 50 minutes to complete
the task, allowing approximately 10 minutes per site. You will be given a signal every 15 minutes to help you keep track of time.
The recording chart presented to students had three columns, described below, followed by five identical rows (one per site).

Column 1: Name of the site you are rating. This is in the banner of the site. An abbreviation is fine!

Column 2: Rate the trustworthiness of the site by circling one. NOTE: The definition of trustworthiness is “able to be relied on as honest and truthful.”
1 -- not at all trustworthy
2 -- questionably trustworthy
3 -- I can’t tell or determine trustworthiness
4 -- somewhat trustworthy
5 -- very trustworthy

Column 3: In a bulleted list, write down ALL THE REASONS YOU HAVE for the trustworthiness rating you chose in column 2. BE SPECIFIC. For example, DO NOT write generalities, as in “I like it better” or “it’s a worse site.” Be specific about your reasons and write ALL your reasons down! YOU MAY CONTINUE ON THE BACK IF NEEDED.

[Five blank rows followed, each repeating the “Name of site” prompt, the 1 to 5 trustworthiness scale, and bulleted space for reasons.]
Appendix E

List of Codes

| Code Category | Code Name | Code Description | Example |
| Author Reliability | ad quality | quality of ads or promotional links reflects on reliability of source | “ads on side of page are encouragement for positive ideas” |
| Author Reliability | ad quantity | quantity of ads or promotional links reflects on source reliability | “the site has too many ads” |
| Author Reliability | appearance | general appearance (how site looks) reflects on source reliability | “looks fake” |
| Author Reliability | author character | author's perceived character reflects on reliability of source | “they are trying to protect children and teachers”; “bashed police officers” |
| Author Reliability | author expertise | expertise of author reflects on reliability of source | “written by a professor at the University of Chicago” |
| Author Reliability | familiar with site | extent to which site is recognized or well-known reflects on reliability of source | “magazine I’ve heard of before” |
| Author Reliability | language | extent to which language on the site is perceived as appropriate (indicating an educated or self-controlled writer) reflects on reliability of source | “uses rude language” |
| Author Reliability | number of authors/sources | number of authors or information sources reflects on its reliability | “many different sources are given” |
| Author Reliability | pictures | pictures reflect on the reliability of the source | “pictures are believable” |
| Author Reliability | popularity | popularity of site reflects on reliability of source | “popular magazine” |
| Author Reliability | published/vetted | whether source has been published or vetted reflects on reliability | “information was checked” |
| Author Reliability | refers to specific source/type | source contains reference(s) to specific source(s) or type(s) of information that reflect on its reliability | “quotes the president”; “gets info from people who studied the problem” |
| Author Reliability | source/author info available | extent to which source(s) and author(s) are able to be identified or are described reflects on reliability of the source | “has the authors listed”; “tells the sources” |
| Author Reliability | title | title of source reflects on its reliability | “suspicious title” |
| Author Reliability | writer has own blog or site | recognition that author has his own blog or site reflects on reliability of source | “has his own blog so must know something” |
| Author Stance | author purpose | purpose of site considered in evaluating reliability of the source/author | “might just be trying to sell you his book” |
| Author Stance | balance/bias | extent to which source/author avoids bias/seeks balance reflects on reliability | “tells both sides of the issue”; “very biased to guns” |
| Content Accuracy | corroborating sources | extent to which source provides references to corroborating sources reflects on its accuracy | “supports opinion with other sides backing him up” |
| Content Accuracy | date/age | age/date of site (or information on site) reflects on its accuracy | “newly published” |
| Content Accuracy | evidence/facts/data | extent to which source contains references to evidence, facts, proof, data, statistics, research, or scientific studies reflects on its accuracy | “gives facts and statistics” |
| Content Accuracy | logical/makes sense | extent to which an argument presented by the source is logical/reasonable (sensible, sound, or convincing) reflects on its accuracy | “argument makes sense”; “seems like a good argument” |
| Content Accuracy | pictures/graphs | extent to which pictures, graphs, or charts provide support/corroborating evidence reflects on its accuracy | “graphs prove he’s probably right” |
| Content Accuracy | prior knowledge of topic | extent to which informative content of site matches reader's prior knowledge reflects on its accuracy | “the facts are true from what I have heard” |
| Content Accuracy | quotes other sources | extent to which there are direct quotes in informational text reflects on information accuracy | “has quotes from lots of people” |
| Content Accuracy | reasoning/commentary | extent to which source provides adequate reasoning, explanation, commentary, logic, or examples reflects on its accuracy | “explains how his solutions are going to help” |
| Content Relevance | ads/links/sidebar content | evaluation based on extent to which ads, links, or sidebar content distract the reader, making site less usable | “ads are distracting”; “ads are bigger than the article--annoying” |
| Content Relevance | focused | evaluation based on extent to which informational text is perceived to remain on-topic or focused for improved usability and understanding | “focuses on the problem at hand”; “doesn’t exactly state a claim” |
| Content Relevance | graphs/charts/pics | evaluation based on extent to which pictures, graphs, charts serve the reader as tools for understanding, making site more usable | “charts to help show shooting rates”; “needs graphs to help explain” |
| Content Relevance | info relevance | evaluation based on extent to which information or details are perceived as relevant to the task; whether the information provided is what the reader is looking for | “there’s a lot of info about arming teachers”; “good information” |
| Content Relevance | interaction | evaluation based on extent to which the reader can interact with the site by responding, contributing to, or joining the cause (including links to social networks) | “links to Facebook and Twitter” |
| Content Relevance | interesting | evaluation based on extent to which source content interests or engages the reader | “kept my interest”; “pretty good hook” |
| Content Relevance | language access | extent to which language on the site is accessible to the reader, making site more usable | “uses big words I don’t understand” |
| Content Relevance | length/amount/detail of info | evaluation based on extent to which length, amount, or detail of information meets the needs of the reader | “gives a lot of detail” |
| Content Relevance | links to additional info | evaluation based on extent to which site provides links to additional information to learn more or meet information needs of the reader | “has links to other sites to learn more about it” |
| Content Relevance | organization/rhetoric | extent to which organization of text or text features serves reader's comprehension or informational needs, making site more usable | “tabs help you find what you need”; “gives a nice summary in the introduction” |
| Content Relevance | solutions | evaluation based on extent to which site offers solutions to the problem of school shootings (solutions are a reader need) | “offers ways to solve the problem” |
| Task Goal+ (plus) | explicitly focused on task goal | reader specifically states there is a difference between his/her personal stance and his/her rating, showing a bracketing off of personal opinion to evaluate site objectively | “I agree with the writer, but I don’t think I trust this site” |
| Task Goal- (minus): does not attend to task goal while evaluating | justifies stance | reader provides justification for a stance on the issue rather than justification for evaluation of source | “the only way to stop a bad guy with a gun is a good guy with a gun” |
| Task Goal- (minus): does not attend to task goal while evaluating | records random fact from site | reader records random fact or quote from the site | “police are recruiting students just in case the teacher fails at his job” |
| Other | agree/disagree | extent to which reader agrees or disagrees with source stance used to determine trustworthiness | “I agree that guns should be allowed in school”; “this sounds like what I would have said” |
| Other | general statement of trust | reader makes a general statement of trustworthiness or information reliability without providing clear justification for it | “very trusting” |
| Not Able to Code | not able to code | intended meaning not clear enough to code | “tell some trustworthy in it” |
Comps paper journal version 2015

  • 1. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY Trustworthiness Evaluation, Controversy, and Reading Ability: A Study of Eighth Graders Evaluating Multiple Conflicting Sources Angela K. Johnson and Ralph T. Putnam
  • 2. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 0 Abstract The present study examined how eighth graders evaluate the trustworthiness of sources presenting opposing views on a controversy. Data included students’ prior opinions on the issue, trustworthiness ratings of five offline web articles, justifications for those ratings, and a reading comprehension measure. Students differentiated sources by trustworthiness, but reading ability correlated with greater differentiation. Prior opinion influenced ratings; this was somewhat mediated by reading level. Students attended to content factors most frequently, with high readers attending to authorship more than other readers. Data suggest distraction from the evaluation task among average and low readers. Implications point to the benefits of teaching attention to source authorship and providing scaffolds to mediate the complexity of critical evaluation in the context of reading about controversial issues.
  • 3. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 1 Eighth Graders’ Critical Evaluation of Sources about a Controversial Issue With large numbers of schools providing “one to one” and “bring your own device” access, the web has become the front door to information in the classroom. In exchange for access to vast quantities of information, however, comes the responsibility to determine what information is reliable and who can be trusted in this immense and free domain. No longer can the public depend solely on publishing companies, editors, journalists, or librarians to vet sources; users must also learn to critically evaluate information themselves. Against this backdrop, educational researchers have scrambled to understand the extent to which students are capable of and willing to critically evaluate web sources (e.g., Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010; Head & Eisenberg, 2010; Metzger & Flanagin, 2013). Studies suggest that students have difficulty determining and applying effective criteria for evaluating their trustworthiness (Coombes, 2008; Kiili, Laurinen, & Marttunen, 2007; Kim & Sin, 2011; Kuiper, Volman, & Terwel, 2005). Indeed, evaluation seems to be one of the more challenging aspects of online reading (Colwell, Hunt-Barron, & Reinking, 2013; Walraven, Brandgruwel, & Boshuizen, 2009). At the same time, the Common Core State Standards (Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects, 2010), still adhered to by 38 states (Norton 2017), prioritize the rigorous expectation that students assess and construct arguments with sound claims, evidence, and reasoning (Key Shifts in English Language Arts, 2010). In the pursuit of these students frequently research controversial issues on the web, and are expected to process and evaluate arguments while also evaluating trustworthiness. Little is known about how the cognitive challenges of reading multiple texts with conflicting views may affect middle-grade students’ ability to evaluate the
  • 4. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 2 trustworthiness of sources. If prior opinions significantly affect their ability to do so, students may be particularly vulnerable to bias and misinformation about controversial issues. Since an effective democracy rests on an informed electorate, it is important to understand the challenges inherent in evaluating sources about controversial issues and to teach students developmentally appropriate methods for overcoming them. This study examines if and how eighth graders’ prior opinions and reading abilities affect the criteria they apply when evaluating the trustworthiness of sources about a controversial issue—whether school personnel should be allowed to carry concealed weapons in schools. Theoretical Framework Several theoretical constructs inform the present study. Theories of online text processing are grounded in theories of offline reading, but apply these to learning on the web, where readers are more likely to confront multiple conflicting texts of varying trustworthiness. These clarify the specific skillsets needed to meet such challenges. The processing of multiple conflicting texts is also influenced by research on persuasion, which is therefore pertinent to the present study. Finally, theoretical constructs of metacognition bear on our understanding of a reader’s capacity to manage the challenges of both the learning task and the learning environment. An overview of these constructs follows. Online Text Processing Hartman, Morsink, and Zheng (2010) asserted that a complication of online reading resides in the “multiple plurals” (p. 140) of online texts. The various elements that combine to establish meaning—for example, reader, author, task, context, and so forth—are themselves plural and continually shifting, and therefore confound the act of meaning construction. Hartman and colleagues proposed that a reader must integrate three types of knowledge in
  • 5. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 3 comprehending online text: (a) knowledge of identity—knowing who wrote a text and how authors “construct, represent, and project online identities” (p. 146); (b) knowledge of location— knowing how to “orient oneself in a website” and “in cyberspace” (p. 148); and (c) knowledge of one’s own goal—knowing why one is reading and remaining focused on that goal. Application of the first may involve assessment of an author’s expertise and trustworthiness, while the latter may involve assessment of a site’s match to reading goals. Studies have shown that, when evaluating sources, students attend to information relevance more than other criteria (Kuiper et al., 2005; Mothe & Sahut, 2011), signifying that they do evaluate sources with relevance to reading goals. This is in accordance with Hartman, Morsink, and Zheng’s (2010) third type of knowledge, knowledge of goal. Other studies have also shown readers to be task-oriented while reading online, suggesting they are capable of remaining focused on broader goals (Kiili et al., 2007; Ladbrook & Probert, 2011). Of the three categories of knowledge, students often lack—or fail to apply—knowledge of identity, or authorship (Bråten, Strømsø, & Britt, 2009; Coiro, 2007; Zawilinski et al., 2007). Lack of attention to authorship is particularly problematic because sourcing—defined by Rouet (2006) as “identifying a number of parameters that characterize the author and conditions of production of the information” (p. 177)—has been found to improve students’ comprehension of multiple conflicting online texts (Strømsø, Bråten, & Britt, 2010). In fact, research on the processing of multiple texts suggests that sourcing is an important element of comprehension. Rouet (2006) posited that the mental representation an expert reader creates while reading multiple texts includes two components, or nodes. The content node comprises a mental representation of the content of a single source, integrating the information encoded in the text and the prior knowledge of the reader. The source node is a mental
  • 6. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 4 representation of the source and its author, including identification, affiliations, expertise, bias, and prior knowledge of these. The source representations of individual texts combine to create a source model, which would reflect, for example, whether two sources agreed or disagreed, and whether one author was more expert than another. Content and source representations combine to create a situation model, a synthesized understanding of the topic. To successfully construct a situation model, the reader identifies the source of each individual text, compares information in one text to that of another, and maintains a connection between source nodes and content nodes. Studies show that expert readers successfully attend to source characteristics to help them synthesize multiple conflicting texts (Wineburg, 1991), but that younger readers have difficulty keeping track of connections between source and content nodes (Golder & Rouet, 2000, as cited in Rouet, 2006). Other studies found that readers overlook author and source information as they evaluate web sources (Bråten et al., 2009; Coiro, 2007; Zawilinski et al., 2007). Findings suggest, therefore, that enabling attention to the relationship between the author, purpose, and content of a message is an essential step toward effective evaluation, a claim reflected in Coiro’s (2014) recommendation that students evaluate web sources based on four criteria: (a) content relevance (the extent to which information and its presentation meet the needs of the reader); (b) content accuracy (the extent to which information can be viewed as factual and accurate); (c) author reliability (the extent to which the author can be trusted to provide reliable information); and (d) author stance (the perspective or bias of the author, which may influence his or her message). Persuasion Theory Another body of literature informing the present study involves theories of persuasion. In studies of persuasive text comprehension, readers consistently demonstrate biased assimilation,
  • 7. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 5 the tendency to evaluate arguments in ways that favor their personal views (Kobayashi, 2010; Lord, Ross, & Lepper, 1979). Misinformed readers will also be resistant to correction, a phenomenon known as the continued influence effect (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). According to Lewandowsky et al., the effect may result from the strength of preliminary mental models constructed during the initial exposure to a series of events, facts, or processes. If a mental model was initially constructed with misinformation, corrections will create gaps in the model, and such gaps may be less desirable than retaining a complete, albeit misinformed, model. In addition, easy-to-process information is accepted more readily than difficult-to-process information. This includes ideas that are expressed simply, but also ideas familiar to the reader (Lewandowsky et al., 2012). In the context of reading persuasive texts, it would follow that more easily understood arguments would carry greater weight than those that are more difficult to understand. In fact, since the structure of argumentative text makes it inherently more challenging to process than other types of text (Haria & Midgette, 2014; Larson, Britt, & Larson, 2004), one would expect evaluation of argumentative texts in general to be difficult. Wiley and Bailey (2006) found little evidence of student dyads using evaluative strategies when reading argumentative texts, and Hsieh and Tsai (2013) found that cognitive load affected the ability of readers to apply advanced evaluation strategies. In sum, readers may approach controversial texts with conflicting purposes: On the one hand, a reader instinctively seeks affirmation for prior beliefs; on the other, the evaluation of trustworthiness requires a more objective assessment. If the source presents an opinion consistent with one’s own, the reader may accept its argument uncritically; alternatively, if the source presents an opposing opinion, he or she may attend more closely in order to counterargue, a kind of assimilation bias known as disconfirmation bias. A second assimilation bias, the confirmation
  • 8. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 6 bias, in which participants seek out information to confirm their existing views and dismiss information that conflicts with their views, has also been found (Kobayashi, 2010b; Taber & Lodge, 2006; van Strien et al., 2014; Winkielman, Huber, Kavanagh, & Schwarz, 2012). In either case, the reader’s assessment of trustworthiness is affected by prior opinion. As counterintuitive as it may be, readers of persuasive texts would do well to bracket off their personal views to evaluate trustworthiness from as objective a stance as possible. In one study, students who were instructed to use scientific standards of support provided by evidence to critically evaluate sources for and against human-induced climate change showed significant changes in their perceptions of the issue (Lombardi, Sinatra, & Nussbaum, 2013), suggesting they drew effective conclusions regarding the reliability of the texts they examined. The implication is that rational objectivity is one important precursor for effectively evaluating texts about controversial issues. Metacognitive Processes in Evaluation Metacognition involves the ability to monitor and self-regulate one’s thinking and learning, and has been likened to a toolbox: The skilled user knows which tools to use in particular circumstances, and alleviates some of his or her workload by efficient selection and application of those tools (Ford & Yore, 2012). In the cognitive workspace, metacognition is an executive function allowing for “planning, monitoring, and regulating actions and command of materials to respect the spatial limitations” of memory, thereby offloading certain cognitive demands to allow greater cognitive space for message processing (Ford & Yore, 2012, p. 258). In addition, metacognition is generally considered to be a “significant path to critical thinking” (Magno, 2010, p. 137). According to Kuhn and Dean (2004), critical thinking requires “meta-level operations” (p. 270) that consist of both separate and integrated metacognitive skills
  • 9. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 7 functioning at the executive level. The executive operations that serve metacognition include declarative, conditional, and procedural knowledge; planning, monitoring, and debugging strategies; information management; and evaluation functions (Magno, 2010). To exemplify, Magno offers the following: a meta-level connection occurs when the individual evaluates an argument, . . . makes sure that they are well informed about the content (declarative knowledge), plans how to make the argument (planning and procedural knowledge), monitors whether they understood well the content to be evaluated (monitoring), and potently evaluates the tasks. (p. 149) In the context of evaluating sources that present arguments, executive function operates on multiple levels: on one level the argument itself must be evaluated for cogency; on another level the source must be evaluated for both relevance and trustworthiness. At the same time, these functions must occur while bracketing off the reader’s personal bias from his or her evaluation to retain an objective perspective (Haria & Midgette, 2014), another layer of complexity requiring metacognitive skill and cognitive resources. The objectivity that source evaluation requires appears to be facilitated by metacognitive scaffolds. Lombardi et al. (2013) implemented a successful intervention for supporting the evaluation of arguments in which students constructed evidence maps for opposing viewpoints. Metacognition is also associated with the evaluation of source trustworthiness in general. Mason, Boldrin, and Ariasi (2009) examined the influence of epistemic metacognition, defined as “a reflective activity about knowledge and knowing” (p. 67), on the evaluation of web sources by middle-school students. They found epistemic metacognition to be modestly associated with higher-level reflection on the justification of knowledge on the web. Epistemic metacognition
  • 10. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 8 also predicted the students’ critical comparison of information from several sources. Similarly, Kiili, Laurinen, and Marttunen (2009) found metacognitive skills essential for effective comprehension on the web, despite the fact that even competent readers evaluated information relevance more often than they evaluated credibility. Recognizing the complexities of online inquiry, Zhang and Quintana (2012) designed a digital support system for scaffolding students’ metacognitive processes, which they found to facilitate a “fewer-but-deeper pattern” of reading (p. 194) in which students read fewer sources but spent more time on each. Although Zhang and Quintana did not specifically examine evaluation behaviors, their results suggest that metacognitive supports allow for deeper processing, a likely prerequisite for evaluation. Therefore, although metacognition may not necessarily lead to evaluation, evaluation requires some level of metacognition. Summary Taken together, the theoretical framework outlined above suggests that evaluating the trustworthiness of sources presenting multiple views on controversial issues involves several complexities, complicated even more by a reader’s prior opinions on the topic. These are outlined as follows: 1. The processing of multiple texts, an inherent element of reading on the web, is more challenging than the processing of single texts, requiring the reader to maintain an active link between source and content models to construct an overall situation model. 2. The processing of persuasive texts is more challenging than the processing of narrative and simple expository texts, requiring more cognitive effort to establish intratextual connections.
  • 11. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 9 3. The evaluation of multiple conflicting texts presents specific challenges to readers with prior opinions on the topic, requiring strong executive function skills to evaluate argumentative and one-sided texts objectively. 4. The process of evaluating sources is itself a higher-order skill, requiring metacognition in the form of executive function processes. It becomes clear that evaluating sources in the context of research on controversial issues requires considerable metacognitive proficiency to coordinate and monitor executive function processes. Despite this fact, it is common for students to conduct web research on controversial issues of interest to them, because such activities heighten student engagement, serve the learning goals of the Common Core State Standards, and build critical thinking capacity. Research Questions The purpose of this study was to examine how eighth-grade students grappled with the challenges of evaluating multiple conflicting sources on a controversial issue. Although numerous studies have examined students’ ability to evaluate the trustworthiness of web sites (e.g., Goldman, 2011; Wopereis & van Merriënboer, 2011) and others have examined how students who hold prior opinions synthesize texts presenting conflicting viewpoints (Bråten et al., 2009; van Strien et al., 2014), few have examined how students evaluate the trustworthiness of sources presenting conflicting viewpoints on controversial issues about which students have prior opinions. Against the theoretical backdrop outlined above, the present study was designed to examine these questions: 1. Are eighth-grade students able to effectively rate the trustworthiness of sources that differ in quality with regard to author expertise and information referencing?
  • 12. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 10 2. Are eighth-grade students able to set aside their personal opinions on an issue to objectively evaluate sources that differ in quality with regard to author expertise and information referencing? 3. By what criteria do eighth-grade students decide what sources to trust while examining sources that differ in quality with regard to author expertise and information referencing? 4. Do eighth-grade students with differing abilities in reading comprehension apply evaluation criteria differently? Method This study was a descriptive, task-based analysis to determine how students evaluate the trustworthiness of sources presenting conflicting sides of a controversial issue. Study participants included 81 eighth graders (42 males and 39 females) in a required language arts course taught by the first author in a rural-suburban public school in the Midwestern United States. As measured by qualification for free and reduced school lunch, 29% of the school population was low income. All students spoke English as a first language. Data Collection Students engaged in a task to determine the trustworthiness of articles from the Web that presented opposing viewpoints on the issue of allowing school employees to carry concealed weapons in schools. Students examined five articles (see Table 1) in random order: two articles in favor of allowing school personnel to possess concealed guns in schools (Lott, 2012; Johnston, n.d.), two articles against (Gorman-Smith & McLaughlin, 2012; Karoli, 2012), and one neutral article (FactCheck.org, 2012). Of the two pro and two con articles, one was a high-quality source and one was a low-quality source, based on author expertise and information referencing. Specifically, authors of the two low-quality sources were individuals without credentials that
  • 13. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 11 would qualify them as experts on the topic of guns in schools. The professional identity of one of those authors was not revealed, and the profession of the other (a minister and medical doctor) reflected no expertise on the general topic of gun control or on the specific topic of guns in schools. Both sources considered low quality contained strongly emotional appeals but listed no references. In contrast, the high-quality sources clearly revealed the authors’ educational and professional backgrounds, which in both cases provided some indication of expertise on the issue. These included a former employee of the U.S. Sentencing Commission and a professor from the University of Chicago’s School of Social Service Administration. In addition, both high-quality sites contained references for information and were measured in tone. It should be noted, however, that even the sources considered high quality in the study leaned strongly toward a particular stance. In the context of researching controversial issues, it is typical for sources to be partial. Our concern was whether, under such conditions, students would use author and referencing information that was readily available on the page to help ascertain the trustworthiness of the sources. Several web site genres, including newspaper opinion columns, blogs written by religious and special interest groups, and the myth-debunking page of a nonpartisan research foundation, were included. The sites were presented as screen shots in their original web form but not linked to the Internet while students viewed them; although this made the task less authentic in that students were unable to search or click on links, it allowed for the measurement of students’ attention to the most obvious authorship and referencing features—those that required no further navigation or investigation beyond the first page. The high-quality sites were chosen in part because they provided ample information about the authors to inform students’ evaluations of trustworthiness, whereas the low-quality sites provided little or no such information. Removing the complexity and possible distractions of web space navigation
  • 14. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 12 allowed for a controlled examination of whether students would consider authorship and referencing features when given easily accessible information and adequate time to do so. Data collection spanned three class sessions. On the first day, the teacher read aloud while students read silently a two-paragraph summary presenting opposing sides of the issue and answered questions regarding vocabulary to clarify the meaning of the paragraph. Students then circled a statement indicating whether they agreed, disagreed, or felt neutral about the assertion: “Schools would be safer if school employees, including teachers, were allowed to carry concealed weapons in school.” Students did not discuss their opinions on the issue, and surveys were collected immediately. The teacher then conducted a 15-minute class discussion on web site trustworthiness. She first asked students to define trustworthiness in general, and then asked how they would define it in describing web sites. Students defined trustworthiness in terms of people who are “able to be trusted” or who “you can count on.” Students responded that trustworthy web sites “tell the truth,” “don’t make up lies to sell you things,” and “have facts, not just stuff people made up.” It was agreed that trustworthy sites should be used for school assignments while less trustworthy sites should be avoided. The next day, students completed the evaluation task in the school computer lab. In one 50-minute period, students examined and rated the trustworthiness of five sources on a 5-point Likert scale from very trustworthy to not at all (see Appendix A). Trustworthiness was defined in the directions as “able to be relied on as honest and truthful.” After rating each site, students listed all the reasons for their rating in a bulleted list.
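To make concrete the form such task data can take before analysis, the following sketch shows one possible way to tabulate the ratings and derive each student's mean trustworthiness rating for high- versus low-quality sources and for sources that agree versus disagree with his or her prior opinion. It is an illustrative sketch only, not the procedure used in the study; the long data format and the column names (student_id, prior_opinion, source_quality, source_stance, rating) are assumptions introduced here for the example.

import pandas as pd

# Hypothetical long-format data: one row per (student, source) rating.
# Column names and values are illustrative assumptions, not the study's files.
ratings = pd.DataFrame({
    "student_id":     [1, 1, 1, 1, 2, 2, 2, 2],
    "prior_opinion":  ["pro", "pro", "pro", "pro", "con", "con", "con", "con"],
    "source_quality": ["high", "high", "low", "low", "high", "high", "low", "low"],
    "source_stance":  ["pro", "con", "pro", "con", "pro", "con", "pro", "con"],
    "rating":         [4, 4, 4, 2, 5, 3, 3, 2],   # 1-5 Likert trustworthiness rating
})

# Mean trustworthiness rating per student for high- vs. low-quality sources.
quality_means = (ratings
                 .groupby(["student_id", "source_quality"])["rating"]
                 .mean()
                 .unstack())

# Mean rating for sources that agree vs. disagree with the student's prior opinion
# (students with a neutral prior opinion would need separate handling).
ratings["agrees_with_opinion"] = ratings["prior_opinion"] == ratings["source_stance"]
agreement_means = (ratings
                   .groupby(["student_id", "agrees_with_opinion"])["rating"]
                   .mean()
                   .unstack())

print(quality_means)
print(agreement_means)

Per-student means of this kind correspond to the cell means that a repeated-measures design such as the one described in the following section would operate on.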
  • 15. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 13 Data Analysis Data for quantitative analysis included each student’s (a) prior opinion rating; (b) trustworthiness ratings of the two pro and two con sites; and (c) reading level as determined by scores on the reading comprehension portion of the Measures of Academic Progress (MAP) test, administered online by the Northwest Evaluation Association (NWEA). Ratings of the neutral stance site were excluded. Students were divided into reading groups using divisions suggested by the NWEA (Comparative Data to Inform Instructional Decisions, 2011), whereby students scoring .5 standard deviation or more above the grade-level norm were labeled “higher achievement,” and those .5 standard deviation or more below the norm were labeled “lower achievement.” Students falling in the range between were considered “average achieving readers.” We ran a repeated-measures ANOVA with Site Quality (high or low) and Site Stance (pro or con) as within-subjects factors, and Reading Level (low, average, high) and Prior Opinion (con, neutral, pro) as between-subjects factors. Data for qualitative analysis drew on the pre-task survey of students’ opinions and the evaluation criteria listed by students to justify their ratings of the five sites, including the neutral stance site. Analysis of the evaluation criteria students listed involved a grounded coding process (Glaser & Strauss, 2009). Codes emerged through constant comparative analysis, and responses were assigned to existing codes until the need for new codes was exhausted. A second rater coded 15% of the student responses for evaluation criteria, with interrater agreement of 77%. Appendix B contains a list of final codes, with a description and examples of each. Results The first two research questions concerned the students’ ability to determine the trustworthiness of web sites about controversial issues, and were addressed by the quantitative
  • 16. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 14 analysis. The subsequent qualitative analysis presents results relevant to the questions regarding specific criteria students considered while rating the sites’ trustworthiness. Source Quality The ANOVA revealed a positive main effect of source quality on source ratings, F(1, 72) = 38.15, p < .001, with a mean trustworthiness rating of 3.79 for high-quality sources and 3.06 for low-quality sources. Overall, students distinguished between the more and less trustworthy sources. Their ability to distinguish between them, however, was influenced by reading ability, evidenced by a positive linear interaction between source quality and reading level, F(2, 72) = 4.45, p = .015 (see Figure 1). High-level readers rated the low-quality sources lower (M = 2.72) than did the low-level readers (M = 3.19), and rated the high-quality sources higher (M = 3.91) than did the low-level readers (M = 3.59). Thus, all subgroups differentiated correctly between high- and low-quality sources, with stronger readers differentiating them more clearly. -------------------------------------------------- Insert Figure 1 About Here -------------------------------------------------- Source Stance There was an unexpected main effect of source stance on source ratings, F(1, 72) = 17.03, p < .001, with a mean trustworthiness rating of 3.14 for the con sources and 3.71 for the pro sources. Students rated the low-quality pro source (M = 3.56) a full point higher than the low-quality con source (M = 2.56), but rated the high-quality pro source (M = 3.90) only slightly higher than the high-quality con source (M = 3.79). One explanation for this may be that the low-quality pro source was written by a Christian minister and medical doctor identified as “Dr. Johnston,” whom students may have viewed as more trustworthy due to his vocation as minister, physician, or both. Conversely, the author of the low-quality con article was identified only by the name “Karoli,” a “card-carrying member of We the People.” While the minister-doctor’s
  • 17. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 15 profession does not guarantee his expertise on the topic of guns in schools, students may have viewed him more favorably due to his religious affiliations and/or his medical training, even though Dr. Johnston’s formal education and credentials were not revealed on the page. The rate of opinion change from pre- to post-task also supports the supposition that students found pro sources to be generally more convincing than con sources, as more students’ opinions moved toward a pro stance (n = 30) than moved toward a con stance (n = 18) after completing the task. Prior Opinion Students’ evaluations of trustworthiness were influenced by a confirmation bias, indicated by a positive interaction between source stance and prior opinion, F(2, 72) = 4.977, p = .009. Overall, students rated sources with which they agreed (M = 3.74) higher than sources with which they disagreed (M = 3.24). As seen in Figure 2, however, this was not true across all groups. Students who were against guns in schools prior to the task rated the pro gun sources slightly higher than they rated the con gun sources. -------------------------------------------------- Insert Figure 2 About Here -------------------------------------------------- A closer look at the mean ratings for each site revealed that the low-quality pro-gun source was rated higher on average than the low-quality con source by all students regardless of opinion. The ratings of the low-quality pro-gun source were apparently inflated across all groups, which may explain the unexpectedly high rating of the low-quality pro source by con stance students. Taking this inflation into account, Figure 2 does show trends reflecting a stronger trust of sources that align with student opinions and a stronger distrust of those that do not. Reading level played a role, evidenced by a positive interaction between source stance, prior opinion, and reading level, F(4, 72) = 4.866, p = .002. Low readers’ mean ratings of sources with which they
  • 18. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 16 agreed were .75 higher than ratings of sources with which they disagreed. The difference in mean ratings was .63 for average readers and .02 for high readers. As student reading level increased, so did students’ ability to separate prior opinion from the evaluation of a source’s trustworthiness. Evaluation Criteria Through our qualitative analysis we examined the criteria students used to judge the trustworthiness of the five sources. Of the categories of evaluation criteria that emerged, four aligned with Coiro’s (2014) criteria for evaluating web sites: (a) author reliability, (b) author slant, (c) content accuracy, and (d) content relevance. Three additional categories emerged: (e) task goal—for criteria revealing the students’ ability or inability to focus on the task goal of site evaluation; (f) other—for criteria that did not fit into any of the five previous categories; and (g) not able to code—for comments that could not be understood well enough to code. -------------------------------------------------- Insert Table 1 About Here -------------------------------------------------- Table 1 presents the criteria students used to judge the websites, grouped by category of evaluation criteria and listed in order of the percentage of students referring to criteria in that category. For example, 95% of students listed one of the content accuracy criteria at least once; 73% listed one of the author reliability criteria at least once. Within each category, the specific criteria are similarly listed in order of the percentage of students listing the criterion at least once. Thus 84% of students recorded at least one criterion coded as evidence/facts/data. Note that this measure of frequency is based on whether a student included a particular criterion at least once, not how many times the student listed the criterion. Another indicator of criteria use is how many times a criterion was listed by students, which is captured by the average number of
  • 19. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 17 times a criterion was cited per student (third column in Table 1). Thus, across all students, content accuracy criteria were listed an average of 7.96 times per student; evidence/facts/data was listed an average of 2.81 times per student. For this initial consideration of the criteria students used, we focus on the percentage of students citing a criterion or category of criteria at least once. Overall, content accuracy criteria were cited by the most students, with 95% of students listing at least one content accuracy criterion. The top two criteria cited by students were content accuracy criteria: evidence/facts/data—references to evidence, facts, proof, data, statistics, research, or scientific studies (e.g., “gives facts about gun control and homicide rates,” or “backed up with examples and facts”); and reasoning/commentary—references to reasoning, explanation, commentary, logic, or examples (e.g., “gives good reasoning,” “gives reasons why,” or “shows examples of why he’s right”). Students, regardless of reading level, based their judgments of a site’s trustworthiness most often on whether the site provided factual evidence for its opinion, and whether it provided adequate reasoning for that opinion. The third most common criterion was logical/makes sense—statements reflecting the reader’s assessment of whether an argument was logical, reasonable, sensible, sound, or convincing (e.g., “has unrealistic ideas about what could happen,” and “I don’t know if all this will help violence go down”). Students clearly attended to the content of arguments presented when evaluating the sources, a finding inconsistent with prior research indicating that students tend to base evaluations of trustworthiness on surface features rather than on content (Kiili et al., 2007; Kuiper et al., 2005; Ladbrook & Probert, 2011; Mothe & Sahut, 2011). However, our design removed the complexities of web navigation and narrowed the evaluation process by limiting students to the
  • 20. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 18 examination of one page per source. Within this more controlled design students took the time to read more deeply and to judge the validity of arguments presented. The students’ strong emphasis on site content may be informed by research on student engagement and comprehension. Argumentative tasks have been shown to prompt greater engagement than simple fact-finding tasks (Nussbaum & Sinatra, 2003), and situational interest may prompt deeper text comprehension (Schraw & Lehman, 2001). Furthermore, students asked to construct arguments comprehend sources more deeply than when asked to construct narrative or expository accounts of source texts (Wiley & Voss, 1999). The students in the present study were not asked to compose arguments, but because their opinions were assessed prior to the task, they may have viewed reading as a formative step in constructing a revised opinion on the issue. The second-most cited category of trustworthiness criteria was content relevance, with 84% of students listing these criteria. Codes in this category conveyed a source’s usefulness in regards to information it provided or the way information was presented. The most-cited code within content relevance was focused, applied when students judged the trustworthiness of a source based on whether it was focused clearly on the issue. Because clear focus fulfills the needs of the reader by making information easier to locate and understand, it was considered a type of relevance. Comments about the text veering off topic were assigned this code, as in “going too much out of topic” or “stays on topic throughout the reading.” The code was also applied in situations in which students had difficulty making sense of an author’s stance, as in “I can’t tell what they’re trying to tell the reader” or “not very sure what they’re trying to prove.” Prevalence of this code among all levels of readers is consistent with prior research indicating that information relevance (in this case, relevance characterized by usability) is a priority with
  • 21. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 19 readers. However, since relevance should not be directly equated with trustworthiness—they are essentially separate constructs—the finding is somewhat problematic. The next categories of criteria dealt with authors. Author reliability (73%) concerns qualities of the author that suggest whether he or she can be trusted to provide reliable information on a topic, including whether information about the author is available and adequate, as well as whether the information suggests reliability and trustworthiness. Author slant (57%) criteria reflect whether the author seeks to persuade by presenting information in a biased way, or whether the author’s purpose may influence his or her objectivity. The only author-related code in the top five individual criteria was balance/bias (54%), indicating that many students considered whether sites presented a one-sided or balanced view of the issue. Examples included comments such as “doesn’t take other people’s opinion into consideration,” “isn’t trying to be persuasive, just factual,” “it gave opinions on people who want guns and people who do not,” and “keeps on one subject only, rejecting guns.” Again, this supports the finding that students at least attempted to comprehend the arguments presented on the sites and considered whether the authors were biased or balanced. Moreover, the fact that this was the only author-related code to appear in the top five criteria suggests that students were more strongly focused on argument content than on identifying author reliability. The remaining author-related codes were generally cited less frequently, indicating that many students judged trustworthiness without attending to the identity and expertise of the site authors. This is despite the fact that the education and experience of the high-quality site authors were clearly noted on the sites, whereas details on the education and topic-relevant experience of the lower quality site authors were largely absent. This finding is in line with prior research suggesting that students do not typically examine author expertise and
  • 22. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 20 author stance in their determination of trustworthiness (Britt & Aglinskas, 2002; Walraven et al., 2009). The Influence of Reading Ability Columns 4-6 of Table 1 present the average number of times each evaluation criterion was cited by low, average, and high readers. These data suggest differing competence in evaluating sources, especially for high readers, whose evaluation criteria were markedly different from those of average and low readers. Author-related Criteria. High readers focused more frequently on author reliability and author stance (6.91 and 2.38 times per student) than did average readers (3.72 and 1.03 times per student) and low readers (1.15 and 0.85 times per student). When high readers did refer to authorship, their justifications were more articulate and specific than those of low readers, for example: “obvious use of heavy persuasion,” “gives counter-arguments,” and “doesn’t give us background info on the author.” In contrast, when low-level readers referenced authorship, their comments tended to be less specific, for example, “it’s opinion,” or “just opinions” to indicate bias, and “informal talk” or “joking around” to indicate language that called the author’s expertise or character into question. Thus, high readers not only attended more to authorship, but when they did, more capably articulated their justifications. Prior Opinion. Frequencies of particular criteria used by students supported quantitative results showing a significant interaction among prior opinion, reading level, and site stance. The code agree or disagree indicated that students evaluated sources based on whether they agreed with them or not, for example: “I think I can trust it because some of the things here sound like something I would have said;” “I trust what he has to say because he’s right;” “I like this site because it agrees with my opinions;” and “I think it has a good viewpoint of what I believe, told
  • 23. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 21 from my point of view.” This code was applied an average of .55 times per low reader and 1.00 times per average reader. Conversely, the code was applied only .03 times per high reader, appearing only once among this group; high readers were more capable of considering trustworthiness independent of their opinions on the topic, whereas average readers struggled to set aside their personal biases while evaluating. The bracketing off of biases described by Haria and Midgette (2014) requires the reader to separate his or her personal stance on an issue from the task goal of evaluating an argument and from the task goal of evaluating the source. The ability to bracket off biases to consider an argument and its source objectively is a complex cognitive skill that challenged both average and low readers. While one might expect average readers to more capably bracket off biases than low readers, the higher incidence of this code among average readers may be explained by the lower readers’ inability to clearly comprehend the texts, in which case their understanding of the arguments presented was not sufficient to recognize their agreement or disagreement with the sites. Focus on Task Goal. A second trend observed in the frequencies of criteria parallels the students’ ability to bracket off biases and reiterates the ability of high readers to retain an objective stance. Consistent with research linking metacognition and evaluation, low and average readers showed difficulty retaining focus on the task goal of source evaluation while reading about the controversy. These students conflated justifying an evaluation of the source’s trustworthiness with justification of a stance on the issue, as evidenced by high frequency of the justifies stance code for low readers (1.80 times per student) and average readers (1.69 times per student). This code was applied when students justified their source rating with an argument supporting or refuting a stance on the issue, for example, “Guns don’t kill people, people kill people,” “gun free zones are the safest for murderers,” and “teachers would have a hard time
  • 24. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 22 protecting the class if it was too big.” At other times, students recorded information from the site (the records random fact from site code), similarly reflecting a failure to focus on the task goal of evaluating trustworthiness. The task goal – (minus) code signified this loss of focus on source evaluation and was common among both low readers (2.45 times per student) and average readers (2.22 times per student), but rare among high readers (.27 times per student). Capable readers retained focus on the task goal of source evaluation more successfully than did average and low readers, likely a function of stronger metacognitive skill. Conversely, there was a low incidence of the task goal + (plus) code across all students (applied by 2.47% of all students). This code was applied when students made a clear distinction between evaluation of source trustworthiness and consideration of the argument, an indication that metacognitive skill was at work, for example: “I understand their point and I agree but for a report this won’t be a site I would refer to,” or “This article is very trustworthy, but I don’t agree that guns should be in schools.” The code was applied three times, by two students, one a high reader and one an average reader. Though it is likely other students made this distinction mentally, the fact that so many average and low readers struggled to retain focus on the task goal of source evaluation, and so few high readers articulated a detachment of their opinion on the issue from their evaluation of trustworthiness, indicates this was a challenging task for most readers. Overall, the comparison of low, average, and high readers supports prior research indicating that argumentative text comprehension is highly complex and challenging. Readers must attend to authors’ intent, divorce personal opinions from their evaluations of arguments, question authors’ biases while setting their own aside, and critically analyze argument validity (Haria & Midgette, 2014). In this study, many low and average readers seemed preoccupied
  • 25. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 23 with text processing and evaluating arguments, presumably leaving little cognitive space to consider factors such as author expertise and bias in evaluations of trustworthiness. Conclusion This study fills a gap in the literature on source evaluation by examining the criteria eighth graders applied in evaluating sources presenting arguments on a controversial issue. It examines the effects of prior opinion, reading ability, and source stance on source evaluation, showing interesting patterns in the evaluation criteria applied by students. However, the study is not without limitations. Although it reveals the criteria students applied in their evaluations of source trustworthiness, the use of those criteria does not necessarily suggest their skillful application. A criterion indicates only that a student considered a particular rationale, not that he or she did so proficiently. Even if students sometimes misapplied criteria, however, their use may still be viewed as evidence of progress from the absence of any criteria toward thoughtful evaluation, albeit a clumsy first attempt. Students’ evaluations were influenced by their prior opinions on the issue, confirming the existence of an assimilation bias. This suggests that debiasing methods such as those suggested by Lewandowsky et al. (2012) might be necessary to guard against such tendencies. For example, fostering healthy skepticism about a source can reduce the influence of misinformation. Weaker readers especially may benefit from instruction specifically targeted toward debiasing. However, what specific debiasing techniques would be most effective with young adolescents in the context of source evaluation is largely unknown. How to effectively reduce assimilation bias in these circumstances, and especially among struggling readers, is a topic for future research.
  • 26. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 24 That this task was difficult not only for low-ability readers, but also for average readers, deserves closer examination. Cognitive load theory (Sweller, 2011), in which cognitive load comprises intrinsic, germane, and extraneous load (Paas, Renkl, & Sweller, 2004), may be relevant. Intrinsic load depends on the cognitive work required to process the material to be learned, and is therefore dependent on text difficulty or concept complexity, while germane load is dependent on the skills and capacities of the individual, consisting of the prior knowledge and skillsets applied during the learning process. Germane load might theoretically be reduced by improving one’s metacognitive skill, thereby facilitating the processing of intrinsic load more efficiently or effectively (Antonenko & Niederhauser, 2010). Both intrinsic and germane load contribute to understanding. Extraneous load, however, refers to cognitive work that does not contribute to an individual’s understanding and therefore detracts from learning. In cognitive load theory, a goal of instructional design is to reduce extraneous load as much as possible to allow more mental space for intrinsic and germane loads. Cognitive load can also be mediated by teaching strategies for more efficient processing (thereby reducing germane load) or by scaling back the complexity of the task to a level within the learner’s zone of proximal development (Vygotsky, 1978), thereby reducing intrinsic load. If even average readers struggle to apply the metacognitive skills required to retain focus on source evaluation while processing argumentative text, it follows that instructional scaffolds could support them. But what might those scaffolds look like? In a task as complex as the one studied here, how might cognitive load be reduced, and to what extent? Which scaffolds serve to reduce germane load rather than to increase extraneous load? These are important questions for future research that touch not just on source evaluation, but on instructional design and delivery for critical thinking in general.
  • 27. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 25 Since struggling readers in this study also attended less frequently to evaluation criteria reflecting on authorship than did stronger readers, explicitly teaching students to attend more specifically to authorship may be a promising practical solution to strengthen all students’ evaluation skills. To ignore authorship is to ignore an issue of central importance in evaluating the trustworthiness of sources about controversial issues. Encouraging attention to it may be a simple, efficient, and effective way to shift attention away from a reader’s personal opinion while at the same time encouraging his or her attention to potential biases, conflicts of interest, and issues of author expertise. In sum, the present study points to the importance of instructional approaches that scaffold evaluations of a source’s trustworthiness for average as well as struggling readers, that encourage attention to authorship, that help readers to retain the metacognitive stance required to separate personal opinion from determination of a source’s trustworthiness, and that support readers in managing the simultaneous demands of multiple text processing, argumentative text processing, and source evaluation.
  • 28. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 26 References Antonenko, P. D., & Niederhauser, D. S. (2010). The influence of leads on cognitive load and learning in a hypertext environment. Computers in Human Behavior, 26(2), 140–150. doi:10.1016/j.chb.2009.10.014 Braten, I., Ferguson, L. E., Stromso, H. I., & Anmarkrud, O. (2012). Justification beliefs and multiple-documents comprehension. European Journal of Psychology of Education, (July). doi:10.1007/s10212-012-0145-2 Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters : Examining the role of source evaluation in students’ construction of meaning within and across multiple texts. Reading Research Quarterly, 44(1), 6–28. doi:10.1598/RRQ.44.1.1 Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information. Cognition and Instruction, 20(4), 485–522. doi:10.1207/S1532690XCI2004_2 Coiro, J. (2014, April 7). Teaching adolescents how to evaluate the quality of online information. [Web log post]. Edutopia. Retrieved from http://www.edutopia.org/blog/evaluating-quality- of-online-info-julie-coiro Coiro, J. (2003). Reading comprehension on the Internet: Expanding our understanding of reading comprehension to encompass new literacies. The Reading Teacher, 56(5), 458–464. Coiro, J. (2007). Exploring changes to reading comprehension on the Internet: Paradoxes and possibilities for diverse adolescent readers. (Doctoral dissertation, University of Connecticut). Retrieved from http://digitalcommons.uconn.edu/dissertations/AAI3270969/ Colwell, J., Hunt-Barron, S., & Reinking, D. (2013). Obstacles to developing digital literacy on the Internet in middle school science instruction. Journal of Literacy Research, 45(3), 295– 324. doi:10.1177/1086296X13493273
  • 29. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 27 Common Core State Standards Initiative. (2010) Common core state standards for English language arts & literacy in history/social studies, science, and technical subjects. Retrieved from http://www.corestandards.org/wp-content/uploads/ELA_Standards.pdf Common Core State Standards Initiative. (2010). Key shifts in English language arts. Retrieved from http://www.corestandards.org/other-resources/key-shifts-in-english-language-arts/ Coombes, B. (2008). Generation Y : Are they really digital natives or more like digital refugees ? Voices, 7(1), 31–40. Duke, N. K., Schmar-Dobler, E., & Zhang, S. (2006). Comprehension and technology. In M. C. McKenna, L. D. Labbo, R. D. Kieffer, & D. Reinking (Eds.), International handbook of literacy and technology: Volume two (pp. 317–326). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. Ford, C. L., & Yore, L. D. (2012). Toward convergence of critical thinking, metacognition, and reflection: Illustration from natural and social sciences, teacher education, and classroom practice. In A. Zohar & Y. J. Dori (Eds.), Metacognition in science education (Vol. 40, pp. 251–271). Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-2132-6 Franco, G. M., Muis, K. R., Kendeou, P., Ranellucci, J., Sampasivam, L., & Wang, X. (2012). Examining the influences of epistemic beliefs and knowledge representations on cognitive processing and conceptual change when learning physics. Learning and Instruction, 22(1), 62–77. doi:10.1016/j.learninstruc.2011.06.003 Goldman, S. R. (2011). Choosing and using multiple information sources: Some new findings and emergent issues. Learning and Instruction, 21(2), 238–242. doi:10.1016/j.learninstruc.2010.02.006
  • 30. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 28 Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young adults’ evaluation of Web content. International Journal of Communication, 4, 468–494. Haria, P. D., & Midgette, E. (2014). A genre-specific reading comprehension strategy to enhance struggling fifth-grade readers’ ability to critically analyze argumentative text. Reading & Writing Quarterly, 30(4), 297–327. doi:10.1080/10573569.2013.818908 Hartman, D. K., Morsink, P. M., & Zheng, J. (2010). From print to pixels: The evolution of cognitive conceptions of reading comprehension. In E. A. Baker (Ed.), The new literacies: Multiple perspectives on research and practice (pp. 131–164). New York: Guilford Press. Head, A., & Eisenberg, M. (2010). Truth be told: How college students evaluate and use information in the digital age. Literacy, 53, 1–72. The Information School, University of Washington. Retrieved from http://projectinfolit.org/pdfs/PIL_Fall2010_Survey_FullReport1.pdf Hsieh, Y.-H., & Tsai, C.-C. (2013). Students’ scientific epistemic beliefs, online evaluative standards, and online searching strategies for science information: The moderating role of cognitive load experience. Journal of Science Education and Technology, 23(3), 299–308. doi:10.1007/s10956-013-9464-6 Kiili, C., Laurinen, L., & Marttunen, M. (2007). How students evaluate credibility and relevance of information on the Internet? IADIS International Conference on Cognition and Exploratory Learning in the Digital Age (CELDA 2007), 155–162. Kiili, C., Laurinen, L., & Marttunen, M. (2009). Skillful reader is metacognitively competent. In L. T. W. Hin & R. Subramaniam (Eds.), Handbook of research on new media literacy at the k-12 level (pp. 654–668). Hershey, NY: Information Science Reference.
  • 31. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 29 Kim, K. S., & Sin, S. C. J. (2011). Selecting quality sources: Bridging the gap between the perception and use of information sources. Journal of Information Science, 37(2), 178–188. doi:10.1177/0165551511400958 Kobayashi, K. (2010a). Critical Integration of Multiple Texts. Japanese Journal of Educational Psychology, 58(4), 503–516. Kobayashi, K. (2010b). Strategic use of multiple texts for the evaluation of arguments. Reading Psychology, 31(2), 121–149. doi:10.1080/02702710902754192 Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science, 9(5), 178–181. doi:10.1111/1467-8721.00088 Kuhn, D., & Dean, D. (2004). Metacognition : A bridge between cognitive psychology and educational practice. Theory Into Practice, 43(4), 268–274. Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75(3), 285–328. Ladbrook, J., & Probert, E. (2011). Information skills and critical literacy : Where are our digikids at with online searching and are their teachers helping ? Australasian Journal of Educational Technology, 27(1), 105–121. Larson, M., Britt, M. A., & Larson, A. a. (2004). Disfluencies in comprehending argumentative texts. Reading Psychology, 25(3), 205–224. doi:10.1080/02702710490489908 Leu, D. J., Kinzer, C. K., Coiro, J. L., & Cammack, D. W. (2004). Toward a theory of new literacies emerging from the Internet and other ICT. In R. B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading, fifth edition. (pp. 1568-1611). Newark, DE: International Reading Association.
  • 32. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 30 Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi:10.1177/1529100612451018 Lombardi, D., Sinatra, G. M., & Nussbaum, E. M. (2013). Plausibility reappraisals and shifts in middle school students’ climate change conceptions. Learning and Instruction, 27, 50–62. doi:10.1016/j.learninstruc.2013.03.001 Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. doi:10.1037//0022-3514.37.11.2098 Magno, C. (2010). The role of metacognitive skills in developing critical thinking. Metacognition and Learning, 5(2), 137–156. doi:10.1007/s11409-010-9054-4 Mason, L., Boldrin, A., & Ariasi, N. (2009). Epistemic metacognition in context: Evaluating and learning online information. Metacognition and Learning, 5(1), 67–90. doi:10.1007/s11409- 009-9048-2 Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. doi:10.1016/j.pragma.2013.07.012 Mothe, J., & Sahut, G. (2011). Is a relevant piece of information a valid one? Teaching critical evaluation of online information. In E. Efthimiadis, J. M. Fernández-Luna, J. F. Huete, & A. MacFarlane (Eds.), Teaching and learning in information retrieval (pp. 153–168). Heidelberg: Springer.
  • 33. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 31 Muis, K. R., & Franco, G. M. (2009). Epistemic beliefs: Setting the standards for self-regulated learning. Contemporary Educational Psychology, 34(4), 306–318. doi:10.1016/j.cedpsych.2009.06.005 Northwest Evaluation Association. (2011). Comparative Data to Inform Instructional Decisions. Retrieved from https://www.nwea.org/content/uploads/2014/07/NWEA-Comparative-Data- One-Sheet.pdf Norton, J. (2017, Feb. 15). Common Core revisions: What are states really changing? [Web log post]. Edtechtimes. Retrieved from https://edtechtimes.com/2017/02/15/common-core- revisions-what-are-states-really-changing/ Nussbaum, E. M., & Sinatra, G. M. (2003). Argument and conceptual engagement. Contemporary Educational Psychology, 28, 384–395. doi:10.1016/S0361-476X(02)00038- 3 Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory : Instructional implications of the interaction between information structures and cognitive architecture. Instructional Science, 32, 1–8. Rouet, J.-F. (2006). The skills of document use: From text comprehension to Web-based learning. Mahwah, NJ: Erlbaum. Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of surface and deep cues on primary and secondary school students’ assessment of relevance in Web menus. Learning and Instruction, 21(2), 205–219. doi:10.1016/j.learninstruc.2010.02.007 Schmar-Dobler, E. (2003). The Internet : The link between literacy and technology. Journal of Adolescent & Adult Literacy, 47(1), 80-85.
  • 34. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 32 Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. doi:10.1037//0022-0663.82.3.498 Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13(1), 23–52. Spiro, R. J., Feltovich, P. J., & Coulson, R. L. (1996). Two epistemic world views: Prefigurative schemas and learning in complex domains. Applied Cognitive Psychology, 10(Special Issue: Reasoning Processes), S51–S61. Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T. M. Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 57–75). Hillsdale, NJ. Retrieved from http://74.125.155.132/scholar?q=cache:At4-p7m5PiwJ:scholar.google.com/&hl=en&as_sdt=0,23 Strømsø, H. I., Bråten, I., & Britt, M. A. (2010). Reading multiple texts about climate change: The relationship between memory for sources and text comprehension. Learning and Instruction, 20(3), 192–204. doi:10.1016/j.learninstruc.2009.02.001 Strømsø, H. I., Bråten, I., & Samuelstuen, M. S. (2008). Dimensions of topic-specific epistemological beliefs as predictors of multiple text understanding. Learning and Instruction, 18(6), 513–527. doi:10.1016/j.learninstruc.2007.11.001 Sutherland-Smith, W. (2002). Weaving the literacy Web: Changes in reading from page to screen. The Reading Teacher, 55(7), 662–669.
  • 35. TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY 33 Sweller, J. (2011). Cognitive load theory. In J. P. Mestre & B. F. Ross (Eds.) The psychology of learning and motivation (pp. 37–76). Elsevier Inc. doi:10.1016/B978-0-12-387691- 1.00002-8 Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. Tsai, C.-C. (2004). Beyond cognitive and metacognitive tools: the use of the Internet as an “epistemological” tool for instruction. British Journal of Educational Technology, 35(5), 525–536. doi:10.1111/j.0007-1013.2004.00411.x Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. a. (2014). Dealing with conflicting information from multiple nonlinear texts: Effects of prior attitudes. Computers in Human Behavior, 32, 101–111. doi:10.1016/j.chb.2013.11.021 Vygotsky, L. S. (1978). Interaction between learning and development. In M. Gauvain & M. Cole (Eds.) Readings on the development of children (2nd ed., pp. 29-36). New York: W. H. Freeman and Company. Retrieved from http://www.psy.cmu.edu/~siegler/vygotsky78.pdf Walraven, A., Brandgruwel, S., & Boshuizen, H. (2009). How students evaluate information and sources when searching the World Wide Web for information. Computers & Education, 52(1), 234–246. doi:10.1016/j.compedu.2008.08.003 Wiley, J., & Bailey, J. (2006). Effects of collaboration and argumentation on learning from Web pages. In A. M. O’Donnell, C. E. Hmelo-Silver, & G. Erkens (Eds.), Collaborative learning, reasoning, and technology (pp. 297–321). Mahwah, NJ: Lawrence Erlbaum Associates.
Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that promote understanding and not just memory for text. Journal of Educational Psychology, 91(2), 301–311. doi:10.1037//0022-0663.91.2.301
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in the evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83(1), 73–87.
Winkielman, P., Huber, D. E., Kavanagh, L., & Schwarz, N. (2012). Fluency of consistency: When thoughts fit nicely and flow smoothly. In B. Gawronski & F. Strack (Eds.), Cognitive consistency: A fundamental principle in social cognition (pp. 89–111). New York: Guilford Press.
Wopereis, I. G. J. H., & van Merriënboer, J. J. G. (2011). Evaluating text-based information on the World Wide Web. Learning and Instruction, 21(2), 232–237. doi:10.1016/j.learninstruc.2010.02.003
Zawilinski, L., Carter, A., O'Byrne, I., McVerry, G., Nierlich, T., & Leu, D. J. (2007, November). Toward a taxonomy of online reading comprehension strategies. Paper presented at the National Reading Conference, Austin, TX. Retrieved from https://scholar.google.com/citations?view_op=view_citation&hl=en&user=QMTW9n4AAAAJ&citation_for_view=QMTW9n4AAAAJ:W7OEmFMy1HYC
Zhang, M., & Quintana, C. (2012). Scaffolding strategies for supporting middle school students' online inquiry processes. Computers & Education, 58(1), 181–196. doi:10.1016/j.compedu.2011.07.016
Table 1
Evaluation Criteria Frequencies

Category/Code | Students Referring to Criteria (%) | All Students (n=81) | Low Readers (n=20) | Average Readers (n=32) | High Readers (n=29)
(The last four columns give the average times cited per student.)

Content Accuracy | 95.06% | 7.96 | 7.10 | 7.36 | 9
  evidence/facts/data | 83.95% | 2.81 | 2.30 | 2.53 | 3
  reasoning/commentary | 69.14% | 2.58 | 2.60 | 2.34 | 2
  logical/makes sense | 58.02% | 1.44 | 1.25 | 1.66 | 1
  quotes other sources | 30.86% | 0.48 | 0.40 | 0.22 | 0
  corroborating source | 17.28% | 0.23 | 0.15 | 0.16 | 0
  pictures/graphs | 12.35% | 0.14 | 0.05 | 0.13 | 0
  date/age | 11.11% | 0.21 | 0.30 | 0.19 | 0
  prior knowledge of topic | 6.17% | 0.07 | 0.05 | 0.13 | 0
Content Relevance | 83.95% | 4.97 | 5.45 | 5.12 | 4
  focused | 43.21% | 0.80 | 0.80 | 0.94 | 0
  graphs/charts/pictures | 41.98% | 0.69 | 0.50 | 0.81 | 0
  length/amount/detail of info | 37.04% | 0.70 | 1.20 | 0.47 | 0
  info relevance | 35.80% | 0.86 | 1.00 | 1.03 | 0
  organization/rhetoric | 32.10% | 0.58 | 0.85 | 0.53 | 0
  solutions | 18.52% | 0.31 | 0.15 | 0.53 | 0
  language access | 16.05% | 0.28 | 0.45 | 0.09 | 0
  ads/links/sidebar content | 12.35% | 0.27 | 0.00 | 0.28 | 0
  links to additional info | 11.11% | 0.19 | 0.15 | 0.13 | 0
  interesting | 9.88% | 0.14 | 0.35 | 0.03 | 0
  interactive | 4.94% | 0.15 | 0.00 | 0.28 | 0
Author Reliability | 72.84% | 4.22 | 1.15 | 3.72 | 6
  language | 30.86% | 0.52 | 0.35 | 0.72 | 0
  refers to specific source/type | 29.63% | 0.41 | 0.30 | 0.28 | 0
  author character | 28.40% | 0.47 | 0.35 | 0.50 | 0
  source/author info available | 24.69% | 0.88 | 0.05 | 0.41 | 1
  published/vetted | 22.22% | 0.33 | 0.00 | 0.25 | 0
  author expertise | 20.99% | 0.30 | 0.00 | 0.34 | 0
  number of authors/sources | 14.81% | 0.26 | 0.00 | 0.09 | 0
  familiar with site | 13.58% | 0.17 | 0.00 | 0.16 | 0
  appearance | 9.88% | 0.23 | 0.00 | 0.09 | 0
  title | 9.88% | 0.10 | 0.10 | 0.16 | 0
  ad quality | 8.64% | 0.12 | 0.00 | 0.19 | 0
  ad quantity | 8.64% | 0.15 | 0.00 | 0.13 | 0
  popularity | 7.41% | 0.17 | 0.00 | 0.25 | 0
  pictures | 6.17% | 0.07 | 0.00 | 0.06 | 0
  writer has own blog or site | 3.70% | 0.04 | 0.00 | 0.09 | 0
Author Slant | 56.79% | 1.47 | 0.85 | 1.03 | 2
  balance/bias | 54.32% | 1.28 | 0.80 | 0.78 | 2
  author purpose | 7.41% | 0.19 | 0.05 | 0.25 | 0
Other | 46.91% | 0.87 | 0.95 | 1.28 | 0
  agree/disagree | 27.16% | 0.54 | 0.55 | 1.00 | 0
  general statement of trust | 27.16% | 0.33 | 0.40 | 0.28 | 0
Task Goal - | 33.33% | 1.58 | 2.45 | 2.22 | 0
  justifies stance | 27.16% | 1.15 | 1.80 | 1.69 | 0
  records random fact from site | 16.05% | 0.43 | 0.65 | 0.53 | 0
Task Goal +
  explicitly focused on goal | 2.47% | 0.04 | 0.00 | 0.06 | 0
NC
  not able to code | 33.33% | 0.53 | 0.60 | 0.53 | 0
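To make the two summary statistics in Table 1 concrete, here is a minimal sketch in Python. It is not the study's analysis code: the student IDs, codes, and group membership are invented, and it assumes the per-student average is taken over every student in the group, whether or not that student ever cited the criterion.

```python
from collections import Counter, defaultdict

# Hypothetical coded justifications (invented data): one (student_id, code) pair
# per coded statement a student wrote about any of the five sites.
coded_justifications = [
    ("s01", "evidence/facts/data"), ("s01", "evidence/facts/data"),
    ("s01", "author expertise"),
    ("s02", "evidence/facts/data"), ("s02", "balance/bias"),
    ("s03", "logical/makes sense"),
]
group_students = {"s01", "s02", "s03"}  # e.g., every student classified as a high reader
n = len(group_students)

total_citations = Counter(code for _, code in coded_justifications)
students_citing = defaultdict(set)
for student, code in coded_justifications:
    students_citing[code].add(student)

for code in sorted(total_citations):
    pct = 100 * len(students_citing[code]) / n   # "Students Referring to Criteria (%)"
    avg = total_citations[code] / n              # "Average Times Cited Per Student"
    print(f"{code}: cited by {pct:.2f}% of students, {avg:.2f} times per student")
```

Run over a real coded data set, the same two loops would reproduce each group column of Table 1, one reader group at a time.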
Figure 1. Average site ratings of high and low quality sites by student reading level.
[Figure: average rating (scale 2.00 to 4.50) on the vertical axis; Site Quality (low, high) on the horizontal axis; separate series for low readers, average readers, and high readers.]
Figure 2. Average site ratings of pro and con stance sites by students' prior opinion (for, against, and neutral to allowing school personnel to carry concealed weapons in schools).
[Figure: average rating (scale 2.00 to 4.50) on the vertical axis; Site Stance (Con Guns, Pro Guns) on the horizontal axis; separate series for students For, Neutral, and Against.]
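For readers who want to see the aggregation behind Figures 1 and 2, the short sketch below groups trustworthiness ratings the same two ways the figures do. The records and field names are hypothetical placeholders, not the study's data or analysis script.

```python
# Hypothetical rating records (invented): each tuple is
# (reading_level, prior_opinion, site_quality, site_stance, trustworthiness rating 1-5).
ratings = [
    ("high", "against", "high", "con", 5),
    ("high", "against", "low", "pro", 2),
    ("average", "neutral", "high", "con", 4),
    ("low", "for", "high", "con", 3),
    ("low", "for", "low", "pro", 4),
]

FIELDS = {"reading_level": 0, "prior_opinion": 1, "site_quality": 2, "site_stance": 3}

def mean_rating_by(rows, keys):
    """Average the rating (last element) within each combination of the named key fields."""
    sums, counts = {}, {}
    for row in rows:
        key = tuple(row[FIELDS[k]] for k in keys)
        sums[key] = sums.get(key, 0) + row[-1]
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

print(mean_rating_by(ratings, ["reading_level", "site_quality"]))  # cells plotted in Figure 1
print(mean_rating_by(ratings, ["prior_opinion", "site_stance"]))   # cells plotted in Figure 2
```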
Appendix A

Table A1
Sources Included in Evaluation Task

Title of Article | URL Address of Static Site in PDF Format | Source Description | Source Stance
Viewpoint: Arming Teachers Isn't the Answer | http://goo.gl/SEPeiM | TIME Magazine op-ed written by a professor at the University of Chicago's School of Social Service Administration and senior research fellow with the nonpartisan Coalition for Evidence-Based Policy; and an education policy consultant to The American Federation of Teachers, Teach for America, and Senator Tom Harkin. | Con (Against Guns in Schools)
Opposing View: Guns in Schools Can Save Lives | http://goo.gl/OjP3Or | USA Today op-ed written by a former chief economist for the U.S. Sentencing Commission and author of More Guns, Less Crime. | Pro (For Guns in Schools)
Arming Teachers Is Not an Answer | http://goo.gl/Dn4nDS | Crooks and Liars blog written by an author identified as "Karoli," a "card-carrying member of We the People." No other author information available on the page. | Con (Against Guns in Schools)
Stop School Shootings by Letting Teachers and Principals Carry Guns | http://goo.gl/CdXeXI | Right Remedy blog written by Dr. Patrick Johnston, a minister and medical doctor. No other author information available on the page. | Pro (For Guns in Schools)
Gun Rhetoric vs. Gun Facts | http://goo.gl/NwYpKt | FactCheck.org site "offering facts and context as the national gun control debate intensifies." | Neutral
Appendix B

Task Instructions Day 1

Name_________________________

In December of 2012, in Newtown, Connecticut, an armed gunman forced his way into an elementary school. Before he was finally stopped, he took the lives of 20 children and 6 adults. Since then, legislators in several states have introduced bills to allow school employees to carry guns on school property. Those legislators believe that if school employees were armed, they would be better able to protect children in situations like Newtown. Those who disagree believe that allowing school employees to carry guns would not make those schools safer.

Having read the introduction, what is your opinion about the following statement?

Allowing school employees, including teachers, to carry concealed weapons in school would make my school a safer place.

Circle one of the following:
I strongly agree.
I tend to agree.
I'm not sure.
I tend to disagree.
I strongly disagree.
Appendix C

Task Instructions Day 2

Name_________________________

DIRECTIONS:

1. Open Internet Explorer and go to the home page of the media center. Click on the RESEARCH TASK link in the yellow sidebar.

The Task
Imagine that your state has recently passed a law allowing adults who have a permit to carry concealed weapons in schools—including school employees like principals and teachers. However, the law also states that individual districts have the right to outlaw concealed weapons in their schools if they wish to do so. In light of recent school shootings in which numerous students and teachers lost their lives, your school board is discussing whether or not to allow adult employees to carry concealed weapons in your school. You are doing research to learn more about this issue. These are some web sites you have found:

2. Read and examine each of the sites. You may visit them in any order, and you may return to previous sites whenever you wish. As you read and examine the sites, complete the following chart in as much detail as you can. You will have 50 minutes to complete the task, allowing approximately 10 minutes per site. You will be given a signal every 15 minutes to help you keep track of time.

Chart column headings:
- Name of the site you are rating: This is in the banner of the site. An abbreviation is fine!
- Rate the trustworthiness of the site by circling one. NOTE: The definition of trustworthiness is "able to be relied on as honest and truthful."
- In a bulleted list, write down ALL THE REASONS YOU HAVE for the trustworthiness rating you chose in column 2. BE SPECIFIC. For example, DO NOT write generalities, as in "I like it better" or "it's a worse site." Be specific about your reasons and write ALL your reasons down!
YOU MAY CONTINUE ON THE BACK IF NEEDED.

(The following chart row is repeated once for each of the five sites.)

Name of site: (VERY briefly—from the banner!)

Trustworthiness rating (circle one):
1 -- not at all trustworthy
2 -- questionably trustworthy
3 -- I can't tell or determine trustworthiness
4 -- somewhat trustworthy
5 -- very trustworthy

Reasons (bulleted list):
-
-
-
-
-
-
-
Appendix E

List of Codes

Code Category | Code Name | Code Description | Example
Author Reliability | ad quality | quality of ads or promotional links reflects on reliability of source | "ads on side of page are encouragement for positive ideas"
Author Reliability | ad quantity | quantity of ads or promotional links reflects on source reliability | "the site has too many ads"
Author Reliability | appearance | general appearance (how site looks) reflects on source reliability | "looks fake"
Author Reliability | author character | author's perceived character reflects on reliability of source | "they are trying to protect children and teachers"; "bashed police officers"
Author Reliability | author expertise | expertise of author reflects on reliability of source | "written by a professor at the University of Chicago"
Author Reliability | familiar with site | extent to which site is recognized or well-known reflects on reliability of source | "magazine I've heard of before"
Author Reliability | language | extent to which language on the site is perceived as appropriate (indicating an educated or self-controlled writer) reflects on reliability of source | "uses rude language"
Author Reliability | number of authors/sources | number of authors or information sources reflects on its reliability | "many different sources are given"
Author Reliability | pictures | pictures reflect on the reliability of the source | "pictures are believable"
Author Reliability | popularity | popularity of site reflects on reliability of source | "popular magazine"
Author Reliability | published/vetted | whether source has been published or vetted reflects on reliability | "information was checked"
Author Reliability | refers to specific source/type | source contains reference(s) to specific source(s) or type(s) of information that reflect on its reliability | "quotes the president"; "gets info from people who studied the problem"
Author Reliability | source/author info available | extent to which source(s) and author(s) can be identified or are described reflects on reliability of the source | "has the authors listed"; "tells the sources"
Author Reliability | title | title of source reflects on its reliability | "suspicious title"
Author Reliability | writer has own blog or site | recognition that author has his own blog or site reflects on reliability of source | "has his own blog so must know something"
Author Stance | author purpose | purpose of site considered in evaluating reliability of the source/author | "might just be trying to sell you his book"
Author Stance | balance/bias | extent to which source/author avoids bias/seeks balance reflects on reliability | "tells both sides of the issue"; "very biased to guns"
Content Accuracy | corroborating sources | extent to which source provides references to corroborating sources reflects on its accuracy | "supports opinion with other sides backing him up"
Content Accuracy | date/age | age/date of site (or information on site) reflects on its accuracy | "newly published"
Content Accuracy | evidence/facts/data | extent to which source contains references to evidence, facts, proof, data, statistics, research, or scientific studies reflects on its accuracy | "gives facts and statistics"
Content Accuracy | logical/makes sense | extent to which an argument presented by the source is logical/reasonable (sensible, sound, or convincing) reflects on its accuracy | "argument makes sense"; "seems like a good argument"
Content Accuracy | pictures/graphs | extent to which pictures, graphs, or charts provide support/corroborating evidence reflects on its accuracy | "graphs prove he's probably right"
Content Accuracy | prior knowledge of topic | extent to which informative content of site matches reader's prior knowledge reflects on its accuracy | "the facts are true from what I have heard"
Content Accuracy | quotes other sources | extent to which there are direct quotes in informational text reflects on information accuracy | "has quotes from lots of people"
Content Accuracy | reasoning/commentary | extent to which source provides adequate reasoning, explanation, commentary, logic, or examples reflects on its accuracy | "explains how his solutions are going to help"
Content Relevance | ads/links/sidebar content | evaluation based on extent to which ads, links, or sidebar content distract the reader, making site less usable | "ads are distracting"; "ads are bigger than the article--annoying"
Content Relevance | focused | evaluation based on extent to which informational text is perceived to remain on-topic or focused for improved usability and understanding | "focuses on the problem at hand"; "doesn't exactly state a claim"
Content Relevance | graphs/charts/pics | evaluation based on extent to which pictures, graphs, or charts serve the reader as tools for understanding, making site more usable | "charts to help show shooting rates"; "needs graphs to help explain"
Content Relevance | info relevance | evaluation based on extent to which information or details are perceived as relevant to the task; whether the information provided is what the reader is looking for | "there's a lot of info about arming teachers"; "good information"
Content Relevance | interaction | evaluation based on extent to which the reader can interact with the site by responding, contributing to, or joining the cause (including links to social networks) | "links to Facebook and Twitter"
Content Relevance | interesting | evaluation based on extent to which source content interests or engages the reader | "kept my interest"; "pretty good hook"
Content Relevance | language access | extent to which language on the site is accessible to the reader, making site more usable | "uses big words I don't understand"
Content Relevance | length/amount/detail of info | evaluation based on extent to which length, amount, or detail of information meets the needs of the reader | "gives a lot of detail"
Content Relevance | links to additional info | evaluation based on extent to which site provides links to additional information to learn more or meet information needs of the reader | "has links to other sites to learn more about it"
Content Relevance | organization/rhetoric | extent to which organization of text or text features serve reader's comprehension or informational needs, making site more usable | "tabs help you find what you need"; "gives a nice summary in the introduction"
Content Relevance | solutions | evaluation based on extent to which site offers solutions to the problem of school shootings (solutions are a reader need) | "offers ways to solve the problem"
Task Goal+ (plus) | explicitly focused on task goal | reader specifically states there is a difference between his/her personal stance and his/her rating, showing a bracketing off of personal opinion to evaluate the site objectively | "I agree with the writer, but I don't think I trust this site"
Task Goal- (minus; does not attend to task goal while evaluating) | justifies stance | reader provides justification for a stance on the issue rather than justification for evaluation of source | "the only way to stop a bad guy with a gun is a good guy with a gun"
Task Goal- (minus; does not attend to task goal while evaluating) | records random fact from site | reader records random fact or quote from the site | "police are recruiting students just in case the teacher fails at his job"
Other | agree/disagree | extent to which reader agrees or disagrees with source stance used to determine trustworthiness | "I agree that guns should be allowed in school"; "this sounds like what I would have said"
Other | general statement of trust | reader makes a general statement of trustworthiness or information reliability without providing clear justification for it | "very trusting"
Not Able to Code | not able to code | intended meaning not clear enough to code | "tell some trustworthy in it"
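To illustrate how the Appendix E codebook could be applied computationally, the sketch below represents a handful of the codes as a simple code-to-category mapping and rolls applied codes up to category counts. The CODEBOOK dictionary is deliberately partial and the category_counts helper is a hypothetical illustration, not part of the study's materials.

```python
# Partial codebook from Appendix E (code name -> code category); only a few of the
# codes are listed here for illustration.
CODEBOOK = {
    "ad quality": "Author Reliability",
    "author expertise": "Author Reliability",
    "balance/bias": "Author Stance",
    "evidence/facts/data": "Content Accuracy",
    "reasoning/commentary": "Content Accuracy",
    "info relevance": "Content Relevance",
    "justifies stance": "Task Goal-",
    "agree/disagree": "Other",
    "not able to code": "Not Able to Code",
}

def category_counts(applied_codes):
    """Roll a list of applied codes up to their Appendix E categories."""
    counts = {}
    for code in applied_codes:
        category = CODEBOOK.get(code, "Not Able to Code")
        counts[category] = counts.get(category, 0) + 1
    return counts

# Example: one student's coded justifications for a single site.
print(category_counts(["evidence/facts/data", "author expertise", "agree/disagree"]))
# {'Content Accuracy': 1, 'Author Reliability': 1, 'Other': 1}
```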