This study examined how eighth graders evaluate the trustworthiness of sources presenting opposing views on a controversial issue. Students rated the trustworthiness of five news articles on whether school staff should carry guns. Students differentiated trustworthiness between sources, and stronger reading ability was associated with greater differentiation. Prior opinions influenced ratings, an effect partially mediated by reading level. Students focused on content over authorship, though strong readers considered authorship more often. The data suggest that average and low-ability readers had difficulty staying focused on evaluation. Teaching students to attend to source authorship and providing scaffolds may help them critically evaluate controversial issues.
‘A PERSISTENT SOURCE OF DISQUIET’: AN INVESTIGATION OF THE CULTURAL CAPITAL... – ijejournal
This paper discusses the findings of a cultural content analysis conducted on the reading component of twenty IELTS exams. A total of sixty reading passages were examined for cultural capital. The study found that, on average, one reading test contained fourteen cultural references spanning a variety of cultural elements, including cultural objects and historical settings. Geographically, the readings referred to 139 places or regions around the world, with only five references pertaining to the Middle East and none to the United Arab Emirates, where this study was conducted.
Content analysis is a research technique used to objectively, systematically, and quantitatively analyze the manifest content of communications. It can be used to analyze any type of recorded media, such as text, images, or videos. There are two main types: conceptual analysis, which establishes the frequency of concepts, and relational analysis, which examines relationships between concepts. Content analysis is useful for reducing large amounts of unstructured data, identifying important aspects of content, and making inferences about messages, authors, and cultural contexts. While it provides an unobtrusive means of analysis, it can also be time-consuming and reductive when dealing with complex materials.
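The conceptual branch described above (establishing the frequency of concepts) can be sketched in a few lines of Python. The passage, concept list, and `concept_frequencies` helper below are illustrative assumptions, not drawn from any of the summarized studies:

```python
import re
from collections import Counter

def concept_frequencies(text, concepts):
    """Count how often each concept term appears in a text (case-insensitive)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {concept: counts[concept] for concept in concepts}

# Hypothetical coding unit, loosely echoing the IELTS study's categories.
passage = ("The exam referred to cultural objects and historical settings. "
           "Each cultural reference was coded; cultural capital varied by region.")
print(concept_frequencies(passage, ["cultural", "historical", "region"]))
# → {'cultural': 3, 'historical': 1, 'region': 1}
```

Relational analysis would go one step further and ask which of these concepts occur together in the same coding unit, rather than just how often each occurs.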
CONTENT ANALYSIS (Quantitative Research Methods) – Libcorpio
Content Analysis, Quantitative Research Methods, LIS Education, Library and Information Science, LIS Studies, Information Management, Education and Learning, Library science, Information science, Library Research Methods
This document provides an overview of how to analyze content using NVivo software. It discusses uploading documents to NVivo, coding the content into categories or nodes, and using tools like word frequency queries, text searches, and visualization to analyze relationships within and across texts. The goal of content analysis is to make inferences about messages, authors, audiences, and cultural contexts by systematically coding and examining texts.
This document summarizes a paper that proposes a model for determining student plagiarism using both electronic detection and academic judgement. The paper reviews definitions of plagiarism and different types that have been identified in literature. It explores responses to plagiarism and the use of electronic text-matching software to detect direct plagiarism, but notes the software's limitations. The paper concludes that despite shortcomings, electronic detection combined with manual analysis, nuanced academic judgement and clear processes can help determine if plagiarism occurred.
This document discusses content analysis as a research technique for systematically examining recorded information. It defines content analysis and outlines its uses and processes. Content analysis involves defining categories, measuring variables, and coding data for analysis. It can be quantitative or qualitative. Textual analysis examines meaning and power relations within texts, considering features, rhetoric, and relationships with other cultural artifacts. The purpose is to uncover dominant, negotiated, or oppositional meanings.
The document discusses qualitative content analysis. It defines content analysis as the systematic classification and interpretation of text through coding and identifying themes. Content analysis allows researchers to understand social reality and explore meanings in a scientific manner. It can use inductive or deductive approaches to analyze data. Unique characteristics include flexibility in approaches and ability to extract manifest and latent meanings from text. Researchers use content analysis to describe message characteristics and identify themes. The process involves defining a research question, sampling material, developing a coding scheme of themes, coding the content, and analyzing results both qualitatively and quantitatively. Validity and reliability are also addressed.
Content analysis is a research technique used to objectively analyze the manifest content of communication through a systematic classification and description of its key elements. It can be used both quantitatively by counting words and themes, and qualitatively by analyzing the social meanings and concepts within a text. Coding is a crucial part of content analysis, where conceptual categories are applied to classify the text. The goal is to describe the actual content present rather than interpret the author's intended meaning, distinguishing it from hermeneutic analysis. Content analysis has been widely applied in social research to study topics like propaganda, popular culture, gender representations, and discourse.
This document provides a critical analysis of Erik Tancorov's research paper on the challenges faced by gender non-conforming students and their coping mechanisms. The analysis assesses Tancorov's paper in terms of paradigm, genre, conceptual framework, research question, literature review, and methodology. It finds that Tancorov takes a social constructivist paradigm and uses a case study genre. While his research question and literature review are relevant, his methodology could be strengthened by incorporating additional data sources. The analysis concludes that explicitly stating the social constructivist paradigm and emphasizing queer identity development through a critical theory lens could enhance the effectiveness of Tancorov's study.
Content analysis is a scientific method used in social science research to analyze communication content and draw inferences. It involves six main steps: 1) formulating a research question, 2) selecting communication content and samples, 3) developing content categories, 4) determining units of analysis, 5) creating a coding scheme and testing intercoder reliability, and 6) analyzing collected data. Content analysis has been used in studies of propaganda, media coverage of political issues, and trends in academic publications. It allows for quantitative analysis of messages but cannot verify causal relationships or ensure shared meanings between senders and receivers.
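Step 5 above, testing intercoder reliability, is commonly operationalized with Cohen's kappa, which corrects raw agreement between two coders for agreement expected by chance. A minimal sketch, using hypothetical coder data:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: probability both coders pick the same category at random.
    expected = sum(freq_a[c] * freq_b[c] for c in set(codes_a) | set(codes_b)) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders labeling ten coding units.
coder_a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "pos"]
coder_b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos"]
print(round(cohens_kappa(coder_a, coder_b), 2))
# → 0.58  (raw agreement is 0.80, but chance agreement is 0.52)
```

Values near 1 indicate strong agreement; a common rule of thumb treats kappa above roughly 0.6 as acceptable for published coding schemes, though thresholds vary by field.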
This document discusses quantitative and qualitative media content analysis. It provides definitions of content analysis and outlines the key elements of conducting quantitative media content analysis, including objectivity, validity, generalizability, replicability, and sampling. Quantitative content analysis aims to be scientific while qualitative analysis seeks to understand deeper meanings and interpretations. Both approaches are seen as complementary ways to analyze media content.
How does content knowledge impact reading comprehension? – Natalie Saaris
1) Content knowledge refers to all the information and facts people know about the world, which is obtained through experience, education, media, and social interactions.
2) A study found that students' background knowledge of baseball had a bigger impact on their comprehension of a passage about baseball than their reading level.
3) Building students' content knowledge should include directly teaching needed background context, sequencing instruction to build connections between topics, and using reading strategies in the context of meaningful content.
Content Analysis Overview for Persona Development – Pamela Rutledge
After developing an Ad Hoc persona as the core of your engagement strategy, it's important to test your assumptions against real people and real data. Content analysis is a methodology for evaluating text-based data that can be gathered from social media tools.
WK 10 – Research Workshop: Content and discourse analysis – Carolina Matos
This document provides an overview of media content analysis. It discusses quantitative versus qualitative content analysis and defines content analysis as the systematic analysis of messages to make valid inferences. It also covers developing coding schemes, sampling methods, and how to conduct a content analysis study through coding data and establishing intercoder reliability. Manual and electronic methods of coding are also compared.
This document provides an overview of content analysis. It defines content analysis as the objective, systematic, and quantitative analysis of communicated content such as texts, books, websites, paintings and laws. The document outlines the various types of content that can be analyzed, such as written, oral, iconic, audio-visual and hypertext. It also discusses the different purposes and uses of content analysis across multiple fields. Furthermore, it describes the typical steps involved in conducting a content analysis, including planning, coding text into categories, examining results, and making inferences.
Role of College Libraries in meeting user’s information needs: issues and chal... – Dr. Utpal Das
The document discusses the role and issues facing college libraries in India in the digital era. It outlines the objectives of college libraries as enriching academic activities, providing information/knowledge support, providing electronic access to resources, preserving intellectual assets, and generating awareness through literacy programs. It also examines challenges such as limited budgets, poor infrastructure, increased R&D in ICT, information overload, pressure from agencies, and lack of human resources. Finally, it explores how the digital shift is impacting functions like collections, access, services, and archiving.
Content analysis is a research technique used to analyze messages and make inferences from text. It involves objectively and systematically identifying characteristics of messages according to explicit rules and procedures. Content analysis can be used to analyze various forms of communication and media. It has both quantitative and qualitative applications. The key aspects of content analysis involve defining categories, sampling data, analyzing context, establishing boundaries, and making valid inferences from text.
This document discusses media content analysis and provides guidance on conducting effective analysis. It addresses:
- The importance of timeliness, accuracy, and verification in intelligence reports.
- Definitions and methods of media content analysis, including quantitative and qualitative approaches.
- Advantages such as analyzing ideologies, accessing communication texts, allowing quantitative and qualitative operations, and providing historical insights.
- Disadvantages like potentially distorting society, being time-consuming, lacking theoretical basis, and oversimplifying complex texts.
- The six key questions that must be addressed in every content analysis related to data, definitions, populations, contexts, boundaries, and inference targets.
This document discusses various methods for conducting content analysis of documents, including both manual and automated techniques. It describes advantages and disadvantages of different approaches such as human coding, dictionary-based methods, and supervised machine learning. Examples of document types, coding units, and content categories are also provided to illustrate how to design and implement a content analysis study.
Content analysis is a systematic, objective, and quantitative method for analyzing documents. It involves inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information. There are six main stages to content analysis: selecting content, defining units of analysis, preparing content for coding, coding the content, counting and weighting the content, and drawing conclusions. Content analysis can be conceptual, focusing on concepts within the text, or relational, exploring relationships between concepts. While it provides valuable insights, content analysis is also very time consuming and prone to errors.
This document discusses content analysis, which is a research technique used to make inferences from textual materials. There are two main types of content analysis: conceptual analysis, which establishes the existence and frequency of concepts, and relational analysis, which explores relationships between concepts. The document outlines steps for conducting each type of analysis, such as developing coding rules, distinguishing concepts, and reducing text to categories.
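The relational type described here, which explores relationships between concepts, is often operationalized as co-occurrence counting within a coding unit such as a sentence. A minimal sketch with illustrative data (the `cooccurrence` helper and sample units are assumptions for demonstration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(units, concepts):
    """Count how often pairs of concepts appear in the same coding unit."""
    pairs = Counter()
    for unit in units:
        words = set(unit.lower().split())
        present = sorted(c for c in concepts if c in words)
        for pair in combinations(present, 2):
            pairs[pair] += 1
    return pairs

# Hypothetical coding units echoing topics the summaries mention.
units = ["media coverage shaped propaganda",
         "propaganda in media was studied",
         "gender representations in media"]
print(cooccurrence(units, ["media", "propaganda", "gender"]))
# → Counter({('media', 'propaganda'): 2, ('gender', 'media'): 1})
```

The resulting pair counts can feed a concept network, where strongly co-occurring concepts form the relationships that relational analysis interprets.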
This document provides an overview of a library instruction session on finding information using the Cook Library. It discusses developing effective search strategies and keywords, searching the library catalog to find books, using databases to find articles, and tips for evaluating websites. Search techniques like Boolean operators, truncation, and using synonyms are explained. Examples are provided of searching the catalog and databases for information on how information technology can affect foreign language learning in college students. Contact information is given for the instructor and reference librarians for any additional questions.
The document discusses reference, research, and reading strategies for middle school students. It provides an overview of the FINDS model for guiding students through the research process, focusing on locating, organizing, and presenting information. It also analyzes FCAT data to identify skills successful students demonstrate, such as drawing conclusions, distinguishing strong evidence, and differentiating between valid and accurate information.
This document provides information and resources for teaching 6th grade students reference and research skills using the FINDS model. It outlines the FINDS process which guides students through focusing their topic, investigating sources, taking notes, developing their findings, and scoring their research success. Examples are given for applying FINDS to informational texts, fiction texts, and novel extensions using technology. Teachers are encouraged to collaborate with media specialists and pursue online courses to learn more about integrating the FINDS model into reference and research instruction.
This document defines content analysis and discusses Kerlinger's definition. Kerlinger defines content analysis as a systematic, objective, and quantitative method for studying communication. It involves treating all content in an identical manner (systematic), allowing different researchers to obtain the same results (objective), and quantifying the analysis to aid precision (quantitative). The document also lists uses of content analysis, such as identifying what exists in media, comparing media to reality, testing hypotheses, and exploring media images of minority groups.
This document provides an overview of hypertext and intertext in reading and writing. It defines hypertext as non-linear text that uses links to allow readers to navigate between related pieces of information and create their own understanding. Hypertext is made possible by technologies like the World Wide Web and allows for multimedia integration. Intertext refers to the relationships between texts and how a text's meaning depends on its context. Reading and writing involves understanding intertextual connections and how authors develop arguments using evidence from other sources.
The document summarizes research on students' ability to critically evaluate information found online. It discusses how students struggle with skills like determining a website's credibility and relevance. The study aimed to compare the strategies used by students who were successful versus less successful on a task evaluating the reliability of websites. By analyzing think-aloud protocols, researchers hoped to better understand the relationship between offline and online evaluation skills and identify common markers used to judge a site's reliability.
An Analysis on Students' Use of Online Resources for Written Assignments – Cheryl Brown
This document summarizes a research study on students' use of online resources for written assignments. The study investigated how 4 Indonesian university students selected credible online sources and integrated those sources into their research papers. It found that students applied criteria of authority, currency, and coverage when selecting sources. However, most online sources used in their papers were integrated inappropriately, likely due to difficulties paraphrasing and retaining meaning. The study aimed to understand students' abilities in identifying credible online sources and properly integrating those sources through paraphrasing, summarizing, and citations.
The document discusses qualitative content analysis. It defines content analysis as the systematic classification and interpretation of text through coding and identifying themes. Content analysis allows researchers to understand social reality and explore meanings in a scientific manner. It can use inductive or deductive approaches to analyze data. Unique characteristics include flexibility in approaches and ability to extract manifest and latent meanings from text. Researchers use content analysis to describe message characteristics and identify themes. The process involves defining a research question, sampling material, developing a coding scheme of themes, coding the content, and analyzing results both qualitatively and quantitatively. Validity and reliability are also addressed.
Content analysis is a research technique used to objectively analyze the manifest content of communication through a systematic classification and description of its key elements. It can be used both quantitatively by counting words and themes, and qualitatively by analyzing the social meanings and concepts within a text. Coding is a crucial part of content analysis, where conceptual categories are applied to classify the text. The goal is to describe the actual content present rather than interpret the author's intended meaning, distinguishing it from hermeneutic analysis. Content analysis has been widely applied in social research to study topics like propaganda, popular culture, gender representations, and discourse.
This document provides a critical analysis of Erik Tancorov's research paper on the challenges faced by gender non-conforming students and their coping mechanisms. The analysis assess Tancorov's paper in terms of paradigm, genre, conceptual framework, research question, literature review, and methodology. It finds that Tancorov takes a social constructivist paradigm and uses a case study genre. While his research question and literature review are relevant, his methodology could be strengthened by incorporating additional data sources. The analysis concludes that explicitly stating the social constructivist paradigm and emphasizing queer identity development through a critical theory lens could enhance the effectiveness of Tancorov's study.
Content analysis is a scientific method used in social science research to analyze communication content and draw inferences. It involves six main steps: 1) formulating a research question, 2) selecting communication content and samples, 3) developing content categories, 4) determining units of analysis, 5) creating a coding scheme and testing intercoder reliability, and 6) analyzing collected data. Content analysis has been used in studies of propaganda, media coverage of political issues, and trends in academic publications. It allows for quantitative analysis of messages but cannot verify causal relationships or ensure shared meanings between senders and receivers.
This document discusses quantitative and qualitative media content analysis. It provides definitions of content analysis and outlines the key elements of conducting quantitative media content analysis, including objectivity, validity, generalizability, replicability, and sampling. Quantitative content analysis aims to be scientific while qualitative analysis seeks to understand deeper meanings and interpretations. Both approaches are seen as complementary ways to analyze media content.
How does content knowledge impact reading comprehension?Natalie Saaris
1) Content knowledge refers to all the information and facts people know about the world, which is obtained through experience, education, media, and social interactions.
2) A study found that students' background knowledge of baseball had a bigger impact on their comprehension of a passage about baseball than their reading level.
3) Building students' content knowledge should include directly teaching needed background context, sequencing instruction to build connections between topics, and using reading strategies in the context of meaningful content.
Content Analysis Overview for Persona DevelopmentPamela Rutledge
After developing an Ad Hoc persona as the core of your engagement strategy, it's important to test your assumptions against real people and real data. Content analysis is a methodology for evaluating text-based data that can be gathered from social media tools.
WK 10 – Research Workshop - Content and discourse analysis Carolina Matos
This document provides an overview of media content analysis. It discusses quantitative versus qualitative content analysis and defines content analysis as the systematic analysis of messages to make valid inferences. It also covers developing coding schemes, sampling methods, and how to conduct a content analysis study through coding data and establishing intercoder reliability. Manual and electronic methods of coding are also compared.
This document provides an overview of content analysis. It defines content analysis as the objective, systematic, and quantitative analysis of communicated content such as texts, books, websites, paintings and laws. The document outlines the various types of content that can be analyzed, such as written, oral, iconic, audio-visual and hypertext. It also discusses the different purposes and uses of content analysis across multiple fields. Furthermore, it describes the typical steps involved in conducting a content analysis, including planning, coding text into categories, examining results, and making inferences.
Role of College Libraries in meeting user’s information needs issues and chal...Dr. Utpal Das
The document discusses the role and issues facing college libraries in India in the digital era. It outlines the objectives of college libraries as enriching academic activities, providing information/knowledge support, providing electronic access to resources, preserving intellectual assets, and generating awareness through literacy programs. It also examines challenges such as limited budgets, poor infrastructure, increased R&D in ICT, information overload, pressure from agencies, and lack of human resources. Finally, it explores how the digital shift is impacting functions like collections, access, services, and archiving.
Content analysis is a research technique used to analyze messages and make inferences from text. It involves objectively and systematically identifying characteristics of messages according to explicit rules and procedures. Content analysis can be used to analyze various forms of communication and media. It has both quantitative and qualitative applications. The key aspects of content analysis involve defining categories, sampling data, analyzing context, establishing boundaries, and making valid inferences from text.
This document discusses media content analysis and provides guidance on conducting effective analysis. It addresses:
- The importance of timeliness, accuracy, and verification in intelligence reports.
- Definitions and methods of media content analysis, including quantitative and qualitative approaches.
- Advantages such as analyzing ideologies, accessing communication texts, allowing quantitative and qualitative operations, and providing historical insights.
- Disadvantages like potentially distorting society, being time-consuming, lacking theoretical basis, and oversimplifying complex texts.
- The six key questions that must be addressed in every content analysis related to data, definitions, populations, contexts, boundaries, and inference targets.
This document discusses various methods for conducting content analysis of documents, including both manual and automated techniques. It describes advantages and disadvantages of different approaches such as human coding, dictionary-based methods, and supervised machine learning. Examples of document types, coding units, and content categories are also provided to illustrate how to design and implement a content analysis study.
Content analysis is a systematic, objective, and quantitative method for analyzing documents. It involves inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information. There are six main stages to content analysis: selecting content, defining units of analysis, preparing content for coding, coding the content, counting and weighting the content, and drawing conclusions. Content analysis can be conceptual, focusing on concepts within the text, or relational, exploring relationships between concepts. While it provides valuable insights, content analysis is also very time consuming and prone to errors.
This document discusses content analysis, which is a research technique used to make inferences from textual materials. There are two main types of content analysis: conceptual analysis, which establishes the existence and frequency of concepts, and relational analysis, which explores relationships between concepts. The document outlines steps for conducting each type of analysis, such as developing coding rules, distinguishing concepts, and reducing text to categories.
This document provides an overview of a library instruction session on finding information using the Cook Library. It discusses developing effective search strategies and keywords, searching the library catalog to find books, using databases to find articles, and tips for evaluating websites. Search techniques like Boolean operators, truncation, and using synonyms are explained. Examples are provided of searching the catalog and databases for information on how information technology can affect foreign language learning in college students. Contact information is given for the instructor and reference librarians for any additional questions.
The document discusses reference, research, and reading strategies for middle school students. It provides an overview of the FINDS model for guiding students through the research process, focusing on locating, organizing, and presenting information. It also analyzes FCAT data to identify skills successful students demonstrate, such as drawing conclusions, distinguishing strong evidence, and differentiating between valid and accurate information.
This document provides information and resources for teaching 6th grade students reference and research skills using the FINDS model. It outlines the FINDS process which guides students through focusing their topic, investigating sources, taking notes, developing their findings, and scoring their research success. Examples are given for applying FINDS to informational texts, fiction texts, and novel extensions using technology. Teachers are encouraged to collaborate with media specialists and pursue online courses to learn more about integrating the FINDS model into reference and research instruction.
This document defines content analysis and discusses Kerlinger's definition. Kerlinger defines content analysis as a systematic, objective, and quantitative method for studying communication. It involves treating all content in an identical manner (systematic), allowing different researchers to obtain the same results (objective), and quantifying the analysis to aid precision (quantitative). The document also lists uses of content analysis, such as identifying what exists in media, comparing media to reality, testing hypotheses, and exploring media images of minority groups.
This document provides an overview of hypertext and intertext in reading and writing. It defines hypertext as non-linear text that uses links to allow readers to navigate between related pieces of information and create their own understanding. Hypertext is made possible by technologies like the World Wide Web and allows for multimedia integration. Intertext refers to the relationships between texts and how a text's meaning depends on its context. Reading and writing involves understanding intertextual connections and how authors develop arguments using evidence from other sources.
The document summarizes research on students' ability to critically evaluate information found online. It discusses how students struggle with skills like determining a website's credibility and relevance. The study aimed to compare the strategies used by students who were successful versus less successful on a task evaluating the reliability of websites. By analyzing think-aloud protocols, researchers hoped to better understand the relationship between offline and online evaluation skills and identify common markers used to judge a site's reliability.
An Analysis on Students' Use of Online Resources for Written Assignments (Cheryl Brown)
This document summarizes a research study on students' use of online resources for written assignments. The study investigated how 4 Indonesian university students selected credible online sources and integrated those sources into their research papers. It found that students applied criteria of authority, currency, and coverage when selecting sources. However, most online sources used in their papers were integrated inappropriately, likely due to difficulties paraphrasing and retaining meaning. The study aimed to understand students' abilities in identifying credible online sources and properly integrating those sources through paraphrasing, summarizing, and citations.
Online Reading Comprehension: Opportunities, Challenges, and Next Steps (Julie Coiro)
How does reading and learning change on the Internet? You are invited into a conversation about the nature of information on the Internet and its implications for how we think about reading comprehension and critical thinking in a digital information age. Julie first explores how the Internet poses new opportunities for authentic inquiry, collaborative conversations, and students to develop their voices as active citizens. Then, she describes the reading challenges that extend beyond traditional reading comprehension skills to encompass rapidly changing literacies for questioning, locating, evaluating, synthesizing, and communicating information during online inquiry. Finally, she highlights important areas for future research in order to keep up with the changing technologies that will continue to redefine what literacy means in the future.
The author analyzed their use of literacy assessments and lesson planning approaches for early reading students. Running records and the Motivation to Read Profile were used to understand students' cognitive and interest levels to better plan instruction. A literacy matrix was utilized to select texts at different ability levels. Lessons focused on teaching decoding, comprehension strategies, and metacognition to promote independent learning. Both critical thinking and response perspectives were incorporated to foster deeper analysis and connections to texts.
This presentation analyzed the presenter's use of various literacy teaching strategies in her classroom. It discussed how creating individual student portfolios helped her get to know students' interests and abilities. It also described how selecting texts at different levels that interested students promoted engagement. The presentation explained that teaching critical thinking skills helped students analyze texts more deeply. Overall, it demonstrated how the presenter developed a literate environment catering to individual students.
This document discusses creating a literate environment for literacy learners by getting to know students, selecting appropriate texts, and using interactive and critical/response instructional strategies. It recommends assessing students' cultural backgrounds, interests, reading abilities and attitudes to understand their needs. A variety of fiction and nonfiction texts at different levels should be selected. Research-based practices like guided reading, word study, and choral reading are interactive strategies that support literacy development, while discussions and writing allow students to critically respond to texts.
The document discusses key principles for developing literacy in early readers. It emphasizes the importance of understanding students' cognitive and non-cognitive needs through various assessments. The author developed a literacy unit on bats for three transitional readers using texts that addressed their needs. Strategies focused on vocabulary development and comprehension. Students showed growth in these areas, demonstrating the importance of selecting appropriate texts and instructional strategies matched to students' literacy levels and needs.
SOURCE-BASED NEWS WRITING AMONG UNDERGRADUATE STUDENTS: STUDENTS’ PERSPECTIVE... (AJHSSR Journal)
ABSTRACT: Students' interest in writing must be increased, especially writing based on sources.
This study therefore examined students' perspectives and perceived challenges in source-based
writing. The research was designed as a case study involving 68 students from the University of
PGRI Semarang, and it analyzed 20 selected news writing papers written by the students. Data
were collected through a questionnaire and observation. The results showed that 59% of students
perceived writing news papers as very difficult: they struggled to find suitable sources for their
material and to develop their ideas, and they tended to rely on internet articles whose originality
was unclear. One reason was that 80% of students did not know where to find indexed journals to
use as references for their writing. In addition, many students preferred not to use journals, which
are mostly English-based, because of their limited English skills. The implication is that students
should be given further instruction in how to access journals and use reference sources in writing.
It is hoped that lecturers will assign more source-based news writing so that students' ability to
write and develop ideas from existing sources can be further improved.
KEYWORDS: source-based news writing, teaching writing, writing
This document discusses various approaches and considerations for literacy instruction. It describes how assessments can help teachers understand students' needs and motivations. It also discusses selecting texts across different genres and levels. The document outlines the interactive, critical, and response perspectives for literacy lessons, focusing on developing strategic, metacognitive, and critical thinking skills when reading and writing.
A Three-Stage Framework for Teaching Literature Reviews: A New Approach (Vernette Whiteside)
This three-stage framework is proposed to better teach students how to write literature reviews:
1. Students learn how to systematically search relevant literature through database examples and key search strings.
2. Students learn to critically read and deconstruct texts using a template and questioning approach.
3. Students learn to reconstruct the material into a coherent argument using a simple metaphor to demonstrate synthesis.
The framework aims to simplify the literature review process for students and provide explicit guidance for teachers.
This document discusses creating an effective literacy classroom environment. It emphasizes getting to know students through assessments of their reading abilities, attitudes and motivation. The document also covers selecting appropriate texts at students' reading levels across genres. It describes implementing interactive, critical and response perspectives in literacy instruction, including modeling comprehension strategies and providing opportunities for student response and interaction with texts.
This presentation analyzes how several key elements as discussed in the framework for literacy helped me create a literate environment in my classroom.
This document discusses a study that investigated how individual differences in thinking dispositions may affect student learning from multiple-document science inquiry tasks. Middle school students were given documents about global temperature patterns and asked to understand how and why recent patterns differ from the past. Understanding was assessed through an essay and verification tasks. The study found that reading skills and commitment to logic, evidence, and reasoning uniquely predicted understanding, even after accounting for prior knowledge and interest. This suggests these thinking dispositions and reading ability independently influence how students learn from multiple documents during science inquiries.
This document discusses assessing the credibility of weblogs (blogs) through natural language processing (NLP) techniques. It proposes a multi-phase study to identify factors that users consider when evaluating blog credibility, test these factors by analyzing blog readers' judgments, perform NLP analysis on blogs to extract credibility profiles, and analyze comments to determine if profiles match readers who found the blogs credible. A preliminary framework identifies four credibility assessment factors: the blogger's expertise/identity, trustworthiness/values, information quality, and personal appeals.
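The four-factor framework above (expertise/identity, trustworthiness/values, information quality, personal appeals) is a conceptual model, not code. Purely as a hedged sketch of what its NLP phase might start from, one could compute simple surface proxies for two of the factors; every feature name and heuristic here is an assumption for illustration, not the study's actual method.

```python
import re

def credibility_features(post_text, has_author_byline):
    """Illustrative surface proxies for blog credibility factors.

    These are hypothetical stand-ins: outbound links as a crude proxy
    for sourcing/information quality, a byline flag as a crude proxy
    for blogger identity. A real credibility profile would need far
    richer linguistic features.
    """
    words = post_text.split()
    links = re.findall(r"https?://\S+", post_text)
    first_person = sum(w.lower().strip(".,!?") in {"i", "me", "my"}
                       for w in words)
    return {
        "word_count": len(words),
        "outbound_links": len(links),        # proxy: information quality
        "names_author": has_author_byline,   # proxy: expertise/identity
        "first_person_rate": first_person / max(len(words), 1),
    }
```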
One page response for this discussion post. Must be APA, three scho.docx (johnbbruce72945)
One page response for this discussion post. Must be APA, three scholarly references.
A Brief Statement that Summarizes the Literature I Have Reviewed to Date
Researchers have found that approximately half of all undergraduate college students have committed some form of plagiarism (Blum, 2011). However, this number may be inaccurate because some students may not admit to plagiarism and because it does not take into account all ways in which students can plagiarize (Colella-Sandercock, 2015). A relatively new way for students to plagiarize is to use paraphrasing websites (Rogerson & McCarthy, 2017). These are free websites where students can copy information from a source onto the website, and the website will then rewrite the information for students free of charge (Rogerson & McCarthy, 2017). Although these websites are called
"paraphrasing websites," they do not actually paraphrase information. Instead, they replace words found in the original text with synonyms (Kannangara, 2017). This is also known as patchworking, which is considered a form of plagiarism (Howard, 1992). Sometimes the patchworking done by these paraphrasing websites makes the new passage sound unintelligible (Kannangara, 2017). Despite this, it has been suggested that students might use paraphrasing websites because they believe their papers will go undetected by plagiarism detection software (Kannangara, 2017; Rogerson & McCarthy, 2017). However, more research is needed to support this claim (Rogerson & McCarthy, 2017). There might be other reasons why students use these websites (Rogerson & McCarthy, 2017).
Academic locus of control is one theory that explains why some students choose to commit other forms of plagiarism (Bretag et al., 2014; Pino & Smith, 2003; Power, 2009). Academic locus of control refers to whether students take personal responsibility, or blame others for their academic successes or failures (Pino & Smith, 2003). Researchers have found that students with high internal locus of control, which means that they take personal responsibility for their academic successes and failures, are less likely to plagiarize than students with high external locus of control, which means that students believe that someone else besides them is to blame for academic successes and failures (Power, 2009). However, past research findings on academic locus of control should not be generalized to students who use paraphrasing websites, because researchers did not measure this type of plagiarism in their studies (Pino & Smith, 2003; Power, 2009).
Gaps/Limitations in the Literature
Most research on the use of paraphrasing websites by college students has focused on what these websites do and the quality of the passages that are created by these websites. Less is known about why students choose to use these websites (Kannangara, 2017; Rogerson & McCarthy, 2017). Researchers have found that poor time management skills, as well as a lack of understanding of how to paraphr.
This section provides annotations of recent research on digital literacy and technology tools in English language arts contexts. Key findings include:
- Speech recognition software supported struggling first grade readers' engagement and writing accuracy when purposefully integrated into the classroom.
- Tablets offered benefits for middle school students with diverse learning needs but also challenges regarding safety, security, and behavior that require solutions.
- Digital texts fostered affective literacy encounters for readers and supported emergent literacy practices when their material-social aspects were foregrounded.
- "Let's Play" video games allowed analysis of games as cultural texts and served as models for media production and critical conversations in the classroom.
- More successful adolescent readers engaged in higher-level ep
This document discusses key aspects of early literacy instruction for beginning readers in pre-K through 3rd grade. It addresses the importance of understanding students' motivation and engagement with reading through assessments. Teachers should learn about students' interests and reading abilities in order to select appropriate texts at the right level. The document also discusses interactive, critical, and response perspectives for literacy instruction, emphasizing comprehension strategies, examining texts critically, and personal engagement with reading.
A study of sixth graders’ critical evaluation of Internet sources (aj6785)
This study was a descriptive, task-based analysis to determine how sixth-grade students approach the cognitive task of critically evaluating Internet sources. Pairs of sixth grade students in an Information Literacy course evaluated four preselected Internet sites to determine their credibility and appropriateness for two specific research scenarios. Data for analysis included written responses, screencasts, and video of students while completing the task. Results suggest that these students tended toward simplistic modes of evaluation in the face of increased cognitive load, though some moved toward a more critical stance and many applied basic metacognitive strategies. The study points to the importance of instructional approaches that teach students to flexibly apply evaluation criteria in ill-structured environments, that teach advanced metacognitive strategies, and that instill habits of mind for critical inquiry. Instruction that empowers students to practice healthy skepticism even in the face of authority is also essential.
This document analyzes how Gina Stewart-Harman created an effective literacy environment for her students using research-based practices. She gets to know students' literacy experiences and needs through assessments and uses this data to guide instruction and select appropriate texts. Stewart-Harman considers text dimensions, levels, structures, and genres to choose books that engage and meet students' needs. She implements the interactive, critical, and response perspectives on literacy instruction to facilitate cognitive and affective development and allow students to connect with texts on personal levels.
Angela K. Johnson is seeking a position that values diversity, social justice, critical thinking and student-centered education. She has over 20 years of experience as an instructional coach, library media specialist, language arts instructor and technology specialist. She holds a Ph.D. in Educational Psychology and Educational Technology from Michigan State University and has received numerous awards for her work, including the Lakeshore Public Schools Superintendent's Excellence Award.
Eighth graders were asked to critically evaluate the trustworthiness of websites presenting opposing views on the controversial issue of whether school personnel should be allowed to carry concealed weapons. Higher reading level students focused more on evidence, authorship, and sourcing criteria. Lower and average readers were more likely to justify their stance rather than evaluate trustworthiness, and average readers had more difficulty separating personal opinions. The study suggests teaching all students, especially lower and average readers, to pay closer attention to authorship and sourcing criteria when evaluating online information.
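The finding above, that stronger readers differentiated more between sources, could be operationalized in several ways; the study's actual analysis is not shown here. As a hedged sketch under entirely hypothetical data, one plausible measure of "differentiation" is the spread of each student's ratings across the five articles, correlated with reading score.

```python
from math import sqrt
from statistics import pstdev

def differentiation(ratings):
    """Spread of one student's trustworthiness ratings across sources.
    Zero means every source got the same rating (no differentiation)."""
    return pstdev(ratings)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: three students rate five articles on a 1-5 scale.
ratings = [[3, 3, 3, 3, 3], [1, 2, 3, 4, 5], [2, 3, 3, 4, 3]]
reading_scores = [60, 95, 75]  # hypothetical reading comprehension scores
spreads = [differentiation(r) for r in ratings]
r = pearson(reading_scores, spreads)
```

In this toy data the student who rates every article a 3 shows zero differentiation, so the correlation between reading score and spread comes out strongly positive, the qualitative pattern the study reports.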
The document is Angie Johnson's final reflection paper for her course CEP 917. It synthesizes the key theories and concepts from the course. The reflection is organized into a logical progression of ideas moving from general to specific. Some of the main points made include: [1] Design is relevant to education as teaching aims to change minds; [2] Students can learn through designing changes in the world; [3] Educational research can benefit from a design-based method by studying solutions to authentic problems.
Angie Johnson designed a project to share the effective teaching method of Socratic seminars with other teachers. She created a video showing an edited seminar, a presentation explaining the benefits and basics of seminars, and a website with additional resources. She gathered feedback which showed the importance of motivating teachers and providing support. Her products were well-received in a trial presentation. Going forward, she plans to expand the website with more resources from her ongoing use of seminars in the classroom.
- The document details a woman's shopping excursion to an Ann Taylor Loft outlet store to purchase a new suit for a conference. As she browses the store, her mood fluctuates between interested, annoyed, fatigued, and relieved.
- She has difficulty finding her size in various sections and grows tired from reaching high racks. In the fitting room, she encounters an impatient crowd and limited space. Ultimately, she purchases a suit and shirt at 50% off.
Socratic seminars are structured discussions that emphasize core skills like communication, evidence, reasoning, and critical thinking. They involve students sitting in a circle to discuss open-ended questions about a text without raising hands. The teacher facilitates the discussion by paraphrasing, following up, and encouraging participation from all students. Students reflect individually and as a group on their discussion and set goals for improvement. Socratic seminars take practice but can promote significant growth in students' analytical and discussion skills.
Angela K. Johnson is a Ph.D. candidate studying students' information seeking behaviors and meaning construction using digital resources. She has over 20 years of experience as a middle school media specialist and language arts teacher. Her research focuses on how sociocultural contexts and instructional delivery influence students' academic learning processes online. She holds several teaching certifications and has received grants for technology integration projects at her school.
This document provides an overview of how to integrate technology into teaching the Common Core State Standards. It begins with introducing Edmodo as a platform for discussion and sharing resources. It then discusses three elements of the teaching process: 1) Locating texts and pairing fiction with nonfiction, 2) Close reading and critical thinking using tools like Wallwisher and Diigo, and 3) Assessing knowledge through digital products created with tools like Smore, Glogster and Animoto. Throughout are examples of specific tools that can be used at different grade levels to meet the goals of the Common Core.
Angela K. Johnson is a Ph.D. candidate studying students' information seeking behaviors and meaning construction using digital resources. Her research also examines how sociocultural contexts and instruction influence these processes. She has over 20 years of experience as a middle school media specialist and high school teacher, and holds several teaching certifications. Her areas of research interest and extensive professional background demonstrate her expertise in digital literacy, online learning, and information science in education.
This document provides instructions for using Diigo, a web annotation and bookmarking tool. It explains that Diigo allows users to save web links and pages for future reference, organize those links in a searchable manner, and find saved links that meet specific needs through tagging. The document then provides step-by-step instructions for installing the Diigo bookmarklet toolbar button and using it to bookmark and tag web pages in a user's Diigo library. It emphasizes that thoughtful, careful tagging is important so bookmarks can be easily found later through searching tags.
Angela K. Johnson is a PhD candidate studying students' information seeking behaviors and meaning construction using digital resources. Her research focuses on middle and high school students. She has over 20 years of experience as a middle school media specialist and high school teacher. She teaches courses in English, literature, composition and French. She holds several certifications in teaching and library media.
Comps paper journal version 2015
TRUSTWORTHINESS EVALUATION, CONTROVERSY, AND READING ABILITY
Trustworthiness Evaluation, Controversy, and Reading Ability:
A Study of Eighth Graders Evaluating Multiple Conflicting Sources
Angela K. Johnson and Ralph T. Putnam
Abstract
The present study examined how eighth graders evaluate the trustworthiness of sources
presenting opposing views on a controversy. Data included students’ prior opinions on the issue,
trustworthiness ratings of five offline web articles, justifications for those ratings, and a reading
comprehension measure. Students differentiated sources by trustworthiness, but reading ability
correlated with greater differentiation. Prior opinion influenced ratings; this was somewhat
mediated by reading level. Students attended to content factors most frequently, with high
readers attending to authorship more than other readers. Data suggest distraction from the
evaluation task among average and low readers. Implications point to the benefits of teaching
attention to source authorship and providing scaffolds to mediate the complexity of critical
evaluation in the context of reading about controversial issues.
Eighth Graders’ Critical Evaluation of Sources about a Controversial Issue
With large numbers of schools providing “one to one” and “bring your own device”
access, the web has become the front door to information in the classroom. In exchange for
access to vast quantities of information, however, comes the responsibility to determine what
information is reliable and who can be trusted in this immense and free domain. No longer can
the public depend solely on publishing companies, editors, journalists, or librarians to vet
sources; users must also learn to critically evaluate information themselves.
Against this backdrop, educational researchers have scrambled to understand the extent to
which students are capable of and willing to critically evaluate web sources (e.g., Hargittai,
Fullerton, Menchen-Trevino, & Thomas, 2010; Head & Eisenberg, 2010; Metzger & Flanagin,
2013). Studies suggest that students have difficulty determining and applying effective criteria
for evaluating their trustworthiness (Coombes, 2008; Kiili, Laurinen, & Marttunen, 2007; Kim &
Sin, 2011; Kuiper, Volman, & Terwel, 2005). Indeed, evaluation seems to be one of the more
challenging aspects of online reading (Colwell, Hunt-Barron, & Reinking, 2013; Walraven,
Brand-Gruwel, & Boshuizen, 2009).
At the same time, the Common Core State Standards (Common Core State Standards for
English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects,
2010), still adhered to by 38 states (Norton, 2017), prioritize the rigorous expectation that
students assess and construct arguments with sound claims, evidence, and reasoning (Key Shifts
in English Language Arts, 2010). In the pursuit of these goals, students frequently research
controversial issues on the web, and are expected to process and evaluate arguments while also
evaluating trustworthiness. Little is known about how the cognitive challenges of reading
multiple texts with conflicting views may affect middle-grade students’ ability to evaluate the
trustworthiness of sources. If prior opinions significantly affect their ability to do so, students
may be particularly vulnerable to bias and misinformation about controversial issues. Since an
effective democracy rests on an informed electorate, it is important to understand the challenges
inherent in evaluating sources about controversial issues and to teach students developmentally
appropriate methods for overcoming them. This study examines if and how eighth graders’ prior
opinions and reading abilities affect the criteria they apply when evaluating the trustworthiness
of sources about a controversial issue—whether school personnel should be allowed to carry
concealed weapons in schools.
Theoretical Framework
Several theoretical constructs inform the present study. Theories of online text processing
are grounded in theories of offline reading, but apply these to learning on the web, where readers
are more likely to confront multiple conflicting texts of varying trustworthiness. These clarify the
specific skillsets needed to meet such challenges. The processing of multiple conflicting texts is
also influenced by research on persuasion, which is therefore pertinent to the present study.
Finally, theoretical constructs of metacognition bear on our understanding of a reader’s capacity
to manage the challenges of both the learning task and the learning environment. An overview of
these constructs follows.
Online Text Processing
Hartman, Morsink, and Zheng (2010) asserted that a complication of online reading
resides in the “multiple plurals” (p. 140) of online texts. The various elements that combine to
establish meaning—for example, reader, author, task, context, and so forth—are themselves
plural and continually shifting, and therefore confound the act of meaning construction.
Hartman and colleagues proposed that a reader must integrate three types of knowledge in
comprehending online text: (a) knowledge of identity—knowing who wrote a text and how
authors “construct, represent, and project online identities” (p. 146); (b) knowledge of location—
knowing how to “orient oneself in a website” and “in cyberspace” (p. 148); and (c) knowledge of
one’s own goal—knowing why one is reading and remaining focused on that goal. Application
of the first may involve assessment of an author’s expertise and trustworthiness, while the third
may involve assessment of a site’s match to reading goals.
Studies have shown that, when evaluating sources, students attend to information
relevance more than other criteria (Kuiper et al., 2005; Mothe & Sahut, 2011), signifying that
they do evaluate sources for relevance to reading goals. This is in accordance with Hartman,
Morsink, and Zheng’s (2010) third type of knowledge, knowledge of goal. Other studies have
also shown readers to be task-oriented while reading online, suggesting they are capable of
remaining focused on broader goals (Kiili et al., 2007; Ladbrook & Probert, 2011). Of the three
categories of knowledge, students often lack—or fail to apply—knowledge of identity, or
authorship (Bråten, Strømsø, & Britt, 2009; Coiro, 2007; Zawilinski et al., 2007). Lack of
attention to authorship is particularly problematic because sourcing—defined by Rouet (2006) as
“identifying a number of parameters that characterize the author and conditions of production of
the information” (p. 177)—has been found to improve students’ comprehension of multiple
conflicting online texts (Strømsø, Bråten, & Britt, 2010).
In fact, research on the processing of multiple texts suggests that sourcing is an important
element of comprehension. Rouet (2006) posited that the mental representation an expert reader
creates while reading multiple texts includes two components, or nodes. The content node
comprises a mental representation of the content of a single source, integrating the information
encoded in the text and the prior knowledge of the reader. The source node is a mental
representation of the source and its author, including identification, affiliations, expertise, bias,
and prior knowledge of these. The source representations of individual texts combine to create a
source model, which would reflect, for example, whether two sources agreed or disagreed, and
whether one author was more expert than another. Content and source representations combine
to create a situation model, a synthesized understanding of the topic. To successfully construct a
situation model, the reader identifies the source of each individual text, compares information in
one text to that of another, and maintains a connection between source nodes and content nodes.
Studies show that expert readers successfully attend to source characteristics to help them
synthesize multiple conflicting texts (Wineburg, 1991), but that younger readers have difficulty
keeping track of connections between source and content nodes (Golder & Rouet, 2000, as cited in
Rouet, 2006). Other studies found that readers overlook author and source information as they
evaluate web sources (Bråten et al., 2009; Coiro, 2007; Zawilinski et al., 2007).
Findings suggest, therefore, that enabling attention to the relationship between the author,
purpose, and content of a message is an essential step toward effective evaluation, a claim
reflected in Coiro’s (2014) recommendation that students evaluate web sources based on four
criteria: (a) content relevance (the extent to which information and its presentation meet the
needs of the reader); (b) content accuracy (the extent to which information can be viewed as
factual and accurate); (c) author reliability (the extent to which the author can be trusted to
provide reliable information); and (d) author stance (the perspective or bias of the author, which
may influence the message).
Persuasion Theory
Another body of literature informing the present study involves theories of persuasion. In
studies of persuasive text comprehension, readers consistently demonstrate biased assimilation,
the evaluation of arguments in favor of their personal views (Kobayashi, 2010; Lord, Ross, &
Lepper, 1979). Misinformed readers will also be resistant to correction, a phenomenon known as
the continued influence effect (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012).
According to Lewandowsky et al., the effect may result from the strength of preliminary mental
models constructed during the initial exposure to a series of events, facts, or processes. If a
mental model was initially constructed with misinformation, corrections will create gaps in the
model, and such gaps may be less desirable than retaining a complete, albeit misinformed,
model. In addition, easy-to-process information is accepted more readily than difficult-to-process
information. This would include ideas that are expressed simply, but also ideas familiar to the
reader (Lewandowsky et al., 2012). In the context of reading persuasive texts, it would follow
that more easily understood arguments would carry greater weight than those that are more
difficult to understand. In fact, since the structure of argumentative text makes it inherently more
challenging to process than other types of text (Haria & Midgette, 2014; Larson, Britt, & Larson,
2004) one would expect evaluation of argumentative texts in general to be difficult. Wiley and
Bailey (2006) found little evidence of student dyads using evaluative strategies when reading
argumentative texts, and Hsieh and Tsai (2013) found that cognitive load affected the ability of
readers to apply advanced evaluation strategies.
In sum, readers may approach controversial texts with conflicting purposes: On the one
hand a reader instinctively seeks affirmation for prior beliefs; on the other, the evaluation of
trustworthiness requires a more objective assessment. If the source presents an opinion consistent
with one’s own, the reader may accept its argument uncritically; alternatively, if the source
presents an opposing opinion, he or she may attend more closely in order to counterargue, a form of
biased assimilation known as disconfirmation bias. A second such bias, the confirmation
bias, in which participants seek out information to confirm their existing views and dismiss
information that conflicts with their views, has also been found (Kobayashi, 2010b; Taber &
Lodge, 2006; van Strien et al., 2014; Winkielman, Huber, Kavanagh, & Schwarz, 2012). In
either case, the reader’s assessment of trustworthiness is affected by prior opinion. As
counterintuitive as it may be, readers of persuasive texts would do well to bracket off their
personal views to evaluate trustworthiness from as objective a stance as possible. In one study
students who were instructed to use scientific standards of support provided by evidence to
critically evaluate sources for and against human-induced climate change showed significant
changes in their perceptions of the issue (Lombardi, Sinatra, & Nussbaum, 2013), suggesting
they drew effective conclusions regarding the reliability of the texts they examined. The
implication is that rational objectivity is one important precursor for effectively evaluating texts
about controversial issues.
Metacognitive Processes in Evaluation
Metacognition involves the ability to monitor and self-regulate one’s thinking and
learning, and has been likened to a toolbox: The skilled user knows which tools to use in
particular circumstances, and alleviates some of their workload by efficient selection and
application of those tools (Ford & Yore, 2012). In the cognitive workspace, metacognition is an
executive function allowing for “planning, monitoring, and regulating actions and command of
materials to respect the spatial limitations” of memory, thereby offloading certain cognitive
demands to allow greater cognitive space for message processing (Ford & Yore, 2010, p. 258).
In addition, metacognition is generally considered to be a “significant path to critical
thinking” (Magno, 2010, p. 137). According to Kuhn and Dean (2004), critical thinking requires
“meta-level operations” (p. 270) that consist of both separate and integrated metacognitive skills
functioning at the executive level. The executive operations that serve metacognition include
declarative, conditional, and procedural knowledge; planning, monitoring, and debugging
strategies; information management; and evaluation functions (Magno, 2010). To exemplify,
Magno offers the following:
a meta-level connection occurs when the individual evaluates an argument, . . . makes
sure that they are well informed about the content (declarative knowledge), plans how to
make the argument (planning and procedural knowledge), monitors whether they
understood well the content to be evaluated (monitoring), and potently evaluates the
tasks. (p. 149)
In the context of evaluating sources that present arguments, executive function operates on
multiple levels: on one level the argument itself must be evaluated for cogency; on another level
the source must be evaluated for both relevance and trustworthiness. At the same time, these
functions must occur while bracketing off the reader’s personal bias from his or her evaluation to
retain an objective perspective (Haria & Midgette, 2014), another layer of complexity requiring
metacognitive skill and cognitive resources.
The objectivity that source evaluation requires appears to be facilitated by metacognitive
scaffolds. Lombardi et al. (2013) implemented a successful intervention for supporting the
evaluation of arguments in which students constructed evidence maps for opposing viewpoints.
Metacognition is also associated with the evaluation of source trustworthiness in general. Mason,
Boldrin, and Ariasi (2009) examined the influence of epistemic metacognition, defined as “a
reflective activity about knowledge and knowing” (p. 67), on the evaluation of web sources by
middle-school students. They found epistemic metacognition to be modestly associated with
higher-level reflection on the justification of knowledge on the web. Epistemic metacognition
also predicted the students’ critical comparison of information from several sources. Similarly,
Kiili, Laurinen, and Marttunen (2009) found metacognitive skills essential for effective
comprehension on the web, despite the fact that even competent readers evaluated information
relevance more often than they evaluated credibility. Recognizing the complexities of online
inquiry, Zhang and Quintana (2012) designed a digital support system for scaffolding students’
metacognitive processes, which they found to facilitate a “fewer-but-deeper pattern” of reading
(p. 194) in which students read fewer sources but spent more time on each. Although Zhang and
Quintana did not specifically examine evaluation behaviors, their results suggest that
metacognitive supports allow for deeper processing, a likely prerequisite for evaluation.
Therefore, although metacognition may not necessarily lead to evaluation, evaluation requires
some level of metacognition.
Summary
Taken in sum, the theoretical framework outlined above suggests that evaluating the
trustworthiness of sources presenting multiple views on controversial issues presents several
complexities, complicated even more by a reader’s prior opinions on the topic. These are
outlined as follows:
1. The processing of multiple texts, an inherent element of reading on the web, is more
challenging than the processing of single texts, requiring the reader to maintain an active
link between source and content models to construct an overall situation model.
2. The processing of persuasive texts is more challenging than the processing of narrative
and simple expository texts, requiring more cognitive effort to establish intratextual
connections.
3. The evaluation of multiple conflicting texts presents specific challenges to readers with
prior opinions on the topic, requiring strong executive function skills to evaluate
argumentative and one-sided texts objectively.
4. The process of evaluating sources is itself a higher-order skill, requiring metacognition in
the form of executive function processes.
It becomes clear that evaluating sources in the context of research on controversial issues
requires considerable metacognitive proficiency to coordinate and monitor executive function
processes. Despite this fact, it is common for students to conduct web research on controversial
issues of interest to them, because such activities heighten student engagement, serve the
learning goals of the Common Core State Standards, and build critical thinking capacity.
Research Questions
The purpose of this study was to examine how eighth-grade students grappled with the
challenges of evaluating multiple conflicting sources on a controversial issue. Although
numerous studies have examined students’ ability to evaluate the trustworthiness of web sites
(e.g., Goldman, 2011; Wopereis & van Merriënboer, 2011) and others have examined how
students who hold prior opinions synthesize texts presenting conflicting viewpoints (Bråten et
al., 2009; van Strien et al., 2014), few have examined how students evaluate the trustworthiness
of sources presenting conflicting viewpoints on controversial issues about which students have
prior opinions. Against the theoretical backdrop outlined above, the present study was designed
to examine these questions:
1. Are eighth-grade students able to effectively rate the trustworthiness of sources that differ
in quality with regard to author expertise and information referencing?
2. Are eighth-grade students able to set aside their personal opinions on an issue to
objectively evaluate sources that differ in quality with regard to author expertise and
information referencing?
3. By what criteria do eighth-grade students decide what sources to trust while examining
sources that differ in quality with regard to author expertise and information referencing?
4. Do eighth-grade students with differing abilities in reading comprehension apply
evaluation criteria differently?
Method
This study was a descriptive, task-based analysis to determine how students evaluate the
trustworthiness of sources presenting conflicting sides of a controversial issue. Study
participants included 81 eighth graders (42 males and 39 females) in a required language arts
course taught by the first author in a rural-suburban public school in the Midwestern United
States. As measured by qualification for free and reduced school lunch, 29% of the school
population was low income. All students spoke English as a first language.
Data Collection
Students engaged in a task to determine the trustworthiness of articles from the Web that
presented opposing viewpoints on the issue of allowing school employees to carry concealed
weapons in schools. Students examined five articles (see Table 1) in random order: two articles
in favor of allowing school personnel to possess concealed guns in schools (Lott, 2012; Johnston,
n.d.), two articles against (Gorman-Smith & McLaughlin, 2012; Karoli, 2012), and one neutral
article (FactCheck.org, 2012). Of the two pro and two con articles, one was a high-quality source
and one was a low-quality source, based on author expertise and information referencing.
Specifically, the authors of the two low-quality sources were individuals without credentials that
would qualify them as experts on the topic of guns in schools. The professional identity of one of
those authors was not revealed, and the profession of the other (a minister and medical doctor)
reflected no expertise on the general topic of gun control or on the specific topic of guns in
schools. Both sources considered low quality contained strongly emotional appeals but listed no
references. In contrast, the high-quality sources clearly revealed the authors’ educational and
professional backgrounds, which in both cases provided some indication of expertise on the
issue. These included a former employee of the U.S. Sentencing Commission and a professor
from the University of Chicago’s School of Social Service Administration. In addition, both
high-quality sites contained references for information and were measured in tone. It should be
noted, however, that even the sources considered high quality in the study leaned strongly toward
a particular stance. In the context of researching controversial issues, it is typical for sources to
be partial. Our concern was whether, under such conditions, students would use author and
referencing information that was readily available on the page to help ascertain the
trustworthiness of the sources. Several web site genres, including newspaper opinion columns,
blogs written by religious and special interest groups, and the myth-debunking page of a
nonpartisan research foundation were included. The sites were presented as screen shots in their
original web form but not linked to the Internet while students viewed them; although this made
the task less authentic in that students were unable to search or click on links, it allowed for the
measurement of students’ attention to the most obvious authorship and referencing features—
those that required no further navigation or investigation beyond the first page. The high-quality
sites were chosen in part because they provided ample information about the authors to inform
students’ evaluations of trustworthiness, whereas the low-quality sites provided little or no such
information. Removing the complexity and possible distractions of web space navigation
allowed for a controlled examination of whether students would consider authorship and
referencing features when given easily accessible information and adequate time to do so.
Data collection spanned three class sessions. On the first day, the teacher read aloud
while students read silently a two-paragraph summary presenting opposing sides of the issue and
answered questions regarding vocabulary to clarify the meaning of the passage. Students then
circled a statement indicating whether they agreed, disagreed, or felt neutral about the assertion:
“Schools would be safer if school employees, including teachers, were allowed to carry
concealed weapons in school.” Students did not discuss their opinions on the issue, and surveys
were collected immediately.
The teacher then conducted a 15-minute class discussion on web site trustworthiness. She
first asked students to define trustworthiness in general, and then asked how they would define it
in describing web sites. Students defined trustworthiness in terms of people who are “able to be
trusted” or who “you can count on.” Students responded that trustworthy web sites “tell the
truth,” “don’t make up lies to sell you things,” and “have facts, not just stuff people made up.” It
was agreed that trustworthy sites should be used for school assignments while less trustworthy
sites should be avoided.
The next day, students completed the evaluation task in the school computer lab. In one
50-minute period, students examined and rated the trustworthiness of five sources on a 5-point
Likert scale from very trustworthy to not at all (see Appendix A). Trustworthiness was defined in
the directions as “able to be relied on as honest and truthful.” After rating each site, students
listed all the reasons for their rating in a bulleted list.
Data Analysis
Data for quantitative analysis included each student’s (a) prior opinion rating;
(b) trustworthiness rating of the two pro and two con sites; and (c) reading level as determined
by scores on the reading comprehension portion of the Measures of Academic Progress
(MAP) test, administered online by the Northwest Evaluation Association (NWEA). Ratings of
the neutral stance site were excluded. Students were divided into reading groups using divisions
suggested by the NWEA (Comparative Data to Inform Instructional Decisions, 2011), whereby
students .5 standard deviation above the grade-level norm were labeled “higher achievement,”
and those .5 standard deviation below the norm were labeled “lower achievement.” Students
falling in the range between were considered “average achieving readers.” We ran a repeated-
measures ANOVA with Site Quality (high or low) and Site Stance (pro or con) as within-
subjects factors, and Reading Level (low, average, high) and Prior Opinion (con, neutral, pro) as
between-subjects factors.
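The reading-group assignment described above can be sketched as follows. The cutoffs mirror the ±0.5 standard deviation divisions reported in the text, but the norm mean and standard deviation shown are illustrative placeholders, not the actual NWEA norm values.

```python
# Hypothetical sketch of the reading-group assignment; norm values
# below are placeholders, not NWEA's published 2011 norms.

def assign_reading_group(score, norm_mean, norm_sd):
    """Label a MAP reading score relative to the grade-level norm.

    Scores at or above norm + 0.5 SD are 'high', at or below
    norm - 0.5 SD are 'low', and everything between is 'average'.
    """
    if score >= norm_mean + 0.5 * norm_sd:
        return "high"
    if score <= norm_mean - 0.5 * norm_sd:
        return "low"
    return "average"

# Illustrative norm values only
NORM_MEAN, NORM_SD = 220.0, 15.0

print(assign_reading_group(230, NORM_MEAN, NORM_SD))  # high
print(assign_reading_group(210, NORM_MEAN, NORM_SD))  # low
print(assign_reading_group(221, NORM_MEAN, NORM_SD))  # average
```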
Qualitative analysis data drew on the pre-task survey of students’ opinion and the
evaluation criteria listed by students to justify their ratings of the five sites, including the neutral
stance site. Analysis of evaluation criteria students listed involved a grounded coding process
(Glaser & Strauss, 2009). Codes emerged through constant comparative analysis, and responses
were attributed to existing codes until the need for new codes was exhausted. A second rater
coded 15% of the student responses for evaluation criteria, with interrater agreement of 77%.
Appendix B contains a list of final codes, with a description and examples of each.
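The interrater reliability figure reported above is simple percent agreement, which can be computed as in this minimal sketch; the code labels used here are hypothetical examples, not the study's actual codes.

```python
# Minimal sketch of simple percent agreement between two raters.
# The code labels below are illustrative only.

def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters assigned the same code."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = ["accuracy", "reliability", "relevance", "accuracy"]
rater_b = ["accuracy", "slant", "relevance", "accuracy"]
print(round(percent_agreement(rater_a, rater_b), 2))  # 0.75
```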
Results
The first two research questions concerned the students’ ability to determine the
trustworthiness of web sites about controversial issues, and were addressed by the quantitative
analysis. The subsequent qualitative analysis presents results relevant to the questions regarding
specific criteria students considered while rating the sites’ trustworthiness.
Source Quality
The ANOVA revealed a significant main effect of source quality on source ratings, F(1, 72)
= 38.15, p < .001, with a mean trustworthiness rating of 3.79 for high-quality sources and 3.06
for low-quality sources. Overall, students distinguished between the more and less trustworthy
sources. Their ability to distinguish between them, however, was influenced by reading ability,
evidenced by a significant interaction between source quality and reading level, F(2, 72) =
4.45, p = .015 (See Figure 1). High-level readers rated the low-quality sources lower (M = 2.72)
than did the low-level readers (M = 3.19), and rated the high-quality sources higher (M = 3.91)
than did the low-level readers (M = 3.59). Thus, all subgroups differentiated correctly between
high- and low-quality sources, with stronger readers differentiating them more clearly.
--------------------------------------------------
Insert Figure 1 About Here
--------------------------------------------------
Source Stance
There was an unexpected main effect of source stance on source ratings, F(1, 72) =
17.03, p < .001, with a mean trustworthiness rating of 3.14 for the con sources and 3.71 for the
pro sources. Students rated the low-quality pro source (M = 3.56) a full point higher than the
low-quality con source (M = 2.56), but rated the high-quality pro source (M = 3.90) only slightly
higher than the high-quality con source (M = 3.79). One explanation for this may be that the
low-quality pro source was written by a Christian minister and medical doctor identified as “Dr.
Johnston,” whom students may have viewed as more trustworthy due to his vocation as minister,
physician, or both. Conversely, the author of the low-quality con article was identified only by
the name “Karoli,” a “card-carrying member of We the People.” While the minister-doctor’s
profession does not guarantee his expertise on the topic of guns in schools, students may have
viewed him more favorably due to his religious affiliations and/or his medical training, even
though Dr. Johnston’s formal education and credentials were not revealed on the page. The rate
of opinion change from pre- to post-task also supports the supposition that students found pro
sources to be generally more convincing than con sources, as more students’ opinions moved
toward a pro stance (n = 30) than moved toward a con stance (n = 18) after completing the task.
Prior Opinion
Students’ evaluations of trustworthiness were influenced by a confirmation bias,
indicated by a significant interaction between source stance and prior opinion, F(2, 72) = 4.977, p =
.009. Overall, students rated sources with which they agreed (M = 3.74) higher than sources with
which they disagreed (M = 3.24). As seen in Figure 2, however, this was not true across all
groups. Students who were against guns in schools prior to the task rated the pro gun sources
slightly higher than they rated the con gun sources.
--------------------------------------------------
Insert Figure 2 About Here
--------------------------------------------------
A closer look at the mean ratings for each site revealed that the low-quality pro-gun
source was rated higher on average than the low-quality con source by all students regardless of
opinion. The ratings of the low-quality pro-gun source were apparently inflated across all groups,
which may explain the unexpectedly high rating of the low-quality pro source by con stance
students. Taking this inflation into account, Figure 2 does show trends reflecting a stronger trust
of sources that align with student opinions and a stronger distrust of those that do not. Reading
level played a role, evidenced by a significant three-way interaction among source stance, prior opinion, and
reading level, F(4, 72) = 4.866, p = .002. Low readers’ mean ratings of sources with which they
agreed were .75 higher than ratings of sources with which they disagreed. The difference in
mean ratings was .63 for average readers and .02 for high readers. As student reading level
increased, so did students’ ability to separate prior opinion from the evaluation of a source’s
trustworthiness.
Evaluation Criteria
Through our qualitative analysis we examined the criteria students used to judge the
trustworthiness of the five sources. Of the categories of evaluation criteria that emerged, four
aligned with Coiro’s (2014) criteria for evaluating web sites: (a) author reliability, (b) author
slant, (c) content accuracy, and (d) content relevance. Three additional categories emerged: (e)
task goal—for criteria revealing the students’ ability or inability to focus on the task goal of site
evaluation; (f) other—for criteria which did not fit into any of the five previous categories; and
(g) not able to code—for comments that could not be understood well enough to code.
--------------------------------------------------
Insert Table 1 About Here
--------------------------------------------------
Table 1 presents the criteria students used to judge the websites, grouped by the six
categories of evaluation criteria and listed in order of the percentage of students referring to
criteria in that category. For example, 95% of students listed one of the content accuracy criteria
at least once; 73% listed one of the author reliability criteria at least once. Within each category,
the specific criteria are similarly listed in order of the percentage of students listing the criterion
at least once. Thus 84% of students recorded at least one criterion coded as evidence/facts/data.
Note this measure of frequency is based on whether a student included a particular criterion at
least once, not how many times the student listed the criterion. Another indicator of criteria use is
how many times a criterion was listed by students, which is captured by the average number of
times a criterion was cited per student (third column in Table 1). Thus, across all students,
content accuracy criteria were listed an average of 7.96 times per student; evidence/facts/data
was listed an average of 2.81 times per student. For this initial consideration of the criteria
students used, we focus on the percentage of students citing a criterion or category of criteria at
least once.
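The two frequency measures described above can be illustrated with a short sketch: for a given criterion, (a) the percentage of students listing it at least once and (b) the average number of times it was listed per student. The coded responses below are hypothetical, not the study's data.

```python
# Sketch of the two criterion-frequency measures. Each inner list
# holds one student's coded justifications (illustrative data only).
coded = [
    ["evidence", "evidence", "reasoning"],
    ["reasoning"],
    ["evidence", "author"],
    ["author", "evidence", "evidence"],
]

def criterion_stats(coded_responses, criterion):
    """Return (% of students citing criterion at least once,
    mean listings of criterion per student)."""
    n = len(coded_responses)
    at_least_once = sum(criterion in r for r in coded_responses) / n * 100
    mean_per_student = sum(r.count(criterion) for r in coded_responses) / n
    return at_least_once, mean_per_student

pct, mean = criterion_stats(coded, "evidence")
print(pct)   # 75.0  (3 of 4 students listed it at least once)
print(mean)  # 1.25  (5 listings across 4 students)
```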
Overall, content accuracy criteria were cited by the most students, with 95% of students
listing at least one content accuracy criterion. The top two criteria cited by students were content
accuracy criteria: evidence/facts/data—references to evidence, facts, proof, data, statistics,
research, or scientific studies (e.g., “gives facts about gun control and homicide rates,” or
“backed up with examples and facts”); and reasoning/commentary—references to reasoning,
explanation, commentary, logic, or examples (e.g., “gives good reasoning,” “gives reasons why,”
or “shows examples of why he’s right”). Students, regardless of reading level, based their
judgments of a site’s trustworthiness most often on whether the site provided factual evidence for
its opinion, and whether it provided adequate reasoning for that opinion. The third most common
criterion was logical/makes sense—statements reflecting the reader’s assessment of whether an
argument was logical, reasonable, sensible, sound, or convincing (e.g., “has unrealistic ideas
about what could happen,” and “I don’t know if all this will help violence go down”). Students
clearly attended to the content of arguments presented when evaluating the sources, a finding
inconsistent with prior research indicating that students tend to base evaluations of
trustworthiness on surface features rather than on content (Kiili et al., 2007; Kuiper et al., 2005;
Ladbrook & Probert, 2011; Mothe & Sahut, 2011). However, our design removed the
complexities of web navigation and narrowed the evaluation process by limiting students to the
examination of one page per source. Within this more controlled design students took the time to
read more deeply and to judge the validity of arguments presented.
The students’ strong emphasis on site content may be explained by research on student
engagement and comprehension. Argumentative tasks have been shown to prompt greater
engagement than simple fact-finding tasks (Nussbaum & Sinatra, 2003), and situational interest
may prompt deeper text comprehension (Schraw & Lehman, 2001). Furthermore, students asked
to construct arguments comprehend sources more deeply than when asked to construct narrative
or expository accounts of source texts (Wiley & Voss, 1999). The students in the present study
were not asked to compose arguments, but because their opinions were assessed prior to the task,
they may have viewed reading as a formative step in constructing a revised opinion on the issue.
The second-most cited category of trustworthiness criteria was content relevance, with
84% of students listing these criteria. Codes in this category conveyed a source’s usefulness with
regard to the information it provided or the way information was presented. The most-cited code
within content relevance was focused, applied when students judged the trustworthiness of a
source based on whether it was focused clearly on the issue. Because clear focus fulfills the
needs of the reader by making information easier to locate and understand, it was considered a
type of relevance. Comments about the text veering off topic were assigned this code, as in
“going too much out of topic” or “stays on topic throughout the reading.” The code was also
applied in situations in which students had difficulty making sense of an author’s stance, as in “I
can’t tell what they’re trying to tell the reader” or “not very sure what they’re trying to prove.”
Prevalence of this code among all levels of readers is consistent with prior research indicating
that information relevance (in this case, relevance characterized by usability) is a priority with
readers. However, because relevance should not be directly equated with trustworthiness (the two
are essentially separate constructs), this finding is somewhat problematic.
The next criteria categories dealt with authors. Author reliability (73%) concerns qualities
of the author that suggest whether he or she can be trusted to provide reliable information on a
topic, including whether information about the author is available and adequate, as well as
whether the information suggests reliability and trustworthiness. Author slant (57%) criteria
reflect whether the author seeks to persuade by presenting information in a biased way, or whether
the author’s purpose may influence his or her objectivity. The only author-related code in the top
five individual criteria was balance/bias (54%), indicating that many students considered
whether sites presented a one-sided or balanced view of the issue. Examples included comments
such as “doesn’t take other people’s opinion into consideration,” “isn’t trying to be persuasive,
just factual,” “it gave opinions on people who want guns and people who do not” and “keeps on
one subject only, rejecting guns.” Again, this supports the finding that students at least attempted
to comprehend the arguments presented on the sites and considered whether the authors were
biased or balanced. Moreover, the fact that this was the only author-related code to appear in the
top five criteria suggests that students were more strongly focused on argument content than on
identifying author reliability. The remaining author-related categories were generally less
frequent, indicating that many students judged trustworthiness without attending to the identity
and expertise of the site authors. This is despite the fact that the education and experience of the
high-quality site authors were clearly noted on the sites, whereas details on the education and
topic-relevant experience of the lower quality site authors were largely absent. This finding is in
line with prior research suggesting that students do not typically examine author expertise and
author stance in their determination of trustworthiness (Britt & Aglinskas, 2002; Walraven et al.,
2009).
The Influence of Reading Ability
Columns 4-6 of Table 1 present the average number of times each evaluation criterion
was cited by low, average, and high readers. These data suggest differing competence in
evaluating sources, especially for high readers, whose evaluation criteria were markedly different
from those of average and low readers.
Author-related Criteria. High readers focused more frequently on author reliability and
author slant (6.91 and 2.38 times per student) than did average readers (3.72 and 1.03 times per
student) and low readers (1.15 and 0.85 times per student). When high readers did refer to
authorship, their justifications were more articulate and specific than those of low readers, for
example: “obvious use of heavy persuasion,” “gives counter-arguments,” and “doesn’t give us
background info on the author.” In contrast, when low-level readers referenced authorship, their
comments tended to be less specific, for example, “it’s opinion,” or “just opinions” to indicate
bias, and “informal talk” or “joking around” to indicate language that called the author’s
expertise or character into question. Thus, high readers not only attended more to authorship, but
when they did, more capably articulated their justifications.
Prior Opinion. Frequencies of particular criteria used by students supported quantitative
results showing a significant interaction among prior opinion, reading level, and site stance. The
code agree or disagree indicated that students evaluated sources based on whether they agreed
with them or not, for example: “I think I can trust it because some of the things here sound like
something I would have said;” “I trust what he has to say because he’s right;” “I like this site
because it agrees with my opinions;” and “I think it has a good viewpoint of what I believe, told
from my point of view.” This code was applied an average of .55 times per low reader and 1.00
times per average reader. Conversely, the code was applied only .03 times per high reader,
appearing only once among this group; high readers were more capable of considering
trustworthiness independent of their opinions on the topic, whereas average readers struggled to
set aside their personal biases while evaluating. The bracketing off of biases described by Haria
and Midgette (2014) requires the reader to separate his or her personal stance on an issue from
the task goal of evaluating an argument and from the task goal of evaluating the source. The
ability to bracket off biases to consider an argument and its source objectively is a complex
cognitive skill that challenged both average and low readers. While one might expect average
readers to more capably bracket off biases than low readers, the higher incidence of this code
among average readers may be explained by the lower readers’ inability to clearly comprehend
the texts, in which case their understanding of the arguments presented was not sufficient to
recognize their agreement or disagreement with the sites.
Focus on Task Goal. A second trend observed in the frequencies of criteria parallels the
students’ ability to bracket off biases and reiterates the ability of high readers to retain an
objective stance. Consistent with research linking metacognition and evaluation, low and average
readers showed difficulty retaining focus on the task goal of source evaluation while reading
about the controversy. These students conflated justifying an evaluation of the source’s
trustworthiness with justification of a stance on the issue, as evidenced by high frequency of the
justifies stance code for low readers (1.80 times per student) and average readers (1.69 times per
student). This code was applied when students justified their source rating with an argument
supporting or refuting a stance on the issue, for example, “Guns don’t kill people, people kill
people,” “gun free zones are the safest for murderers,” and “teachers would have a hard time
protecting the class if it was too big.” At other times, students recorded information from the site
(records random fact from site code), similarly reflecting a failure to focus on the task goal of
evaluating trustworthiness. The task goal-(minus) code signified this loss of focus on source
evaluation and was common among both low readers (2.45 times per student) and average
readers (2.22 times per student), but rare among high readers (.27 times per student). Capable
readers retained focus on the task goal of source evaluation more successfully than did average
and low readers, likely a function of stronger metacognitive skill. Conversely, there was a low
incidence of the task goal+(plus) code across all students (applied by 2.47% of all students).
This code was applied when students made a clear distinction between evaluation of source
trustworthiness and consideration of the argument, an indication that metacognitive skill was at
work, for example: “I understand their point and I agree but for a report this won’t be a site I
would refer to,” or “This article is very trustworthy, but I don’t agree that guns should be in
schools.” The code was applied three times by two students, one a high reader and one an average
reader. Though it is likely other students made this distinction mentally, the fact that so many average
and low readers struggled to retain focus on the task goal of source evaluation, and so few high readers
articulated a detachment of their opinion on the issue from their evaluation of trustworthiness,
indicates this was a challenging task for most readers.
Overall, the comparison of low, average, and high readers supports prior research
indicating that argumentative text comprehension is highly complex and challenging. Readers
must attend to authors’ intent, divorce personal opinions from their evaluations of arguments,
question authors’ biases while setting their own aside, and critically analyze argument validity
(Haria & Midgette, 2014). In this study, many low and average readers seemed preoccupied
with text processing and evaluating arguments, presumably leaving little cognitive space to
consider factors such as author expertise and bias in evaluations of trustworthiness.
Conclusion
This study fills a gap in the literature on source evaluation by examining the criteria
eighth graders applied in evaluating sources presenting arguments on a controversial issue. It
examines the effects of prior opinion, reading ability, and source stance on source evaluation,
showing interesting patterns in the evaluation criteria applied by students. However, the study is
not without limitations. Although it reveals the criteria students applied in their evaluations of
source trustworthiness, the use of those criteria does not necessarily suggest their skillful
application. Criteria indicate only that a student is considering a particular rationale, not that he
or she is doing so proficiently. Even if students sometimes misapplied criteria, their use may still be
viewed as evidence of progress, from the absence of any criteria toward an attempt at thoughtful, if
clumsy, evaluation.
Students’ evaluations were influenced by their prior opinions on the issue, confirming the
existence of an assimilation bias. This suggests that methods for debiasing such as those
suggested by Lewandowsky et al. (2012) might be necessary to guard against such tendencies.
For example, fostering healthy skepticism about a source can reduce the influence of
misinformation. Weaker readers especially may benefit from instruction specifically targeted
toward debiasing. However, what specific debiasing techniques would be most effective with
young adolescents in the context of source evaluation is largely unknown. Methods for
effectively reducing assimilation bias in these circumstances, and especially among struggling
readers, is a topic for future research.
That this task was difficult not only for low ability readers, but also for average readers,
deserves closer examination. Cognitive load theory (Sweller, 2011), in which cognitive load is
composed of intrinsic, germane, and extraneous load (Paas, Renkl, & Sweller, 2004), may be
relevant. Intrinsic load depends on the cognitive work required to process the material to be
learned, and is therefore dependent on text difficulty or concept complexity, while germane load
is dependent on the skills and capacities of the individual, consisting of the prior knowledge and
skillsets applied during the learning process. Germane load might theoretically be reduced by
improving one’s metacognitive skill, thereby facilitating the processing of intrinsic load more
efficiently or effectively (Antonenko & Niederhauser, 2010). Both intrinsic and germane load
contribute to understanding. Extraneous load, however, refers to cognitive work that does not
contribute to an individual’s understanding and therefore detracts from learning. In cognitive
load theory, a goal of instructional design is to reduce extraneous load as much as possible to
allow more mental space for intrinsic and germane loads. Cognitive load can also be mediated by
teaching strategies for more efficient processing (thereby reducing germane load) or by scaling
back the complexity of the task to a level within the learner’s zone of proximal development
(Vygotsky, 1978), thereby reducing intrinsic load. If even average readers struggle to apply the
metacognitive skills required to retain focus on source evaluation while processing
argumentative text, it follows that instructional scaffolds could support them. But what are
those? In a task as complex as the one studied here, how might cognitive load be reduced, and to
what extent? Which scaffolds serve to reduce germane load rather than to increase extraneous
load? These are important questions for future research that touch not just on source evaluation,
but on instructional design and delivery for critical thinking in general.
Since struggling readers in this study also attended less frequently to evaluation criteria related
to authorship than did stronger readers, explicitly teaching students to attend more
specifically to authorship may be a promising practical solution to strengthen all students’
evaluation skills. To ignore authorship is to ignore an issue of central importance in evaluating
the trustworthiness of sources about controversial issues. Encouraging attention to it may be a
simple, efficient, and effective way to shift attention away from a reader’s personal opinion
while at the same time encouraging his or her attention to potential biases, conflicts of interest,
and issues of author expertise.
In sum, the present study points to the importance of instructional approaches that
scaffold evaluations of a source’s trustworthiness for average as well as struggling readers, that
encourage attention to authorship, that help readers to retain the metacognitive stance required to
separate personal opinion from determination of a source’s trustworthiness, and that support
readers in managing the simultaneous demands of multiple text processing, argumentative text
processing, and source evaluation.
References
Antonenko, P. D., & Niederhauser, D. S. (2010). The influence of leads on cognitive load and
learning in a hypertext environment. Computers in Human Behavior, 26(2), 140–150.
doi:10.1016/j.chb.2009.10.014
Bråten, I., Ferguson, L. E., Strømsø, H. I., & Anmarkrud, Ø. (2012). Justification beliefs and
multiple-documents comprehension. European Journal of Psychology of Education.
doi:10.1007/s10212-012-0145-2
Bråten, I., Strømsø, H. I., & Britt, M. A. (2009). Trust matters: Examining the role of source
evaluation in students’ construction of meaning within and across multiple texts. Reading
Research Quarterly, 44(1), 6–28. doi:10.1598/RRQ.44.1.1
Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source
information. Cognition and Instruction, 20(4), 485–522. doi:10.1207/S1532690XCI2004_2
Coiro, J. (2003). Reading comprehension on the Internet: Expanding our understanding of
reading comprehension to encompass new literacies. The Reading Teacher, 56(5), 458–464.
Coiro, J. (2007). Exploring changes to reading comprehension on the Internet: Paradoxes and
possibilities for diverse adolescent readers (Doctoral dissertation, University of
Connecticut). Retrieved from http://digitalcommons.uconn.edu/dissertations/AAI3270969/
Coiro, J. (2014, April 7). Teaching adolescents how to evaluate the quality of online information
[Web log post]. Edutopia. Retrieved from http://www.edutopia.org/blog/evaluating-quality-
of-online-info-julie-coiro
Colwell, J., Hunt-Barron, S., & Reinking, D. (2013). Obstacles to developing digital literacy on
the Internet in middle school science instruction. Journal of Literacy Research, 45(3), 295–
324. doi:10.1177/1086296X13493273
Common Core State Standards Initiative. (2010) Common core state standards for English
language arts & literacy in history/social studies, science, and technical subjects. Retrieved
from http://www.corestandards.org/wp-content/uploads/ELA_Standards.pdf
Common Core State Standards Initiative. (2010). Key shifts in English language arts. Retrieved
from http://www.corestandards.org/other-resources/key-shifts-in-english-language-arts/
Coombes, B. (2008). Generation Y: Are they really digital natives or more like digital refugees?
Voices, 7(1), 31–40.
Duke, N. K., Schmar-Dobler, E., & Zhang, S. (2006). Comprehension and technology. In M. C.
McKenna, L. D. Labbo, R. D. Kieffer, & D. Reinking (Eds.), International handbook of
literacy and technology: Volume two (pp. 317–326). Mahwah, NJ: Lawrence Erlbaum
Associates, Publishers.
Ford, C. L., & Yore, L. D. (2012). Toward convergence of critical thinking, metacognition, and
reflection: Illustration from natural and social sciences, teacher education, and classroom
practice. In A. Zohar & Y. J. Dori (Eds.), Metacognition in science education (Vol. 40, pp.
251–271). Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-2132-6
Franco, G. M., Muis, K. R., Kendeou, P., Ranellucci, J., Sampasivam, L., & Wang, X. (2012).
Examining the influences of epistemic beliefs and knowledge representations on cognitive
processing and conceptual change when learning physics. Learning and Instruction, 22(1),
62–77. doi:10.1016/j.learninstruc.2011.06.003
Goldman, S. R. (2011). Choosing and using multiple information sources: Some new findings
and emergent issues. Learning and Instruction, 21(2), 238–242.
doi:10.1016/j.learninstruc.2010.02.006
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young
adults’ evaluation of Web content. International Journal of Communication, 4, 468–494.
Haria, P. D., & Midgette, E. (2014). A genre-specific reading comprehension strategy to enhance
struggling fifth-grade readers’ ability to critically analyze argumentative text. Reading &
Writing Quarterly, 30(4), 297–327. doi:10.1080/10573569.2013.818908
Hartman, D. K., Morsink, P. M., & Zheng, J. (2010). From print to pixels: The evolution of
cognitive conceptions of reading comprehension. In E. A. Baker (Ed.), The new literacies:
Multiple perspectives on research and practice (pp. 131–164). New York: Guilford Press.
Head, A. J., & Eisenberg, M. B. (2010). Truth be told: How college students evaluate and use
information in the digital age (Project Information Literacy Progress Report). The
Information School, University of Washington. Retrieved from
http://projectinfolit.org/pdfs/PIL_Fall2010_Survey_FullReport1.pdf
Hsieh, Y.-H., & Tsai, C.-C. (2013). Students’ scientific epistemic beliefs, online evaluative
standards, and online searching strategies for science information: The moderating role of
cognitive load experience. Journal of Science Education and Technology, 23(3), 299–308.
doi:10.1007/s10956-013-9464-6
Kiili, C., Laurinen, L., & Marttunen, M. (2007). How students evaluate credibility and relevance
of information on the Internet? IADIS International Conference on Cognition and
Exploratory Learning in the Digital Age (CELDA 2007), 155–162.
Kiili, C., Laurinen, L., & Marttunen, M. (2009). Skillful reader is metacognitively competent. In
L. T. W. Hin & R. Subramaniam (Eds.), Handbook of research on new media literacy at the
k-12 level (pp. 654–668). Hershey, NY: Information Science Reference.
Kim, K. S., & Sin, S. C. J. (2011). Selecting quality sources: Bridging the gap between the
perception and use of information sources. Journal of Information Science, 37(2), 178–188.
doi:10.1177/0165551511400958
Kobayashi, K. (2010a). Critical Integration of Multiple Texts. Japanese Journal of Educational
Psychology, 58(4), 503–516.
Kobayashi, K. (2010b). Strategic use of multiple texts for the evaluation of arguments. Reading
Psychology, 31(2), 121–149. doi:10.1080/02702710902754192
Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science,
9(5), 178–181. doi:10.1111/1467-8721.00088
Kuhn, D., & Dean, D. (2004). Metacognition: A bridge between cognitive psychology and
educational practice. Theory Into Practice, 43(4), 268–274.
Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12
education: Strategies for supporting students in searching and processing information.
Review of Educational Research, 75(3), 285–328.
Ladbrook, J., & Probert, E. (2011). Information skills and critical literacy: Where are our
digikids at with online searching and are their teachers helping? Australasian Journal of
Educational Technology, 27(1), 105–121.
Larson, M., Britt, M. A., & Larson, A. A. (2004). Disfluencies in comprehending argumentative
texts. Reading Psychology, 25(3), 205–224. doi:10.1080/02702710490489908
Leu, D. J., Kinzer, C. K., Coiro, J. L., & Cammack, D. W. (2004). Toward a theory of new
literacies emerging from the Internet and other ICT. In R. B. Ruddell & N. Unrau (Eds.),
Theoretical models and processes of reading, fifth edition. (pp. 1568-1611). Newark, DE:
International Reading Association.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012).
Misinformation and its correction: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13(3), 106–131.
doi:10.1177/1529100612451018
Lombardi, D., Sinatra, G. M., & Nussbaum, E. M. (2013). Plausibility reappraisals and shifts in
middle school students’ climate change conceptions. Learning and Instruction, 27, 50–62.
doi:10.1016/j.learninstruc.2013.03.001
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The
effects of prior theories on subsequently considered evidence. Journal of Personality and
Social Psychology, 37(11), 2098–2109. doi:10.1037//0022-3514.37.11.2098
Magno, C. (2010). The role of metacognitive skills in developing critical thinking.
Metacognition and Learning, 5(2), 137–156. doi:10.1007/s11409-010-9054-4
Mason, L., Boldrin, A., & Ariasi, N. (2009). Epistemic metacognition in context: Evaluating and
learning online information. Metacognition and Learning, 5(1), 67–90. doi:10.1007/s11409-
009-9048-2
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online
environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220.
doi:10.1016/j.pragma.2013.07.012
Mothe, J., & Sahut, G. (2011). Is a relevant piece of information a valid one? Teaching critical
evaluation of online information. In E. Efthimiadis, J. M. Fernández-Luna, J. F. Huete, & A.
MacFarlane (Eds.), Teaching and learning in information retrieval (pp. 153–168).
Heidelberg: Springer.
Muis, K. R., & Franco, G. M. (2009). Epistemic beliefs: Setting the standards for self-regulated
learning. Contemporary Educational Psychology, 34(4), 306–318.
doi:10.1016/j.cedpsych.2009.06.005
Northwest Evaluation Association. (2011). Comparative Data to Inform Instructional Decisions.
Retrieved from https://www.nwea.org/content/uploads/2014/07/NWEA-Comparative-Data-
One-Sheet.pdf
Norton, J. (2017, Feb. 15). Common Core revisions: What are states really changing? [Web log
post]. Edtechtimes. Retrieved from https://edtechtimes.com/2017/02/15/common-core-
revisions-what-are-states-really-changing/
Nussbaum, E. M., & Sinatra, G. M. (2003). Argument and conceptual engagement.
Contemporary Educational Psychology, 28, 384–395. doi:10.1016/S0361-476X(02)00038-
3
Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory: Instructional implications of the
interaction between information structures and cognitive architecture. Instructional Science,
32, 1–8.
Rouet, J.-F. (2006). The skills of document use: From text comprehension to Web-based
learning. Mahwah, NJ: Erlbaum.
Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of
surface and deep cues on primary and secondary school students’ assessment of relevance
in Web menus. Learning and Instruction, 21(2), 205–219.
doi:10.1016/j.learninstruc.2010.02.007
Schmar-Dobler, E. (2003). The Internet: The link between literacy and technology. Journal of
Adolescent & Adult Literacy, 47(1), 80-85.
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension.
Journal of Educational Psychology, 82(3), 498–504. doi:10.1037//0022-0663.82.3.498
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions
for future research. Educational Psychology Review, 13(1), 23–52.
Spiro, R. J., Feltovich, P. J., & Coulson, R. L. (1996). Two epistemic world views: Prefigurative
schemas and learning in complex domains. Applied Cognitive Psychology, 10(Special Issue:
Reasoning Processes), S51–S61.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility,
constructivism, and hypertext: Random access instruction for advanced knowledge
acquisition in ill-structured domains. In T. M. Duffy & D. H. Jonassen (Eds.),
Constructivism and the technology of instruction: A conversation (pp. 57–75). Hillsdale,
NJ: Lawrence Erlbaum Associates.
Strømsø, H. I., Bråten, I., & Britt, M. A. (2010). Reading multiple texts about climate change:
The relationship between memory for sources and text comprehension. Learning and
Instruction, 20(3), 192–204. doi:10.1016/j.learninstruc.2009.02.001
Strømsø, H. I., Bråten, I., & Samuelstuen, M. S. (2008). Dimensions of topic-specific
epistemological beliefs as predictors of multiple text understanding. Learning and
Instruction, 18(6), 513–527. doi:10.1016/j.learninstruc.2007.11.001
Sutherland-Smith, W. (2002). Weaving the literacy Web : Changes in reading from page to
screen. The Reading Teacher, 55(7), 662–669.
Sweller, J. (2011). Cognitive load theory. In J. P. Mestre & B. F. Ross (Eds.) The psychology of
learning and motivation (pp. 37–76). Elsevier Inc. doi:10.1016/B978-0-12-387691-
1.00002-8
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50(3), 755–769.
Tsai, C.-C. (2004). Beyond cognitive and metacognitive tools: the use of the Internet as an
“epistemological” tool for instruction. British Journal of Educational Technology, 35(5),
525–536. doi:10.1111/j.0007-1013.2004.00411.x
Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2014). Dealing with conflicting
information from multiple nonlinear texts: Effects of prior attitudes. Computers in Human
Behavior, 32, 101–111. doi:10.1016/j.chb.2013.11.021
Vygotsky, L. S. (1978). Interaction between learning and development. In M. Gauvain & M.
Cole (Eds.), Readings on the development of children (2nd ed., pp. 29–36). New York:
W. H. Freeman and Company. Retrieved from http://www.psy.cmu.edu/~siegler/vygotsky78.pdf
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students evaluate information and
sources when searching the World Wide Web for information. Computers & Education,
52(1), 234–246. doi:10.1016/j.compedu.2008.08.003
Wiley, J., & Bailey, J. (2006). Effects of collaboration and argumentation on learning from Web
pages. In A. M. O’Donnell, C. E. Hmelo-Silver, & G. Erkens (Eds.), Collaborative
learning, reasoning, and technology (pp. 297–321). Mahwah, NJ: Lawrence Erlbaum
Associates.
Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that
promote understanding and not just memory for text. Journal of Educational Psychology,
91(2), 301–311. doi:10.1037//0022-0663.91.2.301
Wineburg, S. S. (1991). Historical problem solving: A study of the cognitive processes used in
the evaluation of documentary and pictorial evidence. Journal of Educational Psychology,
83(1), 73–87.
Winkielman, P., Huber, D. E., Kavanagh, L., & Schwarz, N. (2012). Fluency of consistency:
When thoughts fit nicely and flow smoothly. In B. Gawronski & F. Strack (Eds.), Cognitive
consistency: A fundamental principle in social cognition (pp. 89–111). New York: Guilford
Press.
Wopereis, I. G. J. H., & van Merriënboer, J. J. G. (2011). Evaluating text-based information on
the World Wide Web. Learning and Instruction, 21(2), 232–237.
doi:10.1016/j.learninstruc.2010.02.003
Zawilinski, L., Carter, A., O’Byrne, I., McVerry, G., Nierlich, T., & Leu, D. J. (2007,
November). Toward a taxonomy of online reading comprehension strategies. Paper
presented at the National Reading Conference, Austin, TX. Retrieved from
https://scholar.google.com/citations?view_op=view_citation&hl=en&user=QMTW9n4AA
AAJ&citation_for_view=QMTW9n4AAAAJ:W7OEmFMy1HYC
Zhang, M., & Quintana, C. (2012). Scaffolding strategies for supporting middle school students’
online inquiry processes. Computers & Education, 58(1), 181–196.
doi:10.1016/j.compedu.2011.07.016
Table 1

Evaluation Criteria Frequencies

| Category/Code | Students Referring to Criteria (%) | All Students (n=81) | Low Readers (n=20) | Average Readers (n=32) | High Readers (n=29) |
|---|---|---|---|---|---|
| Content Accuracy | 95.06 | 7.96 | 7.10 | 7.36 | 9 |
| evidence/facts/data | 83.95 | 2.81 | 2.30 | 2.53 | 3 |
| reasoning/commentary | 69.14 | 2.58 | 2.60 | 2.34 | 2 |
| logical/makes sense | 58.02 | 1.44 | 1.25 | 1.66 | 1 |
| quotes other sources | 30.86 | 0.48 | 0.40 | 0.22 | 0 |
| corroborating source | 17.28 | 0.23 | 0.15 | 0.16 | 0 |
| pictures/graphs | 12.35 | 0.14 | 0.05 | 0.13 | 0 |
| date/age | 11.11 | 0.21 | 0.30 | 0.19 | 0 |
| prior knowledge of topic | 6.17 | 0.07 | 0.05 | 0.13 | 0 |
| Content Relevance | 83.95 | 4.97 | 5.45 | 5.12 | 4 |
| focused | 43.21 | 0.80 | 0.80 | 0.94 | 0 |
| graphs/charts/pictures | 41.98 | 0.69 | 0.50 | 0.81 | 0 |
| length/amount/detail of info | 37.04 | 0.70 | 1.20 | 0.47 | 0 |
| info relevance | 35.80 | 0.86 | 1.00 | 1.03 | 0 |
| organization/rhetoric | 32.10 | 0.58 | 0.85 | 0.53 | 0 |
| solutions | 18.52 | 0.31 | 0.15 | 0.53 | 0 |
| language access | 16.05 | 0.28 | 0.45 | 0.09 | 0 |
| ads/links/sidebar content | 12.35 | 0.27 | 0.00 | 0.28 | 0 |
| links to additional info | 11.11 | 0.19 | 0.15 | 0.13 | 0 |
| interesting | 9.88 | 0.14 | 0.35 | 0.03 | 0 |
| interactive | 4.94 | 0.15 | 0.00 | 0.28 | 0 |
| Author Reliability | 72.84 | 4.22 | 1.15 | 3.72 | 6 |
| language | 30.86 | 0.52 | 0.35 | 0.72 | 0 |
| refers to specific source/type | 29.63 | 0.41 | 0.30 | 0.28 | 0 |
| author character | 28.40 | 0.47 | 0.35 | 0.50 | 0 |
| source/author info available | 24.69 | 0.88 | 0.05 | 0.41 | 1 |
| published/vetted | 22.22 | 0.33 | 0.00 | 0.25 | 0 |
| author expertise | 20.99 | 0.30 | 0.00 | 0.34 | 0 |
| number of authors/sources | 14.81 | 0.26 | 0.00 | 0.09 | 0 |
| familiar with site | 13.58 | 0.17 | 0.00 | 0.16 | 0 |
| appearance | 9.88 | 0.23 | 0.00 | 0.09 | 0 |
| title | 9.88 | 0.10 | 0.10 | 0.16 | 0 |
| ad quality | 8.64 | 0.12 | 0.00 | 0.19 | 0 |
| ad quantity | 8.64 | 0.15 | 0.00 | 0.13 | 0 |
| popularity | 7.41 | 0.17 | 0.00 | 0.25 | 0 |
| pictures | 6.17 | 0.07 | 0.00 | 0.06 | 0 |
| writer has own blog or site | 3.70 | 0.04 | 0.00 | 0.09 | 0 |
| Author Slant | 56.79 | 1.47 | 0.85 | 1.03 | 2 |
| balance/bias | 54.32 | 1.28 | 0.80 | 0.78 | 2 |
| author purpose | 7.41 | 0.19 | 0.05 | 0.25 | 0 |
| Other | 46.91 | 0.87 | 0.95 | 1.28 | 0 |
| agree/disagree | 27.16 | 0.54 | 0.55 | 1.00 | 0 |
| general statement of trust | 27.16 | 0.33 | 0.40 | 0.28 | 0 |
| Task Goal – | 33.33 | 1.58 | 2.45 | 2.22 | 0 |
| justifies stance | 27.16 | 1.15 | 1.80 | 1.69 | 0 |
| records random fact from site | 16.05 | 0.43 | 0.65 | 0.53 | 0 |
| Task Goal + (explicitly focused on goal) | 2.47 | 0.04 | 0.00 | 0.06 | 0 |
| NC (not able to code) | 33.33 | 0.53 | 0.60 | 0.53 | 0 |
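The two statistics reported in Table 1 (percentage of students referring to a criterion, and average times cited per student) can be sketched from coded protocol data as below. The student identifiers and coded responses are invented for illustration; only the code names follow the study's scheme.

```python
from collections import Counter

# Hypothetical coded data: each student's list of evaluation codes,
# as produced by a coding scheme like the one in Appendix E.
student_codes = {
    "s01": ["evidence/facts/data", "evidence/facts/data", "logical/makes sense"],
    "s02": ["reasoning/commentary"],
    "s03": ["evidence/facts/data", "author expertise"],
}

def criterion_stats(student_codes, code):
    """Return (% of students citing the code at least once,
    average times the code is cited per student)."""
    n = len(student_codes)
    counts = [Counter(codes)[code] for codes in student_codes.values()]
    pct_referring = 100 * sum(c > 0 for c in counts) / n
    avg_per_student = sum(counts) / n
    return pct_referring, avg_per_student

pct, avg = criterion_stats(student_codes, "evidence/facts/data")
# 2 of 3 students cite the code at least once; 3 citations total across 3 students
```

Each table row is then one call per code, with the student dictionary restricted to the relevant reading-level group.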
Figure 1. Average site ratings of high and low quality sites by student reading level. [Line graph: x-axis, Site Quality (low, high); y-axis, average trustworthiness rating (2.00–4.50); series for low, average, and high readers.]
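The group means behind a plot like Figure 1 reduce to a simple aggregation: the mean 1–5 trustworthiness rating within each reading-level group, separately for low- and high-quality sites. The ratings below are invented for illustration, not the study's data.

```python
# (group, site quality, trustworthiness rating on the study's 1-5 scale)
ratings = [
    ("low reader", "low quality", 3), ("low reader", "high quality", 3),
    ("high reader", "low quality", 2), ("high reader", "high quality", 5),
]

def mean_rating(group, quality):
    """Mean rating for one group on sites of one quality level."""
    vals = [r for g, q, r in ratings if g == group and q == quality]
    return sum(vals) / len(vals)

# Differentiation = gap between ratings of high- and low-quality sites.
low_gap = mean_rating("low reader", "high quality") - mean_rating("low reader", "low quality")
high_gap = mean_rating("high reader", "high quality") - mean_rating("high reader", "low quality")
# In this toy data the high readers differentiate (gap 3.0); the low readers do not (gap 0.0).
```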
Figure 2. Average site ratings of pro and con stance sites by students’ prior opinion (for, against, and neutral to allowing school personnel to carry concealed weapons in schools). [Line graph: x-axis, Site Stance (Con Guns, Pro Guns); y-axis, average trustworthiness rating (2.00–4.50); series for students For, Neutral, and Against.]
Appendix A

Table A1

Sources Included in Evaluation Task

| Title of Article | URL Address of Static Site in PDF Format | Source Description | Source Stance |
|---|---|---|---|
| Viewpoint: Arming Teachers Isn’t the Answer | http://goo.gl/SEPeiM | TIME Magazine op-ed written by a professor at the University of Chicago’s School of Social Service Administration and senior research fellow with the nonpartisan Coalition for Evidence-Based Policy, and an education policy consultant to The American Federation of Teachers, Teach for America, and Senator Tom Harkin. | Con (Against Guns in Schools) |
| Opposing View: Guns in Schools Can Save Lives | http://goo.gl/OjP3Or | USA Today op-ed written by a former chief economist for the U.S. Sentencing Commission and author of More Guns, Less Crime. | Pro (For Guns in Schools) |
| Arming Teachers Is Not an Answer | http://goo.gl/Dn4nDS | Crooks and Liars blog written by an author identified as “Karoli,” a “card-carrying member of We the People.” No other author information available on the page. | Con (Against Guns in Schools) |
| Stop School Shootings by Letting Teachers and Principals Carry Guns | http://goo.gl/CdXeXI | Right Remedy blog written by Dr. Patrick Johnston, a minister and medical doctor. No other author information available on the page. | Pro (For Guns in Schools) |
| Gun Rhetoric vs. Gun Facts | http://goo.gl/NwYpKt | FactCheck.org site “offering facts and context as the national gun control debate intensifies.” | Neutral |
Appendix B
Task Instructions Day 1
Name_________________________
In December of 2012, in Newtown, Connecticut, an armed gunman forced his way into an elementary school. Before he was finally
stopped, he took the lives of 20 children and 6 adults. Since then, legislators in several states have introduced bills to allow school
employees to carry guns on school property. Those legislators believe that if school employees were armed, they would be better able
to protect children in situations like Newtown. Those who disagree believe that allowing school employees to carry guns would not
make those schools safer.
Having read the introduction, what is your opinion about the following statement?
Allowing school employees, including teachers, to carry concealed weapons in school would make my school a safer place.
Circle one of the following:
I strongly agree.
I tend to agree.
I’m not sure.
I tend to disagree.
I strongly disagree.
Appendix C
Task Instructions Day 2
Name_________________________
DIRECTIONS:
1. Open Internet Explorer and go to the home page of the media center. Click on the RESEARCH TASK link in the yellow
sidebar.
The Task
Imagine that your state has recently passed a law allowing adults who have a permit to carry concealed weapons in schools—including
school employees like principals and teachers. However, the law also states that individual districts have the right to outlaw concealed
weapons in their schools if they wish to do so.
In light of recent school shootings in which numerous students and teachers lost their lives, your school board is discussing whether or
not to allow adult employees to carry concealed weapons in your school. You are doing research to learn more about this issue. These
are some web sites you have found:
2. Read and examine each of the sites. You may visit them in any order, and you may return to previous sites whenever you wish.
As you read and examine the sites, complete the following chart in as much detail as you can. You will have 50 minutes to complete
the task, allowing approximately 10 minutes per site. You will be given a signal every 15 minutes to help you keep track of time.
The chart has three columns, with one row for each of the five sites:

1. Name of the site you are rating. This is in the banner of the site; an abbreviation is fine! (VERY briefly--from the banner!)

2. Rate the trustworthiness of the site by circling one: 1 -- not at all trustworthy; 2 -- questionably trustworthy; 3 -- I can’t tell or determine trustworthiness; 4 -- somewhat trustworthy; 5 -- very trustworthy. NOTE: The definition of trustworthiness is “able to be relied on as honest and truthful.”

3. In a bulleted list, write down ALL THE REASONS YOU HAVE for the trustworthiness rating you chose in column 2. BE SPECIFIC. For example, DO NOT write generalities, as in “I like it better” or “it’s a worse site.” Be specific about your reasons and write ALL your reasons down! YOU MAY CONTINUE ON THE BACK IF NEEDED.
Appendix E

List of Codes

| Code Category | Code Name | Code Description | Example |
|---|---|---|---|
| Author Reliability | ad quality | quality of ads or promotional links reflects on reliability of source | “ads on side of page are encouragement for positive ideas” |
| Author Reliability | ad quantity | quantity of ads or promotional links reflects on source reliability | “the site has too many ads” |
| Author Reliability | appearance | general appearance (how site looks) reflects on source reliability | “looks fake” |
| Author Reliability | author character | author’s perceived character reflects on reliability of source | “they are trying to protect children and teachers”; “bashed police officers” |
| Author Reliability | author expertise | expertise of author reflects on reliability of source | “written by a professor at the University of Chicago” |
| Author Reliability | familiar with site | extent to which site is recognized or well-known reflects on reliability of source | “magazine I’ve heard of before” |
| Author Reliability | language | extent to which language on the site is perceived as appropriate (indicating an educated or self-controlled writer) reflects on reliability of source | “uses rude language” |
| Author Reliability | number of authors/sources | number of authors or information sources reflects on its reliability | “many different sources are given” |
| Author Reliability | pictures | pictures reflect on the reliability of the source | “pictures are believable” |
| Author Reliability | popularity | popularity of site reflects on reliability of source | “popular magazine” |
| Author Reliability | published/vetted | whether source has been published or vetted reflects on reliability | “information was checked” |
| Author Reliability | refers to specific source/type | source contains reference(s) to specific source(s) or type(s) of information that reflect on its reliability | “quotes the president”; “gets info from people who studied the problem” |
| Author Reliability | source/author info available | extent to which source(s) and author(s) can be identified or are described reflects on reliability of the source | “has the authors listed”; “tells the sources” |
| Author Reliability | title | title of source reflects on its reliability | “suspicious title” |
| Author Reliability | writer has own blog or site | recognition that author has his own blog or site reflects on reliability of source | “has his own blog so must know something” |
| Author Stance | author purpose | purpose of site considered in evaluating reliability of the source/author | “might just be trying to sell you his book” |
| Author Stance | balance/bias | extent to which source/author avoids bias or seeks balance reflects on reliability | “tells both sides of the issue”; “very biased to guns” |
| Content Accuracy | corroborating sources | extent to which source provides references to corroborating sources reflects on its accuracy | “supports opinion with other sides backing him up” |
| Content Accuracy | date/age | age/date of site (or information on site) reflects on its accuracy | “newly published” |
| Content Accuracy | evidence/facts/data | extent to which source contains references to evidence, facts, proof, data, statistics, research, or scientific studies reflects on its accuracy | “gives facts and statistics” |
| Content Accuracy | logical/makes sense | extent to which an argument presented by the source is logical/reasonable (sensible, sound, or convincing) reflects on its accuracy | “argument makes sense”; “seems like a good argument” |
| Content Accuracy | pictures/graphs | extent to which pictures, graphs, or charts provide support/corroborating evidence reflects on its accuracy | “graphs prove he’s probably right” |
| Content Accuracy | prior knowledge of topic | extent to which informative content of site matches reader’s prior knowledge reflects on its accuracy | “the facts are true from what I have heard” |
| Content Accuracy | quotes other sources | extent to which there are direct quotes in informational text reflects on information accuracy | “has quotes from lots of people” |
| Content Accuracy | reasoning/commentary | extent to which source provides adequate reasoning, explanation, commentary, logic, or examples reflects on its accuracy | “explains how his solutions are going to help” |
| Content Relevance | ads/links/sidebar content | evaluation based on extent to which ads, links, or sidebar content distract the reader, making site less usable | “ads are distracting”; “ads are bigger than the article--annoying” |
| Content Relevance | focused | evaluation based on extent to which informational text is perceived to remain on-topic or focused for improved usability and understanding | “focuses on the problem at hand”; “doesn’t exactly state a claim” |
| Content Relevance | graphs/charts/pics | evaluation based on extent to which pictures, graphs, and charts serve the reader as tools for understanding, making site more usable | “charts to help show shooting rates”; “needs graphs to help explain” |
| Content Relevance | info relevance | evaluation based on extent to which information or details are perceived as relevant to the task; whether the information provided is what the reader is looking for | “there’s a lot of info about arming teachers”; “good information” |
| Content Relevance | interaction | evaluation based on extent to which the reader can interact with the site by responding, contributing to, or joining the cause (including links to social networks) | “links to Facebook and Twitter” |
| Content Relevance | interesting | evaluation based on extent to which source content interests or engages the reader | “kept my interest”; “pretty good hook” |
| Content Relevance | language access | extent to which language on the site is accessible to the reader, making site more usable | “uses big words I don’t understand” |
| Content Relevance | length/amount/detail of info | evaluation based on extent to which length, amount, or detail of information meets the needs of the reader | “gives a lot of detail” |
| Content Relevance | links to additional info | evaluation based on extent to which site provides links to additional information to learn more or meet information needs of the reader | “has links to other sites to learn more about it” |
| Content Relevance | organization/rhetoric | extent to which organization of text or text features serves the reader’s comprehension or informational needs, making site more usable | “tabs help you find what you need”; “gives a nice summary in the introduction” |
| Content Relevance | solutions | evaluation based on extent to which site offers solutions to the problem of school shootings (solutions are a reader need) | “offers ways to solve the problem” |
| Task Goal + (plus) | explicitly focused on task goal | reader specifically states there is a difference between his/her personal stance and his/her rating, showing a bracketing off of personal opinion to evaluate the site objectively | “I agree with the writer, but I don’t think I trust this site” |
| Task Goal – (minus) = does not attend to task goal while evaluating | justifies stance | reader provides justification for a stance on the issue rather than justification for evaluation of the source | “the only way to stop a bad guy with a gun is a good guy with a gun” |
| Task Goal – (minus) = does not attend to task goal while evaluating | records random fact from site | reader records a random fact or quote from the site | “police are recruiting students just in case the teacher fails at his job” |
| Other | agree/disagree | extent to which reader agrees or disagrees with source stance used to determine trustworthiness | “I agree that guns should be allowed in school”; “this sounds like what I would have said” |
| Other | general statement of trust | reader makes a general statement of trustworthiness or information reliability without providing clear justification for it | “very trusting” |
| Not Able to Code | not able to code | intended meaning not clear enough to code | “tell some trustworthy in it” |
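For analysis, a codebook like the one above might be represented as a mapping from code names to categories, so that coded responses roll up to the category-level tallies reported in Table 1. The sketch below uses code names from Appendix E; the sample responses are invented for illustration.

```python
from collections import Counter

# A small slice of the Appendix E codebook: code name -> code category.
codebook = {
    "evidence/facts/data": "Content Accuracy",
    "logical/makes sense": "Content Accuracy",
    "info relevance": "Content Relevance",
    "author expertise": "Author Reliability",
    "balance/bias": "Author Stance",
    "agree/disagree": "Other",
}

def category_counts(codes):
    """Roll a student's coded responses up to the category level."""
    return Counter(codebook[c] for c in codes)

counts = category_counts(
    ["evidence/facts/data", "balance/bias", "evidence/facts/data"]
)
# counts["Content Accuracy"] == 2, counts["Author Stance"] == 1
```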