AERA 2011 COIL Poster

Presented at 12:25 p.m. today in the New Orleans Marriott, Third Level, Mardi Gras Salon FGH.

Transcript

Publishing as the Province of a Participatory Culture:
    Evaluating Online Information
    J. Greg McVerry, University of Connecticut
    & W. Ian O’Byrne, University of Connecticut
Research Question: How reliable and valid are scores on measures of critical evaluation of online information in a multiple-choice format?
    PHASE ONE
Literature Review: Items were based on a taxonomy of critical evaluation (Kiili et al., 2008) and on measures developed by Brem, Russell, & Weems (2001) and Leu et al. (2010), and were constructed to match the hypothesized constructs of credibility and relevance (Harris, 1997; Rieh & Belkin, 1998; Coiro, 2003; Meola, 2004; Judd, Farrow & Tims, 2006; Fabos, 2008; Kiili, Laurinen & Marttunen, 2008; Strømsø & Bråten, 2010).
Content Validation: A CVI (Rubio, Berg-Weger, Tebb, Lee, & Rauch, 2003) was computed with a retention threshold of 2.70. The subconstruct of purpose was moved from credibility to relevance; the subconstruct of usability within credibility was dropped and replaced with items adding a subconstruct of bias.
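The poster does not show how the CVI was computed. As a hedged illustration, the minimal Python sketch below assumes the index was taken as the mean expert representativeness rating per item, an assumption consistent with the 2.70 threshold (Rubio et al.'s proportion-based CVI would range from 0 to 1). All ratings are placeholder values, not the study's actual panel data.

    import numpy as np

    # Hypothetical expert ratings: rows = experts, columns = items,
    # on an assumed 1-4 representativeness scale (placeholder data).
    ratings = np.array([
        [3, 4, 2, 4],
        [4, 4, 3, 3],
        [3, 3, 2, 4],
    ])

    THRESHOLD = 2.70  # retention threshold reported in Phase One

    item_means = ratings.mean(axis=0)  # mean expert rating per item
    for i, m in enumerate(item_means, start=1):
        print(f"Item {i}: mean rating = {m:.2f}, retained = {m > THRESHOLD}")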
    PHASE TWO
Revisions to the Assessment: Given the low factor loadings and inadequate reliability obtained in Phase One, revisions were made to reduce the number of scales and subscales, make distractors easier to recognize, and simplify the testing format. This included measuring only the construct of credibility (author, bias, source, publisher) and dropping the construct of relevance.
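The poster does not name the statistic behind "inadequate reliability." Assuming an internal-consistency index was used (an assumption, not stated on the poster), a minimal Python sketch of Cronbach's alpha follows; for dichotomous items this reduces to KR-20. The 0/1 item scores are placeholders.

    import numpy as np

    def cronbach_alpha(scores):
        """scores: respondents x items matrix; returns Cronbach's alpha."""
        k = scores.shape[1]
        item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    demo = rng.integers(0, 2, size=(197, 10))  # placeholder 0/1 item scores
    print(f"alpha = {cronbach_alpha(demo):.2f}")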
     
Content Validation: The CVI (Rubio et al., 2003) needed to exceed 2.75, and the CVR for each item needed to exceed 0.70 (McKenzie et al., 1999). There was some general confusion among experts about the hypothesized constructs of source (the source of information used to back up a claim) and author (the creator of the content on the webpage). Experts also attributed the author’s point of view to the measurement of overall bias rather than to the specific author.
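Assuming the CVR here follows Lawshe's formulation, CVR = (n_e − N/2) / (N/2), where n_e is the number of experts rating an item essential and N is the panel size, a minimal sketch with hypothetical panel counts:

    def cvr(n_essential, n_experts):
        """Lawshe-style content validity ratio for one item."""
        return (n_essential - n_experts / 2) / (n_experts / 2)

    # Hypothetical panel of ten experts, nine rating the item essential.
    value = cvr(9, 10)
    print(f"CVR = {value:.2f}, retained = {value > 0.70}")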
Two-Phase Development
Cognitive Labs/Think-Alouds: Eight 7th-grade students from a needs-improvement school in the Northeast participated in semi-structured think-alouds (Afflerbach, 2002) during cognitive labs (Ericsson & Simon, 1999).
Students suggested that the wording of some items needed revision. No student answered question eight correctly, so the item will be revised. Item ten was also notable: it asked students to decide which publisher created the most credible medical information. Three students indicated that they did not pick the Mayo Clinic because, “It said Mayo so it is about Mayonnaise.”
Exploratory Factor Analysis: Two factors were extracted using principal axis factoring (PAF) with oblimin rotation on pretest/posttest data from 7th graders in a needs-improvement school in the Northeast (N = 197).
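The analysis software is not stated on the poster. One hedged sketch of a PAF-plus-oblimin extraction uses the third-party Python package factor_analyzer (its method="principal" option approximates principal axis factoring), applied here to placeholder data:

    import numpy as np
    from factor_analyzer import FactorAnalyzer  # third-party package

    rng = np.random.default_rng(1)
    # Placeholder 0/1 item responses standing in for the real dataset.
    items = rng.integers(0, 2, size=(197, 12)).astype(float)

    # Two-factor extraction with an oblique (oblimin) rotation.
    fa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
    fa.fit(items)
    print(fa.loadings_)  # pattern loadings for the two extracted factors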
    Discussion:
• Examination of items in Phase One suggests that what may have ultimately determined the odds ratios was the level of specificity of reading required of students to answer the items.
• The information presented in the items asks students to judge the credibility and relevance of information that is increasingly difficult to read.
• Items that require more than simple skimming and scanning, or that turn on subtle differences in textual analysis and inferential reasoning, were more difficult.
• Mixing both static images and hyperlinks maintained the ecological validity of the instrument in Phase Two while reducing item-format complexity.
• When students lacked prior knowledge, they relied on the content of the pages to judge the credibility of websites.
• Further investigation into the interaction between item format and performance is needed.
• A greater range of scores is needed to provide enough variance for a valid and reliable measure.
    Future Directions:
• Continued exploration of the constructs and sub-constructs involved as students critically evaluate and examine online information.
• Testing of the situated nature of the critical evaluation of online information.
• Revised items will be piloted with 120 7th graders across high-, average-, and low-SES schools.
• Examination of the levels of specificity and discrimination of information space within items that affect item difficulty and the variance explained by items.
    Scan the QR code using your cell phone to view the COIL, or use the following URL:
    http://goo.gl/zhiOW
Binary Logistic Regression: Item-level analysis of pretest/posttest results (N = 197) showed three categories of responses.
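The poster does not specify the model. As one hedged illustration, the minimal Python sketch below fits an item-level logistic regression of correct/incorrect responses on a pretest/posttest indicator with statsmodels, reporting the odds ratio referenced in the Discussion. All data and variable names are hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    # 197 students observed at pretest (0) and posttest (1): 394 rows.
    posttest = np.repeat([0, 1], 197)
    p_correct = np.where(posttest == 1, 0.6, 0.4)  # placeholder probabilities
    correct = rng.binomial(1, p_correct)           # simulated 0/1 item responses

    X = sm.add_constant(posttest.astype(float))
    fit = sm.Logit(correct, X).fit(disp=False)
    print(fit.params)
    # Exponentiating the slope gives the posttest-vs-pretest odds ratio.
    print("odds ratio (posttest vs. pretest):", float(np.exp(fit.params[1])))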