Bijker, M. (2010) Making Measures And Inferences Reserve
Presentation of Monique Bijker (OU CELSTEC Learning & Cognition)


    Presentation Transcript

    • MAKING MEASURES FOR ACCURATE INFERENCES
      MONIQUE BIJKER, MARCEL VAN DER KLINK, ELS BOSHUIZEN
      CELSTEC, OPEN UNIVERSITY OF THE NETHERLANDS
    • OVERVIEW
      • Practical and theoretical rationales
      • Literature review
      • Fundamental measurement to improve theory
      • The development of items and scales
      • Participants
      • Results
      • Differences between educational science and psychology students
    • PRACTICAL BACKGROUND
      • Instruments for self-reported generic competences, to predict learning performance measures and labor market success measures in causal models (SEM)
    • TOWER OF BABEL
      • Self-regulating learning capabilities?
      • An intra-individual system of motivations, expectancies, and learning strategies?
      • For our purposes, can we use separate variables from the system?
      • Self-directing learning capabilities?
      • Self-directing career capabilities?
    • VAGUENESS
      • The concepts are never studied simultaneously, and never operationalized and validated together.
      • Unknown whether they are similar or different.
    • FINDINGS BASED ON LITERATURE
      • Self-efficacy (SE) and self-regulating learning capabilities (SRLC): bottom-up concepts, emerging from social-cognitive experimental research. Predictors of the (more frequent) use of cognitive strategies and predictors of academic achievement (Pintrich et al., 1991, 1993)
      • SE: “People’s judgments of their capabilities to organize and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391)
      • SRLC: planning, monitoring, evaluation. Effort, perseverance, and persistence (Pintrich et al., 1991, 1993).
    • FINDINGS BASED ON LITERATURE
      • Self-directed learning and career capabilities (SDLC and SDCC): top-down concepts, emerging from descriptive adult learning theory, multidisciplinary career theory, and informal learning environments.
      • Influenced by social, economic, and political perspectives.
      • Predictors of employability.
      • SDLC: “A characteristic adaptation to influence work-related learning processes in order to cope for oneself on the labour market” (Raemdonck, 2006, p. 13).
      • SDCC: “A characteristic adaptation to influence career processes in order to cope for oneself on the labour market” (Raemdonck, 2006, p. 13).
    • UNADDRESSED QUESTIONS
      • Can operationalizations of self-regulating (SRLC; TSE) and self-directing capabilities (SDLC; SDCC) be combined in one construct?
      • Do the concepts predict different outcomes?
      • Are there any differences in these concepts between different groups of adult learners in formal education programs?
    • APPROACH
      • Use of 36 existing items and development of 48 new, theory-based items.
      • Collection of real data.
      • The use of a measurement theory that defines the measures and constructs person capability measures independent of the items, and item measures independent of the persons: the Rasch model.
      • Selection of items that fit the model and verification of the construct validity and dimensionality.
      • Creation of measures in the first sample and anchoring of the second sample's measures on the first, to correct for possibly different response patterns on items (see the sketch below).
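      • A minimal sketch of what anchoring can look like in code (Python), assuming dichotomous items for simplicity (the actual items here are polytomous Likert items) and hypothetical difficulty values: item difficulties calibrated in the first sample are held fixed, and only person measures are estimated for the second sample.

        import math

        def person_measure(responses, anchored_difficulties, tol=1e-6):
            """Newton-Raphson ML estimate of one person's measure (logits),
            with item difficulties anchored at values calibrated in the
            first sample. Extreme strings (all 0s / all 1s) have no finite
            estimate and are rejected."""
            if sum(responses) in (0, len(responses)):
                raise ValueError("extreme score: no finite ML estimate")
            theta = 0.0
            for _ in range(100):
                probs = [1 / (1 + math.exp(d - theta)) for d in anchored_difficulties]
                gradient = sum(x - p for x, p in zip(responses, probs))
                information = sum(p * (1 - p) for p in probs)
                step = gradient / information
                theta += step
                if abs(step) < tol:
                    break
            return theta

        # Hypothetical: difficulties calibrated on sample 1, reused
        # unchanged ("anchored") for a respondent from sample 2.
        sample1_difficulties = [-1.2, -0.4, 0.1, 0.8, 1.5]
        print(round(person_measure([1, 1, 1, 0, 0], sample1_difficulties), 2))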
    • WHY THE RASCH MODEL?
      • Rasch person and item measures are invariant across samples and tests (generalization).
      • Rasch transforms qualitatively ordered (Likert-type) raw scores into mathematically ordered person and item interval measures. Each unit of measurement is the same as the next.
      • Rasch recognizes that items contribute differently to the underlying variable (in difficulty, or endorsability).
      • Rasch recognizes that scale distances (1-2; 2-3; 3-4; 4-5) in Likert-type items are unequal. Scales of items should fit the Rasch model in order to measure person capabilities invariantly. Hence raw Likert scores are unsuitable for summing, and summing them biases statistical analyses (see the sketch after this list).
      • Generalizability theory and CFA cannot adjust for targeting and the lack of interval properties of scales.
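      • To make the interval-scale point concrete, a small sketch (Python, hypothetical item difficulties, dichotomous items) that inverts the test characteristic curve: equal one-point steps in raw score map to unequal steps in logit measure, especially near the extremes, which is why summed raw scores lack interval properties.

        import math

        # Hypothetical difficulties (logits) for a 10-item dichotomous test.
        DIFFS = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]

        def expected_score(theta, diffs):
            return sum(1 / (1 + math.exp(d - theta)) for d in diffs)

        def measure_for_raw_score(raw, diffs, lo=-8.0, hi=8.0):
            """Bisection on the (monotone) test characteristic curve:
            find the theta whose expected raw score equals `raw`."""
            for _ in range(60):
                mid = (lo + hi) / 2
                if expected_score(mid, diffs) < raw:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        # Extreme raw scores (0 and 10) have no finite measure.
        for raw in range(1, 10):
            print(raw, round(measure_for_raw_score(raw, DIFFS), 2))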
    • SCALE DISTANCES AND ITEM CONTRIBUTIONS
    • OTHER PROBLEMS: DISORDERED THRESHOLDS
    • FORMULA
      • The polytomous "Rating Scale" model:
      • log(P_nij / P_ni(j-1)) = B_n - D_i - F_j
      • where
      • P_nij is the probability that person n encountering item i is observed in category j,
      • B_n is the "ability" measure of person n,
      • D_i is the "difficulty" measure of item i, the point where the highest and lowest categories of the item are equally probable,
      • F_j is the "calibration" measure of category j relative to category j-1, the point where categories j-1 and j are equally probable relative to the measure of the item.
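      • A direct transcription of this formula into code (Python), for experimenting with the model's behavior; the person measure, item difficulty, and step calibrations below are illustrative values only. Note that F_j values that increase with j give ordered thresholds; the disordered-thresholds problem from the earlier slide corresponds to F_j that do not increase with j.

        import math

        def rsm_category_probs(b, d, f_steps):
            """Rating Scale model: probability of each of the M+1 ordered
            categories, for person measure b, item difficulty d, and step
            calibrations F_1..F_M (all in logits). By construction,
            log(P_j / P_(j-1)) = b - d - F_j, as on the slide."""
            psi = [0.0]  # cumulative numerator exponents
            for f in f_steps:
                psi.append(psi[-1] + (b - d - f))
            denom = sum(math.exp(p) for p in psi)
            return [math.exp(p) / denom for p in psi]

        # Illustrative 5-category Likert item (4 step calibrations).
        probs = rsm_category_probs(b=0.5, d=0.0, f_steps=[-2.0, -0.7, 0.6, 2.1])
        print([round(p, 3) for p in probs])  # sums to 1.0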
    • DATA COLLECTION
      • Online questionnaires composed of the 84 items (and additional open questions about curricula).
      • Participants: 232 adult students of the school of Educational Sciences and 139 students of the school of Psychology of the Open University of the Netherlands in their premaster (BSc) or master trajectory.
      • 35% male, 65% female. Average age: 42, SD = 10.
    • RESULTS
      • Four distinct scales with Cronbach alphas of .90 (SDCC; 20 items), .84 (SDLC; 23 items), .72 (SRLC; 6 items), and .79 (TSE; 9 items). (RQ1; the alpha computation is sketched after this list.)
      • 26 of the 84 items did not fit the model. In SDLC and SDCC, it was predominantly the new items that fit the Rasch model.
      • Items in SDLC in particular are very sensitive to model misfit and disordered thresholds; SDLC has very small categories.
      • TSE is characterized by contextualized items. Which items are generalizable to other contexts (suitable for anchoring)?
      • SRLC is too easy to endorse.
      • SDCC is the most stable and best targeted construct.
      • Modeling of the constructs in SEM. (RQ2)
      • Three significant differences between ES and Psy. (RQ3)
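      • For reference, the alphas reported above are Cronbach's internal-consistency coefficient; a minimal sketch (Python) of the computation on a hypothetical person-by-item score matrix:

        from statistics import pvariance

        def cronbach_alpha(scores):
            """scores: one row per person, one column per item.
            alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
            k = len(scores[0])
            item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
            total_var = pvariance([sum(row) for row in scores])
            return k / (k - 1) * (1 - sum(item_vars) / total_var)

        # Toy data: 4 persons x 3 Likert items (illustration only).
        print(round(cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]), 2))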
    • SCALES SUCH AS TSE

      Item   Infit   Outfit   Measure   Error   PTMEA
      77      .82     .84      1.86      .13     .57
      80     1.29    1.27      1.03      .14     .42
      72      .74     .74       .90      .14     .70
      84      .98     .91       .51A     .14     .60
      70      .64     .64       .22A     .15     .69
      71      .77     .75       .09A     .15     .72
      83      .99     .94      -.41A     .15     .70
      73     1.13    1.07     -1.10A     .15     .54
      78      .88     .82     -1.16      .15     .74

      All items:    Mean  .91   .89    .21   .15;   SD  .19   .18    .94   .01
      All persons:  Mean  .89   .89   1.62   .62;   SD  .64   .64   1.35   .11

      Person Reliability .79; Person Separation 1.91
      Item Reliability .97; Item Separation 6.24; Cronbach alpha .82

      Category:                1      2      3      4      5
      Average measures:     -1.96   -.61    .58   2.37   4.10
      Step calibrations:       -3.79  -1.95   1.19   4.55

      Notes: Items 83 and 84 are similar in ES and Psy; item 80 is different in ES and Psy. Measures flagged "A" are anchored (fixed at first-sample values).
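      • A note on the Infit and Outfit columns: both are mean squares of residuals between observed responses and Rasch-model expectations, with expectation 1. A sketch of the computation (Python), assuming observed scores, model expectations, and model variances are already at hand (dichotomous case shown for simplicity):

        def fit_statistics(observed, expected, variance):
            """Outfit: unweighted mean of squared standardized residuals,
            so it is sensitive to outliers (e.g., lucky responses on
            off-target items). Infit: information-weighted mean square
            (model variances as weights), emphasizing well-targeted items."""
            outfit = sum((x - e) ** 2 / v
                         for x, e, v in zip(observed, expected, variance)) / len(observed)
            infit = (sum((x - e) ** 2 for x, e in zip(observed, expected))
                     / sum(variance))
            return infit, outfit

        # Hypothetical dichotomous responses: expectations p, variances p(1-p).
        p = [0.9, 0.7, 0.5, 0.3, 0.1]
        x = [1, 1, 1, 0, 0]
        print(fit_statistics(x, p, [pi * (1 - pi) for pi in p]))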
    • IMPLICATIONS FOR PRACTICE
      • For ES: In the premaster stage, focus on tasks that support SRLC and academic achievement (planning, monitoring, evaluation), but also support effort, persistence, and perseverance.
      • For ES: Support TSE, by mastery experiences, modeling, and persuasion.
      • For PSY: support SDLC by integrating more authentic professional tasks (or practice experiences), not only in research practicals but also in diagnostic or intervention practice.
    • IMPLICATIONS FOR FUTURE RESEARCH
      • How generalizable is self-efficacy as a construct (and consequently, how can you compare groups on this phenomenon)?
      • What is the quality of the negatively formulated items?
      • Is it justified to assume that student samples, in comparable stages of their learning trajectory, are of an equal endorsability level in self-reporting generic competences in different domains?
      • Is it justified to assume that responses on items can be attributed to persons, if context affects response patterns (e.g. SRLC: “When I participate in an education program I make sure that I complete that program”)? (This also has consequences for making measures.)
    • RUDE QUESTIONS…
      • What is the quality of the instruments we use to measure learning and development (how and when are they validated? With which methods)?
      • How reliable, valid, and comparable are our performance measures, if we do not use Rasch-validated items or tests?
      • How frequently do we calibrate our measures?
    • THANK YOU FOR YOUR ATTENTION. Any questions? [email_address]