1. Linguistic Correlates of Proficiency (LCP):
Test Performance Report Development
Shauna Jayne Sweet
Svetlana V. Cook
Scott Jackson
Cathy Doughty
Alia Lancaster
Karineh Aghajanian-Stewart
Nicholas Pandža
Timothy Howell
GURT 2016 | Georgetown University
2. A Quick Preview
• Proficiency Versus Linguistic Ability:
A different conceptual lens offering greater
clarity and utility to learners
• What We’ve Done:
Scorecard logic, components and features
• What We’re Doing:
Instrument revision, validation, and statistical
points worth sharing
3. The (Lack of) Information in Language
Proficiency Scales
• The unique context of advanced language learners
studying less commonly taught languages.
• High-stakes language proficiency tests are common,
but do they really address needs within the learning
community?
• Scores describe (at best):
o Functional tasks that the individual can perform
o Broad level of capability
• Scores do not describe:
o What specific linguistic issues present barriers
o Potential paths to improvement
4. Is Proficiency Really of Interest?
• Language acquisition is a
psycholinguistic process that proceeds
along a particular trajectory, with certain
constraints.
• Language proficiency is both more and
less than acquisition of the linguistic system
o Reading/listening comprehension
o Being well-spoken
o Etc.
• Language acquisition is necessary, but not
sufficient, for high-level proficiency
5. Shifting Focus to Language Acquisition
and Learning
• A focus on linguistic ability rather than
proficiency
• Finer-grained measures to facilitate tracking
of improvement
• A tool that maps the trajectory of
development can aid learners and
instructors:
o What should the learner focus on next?
o What might be holding a learner back from
reaching a higher proficiency?
6. What We’ve Done: An Overview
• Renewed efforts to
construct parallel
structure across
languages
• Piloted batteries in a
tailored language
training program
• Piloted individualized
feedback
7. What We’ve Done: Logic
• Worked to develop a score card
that provides useful feedback
• Achieves the appropriate grain-
size for giving meaningful
feedback to both learners and
instructors in a tailored learning
program
• Strikes a balance between
language-general framework
and language-specific features
• Presents information in a
readable format
8. What We’ve Done: Components
The cornerstone of the
Improvement Summary is a
table that clearly illustrates
the areas in which the learner
successfully demonstrated
progress on the LCP and the
areas in which he or she did
not demonstrate progress
10. What We’ve Done: A Foundation
• Articulated and committed to a
theoretical framework
• Identified a metric that makes sense to the
people who need to interpret test results
• Laid the necessary conceptual
foundation for continued instrument
refinement and validation efforts.
13. What We’re Doing
• Continue instrument refinement, thinking
of tasks as evidence of underlying
constructs.
• Ensure that within each language, we’re
reliably and accurately measuring the
components of linguistic ability.
• Ensure that across languages we’re
capturing the same components.
15. A Few Take-Home Points
• At the core of diagnostic assessment is
dialogue.
• Definitional and conceptual clarity is key.
o To establishing testable hypotheses and a
validation framework
o To ensuring purposeful iteration
• Differences in measurement don’t
preclude comparability of constructs. It’s
testable!
The Current Ability Summary features a second table that shows, by Domain and Linguistic Feature, where the TLTI participant consistently demonstrated his or her ability, demonstrated it partially or inconsistently, or showed only limited demonstration of ability.