Changing issues, changing questions, changing approaches: An overview of research in the Alternative Admission Research Project

The presentation will highlight changing demands (from a sharp focus on access to concerns about throughput) and responses related to admission to higher education, and the research underpinning such responses. Beginning in the late 1980s, the paper traces the development of assessment procedures in the 'dynamic' testing tradition (responding to the need to test for 'potential' and widen access). The paper ends with a discussion of the National Benchmark Tests Project (responding to the need to place students in appropriate curricula and improve throughput), focusing on the research and approaches underlying these tests as well as the findings and some implications for both schooling and higher education.

Presented by A/Prof. Nan Yeld & Robert Prince

Notes:
• Brief explanation of the differences between CTT and IRT
• Briefly explain IRT: the 3-parameter model (b = item difficulty, a = item discrimination, c = pseudo-guessing parameter); Item Characteristic Curves; Test Characteristic Curve; Test Information Function; Equating
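For reference, the three-parameter logistic (3PL) model these parameters belong to — standard IRT, not spelled out on the slides — gives the probability that a candidate of ability θ answers item i correctly:

```latex
P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}
```

Here b_i shifts the curve along the ability scale (difficulty), a_i controls its steepness (discrimination), and c_i is the lower asymptote (the chance of answering correctly by guessing).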

1. Changing issues, changing questions, changing approaches
• An overview of research in the Alternative Admissions Research Project
2. The late 1980s to about 2001
The main questions were about:
• Access
• Finding ways to identify talented students from poor schools (focus on the DET)
3. (Extract: Placement Test in English for Education Purposes construct)
• aim to predict the performance of candidates in future situations in which language will be an important, but not sole, variable (language-as-vehicle rather than language-as-target);
• acknowledge the effects on cognitive functioning of the quantity and quality of prior mediated learning opportunities experienced by an individual (i.e. take seriously the impact of educational disadvantage by building in mediated learning opportunities);
• be based on a notion of knowing and learning which views learners as actively involved, as individuals and in collaboration with others, in creating and negotiating meaning in a wide variety of settings. This process of conceptual development is seen as highly dependent on specific areas of expertise involving knowledge and information, and on the connections between these;
• be based on a componential model of language ability, which comprises topical knowledge and language knowledge, mediated by strategic competence (metacognitive strategy use) and affective schemata.
4. The main assessment-related challenges were:
• producing a greater range of scores (spread), so that capable students could be more clearly differentiated from weaker students
• improving (raising) the level of stronger students' scores
• increasing the predictive validity of the test (that is, did the test correctly distinguish between weaker and stronger students?)
5. Towards 'dynamic' testing: the old ELPT and the new PTEEP
Similarities:
• The same three major writing tasks (summary, description and contrast/comparison)
• The same additional text types (graph and diagram)
• The same prose texts
Differences:
• The PTEEP included structured and sequenced tasks designed to act as mediation for the writing assignments; the ELPT did not include these 'scaffolding' tasks.
6. ASSESSING THE IMPACT OF THE SCAFFOLDING APPROACH
• Candidate perceptions: Questionnaire evidence from pilot groups suggests that candidates found the new test to be more 'user-friendly'.
• Predictive validity: Correlations between first-year academic (UCT) performance and the selection tests strengthened.
• Task preparation study: Stronger candidates use preparatory tasks whereas weaker candidates do not see the connections between tasks (such tasks therefore help to widen the gap).
• Range of scores: Based on scores from the same tasks (summary, short description and one-page essay), the range of scores increased.
7. ELPT and PTEEP Scores
8. Quantitative approaches used at this stage and in the early 2000s
• Problems with correlational analysis
• A move to 'survival analysis' (see the sketch below)
• Classical item analysis
• Psychometrically 'naïve' tests relying on ranking – limitations for score use
• Reliance on group performance for 'standards'
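The 'survival analysis' referred to above models how long students persist (e.g. semesters until graduation or dropout) rather than correlating test scores with a single outcome, and it handles students still enrolled at the end of observation (censoring), which correlational analysis cannot. A minimal sketch using the lifelines library, with entirely hypothetical data and column names (semesters, graduated, test_band):

```python
# Kaplan-Meier sketch of student persistence, stratified by selection-test band.
# Hypothetical data: `semesters` = semesters observed; `graduated` = 1 if the
# student graduated (the event), 0 if still enrolled (censored) at last observation.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "semesters": [6, 8, 10, 12, 7, 9, 12, 12],
    "graduated": [1, 1, 1, 0, 1, 1, 1, 0],
    "test_band": ["high", "high", "low", "low", "high", "low", "high", "low"],
})

kmf = KaplanMeierFitter()
for band, group in df.groupby("test_band"):
    # Fit one curve per test band; censored students contribute correctly.
    kmf.fit(group["semesters"], event_observed=group["graduated"], label=band)
    # Median time-to-graduation for this band:
    print(band, kmf.median_survival_time_)
```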
9. The early 2000s
• Expansion of "AARP" testing
• Introduction of the Quantitative Literacy test
• Major innovations in the area of test delivery (use of innovative technology etc.)
• Pressure on tests reduced the 'potential' aspect
• Development of the Health Science Consortium group of tests
10. 2005 onwards: the main questions …
• Concerns largely shifted from access to throughput
• Placement function emphasised – implications for curriculum (focus mainly on extended programmes)
• Concerns about the new school curriculum and qualification – target now all applicants (mass test)
• Increasing tensions with the Department of Education (with stronger FET curriculum links)
11. Issues and challenges
• Academic literacy/ies construct widened to include a greater focus on quantitative literacy (and reading, not writing, now the focus)
• For the first time, direct testing of the school curriculum (Mathematics & Maths Literacy)
• Radically different development trajectories and research traditions for the new NBTs (achievement tests) and TAPs (re-designed from the old PTEEPs)
12. Research approaches: quantitative
• From Classical Test Theory to Item Response Theory
• Psychometric test development approaches
13. CTT vs IRT (Hambleton & Jones, 1993, 'Comparison of classical test theory and item response theory and their applications to test development')
14. Item and Test Characteristic Curves; Test Information Functions and Equating
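To make the slide's terms concrete, here is a minimal sketch in plain NumPy, with illustrative (a, b, c) item parameters rather than NBT data, computing the 3PL item characteristic curve and the item/test information functions named on the slide:

```python
# Sketch: 3PL item characteristic curves and test information
# (illustrative item parameters, not NBT data).
import numpy as np

def icc_3pl(theta, a, b, c):
    """P(correct | theta) under the 3PL model (the Item Characteristic Curve)."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def item_information_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = icc_3pl(theta, a, b, c)
    return (a ** 2) * ((p - c) ** 2 / (1 - c) ** 2) * ((1 - p) / p)

theta = np.linspace(-3, 3, 121)      # ability scale
items = [(1.2, -0.5, 0.20),          # (a, b, c) per item
         (0.8,  0.0, 0.25),
         (1.5,  1.0, 0.20)]

# Test information = sum of item informations. Its reciprocal square root is
# the conditional standard error of measurement; estimating ability on this
# common theta scale is what allows scores from different forms to be equated.
test_info = sum(item_information_3pl(theta, a, b, c) for a, b, c in items)
sem = 1 / np.sqrt(test_info)
print(theta[test_info.argmax()])     # ability where the test is most informative
```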
15. THE NATIONAL BENCHMARK TESTS PROJECT
16. Why did HESA commission the NBTP?
• Demonstrable inefficiencies in Higher Education itself (low throughput etc.)
  – Difficulties in identifying students' educational needs
  – Lack of appropriate curriculum flexibility at entry to meet these needs
• Concerns about how to interpret the new NSC
17. In summary …
• The NBTP is about higher education getting its own act in order – it is not about pointing fingers at the school system
• It sets out to do this by providing information about the competence of entering students in terms of 3 core domains of knowledge/skills
• It is important to note that higher education's 'take' on what these core sets are, and at what level they should be mastered, will in all probability differ somewhat from those deemed most salient by the school-leaving system
18. What do the NBTs aim to do?
• Provide additional information about performance in core, underlying areas (additional to NSC information)
• The core (domain) areas are:
  – Academic literacy and Quantitative literacy (together one 3-hr test)
  – Mathematics (one 3-hr test)
20. Example – Competency Specification Proportions: NBT Quantitative Literacy
(Columns 1–4 are the four competence levels: 1 = Knowing; 2 = Applying routine procedures in familiar contexts; 3 = Applying multistep procedures in a variety of contexts; 4 = Reasoning and reflecting. The +/– split within each level is reproduced from the original slide. n = item count; % = share of test; Target % = intended share.)

Competence area                                1+   1-   2+   2-   3+   3-   4+   4-     n      %  Target %
Comprehending: identifying or locating
  Vocabulary                                    5    0    7    7    2    4    1    5    31   19.0     15-20
  Representations of numbers and operations     1    0    1    2    1    1    1    0     7    4.3      5-10
  Conventions for visual representations        4    0   10    6    1    5    1    5    32   19.6     20-25
Acting, interpreting, communicating
  Using representations of data                 2    0    6    3    1    4    1    4    21   12.9     20-25
  Computing (and estimating?)                   0    0    4   10    2    4    0    2    22   13.5     15-20
  Conjecturing                                  0    0    0    0    0    0    0    1     1    0.6      0-5
  Interpreting                                  5    0   10    5    3    5    2    5    35   21.5     10-15
  Reasoning                                     0    0    1    0    0    0    1    4     6    3.7      5-10
  Representing quantitative information         0    0    3    1    0    0    2    0     6    3.7      5-10
  Describing quantitative relationships         0    0    1    0    1    0    0    0     2    1.2      0-5
Total (n)                                      17    0   43   34   11   23    9   26   163
% (by column)                                10.4  0.0 26.4 20.9  6.7 14.1  5.5 16.0
% (by competence level)                         10.4      47.2      20.9      21.5
21. Mathematical and statistical ideas

Competence area                               1+   1-   2+   2-   3+   3-   4+   4-    n      %  Target %
Quantity, number and operations (LO1)          1    0    6    9    3    4    1    2   26   31.0     25-30
Shape, dimension and space (LO3)               3    0    4    3    0    2    1    1   14   16.7     10-15
Relationships, pattern, permutation (LO2a)     0    0    5    2    1    0    2    1   11   13.1     10-15
Change and rates (LO2b)                        0    0    4    2    1    0    1    1    9   10.7     10-15
Data representation and analysis (LO4a)        2    0    5    3    1    4    0    6   21   25.0     20-25
Chance and uncertainty (LO4b)                  0    0    1    0    0    2    0    0    3    3.6      5-10
Total (n)                                      6    0   25   19    6   12    5   11   84
% (by column)                                7.1  0.0 29.8 22.6  7.1 14.3  6.0 13.1
% (by competence level)                         7.1       52.4      21.4      19.0
Target % (by competence level)                15-20      25-30     25-30     25-30
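A quick way to audit such a specification is to check each observed percentage against its target band. A small sketch, with the values hand-copied from the table above:

```python
# Check observed specification percentages against their target bands
# (values transcribed from the 'Mathematical and statistical ideas' table).
spec = {
    "Quantity, number and operations (LO1)":      (31.0, (25, 30)),
    "Shape, dimension and space (LO3)":           (16.7, (10, 15)),
    "Relationships, pattern, permutation (LO2a)": (13.1, (10, 15)),
    "Change and rates (LO2b)":                    (10.7, (10, 15)),
    "Data representation and analysis (LO4a)":    (25.0, (20, 25)),
    "Chance and uncertainty (LO4b)":              (3.6,  (5, 10)),
}

for area, (observed, (low, high)) in spec.items():
    status = "within" if low <= observed <= high else "OUTSIDE"
    print(f"{area}: {observed:.1f}% is {status} target {low}-{high}%")
```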
22. How are the benchmarks (cut-off points) derived?
• The process is fundamentally different from the examination-paper design procedures and the norm-referenced standardising and resulting processes of the NSC
• All items need to have been through rigorous review (fairness, content etc.) and be statistically robust
• All items need to have been piloted
• The benchmark-setting process is NOT about whether students can pass an item or not – the process is based on a set of probability assessments made by first-year lecturers (see the sketch below), with the core question being:
  – "If a student can't pass this item / do this, will s/he experience academic difficulties – and if so, how severe?"
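The probability-judgement procedure described on this slide belongs to the family of Angoff-type standard-setting methods. As a minimal sketch only (hypothetical ratings; the NBTP's actual procedure is more elaborate), a cut score can be derived by averaging each lecturer's probability estimates per item and summing across items:

```python
# Angoff-style cut-score sketch (hypothetical ratings, not NBTP data).
# ratings[j][i] = lecturer j's estimated probability that a student at the
# benchmark boundary answers item i correctly.
ratings = [
    [0.8, 0.6, 0.9, 0.4, 0.7],   # lecturer 1
    [0.7, 0.5, 0.8, 0.5, 0.6],   # lecturer 2
    [0.9, 0.6, 0.8, 0.3, 0.7],   # lecturer 3
]

n_items = len(ratings[0])
# Average across lecturers per item, then sum across items:
item_means = [sum(r[i] for r in ratings) / len(ratings) for i in range(n_items)]
cut_score = sum(item_means)                       # expected raw score at the boundary
print(f"cut score: {cut_score:.1f} / {n_items}")  # e.g. a level boundary on the raw scale
```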
23. NBT information
INDIVIDUAL LEVEL
• Benchmark level (Basic, Intermediate, Proficient)
• Description of what this means for each domain (i.e. what does being in the 'Basic' category mean a student knows and can do in Mathematics)
• Clear recommendations about the type and extent of support needed
GROUP LEVEL
• At the level of a faculty, or qualification, or institution …
• Gives a clear indication of the needs and strengths of entering cohorts, either before entry or at registration: useful for placement into existing courses, and/or for course design or modification
24. DATA BASED ON FEB 2009 PILOTS
25. ACADEMIC LITERACY (overall), N = 12,202
Participating institutions: Mangosuthu, Rhodes, Stellenbosch, UCT, UKZN, UWC, Wits.
• Basic (851 candidates): Serious learning challenges – long-term, pre-tertiary intervention needed.
• Intermediate (5,571 candidates): Challenges identified such that it is predicted that academic progress will be adversely affected. If admitted, students' educational needs should be met in a way deemed appropriate by the institution (e.g. extended or augmented programmes, special skills provision).
• Proficient (5,780 candidates): Performance such that academic performance will not be affected. If admitted, students should be placed on regular programmes of study.
[Chart: ACADEMIC LITERACY NBT Benchmark Levels, February 2009 – Basic 851, Intermediate 5,571, Proficient 5,780]
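For scale, the counts above convert to cohort shares as follows (simple arithmetic on the slide's own numbers):

```python
# Share of the Feb 2009 pilot cohort at each Academic Literacy benchmark level.
counts = {"Basic": 851, "Intermediate": 5571, "Proficient": 5780}
total = sum(counts.values())                       # 12,202
for level, n in counts.items():
    print(f"{level}: {n} ({100 * n / total:.1f}%)")  # ~7.0%, ~45.7%, ~47.4%
```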
26. ACADEMIC LITERACY by Faculty
27. Quantitative Literacy, N = 12,202
28. Quantitative Literacy by Faculty
29. Mathematics (overall)
30. Mathematics overall [Intermediate 'Top' and 'Bottom']
31. Mathematics by Faculty