Validation and moderation workshop Session 1

  • About me. Backgrounds: who they are. General conversation: experience, knowledge of V&M, VTA, expectations for today.
  • The moderation process involves assessors discussing and reaching agreement about assessment and outcomes in a particular industry. Validation is a process for ensuring that the way a unit or group of units is assessed, and evidence collected, is consistent with requirements. Moderation and validation are both concerned with consistency, assessment, industry requirements and unit of competency requirements. Questions that moderation and validation help answer include: Am I collecting evidence that is consistent with the requirements of the unit of competency? Am I collecting evidence that is appropriate to the industry? Do I gather sufficient evidence using appropriate methods and tools? Have I interpreted the evidence requirements appropriately? When you moderate your assessment with a trainer/assessor who is vocationally competent and has current relevant experience in the industry, feedback from this process will help you maintain consistency with other assessors, the units of competency and current industry practice.
  • Today is based on three central ideas. What will we assess? The focus for assessment is on the knowledge and understanding, skills, attributes and capabilities within the experiences and their learning outcomes. Consider what kind of activities you plan to offer the learners in your group/class that will result in evidence of their competence.
  • Validation is a quality review process. It involves checking that the assessment tool produced valid, reliable, sufficient, current and authentic evidence to enable reasonable judgements to be made as to whether the requirements of the relevant aspects of the Training Package or accredited course have been met. It includes reviewing and making recommendations for future improvements to the assessment tool, process and/or outcomes. What is moderation? Moderation is the process of bringing assessment judgements and standards into alignment. It ensures the same standards are applied to all assessment results within the same unit(s) of competency. It is an active process in the sense that adjustments to assessor judgements are made to overcome differences in the difficulty of the tool and/or the severity of judgements. (An assessment tool includes the following components: the context and conditions for the assessment, the tasks to be administered to the candidate, an outline of the evidence to be gathered from the candidate, and the evidence criteria used to judge the quality of performance, i.e. the assessment decision making rules. It also includes the administration, recording and reporting requirements.) Validation versus moderation: the two terms have been used interchangeably in the VET sector, and while each is based on similar processes, there are a number of distinctive features. Let's look at the table below.
  • In summary, the major distinguishing features between validation and moderation are that: validation is concerned with quality review while moderation is concerned with quality control; the primary purpose of moderation is to help achieve comparability of standards across organisations, whereas validation is primarily concerned with continuous improvement of assessment practices and outcomes; while validation and moderation can both focus on assessment tools, moderation requires access to judged (or scored) candidate evidence, which is only desirable for validation; both consensus and external approaches to validation and moderation are possible, and moderation can also be based upon statistical procedures while validation can include less formal arrangements such as assessor partnerships; and the outcomes of validation are recommendations for future improvement to the assessment tools and/or processes, whereas moderation may also include making adjustments to assessor judgements to bring standards into alignment, where determined necessary.
  • Assessment purposes? Assessment for learning occurs when teachers use inferences about student progress to inform their teaching (formative). Assessment as learning occurs when students reflect on and monitor their progress to inform their future learning goals (formative). Assessment of learning occurs when teachers use evidence of student learning to make judgements on student achievement against goals and standards (summative).
  • Some of the reasons include: these strategies are not designed to produce a 'perfect' assessment system, but their value is based on the assumption that, in most circumstances, the judgement of a group of assessors may be more reliable than the judgement of an individual assessor. The very last reason you validate and moderate is for compliance: the VET Quality Framework is a set of standards and conditions that ASQA uses to assess whether an RTO meets its requirements for registration. The VET Quality Framework comprises the Standards for NVR Registered Training Organisations 2012 (15.2: strategies for training and assessment meet the requirements of the relevant Training Package or VET accredited course and have been developed through effective consultation with industry).
  • Here are also some examples of the outcomes of a moderation process. Like validation, it could lead to recommendations for improvements to the tool, but unlike validation, it may require altering students' results prior to finalisation to bring standards into alignment. There is also a requirement for some form of accountability.
  • External approaches (validation and moderation). There are various external approaches to assessment validation and moderation. One approach would be for an external person (or a panel of people) to visit the organisation to judge the way in which candidates' evidence was collected and judged against the unit(s) of competency. Differences between the local and external assessment judgements could then be either discussed and reconciled accordingly (i.e. if conducted for moderation purposes) and/or discussed to identify ways in which improvements to future assessment practices could be undertaken (i.e. if conducted for validation purposes). An alternative external approach would be for samples of assessment tools and/or judged candidate evidence to be sent to a central location for specialist assessors to review directly against the unit(s) of competency. The specialist external assessors could be representatives of the relevant national Industry Skills Council (ISC) and/or the relevant state/territory registering bodies. Again, differences between the organisation's and the external-based assessments could then be discussed (e.g. for validation) and/or reconciled (e.g. for moderation) at a distance. There are a number of benefits from using external moderators/validators. These include the potential to: offer authoritative interpretations of the standards specified within units of competency; improve consistency of standards across locations by identifying local bias and/or misconceptions (if any); offer advice to organisations and assessors on assessment approaches and procedures; and observe actual assessment processes in real time, as opposed to simply reviewing assessment products (if site visits are included). In relation to moderation, although external approaches have greater quality control over the assessment processes and outcomes than consensus meetings, they have less quality control than statistical approaches.
  • Typically, consensus meetings involve assessors reviewing their own and their colleagues' assessment tools and outcomes as part of a group. They can occur within and/or across organisations, and are typically based on agreement within a group on the appropriateness of the assessment tools and assessor judgements for a particular unit(s) of competency. A major strength of consensus meetings is that assessors are directly involved in all aspects of assessment and gain professionally by learning not only how and what to assess, but what standards to expect from their candidates. Consensus meetings also enable assessors to develop strong networks, promote collegiality, and provide an opportunity for sharing materials/resources among assessors. If used for moderation purposes, however, consensus meetings provide less quality control than external and statistical approaches as, again, they can be influenced by local values and expectations.
  • Questions are often asked about how validation processes can be systematic. Here is an outline of what a plan for validation may include. Implementation of the plan goes towards the notion of 'systematic'.
  • As you can see, there are a number of different quality management processes that could be used to help achieve national comparability of standards, whilst still maintaining flexibility at the RTO level to design and conduct assessments.
  • Assessment is a purposeful process of systematically gathering, interpreting, recording and communicating to stakeholders information on student performance.
  • The blue text shows the aspects to be looked at today.
  • This industry consultation and validation is a requirement of the current standards. It is the responsibility of the RTO to meet current requirements.
  • Assessment tools do not need to include a competency mapping; that is, it is not an auditable requirement. However, the key purpose of the competency mapping is to ensure the sufficiency of the evidence to be collected, as well as its content validity.
  • This will depend upon the level of risk associated with the unit of competency and the assessment.
  • What are the rules of evidence? The expected performance needs to be documented to ensure that there is a common understanding across assessors to inform the decision of competence. Decision making rules need to be applied at various levels. For example, for each key aspect of performance that needs to be judged for each task, some form of rules and/or guidelines for judging the quality of that performance would need to be documented (e.g. what is an acceptable response to an open-ended interview question). This is referred to as the evidence criteria. There also needs to be a final decision making rule as to how to make an overall judgement of competence using evidence from multiple sources (e.g. an interview and a portfolio). A competency based assessment system encourages the use of a range of assessment methods and tasks for gathering evidence of competence. However, the use of a combination of assessment methods and tasks yields information about performance that must first be interpreted by the assessor to infer the competence level of the candidate, and secondly used to make a judgement as to whether the competency standards have been met (Wheeler, 1993). Developers of assessment tasks and assessment policy documents will need to determine the critical nature of the competencies and to set minimum levels of performance for each assessment task designed. Assessors conduct assessments within high risk or emergency situations, for instance, where a wrong assessment decision could place either the candidate or his/her peers in danger.
  • Reasonable adjustments refer to identifying a particular target group/person with background characteristics for which the assessment may potentially prohibit them from completing the task and demonstrating their 'true' competence level (for example, setting a written test for people with low literacy). In these cases the assessment tasks will need to be adjusted or alternative tasks developed. A simple example would be responding to questions orally rather than in writing. However, remember that the adjustments must not alter the expected standard of performance specified within the unit(s) of competency.
  • Of particular concern in audit are: lack of 'face' validity of assessment, and lack of a simulated environment that reflects workplace realities.
  • There are a number of ways in which RTOs could engage industry in their assessment quality management system. For example, industry could be involved in determining whether a qualification requires a moderation or a validation process. As part of this process, industry could be involved in determining the level of risk associated with conducting a false positive assessment (i.e. assessing someone as competent when in actual fact they are not yet competent). This may involve determining the critical nature of the competency, the financial/safety implications of making a wrong assessment judgement, as well as the frequency of use of the competency in the workplace. The greater the risk, the more likely the need for moderation. Other ways in which industry could be involved in the assessment quality management system have been documented in the Assessor Guide: Validation and Moderation. Quality assurance: panelling of assessment tools to determine relevance and realism to the workplace (face validity); content validity (mapping of key components within a task to curriculum/standards); technical accuracy; appropriateness of language/terminology; literacy and numeracy requirements; evidence criteria used to judge candidate performance for each task; range and conditions for the assessment (e.g. materials/equipment, facilities, time restrictions, level of support permitted); any reasonable adjustments to evidence collection (as opposed to the standard expected); and sufficiency of evidence across time and contexts (transferability). Consultation with industry/community representatives to identify benchmark examples of candidate work at both competent and not yet competent levels, and exemplar assessment tasks/activities. Quality control: moderation consensus panel membership; identifying benchmark samples of borderline cases; determining the level of tolerance (in relation to risk assessment); and external moderation (if representing an organisation/association of authority or standing within the industry). Quality review: panel representation on a validation panel (e.g. to check content and face validity of assessment tools); and follow-up surveys to determine predictive validity.
  • Q: I have trainers who also have RTO businesses in the industry sector they train in. Can I use them as part of my industry consultation? A: This is a good form of industry consultation as long as it is well documented which 'hat' the trainers are wearing in this instance. I also wouldn't use this as the only form of industry consultation, as it can be viewed as 'just talking to your own'. I would want to see more evidence of consistent messages which confirms your TAS. Industry validation is about the assessment process, and internal validation is about the technical assessment instrument and ensuring it meets all the requirements.

    1. June 2013
    2. Workshop overview: The quality of learning – what do we assess? Assessment is ongoing – when do we assess? Approaches to assessment – how do we assess?
    3. What is Validation and Moderation? Validation is a quality review process of assessment: • producing valid, reliable, sufficient, current and authentic evidence • enabling reasonable judgements to be made • reviewing and making recommendations for future improvements. Moderation is the process of bringing assessment judgements and standards into alignment: • ensuring the same standards are applied to judge the quality of performance (i.e. the assessment decision making rules) • including administration, recording and reporting requirements • it is an active process. Validation versus moderation: the terms have been used interchangeably in the VET sector, and while each is based on similar processes, there are a number of distinctive features.
    4. Features of validation versus moderation: • Assessment quality management type – validation: quality review; moderation: quality control. • Primary purpose – validation: continuous improvement; moderation: bring judgements and standards into alignment. • Timing – validation: ongoing; moderation: prior to the finalisation of candidate results. • Focus – validation: assessment tools and candidate evidence, including assessor judgements (desirable only); moderation: assessment tools and candidate evidence, including assessor judgements (mandatory). • Types of approaches – validation: assessor partnerships, consensus meetings, external (validators or panels); moderation: consensus meetings, external (moderators or panels), statistical. • Outcomes – validation: recommendations for future improvements; moderation: recommendations for future improvements and adjustments to assessor judgements (if required).
    5. What is Assessment? • purposeful process of systematically gathering, interpreting, recording and communicating to stakeholders information on student performance • assessment for learning occurs when teachers use inferences about student progress to inform their teaching (formative) • assessment as learning occurs when students reflect on and monitor their progress to inform their future learning goals (formative) • assessment of learning occurs when teachers use evidence of student learning to make judgements on student achievement against goals and standards (summative)
    6. Why participate in validation and moderation processes? • to ensure assessments align with current industry practice and benchmarks remain within acceptable limits • to ensure the quality and consistency of assessment • to confirm the consistency of assessors' judgements • to gain a comprehensive understanding of the depth of students' knowledge and skills • VET Quality Framework compliance
    7. Outcomes of validation. Recommendations for future improvements: • review of context and conditions for the assessment • task/s to be administered to the candidates • administration instructions • criteria used for judging the quality of performance (e.g. the decision making rules, evidence requirements etc.) • guidelines for making reasonable adjustments to ensure that the expected standard of performance specified within the unit(s) of competency has not been altered • recording and reporting requirements
    8. Outcomes of moderation • recommendations for future improvement and adjustments to assessor judgements (if required) • recommendations for improvement to the assessment tools • adjusting the results of a specific cohort of candidates prior to the finalisation of results • requesting copies of final candidate assessment results in accordance with recommended actions
    9. Approaches to Validate and Moderate Assessments. Statistical – requires some form of common assessment. Strength: strongest form of quality control. Weakness: lacks face validity; may have limited content validity. External – site visits. Strength: 'expert' interpretations of standards; advice to RTOs/assessors on assessment approaches and procedures; observing actual assessment processes in real time. Weakness: can be expensive, and there is less control than a statistical approach.
    10. Approaches to Validate and Moderate Assessments. Partnerships – sharing, discussing and/or reviewing one another's tools and/or judgements. Strength: low cost, personally empowering, non-threatening, easily organised. Weakness: potential to reinforce misconceptions and mistakes. Consensus – reviewing own and colleagues' assessment tools and judgements as a group. Strength: professional development, networking, promotes collegiality and sharing. Weakness: less quality control than external and statistical approaches, as they can also be influenced by local values and expectations, and requires a culture of sharing.
    11. Systematic Validation Indicators. Is there a plan for assessment validation (including validation of RPL assessment) in place? Does the plan: • identify units of competency to be validated over a set period of time • provide dates for proposed validation activities • include details about who will participate in assessment validation • include a strategy to ensure that all relevant staff are involved • identify the processes and materials for implementing and recording the outcomes of assessment validation? Does the RTO have validation materials (policy, procedure, forms) in place that cause participants to engage effectively in validation? Does the RTO have a process for monitoring the action taken as a result of validation? Does the RTO have a process and plan in place for reviewing the effectiveness of assessment validation?
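A validation plan of this kind can be kept as structured data so that coverage over a set period is easy to check. The following is a minimal sketch only; the unit codes, dates and participant names are hypothetical, not drawn from any real qualification:

```python
from datetime import date

# Hypothetical validation plan: each entry records the unit of competency,
# the proposed date for the activity, and who will participate.
plan = [
    {"unit": "BSBCMM401", "date": date(2013, 8, 15),
     "participants": ["assessor A", "assessor B"]},
    {"unit": "BSBWOR501", "date": date(2013, 11, 20),
     "participants": ["assessor A", "industry rep"]},
]

def unscheduled_units(all_units, plan):
    """Return units of competency with no validation activity scheduled."""
    scheduled = {activity["unit"] for activity in plan}
    return sorted(u for u in all_units if u not in scheduled)

qualification_units = ["BSBCMM401", "BSBWOR501", "BSBADM502"]
print(unscheduled_units(qualification_units, plan))  # ['BSBADM502']
```

A check like this supports the "systematic" indicator above: gaps in the schedule surface automatically rather than being discovered at audit.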
    12. Assessment Quality Management. Quality Assurance (input approach) – examples include: • industry competency standards as the benchmarks of varying levels of performance • national assessment principles • minimum qualifications for assessors • standardisation of reporting formats • assessment tool banks • common assessment tasks • exemplar assessment tools • professional development programs/workshops for assessors. Quality Control (outcome approach) – examples include: • moderation in which adjustments to assessor judgements are made to overcome differences in the difficulty of the assessment tool and/or severity of the judgement. Quality Review (retrospective approach) – examples include: • monitoring and auditing of the organisation • review and validation of assessment tools, processes and outcomes to identify future improvements • follow-up surveys with key stakeholders (e.g. student destination surveys, employer feedback on how well the assessment outcomes predicted workplace performance).
    13. Key Stages in developing assessment tools • identify and describe the purposes for the assessment • identify the assessment information that can be used as evidence of competence/learning • identify a range of possible methods that might be used to collect assessment information • define the contexts for interpreting assessment information in ways that are meaningful for both assessor and candidate • determine the decision making rules • define procedures for coding and recording assessment information • identify stakeholders in the assessment and define their reporting needs
    14. Essential Characteristics – Assessment Tool. An assessment tool includes the following components: • the context and conditions for the assessment • the tasks to be administered to the candidate • an outline of the evidence to be gathered from the candidate • the evidence criteria used to judge the quality of performance (i.e. the assessment decision making rules) • the administration, recording and reporting requirements
    15. Ideal Characteristics • the context • competency mapping • the information to be provided to the candidate • the evidence to be collected from the candidate • decision making rules • range and conditions • materials/resources required • assessor intervention • reasonable adjustments • validity evidence • reliability evidence • recording requirements • reporting requirements
    16. Where do you start the process?
    17. Training and Assessment Strategy – outlines the overall validation and moderation processes that will be used for the qualification, and includes a description of the types of documentation that will be kept as evidence. Validation and Moderation Schedule – provides a schedule and description of processes for when the various validation and moderation activities will occur: • industry consultation of the TAS • industry consultation of assessment tools • validation activities with other assessors • moderation activities with other assessors • internal moderation of judgements • internal review and internal audit. Record of Industry Consultation (industry consultation of the TAS and of assessment tools) – provides documented evidence of validation activities with relevant industry stakeholders; this needs to be signed by the industry stakeholder involved in the process. Record of Trainer/Assessor Validation and Moderation (validation and moderation activities with other assessors) – provides documented evidence of validation and moderation activities with assessors not involved in the development of the assessment tools. Record of Internal Moderation (internal moderation of judgements) – provides documented evidence of internal moderation where more than one assessor is involved in making judgements for students enrolled in the qualification. Internal Review Report – includes a review of the validation processes and the application of these, with the documented evidence provided (described above). Internal Audit Report – includes a review of the validation processes and the application of these, with documented evidence and the Internal Review Report. All documentation generated from the validation and moderation process leads into the internal audit process, and the review and audit process leads into continuous improvement in relation to the TAS and other documentation and processes.
    18. Industry Validation of TAS
    19. Who are my students? What does competent look like in the current workplace? How will I know someone is competent? What evidence do I need to collect to confidently judge someone as competent? What methods will I use to collect that evidence? What assessment tools will I need to collect that evidence? Will I have enough evidence?
    20. Competency Mapping. Step 1: unpack the unit of competency to identify its critical components. Step 2: for each assessment method, list the tasks to be performed by the candidate. Step 3: for each assessment method, map the critical components of the unit to each assessment task.
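The three mapping steps lend themselves to a simple data structure. A minimal sketch, with hypothetical component and task identifiers, that performs Step 3 and checks whether every critical component is evidenced somewhere (the sufficiency check mentioned in the notes):

```python
# Hypothetical critical components identified in Step 1 (codes are invented).
critical_components = {"PC1.1", "PC1.2", "PC2.1", "KE1"}

# Step 2/3: for each (assessment method, task), record which critical
# components that task provides evidence for.
mapping = {
    ("written test", "task 1"): {"KE1", "PC1.1"},
    ("observation", "task 2"): {"PC1.2"},
}

def uncovered(components, mapping):
    """Return critical components not evidenced by any assessment task."""
    covered = set().union(*mapping.values()) if mapping else set()
    return sorted(components - covered)

print(uncovered(critical_components, mapping))  # ['PC2.1']
```

A non-empty result flags a gap: here the mapping would not yet yield sufficient evidence, so another task (or an extension of an existing one) is needed for PC2.1.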
    21. Level of specificity in mapping – Risk Assessment. Risk can be determined by consideration of: • safety (e.g. potential danger to clients from an incorrect judgement) • purpose and use of the outcomes (e.g. selection purposes) • human capacity (e.g. level of expertise and experience of the assessors) • contextual factors (e.g. changes in technology, workplace processes, legislation, licensing requirements and/or training packages)
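One way to combine the four considerations above into an overall rating is a simple additive score. This is purely an illustrative sketch: the 1 to 3 scale, the thresholds, and the safety override are assumptions for the example, not part of any standard or of the workshop material:

```python
# Hypothetical risk rating: each factor is scored 1 (low) to 3 (high).
def risk_level(safety, purpose_and_use, human_capacity, contextual):
    """Combine the four risk considerations into an overall level."""
    score = safety + purpose_and_use + human_capacity + contextual
    if score >= 10 or safety == 3:  # treat safety-critical units as high risk
        return "high"
    return "medium" if score >= 7 else "low"

# e.g. a unit where an incorrect judgement poses potential danger to clients
print(risk_level(safety=3, purpose_and_use=2, human_capacity=1, contextual=1))  # high
```

The point of the sketch is the shape of the decision, not the numbers: a higher rating would call for more specific competency mapping, and, per the earlier notes, makes moderation more likely to be needed.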
    22. Decision Making Rules. The rules to be used to: • check the quality of the evidence (i.e. the rules of evidence) • judge how well the candidate performed on the task according to the standard expected • interpret evidence from multiple sources to make an overall judgement
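The per-task and overall levels of these rules can be sketched as small functions. Everything here is a hypothetical illustration: the evidence criterion, and the final rule that every task must be satisfactory, are assumptions for the example rather than a prescribed decision model:

```python
# Per-task rule: every key aspect of performance must meet its documented
# evidence criteria (here, criteria are simple predicate functions).
def judge_task(responses, criteria):
    """Return True if every key aspect meets its evidence criterion."""
    return all(criteria[aspect](answer) for aspect, answer in responses.items())

# Final decision making rule (assumed): competent only if evidence from
# every source (e.g. interview and portfolio) is satisfactory.
def overall_judgement(task_results):
    return "competent" if all(task_results.values()) else "not yet competent"

# Illustrative evidence criterion for one open-ended interview question.
criteria = {"describes process": lambda answer: "validation" in answer.lower()}
interview_ok = judge_task({"describes process": "Validation reviews the tool"}, criteria)

print(overall_judgement({"interview": interview_ok, "portfolio": True}))  # competent
```

Documenting rules at both levels, as the notes stress, is what lets different assessors reach the same decision from the same evidence.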
    23. Reasonable Adjustments • This section of the assessment tool should describe the guidelines for making reasonable adjustments to the way in which evidence of performance is gathered, without altering the expected performance standards. • In some cases the assessment tasks will need to be adjusted or alternative tasks developed; a simple example would be responding to questions orally rather than in writing. • However, remember that the adjustments must not alter the expected standard of performance specified within the unit(s) of competency.
    24. Simulated assessment • For the purposes of assessment, a simulated workplace is one in which all of the required skills, performed with respect to the provision of paid services to an employer or the public, can be demonstrated as though the business was actually operating. • In order to be valid and reliable, the simulation must closely resemble what occurs in a real work environment. • The simulated workplace should involve a range of activities that reflect real work experience, and should allow the performance of all of the required skills and demonstration of the required knowledge.
    25. Activities • In your groups, discuss what input employers could provide to develop valid assessment tools and processes to employ a person with a Cert IV in Business. • Note down two or three questions you could ask employers, and how the responses will inform the development or review of assessment tools and/or processes.
    26. Questions
    27. References • Maxwell, G. S. 2001, Moderation of Assessment in Vocational Education and Training, University of Queensland, Australia. • Wheeler, 1993. • www.nqc.tvetaustralia.com.au • http://policy.ballarat.edu.au/tafe/validation/ • http://www.nssc.natese.gov.au/nqc_archive/nqc_publications/publications/assessment • http://www.velgtraining.com/
    28. Thank you. Cecilia Sorensen, Project Manager | Sarina Russo Institute. P: 07 3308 2200 F: 07 3221 5304 E: CSorensen@sri.edu.au W: www.sri.edu.au
