This article discusses curriculum evaluation models that can provide teachers with practical guidance for evaluating curriculum. It summarizes three models: Davis' Process Model, Stake's Countenance Model, and Eisner's Connoisseurship Model. Davis' model outlines the key processes of curriculum evaluation, including delineating the evaluation, collecting information, and utilizing the results. Stake's model focuses on defining intents, collecting observational data, and identifying discrepancies. Eisner's model provides guidance for interpreting and appraising evaluation findings through descriptive, interpretive, and judgment stages to build consensus. The article argues that combining elements of these three models can provide an effective framework to structure teacher-led curriculum evaluation activities.
Managing the Evaluation of Curriculum and Instruction
Curriculum focuses on important knowledge and skills for learning through a design or roadmap, while instruction is how that learning is achieved. Evaluating curriculum and instruction involves asking key questions about whether objectives are being addressed, content is presented sequentially, students are involved in experiences, and students are reacting to content. The evaluation process includes focusing on a component, collecting information, organizing, analyzing, reporting, and using feedback to modify and adjust the curriculum and instruction.
Robert E. Stake developed the responsive evaluation model in 1967 which is based on the concerns of stakeholders being paramount. The evaluator meets with stakeholders to understand their perspectives and the program's purposes. They identify issues to evaluate and design evaluations to collect needed data, often using human observers. The evaluator then organizes the data into themes and portrays the findings in ways that communicate to stakeholders. A key advantage is sensitivity to stakeholder values and involving them, while a potential downside is clients manipulating concerns to avoid exposing weaknesses.
The document discusses curriculum evaluation, including its purpose and key models. It aims to determine whether planned curriculum and programs achieve desired results and how they can be improved. Models described include Tyler's objectives-centered model, Stufflebeam's CIPP model, and Eisner's connoisseurship model. An eclectic approach draws on aspects of different models. Challenges to evaluation include demonstrating measurable benefits and securing quality leadership to support the process.
This document provides an overview of Ralph Tyler's objective model of curriculum development. The model emphasizes consistency between objectives, learning experiences, and outcomes. It includes four main principles: 1) defining learning objectives based on studies of learners and society, 2) selecting learning experiences to meet objectives, 3) organizing experiences for maximum effect, and 4) evaluating and revising ineffective aspects. The model aims to provide a structured approach for examining curriculum elements and their relationships. It focuses on clearly specifying objectives and using evaluation to improve curriculum alignment. The document discusses key terms, strengths, and criticisms of Tyler's influential model.
The document discusses evaluation in language curriculum design. It explains that evaluation aims to determine if a course is successful and where improvements are needed. Evaluation looks at all aspects of curriculum design, including results, planning, teaching quality, learner satisfaction, and cost-effectiveness. Gathering information involves interviews, surveys, and observations. It is important to gain support from those involved and determine who the evaluation is for and what information they need. Formative evaluation can help improve courses by involving teachers and designers and providing ongoing feedback. The results of evaluation are used to strengthen course design and implementation.
This document outlines the process of curriculum evaluation and the differences between process and outcome evaluation. It discusses the six basic steps to conduct an evaluation, which are to assemble a team, prepare, develop a plan, collect information, analyze the information, and prepare a report. Process evaluation helps answer why outcomes were or weren't achieved, while outcome evaluation is both qualitative and quantitative in nature.
This document discusses curriculum evaluation based on several models and frameworks. It describes Stufflebeam's CIPP model which evaluates the context, inputs, processes, and products of a curriculum. The document also provides a suggested six-step plan for conducting curriculum evaluation, including focusing on objectives, collecting information, organizing data, analyzing information, reporting findings, and providing continuous feedback for improvements.
The document discusses a curriculum evaluation class at PED 109. It provides details of the class facilitators and their topics. The topics include what, why, and how to evaluate a curriculum; curriculum evaluation through learning assessment; and planning, implementing, and evaluating curriculum and understanding their connections. It then discusses curriculum evaluation in more depth, including definitions, objectives, reasons for evaluating curriculum, and different models of curriculum evaluation.
Evaluation is the process of determining the value or extent to which goals are being achieved through assessment. It provides information to learners, teachers, and external agencies. Curriculum evaluation is the process of obtaining information to judge the worth of an educational program by determining whether the curriculum as outlined is being carried out in the classroom. Key questions asked in curriculum evaluation include whether objectives are being addressed, whether contents are presented in sequence, whether students are involved in suggested experiences, and how students react. Summative evaluation occurs at the end to determine what has happened, while formative evaluation takes place during implementation to determine what is happening.
The document discusses curriculum evaluation, including its objectives, forms, and approaches. It defines curriculum evaluation as determining the outcomes and effectiveness of an educational program. There are three main forms: formative evaluation improves programs during development, summative evaluation assesses programs after implementation, and diagnostic evaluation identifies causes of learning issues. Approaches include the scientific approach using quantitative data, the humanistic approach capturing qualitative narratives, and bureaucratic, autocratic, and democratic evaluations led by different stakeholders. Curriculum evaluation is an essential part of the development process and provides feedback to improve educational programs over time.
Formative, summative, and diagnostic evaluations are important strategies for curriculum development. Formative evaluation identifies problems early to allow corrective action, while summative evaluation occurs at the end of a project to assess its overall value. Diagnostic evaluation analyzes curriculum when content is updated or changed. Evaluations can be done by insiders like teachers and outsiders like consultants, with each having advantages and disadvantages. The best approach uses a combination of internal and external evaluators to increase the reliability and validity of results while encouraging stakeholder participation and ownership.
This document discusses evaluation of curriculum and the teaching-learning process. It defines evaluation as the systematic process of determining if changes in student behavior are occurring as intended. Evaluation is an ongoing, continuous process that assesses programs, individuals and institutions, and helps make decisions about students, teaching methods, and objectives. It includes both measurement of outputs and qualitative considerations to suggest improvements. The document also discusses the meaning and need for curriculum evaluation, its various methods and techniques, levels of formative and summative evaluation, factors that influence curriculum changes, and outlines a curriculum evaluation plan.
Curriculum evaluation involves systematically assessing the appropriateness and effectiveness of learning experiences to determine if instructional objectives have been achieved, and can be informed by collecting feedback from various sources such as students, teachers, peer groups, and external professional evaluators. The sources of evaluation include surveying students who have completed the program to get their perspective on relevance and job opportunities, self-evaluation to assess objectives and methods, peer groups who may provide candid feedback, and experts in the field who can offer an unbiased assessment.
This document outlines five types of evaluation techniques: formative evaluation, summative evaluation, pay-off evaluation, intrinsic evaluation, and inference. Formative evaluation is done during preliminary trials of an educational program to improve the curriculum. Summative evaluation involves judging a finished educational product and assessing whether it is better than others. Pay-off evaluation examines the effects of a program by comparing pre- and post-test scores. Intrinsic evaluation assesses an educational program based on its objectives and instructional methods. Inference refers to the process of coding observational data.
The document discusses the CIPP model for curriculum evaluation developed by Daniel Stufflebeam. The CIPP model guides evaluators and stakeholders in systematically assessing a curriculum at four stages: context, input, process, and product. Context evaluation involves analyzing needs and goals. Input evaluation considers resources and design. Process evaluation monitors implementation. Product evaluation judges outcomes against anticipated results to determine if the curriculum should continue, be modified, or discontinued. The model helps answer four questions: what should we do, how should we do it, are we doing it as planned, and did the program work.
This document discusses various models and approaches for evaluating curriculum. It begins by defining curriculum evaluation and its purposes. Several evaluation models are then described in detail, including the Bradley Model, Tyler's Objectives Model, Stufflebeam's CIPP Model, Stake's Responsive Model, and Scriven's Consumer Oriented Approach. Common steps in the evaluation process are also outlined, such as identifying stakeholders, issues to examine, appropriate data sources and collection techniques. The overall goal of curriculum evaluation is to assess the strengths and weaknesses of the curriculum to inform necessary improvements or changes.
The document discusses curriculum evaluation and different factors that should be considered. It outlines that a valid curriculum is needed for students to learn necessary workplace skills, even if instructors and students are good. When evaluating a curriculum, multiple factors must be taken into account including the learner, curriculum, instructors, available facilities, and other program aspects. Curriculum evaluation is defined as making value judgements about a curriculum's worth and involves gathering data to decide whether to accept, change, or eliminate the curriculum. Models of curriculum evaluation are useful as they define what should be studied and procedures used to extract important data. Evaluation can be formative during development or summative at the end of a program.
This document discusses different components and approaches to curriculum development:
- Component 3 discusses teaching methods that should stimulate different domains of learning while considering student learning styles and outcomes.
- Component 4 examines curriculum evaluation models and defines evaluation as determining quality, effectiveness, and value, or the extent to which goals and outcomes are met.
- Approaches include behavioral based on goals/objectives, managerial focused on implementation, and systems examining relationships between parts.
- The humanistic approach considers formal/planned and informal/hidden curriculums with the learner at the center.
This document discusses techniques for evaluating curriculum. It begins by defining curriculum evaluation as an attempt to gauge the effectiveness and value of educational projects and work undertaken by students. There are two levels of curriculum evaluation: formative evaluation during curriculum development to provide feedback and influence revisions, and summative evaluation after implementation to appraise the final curriculum. Several techniques are described for evaluating curriculum, including observation, questionnaires, checklists, interviews, workshops/group discussions, and the Delphi technique. The goal of curriculum evaluation is to improve educational goals, materials, and instructional methods over time based on advances in fields and feedback from students, teachers, and other stakeholders.
This document provides an overview of curriculum evaluation, including reasons for evaluation, types of evaluation, and models of evaluation. It also discusses levels and approaches to curriculum improvement. Specifically, it outlines that curriculum evaluation establishes strengths and weaknesses, provides feedback for improvement, and informs strategic decisions. The two main types of evaluation are formative, which provides ongoing feedback, and summative, which measures outcomes. Several models of curriculum evaluation are also presented, including Tyler's objective-centered model, Stufflebeam's CIPP model, Taba's antecedents-transactions-outcomes model, Stake's responsive model, and Kirkpatrick's reaction-learning-behavior-results model.
The document summarizes two models of curriculum evaluation: Tyler's model from 1949 and Stufflebeam's CIPP model from 1971. Tyler's model emphasizes measuring student progress towards instructional objectives. The CIPP model evaluates the context, input, process, and product of a program to facilitate rational decision-making. It assesses needs, available resources, implementation processes, and outcomes to guide decisions about continuing, modifying, or terminating an educational program.
The document discusses evaluation as both a process and a tool for curriculum development. As a process, evaluation follows procedures based on models and frameworks to achieve desired results. As a tool, evaluation helps teachers and implementers judge the worth and merit of programs, innovations, and curricular changes. The results of evaluation, whether as a process or tool, form the basis for improving the curriculum. The document then examines several evaluation models, including Bradley's effectiveness model and Tyler's objectives-centered model, which provide indicators and processes for evaluating curriculum elements and determining strengths and weaknesses.
Ralph Tyler proposed a model for developing curricula that involves 4 steps: 1) defining learning objectives based on student and societal needs, 2) selecting useful learning experiences to meet the objectives, 3) organizing experiences for effective instruction, and 4) evaluating effectiveness and revising areas of weakness. The model emphasizes specifying clear, measurable objectives and evaluating student achievement of those objectives. While widely used, critics argue it can oversimplify curriculum and neglect broader goals.
This document outlines a curriculum evaluation chapter that will teach learners about different models for evaluating curriculum, including the CIPP, Stake's, and Eisner's Connoisseurship models. It will also cover various methods for collecting data on curriculum effectiveness, such as interviews, observations, tests, surveys, content analysis, and portfolios. The chapter aims to explain what curriculum evaluation is, why it's important, and how to apply specific models and instruments to the evaluation process.
The document discusses an illuminative/responsive approach to evaluating an English as a foreign language (EFL) learning support program (LSP) in Greece. It describes a four-step evaluation process: 1) preparing stakeholders, 2) identifying the program setting, 3) sharing, observing, and seeking feedback, and 4) reviewing, reflecting, and remedying issues. The evaluation aims to foster autonomous learning and the involvement of all stakeholders at each step. It is argued that this participatory, formative approach can help programs improve, build ownership among stakeholders, and reduce resistance to evaluation in the Greek educational system.
The document discusses curriculum improvement and the curriculum development process. It covers several key points:
1. It defines curriculum improvement and describes it as the continuous modification and betterment of curriculum throughout the school year based on current students. This can be viewed as curriculum development or curriculum change.
2. It outlines the four phases of curriculum development: planning, content and methods, implementation, and evaluation and reporting. It provides details on the steps and processes involved in each phase.
3. It discusses different approaches to curriculum improvement, including technical/scientific, behavioral/rational, systems-managerial, intellectual/academic, and various non-technical approaches such as humanistic/aesthetic and reconstructionism.
Curriculum evaluation: the assessment of the merit and worth of a curriculum program.
Curriculum evaluation is an attempt to shed light on two questions: Do planned programs, courses, activities, and learning opportunities, as developed and organized, actually produce the desired results and learning outcomes? How can the curriculum offerings best be improved?
Curriculum Evaluation Models: How can the merit and worth of such aspects of curriculum be determined? Evaluation specialists have proposed an array of models, an examination of which can provide useful background for the process of curriculum evaluation.
This document discusses curriculum development and Hilda Taba's model for curriculum design. It defines curriculum as activities designed by teachers and students to achieve educational goals. Curriculum development is the systematic planning of what is taught and learned, as reflected in courses of study. Taba's model involves 7 steps: 1) diagnosing student needs, 2) formulating objectives, 3) selecting content, 4) organizing content, 5) selecting learning experiences, 6) organizing learning experiences, and 7) evaluating. This grass-roots approach places teachers at the center of curriculum design rather than higher authorities.
Evaluation is the process of collecting data on a programme to determine its value or worth, with the aim of deciding whether to adopt, reject, or revise the programme. The public want to know whether the curriculum implemented has achieved its aims and objectives; teachers want to know whether what they are doing in the classroom is effective; and the developer or planner wants to know how to improve the curriculum product.
The document discusses an illuminative/responsive approach to evaluating an English as a foreign language (EFL) learning support program (LSP) in Greece. It describes a 4-step evaluation process: 1) Preparing stakeholders, 2) Identifying the program setting, 3) Sharing, observing, and seeking feedback, and 4) Reviewing, reflecting, and remedying issues. The evaluation aims to foster autonomous learning and involvement of all stakeholders at each step. It is argued that this participatory, formative approach can help programs improve, build ownership among stakeholders, and reduce resistance to evaluation within the Greek educational system.
The document discusses curriculum improvement and the curriculum development process. It covers several key points:
1. It defines curriculum improvement and describes it as the continuous modification and betterment of curriculum throughout the school year based on current students. This can be viewed as curriculum development or curriculum change.
2. It outlines the four phases of curriculum development: planning, content and methods, implementation, and evaluation and reporting. It provides details on the steps and processes involved in each phase.
3. It discusses different approaches to curriculum improvement, including technical/scientific, behavioral/rational, systems-managerial, intellectual/academic, and various non-technical approaches such as humanistic/aesthetic and reconstructionism.
Curriculum evaluation: The assessment of the merit and worth of any program curriculum.
Curriculum evaluation is an attempt to throw light on two questions: Do planned programs, courses, activities, and learning opportunities as developed and organized actually produce desired results/learning outcomes? How can the curriculum offerings best be improved?
Curriculum Evaluation Models: How can the merit and worth of such aspects of curriculum be determined? Evaluation specialists have proposed an array of models, an examination of which can provide useful background for the process of curriculum evaluation.
This document discusses curriculum development and Hilda Taba's model for curriculum design. It defines curriculum as activities designed by teachers and students to achieve educational goals. Curriculum development is the systematic planning of what is taught and learned, as reflected in courses of study. Taba's model involves 7 steps: 1) diagnosing student needs, 2) formulating objectives, 3) selecting content, 4) organizing content, 5) selecting learning experiences, 6) organizing learning experiences, and 7) evaluating. This grass-roots approach places teachers at the center of curriculum design rather than higher authorities.
Evaluation is the process of collecting data on a programme to determine its value or worth with the aim of deciding whether to adopt, reject, or revise the programme. The public want to know whether the curriculum implemented has achieved its aims and objectives; teachers want to know whether what they are doing in the classroom is effective; and the developer or planner wants to know how to improve the curriculum product.
Daniel Stufflebeam was an American education evaluator born in 1936 who developed the CIPP Evaluation Model. The CIPP model evaluates programs through assessing the context, inputs, processes, and products. It involves focusing evaluation goals, outlining an information collection process, analyzing information, and reporting findings. The model also provides for detailed cost analysis of administering the evaluation.
There are several common models for curriculum development that differ in their perspectives and approaches. The Tyler model is a linear, technical-scientific model that follows four basic steps: setting objectives, selecting content, organizing learning experiences, and evaluating outcomes. The Taba model is similar but emphasizes grassroots involvement of teachers. The Wheeler model depicts curriculum development as cyclical rather than linear. Walker's deliberative model describes how curriculum is naturally developed through platform-building, deliberation of alternatives, and consensus-based design.
Is Implementation The Master Of Curriculum V3
The document discusses curriculum implementation and the Australian Qualifications Framework. It notes that Print's 1993 model was the first to devote a phase to implementation. It then discusses the aims of an AQF Council consultation paper from May 2009, including strengthening the AQF and achieving education targets by 2025. The document states that perfecting implementation is the only way to deliver on these targets through an engaging, inclusive, and flexible curriculum.
(Civic and political education 2) murray print, dirk lange (auth.), murray pr...
This document discusses the challenge of developing civic education in schools to prepare students for active citizenship. It argues that civic education needs to go beyond single subjects and involve the entire school community. It proposes an approach called "democratic learning" where students gain experience participating in democratic processes throughout their education. This helps students develop democratic competencies and the ability to address complex global issues. The document outlines 14 theses on how schools can create a learning environment that promotes civic participation and commitment among students. It emphasizes that the school must support democratic learning for citizenship education to be effective.
The document discusses curriculum planning and describes the goal-based model. It outlines several key steps in the curriculum planning process: 1) determining who will make planning decisions, 2) establishing necessary organizational structures, 3) aligning educational goals with curriculum fields, 4) developing a curriculum database for teachers to access, and 5) creating a calendar to keep the planning process on schedule. It emphasizes the importance of conducting needs assessments, providing resources and staff development to ensure successful implementation of new curricula.
Evaluation assessment & 2G curriculum April 26 2016
The document outlines the agenda and activities for a teacher training session on assessment, evaluation, and testing.
The morning session includes brainstorming on teachers' perspectives of testing, assessment, and evaluation. Workshops will cover assessing and evaluating a test using recommendations from the second generation curriculum.
The document also provides definitions and purposes of evaluation, assessment, and testing. Evaluation collects information about courses and programs, assessment collects student information over time to help improvement, and testing measures specific skills at a point in time.
Formative assessment occurs during teaching to evaluate effectiveness, while summative assessment occurs after teaching to evaluate learning. The document discusses various assessment tools and strategies that can be used for the second generation curriculum.
The cyclical curriculum model views the curriculum process as circular rather than fixed and rigid. It is responsive to ongoing needs and requires constant updating. The model emphasizes situational analysis of environmental factors and sees curriculum elements as interrelated. Nicholls and Nicholls' 1976 model represents the cyclical approach well, with curriculum development as a never-ending process that allows educators to continually refine and improve the curriculum over time based on new information and changes.
A presentation for my Ed. D. Degree Program relating to Program Evaluation Models: Developers of the Management-Oriented Evaluation Approach and their Contributions;
How the Management-Oriented Evaluation Approach Has Been Used; Strengths and Limitations of the Management-Oriented Evaluation Approach; Other References, Questions for Discussion
This document provides an overview of curriculum evaluation based on the CIPP model. It defines curriculum evaluation as making judgments about changes needed in students by using information to modify teaching and curriculum. The CIPP model evaluates the context, inputs, processes, and products of a curriculum. Context evaluation analyzes goals and needs. Input evaluation assesses resources and strategies. Process evaluation collects ongoing data on curriculum implementation. Product evaluation determines if goals were achieved by getting feedback from students, graduates, and employers.
Models of curriculum evaluation and application in educational
Curriculum can be defined as the planned and guided learning experiences and intended learning outcomes, formulated through the systematic reconstruction of knowledge and experiences, under the auspices of the school, for the learners’ continuous and willful growth in personal social competence (Tanner & Tanner, 1975)
TSL3143 Topic 2a Models of Curriculum Design
The document discusses several models of curriculum design: Tyler's Objective Model (1949), Taba's Interactive Model (1962), Wheeler's Process Model (1967), and Walker's Naturalistic Model (1971). It provides details on the key aspects of each model, including their advantages and disadvantages. Tyler's model is linear and focuses on objectives. Taba's model is interactive and involves more teacher input. Wheeler's model is cyclical with feedback. Walker's model is descriptive and emphasizes stakeholder consensus. In conclusion, while the models provide useful frameworks, actual curriculum design in practice may vary and draw from multiple approaches.
Walker's deliberative approach emphasizes the process of curriculum development. The ways of proceeding were not predetermined but negotiated and documented as stakeholders worked towards completing the task.
The Wheeler cyclical model of curriculum development views the curriculum process as ongoing and constantly changing. It presents curriculum development as having five interrelated phases that build on each other logically. The cyclical model is responsive to changing needs and incorporates new information flexibly. It also views curriculum elements as interdependent and allows for interaction between elements.
The chapter discusses three models for curriculum development: the Tyler Model, the Taba Model, and the Oliva Model. The Tyler Model is a deductive model that begins with examining societal needs and ends with specifying instructional objectives. The Taba Model uses an inductive approach, starting with creating teaching units and building to an overall design. The Oliva Model is also deductive and provides a process for a school faculty to develop the entire curriculum based on the needs of their students. The models illustrate different approaches to curriculum planning but should be adapted based on the unique needs and context of each situation.
Eisner was a highly respected scholar who published widely since the 1960s on topics of art, education, and curriculum. He identified weaknesses in empirical-analytic educational research and advocated for qualitative research methods. Eisner proposed an artistic approach to curriculum planning that is more convoluted and circuitous than rational linear models. He emphasized considering less defined objectives, deliberation on priorities, and nonlinear learning opportunities to encourage diverse outcomes.
The document discusses different aspects of curriculum including definitions, designs, and models. It defines curriculum as the planned learning experiences and intended outcomes designed by schools. Three common curriculum designs are discussed - subject-centered focusing on content, learner-centered centered on learners, and problem-centered organizing around problems. Four curriculum development models are summarized - Tyler's model originating in 1949 uses objectives, Taba's grassroots model engages teachers, Saylor, Alexander, and Lewis's model specifies goals before design, and Oliva's deductive model allows faculty input.
The document discusses various models of curriculum evaluation. It describes several purposes of curriculum evaluation including providing feedback to learners, determining how well objectives are achieved, and improving the curriculum. Curriculum evaluation can take place at the classroom level, where teachers collect data from tools like tests and observations, and at the school or system level using surveys, focus groups, and standardized tests. The document outlines several prominent models of curriculum evaluation, including Provus' Discrepancy Model, Tyler's objectives-based model, Stufflebeam's CIPP model, Stake's Congruency Model, and Eisner's Educational Connoisseurship qualitative model.
The document reviews 20 models of curriculum evaluation, beginning with a description of Okparaugo Obinna Joseph's presentation on curriculum evaluation models. It then reviews the Educational Connoisseurship and Criticism Model, the Logical Model, Tyler's Model, Stufflebeam's CIPP Model, and several others. For each model, it discusses the purpose, key aspects, merits and demerits, and provides a brief comment. The review provides high-level information on the various curriculum evaluation models addressed in the document.
The document discusses action research and its importance in education. It begins by outlining the objectives and contents of an action research training. It then defines action research and discusses its key characteristics, including being practitioner-based, cyclical, participatory, and aimed at addressing practical problems. The document compares action research to formal research, noting differences in goals, participants, samples, and generalizability. It also outlines types of action research like individual, collaborative, and school-wide research. Finally, it discusses the importance of action research in connecting theory to practice, improving educational practices, and empowering teachers professionally.
Action research is a cyclical process conducted by educators to improve their own practices and understandings. It involves identifying an issue or problem in the classroom, collecting and analyzing data related to the problem, planning and implementing actions for improvement, then reflecting on the results to further refine practices. The goal is to simultaneously solve problems and contribute to knowledge about teaching and learning. Key aspects include its collaborative and reflective nature, with educators studying their own classrooms to enact positive changes.
The document summarizes the development and purpose of the Revised Two Factor Study Process Questionnaire (R-SPQ-2F). It was created to assess students' deep and surface learning approaches using fewer items than previous versions. The revised questionnaire was tested on students in Hong Kong and showed acceptable reliability and validity. The goal was to create a simple tool teachers could use to evaluate their own teaching and students' learning approaches.
This document discusses action research and its key features. It begins with a small story of teachers sharing classroom experiences. It then discusses Kurt Lewin's advocacy for action research in 1946 and its cyclical, iterative approach. The key features of action research are discussed, including its close relationship to action and knowledge acquisition, its collaborative nature, and its cycle of planning, acting, observing and reflecting. Types of action research and its characteristics are also outlined. Throughout, the document provides examples and explanations of action research in an educational context.
Running head: EDUCATIONAL RESEARCH
Translating Educational Research into Practice
Problem
For a long time, education research has failed to influence classroom instructional practices and educational policies. Education researchers argue that their primary work is to investigate the various aspects of learning and teaching, then present their findings at conferences and publish them in educational journals; their busy schedules do not allow them to train practitioners (Powney & Watts, 2018). Practitioners, on the other hand, are busy concentrating on their own teaching and do not have time to review new literature. This raises the question of who is responsible for the gap. In reality, there should be a connection between the two, and both parties should play a role in bridging it.
Practices, Policies, and Procedures That Have Led to the Problem
There are various reasons for this persistent gap between the teaching practices that teachers use and the guidance that educational research provides, but three stand out: the trustworthiness issue, the teacher-preparation issue, and the research-practice issue. The trustworthiness issue arises because much of the published educational research disseminated to teachers, policymakers, and researchers is of uneven quality. Research is incredibly demanding, and it is not always possible to choose the most appropriate methodological approach. It is essential that the methodology is applied rigorously, whether the research is qualitative or quantitative (Suter, 2012).
Teachers, for their part, want to provide a quality education to their students. When they turn to research to aid their teaching, their main expectation is that the information they get is trustworthy; if it is not, both the teacher and the student will fail. Teachers also have to be prepared: the applicability and relevance of a research finding will be minimal if administrators and teachers are unable to access the data, cannot develop strategies for implementing the research findings, or do not understand or cannot interpret the findings in a meaningful and accurate manner (Fenwick, Edwards, & Sawchuk, 2012).
While teacher preparation and research trustworthiness play significant roles in determining the extent to which research informs instructional practices and educational policies, a more fundamental problem is our inability to understand and identify the environments in which research findings can be applied in complex school systems and classrooms. While specific strategies, instructional models, and approaches may be useful in a controlled setting, there is scant information about the factors that impede or foster the application of these modalities across varying contexts and among diverse teachers and student populations.
Formative assessment and contingency in the regulation of learning processes ...
This document discusses the origins and definitions of formative assessment. It begins by exploring different perspectives on the relationship between instruction and learning. It then reviews the origins of formative assessment, tracing it back to Michael Scriven in 1967 and Benjamin Bloom in 1969. The document examines debates around defining formative assessment, looking at perspectives from Black and Wiliam, Sadler, the Assessment Reform Group, and others. It proposes a definition of formative assessment as any assessment that is used to make decisions about next steps in instruction that are likely to improve learning outcomes compared to decisions made without the assessment evidence.
Conversations with the Mathematics Curriculum: Testing and Teacher Development
This paper addresses the question: how do mathematics teachers make meaning from curriculum statements in relation to their teaching practices. We report on a teacher development activity in which teachers mapped test items from an international test against the national curriculum statement in mathematics. About 50 mathematics teachers across Grades 3-9 worked in small groups with a graduate student or staff member as a group leader. Drawing on focus group interviews with the teachers and the group leaders we show that the activity focused the teachers on the relationships between the intended curriculum and their teaching, i.e. the enacted curriculum, in four areas: content coverage; cognitive challenge; developing meaning for the assessment standards; and sequence and progression. We argue that the activity illuminates ways in which international tests can provide a medium for teacher growth rather than teacher denigration and alienation.
1. This chapter outlines the scope of syllabus design and its relationship to curriculum development. It discusses differing views on defining syllabus and curriculum. Some see syllabus as solely concerning content selection, while others see it as also specifying learning tasks.
2. A general curriculum model is presented that looks at curriculum from the perspectives of planning, implementation in the classroom, assessment/evaluation, and institutional management. For effective language programs, all elements of the curriculum model should be integrated.
3. The role of the classroom teacher in syllabus design is examined. While some teachers design their own syllabuses, many act as "consumers" of externally-designed syllabuses. Teachers may have primary responsibility for implementation
Towards a framework of teaching effectiveness in public
This document proposes a mixed methods study to develop a framework for teaching effectiveness in public schools. It aims to (1) describe and analyze factors that influence effective teaching practices through observations, appraisals, field notes and perceptions; (2) explore how classroom practices vary across school contexts, career phases and ages; (3) analyze relationships between classroom practices and student needs, school context, teacher career and age; and (4) provide implications for stakeholders. The study will use surveys, interviews, and classroom observations to understand influences on teaching effectiveness. Results will inform educational leaders and improve teaching quality.
Towards a framework of teaching effectiveness in public
This document presents a research proposal that aims to develop a framework for teaching effectiveness in public schools using a mixed methods approach. It will examine the factors that influence effective teaching practices, how these may vary across different school contexts, teacher ages and experience levels. The study will collect both quantitative and qualitative data through surveys, interviews, and classroom observations of teachers. It seeks to understand the relationships between teaching practices, teacher backgrounds, school environments and student outcomes. The results intend to inform educational policy and leadership on evidence-based strategies to promote quality education and support teacher development.
What philosophical assumptions drive the teacher/teaching standards movement ...
What philosophical assumptions drive the teacher/teaching standards movement today? Are standards dangerous?
Week 4 - Reading highlights
Falk, B., 2002 and Tuinamuana, K., 2011
This document discusses teacher evaluation systems and the Charlotte Danielson Framework for Teaching. It outlines that effective systems ensure quality teaching and promote professional learning. The Danielson Framework divides teaching into four domains with multiple elements and provides performance levels to guide teacher development. An effective system requires a clear definition of teaching quality, fair methods to evaluate teaching, and trained evaluators to make consistent judgments and support growth.
This document discusses Charlotte Danielson's framework for teacher evaluation, which includes four domains of teaching practice and specific criteria within each domain. It emphasizes that teacher evaluation systems should ensure quality teaching and promote professional learning. The framework provides common language and structures professional conversations to guide teacher reflection and growth. Evaluators must be trained to make consistent judgments based on evidence from classroom observations, teacher artifacts, and other sources.
This document provides definitions for key terms related to curriculum mapping. It groups the terms into topics such as types of maps, map elements, standards, alignments, and review processes. Some key points:
- Curriculum mapping is defined as an ongoing process involving teacher-designed operational and planned learning curriculum, collaborative inquiry, and data-driven decision making.
- Different types of maps include curriculum maps, diary maps, projected maps, and consensus maps.
- Map elements include units, content, skills, assessments, resources, and standards.
- Alignments refer to connections within and between maps at different grade levels.
- A seven-step review process is outlined for analyzing maps and other curriculum data.
Action research is an approach to collecting and interpreting data that involves repeated cycles of planning, action, observation, and reflection. It is collaborative, aimed at improving practice, and involves practitioners carrying out research in their own contexts. Kurt Lewin described the process of action research as a cycle of planning, action, observing, and reflecting. Classroom research refers to research conducted in language classrooms, often focusing on teacher and student interactions, to induce teacher learning.
A review of classroom observation techniques used in postsecondary settings.pdf
This document reviews different techniques for observing college classrooms. It discusses two main uses of observation - for faculty professional development and for teaching assessment/evaluation. It describes key characteristics that distinguish observation protocols, such as whether they evaluate quality or just describe teaching, and their level of detail. It provides examples of commonly used protocols, including the Reformed Teaching Observation Protocol (RTOP) and UTeach Observation Protocol (UTOP), both of which are criterion-referenced and evaluative in nature. The document concludes by discussing areas for future research on classroom observation techniques in postsecondary education.
Australian Journal of Teacher Education
Volume 13
Issue 1 Vol.13 (Issues 1 & 2 combined) Article 1
1988
Curriculum Evaluation Models: Practical Applications for Teachers
John D. Woods
Nedlands Campus, W.A.C.A.E.
Recommended Citation
Woods, J. D. (1988). Curriculum Evaluation Models: Practical Applications for Teachers. Australian Journal of Teacher Education, 13(1).
http://dx.doi.org/10.14221/ajte.1988v13n2.1
This Journal Article is posted at Research Online.
http://ro.ecu.edu.au/ajte/vol13/iss1/1
CURRICULUM EVALUATION MODELS:
PRACTICAL APPLICATIONS FOR TEACHERS
John D. Woods
Senior Lecturer, Education
Nedlands Campus, W.A.C.A.E.
INTRODUCTION
The scope and focus of evaluation generally, and of curriculum evaluation in particular, has changed markedly over recent times. With the move towards school-based curriculum development, attention has shifted away from measurement and testing alone. More emphasis is now being placed upon a growing number of facets of curriculum development, reflecting the need to collect information and make judgements about all aspects of curriculum activities from planning to implementation. While curriculum theorists and some administrators have realized the significance of this shift, many teachers still appear to feel that curriculum evaluation activities are something which do not directly concern them.
However, the general public, as well as the authorities, expect teachers to know about the effectiveness of their teaching process and programmes. Given the range of alternatives possible, we need to be confident that our choices are valid. If we are to make adjustments in the future we must know why we are changing and the direction in which change should proceed. This emphasizes the fact that evaluation is not something which takes place after a decision has been made. Rather, it is the basis for proposing change, and its value lies in its ability to help clarify curriculum issues and to enable teachers, as well as schools and systems, to make informed decisions.
Given the need, why is it, then, that teachers may not become as involved in evaluation as we might like? Hunkins (1980, p 297) suggests that it might be because the teacher has to be:

"the doer, the person who reflects on his own behaviour during the planning and implementation phases;
the observer of the students and the resources used during the implementation;
the judge who receives and interprets the data collected; and
the actor who acts upon and makes informed decisions based upon the data collected."
Expressed this way it does appear that this task may simply be too onerous
when forced to compete against all other activities in which teachers must
engage. Seiffert (1986, p. 37) expands on this point by noting that
"... there are limitations to the amount and nature of the evaluative role
that a teacher may take. First, a teacher's life is a busy one, and time
constraints will limit the amount of effort that most teachers may put
into evaluation. Second, because a teacher is a teacher, and thus a
significant person in the learning process, her roles as evaluator will
be limited. It is possible to be too closely involved in a situation,
politically and emotionally, to ask questions that might challenge one's
own interests."
The problem cannot be ignored, however, as it is only through the processes
of marshalling information and mounting arguments that interested individuals
are able to participate in critical debate about curriculum matters and issues.
What can be done? The solution would seem to be to share the tasks. In this
way, co-operative, group efforts can spread the load and reduce the pressure
on individual teachers. As our approach to teaching opens up we are able to
identify more and more examples of teachers working together to plan and
deliver the curriculum - it involves only one further small step to allow
evaluation also to benefit from this sharing approach.
Co-operation does bring problems, however! A classroom is a very complex
place and it is impossible to evaluate everything. Even with the best intentions
two or more people evaluating a lesson may see different things. The task is
to enable people to look through the same eyes. We need to be able to agree
on what is to be observed, when, by whom and for what purpose. We then
need to be able to discuss our findings in such a way that individuals do not
feel threatened, so that positive and constructive evaluation can be made. Unless
structures are established to facilitate interaction and free-flowing discussion
throughout the evaluation exercise, there is a danger that the benefits of
evaluation will be eroded by unresolved conflict.
There is no simple way of ensuring that such agreement will be reached. There
does exist, however, a range of curriculum models which can provide a useful
structure for teachers wishing to make more effective their role as curriculum
evaluators. Three that have been selected for special attention in this paper
include Davis' Process Model (1981), Stake's Countenance Model (1967) and
Eisner's Connoisseurship Model (1979).
CURRICULUM EVALUATION MODELS
Davis' Process Model
This model provides a simple overview of the processes involved in curriculum
evaluation. It is suitable for use by either individual teachers or teams of teachers.
The first stage of this model involves what Davis (1981, p. 49) calls the delineating
sub-process. No investigation of classrooms or curricula will ever be able to
capture the total picture, so decisions must be made which structure and focus
the evaluation. Evaluators should begin by asking for whom the evaluation is
intended and what the audience wants to find out. Examples of prospective
audiences might include:
*	an individual teacher
*	a group of teachers (year level, subject department)
*	senior administrators (senior masters/mistresses, deputies, principals)
*	Ministry of Education officials
*	parent and community groups
*	commercial organizations
The type of information will also vary and could include:
*	teacher attitudes
*	student performance
*	community perceptions
*	organizational structures
*	curriculum performance
*	strategy selection
Such decisions need to be made in consultation. The types of questions which
need to be asked have been comprehensively documented by Hughes et al.
as part of their work on the Teachers as Evaluators Project (CDC 1982, pp. 39-42).
Some thirteen sub-groups were identified, ranging from questions related to
purposes, through those involving roles and audiences, to those focussing
attention upon judgements and, finally, outcomes. Each of these sub-groups
contains further dimensions which provide a comprehensive structure from
which the evaluator may select a framework of questions to define and delineate
the particular task in hand.
Once the basis of the investigation has been determined, the task of collecting
information, described by Davis (1981, p. 51) as the obtaining sub-process, can
commence. At this stage it would appear to be appropriate to enlarge upon
the steps identified by Davis to clarify some of the factors which impact upon
this aspect of effective evaluation. Thus Stake's Countenance Model may be
useful in describing a procedure which groups may need to follow when
involved in a team approach to evaluation.
Stake's Countenance Model
This model can be readily inserted into the Davis model at this stage. The
rationale referred to by Stake allows for the influence of presage factors which
Davis subsumes as part of his delineating sub-process. The greatest strength
of Stake's model is the manner in which intents and actions are defined and
observed, together with standards and judgements.
Stake believes that the starting-off point is to determine the "intents" of a
particular curriculum. These need to be described in terms of antecedents,
transactions and outcomes. Antecedent intents relate to any conditions prior
to the commencement of a curriculum and might include both students' and
teachers' backgrounds and interests. Transaction intents are the procedures and
events which it is expected will transpire as the curriculum unfolds. They take
place in the classroom or teaching/learning environment. Outcome intents are
the intended student outcomes in terms of achievements, together with the
anticipated effects upon teachers, administrators and other parties.
Prior to any data collection, those involved in the performance and those
involved in the evaluation must meet to establish a common frame of reference
with respect to the three sets of intents. Not only does this clarify the purpose
of the evaluation but it also allows for checks of what Stake refers to as logical
consistencies between the intended antecedents, transactions and outcomes.
In a similar fashion, the intended standards which will be used to determine
the appropriateness of the curriculum need also to be discussed and agreed
upon. Again, logical consistency between the various elements can be monitored
at this stage.
Once agreement has been reached, the next step involves collecting observational
data about the dynamics of a particular curriculum. As well as informal
observations, Stake suggests that all kinds of empirical data collection should
be employed, including instruments such as questionnaires and psychometric
tests. Such data need to be collected to determine the extent of discrepancies
between intents and observations, standards and judgements. If discrepancies
do appear, they may be either discrepancies of empirical contingency (i.e.
between antecedents, transactions and outcomes) or discrepancies of
congruence (i.e. between intents and observations or between standards and
judgements).
Completion of the data collection activities leads to the third phase of the Davis
model, the providing sub-process. At this stage evaluation and interpretation
of the data need to be undertaken. This is almost inevitably a crucial time
in any curriculum evaluation, as the performers and the evaluators reassemble
to discuss the information which has been collected. Just as the Stake model
fills out the earlier process, so too Eisner's Connoisseurship Model appears to
be most appropriate at this juncture. In particular, it is the second component
of his model, which focuses upon what Eisner calls the 'art of disclosure', which
has greatest significance.
Eisner's Connoisseurship Model
In the post-mortems which follow data collection it is essential that rational
and unemotional discussion is allowed to take place. The process described
in this paper requires colleagues to collect data about each other and to submit
themselves to self-reflective activities. As Marsh and Stafford (1984, p. 70) point
out:
"it will undoubtedly lead to data being presented which shows that
some discussion segments were not very productive, and that arguments
enunciated by some colleagues were superficial, critical or downright
fallacious! Teachers in a planning group have to be sufficiently
empathetic towards each other to accept candid, but positive criticism."
The three stages identified as Eisner's art of disclosure (1979, pp. 202-213) would
appear to aid in the development of such empathy. Eisner begins by suggesting
that first discussions should simply involve a description of what took place.
This is the least threatening type of evaluation, as few, if any, judgements are
being made at this stage. The intention here is simply to get all parties to agree
in order to proceed to stage two.
In stage two particular aspects of the curriculum may be singled out for further
attention. Having agreed that certain events took place the task is now to explain
and interpret why these events occurred. As different theories may be used
to assign meaning to these events, it is again important that a consensus is
reached. Such consensus will be easier, Eisner argues, if we have previously
reached agreement at the level of description. While interpretation is potentially
more threatening, the constant emphasis upon first establishing agreement before
moving into new areas underpins the way in which this aspect of the evaluation
should be conducted.
The final stage in the process of disclosure is that of appraisal. This is where
value judgements will be made, and again constitutes an area in which
individuals may perceive themselves as under attack. Such recommendations
must stem from the evaluation exercise, however, hopefully in the form of
consensus statements from both performers and evaluators. Eisner believes that
the process of moving from the least threatening situation through to the final
stage by securing agreement throughout all stages is the one most likely to lead
to success.
Davis's last sub-process is that of utilization. The way in which this will be
done will be determined by the particular audience for whom the evaluation
is intended. For an individual teacher it may simply involve modifying lesson
plans or programmes. For groups of teachers it may involve making wider
decisions about the type and sequence of units which they are prepared to offer.
Administrators may relate the evaluation to changes in school policy, while for
commercial organizations the whole process may be viewed as an exercise in
market research. In any or all of these instances formal reports may be written,
but at all levels some form of written summary should be made. What may
be a conclusion at this point in an evaluation of a curriculum will inevitably
provide presage material for ongoing investigations.
CONCLUSION
This paper has attempted to highlight a number of factors focussing on the
importance of evaluation in curriculum management. It has attempted to avoid
the tendency, noted by numerous authors, for teachers to view curriculum
evaluation as a spectator sport. By the same token it has recognized that
individual teachers can only engage in a limited range of evaluation activities
if left to their own devices. The opportunity for collective evaluation by groups
of teachers appears to be on the increase, in terms of both logistics and desires.
The effectiveness of cooperative evaluation may be significantly reduced,
however, unless we remain aware that staff require guidance in developing group
skills. Some of these skills entail understanding the behaviour and motives of
others, together with a willingness to adapt one's own behaviour to the needs
of the group. Most importantly though, effective evaluation requires that the
task is undertaken with a clear purpose and shared understanding of what is
involved.
By welding together the essential elements of the three models described in
this paper a workable blueprint for evaluation emerges. This blueprint would
seem to provide the framework around which the purposes and understandings
referred to above can be built. While difficulties involving consensus may still
emerge, the use of the various models described in the paper should provide
structure and encouragement for those engaging in what Marsh and Stafford
(1984, p. 52) see as 'arguably the major component of the curriculum
decision-making process.'
LIST OF REFERENCES
Curriculum Development Centre (1982), Curriculum Evaluation: How It Can Be Done, Canberra, CDC.
Davis, E. (1980), Teachers as Curriculum Evaluators, Sydney, George Allen and Unwin.
Eisner, E. (1979), The Educational Imagination, New York, Macmillan Publishing Company.
Hunkins, F. (1980), Curriculum Development: Programme Improvement, Columbus, Ohio, Merrill.
Marsh, C. & Stafford, K. (1984), Curriculum: Australian Practices and Issues, Sydney, McGraw-Hill.
Seiffert, M. (1986), "Outsiders Helping Teachers", Curriculum Perspectives, 6(2), 37-40.
Stake, R. (1967), "The Countenance of Educational Evaluation", Teachers College Record, 68(7).