This presentation sets out a systematic and comprehensive formative evaluation plan to investigate the implementation of Social Studies Education for Democratic Citizenship (SSEDC) in its mature stage. The lead evaluator will select a team to guide and conduct key actions throughout the evaluation process. The plan begins with a description of the Grades K-6 program, followed by the theoretical framework, including the research questions that will guide the project over a 12-week period. The methodology is a mixed-methods survey design, using multiple methods to collect quantitative and qualitative data. The sampled target group will include various stakeholders in the school community, including the implementers and others as the need arises. Content and descriptive analyses are the suggested methods for extracting themes and concepts and highlighting possible findings influenced by (a) teachers' understanding of SSEDC goals; (b) the methods teachers use; and (c) the problems teachers experience during implementation. The evidence will form the basis for findings and conclusions, and for recommending strategies to improve SSEDC. Finally, the evaluation team will put in place measures to promote accurate results and efficient reporting procedures that are respected by the internal stakeholders: the designers and implementers.
This presentation gives a vivid description of the basics of doing a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA) and a practical example from the CLICS project. It also includes the CDC framework for program evaluation.
This gives information about programme evaluation: planning of evaluation, its requirements and purpose, the steps involved, uses of evaluation, stakeholders and their roles, finding and analysing results, standards of effective evaluation, and utilisation of findings.
Difference between monitoring and evaluation, by Doreen Ty
Monitoring involves tracking project performance and progress toward goals during implementation to ensure accountability. It answers whether things are being done right and allows for timely management decisions. Evaluation assesses efficiency, impact and relevance after completion to judge the overall merits and determine if the right things were done. Both aim to improve projects, but monitoring focuses on day-to-day management during implementation while evaluation provides longer-term perspective at critical points like midway or after completion.
A series of modules on project cycle, planning and the logical framework, aimed at team leaders of international NGOs in developing countries.
Part 7 of 11.
There are two handouts to go with this module, Population Indicators, and a Logframe with blanks. http://www.slideshare.net/Makewa/population-indicators-handout and http://www.slideshare.net/Makewa/exercise-watsan-logframe-with-blanks
The document discusses monitoring and evaluation (M&E) of health programs, defining monitoring as the routine collection of data to track progress towards objectives, while evaluation assesses the impact of a program by measuring outcomes at baseline and endline using a control group. It provides guidance on developing M&E plans, including describing programs and expected outcomes, identifying indicators, data collection sources and schedules, and disseminating findings to inform decision-making.
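The baseline/endline comparison against a control group described above can be sketched as a simple difference-in-differences calculation. This is a minimal sketch: the function name and the example figures are illustrative assumptions, not taken from the document.

```python
# Hypothetical sketch: estimating program impact from baseline and endline
# measurements in a program group vs. a control group, using a simple
# difference-in-differences calculation. All figures are illustrative.

def impact_estimate(program_baseline, program_endline,
                    control_baseline, control_endline):
    """Change in the program group minus change in the control group."""
    program_change = program_endline - program_baseline
    control_change = control_endline - control_baseline
    return program_change - control_change

# Illustrative example: an outcome rate (%) in program vs. control areas.
impact = impact_estimate(program_baseline=30, program_endline=45,
                         control_baseline=31, control_endline=36)
print(impact)  # 10 percentage points beyond the control-group trend
```

Subtracting the control group's change removes background trends that would have occurred without the program, which is why the endline value alone is not treated as the impact.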
Program Evaluation: Forms and Approaches, by Helen A. Casimiro
This document discusses different forms and approaches to program evaluation. It describes five forms of evaluation: 1) Proactive Evaluation which occurs before program design to synthesize knowledge for decisions, 2) Clarificative Evaluation which occurs early in a program to document essential dimensions, 3) Participatory/Interactive Evaluation which occurs during delivery to involve stakeholders, 4) Monitoring Evaluation which occurs over the life of an established program to check progress, and 5) Impact Evaluation which assesses the effects of a settled program. It also outlines several evaluation approaches including behavioral objectives, four-level training outcomes, responsive, goal-free, and utilization-focused evaluations.
This document discusses program evaluation in public health. It begins by defining key terms like program, evaluation, and monitoring. It describes the need for evaluation to improve health programs and allocate resources. The types of evaluation include formative, process, outcome, and economic evaluations. Steps of evaluation involve engaging stakeholders, describing the program, focusing the design, gathering evidence, justifying conclusions, and ensuring use. Frameworks for public health evaluation include CDC's 30 standards across utility, feasibility, propriety and accuracy.
This document provides an introduction to monitoring and evaluation (M&E) plans. It discusses what an M&E plan is, how it relates to a logic model, and how it can contribute to a program's success. An M&E plan describes a program's approach to implementing M&E activities, including what data will be collected, how and when data collection will occur, and who is responsible. It helps programs measure progress toward objectives and determine if desired results were achieved. The document also provides a template of components to include in an M&E plan and discusses how the complexity of M&E plans has increased over time with different requirements from organizations like USAID, CDC, and GAC. It emphasizes involving relevant technical staff in developing the plan.
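As a rough illustration of the components such a template might capture for each indicator, here is a hypothetical sketch; the field names and example values are assumptions, not drawn from any real M&E plan.

```python
# Hypothetical sketch of the per-indicator entries an M&E plan template
# might capture: what is collected, how, when, and by whom.
# Indicator names and values below are illustrative only.

def me_plan_entry(indicator, data_source, frequency, responsible):
    return {
        "indicator": indicator,          # what data will be collected
        "data_source": data_source,      # how it will be collected
        "frequency": frequency,          # when collection occurs
        "responsible": responsible,      # who is responsible
    }

plan = [
    me_plan_entry("number of training sessions held",
                  "activity registers", "monthly", "field officers"),
    me_plan_entry("% of clients reporting improved access",
                  "client survey", "baseline and endline", "M&E officer"),
]
print(len(plan))  # 2
```

Laying the plan out this way makes gaps visible early, for example an indicator with no assigned data source or responsible person.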
The document discusses the Health Promoting Schools scheme, which aims to promote strategic and holistic approaches to developing health and wellbeing across the whole school community. It outlines six key stages in the process, including identifying priority areas, developing a school mission statement and health action plan, and ensuring policies support pupils' emotional, physical and social needs. The scheme also focuses on curriculum and learning, parental and community involvement, staff wellbeing, and accessing specialist support.
The document discusses the logical framework approach for project planning, which involves identifying objectives, expected results and activities, and defining objectively verifiable indicators and sources to measure progress. The logical framework matrix lays out the intervention logic in a table format with objectives in the first column and their corresponding indicators, sources of verification, and assumptions in subsequent columns. The process involves defining measurable indicators and data sources for each level of the project from overall objectives down to activities.
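The matrix layout described above can be sketched as a small data structure. The four levels follow the text; the field names and the example row are illustrative assumptions, not content from the document.

```python
# Hypothetical sketch of a logical framework matrix: rows are levels of
# the intervention logic, and each row carries its indicators, sources
# of verification, and assumptions. The example row is illustrative.

LOGFRAME_LEVELS = ["overall objective", "purpose", "results", "activities"]

def logframe_row(level, intervention_logic, indicators,
                 sources_of_verification, assumptions):
    if level not in LOGFRAME_LEVELS:
        raise ValueError(f"unknown logframe level: {level}")
    return {
        "level": level,
        "intervention_logic": intervention_logic,
        "indicators": indicators,            # objectively verifiable
        "sources_of_verification": sources_of_verification,
        "assumptions": assumptions,
    }

matrix = [
    logframe_row("purpose",
                 "Improved access to safe drinking water",
                 ["% of households within 500 m of a safe source"],
                 ["household survey at baseline and endline"],
                 ["water committees remain active"]),
]
print(matrix[0]["level"])  # purpose
```

Requiring an indicator and a verification source for every level is what makes progress at each step of the intervention logic measurable rather than asserted.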
What is PRECEDE/PROCEED?
PRECEDE/PROCEED is a community-oriented, participatory model for creating successful community health promotion interventions.
Monitoring is the continuous collection of data and information on specified indicators to assess the implementation of a development intervention in relation to activity schedules and expenditure of allocated funds, and progress and achievements in relation to its intended outcome.
Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.
The purpose of community diagnosis is to define existing problems, determine available resources and set priorities for planning, implementing and evaluating health action, by and for the community.
This document contains lecture notes on Health Service Management for second year public health students. It covers the following topics over two lecture days:
Day 1 topics include definitions of management and health management, the historical development of management, the differences between management and administration, management functions including planning, organizing, staffing, directing, and controlling.
Day 2 topics include management and the external environment, the roles and types of managers including first line, middle, and top managers, necessary management skills at different levels, and key management concepts and principles such as effectiveness, efficiency, and management by objectives.
The document provides an overview of important concepts in health service management through detailed lecture outlines intended to educate students in this subject area.
Evaluation plays a large role in teaching. Most faculty members have not undergone any pre-service training on teaching and learning, and only some attempt in-service programs. Institutes could offer more in-service courses to improve the competencies of their faculty members.
Monitoring involves systematically collecting information about activities, products, services, users, and external factors affecting an organization. Evaluation involves making judgements about the value and effectiveness of an organization. Monitoring and evaluation are important for organizations to learn and improve, and to be accountable. They involve planning what to monitor and evaluate, collecting monitoring data, conducting evaluations, and using findings to learn and improve.
Monitoring and Evaluation for Project Management, by Muthuraj K
Monitoring and evaluation (M&E) is a set of techniques used in project management to establish controls and ensure a project stays on track to achieve its objectives. Monitoring involves systematically collecting, analyzing, and using information for management decisions and control. It provides information to identify and solve problems and assess progress. Evaluation determines the effectiveness, efficiency, relevance, impact, and sustainability of a project. Both monitoring and evaluation are important for project management and should be integrated throughout the project cycle.
This document provides an overview of monitoring and evaluation (M&E) for programs and interventions. It discusses what M&E is, the differences between monitoring and evaluation, why M&E is important, how to develop an M&E plan, and key components of an M&E plan. Monitoring involves routine data collection to track progress towards objectives, while evaluation assesses overall impact by comparing outcomes between program and non-program groups. Developing a strong M&E plan from the beginning is essential to demonstrate accountability and guide effective implementation.
Monitoring and evaluation are important for e-governance projects to track their outputs and outcomes. Monitoring relates to tracking project progress and deliverables against the project plan. Evaluation assesses achievement of objectives and provides recommendations. Outputs are tangible deliverables like processes, systems, and infrastructure. Outcomes are intended results like increased efficiency and quality services. A monitoring and evaluation framework should define indicators to measure outputs and outcomes. This allows evaluating project performance and assessing progress toward business goals.
Monitoring and Evaluation of Health Services, by Nayyar Kazmi
This document provides an overview of monitoring and evaluation (M&E) of health services. It discusses the key differences between monitoring and evaluation, and explains that M&E is important to assess whether health programs and services are achieving their goals and objectives. The document also outlines the main components and steps involved in conducting evaluations, including developing indicators, collecting and analyzing data, reporting findings, and implementing recommendations.
M&E faces several challenges that can prevent it from being an effective management tool. Some key challenges include: 1) M&E is often treated in isolation rather than being used to improve project excellence. 2) Non-M&E stakeholders want to see everything in a logframe, whereas logframes are meant to present key issues, not all details. 3) There is a lack of clarity around M&E terminology for non-M&E stakeholders and inconsistencies between documents. 4) M&E data is often not used for improvements due to a lack of commitment to action.
Monitoring involves continuous assessment of project implementation to provide feedback and identify successes and problems. It focuses on schedules, inputs, and services. Evaluation assesses outcomes, impacts, effectiveness, and sustainability. The document discusses the importance of monitoring and evaluation for improving decision-making, achieving outcomes, and organizational learning. It provides definitions and comparisons of monitoring and evaluation. Participatory approaches are emphasized to empower stakeholders. Clear objectives and indicators are needed to measure progress.
Prototype for health education program on utilization of family planning, by Mohammad Aslam Shaiekh
1) The document outlines a proposed health education program on family planning services in Rupa Rural Municipality of Kaski District, Nepal.
2) The program aims to increase utilization of temporary and permanent family planning methods through a short lecture covering various family planning options, their benefits and limitations.
3) A 60-minute session will be conducted for 30 reproductive age women, using group discussions, presentations, demonstrations and IEC materials. Effectiveness will be evaluated based on knowledge gained, increased utilization of services, and improved health outcomes over time.
Impact evaluation is used to determine the effectiveness of programs by examining outcomes and determining if goals were achieved. It typically occurs retrospectively on mature programs and uses approaches like objectives-based, needs-based, or process-outcome evaluations to establish what works and why by measuring outcomes rather than just outputs. The major concerns are determining if the program was implemented as planned and what benefits were achieved for participants.
The document discusses the logical framework approach (LFA), a systematic planning procedure used for project cycle management. It was developed in the 1960s by organizations like USAID and GTZ to improve development project planning and monitoring. The key aspects of the LFA include: (1) developing a hierarchy of goals, purposes, outputs and activities with clear cause-effect relationships; (2) specifying objectively verifiable indicators and means of verification for measuring progress and success; and (3) identifying important assumptions and risks outside the project's control that could affect success. The logical framework matrix visually captures these elements to facilitate participatory planning, implementation, monitoring and evaluation of a project.
Any humanitarian or service project begins by understanding a community's needs. This crucial first step identifies your beneficiaries' needs as well as the natural assets that will help you address them. We will give you the knowledge and resources to involve community members, inventory assets, build relationships with local leaders, and more. Learn how to maximize your project's impact by deepening your understanding of the communities you serve.
Moderator: Victor Barnes, Director of Programs and
This reflection discusses the outcomes Jennifer envisioned and realized from her Instructional Leadership course. While she initially expected to learn about best instructional practices, the course focused solely on technology. Through readings, her knowledge of new technologies grew and she found relevance for her role as an instructional coach. However, excessive discussion board requirements took away from quality learning. Jennifer also learned about using blogs and Google Docs to enhance communication and shares ideas for how teachers and principals could use blogs to engage stakeholders.
This professional development plan outlines strategies to:
1) Provide training to teachers on innovative technology like SMARTboards to enhance classroom instruction.
2) Increase teacher participation in online courses and trainings through the use of Eduphoria and Blackboard.
3) Develop teachers' skills in using data management, online curriculum, and other technology through hands-on professional development.
- The document introduces a new teacher professional growth and evaluation plan for the 2009-2010 school year at E.L. Haynes.
- It outlines the roles of supervisors, professional development mentors, and coaches who will provide ongoing feedback and support to teachers.
- A calendar of events is presented which includes observations, surveys, self-assessments, and evaluation meetings to guide teacher professional development and feedback over the course of the year.
Evaluation of teacher education initiative of CEMCA for three year plan2012 1...Gurumurthy Kasinathan
A Brainstorming meeting/workshop on ICT Integrated Teacher Education for SCERTs of South India Organised by
Commonwealth Educational Media Centre for Asia (CEMCA), New Delhi. Venue: Regional Institute of Education (NCERT), Mysore
Date: 22nd April 2016
This document provides a collection evaluation and development plan for the media center at Alice Coachman Elementary School. It describes the school's demographics, noting that it serves 302 students in grades K-5, with 94% being black and 6% white. The plan focuses on improving the 4th grade social studies collection on the topic of Westward Expansion. It analyzes the current collection, finding it to be on average 14 years old. Several activities are proposed to engage students with the topic, including a Native American jigsaw activity and analyzing the impact of technology on expansion.
This document discusses accountability trends in bilingual education programs. It notes that program directors in Texas minimally use accountability data collected from various sources to evaluate bilingual programs. Both short-term and long-term outcomes of bilingual programs should be evaluated. While quantitative data provides some insights, qualitative data from ethnographic field notes can provide additional context about non-school factors. When examining bilingual program accountability, it is important to consider the political, economic, and social factors that influence the programs and build authentic partnerships with stakeholders.
The basic steps to program evaluation are to first define the purpose and objectives of the evaluation by identifying stakeholders, budget, timeline and intended outcomes. Next, a plan is created which determines the evaluation questions and selects a model to collect both qualitative and quantitative data from sources like questionnaires and interviews. Finally, the data is analyzed and findings are reported in a final evaluation report.
This document outlines the goals, objectives, and evaluation plan for the school media center program for the 2014 school year. The goals are to 1) expand the media center collection to align with Common Core standards, 2) promote collaboration between teachers, instructional coach, and media specialist, and 3) promote instructional use of online resources. Objective 1A is to add new resources by August 2014 that meet Common Core standards for English/language arts. Activities to achieve this include reviewing standards and collections, soliciting teacher input, and expanding the collection. Objective 2A is to include the media specialist in collaborative planning meetings to encourage use of resources. Activities and evaluations are outlined.
This document discusses approaches to program evaluation. It defines program evaluation as the systematic gathering of information to make decisions about a program. There are four main approaches discussed: product-oriented, which focuses on achieving goals and objectives; static-characteristic, which uses outside experts to determine effectiveness; process-oriented, which questions the worth of program goals; and decision-facilitation, which gathers information to help administrators make judgments. The document also outlines dimensions that shape evaluation perspectives, including formative vs. summative purposes, process vs. product focuses, and quantitative vs. qualitative data types.
This document discusses various approaches to program evaluation including objective-oriented, expertise-oriented, participant-oriented, and consumer-oriented approaches. It provides examples of each approach and how they may be applied. Strengths and weaknesses of each approach are considered. The document also discusses evaluation methods such as surveys, interviews, and mixed methods. References are provided on related research and examples of evaluation studies.
The document provides an overview of Program Evaluation Review Technique (PERT). It defines PERT and its purpose, compares it to Critical Path Method (CPM), discusses its historical background, and outlines the key steps and terminologies used in PERT including how to create a PERT diagram and calculate activity durations, critical paths, and uncertainties.
This presentation tackles the following information:
*Approaches to Program Evaluation
*Three Dimensions that Shape Point of View on Evaluation
*Doing Program Evaluation
*Program Components as Data Sources
Reference: The Elements of Language Curriculum (A Systematic Approach to Program Development) by James Dean Brown of University of Hawaii at Manoa
Reporters: Joy Anne R. Puazo & Marie Buena S. Bunsoy
Program: Bachelor in Secondary Education Major in English
Year: 4th
Instructor: Mrs. Yolanda D. Reyes
Subject: Language Curriculum for Secondary Schools
The document discusses key qualities of measurement devices: validity, reliability, practicality, and backwash effect. It defines each quality and provides examples. Validity refers to what a test measures, and includes content, construct, criterion-related, concurrent, and predictive validity. Reliability is how consistent measurements are, including equivalency, stability, internal, and inter-rater reliability. Practicality means a test is easy to construct, administer, score and interpret. Backwash effect is a test's influence on teaching and learning.
Organizing is the process by which managers establish working relationships among employees to achieve goals. According to Chester Barnard, an organization is defined as a system of consciously coordinated activities or efforts of two or more people. An organization is also defined as a deliberate arrangement of people to accomplish some specific purpose.
The document discusses various aspects of the curriculum development process. It describes curriculum development as a long-term, collaborative process involving various stakeholders. It also outlines several models of curriculum development, including Tyler's objectives model, Taba's refinement of Tyler's model, and Hunkins' decision-making model. The document emphasizes that curriculum development requires needs assessment, input from learners, evaluation of impact, and quality control through revision and modification.
Leadership and learning with revisions dr. lisa bertrand-nfeasj (done)William Kritsonis
NATIONAL FORUM JOURNALS are a group of national and international refereed, blind-reviewed academic journals. NFJ publishes articles academic intellectual diversity, multicultural issues, management, business, administration, issues focusing on colleges, universities, and schools, all aspects of schooling, special education, counseling and addiction, international issues of education, organizational behavior, theory and development, and much more. DR. WILLIAM ALLAN KRITSONIS is Editor-in-Chief (Since 1982). See: www.nationalforum.com
This document provides an introduction and overview of Montgomery College's annual progress report on its cultural diversity programs as required by law. It summarizes the college's diversity plan, which was approved in 2013 and aims to achieve diversity and inclusion through 22 strategies and 96 action measures over seven years. The document outlines the plan's implementation process and provides examples of accomplishments within the plan's thematic areas of educational excellence, community engagement, access and student success, economic development, and assessment. It describes efforts to increase diversity among students, staff, and faculty as well as create cultural awareness on campus.
An Evaluation Of One District S Think Through Math Program And Curriculum Imp...Martha Brown
This program evaluation examines the implementation of the Think Through Math (TTM) program in middle school Intensive Math classes in the Claitt School District. Over 3,300 middle school students were enrolled in the program during the 2015-2016 school year. The evaluation focuses on teacher perceptions of the resources, impact on student achievement, and instructional practices used in TTM. Data was collected through teacher surveys and classroom observations to determine the effectiveness of TTM and identify ways to improve student math performance.
This document discusses the process of curriculum development. It begins with defining curriculum and outlining its functions. It then covers the historical perspectives of curriculum development approaches. The stages of curriculum development discussed are diagnosis of needs, curriculum construction, implementation, and evaluation. Key aspects of each stage like formulating objectives, selecting content, organizing learning experiences, developing curriculum packages, and orienting teachers are explained.
An Evaluation Of One District U27S Think Through Math Program And Curriculum ...Gina Brown
This document summarizes Dywayne Hinds' 2017 dissertation that evaluated the implementation of the Think Through Math (TTM) program in middle school intensive math classes in one school district. The study examined teachers' perceptions of resources, impact on student achievement, and instructional practices used. Data from teacher surveys and classroom observations revealed mixed perceptions about TTM's potential to improve achievement. The findings suggest more robust execution of TTM could increase math achievement.
Situation analysis is an examination of political, social, economic, and institutional factors that could impact a curriculum project. It identifies strengths, weaknesses, opportunities, and threats. The goal is to determine what could positively or negatively affect implementing curriculum changes. Key factors include social roles of languages; characteristics of teachers, learners, and institutions; and compatibility with existing expectations. Analyzing these situations helps plan for potential obstacles and considerations in curriculum development.
This document discusses the determinants of curriculum. It defines curriculum and lists its key components like objectives, content, methods, materials, and assessment. It then explains that the curriculum is influenced by both internal and external factors. Internally, factors include acceptance by teachers and administrators, leadership, resources, and student acceptance. Externally, the curriculum is shaped by sociopolitical forces, technological advances, educational policies, society's expectations, feedback from employers, and international standards. The document maintains that curriculum planners must consider all these influencing factors for successful curriculum design and implementation.
The document discusses the key elements and factors involved in curriculum development. It identifies five main elements of curriculum according to Wheeler: selection of aims/goals/objectives, selection of learning experiences, selection of content, organization of learning experiences and content, and evaluation. It also outlines several factors that influence curriculum development, including situational analysis, setting objectives, selecting subject matter/content, teaching methods, and evaluation. The roles of various stakeholders in implementing the curriculum are also highlighted.
Designing an evaluation of a tertiary preparatory program soundsphysrcd
The document discusses designing an evaluation of a tertiary preparatory program within a university context. It outlines the benefits of evaluating the program, including identifying areas for improvement and assessing whether the program's objectives are being met. It then describes challenges first-year university students often face and the program's aims to address these challenges. Finally, it proposes a mixed-methods evaluation approach utilizing surveys, academic performance tracking, observations, and focus groups to evaluate the program's effects on students and staff.
Designing an evaluation of a tertiary preparatory program soundsphysrcd
The document summarizes the design of an evaluation plan for a tertiary preparatory program in schools. The plan aims to assess the program's effectiveness in improving students' motivation, skills and career decision-making. It involves collecting academic performance data, student journals, observations and focus groups. Staff surveys and a SWOT analysis will also gather feedback. Ethical considerations like informed consent and confidentiality are discussed.
The document discusses different approaches to curriculum development, including:
- Model by Taba that considers selection and organization of learning experiences.
- Ornstein and Hunkins that consider curriculum experiences as the instructional component.
- Learning takes place through experiencing content mediated by social processes.
It also discusses the development of the New Secondary Education Curriculum in the Philippines which aimed to address problems identified in the secondary education system.
A comprehensive needs assessment should identify gaps between a school's current performance and its goals. It provides direction by determining priorities and resources to maximize impact. The assessment involves gathering both qualitative and quantitative data from multiple sources to develop goals and monitor implementation. It is critical for a planning team to conduct the needs assessment and analyze the data to identify root causes and priorities. The results should be used to create SMART goals and select strategies to meet identified needs.
A comprehensive needs assessment should identify gaps between a school's current performance and its goals. It provides direction by determining priorities and resources to maximize impact. The assessment involves gathering both qualitative and quantitative data from multiple sources to develop goals and monitor implementation. It is critical for a planning team to conduct the needs assessment and analyze the data to identify root causes and priorities. The results should be used to create SMART goals and select strategies to meet identified needs.
Comprehensive Guidance And Counseling Programfaith.seals
This document outlines the history and components of a comprehensive guidance and counseling program for students. It describes how the program is based on a state framework from 2003 and national school counseling standards. The program focuses on helping students develop skills in academic achievement, career development, personal/social development, and community involvement. It details the program's accomplishments over three years, which included evaluating needs, designing the program curriculum, and implementing systems like a social skills program. Future priorities include expanding career education, using data to guide decisions, and partnering with the community on cultural competence efforts.
This document outlines the planning process for developing the Division Education Development Plan (DEDP) for 2023-2028 in Davao del Sur. It presents the objectives of planning workshops which are to review the national education plan, assess the division's status and targets, explain the strategic planning process, and provide the outline for the DEDP. The proposed DEDP outline includes sections for goals, outcomes, strategies, targets, implementation, monitoring and evaluation, risk management, timelines and appendices. Upcoming planning activities are also listed, including a pre-planning consultation, formulation of the strategic plan, plan appraisal, and presentation.
The curriculum framework provides guidance for curriculum development committees on how to address state standards. It summarizes key educational issues, discusses how state goals relate to those issues, and provides a structure to help address student standards. The framework also guides professional development and recommends instructional and assessment strategies. It is not a detailed set of lesson plans, scope and sequence, or mandate for specific methodologies. The framework serves to outline meaningful curriculum experiences and activities, as well as constraints, in order to clarify what will and will not be covered.
Running Head Target of Program Evaluation Plan, Part 11TARG.docxtoltonkendal
Running Head: Target of Program Evaluation Plan, Part 1
1
TARGET OF PROGRAM EVALUATION PLAN
6
Shamika Cockfield
Strayer University
Dr. Melanie Gallman
EDU571: Evaluating School Programs
January 19, 2017
Teacher Preparation Program
The evaluation of an education program is an evolving profession. The purpose of testing the efficiency of a program is to give the decision-makers substantial information to use in enhancing or improving the recommended program. For example, an institution, say a school, may use program evaluation to assist in making decisions regarding whether to establish a program (needs assessment), ways of developing a program (formative evaluation) and whether to revise or continue using the existing program (summative evaluation) (Faxon-Mills, Hamilton, Rudnick & Stecher, 2013). As such, the objective of this paper is to evaluate the efficiency of a teacher preparation program in enhancing the value of the teachers and the performance of the students.
Describe three (3) elements of a worthy object for program evaluation - its type, the department administrating it, and target population.
The program evaluation under perspective is the Teacher Preparation program. It is a program that the three levels of government, Federal, State and local government establishes to ascertain the efficiency of the teachers engaged in educational institutions at all the levels ranging from the Pre-school to the University Levels. As such, the program falls under or it’s rather administered by the Council for the Accreditation of Education Programs (CAEP). The target focuses mostly on the teacher candidates (Faxon-Mills, Hamilton, Rudnick & Stecher, 2013).
Describe the program's history, primary purpose(s), and / or expected outcomes.
Effective tutoring has always been significant and is recently a nationwide concern. The increased emphasis on effective tutoring can be attributed to a several factors, such as (a) long-lasting accomplishment gaps that endure in spite of the comprehensive transitions at both the national and State levels, (b) the poorer academic performance registered by the students on international examination compared to their counterparts living in other industrialized nations and lastly(c) the need of managing the expenditure by the government at the Federal, State and local positions. All these aspects have raised a major concern concerning the efficiency of the teachers in schools and the significance of preparing teachers adequately while in colleges and campuses. Furthermore, the emphasis on enhancing teacher education is as well triggered by the competition and assessment with the alternate certification programs and the fresh standards recommended by the Board mandated to accredit the education preparation programs.
The board requires these programs to illustrate that the approved candidates can impact strong positive impacts on the students learning. One key outcome of these developments is the level o ...
1SIP BEDP 2030 by DepEd Planning Service Director Roger Masapol.pptxberiniaedeno
Adopted through DepEd Order No. 24, s. 2022
It Shall
Serve as blueprint in the next decade in formulating, implementing, coordinating, monitoring plans, programs and projects
Provide strategic roadmap for the Department to follow in improving the delivery and quality of basic education
Address the immediate impacts of pandemic on education and anticipate the future of education and introduce innovation in fostering resiliency and embedding the rights of children in education
All offices and units in all governance levels shall align their policies, plans and programs with the BEDP
The BEDP shall be a living document, serving as guide to all DepEd units and offices in their operational programming
This is a Walden University course (EDUC 8103), A8: Course Project—Program Proposal. It is written in APA format, has been graded by an instructor (A), and includes references. Most higher-education assignments are submitted to turnitin, so remember to paraphrase. Let us begin.
This document discusses key concepts related to assessment including testing, measurement, and evaluation. It defines assessment as a systematic process of obtaining information about student learning through various techniques, measuring achievement against learning goals, and making judgements about progress. Testing involves administering an instrument to sample student performance, measurement yields scores that indicate performance levels, and evaluation makes value judgements about performance based on measurements. The characteristics of reliable, suitable, objective and valid assessment are also outlined to ensure decisions are based on sound criteria. The purposes and types of assessment are explained to guide educational decision making.
A mini research investigating the challenges experienced by special needs students in a mainstream classroom, in Antigua and Barbuda, following the implementation of an initiative to prepare them for the Common Entrance Examinations (now called national Assessment)
The document discusses teacher professional development through training, mentoring, and observation/assessment. It provides background on each approach and how they are implemented. Training involves experts demonstrating skills with feedback and coaching. Mentoring pairs experienced teachers with novices to guide and assist them. Observation/assessment involves teachers observing each other's lessons for feedback to improve practice, though distinguishing evaluation from assessment can be difficult. The document examines benefits and challenges of each approach.
This document outlines a change model called the Day in a School Teacher Support (DiaS-TS) model. The model aims to provide teachers with support through classroom observations to improve lesson planning, delivery, and student engagement. Key aspects of the model include pre- and post-observation conferences to facilitate reflection, exposing students to active learning, and improving the classroom environment. The model also seeks to foster collaboration among teachers and encourage planning. Potential benefits include better monitoring practices, teamwork, and heightened student engagement. The document discusses implementing the model through change leaders and addressing possible challenges to success.
This presentation highlights the importance of curriculum design, structure of unite and provides a reminder of the curriculum development process after designing...THE WAY FORWARD - piloting, implementing, monitoring, evaluation,
Provides examples of philosophical, psychological, social and historical foundations - these foundations influence the development, implementation and evaluation of curriculum;
Introductory information including the strategic plan for a national curriculum development process, including a strategic plan and to guide a a backward design discussion of the characteristic, of the 'ideal' student, envisaged at the end of primary and secondary schooling.
The document presents the Experiential Instructional Appraisal (ExpIA) Model, which was developed as an assessment tool to evaluate teachers' instructional skills in social studies based on experiential learning theory. The ExpIA model outlines three domains: 1) Planning and Preparation for Instruction, 2) Teaching, and 3) Self-Monitoring. It also describes procedures for administering ExpIA that involve pre-observation conferences, classroom observations, and post-observation feedback sessions. The goal of the ExpIA model is to improve social studies instruction and promote school-based professional development through collaborative, reflective assessment.
Program evaluation plan
1. Running head: INSTRUCTIONAL PROGRAM EVALUATION 1
Instructional Program Evaluation Plan: Education for Democratic Citizenship
Cynthia Crump
June 20, 2011
Instructional Program Evaluation Plan: Education for Democratic Citizenship
The presentation is a systematic and comprehensive formative evaluation plan to
investigate the implementation of social studies education for Democratic citizenship (SSEDC)
in the mature stage. The lead evaluator will select a team to guide and conduct key actions
throughout the evaluation process. The plan will begin with the Grades K-6 program
description, followed by the theoretical framework, including the research questions that will
guide the project over a 12-week period. The methodology will be a mixed-methods survey design,
using multiple methods to collect quantitative and qualitative data. The sampled target group
will include various stakeholders in the school community, including the implementers and
others as the need arises. Content and descriptive data analyses will be the suggested methods to
extract themes and concepts and highlight possible findings influenced by (a) teachers'
understanding of the SSEDC goal; (b) methods used by teachers; and (c) problems the teachers are
experiencing during the implementation process. The evidence will form the basis for findings
and conclusions, and for recommending strategies for improvement of SSEDC. The evaluation
team will put measures in place to promote accurate results, and efficient reporting procedures.
The evaluation team will put efficient reporting procedures or measures in place respected by the
internal stakeholders – designers and implementers.
Some motivating factors influencing the evaluation of the SSEDC instructional program
include the need of designers and supervisors to (a) influence program improvement or
strengthen planning and delivery; (b) identify areas of challenge; and (c) foster accountability
(Chen, 2005; Gard, Flannigan, & Cluskey, 2004; Jason, 2008; Posavac & Carey, 2007). The
proposed evaluation plan will include:
1. Committee Selection
2. Part I: Background of the SSEDC Program
a. Determination of the Current State of the Program
b. Evaluation Goal
3. Part II: Theoretical Framework
a. Standards
b. Program Theory
c. Stakeholders
d. Methodology
e. Target Population
4. Part III: Mixed Design
a. Rationale
b. Data Collection Methods
c. Validity and Reliability
5. Part IV: Analysis of Data
a. Possible Findings
b. Recommendation
c. Reporting Considerations
The preceding list is an outline of the main sections of the evaluation plan. The
committee selection is important to ensure key personnel selected are competent to lead and
perform major roles. Part I will be a brief description of the program and a statement of the
evaluation goal and the questions to guide the evaluation. Part II is a guide to the design, the
standards and criteria that will be used as a checklist to judge the quality of the evaluation, and
will have a description of the individuals who will affect or be affected by the implementation of
the program, including the beneficiaries. Part III, the mixed design, emphasizes the plan to
conduct a detailed evaluation by means of multiple data collection tools, from multiple sources.
In Part IV, possible findings from the data analysis will provide the basis for suggestions to
improve the program. The presentation will form the basis to conduct a formative process
evaluation and provide recommendations for improvement of teachers' implementation of
SSEDC.
Committee Selection
The director, supervisor, one other education officer, teachers, principals, a community
member, a parent, a private school principal, and a school mentor will comprise the evaluation
committee. The responsibility of the committee members is to respond to the strengths and
challenges of the program in order to refine it. Gard, Flannigan, and Cluskey (2004) noted that
the evaluation committee has the responsibility "to use data to identify strengths and weaknesses of
the program” (p. 176).
The coordinator of the development and revision processes and the supervisor coupled
with the stakeholders in the school community “…. are vital to the survival and success of the
[program]” (Gard et al., 2004, p. 4). Collaboration with external evaluators will ensure a
supportive environment (Chen, 2005). The director must guard against bias and conflict of
interests (Posavac & Carey, 2007) because of involvement in all stages of the program. Ethics
and values are two elements necessary to plan, conduct, and evaluate a program to ensure
accuracy of results. Using external and internal evaluators would help to lessen or eliminate
perceived internal bias while empowering internal and external stakeholders (Chen).
Part I: Background of the SSEDC Program
Before 2006, the last attempt at social studies curriculum review and renewal was in the
late 1980s, supported by USAID curriculum specialists. After a quarter of a century, rebranding
of the social studies curriculum was necessary including renaming the program to social studies
education for Democratic citizenship (SSEDC). Besides datedness, factors affecting the social
behaviors of citizens, especially among the youth, influenced the development of SSEDC.
In 2007, a team, including the Director of Curriculum as expert, a core of teachers, and
representatives from the environment ministry, completed a first draft of SSEDC. After several
reviews, SSEDC was piloted among a sample of schools and classes (K –9), over a period of 12
weeks in 2008. At various review sessions, all grade teachers had the opportunity to input
changes, based on the results and recommendations of the pilot implementation data.
Implementation of the revised instructional program took place in September 2009. Familiarization
seminars and training workshops were actions to develop teacher competence and support the
implementation. Between 2009 and the present, the director, the supervisor, education officers,
principals, and senior teachers continue to conduct monitoring of the SSEDC.
Goals of SSEDC
The following is a section of the rationale of SSEDC (Ministry of Education, 2009)
outlining several reasons that influenced program development.
First, in Antigua and Barbuda [is] a Democratic state; independent from Britain
since 1981; Education for Democratic Citizenship (EDC) would mean that the
main outcome of schooling should be citizens with civic consciousness; not only
equipped with knowledge but [also have] the ability to demonstrate skills
appropriate to such a citizen, who also exhibit democratic values. Second, there
appears to be a democratic deficit. A high percentage of individuals (youth) do
not vote or even show much interest in politics. SSEDC should help to improve
individuals‟ levels of understanding of their lives and how they interact within
society. Third, [because the mid-2000s] there has been an upsurge of crime and
violence. Of particular interest are the negative activities among the youth. These
include school violence, drug-related violence, increases in cases of HIV/AIDS,
home invasions coupled with robbery and rape, murders, and other gun-related
crimes. Fourth, [a] surge in migration of Caribbean neighbors and an influx of other
migrants from China has opened up the avenue for the focus on themes, such as
civic ideals and practices, identity, traditions, multiculturalism, cultural diversity
and tolerance. All citizens need to tolerate peoples from other places, and also to
tolerate their differences. (p. 1)
The focus of SSEDC is on relationships to promote (i) understanding the role and
responsibility of citizens in a democratic society and (ii) awareness of the link and
interdependence locally, regionally, and globally. The overarching goal of SSEDC is citizenship,
achievable through:
1. Knowledge of social issues and concerns;
2. Skill development;
3. Development of values and attitudes; and
4. Social participation (p. 3)
Teachers should provide the preceding experiences. The program's rationale and goal
emphasize the outcome capabilities including knowledge, skills, values, and dispositions the
students should achieve. Students should also receive opportunities to participate in the society
by transferring classroom learning to perform the role of productive citizens. These long-term
outcomes should drive lesson objectives as well as the teaching learning experiences.
The director introduced the instructional guide with the following statement adapted from
the Organization of Eastern Caribbean States Educational and Research Unit (OERU):
The [program] offers a range of ideas and suggestions to help teachers
organize participatory learning experiences designed to prepare students
for lifelong learning…. Social studies classrooms place major emphasis
on student-centered learning through the acquisition and development of
specific cognitive skills and competencies. The focus is on learning
through activities, practice, and participation…. These skills are expected
to produce the ultimate outcomes of SSEDC: students as citizens,
acquiring and demonstrating social understanding and civic efficacy.
(Ministry of Education, 2009, p. 2)
Social and Contextual Factors
The SSEDC instructional program is a part of the private and public schools' curriculum.
The pilot implementation findings highlighted some gaps and the intent of the review was to
improve on the program. Currently, the curriculum unit personnel conduct support and
monitoring evaluation to provide feedback information on a regular basis to facilitate supervision
of the program. The qualitative and quantitative reports obtained from observation of teaching
using a rating scale, reflections, the classroom environment, students' work, and the interactions
reveal areas that mentors could assist with on a continuous basis.
Determination of the Status of the Program
The monitoring in public schools revealed variations exist in the teaching-learning
contexts within and across schools and classes that result in differentiated delivery and students'
learning experiences. The nature of school leadership and support, supporting materials, and out
of class experiences could have differentiated effects on students in achieving the goals of the
curriculum. The public-private dichotomy could also be an influential factor on the teaching-
learning process of SSEDC, because the monitoring of the SSEDC is a feature of public schools
only. The information is important to recommend a more in-depth process evaluation.
Evaluation Goal
The school year ending in July 2011 will mark two years of implementation. Therefore,
2011-2012 is the year of mature implementation. The purpose of the evaluation is formative, to
inform ways to improve the SSEDC program. The proposed plan will therefore outline a
development-oriented process evaluation to examine perceived problems and recommend
the way forward (Chen, 2005; Posavac & Carey, 2007). Formative evaluation is ongoing,
relevant to address the purpose of the evaluation. Throughout the implementation process, the
team would collect data as the program is in effect. The team will be able to identify strengths
and limitations, and intermediate results during implementation, rather than waiting for the
one-time outcomes evaluation (Posavac & Carey). The central question to guide the evaluation is:
1. How well is SSEDC implemented?
Sub-questions:
1. Is the focus for democratic citizenship clear to the teachers?
2. What methods are teachers using to prepare students?
3. What problems are teachers experiencing?
The response to the questions should help in identifying the sources of problems and the role of
stakeholders to improve the program.
The preceding section sets the stage for the proposed evaluation of SSEDC. The selected
committee will collaborate with the users and implementers at the Grades K-6 levels at private
and public schools. The main purpose is to investigate the implementation process to identify the
strengths and weaknesses and suggest improvements.
Part II: Theoretical Framework
Part II is a discussion of the theoretical basis of the plan, including criteria, standards,
program theory, and model. The aim is to (a) discuss how standards and stakeholders will
influence the evaluation plan; (b) provide a rationale for the selected evaluation model; and (c)
identify the design. The purpose of the proposed process evaluation will be to examine the quality
of the implementation, focusing on the following criteria:
1. teachers' understanding of the SSEDC goal;
2. student-centered instruction and assessment congruent with the experiential learning,
behaviorist, and constructivist theories; and
3. social and contextual factors.
Standards
Standards are necessary to “identify and define evaluation quality and guide evaluators”
(Yarbrough, 2011, p. xxii). Attention to attributes of quality such as utility, feasibility, propriety,
and accuracy promote accountability. Evaluation accountability is important to foster program
improvement, improve decision making, and create reflective practitioners. For the
purpose of the proposed evaluation, the following standards could help to define the quality
necessary for a successful evaluation (Yarbrough, 2011).
1. Utility
a. Evaluator credibility
i. Clarify which individuals will be responsible for the various elements of
the evaluation.
ii. Provide assurance that each has the expertise or support required to
complete the work.
2. Feasibility
a. Practical procedures
i. Implement practical and responsive procedures aligned with the
operation of the program.
3. Propriety
a. Human rights and respect
i. Design and conduct evaluation to protect human and legal rights and
maintain the dignity of participants and stakeholders.
4. Accuracy
a. Sound designs and analyses
i. Employ technically adequate designs and analyses appropriate for the
purpose of the evaluation.
The description of the standards supports the importance of the stakeholders developing trust in
the expertise of the evaluator to plan and implement appropriate procedures and designs to
promote successful and valid evaluation. Stakeholders must also feel protected and respected.
The following discussion will support how the standards will influence the plan in the choice of
theory, stakeholders, model, design, and human rights and respect.
Program Theory
Chen (2005) supported the view program theory is useful in “improving the
generalizability of evaluation results, contributing to social science theory, uncovering
unintended effects,… achieving consensus in evaluation planning…[and providing] …early
indications of program effectiveness” (p. 15). Chen (2005) identified program theories as
causative or normative. Normative stakeholder theory highlights the input of designers,
directors, and staff in an organization. Normative theory is different from the scientific theory
that controls evaluation conducted by academics (outsider or expert interest). The leader of the
evaluation, the director, will perform the role of the internal evaluator, guiding the staff and
selected users and implementers during the evaluation. The activities of the program are
ongoing and information on the process is necessary to determine strengths and weaknesses to
promote improvement, to enable achievement of the goals. An external reviewer could be an
expert in another government department.
Stakeholders
McCawley (2001) defined stakeholders to include a wide cross-section of actors:
individuals who contribute to or benefit from the inputs, resources, and activities,
which result in short-, medium-, and long-term outcomes. In the primary institutions, the main
beneficiaries are the students; the other important stakeholders are the principals, teachers,
parents, and individuals in the community. Corporate citizens, other government and
nongovernmental partners collaborate with schools to promote learning (Beaumier, Marchand,
Simoneau, & Savard, 2000; Chen, 2005; Eaton, 2009).
Yarbrough (2011) described several stakeholders generic to program evaluation.
Stakeholders include evaluators, designers, implementers, participants, intended users, and other
respondents. For the purpose of the SSEDC program evaluation, stakeholders include individuals
from the administrative center or the Ministry of Education (MOE), other government ministries,
the school community, and the wider community, as in Table 1.
Table 1

Stakeholders

School personnel: Teachers; Students; Principals or administrators
Community: External evaluator; Parents; Others who could contribute information
Ministry: Teacher trainers; Internal evaluator; Supervisor; Education officers
Other: Government departments; Non-governmental organizations; Local specialists
Normative theory depicted by the stakeholder model would influence or prescribe the
components and activities considered necessary for the success of SSEDC program
implementation and evaluation. Table 2 shows overlap of stakeholders' responsibilities in some
areas; however, some individuals have roles more dominant than others. The working group
format would be an important strategy to build consensus on tasks, roles, and issues affecting
relationships during the process, obtaining buy-in (Chen, 2005).
Table 2

Stakeholders and their Responsibilities

Evaluators (external and internal experts): Plan, guide, and conduct the evaluation; review;
decide on strategy; act as facilitator or consultant; and give technical assistance.
Designers (evaluator, supervisor, and stakeholders from the school community and community):
Work together to plan the purpose and objectives.
Implementers (ministry personnel, specialists, evaluators, school personnel): Collaborate to
manage, oversee, and ensure the quality of the evaluation; share information on the program's
implementation. Teachers will provide feedback: data related to their pedagogies, including
teaching, learning, and assessment; the challenges; benefits; and suggestions. Principals will
provide feedback on the program in their schools.
Evaluation participants: Provide information or data.
Other (in the community or school): Provide additional information about the program.
Teachers had input during the development and review stages. Feedback during the
monitoring stage allowed them to participate and have a voice in identifying strengths and
shortcomings. Problems experienced with principals' support for training and review sessions for
teachers are evidence that principals needed buy-in to claim ownership and better understand the
goals and objectives of SSEDC. In the mature implementation stage evaluation, the evaluator
must ensure continuous communication with the users and other implementers, allowing them to
share their concerns and doubts, and receive clarification on the goals of the program and their
role as leaders or implementers (Yarbrough, 2011).
The Logic Model
Program theory depicted by the logic model focuses on causal assumptions – a systems
approach linking program resources, activities, and intended outcomes. Table 3 is a
representation of the SSEDC logic model (Cojocaru, 2009; Jason, 2008; McCawley, 2001).
Table 3

SSEDC Logic Model

Inputs (resources): Prescriptive curriculum: guiding philosophy, sample lessons, rubrics;
supporting teacher material; student text; mentors
Activities (programs): Curriculum preparation and review; training workshops; seminars;
monitoring
Outputs (products): Teachers, students, and others in the community reached
Outcomes (benefits): Short-term changes – knowledge, skills, attitudes, awareness, motivation;
medium-term changes – behaviors, practices; long-term changes – environment and social changes
The logic model provides a framework to examine the implementation of SSEDC in the primary
grades. The formative process evaluation will provide evidence on whether teachers understand
their role in preparing learners to become Democratic citizens. The preparation should include
using the resources and applying the suggested experiential, student-centered activities or
methods. Analysis of the data could reveal limitations needing attention to foster adjustments to
the program. The recommendations could focus on clarifying rationale, training, and retraining
of teachers, providing additional resources, all in preparation for evaluating the effectiveness or
outcome of implementing SSEDC.
Methodology
The proposed program evaluation will be a mixed method survey design, placing priority
on collection of qualitative data. Qualitative data may describe the ongoing process of the
SSEDC activities and strategies in the form of words from open-ended questionnaire and
observations. Quantitative data might result from close-ended form of questionnaire and
observation schedules (Neuman, 2003).
Target Group
Population
The target group will be teachers of Grades K-6 - primary or elementary classes at public
and private schools. Public (32) and private (30) schools, located in four districts or zones,
provide education for 10,801 students, taught by 807 teachers. Table 4 is a breakdown of the
number of schools and teachers in the four districts/zones.
Table 4

Target Population: Number of Schools and Teachers

                           Zone 1   Zone 2   Zone 3   Zone 4   Total
Public primary schools        7        7        9        9       32
Public primary teachers       –        –        –        –      467
Private primary schools       7        7        8        8       30
Private primary teachers      –        –        –        –      340

Sampling

Units of sampling will be the schools, teachers, and principals.
1. Simple random sampling
a. Select one of each school type from each zone by putting the names in a bag and
choosing 4 public and 4 private primary schools; n = 8 schools. All the teachers
(and classes) in each of the eight schools are participants.
2. Purposive sampling
a. social studies supervisor
b. other participants.
Simple random sampling is useful to promote generalizability of findings to all schools because
each has a chance for selection, and the sample is representative of the population (Neuman,
2003). Purposive sampling of supervisors and others is necessary to obtain information in the
selected school communities.
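The zone-by-zone draw described in the sampling plan can be illustrated in code. This is a minimal sketch, not part of the plan itself; the school names below are hypothetical placeholders, and the zone sizes follow Table 4.

```python
import random

# Hypothetical placeholder school names per zone (not real school names);
# counts per zone follow Table 4: 7/7/9/9 public, 7/7/8/8 private.
zones = {
    "Zone 1": {"public": ["P1-" + str(i) for i in range(1, 8)],
               "private": ["V1-" + str(i) for i in range(1, 8)]},
    "Zone 2": {"public": ["P2-" + str(i) for i in range(1, 8)],
               "private": ["V2-" + str(i) for i in range(1, 8)]},
    "Zone 3": {"public": ["P3-" + str(i) for i in range(1, 10)],
               "private": ["V3-" + str(i) for i in range(1, 9)]},
    "Zone 4": {"public": ["P4-" + str(i) for i in range(1, 10)],
               "private": ["V4-" + str(i) for i in range(1, 9)]},
}

# Draw one public and one private school at random from each zone: n = 8
sample = [random.choice(schools[kind])
          for schools in zones.values()
          for kind in ("public", "private")]
print(sample)  # 4 public and 4 private schools, one pair per zone
```

Each school has an equal chance of selection within its zone and type, which is the property the plan relies on when arguing for generalizability.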
Human Subject Consideration
Involvement in the evaluation will be voluntary. The evaluator must ensure
confidentiality; stating to stakeholders the information collected is for the purpose of program
evaluation only. All information will be secure; the evaluator will put measures in place to
maintain confidentiality, for example, by:
1. assigning code numbers to participants;
2. reminding participants not to write names on data collection instruments; and
3. asking personnel in the unit who handle the data to keep the information confidential.
Once the evaluator convinces prospective participants that their rights are protected and
respected, they will sign consent forms and agree to take part in the evaluation.
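The practice of assigning code numbers in place of names can be sketched as follows. This is an illustration only; the participant names are placeholders, and the key point is that the name-to-code mapping (the codebook) is stored apart from the response data.

```python
import secrets

# Hypothetical participant roster (placeholder names, not real participants)
participants = ["Teacher 01", "Teacher 02", "Principal 01"]

# Assign each participant a random code; keep this codebook separate
# and secure, away from the files that hold the responses.
codebook = {name: secrets.token_hex(4) for name in participants}

# Response records carry only the code, never the participant's name
records = [{"code": codebook[name], "responses": []} for name in participants]
print([r["code"] for r in records])  # three 8-character codes
```

Because `secrets.token_hex` produces unpredictable codes, the records cannot be traced back to individuals without access to the separately stored codebook.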
The framework links the input of stakeholders, the activities, the resources at different
stages – planning, implementation, and evaluation. Standards that describe utility, feasibility,
propriety, and accuracy define the quality against which to judge the merit of the evaluation.
Various stakeholders in the school and wider communities, the government departments, and
other groups should collaborate to promote buy-in and to conduct an effective evaluation and
promote improvement. The SSEDC logic model is applicable because, by looking first at the
possible outcomes, the stakeholders can identify factors that might influence the implementation
process. The mixed method survey design is appropriate to collect ongoing qualitative and
quantitative data from implementers at public and private schools, and other participants as the
need arises. Appropriate sampling methods will promote generalizability to the target
population and seek input from suitable stakeholders. The human rights of the subjects are a
major consideration.
Part III: Methodology
The proposed program evaluation of social studies education for democratic citizenship
(SSEDC) will be a mixed method survey design, using various methods to collect quantitative
and qualitative data but placing priority on collection of qualitative data. Qualitative data could
describe the ongoing process of the SSEDC activities and strategies in the form of words from
open-ended questionnaires and observations. Quantitative or numerical data could result from
closed-ended questionnaire items and observation schedules (Neuman, 2003). Triangulation of
mixed data could provide valid and reliable evaluation instruments and promote understanding of
(analysis) results (Grammatikopoulos, Zachopoulou, Tsangaridou, Liukkonen, & Pickup, 2008).
The presentation is a discussion of the rationale of the method and design of the proposed
evaluation. The development of the rationale will include (a) an outline of the proposed data
collection instruments, including how and why stakeholders will contribute to the decision-
making process; (b) a discourse on the importance of putting mechanisms in place to promote
validity and reliability, followed by (c) a simple plan to analyze the data. The presentation ends
with a conclusion.
Mixed Method Design: A Rationale
Formative evaluation requires a flexible methodology that provides quick feedback
(Chen, 2005). The program is in the implementation stage, and a survey
would be most useful among smaller samples. The mixed method provides a comprehensive
scope to promote sound program evaluation and program improvement (Jason, 2008).
Grammatikopoulos et al. (2008) promoted mixed design as a method to improve or increase the
degrees of validity and reliability while criticizing the use of one data source to make decisions
during an evaluation.
At the interpretation stage, the usefulness of mixed design
is "providing a versatile and more complete picture of the procedures under evaluation"
(Grammatikopoulos et al., 2008, p. 5). Qualitative or quantitative data used in isolation will not
provide the same insight as using both. The voices of participants can be most convincing as
they tell their stories, useful to corroborate results of quantitative data (Creswell, 2005;
Grammatikopoulos et al., 2008). Participants' comments and open responses can influence how readers
accept or reject a program. The themes interpreted from such presentations could corroborate
quantitative measures.
Instruments or Data Collection Methods
Process (formative) data, collected through triangulation, are necessary to obtain information
on how the program is working and to identify the influencing factors – to examine the
implementation fidelity of SSEDC. Several evaluation studies (Burnstein, Hutton, & Curtis,
2006; Gallavan, 2008) identified different instrumentation approaches useful to mixed qualitative
and quantitative data analyses. The examples relevant to the evaluation of SSEDC include:
Qualitative Data
1. Schedule for systematic observation (Appendix A), including comments from class
observations of teaching, learning, and assessment, to obtain firsthand information
on the process;
2. Teachers' responses to open-ended questionnaire items (Appendix B) as they freely
share their views of the program in process;
3. Interviews (Appendix C) with parents, principals, and teachers concerning how the
curriculum is used and the impact of the context;
4. Weekly reflections written by teachers about their classroom practice and student
participation, highlighting challenges, difficulties, and ease of delivery;
5. Document analyses including analyzing the curriculum and supporting documents,
such as texts, and guides; and
6. Focus groups composed of students, teachers, and parents of students to obtain
additional information on how students respond to the curriculum.
Quantitative Data
1. Teachers' ratings of closed-ended, Likert-type questionnaire items (Appendix B)
to obtain data about the process, and principals' responses to obtain data on the
context and views of the implementation.
The preceding examples show evaluators could collect in-depth data from the users in the
school community, parents, community personnel, and administrators. The stakeholders or
implementers are an important group (in the context) that could influence the results of the
evaluation. They represent the outputs or people who benefit from or influence the processes (or
outcomes), directly or indirectly (Fontaine, Haarman, & Schmid, 2006).
Validity and Reliability
Validity and reliability of research are major considerations. Researchers must examine
several important aspects of instruments; the appropriateness; the measurement properties; and
the process of administering and scoring (Borg & Gall, 1998). The structure and contents of the
researcher-designed instruments could affect the responses provided by the participants,
influencing interpretation of data (Creswell, 2005; Neuman, 2003). Using themes evolving from
the qualitative data or the literature review, dividing into appropriate subsets, training observers,
and testing the instrument will promote content validity and reliable results (VanTassel-Baska et
al., 2008).
Simple random sampling may be (more) appropriate to select units and participants when
evaluating SSEDC. Probability sampling promotes generalizability of findings to all schools
because each has a chance for selection, and the sample is representative of the population
(Neuman, 2003). Retraining teachers to deliver SSEDC and using the multiple methods of data
collection will promote triangulation and improve validity and reliability (Amadeo & Cepeda,
2007).
Data Analysis
The results of the evaluation will guide decision-making about the program. A
combination of the qualitative and quantitative data at the interpretation stage results in deeper
understanding of the issues and promotes validity and reliability. Ethical evaluators must put
measures in place to ensure accuracy and consistency of results and consequently the
conclusions (Grammatikopoulos et al., 2008).
Content analysis of qualitative data could determine themes that evolve from the
comments or responses of participants. Quantitative data in the form of means and percentages
could complement the data from interviews, open-ended responses, or documents. Pictorial
representations could be in the form of tables and graphs (Neuman, 2003).
The mixed method design is the preferred design to guide data collection and analysis
during the SSEDC evaluation. The survey approach using a variety of data collection tools will
provide qualitative and quantitative data from multiple sources and methods. The statistical
methods appropriate to making meaning of mixed data will provide greater insight into the
issues. Triangulation is one component in mixed method design that increases the degree of
validity and reliability, promoting generalizability of findings to the population
(Grammatikopoulos et al., 2008). Presenting data in different forms such as words, or in tables
and graphs, is advantageous, supporting several means to interpret, analyze, and report findings.
Part IV: Analysis of Data
This section is a presentation of the statistical approach to mixed data analysis. Content
analysis of qualitative data will determine themes that evolve from the comments or responses of
participants. Quantitative data in the form of means and percentages will complement the data
from interviews, open-ended responses, or documents. Pictorial representations of data will
highlight the possible use of tables and graphs (Neuman, 2003). Triangulation and integration of
multiple data and sources could reveal possible findings and conclusions about the
implementation of SSEDC. Finally, the recommendations will be presented to develop an
improvement plan.
Methodology
The procedures to analyze qualitative data will have two phases; the steps include:
Phase 1
(i) transcribing data collected from observations, interviews, focus groups, and
document analysis;
(ii) perusing the information to identify themes;
(iii) coding words and phrases using an iterative approach to allow additional phrases
applicable to the analysis;
(iv) coding for frequency;
(v) formulating categories and generalizing based on similarity of words or phrases;
(vi) coding relationships between words;
Phase 2
(vii) applying themes based on theory in order to explain the data and answer the research
questions; and
(viii) presenting data in the form of narrative, tables, and graphs.
Content analysis procedures promote the discovery of themes, patterns, and characteristics in
the data. The procedures include transcribing, perusing, and extracting words and concepts so
that coding can be applied to identify the themes. In phase two, applying theory from past
research can also support the development of themes and findings. Percentages and means of
word and theme frequencies could result from the qualitative analysis and be presented
graphically. The steps of the qualitative analysis could be carried out manually or with
qualitative analysis software.
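As a minimal sketch of steps (iii) through (v), the phrase coding and frequency counting could be automated in Python. The transcript excerpts and the phrase-to-theme codebook below are hypothetical illustrations, not the evaluation's actual data or coding scheme.

```python
from collections import Counter
import re

# Hypothetical transcript excerpts; in practice these would come from the
# transcribed observations, interviews, and focus groups (step i).
transcripts = [
    "Students worked in groups on a community service project.",
    "The teacher used role play to explore citizenship rights.",
    "Group discussion focused on community responsibilities.",
]

# Illustrative codebook mapping phrases to candidate themes (an assumption,
# not the evaluation's actual codes).
codebook = {
    "group": "cooperative learning",
    "role play": "experiential strategy",
    "community": "citizenship engagement",
}

# Steps (iii)-(iv): code phrases and tally their frequency across transcripts.
theme_counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for phrase, theme in codebook.items():
        theme_counts[theme] += len(re.findall(re.escape(phrase), lowered))

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Dedicated qualitative analysis software adds capabilities such as overlapping codes, in-context retrieval, and analyst memos; the point here is only that frequency coding reduces to counting coded phrases.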
Quantitative
Quantitative data will derive from ratings on the Likert-type questionnaire and the
observation schedule, and from the frequency of themes and subthemes in the qualitative data.
Data will be presented as means and percentages represented in graphs and tables. Data from
the teacher observation schedule (Appendix A) will demonstrate teachers' competence in
planning, teaching, and assessment through the frequency of Yes or No selected by the observer.
Data from the teacher questionnaire will represent teachers' perceptions of the quality of the
curriculum, the strategies, and the assessments most frequently used.
Tracking the concepts stated most frequently will help to identify the patterns that emerge.
Specific statements written or spoken by the participants will give voice to the issues in relation
to the research questions. Further analysis will show the approaches to teaching and learning
used most frequently.
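The descriptive summaries described above, means for Likert-type ratings and percentages for Yes/No observation items, can be sketched as follows. The rating values and observation results are invented for illustration; they are not data from the evaluation.

```python
# Hypothetical responses to one Likert-type questionnaire item
# (e.g., 1 = Never ... 5 = Always), one value per teacher.
likert_ratings = [4, 5, 3, 4, 2, 5, 4]
mean_rating = sum(likert_ratings) / len(likert_ratings)

# Hypothetical results for one observation-schedule item (cf. Appendix A),
# scored Yes/No across observed lessons; report the percentage of Yes.
observations = ["Yes", "Yes", "No", "Yes", "No"]
pct_yes = 100 * observations.count("Yes") / len(observations)

print(f"Mean rating: {mean_rating:.2f}")
print(f"Percent Yes: {pct_yes:.0f}%")
```

The same two summaries, computed per questionnaire item and per observation item, would populate the tables and graphs proposed in this section.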
The central question to guide the evaluation is: How well is SSEDC implemented?
Sub-questions:
1. Is the focus for democratic citizenship clear to the teachers?
2. What methods are teachers using to prepare students?
3. What problems are teachers experiencing?
Possible Findings
The responses of the participants from multiple sources should help in identifying the
sources of problems and the role of stakeholders to improve the program. The findings would
give ideas to improve the program as necessary. The data collected using the observation
schedules, questionnaires, interviews, focus groups, weekly reflections, and document analyses
will be the basis of the following discussion. The possible findings will reflect the quality of the
implementation focusing on the stated criteria, resulting in themes such as (a) social studies
goals, (b) teacher centered and student centered teaching, learning, and assessment strategies,
and (c) difficulties and challenges. The research questions will be the basis for the discussion of
the possible findings of how well teachers are implementing SSEDC.
Sub-question 1. Is the focus of democratic citizenship clear to the teachers?
The designers expected the articulation of the goal of SSEDC in the program document,
during training, and through monitoring by supervisors to help teachers demonstrate a clear
understanding of the goal of SSEDC. Griffith and Barth (2006) explained that teachers might
voice the social studies (SSEDC) goal from various perspectives:
1. disciplines or content necessary for nation building;
2. transmission of knowledge about what a citizen should know or do to be productive;
and
3. active engagement to develop competencies to function effectively as a productive
citizen.
The extent to which teachers can explain the importance of SSEDC in the national curriculum
will reflect whether the goal of SSEDC, to develop the competence to function effectively as a
productive citizen, is clear to them.
Sub-question 2. What methods are teachers using to plan, teach, and assess SSEDC?
Comments written by the observers will indicate the extent to which teachers are
applying the teaching, learning, and assessment strategies outlined in the program, or other
strategies applicable to the needs of the students. The SSEDC teachers' guide (Ministry of
Education, 2009) provides examples of engaging students in various ways. For example, students
"can engage in research projects, cooperative group work, drama, and role-play, discussions, and
community service learning" (p. 3).
Experiential learning is the main philosophical basis of the curriculum,
complemented by behaviorist and constructivist theories. The purpose of the experiential
approach is "to increase knowledge, develop skills and clarify values" (Roberts, 2005, p. 13).
Further, the goal of SSEDC is to promote democratic citizenship competencies, subsumed
under the experiential approach. It follows, therefore, that teachers need a clear focus on this
goal. The delivery of the lesson plan should build students' experience, reflection, concept
development, and active experimentation on previous experience, incorporating direct and
indirect instruction (Hunt et al., 2009).
Teachers should provide opportunities for students to engage in traditional paper-and-
pencil tests and performance-based assessment. Service learning, a term coined in the early
1960s, is becoming an important component of experiential learning. Community service
learning (CSL) is a summative approach to linking teaching, learning, and assessment. Schneller
(2008) noted that service learning is an offshoot of Dewey's theory of experience, describing the
strategy as "pedagogy, curriculum, activities and programmes that embrace organized, hands-on
community service and volunteerism to enhance student learning and the schooling experience"
(p. 294). This aspect of experiential learning culminates a period of learning, giving learners the
opportunity to demonstrate the transfer of learning competencies in similar or new situations in
the school environment and in the community.
The documents will include the program document and teachers' lesson plans with their
reflections. The documents could bear evidence of the teaching, learning, and assessment
strategies that teachers use in planning and delivering lessons. Numerous instructional
strategies are available for use in the classroom (see the Teacher Questionnaire, Appendix B).
Planning is important since "without a careful plan for presenting content, [students'] experience
may be akin to a jigsaw puzzle" (Gunter, Estes, & Schwab, 2003, p. 39). Planning the
procedures portion of a lesson plan requires teachers to select appropriate strategies to meet the
identified needs, interests, motivations, and dispositions of learners. Teachers should consider
learner characteristics and learning styles when choosing an instructional strategy (Hunt,
Wiseman, & Touzel, 2009).
Sub-question 3. What problems are teachers experiencing?
The questionnaire should provide further evidence about the delivery of the curriculum.
Although the program is prescriptive, some variables might still affect the implementation
process, including the allotted periods, the scope of the topics, the available resources, the
appropriateness of the content for the prescribed grade level, and the learning environment. The
evaluation could highlight the difficulty or ease with which teachers were able to implement the
objectives and content of the program. The analysis should demonstrate the teaching, learning,
and assessment strategies used most to achieve the goals, and those used less or not at all.
Interview data (including focus group data) from teachers and parents could support the
possible findings above and the improvement of the SSEDC program. Difficulties and
challenges teachers experience may result in gaps that hinder the achievement of the SSEDC
program goal and objectives. This would be evident in teachers' knowledge of the SSEDC goal,
the application of student-centered experiential activities, and traditional and performance-based
assessment strategies. The preceding analysis, including the identification of difficulties teachers
are experiencing and their suggestions for improvement, will provide the evidence to design and
implement the improvement plan. This evidence should be the basis on which to recommend
strategies to improve the following:
1. teachers' understanding of the goal of SSEDC;
2. assistance from personnel in the curriculum development unit, including the director and
supervisors;
3. teacher preparation procedures for planning, delivery, and assessment of SSEDC; and
4. involvement of teachers in further revision of the SSEDC program.
Reporting
Collaboration is important and communication is even more important, especially for the
evaluator to communicate with the implementers, the goal, theory, and procedures for the
evaluation and sharing and discussing findings (Jason, 2008). Reporting of evaluation findings
requires the development of a communication plan, outlining schedule of presentations. A draft
report to the heads of the Ministry of Education such as the Minister and the Director of
Education will clarify misconceptions or provoke responses to any conflicting results (Llosa &
27. Instructional Program Evaluation Plan 27
Slayton, 2009). The evaluator should add a personal touch by making the live presentation,
using supporting aids to clarify points and keep the interests of the other stakeholders (Posavac
& Carey, 2007). Llola and Slayton emphasized “…findings be communicated appropriately and
convincingly to stakeholders, so the recommendations would be considered and not
dismissed”(p. 37).
Program evaluation is iterative, ongoing, and cyclical, serving different purposes and
contributing to communication, collaboration, and learning in an organization (Jason, 2008). The
main goal of SSEDC is to foster democratic citizenship competencies, practices, and social
change. The causal relationship between resources, activities, and outputs should influence the
change observed in the users over time. This proposed evaluation plan is a formative
process evaluation of social studies education for democratic citizenship (SSEDC). Its purpose
is to examine how teachers are implementing SSEDC in Grades K-6 at private and public
primary schools. The participants of interest are (a) teachers randomly selected from four
schools of each type, and (b) supervisors and other influential stakeholders. The mixed method
design supports the collection and analysis of qualitative and quantitative data gathered through
questionnaires, observations, and interviews. Content analysis and descriptive analysis are the
statistical approaches applied to identify possible findings. The evidence will support negative
and positive findings about SSEDC goals; teaching, learning, and assessment strategies; and
difficulties experienced during implementation. These findings, the consequent conclusions, and
participants' suggestions will form the basis for recommending improvement strategies,
including stakeholder involvement, continuous training, and accountability measures. Reporting
evaluation findings requires effective planning, collaboration, communication, and presentation
strategies to ensure that the client and stakeholders see merit in the evaluation as stated in the
purpose.
References
Amadeo, J., & Cepeda, A. (2007). National policies on Education for Democratic Citizenship in
the Americas. Draft analytic Report. Washington D.C.: Inter-American Program on
Education for Democratic Values and Practices. Organization of American States (OAS).
Beaumier, J-P., Marchand, C., Simoneau, C., & Savard, D. (2000). The institutional evaluation
guide. Commission D'évaluation de L'enseignement Collegial at its 103rd meeting in
Québec City.
Borg, W. R., & Gall, M. D. (1998). Educational research: An introduction (5th ed.). New York:
Longman.
Burnstein, J. H., Hutton, L. A., & Curtis, R. (2006). The state of elementary social studies
teaching in one urban district. Journal of Social Studies Research, 30(1), 15-20.
Chen, H. (2005). Practical program evaluation: Assessing and improving planning,
implementation, and effectiveness. Thousand Oaks, CA: Sage.
Cojocaru, S. (2009). Clarifying the theory-based evaluation. Review of Research and Social
Intervention, 26, 76-86. Retrieved from
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1448390
Creswell, J. (2005). Educational research: Planning, conducting, and evaluating quantitative
and qualitative research (2nd ed.). New Jersey: Pearson, Merrill Prentice Hall.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education,
(145), 79-86.
Fontaine, C., Haarman, A., & Schmid, S. (2006). Stakeholder theory. Retrieved from
http://www.edalys.fr/documents/Stakeholders%20theory.pdf
Gallavan, N. P. (2008). Examining teacher candidates' views on teaching world citizenship.
Social Studies, 99(6), 249-254.
Gard, C.L., Flannigan, P. N., & Cluskey, M. (2004). Program evaluation: An ongoing
systematic process. Nursing Education Perspectives, 25(4), 176-179.
Grammatikopoulos, V., Zachopoulou, E., Tsangaridou, N., Liukkonen, J., & Pickup, I. (2008).
Applying a mixed method design to evaluate training seminars within an early childhood
education project. Evaluation & Research in Education, 21(1), 4-17.
Gunter, M.A., Estes, T. H., & Schwab, J. (2003). Instruction: A models approach (4th ed.).
Boston: Pearson Education Inc.
Hunt, G. H., Wiseman, D. G., & Touzel, T. J. (2009). Effective teaching: Preparation and
implementation (4th ed.). Illinois: Charles C. Thomas Publisher, Ltd.
Jason, M.H. (2008). Evaluating programs to increase student achievement (2nd ed.). Thousand
Oaks, CA: Sage.
Llosa, L., & Slayton, J. (2009). Using program evaluation to inform and improve the education
of young English language learners in US schools. Language Teaching Research 13(1),
35–54.
McCawley, F. (2001). The logic model for program planning and evaluation. Retrieved from
http://www.uiweb.uidaho.edu/extension/LogicModel.pdf
Ministry of Education. (2009). Social studies teachers’ guide: Social studies education for
democratic citizenship. St. John's, Antigua & Barbuda: Curriculum Development Unit.
Ministry of Education. (2009). Draft national curriculum policy framework. St. John's, Antigua
& Barbuda: Curriculum Development Unit.
Neuman, L. W. (2003). Social research methods: Qualitative and quantitative approaches (5th
ed.). Boston: Allyn & Bacon.
Posavac, E. J., & Carey. R. G. (2007). Program evaluation: Methods and case studies (7th ed.).
Upper Saddle River, N.J.: Pearson/Prentice Hall.
Roberts, J. W. (2005). Disney, Dewey, and the death of experience in education. Education and
Culture, 21(2), 12-30.
VanTassel-Baska, J., Feng, A., MacFarlane, B., Heng, M., Wong, M. L., Quek, C.G., & Khong,
B. C. (2008). A cross-cultural study of teachers' instructional practices in Singapore and
the United States. Journal for the Education of the Gifted, 31(3), 338-363.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. L. (2011). The program evaluation
standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA:
Sage.
Appendix A
Classroom Observation Schedule
Date _______________________________ Grade _______________________________
School: ______________________________
Teacher _____________________________ Topic _______________________________
A. Tick the appropriate column (YES / NO)
1. Teacher is using the Social Studies Curriculum ____ ____
2. Teacher has a completed lesson plan ____ ____
3. Lesson plan incorporates a variety of teaching learning activities ____ ____
4. Activities are in keeping with those suggested in curriculum ____ ____
5. Teacher seems comfortable with activities suggested in curriculum ____ ____
6. Materials are appropriate to the lesson ____ ____
7. Materials are used appropriately during the lesson ____ ____
8. Students respond positively to lesson activities ____ ____
9. Students are active participants in the lesson ____ ____
10. Activities are planned to cater for students' individual differences ____ ____
11. Lesson objectives are achieved ____ ____
12. Students are assessed in a variety of ways ____ ____
13. Methods of assessment are clear and appropriate ____ ____
B. Elaborate or comment on any of your observations. Suggest support that could help the teacher
improve.
_____________________________________________________________________________________________
_____________________________________________________________________________________________
_____________________________________________________________________________________________
_____________________________________________________________________________________________
Appendix B
Teacher Questionnaire
Teacher: ________________________________________________________________
School: _________________________________________________________________
Class:
Term: __________________________________________________________________
Section A
i. How often do you use the social studies education for democratic citizenship (SSEDC) instructional guide?
Always Sometimes Never
ii. Lessons contain realistic teaching time frames. Yes _____ No ______
iii. Number of teaching lessons/activities. Sufficient ____ Insufficient ____
iv. Number of available resources listed. Sufficient ____ Insufficient ____
v. Content for the level of teaching? Appropriate ____ Inappropriate ____
Section B
1. What objectives did you cover this term?
[use unit & objective numbers]
2. What content was difficult to teach?
3. What content was easy to teach?
Section C: Strategies/methods
1. Which teaching-learning strategies or activities do you use?
Research ____
Grouping ____
Peer teaching ____
Investigation ____
Simulations ____
Role Play ____
Dramatization ____
Community Service Learning ____
Lecture ____
Reading textbook ____
Project ____
Poster ____
Chart ____
Poem/song ____
Displays ____
Exhibitions ____
Questioning ____
Field trip ____
Journal ____
Discussion ____
Vocabulary development ____
Presentation ____
Notes ____
Class work ____
2. Which assessment methods do you use?
Journals ____
Investigation & Projects ____
Observation ____
Oral assessment ____
Pencil & Paper Tests/exercises ____
Worksheets ____
Practical / Performance Assessment ____
Portfolio Assessment ____
Peer assessment ____
Questionnaires ____
Community Service Learning ____
Section D
Respond to the following:
1. Describe TWO main difficulties you encounter in using the curriculum/program guide.
2. State THREE suggestions for improving the curriculum
3. Explain the importance of SSEDC in the national curriculum.
4. Other comments. [e.g. your feelings, your practice, and students' responses]
Appendix C
Interview
1. How can the curriculum development unit assist with the teaching of SSEDC?
2. Is the focus for democratic citizenship clear to the teachers?
3. Is the goal of democratic citizenship clearly outlined in the guide?
4. How helpful is the interaction with supervisors?
5. What comments do you have about preparation of teachers for planning and delivering SSEDC?
6. What comments do you have about your involvement in the development of the program?