This document discusses different evaluation design approaches, including quantitative, qualitative, and mixed methods. It details the design process and common quantitative and qualitative approaches such as experimental, quasi-experimental, and case study designs, and it outlines specific qualitative data collection methods such as interviews, focus groups, and observations. The benefits of mixed-methods designs are highlighted: combining quantitative and qualitative data can increase the validity and understanding of evaluation findings.
2. What Is Evaluation Design?
• The plan for an evaluation project is called a "design".
• Choosing a design is a particularly vital step in providing an appropriate assessment.
• A good design offers an opportunity to maximize the quality of the evaluation, and it helps minimize and justify the time and cost necessary to perform the work.
3. Design Process
• 1. Identifying evaluation questions and issues
• 2. Identifying research designs and comparisons
• 3. Sampling methods
• 4. Data collection instruments
• 5. Collecting and coding qualitative data
5. Quantitative Approach
• Quantitative data can be counted, measured, and reported in numerical form, and can answer questions such as who, what, where, and how much.
• The quantitative approach is useful for describing concrete phenomena and for statistically analyzing results.
• Data collection instruments can be used with large numbers of study participants.
• Data collection instruments can be standardized, allowing for easy comparison within and across studies.
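To make this concrete, here is a minimal sketch (mine, not part of the original slides) of how standardized scores from many participants might be reported in numerical form; the sites and scores are invented for illustration:

```python
# A minimal sketch: summarizing hypothetical standardized instrument
# scores in numerical form (counts, means, and spread per site).
import pandas as pd

scores = pd.DataFrame({
    "site":  ["A", "A", "B", "B", "B"],        # hypothetical schools
    "score": [72.0, 85.5, 64.0, 90.0, 78.5],   # hypothetical instrument scores
})

# Because the instrument is standardized, the same summary can be
# compared within and across studies.
summary = scores.groupby("site")["score"].agg(["count", "mean", "std"])
print(summary)
```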
6. Experimental
• Experimental designs tend to be rigorous in that they control for external factors and enable you to argue, with some degree of confidence, that your findings are due to the effects of the program rather than other, unrelated factors.
• They are rarely applicable in educational settings, where there is a chance that students may be denied an opportunity to participate in a program because of the evaluation design.
Quasi-Experimental
• Quasi-experimental designs are those in which participants are matched beforehand, or after the fact, using statistical methods.
• These studies offer a reasonable solution for schools or districts that cannot randomly assign students to different programs but still desire some degree of control so that they can make statistical statements about their findings (a matching sketch follows this slide).
Time-series Study
• It is intended to demonstrate trends or changes over time.
• The purpose of the design is not to examine the impact of an intervention, but simply to explore and describe changes in the construct of interest.
• Time series offer more data points, but they allow little control over extraneous factors.
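As an illustration (mine, not from the slides): after-the-fact matching can be as simple as pairing each program participant with the comparison student whose pretest score is closest. The scores below are hypothetical; real evaluations typically match on several covariates or a propensity score.

```python
# A minimal sketch of after-the-fact matching on a single covariate
# (pretest score). All numbers are hypothetical.
import numpy as np

pretest_program = np.array([55.0, 68.0, 74.0])                 # participants
pretest_candidates = np.array([50.0, 57.0, 66.0, 75.0, 80.0])  # possible comparisons

# Pair each participant with the nearest-neighbor comparison student.
# This simple version matches "with replacement": a comparison student
# may be reused for more than one participant.
for t in pretest_program:
    j = int(np.argmin(np.abs(pretest_candidates - t)))
    print(f"participant pretest {t:5.1f} -> matched comparison pretest {pretest_candidates[j]:5.1f}")
```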
7. Cross-sectional
• It is intended to show a snapshot in time.
• It might be used to answer questions like:
  - What do parents think about our school?
  - What do parents see as the strengths and weaknesses of the school environment?
Case-studies
• Case studies are those which seek to follow program implementation or impact on an individual, group, or organization, such as a school or classroom.
• Case studies are an excellent way to collect evidence of program effectiveness, to increase understanding of how an intervention is working in particular settings, and to inform a larger study to be conducted later.
8. Types of Experimental Design
Post-test only design
• The least complicated of the experimental designs.
• It has three steps (see the code sketch after this slide):
  1) decide what comparisons are desired and meaningful;
  2) make sure the students in two or more comparison groups are similar;
  3) collect the information after the posttest to determine whether differences occurred.
Pre-post design
• It is employed when a pretreatment measure can supply useful information.
• This design is common to use in the field-trial stage.
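A hypothetical illustration of the analysis each design usually implies (my sketch, not from the slides): a post-test-only design compares two similar groups once, after the program, while a pre-post design compares each participant's own pretest and posttest. The scores are invented:

```python
# A minimal sketch contrasting the analyses behind the two designs.
# All scores are hypothetical.
from scipy import stats

# Post-test only: one measurement, two similar groups, independent t-test.
posttest_program = [78, 85, 82, 90, 74]
posttest_comparison = [70, 72, 80, 69, 75]
t_ind, p_ind = stats.ttest_ind(posttest_program, posttest_comparison)

# Pre-post: the pretreatment measure supplies a baseline, so each
# participant is compared with themselves via a paired t-test.
pretest = [60, 65, 58, 72, 70]
posttest = [68, 70, 66, 75, 78]
t_rel, p_rel = stats.ttest_rel(posttest, pretest)

print(f"post-test only: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"pre-post:       t = {t_rel:.2f}, p = {p_rel:.3f}")
```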
9. Example:
• Equilibrium Effects of Education Policies: A Quantitative Evaluation
By: Giovanni Gallipoli, Costas Meghir, Giovanni L. Violante
The paper compares partial and general equilibrium effects of alternative education policies on the distribution of education and earnings. The numerical counterpart of the model, parameterized through a variety of data sources, yields education enrollment responses which are broadly in line with reduced-form estimates. Through numerical simulations, they compare the effects of alternative policy interventions on optimal education decisions, inequality, and output. It is a kind of quasi-experimental design.
10. Qualitative Approach
• Qualitative data are reported in narrative form.
• The qualitative approach can provide important insights into how well a program is working and what can be done to increase its impact.
• Qualitative data can also provide information about how participants – including the people responsible for operating the program as well as the target audience – feel about the program.
• It promotes understanding of diverse stakeholder perspectives (e.g., what the program means to different people).
• Stakeholders, funders, policymakers, and the public may find quotes and anecdotes easier to understand and more appealing than statistical data.
12. Observation
• Observational techniques are methods by which an individual or individuals gather firsthand data on programs, processes, or behaviors being studied.
• They provide evaluators with an opportunity to collect data on a wide range of behaviors, to capture a great variety of interactions, and to openly explore the evaluation topic.
• By directly observing operations and activities, the evaluator can develop a holistic perspective, i.e., an understanding of the context within which the project operates.
• Observational approaches also allow the evaluator to learn about things the participants or staff may be unaware of or that they are unwilling or unable to discuss in an interview or focus group.
13. When to use observations
• Observations can be useful during both the formative and summative phases of evaluation. For example, during the formative phase, observations can be useful in determining whether or not the project is being delivered and operated as planned.
• In the hypothetical project, observations could be used to describe the faculty development sessions, examining the extent to which participants understand the concepts, ask the right questions, and are engaged in appropriate interactions.
• Observations during the summative phase of evaluation can be used to determine whether or not the project is successful. The technique would be especially useful in directly examining teaching methods employed by the faculty in their own classes after program participation.
14. Interviews
• Interviews provide very different data from observations: they allow the evaluation team to capture the perspectives of project participants, staff, and others associated with the project.
• In the hypothetical example, interviews with project staff can provide information on the early stages of the implementation and problems encountered.
• An interview, rather than a paper-and-pencil survey, is selected when interpersonal contact is important and when opportunities for follow-up of interesting comments are desired.
• Two types of interviews are used in evaluation research: structured interviews, in which a carefully worded questionnaire is administered, and in-depth interviews, in which the interviewer does not follow a rigid form.
15. Contd..
Structured interviews:
• The emphasis is on obtaining answers to carefully phrased questions.
• Interviewers are trained to deviate only minimally from the question wording to ensure uniformity of interview administration.
In-depth interviews:
• The interviewers seek to encourage free and open responses, and there may be a trade-off between comprehensive coverage of topics and in-depth exploration of a limited set of questions.
• In-depth interviews also encourage capturing of respondents' perceptions in their own words. This allows the evaluator to present the meaningfulness of the experience from the respondent's perspective.
• In-depth interviews are conducted with individuals or with a small group of individuals.
16. When to use interviews
• Interviews can be used at any stage of the evaluation process. They are especially useful in answering questions such as those suggested by Patton (1990):
• What does the program look and feel like to the participants? To other stakeholders?
• What are the experiences of program participants?
• What do stakeholders know about the project?
• What thoughts do stakeholders knowledgeable about the program have concerning program operations, processes, and outcomes?
• What are participants' and stakeholders' expectations?
• What features of the project are most salient to the participants?
• What changes do participants perceive in themselves as a result of their involvement in the project?
17. Focus Groups
• Focus groups combine elements of both interviewing and participant observation.
• The focus group session is, indeed, an interview, not a discussion group, problem-solving session, or decision-making group (Patton, 1990).
• The hallmark of focus groups is the explicit use of the group interaction to generate data and insights that would be unlikely to emerge without the interaction found in a group.
• A focus group is a gathering of 8 to 12 people who share some characteristics relevant to the evaluation. Focus groups were originally used as a market research tool to investigate the appeal of various products.
18. Contd.
• The focus group technique has been adopted by other fields, such as education, as a tool for data gathering on a given topic.
• Focus groups conducted by experts take place in a focus group facility that includes recording apparatus (audio and/or visual) and an attached room with a one-way mirror for observation. There is an official recorder who may or may not be in the room.
• Participants are paid for attendance and provided with refreshments.
19. When to use focus groups
• When conducting evaluations, focus groups are useful in answering the same type of questions as in-depth interviews, except in a social context.
• Specific applications of the focus group method in evaluations include:
  - identifying and defining problems in project implementation;
  - identifying project strengths, weaknesses, and recommendations;
  - assisting with interpretation of quantitative findings;
  - obtaining perceptions of project outcomes and impacts; and
  - generating new ideas.
20. Other Qualitative Methods
• Document Studies: Guba and Lincoln (1981) defined a document as "any written or recorded material" not prepared for the purposes of the evaluation or at the request of the inquirer. Documents can be divided into two major categories: public records and personal documents.
• Key Informant: A key informant is a person (or group of persons) who has unique skills or professional background related to the issue/intervention being evaluated, is knowledgeable about the project participants, or has access to other information of interest to the evaluator.
• Key informants can help the evaluation team better understand the issue being evaluated, as well as the project participants, their backgrounds, behaviors, and attitudes, and any language or ethnic considerations. They can offer expertise beyond the evaluation team. They are also very useful for assisting with the evaluation of curricula and other educational materials. Key informants can be surveyed or interviewed individually or through focus groups.
21. Example:
• A Qualitative Evaluation Process for Educational Programs Serving Handicapped Students in Rural Areas
By: LUCILLE ANNESEZEPH
The paper describes a qualitative methodology designed to evaluate special education programs in rural areas serving students with severe special needs. A rationale is provided for the use of the elements of aesthetic criticism as the basis of the methodology, and specific descriptions of the steps for its implementation and validation are provided. Some practical limitations and particular areas of usefulness are also discussed.
22. Mixed Methods
• In recent years, evaluators of educational and social programs have expanded their methodological repertoire with designs that include the use of both qualitative and quantitative methods. Such practice, however, needs to be grounded in a theory that can meaningfully guide the design and implementation of mixed-method evaluations.
• In many cases a mixture of designs can work together as a design for evaluating a large, complex program.
• The ideal evaluation combines quantitative and qualitative methods. A mixed-method approach offers a range of perspectives on a program's processes and outcomes.
• For example, the impact of a reading intervention on student performance may be compared for all students in a school over a period of time, using repeated measures from exams administered for this purpose, but the evaluation could also include more focused case studies of particular classes to learn about crucial implementation issues.
23. Benefits
• It increases the validity of your findings by allowing you to examine the same phenomenon in different ways.
• It can result in better data collection instruments. For example, focus groups can be invaluable in the development or selection of a questionnaire used to gather quantitative data.
• It promotes greater understanding of the findings. Quantitative data can show that change occurred and how much change took place, while qualitative data can help you and others understand what happened and why.
• It offers something for everyone. Some stakeholders may respond more favorably to a presentation featuring charts and graphs. Others may prefer anecdotes and stories.
24. Example:
• A Mixed Methods Evaluation of a 12-Week Insurance-Sponsored Weight Management Program Incorporating Cognitive–Behavioral Counseling
By: Christiaan Abildso, Sam Zizzi, Diana Gilleland, James Thomas, and Daniel Bonner
A sequential mixed methods approach was used to assess the physical and psychosocial impact of a 12-week cognitive–behavioral weight management program and to explore factors associated with weight loss. Quantitative data revealed a program completion rate and mean percentage weight loss that compare favorably with other interventions, as well as differential psychosocial impacts on those losing more weight. Telephone interviews revealed four potential mechanisms for these differential impacts: (a) fostering accountability, (b) balancing perceived effort and success, (c) redefining "success," and (d) developing cognitive flexibility.
Editor's Notes
The use of interviews as a data collection method begins with the assumption that the participants’ perspectives are meaningful, knowable, and able to be made explicit, and that their perspectives affect the success of the project.
The technique inherently allows observation of group dynamics, discussion, and firsthand insights into the respondents’ behaviors, attitudes, language, etc.