This document summarizes two projects focused on research and evaluation in educational settings.
The first project examined how a museum program provided opportunities for parental engagement in their child's schooling. Evaluations looked at participation levels and implementation issues, while research explored the program's ability to engage parents from a low-income urban community.
The second project, called LEAP, piloted and expanded a STEM program across multiple sites. Formative evaluations assessed interest, attitudes, and implementation, as well as individual and institutional capacity. Research goals included identifying learning outcomes and how museums and community centers could address STEM gaps. Key successes included evaluation contributing different perspectives, and lessons about coordinating evaluations across sites. Research implications included examining STEM outcomes for youth and adults and partnerships to support STEM capacity in out-of-school sites.
ITEST Convening: Research & Evaluation
1. ITEST Convening - August 2014
Implementing High Quality
Research and Evaluation
Sukey Blanc, Ph.D.
Creative Research &
Evaluation, LLC
Principal Researcher
Dale McCreedy, Ph.D.
The Franklin Institute
Director of Gender, Adult
Learning & Community
Engagement
3. Framing Evaluation and
Research
Each project positions evaluation as
critical – both formative and summative.
Essential role of evaluation in
conceptualizing research.
Theoretical frames used for research.
Research grew out of commitment to
contribute to the field.
External evaluators played an integral
role during the proposal and research
phases.
5. Evaluation: Participation; Implementation
issues; Progress towards goals.
Research: How does a museum program
provide opportunities for parents in a low-
income, urban community to engage in
their child’s schooling?
Theoretical Frame: Angie Calabrese
Barton’s Ecologies of Parent Engagement
(EPE)
6. Evaluation:
LEAP Pilot:
Interest, science attitudes,
implementation
LEAP FSD:
Capacity of individuals and
institutions
Sustainability of program at sites
Potential for/interest in scale-up
Research:
• Focus on broader LEAP trajectory
• Scale-up (Coburn, 2003) defined with four
inter-related dimensions (depth,
sustainability, spread, & shift in ownership)
8. Evaluation: primarily formative
Research originally conceptualized as:
• identifying learning processes and
outcomes
• exploring how museums and OST
centers could work together to
address gaps in STEM experiences in
underserved communities
9. Key evaluation successes:
• Evaluation is seen by all as key to
project.
• Evaluation data contributes a different
lens on enacted curriculum, site
capacities, and family involvement.
A key evaluation challenge:
• Coordination and communication with
out-of-school sites (this is true for
program staff as well).
10. Implications for research questions and
research design
• STEM outcomes for youth and adults
• Partnerships to support STEM capacity
in out-of-school sites
Research topics/approaches will connect to
literature in the field and will build on
knowledge gained through the process of
the STEM 3D evaluation.
Editor's Notes
The purpose of evaluation is essentially to improve the existing program for the target population, while research is intended to prove a theory or hypothesis. Although both use similar data collection and analysis methods, the two disciplines diverge again during use and dissemination. This relationship can be visualized using an hourglass shape:
We became interested in the often unrecognized and untapped potential of parents through our collaborative work on a parent involvement program designed and implemented at three urban elementary schools in Philadelphia.
drew upon The Ecologies of Parent Engagement (EPE) framework (Calabrese Barton et al., 2004). This framework posits a definitional shift from parent involvement, typically defined in terms of individual actions and behaviors desired by school staff, to parent engagement, defined in terms of parents’ social and contextual relationships relative to others in their child’s schooling. The EPE framework was compelling to us because it allowed us to think about expectations for and the practice of parent involvement in schools and in supporting children within a larger social and cultural context. In addition, it focused on parent involvement in low-income, urban elementary schools, acknowledging that the realities of these parents’ lives often dictate different forms of involvement than those engaged in by middle-class, Caucasian parents. And it takes into account the “what,” “how,” and “why” of involvement, thus offering a holistic model for understanding the mechanisms through which parent involvement occurs.
Sub-questions under this primary research question guided the development of theoretical propositions; they were as follows:
How are parents engaged in their children’s schooling as a result of participation in the museum program?
What were the conditions that facilitated parents’ engagement through the program?
What were the obstacles that hindered parents’ engagement through the program?
How well does the EPE framework, and specifically the constructs of authoring and positioning, describe the mechanisms through which the museum program facilitated parent engagement?
STEM 3D is the most recent project. It started in the spring of 2013.
In the first three months, program staff identified five out-of-school-time centers that were committed to attending professional development, implementing new STEM curriculum in after-school and summer programs, and involving parents in activities about STEM and STEM awareness.
During the first complete year, the centers began implementing problem-based units themed around STEM and STEM careers.
In the upcoming year, the goal is for each center to develop its own problem-based units and to increase parent engagement in the project.
So far, evaluation has been primarily formative. One of the key things that I’ve been able to do as an evaluator is bring an outsider’s lens to the process of program implementation. The staff on the project is also very reflective and thoughtful about program implementation, but there are things that I can bring as an outsider that are hard to do for staff members focused on developing the project. One useful thing was an analysis of the differences between the intended curriculum and the enacted curriculum. By intended curriculum, I mean the materials that the staff and centers use in planning and professional development. By enacted curriculum, I mean what actually happens in activities in the out-of-school sites. The program staff is very aware that there are differences between what they expect to happen and what actually happens in sites, but they value having someone else who is not involved with the program help them look at this systematically and think about how to address problematic gaps.
In this upcoming year, I’ll be continuing formative evaluation, and together with the staff, I’ll also start thinking about how lessons from the formative evaluation can help us refine the research questions we identified in our original proposal. As you can see, the research questions originally fell into two big areas, which we’ll narrow down.
One has to do with big questions about learning and the other one has to do with big questions about capacity building and partnership.
When the project was introduced, facilitators and coordinators at the sites were clear that evaluation would be part of this from the beginning. They know me and they don’t feel like it is an extra burden. I also interact with them, and at times get their input on evaluation issues.
The other success I already mentioned bears repeating: data for formative evaluation is being collected simultaneously with program implementation, and what the evaluation provides is the lens of an outsider who is skilled in assessing educational programs.
One key evaluation challenge is one that is shared by the program staff: out-of-school sites are incredibly over-burdened. The project picked high-capacity sites, but even these are understaffed and always juggling multiple roles. Concretely, this means that sites are not always good about communicating when they are doing what, which means that both staff and evaluator have to be patient and flexible. Of course this is a problem from my perspective. As a professional evaluator, I have lots of different projects, and I can’t necessarily drop what I am doing if a site gives me short notice about an important activity.
The resolution is flexibility of roles. The project staff does share their data with me, both in the form of informal notes and thoughts and in the form of videos. And sometimes I do the same with them.
Moving into research, the fact that we know the sites and the players will let the project staff and me make informed decisions about research related to outcomes: the most meaningful and realistic outcomes to look at (likely to be about STEM careers, at least for youth). Evaluation experience will also help us frame the theoretical framework we use to talk about building partnerships between the science centers and community sites. Especially important is something that I will be investigating in the upcoming year, which has to do with the existence of knowledge brokers in each site who can mediate between the world view of the project staff and the world view of facilitators at community centers. This is coming up first as an issue for implementation and formative evaluation, but it is very likely that it will be a generative theme that can enrich the field as a whole as we move into the research phase.
Just to summarize, there is value in having an outside person provide formative evaluation, and in this project, like the ones Dale talked about, the formative evaluation will help provide grounding for the research phase that comes later.