Careful evaluation of effects (Herrera et al., 2007)
Example: Summer Institute
Goal: To improve services for youth who participate in formal youth mentoring programs
Premise: A sustained dialogue between experienced professionals and researchers stimulates research with relevance to the field and enhances its translation to practical application
Strategy: A direct relationship between researchers and professionals
Model: A series of highly interactive discussions that provide an in-depth view of the research and examine its implications for program policies and practices
Professionals eager for cutting edge research and exchange of information and ideas
Researchers who want work to be relevant and want findings translated into improvements in services for youth
Small-group format to encourage active exchange
Ample time to think critically and creatively about issues and explore opportunities for innovation
Get away and go back to school
Mixing within and between roles
Mixture model: advocates, program leaders, researchers, and you
Professional development and career
Training tied to transitions (Caplan & Curry, 2001)
Internship: Transition from student to worker
Transfer knowledge from class to real-life
Entry-level: Transition to professional
Learn skills and tasks of position
Leadership development: Transition to leader in organization
Preparation for supervision and management
Master practitioner: Transition to leader in the field
Special opportunities for experienced professionals
Wisdom and insight to share at institute
Wisdom and insight to share in communities
Transfer of learning
Hold positions of influence
Training and supervision of staff
Development of program models
Implementation of service delivery changes
Think about new program models, program policies and procedures, training materials for program staff, training materials for volunteer mentors and youth participants
Summer Institute aims
Contribute to the development of policy and practice in the field of youth mentoring
Convene leaders in the field for substantive discussion of practices, policies, and new directions
Create new networks of peer relationships among professionals and researchers from different programs and backgrounds
Promote professional identity and commitment of participants and researchers
Moving forward: idea → action → observation
Support of advocacy agencies providing training and technical assistance?
Yes! (support with plans and announcements)
Interest on part of researchers?
Yes! (eager to attend)
Interest on part of professionals?
Yes! (competitive application process)
Ability to reach target participants?
Yes! (look around)
Respond to questions at end of institute
Respond to questions after 6-12 months
What types of opportunities for collaboration among colleagues (research-practice, practice-practice, research-research) do you see emerging from the institute: a) this week and b) continuing into the future?
What types of initiatives or changes could you undertake upon returning to your program?
How will you share the new information and ideas with others in your agency or community?
Assignment for each presentation
Summarize (few sentences)
Follow-up questions (2-3)
Implications for program (example)
Year 1: School-based Mentoring
Reprise in year 3:
Symposium on Monday
Guest speakers on Friday
Year 2: Diversity in Mentoring
Year 3: Use of Evidence for Practice
MENTOR: Elements of Effective Practice, 3rd Ed.
BBBSA program models (SBM, CBM)
Standards and accreditation discussions
WT Grant Distinguished Fellows
Use of high quality research in practice
Marc Wheeler, VP Programs BBBS Alaska (SIYM alum) to PSU
David DuBois, to BBBSA
VP of Program/Program Director (18%)
Program Coordinator (25%)
Mostly with BBBS agencies (54%)
Average employment in field=8.9 yrs.
Survey: current assessment
Current level of EB decision making in field
A lot more needed (30%)
Somewhat more needed (55%)
About right (10%)
Personally comfortable using EB decision making
Very/somewhat uncomfortable (26%)
Somewhat comfortable (33%)
Very comfortable (32%)
Survey: type of data
Use of published research
Small steps (37%)
Creating systems (16%)
Have systems to routinely do this (37%)
Use of internal agency data
Small steps (31%)
Creating systems (22%)
Have systems to routinely do this (40%)
Survey: evidence used
Use of the following (Extensive / Substantial / Somewhat / Little or none):
Performance data on program operation/outcome: 33% / 40% / 21% / 7%
Data on local trends/needs: 16% / 40% / 31% / 13%
Professional experience/expertise: 27% / 50% / 17% / 7%
Data on client/stakeholder preferences: 14% / 42% / 28% / 17%
Published external research: 11% / 34% / 37% / 18%
Survey: reasons to use evidence
Goals (Very important / Somewhat important / Neutral / Unimportant):
Focus resources on effective areas: 59% / 33% / 4% / 4%
Design new programs for specific populations: 49% / 38% / 8% / 6%
Guide decisions of board and stakeholders: 47% / 41% / 6% / 6%
Prevent negative outcomes for youth: 60% / 26% / 8% / 7%
Demonstrate impact to funders: 78% / 14% / 3% / 4%
Improve youth outcomes: 83% / 11% / 1% / 5%
Survey: Greatest needs (very imp.)
How to analyze data and report findings (59%)
Step-by-step guide for EB decision making (56%)
How to find/select measures/metrics (56%)
How to find, read, use existing research (54%)
Description of different types of evidence (42%)
How to collaborate with researchers/colleagues on EB decision making (38%)
Glossary of common research terms (37%)
Guidance on ethical issues with research (36%)
Description of scientific method and applications (29%)
Research principles and statistical analysis, briefly…
What works for whom under what circumstances? Why?
Does program work better/differently for certain types of mentees (age, gender, race, stress, aptitude)?
Does program work better/differently in certain settings (community, school, etc.)?
Does program work better/differently with certain types of volunteers (age, gender, occupation, personality)?
What are the essential processes that yield the results?
Reflects complexity of experience
Captures contexts and processes
Good for discovering what is happening
Good for within-system view
Translating results to others
Reflects clear definitions and theories
Captures relations between variables
Good for demonstrating what is happening
Good for comparisons
Framing the issue
Criteria for selecting “outcomes”
Outcome can reasonably be expected to change during period, given intensity of intervention
Proximal (intervening) vs distal (ultimate)
Number of other uncontrolled factors
Portion of variance explained
Outcome is measurable and assessment is sensitive enough to detect likely change
Clearly and narrowly defined
Measures: reliability and validity (diagram of repeated measurements)
Defining population of interest
Random sampling (vs. assignment)
Each individual has equal chance of being selected
Population parameter vs. sample statistic
Inferences apply to population
Unlikely to get a sample statistic exactly equal to population parameter (sampling error).
Imagine a hypothetical sampling distribution if you took multiple samples from population and plotted all the sample statistics.
Central Limit Theorem:
If a population has a mean of μ and a standard deviation of σ, then the sampling distribution of the mean for samples of size N will have a mean of μ and a standard deviation of σ/√N, and will approach a normal distribution as N becomes larger (regardless of the population's distribution).
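The σ/√N claim can be checked by simulation. Below is a minimal sketch (Python standard library only; the population parameters and sample sizes are illustrative choices, not from the source) that draws many samples from a skewed exponential population and compares the spread of the sample means to σ/√N.

```python
import random
import statistics

random.seed(42)

# Illustrative skewed population: exponential with rate lam,
# so mu = sigma = 1/lam (here both equal 2.0).
lam = 0.5
N = 50              # size of each sample
num_samples = 5000  # number of repeated samples drawn

# Build the (empirical) sampling distribution of the mean.
sample_means = [
    statistics.mean(random.expovariate(lam) for _ in range(N))
    for _ in range(num_samples)
]

mu = sigma = 1 / lam
print(statistics.mean(sample_means))   # close to mu = 2.0
print(statistics.stdev(sample_means))  # close to sigma / sqrt(N) ≈ 0.28
```

Even though the population is strongly skewed, a histogram of `sample_means` would look approximately normal, which is the CLT's point.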
We can determine the true effect of a program (or experience) if we compare what happens to an individual who is in the program versus what would have happened to that individual if he or she were not in the program (impossible in one lifetime)
We always lack the ideal counterfactual (the outcome in the "what if" situation)
Missing data solution
Compare participants to non-participants who are as similar as possible in every way except for having or not having the intervention
How do we get comparison group?
Experimental design (optimal)
Researcher controls exposure to intervention through random assignment to intervention
Researcher tries to get a non-participant comparison group that is as equal/similar as possible
Random assignment means everyone has an equal chance of being in the program.
Imagine one dimension is motivation to succeed: with random assignment, we would have an equal distribution of low, middle, and high motivation among participants and the "control group".
With random assignment, this would be true on EVERY dimension (observed and unobserved).
Without random assignment, the groups may differ on important dimensions. For example, program participants may have higher motivation to succeed than non-participants; that's why they signed up.
Experimental design
Post-test only (assume equivalence at pre-test):
Program group: post-test Xp
Control group: post-test Xc
Pre-test and post-test:
Program group: pre-test X, post-test Xp
Control group: pre-test X, post-test Xc
Test of effect = mean(Xp) – mean(Xc)
Experimental comparison (figure): pre-test to post-test trajectories for the program group and the control group.
No control group (figure): pre-test to post-test trajectory for the program group only; the control group's trajectory is unobserved.
Comparing Group Means
State a null hypothesis (e.g., X̄₁ – X̄₂ = 0)
Create sampling distribution for difference between means.
Compare observed difference between means to null hypothesis.
If difference is relatively small, it could be due to sampling error (p>.05, FAIL to reject null).
If difference is relatively large, it is unlikely due to sampling error (p<.05, REJECT null) and conclude actual difference exists.
Conclusions are based on probabilities
Conclusions can be incorrect
Reject null hypothesis when we shouldn't (Type I error: a fluke sample)
Fail to reject null hypothesis when we should (Type II error: don't detect actual difference)
Low statistical power, need larger sample
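The steps above can be sketched with a permutation test, which builds the sampling distribution of the difference between means directly, without distributional formulas. The data here are simulated for illustration (group sizes, means, and the seed are all assumptions, not from the source).

```python
import random
import statistics

random.seed(7)

# Illustrative post-test scores for two groups with a real underlying difference.
program = [random.gauss(60, 10) for _ in range(40)]
control = [random.gauss(50, 10) for _ in range(40)]

observed = statistics.mean(program) - statistics.mean(control)

# Under the null hypothesis, group labels are interchangeable, so reshuffling
# labels many times approximates the sampling distribution of the difference.
pooled = program + control
reps = 5000
extreme = 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:40]) - statistics.mean(pooled[40:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / reps
print(observed)  # large relative to the shuffled differences
print(p_value)   # with this large simulated effect, typically well below .05
```

A small p-value says the observed difference would rarely arise from sampling error alone, so we reject the null; a large one means we fail to reject (not that the null is proven).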
No imposed difference between groups--naturalistic observation
Can see how certain variables correspond
Multiple regression (can evaluate one factor controlling for others)
Sampling distribution for each estimate (assumption of no association)
Causal inference depends on several considerations (temporal order, ruling out other explanations, etc.)
Orienting frameworks: central ideas…
Universal processes vs. individual differences
Continuity and change
Change possible at any time
Change constrained by prior adaptation
Diversity in process and outcome
Developmental adaptation. Source: Sroufe, L. A. (1997). Psychopathology as an outcome of development. Development & Psychopathology, 9, 251-268.
What distinguishes relationships?
(Laursen & Bukowski, 1997)
Voluntary, kinship, committed
Resources, experience/knowledge, rank
Male-male, female-female, cross-gender
Relationship dimensions (voluntary/mutual vs. permanent/obligation; equal vs. unequal social power)
Equal social power (horizontal): Friend (voluntary) / Cousin (permanent)
Unequal social power (vertical): Mentor (voluntary) / Parent (permanent)
Stage model (stage: conceptual features; program practices)
Contemplation: anticipating and preparing for relationship; recruiting, screening, training
Initiation: beginning relationship and becoming acquainted; matching, making introductions
Growth and maintenance: meeting regularly and establishing patterns of interaction; supervising and supporting, ongoing training
Decline and dissolution: addressing challenges to relationship or ending relationship; supervising and supporting, facilitating closure
Redefinition: negotiating terms of future contact or rejuvenating relationship; facilitating closure, rematching
Human beings of all ages are happiest and able to deploy their talents to best advantage when they are confident that, standing behind them, there are one or more trusted persons who will come to their aid should difficulties arise.