What types of opportunities for collaboration among colleagues (research-practice, practice-practice, research-research) do you see emerging from the institute: a) this week and b) continuing into the future?
What types of initiatives or changes could you undertake upon returning to your program?
How will you share the new information and ideas with others in your agency or community?
Survey: evidence used

Use of the following:                           Extensive  Substantial  Somewhat  Little/none
Performance data on program operation/outcome      33%        40%         21%        7%
Data on local trends/needs                         16%        40%         31%       13%
Professional experience/expertise                  27%        50%         17%        7%
Data on client/stakeholder preferences             14%        42%         28%       17%
Published external research                        11%        34%         37%       18%
Survey: reasons to use evidence

Goals:                                        Very important  Somewhat important  Neutral  Unimportant
Focus resources on effective areas                 59%              33%             4%        4%
Design new programs for specific populations       49%              38%             8%        6%
Guide decisions of board and stakeholders          47%              41%             6%        6%
Prevent negative outcomes for youth                60%              26%             8%        7%
Demonstrate impact to funders                      78%              14%             3%        4%
Improve youth outcomes                             83%              11%             1%        5%
We are unlikely to get a sample statistic exactly equal to the population parameter; the difference is sampling error.
Imagine a hypothetical sampling distribution: take many samples from the population and plot all of the sample statistics.
Central Limit Theorem:
If a population has a mean of μ and a standard deviation of σ, then the sampling distribution of the mean based on samples of size N will have a mean of μ and a standard deviation of σ/√N, and it will approach a normal distribution as N becomes larger, regardless of the shape of the population distribution.
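The theorem can be checked with a short simulation. This is a minimal sketch using only the standard library; the skewed exponential population (mean 2, standard deviation 2) and the sample sizes are illustrative assumptions, not values from the institute materials.

```python
# Central Limit Theorem simulation sketch (hypothetical numbers).
import random
import statistics

random.seed(42)

N = 100             # size of each sample
NUM_SAMPLES = 2000  # how many samples we draw

# Skewed (non-normal) population: exponential with mean 2 and sd 2.
population_mean = 2.0

# Draw many samples and record each sample's mean.
sample_means = [
    statistics.mean(random.expovariate(1 / population_mean) for _ in range(N))
    for _ in range(NUM_SAMPLES)
]

# The sampling distribution of the mean should center on mu = 2.0
# and have standard deviation sigma / sqrt(N) = 2 / 10 = 0.2.
print(round(statistics.mean(sample_means), 2))   # close to 2.0
print(round(statistics.stdev(sample_means), 2))  # close to 0.2
```

Even though the population is strongly skewed, a histogram of `sample_means` would look approximately normal, which is the point of the theorem.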
We could determine the true effect of a program (or experience) only if we could compare what happens to an individual who is in the program with what would have happened to that same individual if he or she were not in the program (impossible in one lifetime).
We always lack this ideal counterfactual (the outcome in the "what if" situation).
Missing data solution
Compare participants to non-participants who are as similar as possible in every way except for having or not having the intervention
Researcher controls exposure to the intervention through random assignment.
Researcher tries to construct a non-participant comparison group that is as equal/similar to participants as possible.
Random assignment means everyone has an equal chance of being in the program. Imagine one dimension is motivation to succeed: with random assignment, we would have an equal distribution of low, middle, and high motivation among participants and the "control group." Random assignment makes this true on EVERY dimension, observed and unobserved. Without random assignment, the groups may differ on important dimensions; for example, program participants may have higher motivation to succeed than non-participants, which is why they signed up.
Experimental design

Post-test only (assume equivalence)
            Pre-test   Post-test
Program        -          Xp
Control        -          Xc

Pre-test and post-test
            Pre-test   Post-test
Program        X          Xp
Control        X          Xc

Test of effect = mean(Xp) - mean(Xc)
[Figure: Experimental comparison. Pre-test and post-test trajectories for the program group and the control group.]
[Figure: No control group. Pre-test and post-test trajectory for the program group, with the (unobserved) control-group trajectory shown for reference.]
Stage model

Stage                    Conceptual features                                                Program practices
Contemplation            Anticipating and preparing for relationship                       Recruiting, screening, training
Initiation               Beginning relationship and becoming acquainted                    Matching, making introductions
Growth and maintenance   Meeting regularly and establishing patterns of interaction        Supervising and supporting, ongoing training
Decline and dissolution  Addressing challenges to relationship or ending relationship      Supervising and supporting, facilitating closure
Redefinition             Negotiating terms of future contact or rejuvenating relationship  Facilitating closure, rematching
Human beings of all ages are happiest and able to deploy their talents to best advantage when they are confident that, standing behind them, there are one or more trusted persons who will come to their aid should difficulties arise.