1. Challenge 3: Developing a D&I Template for Sustainability Across Settings
Mary Daly England, Kelli Gora, Jose Daniel Navarro, Jill Weinstein
NURS 738 – Group 8
College of Nursing, University of Arizona
November 11, 2013
3. Factors affecting adoption
Homophilous versus heterophilous groups
• Innovations penetrate homophilous groups more readily
Passive dissemination versus active dissemination
• Active dissemination is more effective
Characteristics of the intervention or innovation
• Relative advantage and compatibility
Contextual factors
• Norms and attitudes, organizational structure and
process, resources
4. Methods to measure penetration
and adoption
Surveys
Interviews
Administrative data
Observation
Case audits
Checklists
Outcome Indicators
• Individual provider/consumer
• Service providers' acceptance of EBP aimed at reducing child neglect
• Individual provider
• Organization or setting
• RE-AIM/Reach
• Change in knowledge, attitude, behavior, quality of life
• Change in organizational policies and practice
5. Factors to be Considered in Determining Sample Size
Sample size is critical and should be representative of, and generalizable to, the wider population. Factors to consider include:
• Power to reduce Type I and Type II errors
• Variability in the sample
• Significance level
• Level of confidence
• Cost
• Margin of (random) error
• Effect size
(Fox, Hunn & Mathers, 2009; Bacchetti, McCulloch & Segal, 2008)
7. Criteria for Determining Effectiveness of the
Translational Intervention(s)
• Acceptability: satisfaction with the innovation (survey)
• Adoption: uptake (survey)
• Appropriateness: practicability (focus groups)
• Effectiveness
• Feasibility: suitability for everyday use (survey)
• Fidelity: adherence (checklists)
• Penetration: level of utilization (case audit)
• Sustainability: continuation (interviews)
Taxonomy of implementation outcomes (Proctor et al., 2011)
11. References
Aarons, G. A., & Palinkas, L. A. (2007). Implementation of evidence-based practice in child welfare: Service provider
perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34(4),
411-419.
Bakken, S., & Ruland, C. M. (2009). Translating clinical informatics interventions into routine clinical care: How can the
RE-AIM framework help? Journal of the American Medical Informatics Association: JAMIA, 16(6), 889-897. doi:10.1197/jamia.M3085
Bacchetti, P., McCulloch, C.E., & Segal, M.R. (2008). Simple, defensible
sample sizes based on cost efficiency. Biometrics, 64, 577-594.
doi: 10.1111/j.1541-0420.2008.01004.x
Bowen, D. J., Kreuter, M., Spring, B., Cofta-Woerpel, L., Linnan, L., Weiner, D., . . . Fernandez, M. (2009). How we
design feasibility studies. American Journal of Preventive Medicine, 36(5), 452-457.
doi:10.1016/j.amepre.2009.02.002
Brownson, R. C., & Jones, E. (2009). Bridging the gap: Translating research into policy and practice. Preventive
Medicine, 49(4), 313-315. doi:10.1016/j.ypmed.2009.06.008
Cain, M., & Mittman, R. (2002). Diffusion of innovation in health care. Retrieved from
http://faculty.mercer.edu/thomas_bm/classes/641/Diffusion%20of%20Innovations%20in%20Health
care.pdf
Chambers, D., Glasgow, R., & Stange, K. (2013). The dynamic sustainability framework: Addressing the paradox of
sustainment amid ongoing change. Implementation Science, 8, 117.
Fox, N., Hunn, A., & Mathers, N. (2009). Sampling and sample size calculation. Retrieved from National Institute for
Health Research website: rds- eastmidlands.nihr.ac.uk/.../9-sampling-and-sample-size-calculation.ht.
12. References
Gitlin, L. N. (2013). Introducing a new intervention: An overview of research phases and common challenges. The American
Journal of Occupational Therapy : Official Publication of the American Occupational Therapy
Association, 67(2), 177-184. doi:10.5014/ajot.2013.006742
Gladwell, M. (2000). The Tipping Point, How Little Things Can Make a Big Difference. Boston: Little, Brown & Company.
Glasgow, R. E., McKay, H. G., Piette, J. D., & Reynolds, K. D. (2001). The RE-AIM framework for evaluating interventions: What
can it tell us about approaches to chronic illness management? Patient Education and Counseling, 44(2), 119-127.
Grady, P.A. (2010). Translation Research and Nursing Science. Nursing Outlook, 58(3), 164-166.
Israel, G.D. (2009). Determining sample size. University of Florida. Retrieved from http://edis.ifas.ufl.edu/pdffiles/PD/
PD00600.pdf
Johnson, K., Hays, C., Center, H., & Daley, C. (2004). Building capacity and sustainable prevention innovations: A
sustainability planning model. Evaluation and Program Planning, 27(2), 135-149.
Nietert, P. J., Wessell, A. M., Jenkins, R. G., Feifer, C., Nemeth, L. S., & Ornstein, S. M. (2007). Using a summary measure for
multiple quality indicators in primary care: the Summary QUality InDex (SQUID). Implementation Science, 2(11),
1-34.
Mendel, P., Meredith, L. S., Schoenbaum, M., Sherbourne, C. D., & Wells, K. B. (2008). Interventions in organizational and
community context: a framework for building evidence on dissemination and implementation in health services
research. Administration and Policy in Mental Health and Mental Health Services Research, 35(1-2), 21-37.
Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A.,… Hensley, M. (2011). Outcomes for implementation
research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in
Mental Health, 38(2), 65-76. doi: 10.1007/s10488-010-0319-7
Rabin, B. A., Glasgow, R. E., Kerner, J. F., Klump, M. P., & Brownson, R. C. (2010). Dissemination and Implementation Research
on Community-Based Cancer Prevention: A Systematic Review. American Journal of Preventive Medicine, 38(4),
443-456. doi: http://dx.doi.org/10.1016/j.amepre.2009.12.035
13. References
Rabin, B. A., Brownson, R. C., Haire-Joshu, D., Kreuter, M. W., & Weaver, N. L. (2008). A glossary for dissemination and
implementation research in health. Journal of Public Health Management and Practice, 14(2), 117-123.
Rabin, B. A., Brownson, R. C., Kerner, J. F., & Glasgow, R. E. (2006). Methodologic Challenges in Disseminating Evidence-Based Interventions to Promote Physical Activity. American Journal of Preventive Medicine, 31(4, Supplement),
24-34. doi: http://dx.doi.org/10.1016/j.amepre.2006.06.009
Scheirer, M. (2005). Is sustainability possible? A review and commentary on empirical studies of program sustainability. American
Journal of Evaluation, 26(3), 320-347. Retrieved from http://aje.sagepub.com/content/26/3/320
Shediac-Rizkallah, M. C., & Bone, L. R. (1998). Planning for the sustainability of community-based health programs: conceptual
frameworks and future directions for research, practice and policy. Health Education Research, 13(1), 87-108.
Shubert, T. E., Altpeter, M., & Busby-Whitehead, J. (2011). Using the RE-AIM framework to translate a research-based falls
prevention intervention into a community-based program: Lessons learned. Journal of Safety Research, 42(6), 509-516. doi:10.1016/j.jsr.2011.09.003
Smith, J. R., & Donze, A. (2010). Assessing environmental readiness: First steps in developing an evidence-based practice
implementation culture. The Journal of Perinatal & Neonatal Nursing, 24(1), 61-71; quiz 72-3. doi:10.1097/JPN.0b013e3181ce1357
Stiles, P., Boothroyd, R., Snyder, K., & Zong, X. (2002). Service penetration by persons with severe mental illness: How should it
be measured? The Journal of Behavioral Health Services & Research, 29(2), 198-207. doi:10.1007/bf02287706
Tinetti, M. E., Baker, D. I., King, M., Gottschalk, M., Murphy, T. E., Acampora, D., . . . Allore, H. G. (2008). Effect of
dissemination of evidence in reducing injuries from falls. New England Journal of Medicine, 359(3), 252-261.
Vedel, I., Ghadi, V., De Stampa, M., Routelous, C., Bergman, H., Ankri, J., & Lapointe, L. (2013). Diffusion of a collaborative care
model in primary care: a longitudinal qualitative study. BMC Family Practice, 14 (3). doi: 10.1186/1471-2296-14-3.
Editor's Notes
Phase 1: Problem. The initial step in translating research findings to improve clinical practice is to identify the practice gap or problem. Identify what the current practice is and determine whether the problem is a systems problem or a practice problem. The practice gap can be identified by examining prior data on contributing factors. For example, if you were looking at an increased frequency of falls, you would first want to identify the problem. Common contributing factors include patients not using the call light, the bed-exit alarm not being set, patient confusion, inadequate patient assessment, or a delayed response to the nurse call bell. After the problem is identified, researchers are able to provide the appropriate evidence. Phase 2: Evidence. In Phase 2 we formulate a PICO question and conduct a comprehensive review of the literature to find the evidence that best fits the organizational needs of the intervention. For example, if the problem was inadequate patient assessment with a high frequency of falls, the organizational need would be to identify an evidence-based assessment tool for high-risk patients. In addition, you would want to find research with well-defined interventions that financially fits the needs of the organization. A cost-effectiveness analysis can help compare the cost of an intervention to its effectiveness; such an analysis can measure cases prevented and years of life saved. Appraisal of systematic reviews is the most efficient way to become familiar with the best available research. The systematic method reduces chance effects and eliminates biased opinion, therefore providing more reliable results.
In addition, combining the systematic review with comparative effectiveness research focuses on which treatment works best, for which population, and under what circumstances. Phase 3: Development. Phase 3 is the development of an interdisciplinary team and the assessment of organizational readiness to change. An interdisciplinary team consists of practitioners from different professions who share a common patient population and common patient care goals, and who have responsibility for complementary tasks. An important asset to the development is a project facilitator, who must be appointed to coordinate committee meetings, create ground rules, document all team decisions, set project deadlines, and ensure the project is progressing as scheduled. A member of nursing leadership should be included in the interdisciplinary team to ensure the team understands the workflow and how implementation will affect the staff. Including nursing staff members on the interdisciplinary team has demonstrated positive staff buy-in and a sense of departmental ownership and openness to change. Thorough assessment of the organization's readiness is a significant precursor to the successful implementation of complex changes in the healthcare setting. In order to provide evidence-based care, providers need to be aware of organizational factors that may hinder or encourage the receptiveness of an EBP culture. An organization that is ready for change has clarity about its mission and goals; transformational leadership, staff autonomy, organizational transparency, and low stress are key to openness to change. One way of identifying organizational culture is through an organizational culture inventory measurement tool, which is crucial to the developmental success of the intervention.
Phase 4: Implementation. During the implementation phase the interdisciplinary team assesses penetration of the intervention, formulates an adequate sample size, and determines the effectiveness of the intervention. The implementation phase will be covered thoroughly in our presentation. Phase 5: Evaluation. During the final phase we evaluate the outcomes of the intervention and ensure sustainability measures. The evaluation phase will be covered thoroughly in our presentation. (Bakken & Ruland, 2009; Bowen et al., 2009; Brownson & Jones, 2009; Gitlin, 2013; Grady, 2010; Shubert, Altpeter, & Busby-Whitehead, 2011; Smith & Donze, 2010)
Cain and Mittman (2002) suggest that innovations will spread faster among homophilous groups (groups in which members share common characteristics, beliefs, values, and norms) than among heterophilous groups (groups whose members differ fundamentally). Additionally, if the innovation is congruent with the beliefs and norms of the group, the innovation will 'penetrate' more rapidly and be more readily adopted by the group. Penetration of an intervention is the integration of a practice, policy, tool, or guideline within an organization, community, or practice setting (Proctor et al., 2011). In discussing penetration in accessing services, Stiles, Boothroyd, Snyder, and Zong (2002) describe service penetration as the proportion of individuals who actually receive a service relative to the number of people eligible to receive it. This is similar to reach in the RE-AIM framework; reach is defined as the percentage of persons within a target population willing to participate in the intervention (Gaglio & Glasgow, 2012). One can conclude that the higher the rate of penetration, the more likely the innovation will be sustained (Gaglio & Glasgow, 2012). Rabin, Glasgow, Kerner, Klump, and Brownson (2010) found that high adoption rates occurred with a homogeneous (homophilous) target population using planned, active, multi-modal dissemination. Adoption is defined as the percent of the target audience that implements an innovation; in other words, adoption is the successful dissemination intervention plus the effectiveness of the intervention plus the target audience. Examples of active multi-modal dissemination include workshops, train-the-trainer, individualized programs, and formal and informal meetings.
In contrast, passive dissemination of an innovation to a heterogeneous audience using mass mailings, clinical practice guidelines, and presentations proved to be largely ineffective (Rabin, Glasgow, Kerner, Klump, & Brownson, 2010). Characteristics of the intervention or innovation: interventions that are believed to be better than the status quo and are consistent and compatible with the norms, values, and needs of potential adopters (Mendel, Meredith, Schoenbaum, Sherbourne, & Wells, 2008). Vedel et al. (2013) found that adoption of a collaborative team model (CTM) of care by primary care physicians was lower (80%) than adoption of the model by nurses (100%). The difference in adoption was attributed to the nurses' perception of the CTM of care as more congruent with their own values, beliefs, and concept of optimum healthcare delivery (Vedel et al., 2013). Contextual factors: the willingness and ability of a target population to adopt new interventions is affected by the value placed on a problem and the perception that the intervention will make a difference and that implementation will bring about the desired outcome. How an organization operates, and how well the intervention "fits" with current operational practices, may make adoption easier or more difficult.
Additionally, mobilization of resources (financial, human, social, and political capital) helps spread and sustain change (Mendel, Meredith, Schoenbaum, Sherbourne, & Wells, 2008). Rabin et al. (2010) performed a systematic review of D&I research and found that a variety of measures were used to evaluate implementation: mediating factors, identified as reach, adoption, intervention, and maintenance; moderating factors, including intervention characteristics, adopter characteristics, and contextual factors; and outcomes, analyzed at the individual, setting, and process level. Of the studies reviewed, few reported on all stages of measurement of dissemination; the most commonly reported outcome was related to change in process, followed by change in attitude and behavior. The researchers concluded that further research is needed, focusing on reliable D&I measurements and on standardization of terminology and reporting criteria.
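The service penetration and RE-AIM reach measures defined in these notes are simple proportions. A minimal sketch in Python, with hypothetical counts for illustration (the function names are ours, not from the cited papers):

```python
def service_penetration(n_received: int, n_eligible: int) -> float:
    """Stiles et al. (2002): proportion of eligible individuals
    who actually receive the service."""
    return n_received / n_eligible

def reach(n_participating: int, n_target: int) -> float:
    """RE-AIM reach: percentage of the target population
    willing to participate in the intervention."""
    return 100 * n_participating / n_target

# Hypothetical example: 150 of 600 eligible clients received the service
print(service_penetration(150, 600))  # 0.25
print(reach(150, 600))                # 25.0
```

With counts like these, penetration of 0.25 (25% reach) would suggest considerable room for further integration of the intervention into the setting.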
Institutionalization: the extent to which an innovation is integrated into a setting, measured by three distinct stages. Passage is a one-time event in which the innovation becomes part of the organizational structure; Malcolm Gladwell referred to this as the "tipping point" (Gladwell, 2000). Routine is the cycle of repetitively reinforcing the innovation, and niche saturation is defined by the spread of the innovation throughout the organization (Johnson, Hays, Center, & Daley, 2004; Rabin, Brownson, Kerner, & Glasgow, 2006). As you can see, penetration of an intervention varies depending on when, during the implementation stage, penetration is measured (Proctor et al., 2011). The extent to which adoption of an intervention occurs can be measured at different levels of analysis: the individual provider or consumer, the organization, the community, and policy (Proctor et al., 2011). Proctor et al. (2011) provide researchers with a framework by which to evaluate implementation strategies, based on implementation, service, and client outcomes; the most important implementation outcome is improvement of client well-being. Clinicians and facilities were targeted to address the issue of fall-related injuries. Multiple practice change interventions (active and passive dissemination of information) were implemented to increase the rate of adoption of fall-related practices. Adoption of the interventions is measured by decreased fall-related injuries and decreased use of medical services in the targeted region. Increased provider knowledge and practice changes positively impacted quality of life in the elderly population (Tinetti et al.,
2008). Aarons and Palinkas (2007) used semi-structured interviews to assess acceptance of implementation of an EBP aimed at reducing child neglect. Six primary factors were identified as critical determinants of EBP implementation: (1) acceptability of the EBP to the caseworker and to the family, (2) suitability of the EBP to the needs of the family, (3) caseworker motivations for using the EBP, (4) experiences with being trained in the EBP, (5) extent of organizational support for EBP implementation, and (6) impact of the EBP on process and outcome of services (p. 411). Stiles et al. (2002) address service penetration and niche saturation. Change in organizational policies and practice: counseling becomes part of routine practice in a community study to promote change in physical activity (Rabin, Brownson, Kerner, & Glasgow, 2006). Change in knowledge, attitude, behavior, and quality of life: fall-related injuries decreased by 9%, and fall-related use of medical services declined by 11% (Tinetti et al., 2008). Program records and electronic medical records (Nietert et al., 2007).
Inferential studies (which test hypotheses) use a larger sample size to increase the chance of statistical significance (5% or 1%) and statistical power (80% probability of detecting a difference that does exist). A larger sample reduces Type I error (false positive) and Type II error (false negative). Effect size: small effect sizes necessitate large sample sizes, and an accurate estimation is necessary to determine the power prior to beginning the study (Fox, Hunn & Mathers, 2009). No matter the study design, a sample size should be chosen that minimizes the cost per subject. Research is competitive and resources are limited, so other methods should be considered besides the conventional approach. Multiple methods are available for choosing a sample size, including maximization of expected utility (MEU) and value of information (VOI); these take into account the study's projected value minus its cost by quantifying both on the same scale. Assume there is a large population but that we do not know the variability in the proportion that will adopt the practice; therefore, assume p = .5 (maximum variability). Furthermore, suppose we desire a 95% confidence level and ±5% precision. The level of precision: the level of precision, sometimes called sampling error, is the range in which the true value of the population is estimated to be. This range is often expressed in percentage points (e.g., ±5 percent), in the same way that results for political campaign polls are reported by the media. Thus, if a researcher finds that 60% of farmers in the sample have adopted a recommended practice with a precision rate of ±5%, then he or she can conclude that between 55% and 65% of farmers in the population have adopted the practice. The confidence level: the confidence or risk level is based on ideas encompassed in the Central Limit Theorem.
The key idea encompassed in the Central Limit Theorem is that when a population is repeatedly sampled, the average value of the attribute obtained by those samples is equal to the true population value. Furthermore, the values obtained by these samples are distributed normally about the true value, with some samples having a higher value and some obtaining a lower score than the true population value. In a normal distribution, approximately 95% of the sample values are within two standard deviations of the true population value (e.g., the mean). In other words, this means that if a 95% confidence level is selected, 95 out of 100 samples will have the true population value within the range of precision specified earlier (Figure 1). There is always a chance that the sample you obtain does not represent the true population value; such samples with extreme values are represented by the shaded areas in Figure 1. This risk is reduced for 99% confidence levels and increased for 90% (or lower) confidence levels.
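The worked assumptions above (p = .5, 95% confidence, ±5% precision) correspond to Cochran's large-population sample-size formula, n = Z²·p(1−p)/e². A minimal sketch in Python (the function name and defaults are ours, chosen to match the values in the text):

```python
import math

def cochran_sample_size(confidence_z: float = 1.96,
                        p: float = 0.5,
                        margin_of_error: float = 0.05) -> int:
    """Cochran's formula for a large population:
    n = Z^2 * p * (1 - p) / e^2, rounded up to a whole subject."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Maximum variability (p = 0.5), 95% confidence (Z = 1.96), +/-5% precision
print(cochran_sample_size())  # 385
```

Note how the margin of error dominates: relaxing precision to ±10% drops the required sample to 97, while tightening to ±3% raises it above 1,000, which is why precision and cost must be weighed together.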
All of the above outcomes can be used as indicators of implementation success, including determining whether the intervention was deployed correctly. Currently there is no conceptualized method for evaluating implementation success. Measurement methods are included within the parentheses and can be based on attitudes, opinions, intentions, or behaviors that were either reported or observed.
Composite definition of several concepts: sustainability is not well defined in the literature. It is used interchangeably with routinization, maintenance, and institutionalization, but is not synonymous with them. Sustain = nourish, a focus on the health benefit; institutionalization = persistence of the program itself, not the benefit. Sustainability involves 1) maintaining the health benefits achieved through the initial program, 2) continuation of the program activities within an organizational structure, and 3) building the capacity of the recipient community. It is affected by 1) project design, 2) factors within the organization, and 3) factors in the broader community (Shediac-Rizkallah & Bone, 1998). Most of these studies did not differentiate among the three types of sustainability measures described by Shediac-Rizkallah and Bone (1998) in the framework described above: the sustainability of beneficial outcomes for clients, the continuation of program activities, and the maintenance of community attention to the problem addressed by the program. (a) Continuing to deliver beneficial services (outcomes) to clients is an individual level of analysis; (b) maintaining the program and/or its activities in an identifiable form, even if modified, is an organizational level of analysis. Sustainability is the extent to which an intervention can deliver its intended benefits over time, after initial funding has ended (Rabin & Brownson, 2012). Outcome indicators are maintenance, institutionalization (defined on slide 3), and capacity building (Shediac-Rizkallah & Bone, 1998). Project factors: effectiveness and confidence; duration and integration; funding, i.e., integration from trial or seed money into a permanent budget; and type of training and ongoing resources (Shediac-Rizkallah & Bone, 1998).
Chambers et al., 2013: The DSF (Figure 2) emphasizes that change exists in the use of interventions over time, the characteristics of practice settings, and the broader system that establishes the context for how care is delivered. As classical thinking eloquently captures, change impacts the ability of health interventions to be optimally used and sustained over time. This dynamism exists in the evidence base for interventions that links causal factors to health outcomes, as judged by the continual stream of new publications in academic journals that add to available evidence on the effectiveness of interventions, as well as ongoing practice surveillance systems that capture intervention impact. Dynamism exists in the interventions that support the evidence, which acknowledge ad hoc adaptation and experimentation of evidence-based interventions. Furthermore, it exists in a constantly changing multi-level context [34], internal to a clinical or community setting and the broader care system, be it an organization, community, county, state, or country. The DSF, which has benefitted from the authors' ongoing dialogue with the Implementation Science community about the challenge of sustainability, follows the spirit of a number of existing models that emphasize three things: the importance of context, the need for ongoing evaluation and decision-making, and the goal of continuous improvement. These include Wandersman's Getting to Outcomes model [31], Continuous Quality Improvement (CQI) [34], system dynamics [35], complexity theory [36], adaptive management [37], and the Evidence Integration Triangle [30]. In addition, the DSF is consistent with alternative views of organizational development [38] and the principles of system science [39]. Distinct in the DSF from many of these other models is the emphasis on omnipresent change and the central goal of continuously optimizing the fit between the intervention and a dynamic delivery context to achieve maximal benefit.
The DSF is anchored around the following seven tenets, for which we think there is evidence, but recommend explicit testing in this context (Chambers et al., 2013, p. 5).