The Utilization of DHHS Program Evaluations: A Preliminary Examination

Washington Evaluators Brown Bag
by Andrew Rock and Lucie Vogel
October 5, 2010

The presentation will describe a study, conducted by the Lewin Group for the Assistant Secretary for Planning and Evaluation, on the utilization of program evaluations in the Department of Health and Human Services. The study used an online survey of project officers and managers for a sample of program evaluations selected from the Policy Information Center database. To supplement the survey data, Lewin conducted focus groups with senior staff in six agencies. Key findings centered on direct, conceptual, and indirect use, and on the importance of high-quality methods, stakeholder involvement in evaluation design, the presence of a champion, and study findings that were perceived to be important. The study concluded with recommendations for a strengthened internal evaluation group within HHS and for future research using a case study approach for more in-depth examination.

Mr. Andrew Rock conceived the study and served as its Project Officer (COTR). He works in the Office of Planning and Policy Support within the Office of the Assistant Secretary for Planning and Evaluation (ASPE), HHS. He is responsible for the Department's annual comprehensive report to Congress on HHS evaluations, coordinates the HHS legislative development process, represents his office on the Continuity of Operations Workgroup, and has worked on various cross-cutting issues including homelessness, tribal self-governance, and health reform. In addition to his work in ASPE, he has worked at the Centers for Medicare and Medicaid Services, the Public Health Service, and the Office of the National Coordinator for Health Information Technology.

Ms. Lucie Vogel served as a Stakeholder Committee Member for the study. She works in the Division of Planning, Evaluation and Research in the Indian Health Service, developing Strategic and Health Service Master Plans, conducting evaluation studies, and reporting on agency performance. She previously served in evaluation and planning positions in the Food Safety and Inspection Service, the Virginia Department of Rehabilitative Services, the University of Virginia, and the Wisconsin Department of Health and Social Services.

Document Transcript

Evaluation Framework
Fogarty International Center – Advancing science for global health
Initial Document: December 2002
Last Modified: July 2008
Contact: Linda Kupfer, Ph.D.

I. Evaluation Criteria

The goals of evaluation at Fogarty are:
• To stimulate the performance of Fogarty programs and to encourage innovative approaches to address problems and issues relating to improving global health
• To provide a transparent process for assessment of Fogarty programs and to demonstrate sound stewardship of federal funds and the programs they support
• To provide information for strategic planning, strengthen programs, improve performance, enhance funding decisions, demonstrate public health and economic benefits, and provide new directions for Fogarty programs
• To provide mechanisms to identify program accomplishments to Fogarty, NIH, HHS, funding agencies, national and international partners, and the U.S. Congress
• To identify important lessons learned and best-management practices in the performance of Fogarty programs as a whole, and to make recommendations for implementation of future programs

Continuing evaluation is designed to strengthen, improve, and enhance the impact of Fogarty programs. There are several important areas of evaluation that can be used to assess the effectiveness of a Fogarty program.

Areas of Evaluation
1. Program Planning
2. Program Management: a) Project Selection, b) Recruiting Talent, c) Institutional Setting, d) Program Components, e) Human Subjects and Fiscal Accountability, and f) Best Practices
3. Partnerships and Communications
4. Program Results: a) Program Input, b) Program Outputs, and c) Program Outcomes
5. Program Impacts: a) Program Efficiency/Effectiveness and b) Program Relevance
The criteria for evaluation are described in detail below, along with corresponding metrics for assessment.

1. Program Planning

Criteria: Effective programs will use the strategic planning framework of Fogarty, as well as that of program partners, as the basis for development of the program RFA/PA. The RFA/PA should also be based on the needs of the U.S. scientific community and host countries, as identified in collaboration with stakeholders such as other government agencies, foreign scientists, and experts in the field.

Metrics: Program Planning
▪ Evidence of a planning process and a plan (priority determination, clear articulation)
▪ Relevance of program to Fogarty, NIH ICs, and HHS strategic plans
▪ Stakeholder involvement in planning
▪ Re-evaluation of program over time
▪ Integration of recommendations into planning
▪ Planning for sustainability of program results

2. Program Management

a. Project Selection: An effective program should incorporate a strong peer review process. The selection/review process should take into account host country needs in the program's scientific area as well as any other criteria listed in the RFA or PA. Peer review should include reviewers with relevant developing country research experience.

Metrics: Project Selection
▪ Composition of panels
▪ Review criteria
▪ Quality of feedback to PI
▪ Amount of time allowed for review
▪ Conflict of interest issues
▪ Involvement of the Program Officer

b. Recruiting Talent: Every program will attract a variety of talent. Strong programs will have mechanisms in place to identify and attract the best and most appropriate talent available.

Metrics: Recruiting Talent
▪ Recruitment of new/young investigators
▪ Recruitment of foreign investigators
▪ Minority applicants
▪ Interdisciplinary teams
▪ Success rate
▪ Turnover of investigators

c. Program Components: Each program is made up of various projects or grants that together form a program. It is the role of the Program Officer to ensure that the various projects or grantees have a chance to interact and gain experience from one another. Network meetings should have goals and objectives that are clear to all participants from the beginning. Stakeholders and partners should be involved in the network meetings.

Metrics: Program Components
▪ Network meetings – goals and objectives of the meetings
▪ Other meetings/ways in which PIs and/or trainees get together to exchange ideas
▪ Program operation (award size, length of time, funding mechanism, funding amount, reapplication restrictions)

d. Institutional Setting: Programs vary in their institutional setting and institutional support. The program should be well supported by both the academic institution(s) and the federal institutions involved. There must be appropriate business practices available at both the domestic and the foreign institution for grant implementation to go smoothly.

Metrics: Institutional Setting
▪ Matching funds
▪ Mentorship support
▪ Laboratory support
▪ Administrative support and good business practices

e. Human Subjects and Fiscal Accountability: Programs should demonstrate that they have appropriate mechanisms in place to account for federal funds and are properly documenting protocol reviews for human subjects.

Metrics: Human Subjects and Fiscal Accountability
▪ Presence of operational IRB
▪ Good accounting practices
▪ Good documentation practices
▪ Assurance that all intended funding is reaching foreign collaborators and trainees

f. Best Practices: As a result of ongoing evaluation, strong programs will help identify best practices with regard to various program factors, for example, prevention of brain drain, sustainability, and mentorship.

Metrics: Best Practices (Examples)
▪ Strategies used to prevent brain drain
▪ Strategies used to target program goals (e.g., interdisciplinarity)
▪ Strategies used to promote long-term mentoring
▪ Strategies for selecting trainees
▪ Strategies used to promote long-term networking
▪ Other best practices

3. Partnerships and Communications

a. Partnerships: Federal, national, and international partnerships are essential to addressing global health issues. Partnerships should be pursued, nurtured, and maintained.

Metrics: Partnerships
▪ Number of partnerships
▪ Different types of partnerships (NIH, HHS, other federal, NGO, private sector)
▪ Involvement of partners in development of the program and its strategic goals
▪ Funds from partners
▪ Cost of partnership

b. Communications: To be fully successful, scientific results must be communicated to the user community and utilized. During the evaluation of the program, the link to the user community will be reviewed and implementation of the science into policy or practice will be assessed.

Metrics: Communications
▪ Appropriate community input into strategic planning through informational meetings/training sessions held with the community
▪ Involvement of the community on the program's advisory board
▪ Involvement of the program in the community
▪ Requests for information and presentations
▪ Community needs surveys
▪ User community feedback (mechanisms and tracking)
4. Program Results

Depending upon the age of a program, significant results will fall into different categories. The following should be documented, reported, analyzed, and evaluated:

a. Program Input: The total of the resources put into the program (funds and in-kind input from partners nationally and internationally – any "enabling resources").

b. Program Outputs: The program must be managed to produce program outputs, the immediate, observable products of research and training activities, such as publications, patent submissions, citations, and degrees conferred. Quantitative indices of output are tools that allow POs and PIs to track changes, highlight progress, and identify potential problems.

Metrics: Outputs
▪ Number and list of publications (journal articles, book chapters, reports, etc.)
▪ List of trainees as first author
▪ Number and list of presentations
▪ Number of trainees
▪ Fields of training
▪ Number and type of degrees/certificates earned
▪ New curriculum developed and implemented
▪ Number and list of meetings

c. Program Outcomes: The longer-term results to which a program is designed to contribute, such as strengthened research capacity within the U.S. and foreign sites, effective transfer of scientific principles and methods, and success in obtaining/attracting further scientific and/or international support (expected for more mature programs).

Metrics: Outcomes
▪ Number of laboratories started
▪ Scientific departments started or strengthened
▪ Scientific methods discovered – number and type
▪ Number of new grants or new funding procured
▪ Awards received
▪ Career paths initiated or enhanced

5. Program Impacts

The total consequences of the program, including unanticipated benefits. These can include the influence of research activities on clinical public health practice or health policy, success in establishing a sustainable career structure, effects on the career paths of trainees, changes in health care systems, and alterations in health care laws. Demonstrating impacts requires more complex analysis and synthesis of multiple lines of evidence of both a quantitative and qualitative nature (expected for the most mature programs).

Metrics: Impacts
▪ New policies adopted or advanced
▪ New scientific advancements developed
▪ Alteration of health care system
▪ Alteration of health care laws
▪ Alteration of health care practice
▪ Alteration of intervention implementation
▪ New clinical procedures adopted
▪ New career structure in place
▪ Improved health of population
a. Program Efficiency/Effectiveness: In addition to assessing program impacts, assessment of program efficiency can help strengthen program effectiveness.

Metrics: Efficiency/Effectiveness
▪ Publications per dollar
▪ Publications per program
▪ Cost per trainee
▪ Trainees skilled per program

b. Program Relevance: An effective program will demonstrate relevance to the progress of its scientific field as well as utility to the greater program community (e.g., practitioners, policymakers).

Metrics: Relevance
▪ Evidence of research outcomes disseminated
▪ Citation/impact factor related to program publications
▪ Qualitative evidence that program outcomes were useful to the program field/greater program community

II. Evaluation Principles and Elements

1. Principles of Evaluation at Fogarty
2. Elements and Basis for Review and Evaluation
3. Program Development
4. Self-Evaluation Process

Each is described in detail below.

1. Principles of Evaluation at Fogarty

• Evaluation at Fogarty is a routine, continuous quality-improvement review process.
• Evaluation focuses on outputs, outcomes, and impacts and the mechanisms to ensure that these occur. While reporting of metrics (number of trainees achieving advanced degrees, number of publications, etc.) is necessary, reviews will go beyond metrics, incorporate qualitative data, and depend on the basic principle of external peer review to generate recommendations.
• Programs are assessed against their own goals and objectives, taking into account fiscal resources and granting mechanisms.
• Review and evaluation uses retrospective measurements of achievements over a specific time period (eventually a cyclical period), based in part on measured quantitative outputs, outcomes, and impacts (metrics), as well as success stories and more qualitative outputs, outcomes, and impacts. This information is used to make recommendations for the future.

2. Elements and Basis for Review and Evaluation

The review and evaluation process is a continuum that spans a period of time beginning with strategic planning. Fogarty programs arise from the Fogarty Strategic Plan. Specific program plans are then developed with input from stakeholders, in the form of well-developed Requests for Applications (RFAs) and Program Announcements (PAs).
Program Officers then monitor the progress of trainees and projects. At the five-year point, a team of experts conducts a process evaluation and makes suggestions for improving the program; this type of correction can improve a program mid-course. During year 9/10 of the program, an outcome evaluation is conducted that includes data collection and data analysis by a contractor who specializes in evaluation.

A key to effective program review is the degree to which the review is normalized to the resources, objectives, and program planning of the individual program. Given that each program has different financial resources, utilizes different talent pools with various specialties, faces different issues in host countries, works under unique institutional policies, and uses different approaches to reducing global health disparities, the reviews are tailored to take program variability into account.

3. Program Development

The foundation for individual program review is a well-developed program plan that culminates in an RFA (PA). Importantly, planning a program at NIH normally requires a two-year lead time to allow sufficient input, partnership development, and administrative review. Each program has its own RFA (PA) that can act as a strategic plan for that program. The RFA (PA) stems from Fogarty and NIH strategic plans, as well as the strategic plans of the program partners.

Planning is imperative to program effectiveness and should be based on experience, past program results, and stakeholder needs and expectations. Each program should develop a plan that addresses its goals and objectives. Although this plan need not be formalized, a written plan will ensure continuity for the program. The program plan can be developed and informed through consultations, workshops, and meetings, and should address resource needs, managing the program to meet those needs, data needs, and data gathering, analysis, and storage. A program plan, reflecting the input of management and constituents, will include:
• Articulation of the vision and focus of the program, as well as why this direction is being taken;
• Background on the scientific relevance of the program area, program implementation issues, and mechanisms for establishing priorities for investment of resources; and
• Goals, objectives, and performance milestone targets that provide guidance for evaluating program performance.

Planning is fundamental to program evaluation. A program must develop the understanding, communication, and data collection processes necessary to meet its basic goals. A program should be reassessed, with new planning (planning workshops, planning meetings, etc.), every 5 years or as appropriate. Network meetings can also be used as part of the continuous review and planning for a program.

4. Self-Evaluation Process

Each program should conduct self-evaluation and analysis on a regular basis, in between the more formal program evaluations. Each program's self-evaluation will be based on performance milestones unique to that program, as well as the criteria given above for all programs. Annual self-evaluation can be accomplished at network meetings or following the submission of progress reports from the projects under the program. It is important that the self-evaluation include identification of results, potential problems, and mechanisms for resolving these problems. Analysis of program data should be conducted as part of the program self-analysis; in some cases, both collection and analysis of program data may need to be contracted out. Data collected by the program could include the metrics mentioned in the criteria section above (see the illustrative sketch at the end of this document).

III. Evaluation Roles

1. Role of the Fogarty International Center Advisory Board (FICAB) and Fogarty Administration
2. Role of the Program Officer (PO)
3. Role of the Evaluation Officer (EO)
4. Expert Review Panels – Make-up and Role

Each is described in detail below.

1. Role of the Fogarty International Center Advisory Board (FICAB) and Fogarty Administration

It is anticipated that the Fogarty International Center Advisory Board (FICAB) will play a role in evaluation, either by participating in program reviews or by reviewing the evaluations as they are distributed. Fogarty will communicate the results of all Fogarty evaluations to the FICAB and Fogarty administration. It is anticipated that Fogarty administration will use program evaluations to make strategic funding and programmatic decisions.

2. Role of the Program Officer (PO)
Fogarty has ultimate responsibility for the effectiveness of its programs. The PO is responsible for the day-to-day evaluation and analysis of program progress. The PO works with the Evaluation Officer to analyze program progress, synthesize program results, and set up the review or evaluation. Together they identify the appropriate outside experts to take part in the review, determine its specifics (e.g., dates and the questions to be asked within the Framework), and review the report to ensure accuracy prior to its being finalized. Once the review is finalized, the PO will write a response to the review recommendations in a timely manner.

3. Role of the Evaluation Officer (EO)

The Evaluation Officer, in coordination with the Fogarty POs and Fogarty administration, is responsible for setting the annual schedule for review and evaluation and applies for all funds for reviews and evaluations. The Evaluation Officer works with the PO to set the agenda and schedule for the reviews and provides training for reviewers and experts. The Evaluation Officer works with the review panel to conduct the review and write the final report, and works with other NIH ICs and other experts on evaluation to ensure that the Fogarty evaluations are current. She serves as the overall planner and interface for program evaluations. Review recommendations should be incorporated into the following evaluation.

4. Expert Review Panels – Make-up and Role

An expert panel can be used to help conduct any evaluation of Fogarty programs using the formal Framework and criteria. The panel can be made up of 3-5 members, including, if possible, one Fogarty Advisory Board member, and 3 to 6 experienced administrators and decision-makers, health care professionals, and scientists, as well as people experienced in program review from other disciplines as appropriate. Expert panel members should be highly respected and recognized in their fields. Panel membership should be jointly determined and agreed to by Fogarty staff and the Evaluation Officer. The panel should be chaired, if needed, by an individual respected by all parties, very familiar with Fogarty objectives and programs, and with a longer-term commitment to Fogarty.
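Illustrative sketch: tallying program metrics

The following is a minimal, purely illustrative sketch of the kind of self-evaluation data analysis described in section II.4, showing how a program office might aggregate a few of the quantitative output and efficiency indices listed in sections I.4 and I.5a (publications, trainees, publications per dollar, cost per trainee). All record fields, project identifiers, and figures are hypothetical and do not reflect any actual Fogarty data system or reporting format.

```python
# Illustrative only: hypothetical per-project records for a Fogarty-style program.
# In practice, such data would come from progress reports and grants-management records.
from dataclasses import dataclass


@dataclass
class ProjectRecord:
    project_id: str
    award_dollars: float      # total federal funds awarded to the project (hypothetical)
    publications: int         # journal articles, book chapters, reports, etc.
    trainees: int             # trainees supported by the project
    degrees_conferred: int    # degrees/certificates earned by trainees
    new_grants_procured: int  # follow-on funding obtained (an outcome indicator)


def tally_program_metrics(records: list[ProjectRecord]) -> dict[str, float]:
    """Aggregate a few of the output and efficiency indices named in the framework."""
    total_dollars = sum(r.award_dollars for r in records)
    total_pubs = sum(r.publications for r in records)
    total_trainees = sum(r.trainees for r in records)
    return {
        "projects": len(records),
        "publications": total_pubs,
        "trainees": total_trainees,
        "degrees_conferred": sum(r.degrees_conferred for r in records),
        "new_grants_procured": sum(r.new_grants_procured for r in records),
        # "Publications per dollar" rescaled to per million dollars for readability;
        # guard against division by zero when no awards or trainees are recorded.
        "publications_per_million_dollars": (
            total_pubs / (total_dollars / 1_000_000) if total_dollars else 0.0
        ),
        "cost_per_trainee": total_dollars / total_trainees if total_trainees else 0.0,
    }


if __name__ == "__main__":
    # Invented sample records purely to exercise the tally.
    sample = [
        ProjectRecord("D43-001", 1_200_000, 14, 9, 4, 2),
        ProjectRecord("D43-002", 850_000, 6, 5, 2, 1),
        ProjectRecord("R21-003", 400_000, 3, 2, 1, 0),
    ]
    for name, value in tally_program_metrics(sample).items():
        print(f"{name}: {value:,.2f}" if isinstance(value, float) else f"{name}: {value}")
```

The point of the sketch is only that the quantitative indices in section I reduce to simple aggregates and ratios once per-project records are collected consistently; the qualitative evidence called for elsewhere in the framework still requires review by the PO, Evaluation Officer, and expert panels.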