The Green Park Collaborative (GPC) has developed a new tool to help health care decision makers confidently and consistently use Real World Evidence (RWE) when making tough coverage and care choices. Called RWE Decoder, the spreadsheet-based assessment tool lets users review and evaluate existing studies and evidence for both rigor and relevance. Informed by these factors, users can assess study quality and generate a visual summary to help gauge the evidence under review.
Published RWE studies developed from data-rich electronic medical records or medical claims data are increasingly available from health care systems. However, the quality of this research can vary widely, and payers, clinicians, and other health care decision makers often dismiss it out of hand. RWE Decoder and its associated user guide and framework offer a thoughtful approach to helping these decision makers assess whether RWE studies address their questions and can appropriately guide their choices.
The tool, user guide, and supporting white paper are available here: https://goo.gl/AhbHUw
Real World Evidence Initiative Report
1. REAL WORLD EVIDENCE INITIATIVE REPORT
February 2017
GPC is a major initiative of the Center for Medical Technology Policy
2. GREEN PARK COLLABORATIVE
• A multi-stakeholder forum to clarify the evidence expectations of payers
– Public and private payer requirements for evidence of effectiveness and value for specific conditions and technologies
• Key participant-stakeholders in initiatives
– Public and private payers, FDA, NIH, AHRQ, guideline developers, professional societies, life sciences companies
• Output/Activities (depending on initiative)
– Effectiveness guidance documents (EGDs)
– High-priority topic-specific workshops and webinars
• Benefits of participation
– Greater transparency of decision-maker expectations (payers, HTA, guideline developers, health systems, etc.); substantive interaction with key stakeholders; input into methods guidance and recommendations
3. RWE INITIATIVE: PURPOSE
Problem
Many frameworks and checklists exist to assess the quality of observational and other real-world studies.
• Frameworks are complex, lengthy, and difficult to use
• Relatively little input from users
• How users make decisions on single studies and bodies of evidence remains unclear
Solution
Develop an easy-to-use tool for transparent assessment of RWE.
• User-vetted
• “Informed judgment” and visual summary of the methodological rigor and relevance of RW studies
• Detailed assessment can follow, if merited
• Aims: consistency, improved understanding, transparency
5. FOUR-PHASED APPROACH
1. Background Research
2. Stakeholder and Expert Engagement
3. Vetting of Draft Framework and In-Person Meetings
4. Incorporation of Key Findings and Final RWE Framework
6. TIMELINE
QUARTER 1: Identify target users in health plans, health systems, and other contexts; assess decision needs of users through interviews
QUARTER 2 (JUNE 2016): Convene expert workgroups (Advisory Committee, Methods WG, Dissemination WG)
QUARTERS 3–4: Hold in-person workshop with decision-makers to road test the tool
FEBRUARY 2017: Incorporate feedback from workshop and additional user vetting; produce final framework and interactive tool with user’s guide
7. ADVISORY COMMITTEE MEMBERS
Joseph Chin, CMS
Gregory Daniel, Duke University
Nancy Dreyer, Quintiles
Scott Flanders, Astellas
John Fox, Priority Health
Jonathan Jarow, FDA
Sachin Kamal-Bahl, Pfizer
Julie Locklear, EMD Serono
Joan McClure, NCCN
Elizabeth McGlynn, Kaiser Permanente
Peter Neumann, Tufts Medical Center
Sally Okun, PatientsLikeMe
Tom Oliver, ASCO
Eleanor Perfetto, UMD School of Pharmacy
Edmund Pezalla, Aetna
Catherine Piech, Janssen Pharmaceuticals
Megan Maguire Priolo, GBMC
Alan Rosenberg, Anthem
Lucy Savitz, Intermountain Healthcare
Marcus Wilson, HealthCore
Brande Ellis Yaist, Eli Lilly and Company
8. METHODS WORKGROUP MEMBERS
Kristen Bibeau, Teva Pharmaceuticals
George Browman, University of British Columbia
Scott Flanders, Astellas
Jennifer Graff, NPC
Craig Henderson, UCSF
David Henry, University of Toronto
Brad Hirsch, Flatiron Health
Sachin Kamal-Bahl, Pfizer
Mark Levenson, CDER Office of Biostatistics
Gary Lyman, ASCO
Jim Murray, Eli Lilly and Company
Sally Morton, Virginia Tech
Josée Poirier, MeYou Health, LLC
Beverly Shea, University of Ottawa
Mike Stoto (Chair), Georgetown University
Timothy Vaughan, PatientsLikeMe
Mingliang Zang, Janssen
9. DISSEMINATION WORKGROUP MEMBERS
Aylin Altan, Optum Labs
Rabia Kahveci, HTA Consultant
Megan Klopchin, Eli Lilly and Company
Karen Lencoski, Astellas
Julie Locklear, EMD Serono
Joan McClure, NCCN
Troy Sarich, Janssen
Marcus Wilson, HealthCore
Julie Simmons (Co-Chair), CMTP
John Beilenson (Co-Chair), Strategic Communications & Planning
12. MODULES
MODULE 1: Articulating the Question
• What is the decision to be made?
• What information is required?
MODULE 2A: Assessing Relevance
• Population
• Intervention
• Comparators
• Outcome(s)
• Timing
• Setting
MODULE 2B: Assessing Rigor
• Quality of Research Question
• Potential for Bias
• Precision
• Data Integrity
MODULE 2C: Effect Size
• What is the magnitude and direction of the effect?
BASIC QUESTIONS RATED WITH A SCORING SYSTEM THAT CAN BE PLOTTED AND VISUALIZED
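To make the scoring scheme above concrete, here is a minimal sketch, in Python, of how one study's module ratings could be represented outside the Excel tool. The 0–4 scale and the field meanings follow the slides; the class name, field names, and any values are illustrative assumptions, not the tool's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StudyAssessment:
    """Hypothetical record of one study's RWE Decoder-style ratings."""
    citation: str            # e.g. "Smith, 2005"
    relevance: int           # Module 2A rating: 0 (necessary info missing) or 1-4
    rigor: int               # Module 2B rating: 0 (necessary info missing) or 1-4
    effect_magnitude: float  # Module 2C: size of the observed effect
    effect_positive: bool    # Module 2C: direction (True = positive)

    def is_plottable(self) -> bool:
        # Per the output slide, "0" scores for Relevance or Rigor are not plotted.
        return self.relevance > 0 and self.rigor > 0
```

Keeping each study in one record like this mirrors the spreadsheet's row-per-study layout and makes the visual summary described later straightforward to generate.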
13. MODULE 2A (EXAMPLE)
DIRECTIONS
1. Check each box as you consider the domains within the Relevance dimension.
2. Rate Relevance for each study along a continuum (1 = minimally relevant to 4 = maximally relevant).
3. Enter “0” only if there is a necessary piece of information the study fails to provide.
DOMAINS (checked separately for each study)
Population, Intervention, Comparator, Primary Outcome, Timing, Setting
STUDIES
Study 1: Smith, 2005
Study 2: Johnson, 2003
Study 3: Arnold, 2010
Study 4: Jenson, 2012
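The directions above amount to a small rating procedure: consider all six Relevance domains, then rate the study 1–4, reserving 0 for a study that omits necessary information. Below is a hedged sketch of that logic; the function name and error handling are assumptions added for illustration only.

```python
RELEVANCE_DOMAINS = ("Population", "Intervention", "Comparator",
                     "Primary Outcome", "Timing", "Setting")

def rate_relevance(domains_considered, rating, missing_necessary_info=False):
    """Illustrative check of a Module 2A-style Relevance rating for one study."""
    # Direction 1: every domain box should be checked before rating.
    unchecked = set(RELEVANCE_DOMAINS) - set(domains_considered)
    if unchecked:
        raise ValueError(f"Consider all Relevance domains first; unchecked: {sorted(unchecked)}")
    # Direction 3: "0" is entered only when necessary information is missing.
    if missing_necessary_info:
        return 0
    # Direction 2: otherwise rate along the 1 (minimal) to 4 (maximal) continuum.
    if not 1 <= rating <= 4:
        raise ValueError("Relevance is rated on a 1-4 continuum")
    return rating

# Example: rate_relevance(RELEVANCE_DOMAINS, 3) records a rating of 3
# for a study such as "Smith, 2005".
```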
14. MODULE 3 (EXAMPLE OF OUTPUT)
• Each sphere is the data point for one study and represents the Likert-type assessments for Relevance (x-axis) and Rigor (y-axis) (Modules 2A and 2B).
• The size of the sphere indicates the magnitude of an effect (Module 2C).
• The color of the sphere indicates the direction of an effect (green = positive; white = negative or no difference) (Module 2C).
• Note: “0” scores for Relevance and Rigor are not plotted.
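For readers who want to reproduce a chart like this outside the Excel tool, the following is a minimal matplotlib sketch. Only the encodings are taken from the slide (Relevance on the x-axis, Rigor on the y-axis, sphere size for effect magnitude, green/white for direction, and "0" scores excluded); the study scores and the size-scaling constant are placeholder assumptions.

```python
import matplotlib.pyplot as plt

# Placeholder (citation, relevance, rigor, effect magnitude, positive?) records;
# real values would come from Modules 2A-2C of the tool.
assessments = [
    ("Smith, 2005",   3, 4, 1.8, True),
    ("Johnson, 2003", 2, 2, 0.4, False),
    ("Arnold, 2010",  4, 3, 1.1, True),
    ("Jenson, 2012",  0, 3, 0.9, True),   # "0" Relevance score: excluded below
]

fig, ax = plt.subplots()
for name, relevance, rigor, magnitude, positive in assessments:
    if relevance == 0 or rigor == 0:
        continue  # "0" scores for Relevance and Rigor are not plotted
    ax.scatter(relevance, rigor,
               s=600 * magnitude,                       # sphere size = effect magnitude
               color="green" if positive else "white",  # color = effect direction
               edgecolors="black")
    ax.annotate(name, (relevance, rigor), textcoords="offset points", xytext=(8, 8))

ax.set_xlabel("Relevance (Module 2A)")
ax.set_ylabel("Rigor (Module 2B)")
ax.set_xlim(0.5, 4.5)
ax.set_ylim(0.5, 4.5)
plt.show()
```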
15. NEXT STEPS
Testing and Demonstration
• Current iteration: a free, downloadable Excel Tool (Version 1.0)
– Will continue to vet and test the Tool for usefulness, relevance, reliability, and interpretation
• Focus groups and an early-adopter survey to inform the next iteration of the Excel Tool (Version 2.0), as well as the development of a desktop software solution
Additional Tools
• Growing a library of use cases, video demonstrations for users, webinars, and additional training tools based on feedback from early adopters
17. INITIATIVE TEAM
CENTER FOR MEDICAL TECHNOLOGY POLICY
Rachael Moloney, Research Manager (Initiative Lead)
Donna Messner, Sr. Vice President (GPC Program Director)
Sean Tunis, President & CEO (Advisor)
Jennifer Al Naber, GPC Program Manager (Initiative Manager)
Julie Simmons, Manager, Marketing & Communications (Co-chair, Dissemination Workgroup)
EXTERNAL COLLABORATORS
Michael Stoto, Georgetown University (Chair, Methods Workgroup)
John Beilenson, Strategic Communications & Planning (SCP) (Co-chair, Dissemination Workgroup)