This document discusses strategies for improving clinical trial site performance through quantifying site metrics and providing feedback. It recommends generating site-specific performance reports using data from the IVRS, calculating metrics like screening and enrollment rates. These reports should be shared with sites via email on a regular basis to start evidence-based conversations about performance. It also suggests using a web-based platform to provide ongoing feedback through features like leaderboards, awards, and educational resources in order to build relationships with sites and motivate improved performance. While data is important, it's also critical to understand the human factors influencing performance and support sites in addressing challenges.
Quantifying, Recognizing and Optimizing Site Performance

By Beth Harper, President of Clinical Performance Partners, Inc. and
Bryan Farrow, Director of Trial Site Experience for the DrugDev TrialNetworks study platform

DD SitePerformance_Layout 1 2/19/15 9:45 AM Page 1

The speed with which sites activate, recruit subjects, and manage their data is a key determinant of study duration, and consequently of study cost and filing timelines. It is therefore in the interest of trial sponsors and of patients awaiting safe and effective therapies to improve the pace at which start-up, recruitment and data management occur. While a future paper will address start-up, in this paper we discuss tactics and strategy for site performance in two sections.

First, we prescribe to study managers very specific tactics for encouraging speedier recruitment, data entry and query resolution at all of their sites. These tactics, grounded in behavioral psychology and a "biofeedback" model of behavior change, rely on providing real-time, site-specific performance metrics to each site.

Second, we discuss the theory and factors of site-sponsor relationship-building that is focused on maximizing each site's performance.

The Problem

"We need to pick up the pace of enrollment."
"Sites need to get their data in ASAP."
"We can't sustain these screen failure rates."

Thoughts like these haunt trial managers like anxious ghosts. They wait for them at the breakfast table, and hitch a ride home from the office late at night. That's as it should be. If the status quo isn't a worry, it won't change.

But worry in itself is powerless. We know this, and so we act; more specifically, we communicate. Through emails, over teleconferences, and at interim analysis meetings, we say what we believe sites need to hear. "We'll have to screen a lot more patients this fall." Even better: "Collectively, we need to randomize 50 more patients by September 30." We recognize why this second call to action is superior. It's specific. It posits a deadline. There's no ambiguity as to what's needed and when.

The problem? Except in the mind of the trial manager, there is no "collectively."

Consider the proverbial office coffee pot. The fact that it needs to be cleaned daily doesn't oblige a particular person to clean it on a particular day. Instead, we trust in common sense and camaraderie. All too often, the result is mold. Social psychologists call this phenomenon "diffusion of responsibility," and the only antidote is assigning specific tasks with specific deadlines to specific individuals, even when doing so feels patronizing.

Moving from pots to patients, we can see why the general directive "randomize Y more patients by X" is unlikely to motivate sites. Those sites that have led the field in recruitment are happy to continue producing, but really, they have already done their part. Those that have lagged behind are always on the lookout for eligible subjects, but simply can't meet the goals set by the sponsor. Or so the thinking goes. Not always, and not consciously, of course. Overwhelmingly, investigators and coordinators desire to help their patients and fellow sufferers by contributing as much as they can to their studies. But in the day-to-day work of even the most energetic, compassionate professionals, the call to go above and beyond is a weak signal in a lot of noise.
The Solution (in Theory)
Imagine a world of supersonic travel, or perhaps human cloning, where you or a CRA could meet with the Principal Investigator and Primary Study Coordinator at each of your sites every Monday morning. Imagine further that you've brought a detailed site performance report. These reports include all the metrics you care about, from screening rate to query aging, and indicate how the site you're visiting has performed on each one. Most importantly, each report indicates how far above or below the study-wide average that particular site falls on each metric.

Can you imagine a more productive way to begin a conversation? Would you need to say anything at all before the site, examining the report, opens up to you about problems they're facing, recommits to meeting their projections, or feels genuinely recognized for their success? In essence, the report becomes the dialogue-starter, and the data moves the conversation from an emotionally sensitive one to an evidence-based one. And if there is one thing doctors appreciate, it is data!

A version of this scenario is possible, sans cloning. The aforementioned data is already familiar to you. There are reams upon reams of it. But the stakeholders who need to see it most never get a peek: your site staff.

Yes, engaged sites are well aware of how many patients they've screened and randomized. They know how many queries they have outstanding. A truly engaged site can even tell you where trial-wide recruitment stands and how many patients still need to be enrolled – provided, of course, that you've told them recently. Too often, their knowledge of the numbers ends there.

It doesn't have to. More emphatically, it shouldn't.
The Solution (in Practice)
In the world of trial management, Excel makes up 78% of the
atmosphere. For every flaw in that application (and there are
many), there is a power you may not have tapped. Any IVR
worth its salt makes it easy for you to download a complete
subject report, with dates for each screening, screen failure,
and randomization. Typically, another click or two will lead
you to a report of site activation dates. Together, these two data sets enable you to produce exactly the kind of performance report your sites want and need. By bringing these two IVR reports together with just a few simple Excel formulas, you can calculate, for each site:
» The number of days that have passed since their last screening
» The number of days that have passed since their last randomization
» Their screen failure rate or, better yet, their pre-screen failure rate
» Their average number of screenings per month
» Their average number of randomizations per month
» The percent difference between their screening rate and the trial-wide screening rate
» The percent difference between their randomization rate and the trial-wide randomization rate
» The percent of subjects converting across each stage of the recruitment funnel (pre-screening, consenting, screening and completion) and a forecast of when they will complete their enrollment goal
If your original reports indicate each site’s country, you can
index their performance to their country’s averages.
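As an alternative to Excel formulas, the same calculations can be scripted. The sketch below computes a few of the metrics above in plain Python; the input shapes and the 30.4-days-per-month convention are assumptions, not part of any IVR's actual export format, so adapt them to your own reports.

```python
# Sketch: per-site recruitment metrics from IVR-style data.
# Input shapes are hypothetical; map your IVR's activation and subject
# reports into these structures before use.
from datetime import date

def site_metrics(activations, randomizations, today):
    """activations: {site: activation_date}; randomizations: {site: [dates]}."""
    # Trial-wide randomization rate = total randomizations / total site-months
    total_rand = sum(len(d) for d in randomizations.values())
    total_months = sum((today - a).days / 30.4 for a in activations.values())
    trial_rate = total_rand / total_months
    report = {}
    for site, activated in activations.items():
        months = (today - activated).days / 30.4
        rand = randomizations.get(site, [])
        rate = len(rand) / months if months else 0.0
        report[site] = {
            "months_active": round(months, 1),
            "randomized": len(rand),
            "rate_per_month": round(rate, 2),
            "days_since_last": (today - max(rand)).days if rand else None,
            "pct_vs_trial": round(100 * (rate - trial_rate) / trial_rate, 1),
        }
    return report
```

The same pattern extends to screening dates, screen failures, and query aging; grouping sites by a country column before computing `trial_rate` yields the country-indexed comparison described above.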
This exercise alone offers immense value to you, as you'll then be able to filter and sort on the metrics that best signal a site's recruitment performance; namely, enrollment rates and their percent differences from country and global averages. (The same may be said for query aging and missing-pages data.) To unleash the real power of these numbers, though, you'll need to communicate them to sites. But how?
Email is a big part of the answer. Hundreds of manually typed
messages are not. All of the most common email clients, in-
cluding Outlook and Gmail, offer a “mail merge” feature as part
of their basic set-up. While the techniques differ, the concept
is the same. You’ll upload to your client a spreadsheet whose
first column consists of recipient names or email addresses,
and whose subsequent columns indicate, for each recipient,
the value that should be assigned to the variable name in the
header. A simple example follows below:
When it comes time to compose your email, you'll forgo any actual values and instead insert the corresponding variable name. For example:

"…In the <months> months since your site was activated, you've screened <screened> patients and randomized <randomized>. Your randomization rate of <site randomization rate> patients per month is <randomization rate difference> than that of the trial-wide rate, which currently stands at <trial randomization rate> patients per month…"

Or

"…As of <date>, your site has <queries> queries greater than 14 days old still unresolved. Your rate of <site-query-rate> queries per subject is <query-rate-difference> <above/below> the trial-wide rate of <trial-query-rate> queries per subject."

There is, at least practically, no limit to the number of variables you could include in your message. However, the usual rules of effective email communication remain in force. Begin with a clear call to action. Keep it short. Clearly state how and to whom recipients should respond.

Fine-tuning the Excel template and message template that are right for your trial will take time, but that investment offers powerful and regular returns. If set up properly, updating the Excel template is simply a matter of dropping the reports automatically generated by your IVR or EDC into the "input" sheets. The formulas in your "output" sheet will perform all the calculations. Designate one to two hours each week to this task, and the performance report scenario imagined earlier becomes a reality.
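For teams comfortable with a script instead of a mail client's merge feature, the substitution step itself is a few lines of Python. The column and placeholder names below are illustrative, mirroring the kind of spreadsheet described above; wire the result into your own mail system.

```python
# Sketch: a scripted mail merge over a CSV export.
# Column/placeholder names are illustrative; match them to your spreadsheet.
import csv
import io
from string import Template

body = Template(
    "Dear $name,\n"
    "In the $months months since site $site was activated, you've randomized "
    "$randomized patients. Your rate of $site_rate patients per month is "
    "$rate_difference the trial-wide rate of $trial_rate patients per month.\n"
)

# In practice this would be open("merge.csv"); an in-memory example here.
merge_file = io.StringIO(
    "email,name,site,months,randomized,site_rate,trial_rate,rate_difference\n"
    "a@site123.com,James M.,123,2.4,14,5.83,4.37,33.4% above\n"
)

for row in csv.DictReader(merge_file):
    message = body.substitute(row)  # one personalized message per row
    # Hand off to your mail system (e.g. smtplib); printed for illustration.
    print(row["email"], message, sep="\n")
```

`Template.substitute` raises an error if a placeholder is missing from a row, which is a useful guard against sending a half-filled email.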
More Solutions
Emails do have a shelf life. While they are effective at sounding alarms, it takes only minutes for a newer and allegedly more urgent message to push yours off the screen – literally and figuratively. For that reason, a web-based, site-facing home for this data, a place where investigators and coordinators can go to learn about trial progress and assess their contributions, can make the difference between a correspondence with your sites and a relationship with them.

Web-based, study-specific platforms, like DrugDev's TrialNetworks platform, provide a continuity and sense of community that no email distribution list can. These networks are also the proper home of other recruitment and data management drivers.
Leaderboards
Patient recruitment is not a game. (Nothing that concerns health is.) But we aren't helping those we most want to help – current and future patients – if we aren't doing everything in our power to run an efficient and clean study. Instilling a sense of friendly competition is a time-honored method of spurring performance toward this goal.

Leaderboards are not new to the trial world. In one form or another, they've been a staple of study newsletters for years. But our industry has lagged behind others in adopting the technology that enables information to travel at 21st-century speed. On trials that are enrolling daily or hourly, even a monthly newsletter, beautifully formatted as a PDF, can seem quaint. Worse, the enrollment information it carries will be woefully out of date.
Portion of an example mail merge file, conveying site-specific recruitment performance

Email           Name       Site   Months   Randomized   Site Randomization Rate   Trial Randomization Rate   Randomization Rate Difference
a@site123.com   James M.   123    2.4      14           5.83                      4.37                       25.04% Less
b@site456.com   Diane K.   456    7.6      26           3.42                      4.37                       27.78% Greater
c@site789.com   Elias R.   789    3.5      19           5.43                      4.37                       19.52% Less
The answer is online leaderboards. Integrated with your IVR, leaderboards provide real-time recognition to your top performers, and an implicit nudge to those further down the list. Every trial has its own culture, based on geography, indication, and other factors, and so there is no "one-size-fits-all" leaderboard. But whether you decide to assign rankings based on site, country or region, on absolute numbers or rates, maintaining some version of a dynamic leaderboard will only further your mission to help sites reach their enormous potential.
Awards and Badges
Think of the last time you met an ambitious goal, reached a critical milestone, or went above and beyond the call of duty. Now think of the last time someone thanked you for it, and even broadcast your success to others.

Odds are terrific that the first happened a lot more recently than the second. That's not an injustice. We work in a demanding industry that requires self-motivation. Our real reward comes with the approval of desperately needed new therapies.

On the other hand, a little gratitude can go a long way, as no end of social psychology experiments will show us.1 Fortunately, the same technology that brought us electronic data capture and IVRS has made extending that gratitude easier than ever.
Here, too, the process begins with Excel or another spreadsheet application. A few simple formulas are all that are required to determine which sites have met which recruitment milestones. Identify the milestones and achievements that correspond with your study's scope and timeline. (For example, in a study where each site is expected to enroll at least 10 patients, a site enrolling its first, fifth and 10th patients marks three conspicuous milestones.) Communicate to your sites early, before FPI, which milestones you'll recognize for each site, and when a site meets a milestone – congratulate them! Not in passing, and not with a brief mention in an operations-focused email, but directly and publicly. If you aren't using a study portal with news forums (and you should be), consider a personalized email that asks nothing of the recipient but instead simply thanks them. Spend the hour after generating your site performance reports systematically sending these emails. Whenever possible, include all sites in "cc."
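The milestone check itself is trivial to automate alongside the weekly report. The sketch below uses example thresholds (first, fifth, tenth patient); the labels and counts are hypothetical and should be set to match your study's own scope.

```python
# Sketch: flagging newly reached recruitment milestones between reports.
# Thresholds and labels are examples; align them with your study's goals.
MILESTONES = {
    1: "First patient enrolled",
    5: "Fifth patient enrolled",
    10: "Enrollment target met",
}

def milestones_reached(previous_count, current_count):
    """Return milestone labels crossed since the last report, in order."""
    return [label for threshold, label in sorted(MILESTONES.items())
            if previous_count < threshold <= current_count]

# e.g. a site moving from 4 to 10 enrollments since the last weekly report
# crosses the fifth-patient and target milestones in one step.
```

Feeding each site's previous and current enrollment counts through this check tells you exactly which congratulatory emails are due that week.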
Beyond the Numbers
So far, we've reviewed effective site engagement strategies that are quantitatively driven. But what about quality – of the clinical science, yes, but also of the hard-working people on the site, CRO and sponsor sides? Here is where an online platform becomes even more crucial, serving as a secure home for:
» Video interviews with your clinical scientists
» Interviews with your top-performing sites, so you can
share best practices
» Learning tools for understanding everything from
IP mechanism of action to safety profile
Sites are responsible for identifying and carefully consenting qualified and interested study subjects. We, on the operations side, are responsible for convincing sites that this is eminently worth doing – not through over-communication or demands – but through an enthusiastic and accurate representation of the clinical science and its potential benefits.

Just like the medicine they're trying to advance, trial operations are a science, and so data must lead. But to lose sight of the human element of trial operations – the desire to know, the passion to innovate, the drive to offer better care – is to lose sight of the purpose of data collection.
The Reality – When the Best Laid Plans Go Awry
Now that you have perfected the science of quantifying site contributions and using these tools to engage the sites in productive conversations, you may still be faced with site performance issues. As a site, knowing WHERE I stand relative to my own goals or my peers may motivate me to improve, but it won't tell me HOW. That's where the art comes in: understanding WHY the sites may be underperforming and using this information to support the sites on a path towards improved performance.

In theory, the solution is obvious: look at the high-performing sites to see what they are doing well and share this with the other sites. Gone are the days of pitting sites against each other in a performance battle (while recognizing the power and value of instilling a little competitive spirit towards achieving the overall study goals). Creating a collaborative community where sites are encouraged to share best practices is a much more powerful weapon in your site performance management arsenal. But alas, what works at one site may not be transferrable to other sites for a whole host of reasons: cultural, infrastructural, geographic, resource-related and so on.
A Framework for Site Intervention
Understanding why sites participate in clinical trials and how to optimize their performance is equally if not more complex than understanding why and how to enhance subject participation in clinical trials. A robust root cause analysis of the problem reveals there are over 150 potential contributing root causes.2 And while a detailed discussion of how to do a root cause analysis, or of the actual contributing factors, is beyond the scope of this paper, suffice it to say they can be categorized into a number of elements that one of the authors has coined the Clinical Trials Participation Equation, or CTPQ. The CTPQ methodology and how it relates to solving the patient recruitment problem has previously been described.3 Simply put, the equation states that participation is a function of AE + CRC/PI (simple clinical research acronyms are used to help remember the elements). Here we expand on the concept to illustrate how the CTPQ applies to the theory of site engagement and how the formula can be used to design an effective site performance intervention strategy.
The CTPQ states that a site's participation and level of engagement is a function of:
» Their Awareness of the trial + the Availability of the patient population + the extent to which they feel their contributions and efforts are Appreciated by the sponsor
» How well Educated the site is about the trial + their Experience in conducting prior trials
» The Credibility of the Sponsor/CRO personnel interacting with the site, how Collaborative the interactions are (e.g., budget negotiations) + their level of Commitment to the trial (particularly the PI)
» The strength of the Relationship they have with the sponsor, and the Responsiveness of the Sponsor/CRO to the site's needs
» The Resources available to support the study (personnel and financial)
» The nature and frequency of Communication between the sponsor/CRO and the key site staff
Divided by:
» The Peril or Risk of participating in the trial in terms of lost revenue, disruption to the staff schedule and workload (e.g., will the study require weekend, on-call appointments for site staff)
» The level of Inconvenience (e.g., administrative burden to participate: a difficult EDC system; a complex budgeting and contracting process with the sponsor, etc.) and the amount of Protocol "Intimidation" involved (e.g., difficulty interpreting the protocol, complex visit requirements, equipment and preparation involved, etc.)

[Figure: The CTPQ Applied to Site Engagement. An investigative site's willingness to participate and perform in a clinical trial is a function of a numerator to increase (Awareness, Availability, Appreciation, Education, Experience, Credibility, Collaboration, Commitment, Relationship, Responsiveness, Resources, Communication) divided by a denominator to decrease where feasible (Peril/Risk, Inconveniences, Protocol "Intimidation").]
To have an impact on site performance, study teams need to consider all of these factors and pay particular attention to decreasing the amount of PI (Peril, Risk, Inconvenience and Protocol Intimidation) wherever possible. In cases where studies have a high degree of PI, it is even more important to focus on increasing the elements in the numerator of the equation (the AE + CRC) in an effort to keep the elements in balance.
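To make the numerator/denominator balance concrete, a study team could score each element and track a simple ratio before and after an intervention. The 1–5 scale, equal weighting, and the ratio itself below are hypothetical illustrations, not part of the published CTPQ methodology.

```python
# Sketch: a hypothetical numeric reading of the CTPQ balance.
# Scores (1-5 per element), equal weights, and the sum/ratio are
# illustrative only; the CTPQ itself is a qualitative framework.
NUMERATOR = ["awareness", "availability", "appreciation", "education",
             "experience", "credibility", "collaboration", "commitment",
             "relationship", "responsiveness", "resources", "communication"]
DENOMINATOR = ["peril", "inconvenience", "protocol_intimidation"]

def ctpq_index(scores):
    """Higher index suggests a site better positioned to participate and perform."""
    top = sum(scores[k] for k in NUMERATOR)
    bottom = sum(scores[k] for k in DENOMINATOR)
    return top / bottom

# Comparing the index before and after, say, training that reduces
# protocol intimidation makes the effect of an intervention visible.
```

Even a rough scoring exercise like this forces the team to discuss every element of the equation site by site, which is often more valuable than the number it produces.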
Putting Theory into Practice
Following the logic of the CTPQ, one can start to see where and how the previous strategies fit into the equation. The various techniques previously outlined (i.e., communicating site performance reports, leaderboards, communities of practice, etc.) easily cover several categories of the equation.
Awareness
» All of these techniques keep the trial top of mind
if nothing else
Appreciation
» They recognize and acknowledge the site's contributions to the important research effort
Credibility of the Sponsor/CRO personnel
interacting with the site
» Arming the site-facing personnel with metrics and data about the site's performance goes a long way towards enhancing their credibility
Collaboration
» The techniques, in part, help to convey a spirit of collaboration
Relationship
» The techniques are aimed at furthering a productive and positive relationship between the sponsor, CRO and site personnel; making small deposits in the site relationship bank can go a long way towards preserving the relationship when the going gets tough
Communication
» All of the techniques are part and parcel of a more robust
site communications approach beyond the typical slew of
emails the sites are barraged with on a daily basis
There are some factors in the equation that aren't explicitly
addressed in this paper, namely:
Availability of patients
» Successful enrollment is a function of whether the site
has access to the proper patient population. Enrollment
validation and proactive patient recruitment planning
techniques have been widely published and address the
first, and one of the most critical, factors in ensuring
successful site performance
Education and Experience
» While not directly discussed here, selecting experienced
site staff is a fundamental factor for success. But with
or without experience (everyone has to start their clinical
research career somewhere!), there are myriad ways
that sponsors and CROs can improve the quality of their
site training
Commitment
» Assessing the PI's willingness and commitment to
conducting the trial within the timeframes and constraints
of the protocol and budget is an all-too-often missed
opportunity during the traditional site selection process
Responsiveness
» While this paper has emphasized proactive site
communication, it is also important for sponsor and
CRO personnel to respond in a timely fashion to
all site requests
Resources
» Recognizing the work effort associated with
conducting increasingly complex trials is one thing,
but complementing this with study budgets that
truly reflect that effort, particularly for patient
prescreening, is another important facet of
balancing the CTPQ
Risk or Peril
» This is often the most difficult facet to really influence
or control and often comes down to the site's perception
of the trial. At a minimum, being clear and transparent
about what will be involved can go a long way towards
helping the site staff ascertain the risk/benefit ratio of
participating
DD SitePerformance_Layout 1 2/19/15 9:45 AM Page 6
Inconvenience and Protocol Intimidation
» Where possible, solicit operational input into the study
design during the protocol development and feasibility
stages to minimize the burdens. Getting the patient and
site perspectives early on can minimize the inconveniences,
or at least enable better budgeting and training to
overcome these challenges
So, when teams are faced with a low- or non-performing site,
the CTPQ can be an effective diagnostic and performance
intervention tool. Are all of the elements "in balance" across
the trial, and are there systemic levers that can be pulled to
enhance performance across all sites (for example, would a
protocol amendment to ease the burdens help all sites be
more successful)?
Which factors are at play for a given site? Does the level of
communication or recognition need to be amped up? Is there
a particular resource issue at the site where re-evaluating the
budget would be an appropriate intervention? Is the site in
need of additional education?
As the saying goes, one size does not fit all, and the formula
(and associated intervention plan) needs to be tailored for each
site. Nonetheless, the CTPQ provides a helpful framework for
at least organizing the possible root causes and opportunities
for intervention into a more manageable structure.
To consult with DrugDev on creating a
site performance report for your trial,
contact solutions@drugdev.com and
reference this white paper.
For more information on the CTPQ,
contact Clinical Performance Partners:
info@clinicalperformancepartners.com
Join our webinar on this topic on
April 2, 10-11 am EST:
http://bit.ly/quantifying-webinar
1. Tsang, Jo-Ann, "Gratitude and prosocial behavior: An experimental test of gratitude," Cognition and Emotion, 20 (1), 138-148, 2006.
2. Harper, B. Independent analysis of the root causes contributing to poor enrollment performance, 1996-2014.
3. Harper, B. and A. Neuer. A Strategic Formula to Enhance Subject Enrollment and Retention. The Monitor, February 2009, pp. 59-63.
Summary and Conclusion
We began this paper with a focus on the importance of
providing frequent metrics to each of your sites to enable
them to better understand and manage their performance.
Communicating these metrics via Excel formulas and
emails is a good start; however, a web-based portal (study
network) providing online leaderboards and other features
delivers performance data to sites in real time and helps
create a true feeling of camaraderie and community among
all stakeholders involved in the study. Such a network has
the potential to:
» Improve enrollment
» Boost site engagement
» Enhance the site’s overall study experience
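The weekly, site-specific performance metrics described in this paper can be sketched as follows. The record layout, field names, and rate definitions here are illustrative assumptions, not an actual implementation of any vendor's report card.

```python
# Hypothetical IVRS-style site records: (site_id, screened, enrolled, weeks_active)
sites = [
    ("101", 24, 10, 8),
    ("102", 9, 2, 8),
    ("103", 30, 18, 8),
]

def report_card(site_id, screened, enrolled, weeks):
    """Compute the core weekly performance metrics for one site."""
    return {
        "site": site_id,
        "screening_rate": screened / weeks,   # patients screened per week
        "enrollment_rate": enrolled / weeks,  # patients enrolled per week
        "conversion": enrolled / screened if screened else 0.0,
    }

cards = [report_card(*record) for record in sites]

# Leaderboard: rank sites by enrollment rate, highest first
leaderboard = sorted(cards, key=lambda c: c["enrollment_rate"], reverse=True)
for rank, card in enumerate(leaderboard, start=1):
    print(f"{rank}. site {card['site']}: "
          f"{card['enrollment_rate']:.2f} enrolled/week")
```

Publishing a ranking like this weekly, alongside each site's own trend, is what turns raw IVRS counts into the kind of evidence-based conversation the paper advocates.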
For those sites that are not reaching their performance
goals, we suggest using the CTPQ equation to identify the
factors holding the site back, and then using this formula to
prioritize the types of interventions that can get the site
back on track.
An effective approach to optimizing site performance must
embrace both theory and practice, strategy and tactics.
At the end of the day, however, many of these techniques
boil down to communication and information-sharing. As
we all know – knowledge is power. If we can give the sites
information on how they are doing against their goals as
well as identify how and where they can improve their
performance, everyone wins.
Everyone wins — especially the patient. •
About DrugDev
DrugDev simplifies life at the clinical trial site through technology and services that enable sponsors,
CROs and investigators to do more trials together. By providing study startup solutions, global
payments by CFS Clinical, and a clinical trial optimization platform by TrialNetworks, DrugDev
creates and drives standards to improve quality, shorten timelines and reduce cost. In addition,
DrugDev maintains the industry’s largest global network of opt-in sites actively seeking new trial
opportunities, and serves as the third-party host of the pre-competitive Investigator Databank
sponsored by Novartis, Janssen, Merck, Lilly and Pfizer. Learn more at drugdev.com.
About the Authors
Beth Harper, BS, MBA
Beth is the President of Clinical Performance Partners, Inc., a clinical research
consulting firm specializing in enrollment and site performance management. She
has passionately pursued solutions for optimizing protocols, enhancing patient
recruitment and retention, and improving sponsor and site relationships for 30
years. Beth is an Adjunct Assistant Professor at the George Washington University
who has published and presented extensively in the areas of study feasibility, site
selection, patient recruitment and protocol optimization. Beth received her B.S. in
Occupational Therapy from the University of Wisconsin and an M.B.A. from the University of Texas.
She can be reached at 817-946-4728 or bharper@clinicalperformancepartners.com
Bryan Farrow
Bryan is Director of Trial Site Experience for the DrugDev TrialNetworks study
platform. He is responsible for creating TrialNetworks content that engages and
educates study members spread across the globe, and for incorporating trial
site feedback into network enhancements. Drawing on a love of statistics, he
has pioneered site-specific "report cards" — which quantify recruitment and
data management performance for sites on a weekly basis.
Prior to joining TrialNetworks, he spent six years at Children’s Hospital Boston,
helping physician-scientists communicate their expertise and passion to referring clinicians and
families. This role strengthened his own interest in championing the clinical applications of the
biosciences. Bryan holds a BA in English and Philosophy from Dartmouth College and an MA in
English from the University of Virginia.
Bryan can be reached at bfarrow@trialnetworks.com.