Prolific aims to make online data collection more ethical and accessible by connecting researchers with a diverse global participant pool. Existing platforms like MTurk often treat participants unfairly, underpaying them or kicking them out of studies. Prolific addresses these issues by guaranteeing participants fair pay of at least £5 per hour and by giving researchers tools to prescreen participants and design high-quality studies. The company's goal is to build trust between researchers and participants, and thereby improve data quality for scientific research.
4. OUR STORY
Founded in 2014
Phelim Bradley, DPhil student in Bioinformatics, Cofounder and CTO
3,000+ researchers have collected 1.5M complete responses
8. MTurk: Paved the way for rapid online data collection
• Solved logistical problems (e.g., payments infrastructure, reputation management)
• Provided a critical mass of people
Traditional approaches to data collection:
• Panel companies (expensive, slow, not transparent, not flexible)
• Undergraduate samples (highly biased)
9. ONLINE DATA COLLECTION IS BOOMING
• In coming years, nearly half of all cognitive science research articles will involve online samples (Stewart, Chandler, & Paolacci, 2017)
• Online data collection is quick and low-cost
• Fast iteration between developing theory and generating evidence
• Many papers confirm that data quality is comparable to lab studies (e.g., Crump et al., 2013; Zwaan et al., 2017)
10. KEY CHALLENGE #1
Unethical practices, or: “slave labour”
• Participants get kicked out of studies
• Participants don’t hear back from researchers; lack of regulation and mediation
• Rewards are unfair (often < $2/hour); participants’ time is not valued enough
This wouldn’t happen in lab studies! But it is common practice on MTurk.
11. How Prolific can help
We are ethical and low-cost:
• Minimum £5.00 per hour for participants
• 30% service charge (+ VAT)
• We don’t kick participants out
• We help mediate between researchers and participants
PS: We are GDPR-compliant!
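The pricing above (a £5.00/hour minimum reward plus a 30% service charge + VAT) can be turned into a quick budget estimate. This is a minimal sketch using the figures from the slide; the 20% VAT rate is an assumption (the standard UK rate), not something the slide states.

```python
# Sketch: estimating the total cost of a study, based on the slide's
# figures (min £5.00/hour reward, 30% service charge + VAT).
# ASSUMPTION: vat_rate=0.20 (standard UK VAT) is not stated on the slide.

def study_cost(n_participants, minutes, hourly_reward=5.00,
               service_rate=0.30, vat_rate=0.20):
    """Return (total rewards, service fee incl. VAT, grand total) in GBP."""
    reward_per_participant = hourly_reward * minutes / 60
    total_reward = reward_per_participant * n_participants
    service_fee = total_reward * service_rate * (1 + vat_rate)
    return total_reward, service_fee, total_reward + service_fee

# Example: 100 participants, a 12-minute study
reward, fee, total = study_cost(n_participants=100, minutes=12)
print(f"Rewards: £{reward:.2f}, fees: £{fee:.2f}, total: £{total:.2f}")
```

Note that paying above the minimum scales both the rewards and the fee, since the service charge is a percentage of rewards.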
12. REWARD FAIRLY & PROMPTLY
What you as a researcher can do:
• Participants’ time is valuable and they deserve every respect, so reward fairly
• More broadly, a fair minimum compensation helps maintain a motivated, diverse, and high-quality participant pool
13. MAKE YOUR STUDY FUN & ENGAGING
What you as a researcher can do:
• Intrinsic motivation (e.g., framing tasks as helping others) can help improve the quality of submissions (Rogstadius et al., 2011)
• Make sure to randomise / counterbalance survey items → this makes surveys more interesting and is good scientific practice
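The per-participant randomisation suggested above can be sketched in a few lines. This is a minimal illustration, not Prolific functionality; the item texts and participant IDs are made-up placeholders.

```python
# Sketch: per-participant randomisation of survey item order.
# Item texts below are invented placeholders for illustration only.
import random

items = [
    "I enjoy taking part in research studies.",
    "I often complete surveys online.",
    "I find long questionnaires tiring.",
    "I read each question carefully.",
]

def randomised_order(items, participant_id):
    """Shuffle item order with a per-participant seed, so each participant
    sees a different ordering that is reproducible for that participant."""
    rng = random.Random(participant_id)   # seeded, so order is replayable
    order = items[:]                      # copy; leave the master list intact
    rng.shuffle(order)
    return order

# Different participants get different orders; re-running for the same
# participant reproduces the same order (useful for debugging/auditing).
print(randomised_order(items, participant_id=101))
print(randomised_order(items, participant_id=202))
```

Seeding on the participant ID, rather than shuffling freshly each time, means the ordering can be reconstructed later when analysing order effects.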
14. KEY CHALLENGE #2
Prescreening functionality on MTurk:
• Limited set of screeners; eligibility unclear
• Many Turkers (up to 62%) misrepresent demographics to access studies (Chandler & Paolacci, 2017)
• Non-naivety: the 10% most active Turkers complete 41% of studies (Chandler, Mueller, & Paolacci, 2014)
• Pricey (premium screeners up to $0.50 per participant per study)
15. How Prolific can help
We enable flexible prescreening:
• Choose from 250 screeners to filter based on 12.5 million demographic responses
• We run many tests on our pool to vet participants
• Screeners are always decoupled from studies → less cheating
• All default screeners are free of charge, incl. non-naivety!
16. FRAME QUESTIONS WISELY
What you as a researcher can do:

Poor framing:
“Do you play football?”
☐ Yes
☐ No
→ Too obvious

Great framing:
“Which of the following sports do you play? (Choose up to three you enjoy the most)”
☐ Tennis
☐ Basketball
☐ Football
☐ Volleyball
☐ ...
17. KEY CHALLENGE #3
Niche, diverse, and representative samples are difficult to find:
• MTurk: mainly US American participants
• Representative samples are not possible
• Niche samples are hard to find
• MTurk’s pool is not as big as claimed: the average lab samples from a population of only 7,300 Turkers (Stewart et al., 2015)
18. How Prolific can help
We have a diverse participant pool*
*Currently 51,000+ active participants in OECD countries
Learn more here: https://app.prolific.ac/demographics
We plan to launch an affordable representative-sample feature later this year (stay tuned!)
19. TEST, TEST, TEST before you go live
What you as a researcher can do:
Run pilots! Get feedback! We cannot emphasize this enough. Data collection is quick, which means you might not spot a bug until it’s too late.
20. USE ATTENTION CHECKS
What you as a researcher can do:
Also called “Instructional Manipulation Checks” (Oppenheimer, Meyvis, & Davidenko, 2009)

Bad attention check:
“Please watch this 5-min video of a charismatic speaker…”
“Now, please answer: Did the speaker wear a red hat? …”
Nobody can remember every single detail; this check is too harsh.

Good attention check:
Embed an item into a scale, for example: “Please tick ‘strongly agree’ …”
This is a great attention check because it requires ‘pure attention’ (rather than memory).
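Scoring the kind of embedded directed item recommended above is simple to automate. The sketch below is illustrative only: the column names, participant IDs, and 5-point response coding are assumptions, not part of any real platform's export format.

```python
# Sketch: scoring an embedded attention-check item of the kind this slide
# recommends ("Please tick 'strongly agree'").
# ASSUMPTIONS: item id "attn", 5-point coding with 5 = strongly agree,
# and the participant IDs are all made up for illustration.

responses = {
    "p01": {"q1": 4, "q2": 2, "attn": 5},   # ticked 'strongly agree' -> pass
    "p02": {"q1": 3, "q2": 5, "attn": 2},   # missed the instruction -> fail
}

ATTENTION_ITEM = "attn"   # the directed item embedded in the scale
EXPECTED_ANSWER = 5       # the instructed response ('strongly agree')

def passed_attention_check(answers):
    """True if the participant gave exactly the instructed answer."""
    return answers.get(ATTENTION_ITEM) == EXPECTED_ANSWER

passes = {pid: passed_attention_check(a) for pid, a in responses.items()}
print(passes)  # flags who to review before analysis
```

Decisions about excluding failers (and whether to still pay them) are a study-design and ethics question; the code only flags, it does not decide.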
21. Take-home message
We care about data quality. We want to make data trustworthy and connect the public with science by building an online community where researchers and participants trust each other.
22. WHERE NEXT?
For example…
• High data quality
• Best-practice guide
• Representative and international samples
• Lab experiments / clinical trials?
What do you think?
23. TO SUMMARIZE
Researchers want to recruit diverse participants
quickly and ethically. Most existing platforms are
unethical, not flexible, and not diverse enough.