Our publishing system boxes academic knowledge in expensive journals
1. The broadest problem in science:
Our publishing system
Alex.Holcombe@sydney.edu.au
School of Psychology
http://www.slideshare.net/holcombea/
@ceptional
3. Scientist meets Publisher
Academic knowledge is boxed in by expensive journals.
http://www.youtube.com/watch?v=GMIY_4t-DR0
7. JOURNAL / PUBLISHER COST ($USD)
$10,780 per article (not including charges for color figures)
$85 per page
$80 per page (introductory rate is even cheaper)
$1350 per article
$99 per life
10. Little steps that preserve most of the publishers’ profits
• Requirements from funders that publications be OA
• NIH (US): within 12 months
• Wellcome Trust (UK): within 6 months; final grant payment withheld if you don’t comply
• NHMRC (Australia): within 12 months
• ARC: you can use DP funds to pay open-access fees, but they must be taken from the funds you were awarded to pay for other things. “Strongly encourages” open access, but no teeth; compliance rate very low.
11. Research assessment system reinforces the journals status quo
“This slaughter of the talented relies entirely on a carefully designed set of retrospective counts of the uncountable. These are labelled research ‘metrics’.”
[Figure labels: Research · Publishers · Stevan Harnad]
13. Immediate open access: GREEN ROAD (Stevan Harnad)
• Deposit your manuscripts in the university repository (http://ses.library.usyd.edu.au/)
• Even with closed journals, you often have the right to deposit your final version (e.g. the Word document before it is typeset by the publisher)
• Funders and universities should mandate this.
• Publishers will adapt, as they have in physics.
14. The File-Drawer Problem
[Image: a file drawer of unpublished results — http://www.flickr.com/photos/nickperez/2569423078 (t. magnum)]
• Difficult to publish non-replications and replications
• Most journals only publish papers that “make a novel contribution”
• Reviewers/editors tend to hold non-replicating manuscripts to a higher standard than the original
• Bem
• Little career incentive to publish a non-replication or a replication
18. The File-Drawer Problem
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
“In summary, while we agree with Ioannidis that most research findings are false...”
[Image: http://www.flickr.com/photos/nickperez/2569423078 (t. magnum)]
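Corollary 6 can be made concrete with a toy simulation (my sketch, not Ioannidis’s; the group sizes, number of teams, and significance threshold below are illustrative assumptions). Many teams test an effect that is actually null; only “significant” results get written up, and the rest go in the file drawer:

```python
import random
import statistics

# Toy file-drawer simulation: 1000 teams each test a null effect;
# suppose only "significant" results reach the literature.
random.seed(1)

def looks_significant(n=20):
    """Two groups drawn from the SAME distribution; crude z test."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(diff / se) > 1.96  # roughly p < .05

published = sum(looks_significant() for _ in range(1000))
print(f"{published} of 1000 null studies came out 'significant'")
# Roughly 5% cross the threshold; every one of them is a type-1 error,
# yet under the incentives described above, those are the ones published.
```

The point of the sketch: the literature ends up containing only the chance successes, so a reader who never sees the drawer of failures has no way to gauge how believable a published effect is.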
20. Barriers to publishing replications and failed replications
• No glory in publishing a replication
• Few journals publish replications
  • usually an uphill battle even with those that do
• The wrath of the original researcher
23. File-drawer fixes
• Journals that don’t reject replications for being uninteresting or unimportant
• Pre-registration of study designs and analysis methods
• Brief reporting of replications
28. J-REPS: Journal of Registered Evidence in Psychological Science (Dan Simons)
1. Authors plan a replication study
2. They submit an introduction and methods section
3. It is sent to reviewers, including the targeted author
4. The editor decides whether to accept/reject, based on:
   1. Reviewer comments regarding the proposed protocol
   2. Importance of the study, judged by the argument in the introduction, the number of citations of the original, and reviewer comments
5. The Intro, Method and analysis plan, and reviewer comments are posted on the journal website
6. When the results come in, the authors write a conventional results and discussion section; that, together with the raw data, is posted, yielding the complete publication
29. Journal of Registered Evidence in Psychological Science
• The original author sort-of signed off on it, so can’t complain about / hate the replication authors as much
• Good way to start a new PhD project, or for anyone planning to build on some already-published results
• Will post the raw data
• Will facilitate and publish meta-analyses as replications accrue
• Reduces the incentive to publish flashy, headline-grabbing but unreliable studies?
30. Comprehensive solution? Open Science
• Funders may eventually demand:
  • Any data collected with their money be made available
  • Experiment software code be posted
• As data comes in, put it on the web
• Electronic lab notebook
• Papers written via open collaborative documents on the web
Editor's Notes
- Prep: have movie open. Prep: have AUDIO connected.
- I’m a researcher, I love doing research, but I feel trapped. Trapped in a system of science that has some unfortunate aspects.
- One that pushes us to do silly things. Not just silly things, but things that have very negative consequences for science.
- We need to reform the system to encourage us to do the right thing.
- We researchers, by the choices we make, perpetuate the absurdities of the system or push it towards reform.
- Every time we agree to review a paper, accept an invitation to write a book chapter or contribute to a special issue, agree to join the editorial board of a journal, or SUBMIT a paper to a journal, we’re steering the future of science.
- Today I’ll talk about two aspects of the problems of science and the future of science. Both are aspects of our publishing system. First is the issue of who owns our papers, and making them free to access. Second is another big problem in science: the file-drawer problem.
- I’ll post my slides on SlideShare.
- Every talk should have a demo. OK, but how to obey that for today’s talk, which isn’t about any experimental task or visual phenomenon?
- I made a short video that can serve to remind people of how absurd the standard publishing system is. Having grown up as researchers in this system, many of us lose sight of this.
- Can we look at the system again with fresh eyes, to see the absurdity? The video first showcases the silliness of this situation and then describes a few of the things we can do as individuals.
- The publisher profit margins are high, well over 30%, in an era when the profit margins on all other types of publishing have rapidly declined.
- The reason is that it’s a lot like a monopoly pricing situation. The journal publishers own the journal titles; researchers then demand that their libraries subscribe to certain journals with no consideration of price, and this has led to massive price inflation.
- There are significant costs involved in publishing a journal, in X, Y, and Z, but they’re much less than what the journals can charge thanks to the monopoly situation.
- Periodically you get university librarians pleading that something be done about this. The Faculty Advisory Council of Harvard University recently wrote that the traditional journal system is unsustainable and asked the faculty to consider resigning from editorial boards of journals owned by objectionable publishers.
- Usually everybody just ignores the librarians, so the situation continues.
- The way things should be: instead of corporations owning these journal titles, the researchers should own them. Publishing is a simple service that should be done through contracts with the publisher.
- After all, the content of the journal is provided by the researchers. Moreover, the editorial board is made up of researchers; all the decisions in these journals are made by researchers. Almost all of this work is done by the researchers, and almost all for free, or paid for by their universities.
- And then the universities have to buy it back from these publishers, after the publishers take many millions in profits. No: the publishing should be contracted out.
- Right now most of us are perpetuating this terrible system.
- Sometimes people question whether we’re really getting a raw deal from these corporate publishers, even when they charge $10K or $20K for an online subscription to a journal.
- So I did some calculations to compare different journals: corporate subscription journals vs. open-access journals, the ones I’m most familiar with.
- Based on the total subscription revenue of Elsevier and the number of articles they publish, Elsevier earns over $10K per article they publish.
- Whereas looking at some open-access journals in my field that charge the author, one pays only $80 or $85 per page to publish, which means less than $1000 for the average article.
- Finally, we know it can be much cheaper than even what these non-profit publishers are charging. PeerJ is a new for-profit business started by the former publisher of PLoS ONE.
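The per-article comparison in these notes is simple division; here is a sketch with placeholder figures (the revenue and article-count numbers are hypothetical, chosen only to illustrate the calculation, not audited Elsevier data):

```python
# Hypothetical inputs for illustration only; not audited figures.
subscription_revenue_usd = 3_200_000_000   # assumed annual journal revenue
articles_per_year = 300_000                # assumed articles published

cost_per_article = subscription_revenue_usd / articles_per_year
print(f"subscription model: ${cost_per_article:,.0f} per article")
# → subscription model: $10,667 per article

# Author-pays open-access journal at $85/page, ~10-page article:
pages, per_page = 10, 85
print(f"author-pays model: ${pages * per_page} per article")
# → author-pays model: $850 per article
```

With any plausible inputs the gap is an order of magnitude, which is the point of the slide’s cost table.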
- These are difficult times financially for universities.
- Corporate welfare.
- Monopoly pricing.
- In the 21st century, it’s silly that most university research is not freely available. Almost all the barriers are unnecessary.
- In the overwhelming majority of cases, researchers and universities have the power to make their research free to download. Instead, we’re still paying publishers for subscriptions to their journals.
- Most of you know journals are a bit like magazines, but with authors who are unpaid.
- How much do subscriptions to these journals cost? AUDIENCE: “I remember, back before the internet, I used to subscribe to magazines. It cost maybe $30 or $40 a year, right?”
- Right, so there’s been a bit of inflation since then, but 13 THOUSAND DOLLARS? 19 THOUSAND DOLLARS?
- Some of these journals are just websites, and they don’t even have to pay the writers.
- Now, there are real costs associated with editing a manuscript and marshalling it through the peer-review and web-production processes, but they’re estimated at $500 to $2K per article, so with the number of subscriptions they get, they make the money back many times over.
- This means most scientists in the world can’t read these journals.
- Median = $889/yr, mean = $1759/yr: still a lot more than free, and it means most people in the world don’t have a hope of reading them.
- Most scholarly articles are only available through journals that require a subscription, and a subscription costs money.
- (Institutional rate: print incl. free access, or e-only.)
SUMMARY
- We have seen an enormous amount of progress on this issue in the last few years. Three or four years ago many people hadn’t heard of it, and those who had were wary.
- Whereas in the last few months, we have people at the highest levels of government, at least in the UK, saying that open access to scientific research is inevitable.
- And we’ve seen people work hard on tools, taking advantage of new technology, that make running a journal very cheap.
- However, we still have a long way to go, because the bureaucracy still ties us to the journals published by the corporations that charge the most and pocket all the profits. So they either have to find new ways to evaluate us besides the prestige of the journals we publish in, or do an end-run around the journals.
- But what the funders are doing is neither changing the system nor doing an end-run. Instead they’re mainly nibbling around the edges.
- These funders’ motivation, of course, is that the research was done in the public interest. So the public, and all the people who serve the public interest, should be able to read the research.
- Australia hasn’t done much even in the way of taking these little steps. In an interview, the outgoing ARC chief indicated that she has “no plans to make academics publish taxpayer-funded scholarly research in places where anyone can access it for free.”
- These steps of requiring open access, but only after 6 months or a year, are measures chosen to preserve the profits of the publishers.
- Here she explains why the ARC has no such plans: http://theconversation.edu.au/open-access-not-as-simple-as-it-sounds-outgoing-arc-boss-6628
- There’s a broader problem here. We’re stuck in a system that assesses our work in large part by the name of the journal it appears in.
- This causes everyone to scramble to submit to the journal with the highest impact factor, which reinforces the lead of those journals regardless of their true quality and regardless of the price they charge to universities. This means the legacy journals owned by the corporations will remain on top no matter what.
- Refusing to participate in this absurd rat-race not only reduces one’s chances for promotion and grants, but can even cause one to lose one’s job.
- Here I’m highlighting Queen Mary University of London. It’s at a safe enough distance; I wouldn’t want anyone to think I’m referring to Sydney University.
- Queen Mary has installed a metrics-based system to identify underperformers they may want to fire. They use research quantity (number of papers), research quality quantified by impact factor, and research income. One of their scientists responded by sending a critical letter to The Lancet in which he described the system this way.
- DOG: if we researchers are the research dog here, then what we do is being determined by the interests of the publishers. The bureaucrats evaluate us by the journals we publish in, which reinforces the monopoly the corporate publishers have. As long as researchers are evaluated that way, they’ll keep submitting to these closed journals, giving easy money to the publishers. Instead of the publishing tail wagging the research dog, publishing should be a service industry that researcher communities or the government contract with.
- So the funder mandates to make the expensive journals OA after 6 months don’t redress the issue: they still reinforce the most expensive journals.
- Nowadays anybody with a website can publish things instantaneously, and taxpayer-funded research should therefore be published instantaneously.
- Rather than doing this willy-nilly on personal or departmental websites that tend to come and go, it can be done systematically with university repositories.
- It’s like a green road. To go this way, we don’t have to commission the government to study everything, build a whole new infrastructure, and wait years for that to happen. You just start driving.
- QUT and Harvard have both implemented partial mandates.
- And for the rest of science, we’re seeing new low-cost players that can provide the services we need in this environment. PeerJ does it for $99.
- DOES ANYBODY WANT TO INTERRUPT WHILE THIS PART OF THE TALK IS STILL FRESH IN THE MIND?
- Publishers should be service providers. They shouldn’t be owners of all the intellectual property the government is paying us to create.
- http://openaccess.eprints.org/index.php?/archives/808-RIN-Report-The-Green-Road-to-Open-Access-is-Wide-Open.html
- Now I want to leave open access and talk about something that at first may seem unrelated: the problem not of how to make published results free, but rather the problem of UNpublished results. Towards the end I’ll describe my vision of how these things are linked.
- I think this is one of the biggest problems in science, because it means we have no real way to know whether to believe a lot of published results.
- Let’s say you read an interesting paper comparing two groups or two conditions, which reports that the two groups or conditions yield significantly different results. How do you know whether to believe that, or whether it might instead be some kind of type-1 error?
- And I’m talking about cases where the data is out there, where people have actually done replications of the phenomenon. They simply haven’t published it; instead it’s in the file drawer.
- Has anybody here ever tried to replicate a published result but failed to replicate it? You couldn’t get it to work? DID YOU PUBLISH THAT? WHY NOT?
- As researchers, many of us get a project started by trying to build on a published result, but then find ourselves unable to replicate the original result. Now, the knowledge that those results are difficult or impossible to replicate is valuable knowledge! Nevertheless, most of us simply drop it.
- JPSP example: there was a paper published by Daryl Bem in 2011 called “Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect.” He described it as “strong evidence for extrasensory perception, the ability to sense future events.”
- Three teams of researchers independently attempted an exact replication of this study and failed to replicate the ESP result. They got together and wrote a manuscript, as a sort of triple replication-failure. They first submitted it to the journal that had published the original study, which rejected it straightaway, saying “We don’t publish replications. Not original enough!” Then they sent it to Science Brevia, a section of the journal Science, which also rejected it without sending it for review, for lack of originality. Then they submitted it to a top psychology journal, Psychological Science, which also rejected it without sending it out to review. Finally, the fourth journal they sent it to sent it out for peer review!
- One reviewer at that journal gave a positive review; reviewer 2 had reservations, and the editor rejected it. They suspected that the original author, Bem, was the reviewer with reservations, and indeed he later confirmed he was.
- In summary, a triple failure-to-replicate of the Bem ESP experiments was rejected by four journals (three for not being original enough, one when Bem was the reviewer). Finally they sent it to PLoS ONE, where it was accepted and came out earlier this year.
- Reference: Bem (2011). Feeling the Future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407-425. Eliot Smith of Indiana University in Bloomington, the Journal of Personality and Social Psychology editor who handled the submitted paper, declined to send it out to review. “This journal does not publish replication studies, whether successful or unsuccessful,” he wrote. http://www.newscientist.com/article/dn20447-journal-rejects-studies-contradicting-precognition.html
- I hope this disturbs you! It certainly was very disturbing to me!
- If you haven’t read anything by John Ioannidis before, I urge you to do so, because he’s got a pretty good argument. He makes a series of calculations: it depends on how many tests people are doing of possible effects, what the probability is of any particular thing tested being true, the average size of studies in the field, etc.
- I’m not going to go through it all, but to give you a taste, here are two corollaries, BOTH of which are related to the file-drawer problem.
- Corollary 4: since everybody is trying to get a positive result so they can publish it, people tend to fish around, doing lots of analyses.
- Corollary 6: related to the file-drawer problem. This happens in areas where lots of people are testing related things, but understandably only publish the positive findings. So occasionally a team will get a significant result, which often might be a type-1 error (that is, it happened due to chance rather than being a real effect). Then some other teams in the area will jump on that and try to build on it. Probably they won’t be able to replicate it, and hopefully they’ll manage to publish that non-replication somehow, in which case the field will swing to some other area when somebody else finds a different significant result.
- So publishing of replications is sorely needed. One of the replies to the Ioannidis article relates to this, and I find it kind of funny.
- I’m in an absurd system of science that pushes me to publish my own type-1 errors and not correct those of others. #publicationBias
- It shouldn’t be this way; science should be a model human endeavour.
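Ioannidis’s calculations rest on the positive predictive value of a claimed finding, PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that the probed relationship is real, α the type-1 error rate, and β the type-2 error rate. A minimal sketch of that calculation (the R and β values below are illustrative assumptions, not figures from the talk):

```python
def ppv(R, alpha=0.05, beta=0.5):
    """Ioannidis (2005): post-study probability a claimed finding is true.
    R     -- pre-study odds that the probed relationship is real
    alpha -- type-1 error rate (false-positive threshold)
    beta  -- type-2 error rate (1 - statistical power)"""
    return (1 - beta) * R / (R - beta * R + alpha)

# A "hot" exploratory field: long pre-study odds, modest power.
print(round(ppv(R=0.05), 2))            # → 0.33: most claimed findings false

# A confirmatory field with good odds and high power fares far better.
print(round(ppv(R=1.0, beta=0.2), 2))   # → 0.94
```

The formula makes the two corollaries concrete: analytic flexibility effectively inflates α, and a hot field with many teams probing long-shot hypotheses has a small R; either change drags PPV down.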
- Previously no “good” journals published (non)replications.
- You don’t get much of a carrot for doing it, but you have the real possibility of being hit with a stick for your efforts. There are no carrots, only the sticks, or the blades, of the original authors possibly getting pissed off.
- It’s been suggested by many people that we recently witnessed a case of this online. I encourage you to read it yourself: it’s a seemingly vengeful attack on the authors of a failed replication, with a few ad hominem attacks scattered through it and a number of apparent inaccuracies describing the original study.
- It attracted a lot of attention online, even leading to the invention of the term “getting Barghed.”
- Of course, it’s worse with anonymous review.
- The situation is truly bad, in that there is really no way in most areas of science for general skepticism about a result to get voiced. There are a lot of people out there who doubt one result or another, but unless you can hang out with the right people at the right conferences, you’ll never find out.
- So the PLoS ONE criteria explicitly exclude any criterion of importance. The criteria are essentially that the study be appropriately designed and executed, and that the conclusions be justified. For that reason, it tends to be much easier to get a replication in than it is with other journals.
- However, it can still be an uphill battle. The reviewers and editors in some cases still hold the replication attempt to a significantly higher standard than the original. And it doesn’t address the incentives problem at all.
- Pre-registration of the plan of a study has become very common, and is sometimes required, for clinical trials.
- PsychFileDrawer is a website my colleagues and I created to make an end-run around the uphill-battle problem.
- We wanted a place where researchers could quickly upload notices of successful and failed replications of published papers.
- So if you go to the site, you’ll see a list of the articles for which people have posted notices of replications and non-replications.
- Let’s say a fresh-faced PhD student and a supervisor try to replicate a published study.
- If the replication attempt fails, the PhD student is likely to react with: wow, that result isn’t real! We have to tell the world!
- An old, cynical PhD supervisor will react with: “It’s not going to be worth the trouble, the additional control experiments, the frustration of rejection from multiple journals, all to publish something that only creates some doubt about something. And then there’s the hostile reaction we may get from the author of the targeted article, and all the papers and grants of mine that he could reject in the future.”
- But if the PhD student is fresh-faced enough, he may maintain his idealism about what science is really all about. He may not be able to publish the finding as a paper without a lot of support from his supervisor, but he might be able to write up a short notice for PsychFileDrawer.
-When I tell people about the PsychFileDrawer site, a lot of them say “Great idea!”
-But none of those people actually post anything
-So if you go to the list of notices, you’ll find only 9 entries, and 3 of those were entered by the creators of the site
-So PsychFileDrawer is basically a failure so far
-The website has been live for 3 or 4 months and in that time has gotten some buzz, with many thousands of visitors to the site
-It’s played a role in some big internet stories
-But no matter how many people visit the site, no one posts to it
-Do we just need to wait for cultural change? We know awareness of the file-drawer problem has been steadily increasing, as has awareness of related issues, like the growing number of scientists who have admitted to fraud and whose results therefore need replicating
-So maybe it’s just a matter of time. On the other hand, maybe we won’t get anywhere without the carrot of journal publication
-New idea: combine the top two approaches to yield the possibly all-important carrot of journal publication
-It actually helps a little with the issue of pissing off the original author, because they have, in a sense, signed off on it. Normally the authors of such a study wouldn’t materialize to the targeted author until they’re going around submitting manuscripts saying they failed to replicate!
-Here, the “attacking” authors appear at a stage where they look more like disinterested parties
-It would be great for first-year grad students and undergrad honors students to take on a replication as their first project. I could imagine that becoming a standard first-year project
-It could change the incentives for researchers who publish flashy studies. Those studies would be highly likely to be replicated in this system. So, if you aren’t confident that your flashy result will replicate, you might be hesitant to publish it in the first place without replicating it yourself first (and with adequate power to be sure you’re right). I could see this change in the incentive structure leading to greater power in studies and fewer published false positives
-It would reduce the problem of flashy studies: currently, if you get a cool result like “having sex makes people more generous” or “eating meat makes you more violent”, then even if you think it’s a Type I error, most of the incentives are to publish it. If it gets accepted, as it likely would, it would be a very long time, if ever, before anyone published a non-replication, and in the meantime you’d accrue hundreds of citations, get promoted, and one day maybe even be able to buy a house
-ANY COMMENTS? What do you think, would you submit to a ROIM journal?
-Funders are going to reward this
-People chip in, offer the use of reagents
-It might sound crazy, but it can actually work and lead to scientific solutions that couldn’t otherwise be achieved
-Todd knew he couldn’t do it by himself; he needed help