The document discusses problems with the current academic publishing system, including the near-monopoly pricing power of for-profit publishers and incentive structures that discourage replication studies and leave "negative" results in the file drawer. It proposes several remedies, including open-access publishing models, pre-registration of studies to prevent selective reporting of results, and journals dedicated to publishing replication studies, all aimed at making the scientific process more transparent and self-correcting.
The broadest problem in science: Our publishing system
1. The broadest problem in science: Our publishing system
Alex.Holcombe@sydney.edu.au
School of Psychology
http://www.slideshare.net/holcombea/
@ceptional
2.
3. Scientist meets Publisher
Academic knowledge is boxed in by expensive journals.
http://www.youtube.com/watch?v=GMIY_4t-DR0
7. JOURNAL / PUBLISHER COST ($USD)
$10,780 per article (not including charges for color figures)
$85 per page
$80 per page (introductory rate is even cheaper)
$1350 per article
8. JOURNAL / PUBLISHER COST ($USD)
$10,780 per article (not including charges for color figures)
$85 per page
$80 per page (introductory rate is even cheaper)
$1350 per article
$99 per life
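The arithmetic behind the cost table above is worth making explicit. Here is a minimal back-of-the-envelope sketch, added for illustration: the total-revenue and article-count figures are placeholder assumptions (the speaker notes only claim Elsevier earns "over $10K per article"), while the $85 per page and $99 membership figures come from the slides.

```python
# Back-of-the-envelope comparison of per-article revenue under the
# subscription model vs. author-side charges.
# NOTE: subscription_revenue_usd and articles_per_year are illustrative
# assumptions, not audited figures.

subscription_revenue_usd = 3.2e9   # assumed total annual journal revenue
articles_per_year = 300_000        # assumed annual article output

revenue_per_article = subscription_revenue_usd / articles_per_year
print(f"Subscription model: ~${revenue_per_article:,.0f} per article")  # ~$10,667

page_charge_usd = 85    # per-page author charge, from the slide
pages_per_article = 10  # assumed length of a typical article
print(f"Page-charge model:  ~${page_charge_usd * pages_per_article:,} per article")  # $850

peerj_membership_usd = 99  # PeerJ's "$99 per life", from the slide
print(f"Membership model:   ${peerj_membership_usd} once, per author")
```

Under these assumptions the subscription model yields roughly ten times more per article than the real editing and production costs the speaker notes estimate at $500 to $2K.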
10. Little steps that preserve most of the publishers’ profits
• Requirements from funders that publications be OA:
• NIH (US): within 12 months
• Wellcome Trust (UK): within 6 months; final grant payment withheld if you don’t comply
• NHMRC (Australia): within 12 months
• ARC: you can use DP funds to pay open-access fees, but they must come out of the funds you were awarded for other things. “Strongly encourages” open access, but no teeth; compliance rate very low.
11. Research assessment system reinforces journals status quo
“This slaughter of the talented relies entirely on a carefully designed set of retrospective counts of the uncountable. These are labelled research ‘metrics’.”
12. Research assessment system reinforces journals status quo
“This slaughter of the talented relies entirely on a carefully designed set of retrospective counts of the uncountable. These are labelled research ‘metrics’.”
[figure labels: “Research”, “Publishers”; photo: Stevan Harnad]
13. Immediate open access: GREEN ROAD
• Deposit your manuscripts in the university repository (http://ses.library.usyd.edu.au/)
• Even with closed journals, you often have the right to deposit your final version (e.g. the Word document, before typesetting by the publisher)
• Funders and universities should mandate this.
• Publishers will adapt, as they have in physics.
[photo: Stevan Harnad]
14. The File-Drawer Problem
[image: a file drawer labelled “unpublished results files”; http://www.flickr.com/photos/nickperez/2569423078]
15. The File-Drawer Problem
• Difficult to publish non-replications and replications
• Most journals only publish papers that “make a novel contribution”
• Reviewers/editors tend to hold a non-replicating manuscript to a higher standard than the original
• Bem
• Little career incentive to publish a non-replication or a replication
[image: a file drawer labelled “unpublished results files”; http://www.flickr.com/photos/nickperez/2569423078]
18. The File-Drawer Problem
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
[image: http://www.flickr.com/photos/nickperez/2569423078]
19. The File-Drawer Problem
Overlaid on the same corollaries: “In summary, while we agree with Ioannidis that most research findings are false...”
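The logic behind Corollary 4 can be made concrete with a small Monte Carlo sketch (an illustration added here, not from the original deck): every lab studies a truly null effect, but each lab is free to try several analyses and write up whichever one reaches p < .05. All parameters (group size, number of labs, number of analyses) are illustrative assumptions.

```python
# Monte Carlo sketch of Ioannidis's Corollary 4: analytic flexibility
# inflates the rate of publishable false positives even when no tested
# effect is real. Parameters below are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def share_with_publishable_result(k_analyses, n_per_group=20, n_labs=2000):
    """Fraction of labs getting at least one p < .05 despite a zero true effect."""
    hits = 0
    for _ in range(n_labs):
        for _ in range(k_analyses):
            a = rng.normal(size=n_per_group)  # group 1: no real effect
            b = rng.normal(size=n_per_group)  # group 2: no real effect
            if ttest_ind(a, b).pvalue < 0.05:
                hits += 1  # the "positive" analysis gets written up...
                break      # ...the rest stay in the file drawer
    return hits / n_labs

for k in (1, 3, 5, 10):
    print(f"{k:2d} analyses per lab -> "
          f"{share_with_publishable_result(k):.0%} can report a 'significant' effect")
```

Since the chance of at least one false positive is 1 - 0.95^k, the share of labs with a "publishable" null result climbs from the nominal 5% at one analysis to roughly 40% at ten, which is the sense in which flexibility makes published findings less likely to be true.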
20. Barriers to publishing replications and failed replications
• No glory in publishing a replication
• Few journals publish replications (usually an uphill battle even with those that do)
• The wrath of the original researcher
23. File-drawer fixes
• Journals that don’t reject replications for being uninteresting or unimportant
• Pre-registration of study designs and analysis methods
• Brief reporting of replications
[checkmark grid comparing venues per fix; venue labels were logos and are not recoverable]
28. J-REPS: Journal of Registered Evidence in Psychological Science
Dan Simons
1. Authors plan a replication study
2. They submit an introduction and methods section
3. It is sent to reviewers, including the targeted author
4. The editor decides whether to accept/reject, based on:
   1. Reviewer comments regarding the proposed protocol
   2. Importance of the study, judged by the argument in the introduction, the number of citations of the original, and reviewer comments
5. The intro, method and analysis plan, and reviewer comments are posted on the journal website
6. When the results come in, the authors write a conventional results and discussion section; that, together with the raw data, is posted, yielding the complete publication
29. Journal of Registered Evidence in Psychological Science
• Original author sort-of signed off on it, so can’t complain about / hate the replication authors as much
• Good way to start for a new PhD project, or for anyone planning to build on some already-published results
• Will post the raw data
• Will facilitate and publish meta-analyses as replications accrue
• Reduce the incentive to publish flashy, headline-grabbing but unreliable studies?
30. Comprehensive solution? Open Science
• Funders may eventually demand:
  • Any data collected with their money be made available
  • Experiment software code be posted
• As data come in, put them on the web
• Electronic lab notebook
• Papers written via open collaborative documents on the web
Editor's Notes
- Prep: have movie open; have audio connected.
- I’m a researcher and I love doing research, but I feel trapped. Trapped in a system of science that has some unfortunate aspects, one that pushes us to do silly things. Not just silly things, but things that have very negative consequences for science.
- We need to reform the system to encourage us to do the right thing.
- We researchers, by the choices we make, either perpetuate the absurdities of the system or push it towards reform. Every time we agree to review a paper, accept an invitation to write a book chapter or for a special issue, agree to join the editorial board of a journal, or submit a paper to a journal, we’re steering the future of science.
- Today I’ll talk about two aspects of the problems of science and the future of science. Both are aspects of our publishing system. First is the issue of who owns our papers and making them free to access; second is another big problem in science, the file-drawer problem.
- I’ll post my slides on SlideShare.
- Every talk should have a demo. OK, but how to obey that for today’s talk, which isn’t about any experimental task or visual phenomenon?
- I made a short video that can serve to remind people of how absurd the standard publishing system is; having grown up as researchers in this system, many of us lose sight of this. Can we look at the system again with fresh eyes, to see the absurdity?
- The video first showcases the silliness of this situation and then describes a few of the things we can do as individuals.
- Publisher profit margins are high, well over 30%, in an era when the profit margins on all other types of publishing have rapidly declined.
- The reason is that it’s a lot like a monopoly pricing situation. The journal publishers own the journal titles, and researchers demand that their libraries subscribe to certain journals with no consideration of price, so this has led to massive price inflation.
- There are significant costs involved in publishing a journal, in X, Y, and Z, but they’re much less than what the journals can charge thanks to the monopoly situation.
- Periodically you get university librarians pleading that something be done about this. The Faculty Advisory Council of Harvard University recently wrote that the traditional journal system is unsustainable and asked the faculty to consider resigning from the editorial boards of journals owned by objectionable publishers.
- Usually everybody just ignores the librarians, so the situation continues.
- The way things should be: instead of corporations owning these journal titles, the researchers should own them. Publishing is a simple service that should be done through contracts with the publisher.
- After all, the content of the journal is provided by the researchers. Moreover, the editorial board is made up of researchers; all the decisions in these journals are made by researchers. Almost all of this work is done by the researchers, almost all of it for free or paid for by their universities. And then the universities have to buy it back from these publishers, who take many millions in profits. No: the publishing should be contracted out.
- Right now most of us are perpetuating this terrible system.
- Sometimes people question whether we’re really getting a raw deal from these corporate publishers, even when they charge $10 or $20K for an online subscription to a journal.
- Sometimes people question whether we’re really getting a raw deal from these corporate publishers, even when they charge $10 or $20K for an online subscription to a journal.
- So I did some calculations to compare different journals, corporate subscription journals vs. open-access journals, the ones I’m most familiar with. Based on the total subscription revenue of Elsevier and the number of articles they publish, Elsevier earns over $10K per article published.
- Whereas looking at some open-access journals in my field that charge the author, one pays only $80 or $85 per page to publish, which means less than $1000 for the average article.
- Finally, we know it can be much cheaper than even these non-profit publishers are charging. PeerJ is a new for-profit business started by the former publisher of PLoS ONE, which publishes for a one-time $99 membership (the “$99 per life” on the slide).
- These are difficult times financially for universities.
- Corporate welfare. Monopoly pricing.
- In the 21st century, it’s silly that most university research is not freely available. Almost all the barriers are unnecessary barriers. In the overwhelming majority of cases, researchers and universities have the power to make their research free to download. Instead, we’re still paying publishers for subscriptions to their journals.
- Most of you know journals are a bit like magazines, but with authors who are unpaid. How much do subscriptions to these journals cost?
- AUDIENCE: I remember, back before the internet, I used to subscribe to magazines. It cost maybe $30 or $40 a year, right?
- Right, so there’s been a bit of inflation since then, but 13 THOUSAND DOLLARS? 19 THOUSAND DOLLARS? Some of these journals are just websites, and they don’t even have to pay the writers.
- Now, there are real costs associated with editing a manuscript and marshalling it through the peer-review and web-production processes, but they’re estimated to be $500 to $2K per article, so with the number of subscriptions they get, they’ll make the money back many times over.
- This means most scientists in the world can’t read these journals. (Median subscription $889/yr, mean $1759/yr: still a lot more than free, and it means most people in the world don’t have a hope of reading them.)
SUMMARY:
- We have seen an enormous amount of progress on this issue in the last few years. Three or four years ago many people hadn’t heard of it, and those who had were wary. Whereas in the last few months we have people at the highest levels of government, at least in the UK, saying that open access to scientific research is inevitable.
- And we’ve seen people work hard on tools, taking advantage of new technology, that make running a journal very cheap.
- However, we still have a long way to go, because the bureaucracy still ties us to the journals published by the corporations that charge the most and pocket all the profits. So they either have to find new ways to evaluate us besides the prestige of the journals we publish in, or do an end-run around the journals.
- But what the funders are doing is neither changing the system nor doing an end-run. Instead they’re mainly nibbling around the edges.
- Instead they’re mainly nibbling around the edges. These funders’ motivation, of course, is that the research was done in the public interest, so the public and all the people who serve the public interest should be able to read the research.
- Australia hasn’t done much even in the way of taking these little steps. In an interview, the outgoing ARC chief indicated that she has no plans to make academics publish taxpayer-funded scholarly research in places where anyone can access it for free.
- These steps of requiring open access, but only after 6 months or a year, are measures chosen to preserve the profits of the publishers.
(Here she explains why the ARC has no such plans: http://theconversation.edu.au/open-access-not-as-simple-as-it-sounds-outgoing-arc-boss-6628)
- There’s a broader problem here. We’re stuck in a system that assesses our work in large part by the name of the journal it appears in. This causes everyone to scramble to submit to the journal with the highest impact factor, which reinforces the lead of those journals regardless of their true quality and regardless of the price they charge to universities. This means the legacy journals owned by the corporations will remain on top no matter what.
- Refusing to participate in this absurd rat-race not only reduces one’s chances for promotion and grants, but can even cause one to lose one’s job.
- Here I’m highlighting Queen Mary University of London. It’s at a safe enough distance; I wouldn’t want anyone to think I’m referring to Sydney University. Queen Mary has installed a metrics-based system to identify underperformers they may want to fire. They use research quantity (number of papers), research quality quantified by impact factor, and research income. One of their scientists responded by sending a critical letter to The Lancet in which he described the system thusly (the “slaughter of the talented” quote on the slide).
- DOG: if we are the research dog here, then what we do is being determined by the interests of the publishers. The bureaucrats evaluate us by the journals we publish in, which reinforces the monopoly the corporate publishers have. As long as researchers are evaluated that way, they’ll keep submitting to these closed journals, giving easy money to the publishers. Instead of the publishing tail wagging the research dog, publishing should be a service industry that researcher communities or the government make contracts with.
- So the funder mandates to make the expensive journals OA after 6 months don’t redress the issue that they’re reinforcing the most expensive journals.
- Nowadays anybody with a website can publish things instantaneously, and taxpayer-funded research should therefore be published instantaneously. Rather than doing this willy-nilly on personal or departmental websites that tend to come and go, it can be done systematically with university repositories.
- It’s like a green road. To go this way, we don’t have to commission the government to study everything, build a whole new infrastructure, and wait years for that to happen. To go this way, you just start driving.
- QUT and Harvard have both implemented partial mandates.
- And for the rest of science, we’re seeing new low-cost players that can provide the services we need in this environment. PeerJ does it for $99.
- DOES ANYBODY WANT TO INTERRUPT WHILE THIS PART OF THE TALK IS STILL FRESH IN THE MIND?
- Publishers should be service providers. They shouldn’t be owners of all the intellectual property the government is paying us to create.
(http://openaccess.eprints.org/index.php?/archives/808-RIN-Report-The-Green-Road-to-Open-Access-is-Wide-Open.html)
- Now I want to leave open access and talk about something that at first may seem unrelated: the problem not of how to make published results free, but rather the problem of UNpublished results. Towards the end I’ll describe my vision of how these things are linked.
- I think this is one of the biggest problems in science, because it means we have no real way to know whether to believe a lot of published results.
- Let’s say you read an interesting paper comparing two groups or two conditions that reports the two groups or conditions yield significantly different results. How do you know whether to believe that, or whether it might instead be some kind of Type 1 error? And I’m talking about cases where the data are out there, where people have actually done replications of the phenomenon. They simply haven’t published them; instead they’re in the file drawer.
- Has anybody here ever tried to replicate a published result but failed to replicate it, you couldn’t get it to work? DID YOU PUBLISH THAT? WHY NOT?
- As researchers, many of us get a project started by trying to build on a published result, but then find ourselves unable to replicate the original result. Now, the knowledge that those results are difficult or impossible to replicate is valuable knowledge! Nevertheless, most of us simply drop it.
- JPSP example: Daryl Bem published a paper called “Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect.” He described it as “strong evidence for extrasensory perception, the ability to sense future events.”
- Three teams of researchers independently attempted an exact replication of this study and failed to replicate the ESP result. They got together and wrote a manuscript, a sort of triple replication-failure. They first submitted it to the journal that had published the original study, which rejected it straightaway, saying “We don’t publish replications. Not original enough!” Then they sent it to Science Brevia, a section of the journal Science, which also rejected it without sending it for review, for lack of originality. Then they submitted it to a top psychology journal, Psychological Science, which also rejected it without sending it out to review. Finally, the fourth journal they sent it to sent it out for peer review! One reviewer gave a positive review, reviewer 2 had reservations, and the editor rejected it. They suspected that the original author, Bem, was the reviewer with reservations, and indeed he later confirmed he was.
- In summary, a triple failure-to-replicate of the Bem ESP experiments was rejected by four journals (three for not being original enough, one when Bem was the reviewer). Finally they sent it to PLoS ONE, where it was accepted and came out earlier this year.
(Bem, 2011. Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407-425. Eliot Smith of Indiana University in Bloomington, the JPSP editor who handled the submitted paper, declined to send it out to review: “This journal does not publish replication studies, whether successful or unsuccessful,” he wrote. http://www.newscientist.com/article/dn20447-journal-rejects-studies-contradicting-precognition.html)
- I hope this disturbs you! It certainly was very disturbing to me!
- If you haven’t read anything by John Ioannidis before, I urge you to do so, because he’s got a pretty good argument. He makes a series of calculations; the result depends on how many tests people are doing of possible effects, the probability that any particular tested effect is true, the average size of studies in the field, etc. I’m not going to go through it, but to give you a taste, here are two corollaries, BOTH of which are related to the file-drawer problem.
- Corollary 4: since everybody is trying to get a positive result so they can publish it, people tend to fish around doing lots of analyses.
- Corollary 6: related to the file-drawer problem. This happens in areas where lots of people are testing related things, but understandably only publish the positive findings. So occasionally a team will get a significant result, which often might be a Type 1 error (that is, it happened by chance rather than being a real effect). Then other teams in the area will jump on that and try to build on it. Probably they won’t be able to replicate it, and hopefully they’ll manage to publish that non-replication somehow, in which case the field will swing to some other area when somebody else finds a different significant result.
- So publishing of replications is sorely needed. One of the replies to the Ioannidis article relates to this, and I find it kinda funny.
- I’m in an absurd system of science that pushes me to publish my own Type 1 errors and not correct those of others #publicationBias. It shouldn’t be this way; science should be a model human endeavour.
- Previously no “good” journals published (non-)replications. You don’t get much of a carrot for doing it, but you have the real possibility of being hit with a stick for your efforts. There are no carrots, only the sticks (or blades) of the original authors possibly getting pissed off.
- It’s been suggested by many people that we recently witnessed a case of this online. I encourage you to read it yourself: a seemingly vengeful attack on the authors of a failed replication, with a few ad hominem attacks scattered through it and a number of apparent inaccuracies describing the original study. It attracted a lot of attention online, even leading to the term “getting Barghed” being invented.
- Of course, it’s worse with anonymous review.
- The situation is truly bad, in that in most areas of science there is really no way for general skepticism about a result to get voiced. There are a lot of people out there who doubt one result or another, but unless you can hang out with the right people at the right conferences, you’ll never find out.
-So the PLoS ONE criteria explicitly exclude any criterion of importance. The criteria are essentially that the study be appropriately designed and executed, and that the conclusions be justified.
For that reason, it tends to be much easier to get a replication in than it is with other journals.
--However, it can still be an uphill battle. In some cases the reviewers and editors still hold the replication attempt to a significantly higher standard than the original.
--And it doesn’t address the incentives problem at all.
-Preregistration of the plan of a study has become very common, and is sometimes required, for clinical trials.
This...
-PsychFileDrawer is a website my colleagues and I created to make an end-run around the uphill-battle problem.
-We wanted a place where researchers could quickly upload notices of successful and failed replications of published papers.
-So if you go to the site, you’ll see a list of the articles for which people have posted notices of replications and non-replications.
-Let’s say a fresh-faced PhD student and a supervisor try to replicate a published study.
-If the replication attempt fails, the PhD student is likely to react with, “Wow, that result isn’t real! We have to tell the whole world!”
-An old, cynical PhD supervisor will react with, “It’s not going to be worth the trouble: the additional control experiments, the frustration of rejection from multiple journals, all to publish something that merely casts doubt on a single finding. And then there’s the hostile reaction we may get from the author of the targeted article, who could reject my papers and grants in the future.”
-But if the PhD student is fresh-faced enough, he may maintain his idealism about what science is really all about.
-He may not be able to publish the finding as a paper without a lot of support from his supervisor, but he might be able to write up a short notice for PsychFileDrawer.
-When I tell people about the PsychFileDrawer site, a lot of them say “Great idea!”
-But none of those people actually post anything.
-So if you go to the list of notices, you’ll find only 9 entries, and 3 of those were entered by the creators of the site.
-So PFD is basically a failure.
-The website has been live for 3 or 4 months, and in that time it has gotten some buzz, with many thousands of visitors.
-It’s played a role in some big internet stories.
-But no matter how many people visit the site, no one posts to it.
-Do we just need to wait for cultural change? We know awareness of the file-drawer problem has been steadily increasing, as has awareness of related issues, like the growing number of scientists found to have committed fraud, whose results therefore need replicating. (A toy simulation of the file-drawer dynamic follows these notes.)
-So maybe it’s just a matter of time. On the other hand, maybe we won’t get anywhere without the carrot of journal publications.
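[Editorial aside: a toy simulation, assuming a field testing only null effects and a publish-only-if-significant filter, of the file-drawer dynamic described in these notes. All numbers are made up for illustration.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_labs, n_per_group = 1000, 20  # assumed: 1000 labs, 20 subjects per group

published = 0
for _ in range(n_labs):
    # True effect is zero: both groups come from the same distribution.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        published += 1  # only significant results escape the file drawer

print(f"{published} of {n_labs} null studies look publishable")  # ~50 expected
```

Every one of those “publishable” results is a Type-1 error, and without posted non-replications nothing in the published record says so.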
-New idea: combine the top two approaches to yield the possibly all-important carrot of journal publication.
-It actually helps a little bit with the issue of pissing off the original author, because they have in effect signed off on the replication attempt. Original authors are heavily biased against studies that fail to replicate them, but normally the authors of such a study wouldn’t materialize to the targeted author until they were going around submitting manuscripts saying that they had failed to replicate!
-Here, the “attacking” authors appear at a stage where they look more like disinterested parties.
-It would be great for first-year grad students and undergrad honors students to take on a replication as their first project. I could imagine that becoming a standard first-year project.
-It could change the incentives for researchers who publish flashy studies. Those studies would be highly likely to be replicated in this system. So, if you aren’t confident that your flashy result will replicate, you might be hesitant to publish it in the first place without replicating it yourself first, using adequate power to be sure you’re right (a rough power calculation follows these notes). I could see this change in the incentive structure leading to greater power in studies and fewer published false positives.
-It would also reduce the problem of flashy studies: currently, if you get a cool result like “having sex makes people more generous” or “eating meat makes you more violent”, then even if you think it’s a Type-1 error, most of the incentives are to publish it. If it gets accepted, as it likely would, it would be a very long time, if ever, before anyone published a non-replication, and in the meantime you’d accrue hundreds of citations, get promoted, and one day maybe even be able to buy a house.
-ANY COMMENTS? What do you think, would you submit to a ROIM journal?
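[Editorial aside: a hedged sketch of the power planning alluded to above, using statsmodels. The target effect size (d = 0.4) and the 90% power goal are illustrative assumptions for a “flashy” effect you want to be sure of before publishing.]

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for an independent-samples t-test:
# detect d = 0.4 at alpha = .05 (two-sided) with 90% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.4,
                                          alpha=0.05, power=0.9)
print(round(n_per_group))  # about 133 per group
```

That is far more participants than many flashy studies actually run, which is exactly the incentive shift described above.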
-Funders are going to reward this.
-People chip in, offering the use of reagents.
-It might sound crazy, but it can actually work and lead to scientific solutions that couldn’t otherwise be achieved.
-Todd knew he couldn’t do it by himself; he needed...