Frontiers Collaborative Review
June 21, 2015
Marie Soulière, Ph.D. | Program Manager
Frontiers Collaborative Review
- Back to basics
- How it actually works
- Future engagement and latest developments
How different is it?
Key principles
- Collaborative – unites the authors and reviewers in a direct online dialogue
- Objective – peer review will concentrate on technical rather than subjective concerns
BioMed Central ‘inclusive’ peer review, PLOS One focuses on technical concerns.
- Transparent – reviewers and editor names acknowledged on published papers.
BioMed Central, BMJ, F1000Research - publish names, full reports and pre-publication
discussion. Copernicus - Interactive Public Peer-Review.
- Rigorous – questionnaires for each article type, with a focus on the validity of the research
Rubriq - ScoreCard, Peerage of Science - pre-submission questionnaires
How efficient is it?
[Bar chart: average review time (submission to acceptance) in days, per field at Frontiers; field averages range from 25 days in the fastest field to 105 days in the slowest.]
Data as of June 2015
How efficient is it?
[Histogram: review time analysis, frequency of papers by submission-to-acceptance time in days.]
50% of all papers: 37-92 days
75% of all papers: 21-122 days
Average time from submission to acceptance for all Frontiers journals: 86 days
Median time from submission to acceptance for all Frontiers journals: 52 days
Data as of June 2015
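The averages, medians, and central ranges above are simple distribution statistics over per-article submission-to-acceptance times. A minimal sketch of how such figures can be computed is below; it reads the "50%" and "75%" figures as central percentile ranges (an assumption), and the durations are made-up placeholders, not Frontiers data.

```python
import numpy as np

# Hypothetical per-article review times in days (submission to acceptance).
# The real per-article data behind the slide is not reproduced here.
review_days = np.array([21, 30, 37, 44, 52, 60, 75, 92, 110, 122, 180])

mean_days = review_days.mean()
median_days = np.median(review_days)

# Central 50% of papers: 25th to 75th percentile of the distribution.
p25, p75 = np.percentile(review_days, [25, 75])
# Central 75% of papers: 12.5th to 87.5th percentile.
p12_5, p87_5 = np.percentile(review_days, [12.5, 87.5])

print(f"Average: {mean_days:.0f} days, median: {median_days:.0f} days")
print(f"50% of papers: {p25:.0f}-{p75:.0f} days")
print(f"75% of papers: {p12_5:.0f}-{p87_5:.0f} days")
```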
How efficient is it?
- PLOS One (2013): average 134 days, median* 104 days
- PNAS (as of 2015): average 152 days
- Frontiers (as of 2015): average 86 days, median* 52 days
- Open Biology, Royal Society (as of 2015): average 96 days
- eLife (2014): median* 90 days
- Journal of Applied Mathematics, Hindawi (2013): median* 88 days
Data as of June 2015
How efficient is it?
Data on iterations in the Review Forum (RF):
- Average # comments posted in the RF by participants: 4
- Median # comments posted in the RF by participants: 3
- Average # MS resubmissions: 1.8
- Median # MS resubmissions: 1
Collaborative? Yes. But not too much.
Data as of June 2015
Positive author and reviewer feedback
New Online Review Forum (2014)
Current and future developments
- Plagiarism check for ALL submissions (iThenticate program)
- Conflicts of interest between editors and authors
- Final validation stage (all files received, figure permissions or ethics committee approvals provided, no COI or plagiarism issues left)
- Policy on authorship changes and disputes
Current and future developments
- Affiliation list selection during submission and on the Loop network
- Manuscript length check at submission
- FundRef
- Crossmark
- New Associate Editor tab in the Review Forum
- Article type Review Questionnaires per Program
- Algorithm for Associate and Review Editor invitations (see the illustrative sketch below)
As of June 2015
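The "Smart invitations" slides that follow compare a keyword-based matching period with a semantic matching period for inviting Review Editors. The deck does not describe the actual algorithm, so the sketch below only illustrates the general idea: the editor profiles, the embedding model, and all data are hypothetical stand-ins, not Frontiers' implementation.

```python
# Illustrative sketch only: keyword overlap vs. semantic similarity for ranking
# candidate Review Editors. Profiles, model, and data are hypothetical;
# this is not Frontiers' actual invitation algorithm.
from sentence_transformers import SentenceTransformer, util

editors = {
    "editor_a": "fMRI studies of motor cortex plasticity after stroke",
    "editor_b": "gut microbiome composition in inflammatory bowel disease",
}
manuscript = "Cortical reorganisation measured with functional MRI in stroke rehabilitation"

# Keyword matching: rank candidates by overlap of their word sets (Jaccard similarity).
def keyword_score(text_a, text_b):
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / max(len(a | b), 1)

# Semantic matching: rank candidates by cosine similarity of sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
ms_vec = model.encode(manuscript, convert_to_tensor=True)

for name, profile in editors.items():
    kw = keyword_score(manuscript, profile)
    sem = util.cos_sim(ms_vec, model.encode(profile, convert_to_tensor=True)).item()
    print(f"{name}: keyword overlap {kw:.2f}, semantic similarity {sem:.2f}")
```

In a sketch like this, the keyword score misses editor_a's relevance when the manuscript and profile use different vocabulary, while the embedding-based score still ranks it highly; that is the kind of gap a semantic matching step is meant to close.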
Review Editor Smart invitations: Case study Human Neuroscience
% acceptance of manual invitations sent by month to Review Editors (RE) or external Reviewers (REV):
- REV: Sept 28.6%, Oct 24.3%, Nov 28.1%, Dec 27.7%, Jan 20.6%
- RE: Sept 22.7%, Oct 17.7%, Nov 23.3%, Dec 19.7%, Jan 18.0%
Data as of May 2015
RE Smart invitations
KPI: Decrease the "not expertise" declination rate of automatic invitations sent by month (baseline: # declined invitations)
[Two bar charts, Sep-14 to May-15, comparing the keyword-based and semantic matching periods: monthly "not expertise" declination rates for the Top-10 invitations panel (values between 13.4% and 24.6%) and the Top-20 invitations panel (values between 20.2% and 40.6%).]
Data as of May 2015
RE Smart invitations
KPI: Increase % of Accepted and Interested automatic invitations sent by month (baseline: total invitations sent in the month)
[Two bar charts, Sep-14 to May-15, comparing the keyword-based and semantic matching periods: monthly Accepted and Interested rates for the Top-10 and Top-20 invitations panels, with values between 0% and 6.4%.]
*Max acceptance rate is 20% (2 reviewers reached)
Data as of May 2015
RE Smart invitations
KPI: Decrease % of manuscripts delayed in review assignment (baseline: # manuscripts submitted by month)
[Bar chart, end of Nov 2014 to end of May 2015, keyword-based vs. semantic matching periods: monthly % of manuscripts delayed in review assignment, values between 10.6% and 16.0%.]
Data as of May 2015
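The three KPIs above use different baselines, which is easy to miss: the "not expertise" declination rate is computed against declined invitations only, the Accepted/Interested rate against all invitations sent in the month, and the delay rate against manuscripts submitted in the month. A small sketch of those calculations follows; the record layout and field names are illustrative only, not Frontiers' actual schema.

```python
from collections import namedtuple

# Hypothetical, simplified records; field names are illustrative only.
Invitation = namedtuple("Invitation", "month response reason")  # response: accepted/interested/declined/none
Manuscript = namedtuple("Manuscript", "month delayed_in_assignment")

invitations = [
    Invitation("Jan-15", "declined", "not my expertise"),
    Invitation("Jan-15", "declined", "no time"),
    Invitation("Jan-15", "accepted", None),
    Invitation("Jan-15", "interested", None),
    Invitation("Jan-15", "none", None),
]
manuscripts = [Manuscript("Jan-15", True), Manuscript("Jan-15", False), Manuscript("Jan-15", False)]

month = "Jan-15"
sent = [i for i in invitations if i.month == month]
declined = [i for i in sent if i.response == "declined"]

# KPI 1: "not expertise" declinations, baseline = declined invitations.
not_expertise_rate = sum(i.reason == "not my expertise" for i in declined) / len(declined)

# KPI 2: accepted or interested, baseline = all invitations sent in the month.
accept_interest_rate = sum(i.response in ("accepted", "interested") for i in sent) / len(sent)

# KPI 3: manuscripts delayed in review assignment, baseline = manuscripts submitted that month.
subs = [m for m in manuscripts if m.month == month]
delay_rate = sum(m.delayed_in_assignment for m in subs) / len(subs)

print(f"not-expertise declination rate: {not_expertise_rate:.0%}")
print(f"accepted/interested rate: {accept_interest_rate:.0%}")
print(f"delayed-assignment rate: {delay_rate:.0%}")
```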
Associate Editor assignment: Case study Microbiology
Acceptance rate of the Associate Editor selected by the author at submission, by submission month of the manuscript:
- Jan-15: 28.7%, Feb-15: 21.2%, Mar-15: 33.3%, Apr-15: 28.3%, May-15: 22.8%
Reasons given by the preferred Associate Editor for declining the invitation (% of declines per month):
- Jan-15: No expertise 29%, No time 68%, Others & COI 3%
- Feb-15: No expertise 22%, No time 72%, Others & COI 6%
- Mar-15: No expertise 56%, No time 38%, Others & COI 6%
- Apr-15: No expertise 30%, No time 67%, Others & COI 3%
- May-15: No expertise 29%, No time 50%, Others & COI 21%
Data as of May 2015
Questions?

Editor's Notes

  • #4 eLife (collaborative between reviewers) http://www.biomedcentral.com/authors/authorfaq/medical http://blog.f1000research.com/2014/05/21/what-is-open-peer-review/
  • #5 How efficient is it, and how can we determine this? Average review time per field, compared with some other publishers. The review times provided here per field are for articles accepted for publication; rejected articles are excluded.
  • #6 We are faster, but this is not at the expense of quality; rather, it is because there are fewer email exchanges and less wasted time.
  • #7 Social Science & Medicine, Elsevier (IF 2.5, OA options); Proceedings of the National Academy of Sciences (IF 9.8, traditional); Social Science & Medicine, Elsevier (as of 2015): typically within ~80 days; Journal of Mathematical Physics, AIP (2013), median*: ~135 days (http://www.ams.org/notices/201410/rnoti-p1268.pdf). Sources: https://metarabbit.wordpress.com/2013/06/03/how-long-does-plos-one-take-to-accept-a-paper/ http://royalsocietypublishing.org/publishing-times http://journals.plos.org/plosone/s/journal-information http://openaccesspublishing.org/oa11/article.pdf
  • #8 How many iterations occur in the Review Forum? This includes comments posted by authors, reviewers, and editors (the average number of times the email titled “New comments posted in the review forum” is sent).
  • #9 Another indication of how efficient this process is, is obviously user feedback. Authors survey: 1,600; reviewers survey: 777. The conclusion from this is that we are doing something different, and that people actually like it. We ran these surveys after the launch of our new Review Forum, which happened a bit over a year ago now.
  • #10 The goal of the Review Forum revamp was to improve the functionality. The principle of the peer review itself is the same; we mostly improved the user-friendliness.
  • #13 One of the main concerns that we run into is due to our level of transparency. Since we name reviewers on published papers, we have to be more careful than other publishers about potential COIs.
  • #17 For quite some time we had been focusing on improving the functionality, to ensure that authors, reviewers, and editors would have an easier time navigating the system and would have access to the options they need. In the past year, we then shifted the focus to a more thorough verification of the manuscripts. One of the main concerns that we run into is due to our level of transparency: since we name reviewers on published papers, we have to be more careful than other publishers about potential COIs. CrossMark: the CrossMark identification service from CrossRef sends a signal to researchers that publishers are committed to maintaining their scholarly content. It gives scholars the information they need to verify that they are using the most recent and reliable versions of a document. Readers simply click on the CrossMark logo on PDF or HTML documents, and a status box tells them if the document is current or if updates are available (see http://www.crossref.org/crossmark/). FundRef: a multi-publisher initiative started by CrossRef to provide a standard way of reporting funding for published research. The funding data are available through CrossRef for interested parties to analyze. To see if a PNAS paper has FundRef data, click on the CrossMark logo that appears with the paper and select the Record tab. ORCID: ORCID provides a persistent digital identifier that distinguishes you from every other researcher and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between you and your professional activities, ensuring that your work is recognized. PNAS encourages authors to use their ORCID identifier when submitting papers. ORCID registration takes 30 seconds. For more information or to register, visit http://orcid.org/.
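The FundRef and CrossMark metadata mentioned above are exposed through CrossRef. Below is a minimal sketch of pulling one article's CrossRef record and inspecting its funding and update fields; the DOI is a made-up placeholder, and either field may simply be absent for a given record.

```python
import requests

# Illustrative only: query the public Crossref REST API for one article's metadata
# and look at funding (FundRef) and update (CrossMark-related) fields.
doi = "10.3389/fnhum.2015.00001"  # hypothetical placeholder DOI
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
record = resp.json()["message"]

# FundRef data, when deposited, appears as a list of funders with optional award numbers.
for funder in record.get("funder", []):
    print("Funder:", funder.get("name"), funder.get("award", []))

# "update-to" is present when this record is registered as an update (e.g. a correction)
# to another document; readers would normally see this via the CrossMark button instead.
for update in record.get("update-to", []):
    print("Updates:", update.get("DOI"), update.get("type"))
```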