Note here the disconnect between traditional publication and the daily efforts of investigators, as well as the level of awareness of data repositories and open-access/single-figure journals.
3. Age:
# Answer Response %
1 less than 22 years 1 1%
2 22-29 years 27 39%
3 30-49 years 41 59%
4 50-64 years 0 0%
5 older than 64 years 0 0%
Total 69 100%
4. Current Position:
# Answer Response %
1 Undergraduate 0 0%
2 Grad Student 29 42%
3 Postdoc 32 46%
4 Investigator 8 12%
Total 69 100%
5. When was the last time you published a paper?
# Answer Response %
1 less than 1 year ago 24 40%
2 1-2 years ago 18 30%
3 more than 2 years ago 18 30%
Total 60 100%
6. On average, how often do you publish?
# Answer Response %
1 less than 1 paper per year 41 68%
2 1-2 papers per year 16 27%
3 3-6 papers per year 1 2%
4 more than 6 papers per year 2 3%
Total 60 100%
7. Approximately how many of your papers are first-author (co-first-author) papers?
# Answer Response %
1 less than 25% 29 48%
2 25-50% 18 30%
3 50-75% 5 8%
4 more than 75% 8 13%
Total 60 100%
8. How much of the data you have generated was NOT used in a final publication?
# Answer Response %
1 less than 25% 12 20%
2 25-50% 20 33%
3 50-75% 11 18%
4 more than 75% 17 28%
Total 60 100%
9. Approximately how much of the data you generated was used to confirm previous findings?
# Answer Response %
1 less than 25% 31 52%
2 25-50% 20 33%
3 50-75% 4 7%
4 more than 75% 5 8%
Total 60 100%
10. Have you ever published in an online open-access publishing platform?
# Answer Response %
1 Yes 25 42%
2 No 35 58%
Total 60 100%
11. Which of the online-only journals are you familiar with?
16. What other tools do you think Scimpact should
include to improve functionality and appeal?
autolinks to peer-reviewed pubs if the data is ultimately integrated into such a publication
There must be a reviewer for initial screening of all the data
twitter account, links to key words or science themed hashtags
Create a "paper" for me based on my submissions
Some sort of tracking and/or rating metric(s) to validate how the submitted data is being accessed and received by the scientific
community
Ensure that if papers replicate experimental designs posted on the database, the original experimenter gets credit. Such a system
would be akin to patenting and intellectual property.
Peer review - post-publication peer review will help in wider acceptance of the data as well as establishment of the website as
worth spending time on, among so many peer-reviewed journals that a scientist has to keep tabs on.
Another idea would be to link a submitted database to an existing publication, allowing the scientist to post all data related to a
study. Often, there are negative results that don't contribute to a final finding, but the process of finding these dead ends should
be available to the community so that it isn't endlessly replicated.
A third suggestion is a methods only section. Much of the unpublished data are about methods development and validation.
Something like JoVE, but linked to a submission.
Anonymous submissions
Searchable data
Ability to rate data and search based on ratings
I think people will be hesitant to publish things on Scimpact, as it could detract from their ability to later publish those findings in a
more traditional journal. I would possibly use it for negative results once a project is finished, but I wouldn't want to share new
results immediately, as that would put me at risk of being scooped.
Ability to somehow determine whether other people's data are properly controlled and comparable experiments. (Not exactly
sure how you would do this!) But otherwise, there might just be a lot of scientific junk to search through.
17. How is this going to affect the career of an academic, which is still based on publishing full stories in journals? It may be better to
start with this approach in more collaborative environments, such as industry or government research. When it becomes more widely
recognized and valued (like Wikipedia), it'll naturally draw in academics. That being said, Wikipedia users don't have to pay to add
content.
Improve web searchability. When I googled "scimpact" (which is what I would do if I heard of the site through word-of-mouth) the top
results had to do with a social change website.
The website is also very new at the moment, and I would hesitate to put my unpublished data there (especially if I have to pay for
each submission/additional replicate). Are you planning to charge for each submission, or have annual memberships at the
PI/institution level?
I'd also be interested in seeing a page on how Scimpact differentiates itself from bioRxiv (and whether the databases can be cross-referenced, in case I wanted to pool reproducibility data).
That being said, I think this platform is a great idea and would love to see it come to fruition!
ways to filter experiments by # of submissions, model organisms, disease models, proteins, molecules etc.
chats, forums, or discussions for people to communicate and ask questions
Some sort of rating of each publisher/experimenter, to know how reliable their data are. Plenty of times when someone is new to an
experiment or technique, it is noisy or not reproducible at the beginning. I would not want to use an unreliable person's results as a
reference or resource.
A tool to explain the idea to your PI, who may not be keen on leaking half-baked ideas to competitors: the experiments you're
doing can provide intel on which direction you're moving.
I'm convinced the Scimpact model is where we need to be heading, but I hope that others higher up on the chain see it the same way.