Altmetrics: painting a broader picture of impact


Presentation for Academic Publishing in Europe 9 (APE2014)


Statistics

Views
  Total views: 2,021
  Views on SlideShare: 1,404
  Embed views: 617

Actions
  Likes: 10
  Downloads: 13
  Comments: 7

8 embeds (617 views)
  http://thinklinks.wordpress.com: 575
  https://twitter.com: 17
  https://thinklinks.wordpress.com: 15
  http://feedly.com: 6
  http://digg.com: 1
  http://www.linkedin.com: 1
  http://webcache.googleusercontent.com: 1
  http://translate.googleusercontent.com: 1


Upload Details

Uploaded as Microsoft PowerPoint

Usage Rights

CC Attribution License

Report content

Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel
  • Full Name Full Name Comment goes here.
    Are you sure you want to
    Your message goes here
    Processing…
Comments

  • @DavidColquhoun That's weird... it disappeared. I'm reposting:

    David Colquhoun said:
    I'm sorry, but I think we are a long way apart. You say 'you've used them as evidence of scholarly impact assessment yourself'. The whole point of my last comment was that my blog is not scholarly. The blog is just a hobby. If you want to see my real science, look here: http://www.onemol.org.uk/?page_id=10 (our 1982 paper in Proc Roy Soc B was 59 pages with 400-odd equations, not exactly pop stuff). You could also look at my Google Scholar entry, but that would reveal a lot of citations for some very trivial papers (though you would have to read the papers, and know the field, to tell which).

    Of course I agree that publishing will soon all be web-based, but that is a totally different question. It's to do with cost, open access and escaping the grip of glamour journals. It has nothing whatsoever to do with how you assess the merit of people's work. I repeat: I think that metrics in general, and altmetrics in particular, alter behaviour, provide perverse incentives, and are a corrupting influence. I think it might help if you consider the examples that we gave in http://www.dcscience.net/?p=6369. If you think they are atypical, please produce some evidence to that effect.
  • Hi David,

    OK, so we're not that far apart.

    For me, Google stats and other just-plain-hit measures are part of altmetrics. And you've used them as evidence of scholarly impact assessment yourself.

    Even 3.5k visitors to a specialized blog would be interesting. This is why it's important to use such numbers in conjunction with qualitative approaches.

    I would disagree with you that blogs can't be part of doing science. There are many specialist blogs that are extremely useful for communicating science. One of my favorites is http://lambda-the-ultimate.org, a programming languages blog. I don't think it's a waste of time to write up what you're doing on a blog.

    In general, I think things are moving towards science communication using the web. Paul Krugman has a nice reflection on this with respect to economics (http://krugman.blogs.nytimes.com/2013/12/17/the-facebooking-of-economics/).

    Finally, by 'being careful' what I mean is that using metrics with social systems is tricky, as you've pointed out. For example, it's not good to just count numbers. An interesting take on this is here: http://www.wired.com/business/2014/01/quants-dont-know-everything/
  • Well, I agree that hits on one's blog are interesting, from a narcissistic point of view. I get some gratification that (according to Statcounter) my blog has been viewed 3.5 million times, and have even cited that number when UCL decided to submit the blog as part of UCL's submission to the Research Excellence Framework as evidence of 'impact'. But my blog has next to nothing to do with my science (though it does rely to some extent on statistical knowledge gained in the course of my real work).

    If I were to blog about matrix algebra, stochastic processes and maximum likelihood fitting of single-molecule records, I'd be lucky to get 3.5k readers, never mind 3.5 million. The blog is a fun hobby for my 'retirement', but I could not possibly have found time to do it when I was doing real science. The fact that it gets so many readers tells you nothing whatsoever about my science.

    In any case, there is no need for bibliometricians to count blog hits. They come free with Google Analytics, Statcounter, etc.

    Finally, you say one should be 'careful in all assessment procedures'. I've often heard that said, but I have no idea what 'being careful' means in practice. If you haven't got good data about what the numbers measure, how can you be 'careful' about how you use them?
  • Hmm... usefulness doesn't always imply prediction. For example, I find it useful to know how many people are reading my blog, downloading my slides or software, or using platforms I've built, and who they are. I think telling people about it isn't a bad thing.

    It would be odd to promote anybody just based on the number of hits on a blog or the fact that they wrote a paper with a catchy title. But if part of my scholarship is outreach to a certain community, and I can demonstrate that with the help of statistics about my blog readership, I think that's useful.

    I actually think bibliometricians (which, btw, I'm not) are concerned with the misuse of these indicators. See again [1]. Also check out [2] for some current thinking on the efficacy of these measures.

    Finally, +1 for being careful in all assessment procedures. I think I emphasize that throughout the talks I've given on this topic.

    [1] http://www.slideshare.net/paulwouters1/issi2013-wg-pw
    [2] https://openaccess.leidenuniv.nl/bitstream/handle/1887/20468/CWTS-WP-2013-002.pdf?sequence=1
  • Perhaps I should have said experimental scientists: those who are adding real knowledge of the natural world. I don't think that bibliometrics will qualify as a science until such time as it is shown that your measures predict something useful. That's not the case at the moment. There is an ever-increasing number of different measures and next-to-no evidence that any of them predict anything useful, like how to pick, or promote, a candidate. It really is thoroughly irresponsible to promote methods when their usefulness simply isn't known.

    What you haven't done in your reply is to respond to the particular papers that we picked out for analysis. They suggest strongly to me that, if anything, altmetrics scores will be highest for trivial papers with buzzwords in the titles that probably haven't been read by those who promote them. That is a very serious matter, because it encourages practices which I would regard as verging on corruption.

    A constructive approach would be to do research on the corrupting influence of metrics. I presume that bibliometricians are not enthusiastic about doing that because it might put them out of business (much like homeopaths). There is a real risk that use of bibliometrics will result in the best young scientists being fired. If the methods in use at Imperial College had been used to evaluate Bert Sakmann (Nobel Prize 1991) he would probably have been fired before he'd had a chance, as I showed in http://www.dcscience.net/?p=182
Notes on slides
  • This is a pain to calculate!
  • We believe metrics are it, but policy makers don't necessarily. What they want is evidence.
  • There’s more and more “stuff” – where are we
  • From the Assistant Provost for Faculty Appointments and Information at Harvard University, where she manages the review of faculty appointments University-wide (http://elife.elifesciences.org/content/2/e00452)
  • This will happen closer than 20 years
  • Clear correlation between F1000 recommendations and citations (an illustrative rank-correlation sketch follows these notes)
  • Side note, from the slides: "There is no reason to condemn the incorrectly used Impact Factor and h-index. They can provide supplementary information if they are used in combination with qualitative methods, and are not used as the only decision criterion." Examples:
    • Good practice (h-index as supporting argument): "The exceptionally high h-index of the applicant confirms his/her international standing attested to by our experts."
    • Questionable use (h-index as decision criterion): "We are inclined to support this scientist because his/her h-index distinctly exceeds that of all other applicants."
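The note above on F1000 recommendations and citations refers to the kind of quantitative check common in the altmetrics literature. As an illustrative sketch only (the counts below are hypothetical, not data from the talk), a rank correlation such as Spearman's is a usual choice for heavily skewed count data:

    # Illustrative only: hypothetical per-paper counts; a real study would match
    # F1000 recommendation records against a citation index such as Scopus.
    from scipy.stats import spearmanr

    f1000_recommendations = [0, 1, 0, 3, 2, 0, 5, 1, 0, 4]
    citations = [2, 10, 1, 35, 18, 0, 60, 12, 3, 40]

    # Spearman's rank correlation is robust to the skew typical of citation data.
    rho, p = spearmanr(f1000_recommendations, citations)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")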

Presentation Transcript

  • Altmetrics: building a broader picture of impact Paul Groth @pgroth Web & Media Group Department of Computer Science The Network Institute VU University Amsterdam http://www.few.vu.nl/~pgroth #APE2014
  • Research Project Grants Applications, awards, and success rates NIH Data Book – (http://report.nih.gov/ndb/index.aspx) Data provided by the Division of Information Services, Reporting Branch
  • "Outside letters basically trump everything," says Robert Simoni, chairman of the biology department at Stanford University in California. Metrics: Do metrics matter? Nature 2010 http://doi.org/10.1038/465860a
  • "Imagine how the academic appointment process might change if search and review committees had access (within an appropriately tagged or linked online CV, for example, or via the ORCID system) to information about the specific contributions made by a candidate to each of his/her works, including contributions that might not otherwise have qualified for 'authorship' status?" Point of view: Faculty appointments and the record of scholarship. Amy Brand. http://dx.doi.org/10.7554/eLife.00452
  • Point of view: Faculty appointments and the record of scholarship. Amy Brand. http://dx.doi.org/10.7554/eLife.00452 Opportunities:
    • Individuals and institutions need better tools for curating and networking their own record of scholarship
    • Institutions need more information about scholarly contribution
    • ALMs that reliably differentiate sources of input (general; academic; expert; etc.) would be more useful
    Slide 3: http://article-level-metrics.plos.org/files/2013/10/Brand.pptx
  • ENTER ALTMETRICS
  • Altmetrics is the study and use of scholarly impact measures based on activity in online tools and environments. http://doi.org/10.1371/journal.pone.0048753 (A minimal API sketch follows the transcript.)
  • http://blog.peerj.com/post/65345738206/changing-the-currency-of-science-to-solve-our-greatest
  • Thanks Ian Mulvany
  • ALTMETRICS AS MEASURES OF IMPACT?
  • "It took approximately a generation (20 years) for bibliographic citation analysis to achieve acceptability as a measure of academic impact." (Vaughan and Shaw, 2003)
  • The Research is Happening: (Birkholz et al. 2013) (Fausto et al. 2012) http://jasonpriem.org/self-archived/5uni-poster.png http://ploscollections.org/altmetrics http://asis.org/Bulletin/Apr-13/AprMay13_Piwowar.html
  • http://www.cwts.nl/pdf/CWTS-WP-2013-003.pdf
  • http://www.slideshare.net/paulwouters1/issi2013-wg-pw
  • Bottom Line: use altmetrics as evidence in a larger story
  • Examples
  • Published AND discussed AND cited
  • Summary: a broader view
    • Different research artifacts – papers, preprints, slides, videos, code, data
    • Different measures – usage, mentions, views, sharing
    • Different stories – progress so far, workshop impact, outreach
  • Conclusion
    • Altmetrics is still developing – but useful today
    • Allows us to build a broader picture of impact – using a variety of artifacts & measures
    • Final thought: research artifacts exist in a network, and we're starting to connect it
  • Thanks Collaborators: Peter van den Besselaar, Julie Birkholz, Frank van Harmelen, Shenghui Wang, Rinke Hoekstra, Thomas Gurney, Mike Taylor, Anita de Waard, Jason Priem, Dario Taraborelli, Cameron Neylon, Ian Mulvany
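To make the definition slide concrete ("the study and use of scholarly impact measures based on activity in online tools and environments"), here is a minimal sketch of pulling counts for one paper from the free Altmetric.com v1 API. The endpoint pattern is public, but the response field names below are assumptions recalled from its documentation and may differ, so treat this as a sketch rather than a reference client:

    import requests

    def fetch_altmetrics(doi):
        """Fetch altmetric counts for a DOI from the free Altmetric v1 API.

        Returns None when Altmetric has no recorded activity for the DOI (HTTP 404).
        """
        resp = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=10)
        if resp.status_code == 404:
            return None
        resp.raise_for_status()
        return resp.json()

    # Example: the paper cited on the definition slide above.
    data = fetch_altmetrics("10.1371/journal.pone.0048753")
    if data:
        # Field names are assumptions; inspect data.keys() for what is actually returned.
        print("Altmetric score:", data.get("score"))
        print("Tweeters:", data.get("cited_by_tweeters_count"))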