Niso Article Level Metrics Presentation For Online 2
 

    Presentation Transcript

    • Committed to making the world’s scientific and medical literature a public resource
      Article-Level Metrics (at PLoS & beyond)
      Peter Binfield, Managing Editor of PLoS ONE, pbinfield@plos.org
      • Article-Level Metrics (at PLoS & beyond)
      • These are the slides and audio from a webinar organised by NISO on May 13th, 2009. The topic of the webinar was “New Applications of Usage Data”.
      • This segment (the second of three) was given by Peter Binfield, Managing Editor of PLoS ONE
      • More information (as well as details of the other presentations) can be found at the NISO site at: http://www.niso.org/news/events/2009/usage09/
      • Presentation Outline
      • Brief overview of PLoS
      • What is the problem?
      • Current assessment methods
      • Possible assessment methods
      • Article-Level Metrics at PLoS and beyond
      • (also, see the NISO resource page for relevant links)
      • The Public Library of Science (PLoS)
      • Largest Not for Profit, Open Access Publisher
      • Based in San Francisco and Cambridge, UK
      • Publisher of 7 journals
        • PLoS Biology, PLoS Medicine, PLoS Pathogens, PLoS Computational Biology, PLoS NTDs, PLoS Genetics, PLoS ONE
      • Largest journal is PLoS ONE
        • Which is almost doubling in volume each year
      • What is the Problem?
      • As a consumer of research output:
      • How do you measure the ‘worth’ or the ‘impact’ of research output
        • typically, but not exclusively, a journal article
      • and how do you filter what you read?
        • You can’t read everything, so how do you filter to find the ‘best’ material for your interests?
      • Aside: ‘impact’ is a loosely used term
        • We could use words like ‘utility’; ‘value’; ‘relevance’; ‘interest (to me)’; ‘Return on Investment’, etc.
        • For this presentation I will just use the word ‘impact’
      • What is the Problem?
      • Again, as a consumer of research output:
      • At what level of granularity do you want to measure ‘impact’ and why are you measuring it?
        • Journals - to know which titles to subscribe to?
        • Research groups / institutions - to know who to fund?
        • Individual researchers - to know who to promote?
        • Individual articles - to know what to read?
      • and how do you compare ‘impact’ across different journals / disciplines / sub-disciplines / publishers?
      • What is the Problem?
      • Given these inherent complexities, how is ‘impact’ currently measured?
        • Citation based measures (mostly)
        • Usage based measures (occasionally)
        • Anecdotal / personal evaluations (often)
      • Current Assessment Methods
      • (quick overview)
      • Mostly journal-based measures
        • the Impact Factor
        • COUNTER usage reports
      • Some article-level or ‘person’-level measures
        • the h-index
        • Article level citation information
        • Article level usage data
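    For reference, the h-index mentioned above has a simple definition: a researcher has index h if h of their papers have each received at least h citations. A minimal sketch in Python (the citation counts below are invented purely for illustration):

      def h_index(citation_counts):
          """Largest h such that at least h papers have >= h citations each."""
          counts = sorted(citation_counts, reverse=True)
          h = 0
          for rank, cites in enumerate(counts, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      # Made-up citation counts for one researcher's papers:
      print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with >= 3 citations each)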
      • Current Assessment Methods
      • Article level usage data
        • What does it mean to have ‘a download’? Can a download be equated to ‘impact’? If you downloaded it, did you read it?
        • Easy to game?
        • What is a high vs. low number in any given field, or over any given time frame?
        • An apparent fear by data owners (publishers) that publicizing this data could be dangerous
        • Very little provision of this data – only the ‘Journal of Vision’ and the ‘Frontiers’ series currently provide hard data. Some other journals provide summaries (e.g. ‘top downloaded articles’ from HighWire titles) but no actual numbers are provided
    • Journal of Vision Screenshot
    • Frontiers in Neuroscience Screenshot
    • NEJM Screenshot
      • Possible assessment methods
      • There are many possible ways to assess the ‘impact’ of any piece of research (or any researcher):
        • Citations; Usage; Media Coverage; Blog Coverage; Discussion Thread activity; Expert Ratings; Social Bookmarking activity; whether or not public policy was affected; where the research was done; who is citing it; who is reading it; etc.
      • Current technology and Web 2.0 services now make it possible to automate the collection of many of these metrics
      • Possible assessment methods
      • If these types of measures could be collated and evaluated correctly then they could be more powerful (and useful) than citations or usage metrics alone
      • This is what PLoS is attempting to do with our Article-Level Metrics program: to implement new approaches to the evaluation and filtering of journal articles.
      • Article-Level Metrics (at PLoS)
      • Important to note: Article-Level Metrics at PLoS are not just about citations and usage. The concept refers to a whole range of additional measures which might give a window on ‘impact’
      • We are providing metrics at the article-level, for every article, in every one of our titles.
      • We are the first publisher to provide this range of data, but we hope that others will follow
      • Article-Level Metrics (at PLoS)
      • Rollout Plan:
      • We are starting by acquiring and presenting the relevant data.
      • We have a preference for data that is not ‘owned’ by third parties, and we are seeking ‘open’ APIs onto the data (so that anyone else can verify or replicate these measures themselves).
      • ‘Phase 2’ will provide data analysis, filtering and navigation tools using this data.
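    The ‘open’ APIs mentioned in the rollout plan would let anyone re-run a query and check a published number for themselves. A purely hypothetical sketch of such a call in Python: the endpoint, parameter and response shape below are assumptions for illustration, not an API described in this presentation.

      import json
      import urllib.parse
      import urllib.request

      def fetch_metrics(doi, base_url="https://example.org/alm/api"):
          """Fetch per-source metric counts for one article, identified by its DOI."""
          url = f"{base_url}?doi={urllib.parse.quote(doi)}"
          with urllib.request.urlopen(url) as response:
              return json.load(response)

      # Anyone could repeat the same query to verify or replicate a published figure, e.g.
      # fetch_metrics("10.1371/journal.pone.0000000")  # placeholder DOI
      # might return {"scopus": 12, "pubmed_central": 9, "citeulike": 4}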
      • Article-Level Metrics (at PLoS)
      • In March 2009, we launched with:
      • Number of Citations
        • PubMed Central and Scopus
      • Amount of Blog coverage
        • Postgenomic, Nature Blogs and Bloglines
      • Number of Social Bookmarks
        • CiteULike and Connotea
      • Commenting, notation and ‘star ratings’
      • Each measure is provided as a simple number, but with a link to a landing page at the third party
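    A minimal sketch of how “a simple number, but with a link to a landing page at the third party” might be represented and rendered; the field names, counts and URLs below are illustrative assumptions, not PLoS’s actual schema.

      from dataclasses import dataclass

      @dataclass
      class MetricSource:
          source: str        # e.g. "Scopus", "CiteULike", "Postgenomic"
          count: int         # the simple number shown on the article page
          landing_page: str  # link to the third party's own detail page

      article_metrics = [
          MetricSource("Scopus", 12, "https://www.scopus.com/..."),      # illustrative
          MetricSource("CiteULike", 4, "http://www.citeulike.org/..."),  # illustrative
      ]

      for m in article_metrics:
          # Rendered on the article page as the count, linked to the landing page.
          print(f'{m.source}: <a href="{m.landing_page}">{m.count}</a>')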
    • Current View (March ‘09)
    • Current View (March ‘09)
    • CiteULike Landing Page
    • Current View (March ‘09)
    • Postgenomic Landing Page
      • Article-Level Metrics (at PLoS)
      • Future Expansion of Current Provision
        • Add ‘behavioural’ or ‘reference management’ tools (e.g. Mendeley)
        • Expand the number of sources for each data type (e.g. more blog aggregators, more citation sources)
        • Add expert ratings (e.g. F1000 or similar)
        • Add news coverage (hopefully)
        • Add usage data
      • Article-Level Metrics (at PLoS)
      • Adding Usage Data
      • In mid-2009, we will add usage data to every PLoS article, showing:
        • HTML Page Views
        • PDF Downloads
        • XML Downloads
      • To be displayed numerically and graphically, including historical data, and in the context of other articles within the journal.
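    A sketch of how per-article usage history might be structured so that it can be shown both numerically and graphically over time; the record layout and the numbers are assumptions for illustration, not the actual PLoS implementation.

      from dataclasses import dataclass

      @dataclass
      class MonthlyUsage:
          month: str          # e.g. "2009-03"
          html_views: int
          pdf_downloads: int
          xml_downloads: int

          @property
          def total(self):
              return self.html_views + self.pdf_downloads + self.xml_downloads

      history = [
          MonthlyUsage("2009-03", 410, 120, 6),  # invented numbers
          MonthlyUsage("2009-04", 280, 95, 4),
      ]

      cumulative = 0
      for record in history:
          cumulative += record.total
          print(record.month, record.total, cumulative)  # points for a usage chart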
      • Article-Level Metrics (at PLoS)
      • Future Developments
      • Provide an open data set
      • COUNTER certification of the usage data
      • Inclusion of usage data from third parties (e.g. PubMed Central)
      • De-duplication of the data sources
      • Provision of tools to act on all these metrics for:
        • Discovery, Filtering and Comparison purposes
      • Adherence to any standards that may evolve
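    Two of the items above, de-duplication of the data sources and tools for filtering and comparison, lend themselves to a short illustration. The sketch below is an assumption about how this might work, not a description of PLoS’s implementation: citing-article identifiers reported by two overlapping sources are merged, and the resulting per-article counts can then drive a simple sort or filter.

      def deduplicated_citation_count(per_source_citing_dois):
          """Count unique citing articles across all sources (union by identifier)."""
          unique = set()
          for dois in per_source_citing_dois.values():
              unique.update(dois)
          return len(unique)

      citations = {
          "scopus":         {"10.1000/a", "10.1000/b", "10.1000/c"},  # made-up DOIs
          "pubmed_central": {"10.1000/b", "10.1000/c", "10.1000/d"},
      }
      print(deduplicated_citation_count(citations))  # -> 4, not 3 + 3

      # The de-duplicated numbers could then feed discovery/filtering tools,
      # e.g. sorting a reading list by a chosen metric:
      articles = [("article-1", 4), ("article-2", 11), ("article-3", 7)]
      for doi, count in sorted(articles, key=lambda a: a[1], reverse=True):
          print(doi, count)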
      • Article-Level Metrics (beyond PLoS)
      • Article-Level Metrics could be very significant for the academic community on a number of levels.
      • We do not want this to be a PLoS-only initiative. By demonstrating it, we hope other publishers will follow
      • We hope that acquisition and presentation standards will evolve or be created
      • We hope that by creating an open data set of standardized (cross journal) results, academia will then develop new and powerful ways to combine and evaluate this data
      • There is likely to be a range of ways to evaluate impact, for a range of different fields and different purposes, but the important thing will be consistent, reliable and comparable data sets
    • Article-Level Metrics
      We could be at the start of a major new development in academic publishing.
      To misquote from Jerry Maguire: “Show Me The Metrics!”
      Peter Binfield, Managing Editor, PLoS ONE, [email_address]