Assessing Digital Output in New Ways


Mike Taylor, Research Specialist, Elsevier Labs
Presented during NISO/BISG 8th Annual Changing Standards Landscape on June 27, 2014


  1. Assessing Digital Output in New Ways
     Mike Taylor, Research Specialist
  2. Looking at emerging alternative metrics for measuring author impact and usage data, this presentation focuses on methods for capturing more granular data about researchers and topics, including new assessment tools and usage data sets, and on understanding how these affect our overall picture of author contributions.
  3. Some words on terms
     • Alternative metrics, altmetrics, article data, usage data, assessments, metrics, impact, understanding, attention, reach…
     • If this seems confusing…
     • Altmetrics is at the big-bang stage: this universe has not yet cooled down and coalesced
  4. What is the data?
     • A set of altmetric data is about a common document and represents usage, recommendations, shares, and re-use
     • Identified by DOI, URL, or other ID
     • It does not show common intent: a tweet is not the same as a Mendeley share, which is not the same as a Dryad data download, which is not the same as mass-media coverage or a blog post
     • Although I talk about journal articles, this data can be derived from any digital output
     • Books, conference presentations, policy papers, patents
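The slide's description of altmetric data as events about a common document, identified by DOI but carrying different intents, can be sketched as a simple record type. The field names and sample values below are illustrative assumptions, not any provider's actual schema.

```python
from dataclasses import dataclass

# A minimal sketch of one altmetric event (field names are assumptions
# for illustration, not a real provider's data model).
@dataclass(frozen=True)
class AltmetricEvent:
    doi: str          # identifier of the common document (DOI, URL, or other ID)
    source: str       # e.g. "twitter", "mendeley", "dryad"
    event_type: str   # e.g. "tweet", "save", "download"

events = [
    AltmetricEvent("10.1000/example.1", "twitter", "tweet"),
    AltmetricEvent("10.1000/example.1", "mendeley", "save"),
    AltmetricEvent("10.1000/example.1", "dryad", "download"),
]

# All three events concern the same document, but they do not share a
# common intent: a tweet is not a Mendeley save is not a data download.
distinct_intents = {e.event_type for e in events}
print(len(distinct_intents))  # 3
```

Keeping the event type explicit, rather than collapsing everything into one count, is what later allows the data to be grouped into meaningful clusters.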
  5. What are metrics?
     Metrics are an interpretive layer derived from this data:
     • Usage
     • Attention
     • Engagement
     • Scholarly impact
     • Social impact
  6. Various providers…
     • Plum Analytics
     • PLOS / PLOS code
     • GrowKudos
     • Altmetrics is not …
     • Each has strengths and weaknesses; there is no canonical source
  7. Bringing together sources…
     • Altmetrics isn't one thing, so attempting to express it as one thing will fail
     • Elsevier (and others) favour intelligent clusters of data: social activity, mass media, scholarly activity, scholarly comment, re-use
     • Elsevier believes that more research is needed, and that the best indicators of scholarly impact are scholarly activity and scholarly comment
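The clustering idea above can be sketched as a lookup from raw source to one of the clusters the slide names. The source-to-cluster mapping here is a hypothetical assignment for illustration, not Elsevier's actual scheme.

```python
from collections import Counter

# Hypothetical mapping from event source to the clusters named in the
# talk; the source names and assignments are illustrative assumptions.
CLUSTERS = {
    "twitter": "social activity",
    "facebook": "social activity",
    "news": "mass media",
    "mendeley": "scholarly activity",
    "citeulike": "scholarly activity",
    "blog": "scholarly comment",
    "review": "scholarly comment",
    "dataset_download": "re-use",
}

def cluster_counts(per_source_counts):
    """Roll raw per-source counts up into cluster totals."""
    totals = Counter()
    for source, count in per_source_counts.items():
        totals[CLUSTERS.get(source, "other")] += count
    return totals

counts = cluster_counts({"twitter": 40, "news": 2, "mendeley": 15, "blog": 3})
print(counts["social activity"], counts["scholarly activity"])  # 40 15
```

Reporting five cluster totals instead of one composite score keeps tweets from being conflated with, say, Mendeley saves, which is exactly the failure mode the slide warns against.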
  8. Different data have different characteristics
     Example from 13,500 papers:
     • Highly tweeted stories focus on policy, gender, funding, and 'contentious science' issues, mostly summaries on Nature News
     • Highly shared papers on Mendeley are hard-core original research
     • Different platforms have discipline bias
     • Scholarly blogs both lead interest and respond to it
     • Data from …
  9. The importance of openness
     • Communities have to agree to agree
     • Innovation and co-operation
     • When deriving metrics from data, there needs to be broad consensus that what we say we're measuring is what is actually being measured
     • We need to reflect and adapt
  10. Gaming / cheating
     • If people take this data seriously, will they cheat?
     • E.g. the Brazilian citation scandal, and the strategies people use to inflate journal Impact Factors
     • Expertise exists in detecting fraudulent downloads (e.g. at SSRN) and self-tweeting: when does 'normal' become corrupt?
     • It is one thing to buy 1,000 tweets, another to buy 10 blogs or mass-media coverage
     • Do those Twitter accounts have scholarly followers?
     • Pattern analysis, usage analysis, network analysis
     • Public data = public analysis = public response
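The pattern analysis mentioned on the slide can be illustrated with a toy heuristic: flag accounts that produce an outsized share of one paper's tweets. The function name and both thresholds below are assumptions for the sake of the sketch, not a real detection system.

```python
from collections import Counter

def flag_suspicious_accounts(tweet_authors, max_share=0.5, min_total=20):
    """Flag accounts responsible for an outsized share of a paper's tweets.

    A toy pattern-analysis heuristic (thresholds are illustrative): if one
    account produced more than `max_share` of at least `min_total` tweets,
    that concentration is worth a closer look, e.g. for self-tweeting or
    purchased activity.
    """
    by_account = Counter(tweet_authors)
    total = sum(by_account.values())
    if total < min_total:
        return []  # too little activity to judge
    return [a for a, n in by_account.items() if n / total > max_share]

# 30 tweets, 18 of them from a single account: the concentration, not the
# raw count, is what stands out.
tweets = ["@bot_account"] * 18 + [f"@user{i}" for i in range(12)]
print(flag_suspicious_accounts(tweets))  # ['@bot_account']
```

Real systems would extend this with usage and network analysis, such as the slide's question of whether an account's followers are themselves scholars.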
  11. Other criticisms
     • The biggest criticisms come when people try to conflate all the data into a single number
     • An easy point of attack: tweets are all about "sex and drugs and rock 'n' roll papers"*
     • Using clusters is more intelligible to the academic community, e.g. re-use, scholarly activity, scholarly comment (blogs, reviews, discussions)
     • *This isn't true anyway
  12. Making altmetrics work
     • Altmetrics has got where it is today on the basis of standards
     • Without ISSNs and DOIs, the world would be a thousand times harder
     • Elsevier is supporting research to discover scholarly impact in areas that don't use DOIs
     • (Other identifier standards exist: PubMed IDs, arXiv IDs)
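Since the identifier schemes on the slide have recognisable shapes, a small sketch can show how raw identifiers might be sorted by scheme. The patterns below are simplified assumptions: the DOI pattern follows the common "10.&lt;registrant&gt;/&lt;suffix&gt;" shape, and the arXiv pattern covers post-2007 IDs only.

```python
import re

# Simplified, illustrative patterns for the identifier schemes mentioned
# on the slide; real validation rules are looser/richer than this.
PATTERNS = {
    "doi": re.compile(r"^10\.\d{4,9}/\S+$"),
    "arxiv": re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$"),
    "pmid": re.compile(r"^\d{1,8}$"),
}

def identify(identifier: str) -> str:
    """Guess which identifier scheme a string belongs to."""
    for scheme, pattern in PATTERNS.items():
        if pattern.match(identifier):
            return scheme
    return "unknown"

print(identify("10.1371/journal.pone.0064841"))  # doi
print(identify("1203.4745"))                     # arxiv
print(identify("23193287"))                      # pmid
```

Normalising identifiers like this is the precondition for the aggregation the talk describes: altmetric events can only be grouped per document once they resolve to the same ID.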
  13. Expanding views of altmetrics
     • Increasingly, we're seeing altmetrics used to describe not only articles but also institutions, journals, data, and people
     • For institutions, Snowball Metrics has recently adopted the same formulation for grouping altmetrics as Elsevier
  14. Making data count
     • More funders are insisting on open data
     • The way to understand whether it's being used is data metrics: combining altmetrics and traditional (web-o-)metrics
     • Downloads, citations, shares, re-uses…
     • Downside: the data-repository landscape is fragmented, with 600+ registered repositories
     • Upside: DataCite, ODIN, ORCID, DOI, RDA, the draft Declaration of Data Citation Principles
  15. Measuring the effect of research on society
     • Governments don't operate like scholars
     • Rhetoric, argument, polemics
     • Personal reputation is important
     • Laws don't contain citations
     • The relationship is fuzzy: less a chain of evidence, more a miasma of influence
     • Elsevier is sponsoring work to understand this relationship
  16. Shaping communications
     • Standards are vital to altmetrics
     • NISO is involved in shaping the conversation around what implicit standards need to be developed
     • (My example) Is a retweet the same as a tweet? Do we count replies or favourites? And what about modified tweets and conversations?
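The slide's question of whether a retweet counts the same as a tweet can be made concrete with a weighting sketch. The weights below are purely illustrative assumptions, not a proposed standard; the point is that the choice of weights is exactly what needs community agreement.

```python
# Illustrative weights for different kinds of Twitter activity; a
# standard would have to fix these (or reject weighting altogether).
WEIGHTS = {"tweet": 1.0, "retweet": 0.5, "reply": 0.75, "favourite": 0.25}

def weighted_attention(event_kinds):
    """Sum attention under the illustrative weighting above."""
    return sum(WEIGHTS.get(kind, 0.0) for kind in event_kinds)

raw = ["tweet", "retweet", "retweet", "reply", "favourite"]
print(len(raw), weighted_attention(raw))  # 5 raw events vs 3.0 weighted
```

Two providers using different weight tables would report different numbers for identical activity, which is why the reproducibility and normalization items in NISO's white paper matter.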
  17. NISO's White Paper
     • Comments requested until July 18th
     • End of stage 1
     • Some of the observations:
     1. Develop specific definitions for alternative assessment metrics.
     2. Agree on proper usage of the term "altmetrics," or on using a different term.
     3. Define subcategories for alternative assessment metrics, as needed.
     4. Identify research output types that are applicable to the use of metrics.
     5. Define relationships between different research outputs and develop metrics for this aggregated model.
  18. NISO's White Paper (continued)
     6. Define appropriate metrics and calculation methodologies for specific output types, such as software, datasets, or performances.
     7. Agree on main use cases for alternative assessment metrics and develop a needs assessment based on those use cases.
     8. Develop a statement about the role of alternative assessment metrics in research evaluation.
     9. Identify specific scenarios for the use of altmetrics in research evaluation (e.g., research data, social impact) and what gaps exist in data collection around these scenarios.
     10. Promote and facilitate the use of persistent identifiers in scholarly communications.
     11. Research issues surrounding the reproducibility of metrics across providers.
     12. Develop strategies to improve data quality through normalization of source data across providers.
     13. Explore creation of standardized APIs or download or exchange formats to facilitate data gathering.
     14. Develop strategies to increase trust, e.g., openly available data, audits, or a clearinghouse.
     15. Study potential strategies for defining and identifying systematic gaming.
     16. Identify best practices for grouping and aggregating multiple data sources.
     17. Identify best practices for grouping and aggregation by journal, author, institution, and funder.
     18. Define and promote the use of contributorship roles.
     19. Establish a context and normalization strategy over time, by discipline, country, etc.
     20. Describe how the main use cases apply to and are valuable to the different stakeholder groups.
     21. Identify best practices for identifying contributor categories (e.g., scholars vs. general public).
     22. Identify organizations to include in further discussions.
     23. Identify existing standards that need to be applied in the context of further discussions.
     24. Identify and prioritize further activities.
     25. Clarify researcher strategy (e.g., driven by researcher uptake vs. mandates by funders and institutions).
  19. Your role in improving the (altmetrics) world
     • Use DOIs when you communicate
     • Use ORCIDs
     • Develop, deploy, and document APIs
     • (APIs that use DOIs, that use ORCIDs)
     • Tell the world about your #altmetrics