Assessing Digital Output
Looking at emerging alternative metrics for
measuring author impact and usage, this
presentation focuses on methods for capturing
more granular data around researchers and
topics – including new assessment tools and
usage data sets – and on how these change our
understanding of overall author contributions.
Some words on terms
• Alternative metrics, altmetrics, article data,
usage data, assessments, metrics, impact,
understanding, attention, reach…
• If this seems confusing…
• Altmetrics is at the big bang stage – this
universe has not yet cooled down and settled
What is the data?
• A set of altmetric data is about a common document
and represents usage, recommendation, shares, re-use…
• Identified by DOI, URL, ID (see the sketch after this list)
• It does not show common intent: a tweet is not the
same as a Mendeley share is not the same as a Data
Dryad data download is not the same as mass media
coverage or a blog
• Although I talk about journal and article data, this data
can be derived from any digital output
• Books, conference presentations, policy papers, …
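To make this concrete, here is a minimal sketch (Python, with illustrative field names and source labels – not any provider's actual schema) of how a set of altmetric events around one document might be represented:

```python
from dataclasses import dataclass

# One altmetric event: a single act (a tweet, a Mendeley save, a Dryad data
# download, a blog post...) tied to a document by a persistent identifier.
@dataclass(frozen=True)
class AltmetricEvent:
    doc_id: str      # DOI, URL, or other identifier of the document
    source: str      # e.g. "twitter", "mendeley", "dryad", "news", "blog"
    event_type: str  # e.g. "tweet", "save", "download", "coverage", "post"
    timestamp: str   # ISO 8601 date of the event

# A "set of altmetric data" is simply all events sharing one document ID;
# events stay typed and separate, because they do not show common intent.
def events_for(doc_id, events):
    return [e for e in events if e.doc_id == doc_id]
```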
What are metrics?
Metrics are an interpretive layer derived from
the underlying data, used as indicators of:
• Scholarly impact
• Social impact
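Continuing the sketch above: the raw events stay untouched, and a metric is just one chosen summary computed over them – the interpretation lives in the summary, not the data.

```python
from collections import Counter

# A simple interpretive layer: count events per source for one document.
# Which summaries indicate scholarly vs. social impact is a separate choice.
def source_counts(events):
    return Counter(e.source for e in events)

# e.g. Counter({"mendeley": 42, "twitter": 17, "blog": 2}) for one paper
```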
Bringing together sources…
• Plum Analytics
• PLOS / PLOS's open-source article-level metrics (ALM) code
• Altmetrics is not Altmetric.com
Each has strengths and weaknesses,
no canonical source
• Altmetrics isn’t one thing, so attempting to
express it as one thing will fail.
• Elsevier (and others) favour intelligent clusters
of data: social activity, mass media, scholarly
activity, scholarly comment, re-use (one possible
mapping is sketched below)
• Elsevier believes that more research is needed,
and that the best indicators for scholarly impact
are scholarly activity and scholarly comment
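A sketch of the clustering idea; the source-to-cluster mapping below is my illustrative guess, not Elsevier's actual assignment:

```python
# Map raw sources into the five clusters named above. Where a given source
# belongs is a judgment call; "other" catches anything unmapped.
CLUSTERS = {
    "twitter":   "social activity",
    "facebook":  "social activity",
    "news":      "mass media",
    "mendeley":  "scholarly activity",
    "citeulike": "scholarly activity",
    "blog":      "scholarly comment",
    "review":    "scholarly comment",
    "dryad":     "re-use",
}

def cluster_counts(events):
    counts = {}
    for e in events:
        cluster = CLUSTERS.get(e.source, "other")
        counts[cluster] = counts.get(cluster, 0) + 1
    return counts
```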
Different data have different…
Example from 13,500 papers:
• Highly tweeted stories focus on policy, gender,
funding, ‘contentious science’ issues, mostly
summaries on Nature News
• Highly shared papers in Mendeley are hard-core science
• Different platforms have discipline bias
• Scholarly blogs both lead interest and respond to it
• Data from Altmetric.com
The importance of openness
• Communities have to agree to agree
• Innovation and co-operation
• When adapting metrics from data, there
needs to be broad consensus that what we say
we’re measuring is what’s being measured
• We need to reflect and adapt
Gaming / cheating
• If people take this data seriously, will they cheat?
• Eg, the Brazilian citation scandal: strategies used by people
to increase the impact factor (IF) of journals
• Expertise in detecting fraudulent downloads (eg, SSRN),
self-tweeting – when is ‘normal’ corrupt?
• One thing to buy 1000 tweets, another to buy 10 blogs,
or mass media coverage
• Do those twitter accounts have scholarly followers?
• Pattern analysis, usage analysis, network analysis (a toy heuristic is sketched below)
• Public data = public analysis = public response
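As a toy illustration of pattern analysis – the rule and thresholds below are invented for illustration, nothing like a real fraud detector:

```python
# Flag a paper whose tweet volume is wildly out of line with its scholarly
# signals (the "1000 bought tweets, no Mendeley saves" pattern). The weights
# and cut-offs are arbitrary; real detection uses usage and network analysis.
def looks_gamed(tweets, mendeley_saves, blog_posts):
    scholarly_signal = mendeley_saves + 10 * blog_posts
    return tweets > 1000 and scholarly_signal < tweets / 100
```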
Making altmetrics work
• The biggest criticisms arise when people try to
conflate all the data into a single thing
• Easy point of attack – tweets are all about “sex
and drugs and rock ‘n’ roll papers”*
• Using clusters is more intelligible to the academic
community – eg, re-use, scholarly activity,
scholarly comment (blogs, reviews, …)
• * this isn’t true anyway
• Altmetrics has got where it is today on the
basis of standards
• Without ISSNs and DOIs, the world is a harder
place x 1000
• Elsevier is supporting research to discover
scholarly impact in areas that don’t use DOIs
• (Other standards exist: PubMed IDs, arXiv IDs – see the identifier sketch below)
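A small sketch of why these identifiers matter: they can be recognised and routed mechanically. The patterns below are simplified approximations of the real schemes, not authoritative validators:

```python
import re

# Simplified (illustrative) patterns for DOI, PubMed ID, and arXiv ID.
PATTERNS = {
    "doi":   re.compile(r"^10\.\d{4,9}/\S+$"),
    "pmid":  re.compile(r"^\d{1,8}$"),
    "arxiv": re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$"),
}

def identify(raw):
    """Guess which identifier scheme a string uses, else 'unknown'."""
    value = raw.strip()
    if value.startswith("https://doi.org/"):
        value = value[len("https://doi.org/"):]
    for scheme, pattern in PATTERNS.items():
        if pattern.match(value):
            return scheme
    return "unknown"
```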
Expanding views of altmetrics
• Increasingly, we’re seeing altmetrics being
used to describe not only articles but also
institutions, journals, data, and more
• For institutions, Snowball Metrics has recently
adopted the same formulation for grouping
altmetrics as Elsevier
Making data count
• More funders are insisting on open data
• And the way to understand whether it’s being
used … is data metrics – combining altmetrics
and traditional (web-o-)metrics
• Downloads, citations, shares, re-uses…
• Downside: the data repository landscape is fragmented –
600+ repositories registered at databib.org
• Upside: DataCite, ODIN, ORCID, DOI, RDA, the draft
Declaration of Data Citation Principles (a DataCite lookup is sketched below)
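As a hedged example of what a data-metrics lookup can look like: DataCite's public REST API returns JSON metadata for a dataset DOI (which fields are populated varies by record):

```python
import requests

# Fetch DataCite's metadata for a dataset DOI via its public REST API.
# The JSON:API envelope is {"data": {"attributes": {...}}}.
def datacite_metadata(doi):
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]
```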
Measuring the effect of research on government
• Governments don’t operate like scholars
• Rhetoric, argument, polemics
• Personal reputation is important
• Laws don’t contain citations
• The relationship is fuzzy – less a chain of
evidence, more a miasma of influence
• Elsevier is sponsoring work to understand this
NISO’s White Paper
• Standards are vital to altmetrics
• NISO is involved in shaping the conversation
around what implicit standards need to be made explicit
• (My example) – is a retweet the same as a
tweet? Do we count replies or favourites? And
what about modified tweets and conversations? (See the counting sketch below.)
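To make the counting question concrete: the same event stream yields different totals under different rules, which is exactly why the definitions need standardising. A minimal sketch:

```python
# Count "tweets" for a paper under configurable rules: the answer changes
# depending on whether retweets and replies are admitted.
def tweet_count(event_kinds, include_retweets=True, include_replies=False):
    kinds = {"tweet"}
    if include_retweets:
        kinds.add("retweet")
    if include_replies:
        kinds.add("reply")
    return sum(1 for kind in event_kinds if kind in kinds)

stream = ["tweet", "retweet", "retweet", "reply", "tweet"]
assert tweet_count(stream) == 4                          # retweets count
assert tweet_count(stream, include_retweets=False) == 2  # strict tweets only
```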
• Comments requested until July 18th
• End of stage 1
• Some of the observations:
1. Develop specific definitions for alternative assessment metrics.
2. Agree on proper usage of the term “Altmetrics,” or on using a different term.
3. Define subcategories for alternative assessment metrics, as needed.
4. Identify research output types that are applicable to the use of metrics.
5. Define relationships between different research outputs and develop metrics
for this aggregated model.
6. Define appropriate metrics and calculation methodologies for specific output types, such as software,
datasets, or performances.
7. Agree on main use cases for alternative assessment metrics and develop a needs-assessment based
on those use cases.
8. Develop statement about role of alternative assessment metrics in research evaluation.
9. Identify specific scenarios for the use of altmetrics in research evaluation (e.g., research data, social
impact) and what gaps exist in data collection around these scenarios.
10. Promote and facilitate use of persistent identifiers in scholarly communications.
11. Research issues surrounding the reproducibility of metrics across providers.
12. Develop strategies to improve data quality through normalization of source data across providers.
13. Explore creation of standardized APIs or download or exchange formats to facilitate data gathering.
14. Develop strategies to increase trust, e.g., openly available data, audits, or a clearinghouse.
15. Study potential strategies for defining and identifying systematic gaming.
16. Identify best practices for grouping and aggregating multiple data sources.
17. Identify best practices for grouping and aggregation by journal, author, institution, and funder.
18. Define and promote the use of contributorship roles.
19. Establish a context and normalization strategy over time, by discipline, country, etc.
20. Describe how the main use cases apply to and are valuable to the different stakeholder groups.
21. Identify best practices for identifying contributor categories (e.g., scholars vs. general public).
22. Identify organizations to include in further discussions.
23. Identify existing standards that need to be applied in the context of further discussions.
24. Identify and prioritize further activities.
25. Clarify researcher strategy (e.g., driven by researcher uptake vs. mandates by funders and institutions).
Your role in improving the (altmetrics)…
• Use DOIs when you communicate
• Use ORCIDs
• Develop, deploy and document APIs
• (that use DOIs, that use ORCIDs – see the sketch below)
• Tell the world about your #altmetrics
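A minimal example of this advice in practice, assuming ORCID's public API at pub.orcid.org: identify people by ORCID iD so that their works, keyed by DOI, can be fetched mechanically.

```python
import requests

# List the works attached to an ORCID iD via the public ORCID API.
def works_for(orcid_id):
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# works = works_for("0000-0002-1825-0097")  # ORCID's documented example record
```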