These slides introduce a range of research impact metrics. They were presented at the ER&L Conference (April 2017) by Chris James, Product Manager Research Metrics, Elsevier.
A tool for librarians to select metrics across the research lifecycle
Chris James, Product Manager Research Metrics, Elsevier
ER&L Conference, Austin, Texas, 5th April 2017
Session outline
Following this session, you will:
• Be familiar with the range of research impact metrics available to you
• Have some guidelines to help you select appropriate sets of metrics for different use cases
• Understand a free, new set of journal metrics, called CiteScore metrics
Librarians need to report and track research outputs
Whether it’s to:
• Improve your electronic resource collections
• Assist researchers with funding applications
To do this, a basket of metrics is needed…
The basket of metrics is diverse…
Metric theme: metric sub-themes
A. Funding: Awards
B. Outputs: Productivity of research outputs; Visibility of communication channels
C. Research Impact: Research influence; Knowledge transfer
D. Engagement: Academic network; Non-academic network; Expertise transfer
E. Societal Impact: Societal impact
F. Qualitative input
… and the diverse metrics are available for all entities
Entities:
• Outputs (e.g. article, research data, blog, monograph)
• Custom set of outputs (e.g. funders’ output, articles I’ve reviewed)
• Researcher or group
• Institution or group
• Country or group
• Subject Area
• Serial (e.g. journal, proceedings)
• Portfolio (e.g. publisher’s title list)
Metric themes: A. Funding; B. Outputs; C. Research Impact; D. Engagement; E. Societal Impact; F. Qualitative input
Users in different countries select different metrics
Rank of each metric by usage:

Metric                                   World  Australia  Canada  China  Germany  Japan  UK  US
Field-Weighted Citation Impact             1        1         1      3       2       4     3   1
Outputs in Top Percentiles                 2        2         3      1       4       1     1   6
Publications in Top Journal Percentiles    3        4         2      2       6       2     2   5
Collaboration                              4        6         6      5       1       3     5   7
Citations per Publication                  5        3         7      6       3       5     4   3
Citation Count                             6        5         5      4       8       6     6   2
h-indices                                  7        7         4      8       7       7     7   4

Usage of metrics available in SciVal’s Benchmarking module from 11 March 2014 to 28 June 2015. A partial list of the metrics available at that time is shown, focusing on the most frequently used. Scholarly Output is excluded since this is the default. Note that recently added metrics based on e.g. media mentions and awards data were not available at this time and so are not represented in this analysis.
Resources for Librarians
https://libraryconnect.elsevier.com/articles/librarian-quick-reference-cards-research-impact-metrics
Select the metrics to match the situation
Both evaluation and profiling use cases
Introducing the latest free addition…
This comprehensive, current and open metric for journal
citation impact (introduced in December 2016) is available
in a free layer of Scopus.com. It includes a yearly release
and monthly CiteScore Tracker updates. Find CiteScore
metrics for journals, conference proceedings, book series
and trade journals at https://www.scopus.com/sources
https://libraryconnect.elsevier.com/metrics
CITESCORE = citations in a year to documents published in previous 3 years ÷ # of documents in previous 3 years
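As a quick illustration, the CiteScore ratio above can be sketched in a few lines of Python; the function name and the example counts are hypothetical, not taken from any real journal.

```python
def citescore(citations_this_year: int, docs_prev_3_years: int) -> float:
    """CiteScore: citations received in a year to documents published in the
    previous three years, divided by the number of those documents."""
    if docs_prev_3_years == 0:
        raise ValueError("no documents in the citation window")
    return citations_this_year / docs_prev_3_years

# e.g. 1,500 citations in 2016 to 600 documents published 2013-2015
print(citescore(1500, 600))  # → 2.5
```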
CiteScore is a simple metric for all Scopus journals
A free basket of comprehensive, transparent and current metrics that provides a simple way to measure the citation impact of serials, such as journals, conference proceedings and books, over a 3-year period.
CiteScore calculation
CiteScore (2015 value):
• A = citations received in 2015 to documents published in the previous 3 years (2012-2014)
• B = all documents indexed in Scopus in those 3 years, same document types as counted in A
• CiteScore 2015 value = A ÷ B

Impact Factor (2015 value):
• A = citations received in 2015 to documents published in the previous 2 years (2013-2014); a 5-year variant also exists
• B = only citable items (articles and reviews) in those 2 years, different from A (not editorials or letters-to-the-editor)
• Impact Factor 2015 value = A ÷ B

New from Scopus
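The difference in windows and denominators means the two metrics can diverge for the same title in the same year. A minimal sketch, using entirely hypothetical counts for one journal:

```python
# Hypothetical 2015 counts for a single journal (illustration only).
citations_to_2012_2014 = 1200   # citations received in 2015 (CiteScore numerator)
docs_2012_2014_all_types = 500  # all Scopus-indexed documents (CiteScore denominator)

citations_to_2013_2014 = 900    # citations received in 2015 (Impact Factor numerator)
citable_items_2013_2014 = 280   # articles and reviews only (Impact Factor denominator)

citescore_2015 = citations_to_2012_2014 / docs_2012_2014_all_types
impact_factor_2015 = citations_to_2013_2014 / citable_items_2013_2014

print(round(citescore_2015, 2), round(impact_factor_2015, 2))  # → 2.4 3.21
```

The narrower, citable-items-only denominator is why an Impact Factor can be higher than a CiteScore for the same journal.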
Advantages of CiteScore metrics
Comprehensive:
• Based on Scopus, the world’s broadest abstract and citation database
• CiteScore metrics will be available for all serial titles, not just journals
• CiteScore metrics could be calculated for portfolios

Transparent:
• CiteScore metrics will be available for free
• CiteScore metrics are easy to calculate for yourself
• The underlying database is available for you to interrogate

Current:
• CiteScore Tracker is updated monthly
• New titles will have CiteScore metrics the year after they are indexed in Scopus
How does CiteScore compare to the Impact Factor?
Comparison of CiteScore, CiteScore Tracker and Impact Factor
Desirable characteristics, grouped by goal:

Replicate strong characteristics:
• Metric measures citations per document
• Simple method
• Annual snapshot for reporting purposes

Improved methodology:
• Document type consistency (numerator and denominator)
• Fair compromise for all fields – 3-year citation window
• Derivative metric addresses disciplinary differences
• Ongoing inclusion of error correction

Comprehensive:
• Available for all serials indexed (not only journals)
• New titles have the metric the next calendar year

Current:
• Tracking view for verification and decision making
• Metric is current – updated monthly

Transparent:
• It’s calculated from the same database I use
• Metric and derivative metrics are free
• I can use a free widget on my webpage
• Journal-level evaluation functionality is free
• Underlying database available to verify calculation
FIELD-WEIGHTED CITATION IMPACT (FWCI) = # of citations received by a document ÷ expected # of citations for similar documents
Similar documents are ones in the same discipline,
of the same type (e.g., article, letter, review) and of the
same age.
An FWCI of 1 means that the output performs
just as expected against the global average. More than
1 means that the output is more cited than expected
according to the global average; for example,
1.48 means 48% more cited than expected.
https://libraryconnect.elsevier.com/metrics
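The FWCI ratio above can be sketched as follows; the function name and the example counts are hypothetical, and in practice the expected-citation baseline comes from the database, not from your own calculation.

```python
def fwci(citations: int, expected_citations: float) -> float:
    """Field-Weighted Citation Impact: actual citations divided by the mean
    citations of similar documents (same discipline, type, and age)."""
    if expected_citations <= 0:
        raise ValueError("expected citations must be positive")
    return citations / expected_citations

# A paper with 37 citations, where similar papers average 25 citations:
print(fwci(37, 25))  # → 1.48, i.e. 48% more cited than expected
```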
h-INDEX = # of articles in the collection (h) that have received at least (h) citations over the whole period
For example, an h-index of 8 means that 8 of the
collection’s articles have each received at least 8 citations.
h-index is not skewed by a single highly cited paper,
nor by a large number of poorly cited documents.
This flexible measure can be applied to any collection
of citable documents. Related h-type indices emphasize
other factors, such as newness or citing outputs’ own
citation counts. http://www.harzing.com/pop_hindex.htm
https://libraryconnect.elsevier.com/metrics
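The definition above translates directly into code. A minimal sketch (the function name and citation counts are hypothetical):

```python
def h_index(citation_counts: list[int]) -> int:
    """h-index: the largest h such that h documents have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this many documents have at least this many citations
        else:
            break
    return h

# Matches the slide's example: 8 articles with at least 8 citations each.
print(h_index([25, 12, 10, 9, 8, 8, 8, 8, 3, 1]))  # → 8
```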
SCIMAGO JOURNAL RANK (SJR) = average # of weighted citations received in a year ÷ # of documents published in previous 3 years
Citations are weighted – worth more or less – depending
on the source they come from. The subject field, quality
and reputation of the journal have a direct effect on the
value of a citation. Can be applied to journals, book
series and conference proceedings.
https://libraryconnect.elsevier.com/metrics
Calculated by SCImago Lab based on Scopus data.
http://www.scimagojr.com
SOURCE NORMALIZED IMPACT PER PAPER (SNIP) = journal’s citation count per paper ÷ citation potential in its subject field
The impact of a single citation will have a higher
value in subject areas where citations are less likely,
and vice versa. Stability intervals indicate the reliability
of the score. Smaller journals tend to have wider
stability intervals than larger journals.
https://libraryconnect.elsevier.com/metrics
Calculated by CWTS based on Scopus data.
http://www.journalindicators.com
JOURNAL IMPACT FACTOR = citations in a year to documents published in previous 2 years ÷ # of citable items in previous 2 years
Based on Web of Science data, this metric is updated once a year and traditionally released in June following the year of coverage as part of the Journal Citation Reports®. JCR also includes a Five-year Impact Factor.
https://libraryconnect.elsevier.com/metrics
SCHOLARLY ACTIVITY ONLINE: # of users who added an article into their personal scholarly collaboration network library
The website How Can I Share It? links to publisher
sharing policies, voluntary principles for article sharing
on scholarly collaboration networks, and places to share
that endorse these principles, including Mendeley, figshare,
SSRN and others. http://www.howcanishareit.com
https://libraryconnect.elsevier.com/metrics
SCHOLARLY COMMENTARY ONLINE: # of mentions in scientific blogs and/or academic websites
Investigating beyond the count to actual mentions by
scholars could uncover possible future research
collaborators or opportunities to add to the promotion
and tenure portfolio. These mentions can be found in
the Scopus Article Metrics module and within free and
subscription altmetric tools and services.
https://libraryconnect.elsevier.com/metrics
SOCIAL ACTIVITY ONLINE: # of mentions on micro-blogging sites
https://libraryconnect.elsevier.com/metrics
Micro-blogging sites may include Twitter, Facebook,
Google+ and others. Reporting on this attention is
becoming more common in academic CVs as a way
to supplement traditional citation-based metrics,
which may take years to accumulate. They may also
be open to gaming.
http://www.altmetric.com/blog/gaming-altmetrics
MEDIA MENTIONS: # of mentions in mass or popular media
https://libraryconnect.elsevier.com/metrics
Media mentions are valued indicators of social impact
as they often highlight the potential impact of the
research on society. Sources could include an
institution’s press clipping service or an altmetric
provider. Mendeley, Scopus (Article Metrics module),
Pure and SciVal also report on mass media.
PERCENTILE BENCHMARK (ARTICLES): compares items of same age, subject area & document type over an 18-month window
https://libraryconnect.elsevier.com/metrics
The higher the percentile benchmark, the better. This
is available in Scopus for citations, and also for
Mendeley readership and tweets. Particularly useful
for authors as a way to contextualize citation counts
for journal articles as an indicator of academic impact.
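A percentile benchmark of this kind can be sketched as the share of comparable documents that a given article outperforms; the function name and the peer citation counts below are hypothetical.

```python
def citation_percentile(citations: int, peer_citations: list[int]) -> float:
    """Percentile of a document's citation count among same-age, same-type,
    same-field peers (higher is better)."""
    below = sum(1 for c in peer_citations if c < citations)
    return 100.0 * below / len(peer_citations)

# A document with 40 citations among 7 hypothetical peer documents:
peers = [2, 5, 8, 12, 20, 31, 55]
print(round(citation_percentile(40, peers), 1))  # 6 of 7 peers below → 85.7
```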
OUTPUTS IN TOP PERCENTILES: extent to which a research entity’s documents are present in the most cited percentiles of a data universe
https://libraryconnect.elsevier.com/metrics
Found within SciVal, Outputs in Top Percentiles can
be field weighted. It indicates how many articles are
in the top 1%, 5%, 10% or 25% of the most cited
documents. Quick way to benchmark groups of
researchers.
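The underlying calculation is a simple share: how many of a group’s outputs clear the citation threshold that marks the top X% of the whole data universe. A sketch with hypothetical numbers (in SciVal the threshold comes from the full database):

```python
def share_in_top_percentile(group_citations: list[int], threshold: int) -> float:
    """Percentage of a group's outputs at or above the citation count that
    marks the data universe's top-X% boundary."""
    top = sum(1 for c in group_citations if c >= threshold)
    return 100.0 * top / len(group_citations)

# Hypothetical: the universe's top-10% boundary is 50 citations.
group = [3, 12, 55, 60, 7, 90, 4, 51, 2, 10]
print(share_in_top_percentile(group, 50))  # 4 of 10 outputs → 40.0
```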
Two Golden Rules for using research metrics
When used correctly, research metrics together with qualitative input
give a balanced, multi-dimensional view for decision-making
• Always use both qualitative and quantitative input into your decisions
• Always use more than one research metric as the quantitative input
Example: the importance of using multiple metrics from the basket to compensate for weaknesses

Field-Weighted Citation Impact = 2.53
✓ Compensates for differences in field, type and age
✓ Meaningful benchmark is “built in” – 1 is average for a subject area
× People may not like small numbers
× Complicated; difficult to validate
× No idea of magnitude: how many citations does it represent?

used with

Citations per Publication = 27.8
✓ Large number
✓ Simple, easy to validate
✓ Communicates magnitude of activity
× Affected by differences in field, type and age
× Meaningless without additional benchmarking
Filling the gap in the Scopus basket of journal metrics

SNIP and SJR
✓ Compensate for differences in field, type and age
✓ Meaningful benchmark is “built in” – 1 is average for a subject area
× People may not like small numbers
× Complicated; difficult to validate
× No idea of magnitude: how many citations does it represent?

used with CiteScore and associated metrics
✓ Large number
✓ Simple, easy to validate
✓ Communicates magnitude of activity
× Affected by differences in field, type and age
× Meaningless without additional benchmarking
Evaluation and profiling both draw from the same basket

Evaluation (top-down):
• A core set of metrics (KPIs), determined per evaluator
• Supplementary sets per discipline and entity-type
• Should ideally be openly communicated

Profiling (bottom-up):
• Draw from the core set, as relevant to the evaluator
• Supplement with tailored metrics, customized per individual entity
CiteScore metrics anchor the broader basket of metrics

Entities to which metrics apply: Outputs (e.g. article, research data, book, monograph); Custom set (e.g. articles I’ve reviewed); Serial (e.g. journal); Publisher or library portfolio; Researcher; Institution; Country; Subject Area; Editor; Board; Authors.

The basket spans community contributions, consumption, scholarly impact and social impact, and includes:
• Funding: funding awards, funding sources
• Outputs: Scholarly Output, research data output, conference output
• Audience: geographical spread, collaboration network, sector distribution
• Usage: usage counts, Mendeley counts
• Citations: citation counts; h-, g- and m-indices; CiteScore metrics; SNIP, SJR, IF
• Scholarly Activity and Academic Opinion: scholarly discussion, peer review metrics, prizes and awards
• Social Activity and Media Activity: social media mentions, media mentions
• Patents and societal impact: patent metrics, medical guidelines, influence on policies
Types of metric range from individual metrics to standardized Snowball Metrics.
From where can you access these metrics?
There are a number of free and paid resources
Online demo time…
Altmetrics
CiteScore metrics, including SNIP and SJR
Summary
• Golden Rules: both expert opinion and research metrics are needed to fully describe research performance
• The basket of metrics is a “menu” that contains diverse metrics for all entities
• Metrics relevant to your question should be selected from the basket, for both the entities you are investigating and suitable peers
• The basket of metrics enables both evaluation and profiling use cases
LEGEND
Document*
Author
Journal
Indicates that the Snowball Metrics group agreed to include it as a standardized metric, which is data-source and system agnostic.
https://www.snowballmetrics.com
*“Document” in the definitions refers to primary document types such as journal
articles, books and conference papers. See Scopus Content Coverage Guide
(page 9) for a full list of document types: https://goo.gl/bLYH0v
CITATION COUNT: # of citations accrued since publication
A simple measure of attention for a particular article,
journal or researcher. As with all citation-based measures,
it is important to be aware of citation practices. The paper
“Effective Strategies for Increasing Citation Frequency” lists
33 different ways to increase citations.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2344585
https://libraryconnect.elsevier.com/metrics
DOCUMENT COUNT: # of items published by an individual or group of individuals
A researcher using document count should also provide
a list of document titles with links. If authors use an
ORCID iD – a persistent scholarly identifier – they can
draw on numerous sources for document count
including Scopus, ResearcherID, CrossRef and PubMed.
Register for an ORCID iD at http://orcid.org
https://libraryconnect.elsevier.com/metrics
Partnering with the Library Community
LIBRARY CONNECT
https://libraryconnect.elsevier.com
Content by
Elsevier Library Connect &
Jenny Delasalle
Freelance librarian & consultant
@JennyDellasalle
CC for Quick Reference
Cards:
Elsevier, Scopus, SciVal, Mendeley, Pure and other Elsevier trademarks are the property of Elsevier B.V. and its
affiliates. Other trademarks, including the SNIP and SJR icons, are the property of their respective owners.
Join the conversation:
@library_connect
libraryconnect
company/libraryconnect