This presentation was provided by Holly Falk-Krzesinski of Elsevier during the NISO event, "Is This Still Working? Incentives to Publish, Metrics, and New Reward Systems," held on February 20, 2019.
Falk-Krzesinski, "Administrator (Institutional Use of the Data): Data-informed Strategic Planning for the Research Enterprise"
1. Research Metrics: Data-informed Strategic Planning for the Research Enterprise
NISO Virtual Conference | February 20, 2019
Is This Still Working? Incentives to Publish, Metrics, and New Reward Systems
Holly J. Falk-Krzesinski, PhD
Vice President, Research Intelligence | Elsevier
4. Strategic Context for Research Metrics
• Decreasing government grant funding for research
• Increasing competition for government research funding
• Rise of interdisciplinary and international grand challenge themes
• Increased team science and cross-sector collaboration
• Competition to attract the best research leaders globally
• Growing need to demonstrate both economic and social impact of research
5. Research Metrics at Different Levels
https://libraryconnect.elsevier.com/articles/librarian-quick-reference-cards-research-impact-metrics
Journal Level
• CiteScore
• Journal Impact Factor
• SCImago Journal Rank (SJR)
• Source Normalized Impact per Paper (SNIP)
Article Level
• Citation count
• Citations per paper
• Field-Weighted Citation Impact (FWCI) (see the formula below)
• Outputs in top quartile
• Citations in policy and medical guidelines
• Usage
• Captures
• Mentions
• Social media
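Of these, the Field-Weighted Citation Impact has the least obvious definition; as a reminder, the formula below follows the standard Scopus/SciVal definition and is not spelled out on the slide:

\[
\mathrm{FWCI} = \frac{\text{citations actually received by the publication}}{\text{citations expected for publications of the same field, document type, and publication year}}
\]

An FWCI of 1.00 therefore means the publication is cited exactly at the world average for similar publications.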
Researcher Level
• Document count
• Total citations
• h-index
• i10-index
• g-index
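The three indices above are all simple functions of a researcher's per-document citation counts. A minimal sketch of how they are computed (the citation counts are invented, purely for illustration):

```python
def h_index(citations):
    """Largest h such that h documents each have at least h citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def i10_index(citations):
    """Number of documents with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    """Largest g such that the top g documents together have at least g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Invented citation counts for one researcher's documents
counts = [42, 17, 11, 9, 6, 3, 0]
print(h_index(counts), i10_index(counts), g_index(counts))  # 5 3 7
```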
7. Categories of Metrics for Analysis
https://www.elsevier.com/__data/assets/pdf_file/0020/53327/ELSV-13013-Elsevier-Research-Metrics-Book-r5-Web.pdf
8. Research Metrics Use Cases
• Analyze research strengths
• Determine where research is a good potential investment
• Demonstrate impact (return on investment) of research funding
• Showcase researchers or identify rising stars
• Tell a better narrative about everything that is happening with research
9. A Building Need to Demonstrate Impact
https://report.nih.gov/nihdatabook/report/20
10. "Given how tight budgets are around the world, governments are rightfully demanding effectiveness in the programs they pay for. To address these demands, we need better measurement tools to determine which approaches work and which do not."
Bill Gates
Gates Foundation Annual Letter 2013
12. Diverse Set of Metrics for Demonstrating Impact
Types of impact: educational impact, societal impact, commercial impact, innovation, informational impact, academic impact, promotion / attention / buzz
• Number of library holdings (WorldCat OCLC)
• Views on SlideShare
• Plays on YouTube
• Amazon book reviews
• Clinical citations or health policy/guideline citations
• Government policy citations
• News mentions
• Patent citations
• Academic-industry partnerships
• Licenses
• Business consultancy activities
• Number of patents filed and granted
• Wikipedia citations
• Blog mentions
• StackExchange links
• Downloads from GitHub, RePEc, IRs
• Citations (field-normalised, percentiles, counts)
• Collaborators on GitHub
• Full-text, PDF, and HTML views on ScienceDirect, Figshare, etc.
• Social media metrics (shares, likes, +1s, tweets)
Qualitative input: expert feedback on quality and impact of research
14. Research Data Metrics
https://rdmi.uchicago.edu/papers/08212017144742_deWaard082117.pdf
Goal | Metric | How to measure
Research Data is Shared:
1. Stored, i.e. safely available in a long-term repository | Number of datasets stored in long-term storage | Mendeley Data, Pure; Plum indexes Figshare, Dryad, Mendeley Data and is working on Dataverse
2. Published, i.e. long-term preserved, accessible via web, with a GUID, citable, with proper metadata | Number of datasets published, in some form | Scholix, ScienceDirect, Scopus
3. Linked, to articles or other datasets | Number of datasets linked to articles | Scholix, Scopus
4. Validated, by a reviewer/curated | Number of datasets in curated databases or peer reviewed in data articles | ScienceDirect, DataSearch (for curated databases)
Research Data is Seen and Used:
5. Discovered | Number of datasets viewed in databases/websites/search engines | DataSearch, metrics from other search engines/repositories
6. Identified | DOI is resolved | DataCite has DOI resolution: made available?
7. Mentioned | Social media and news mentions | Plum and Newsflo
8. Cited | Number of datasets cited in articles | Scopus
9. Downloaded | Downloads from repositories | Downloads from Mendeley Data, access data from Figshare/Dryad
10. Reused | Mention of usage in article or other dataset | ScienceDirect, access to other data repositories
15. Open Science Metrics
• Impact of Open Science
• Engagement in Open Science activities and impact of that engagement
https://ec.europa.eu/research/openscience/pdf/os_rewards_wgreport_final.pdf
16. Golden Rules for Using Research Metrics
• Always use both qualitative and quantitative input into your decisions
• Always use more than one research metric as the quantitative input
• Using multiple metrics drives desirable changes in behavior
• There are many different ways of representing impact
• A research metric's strengths can complement the weaknesses of other metrics
• Combining both approaches will get you closer to the whole story
• Valuable intelligence is available from the points where these approaches differ in their message
• This is about benefitting from the strengths of both approaches, not about replacing one with the other
17. Mechanisms for Gathering Data for Metrics
From the NISO Code of Conduct for Altmetrics: https://www.niso.org/press-releases/2016/02/niso-releases-draft-recommended-practice-altmetrics-data-quality-public
• Describe all known limitations of the data
• Provide a clear definition of each metric
• Describe how data are aggregated
• Detail how often data are updated
18. Responsible Metrics
• Robustness: basing metrics on the best possible data in terms of accuracy and scope
• Humility: recognizing that quantitative evaluation should support, but not supplant, qualitative, expert assessment
• Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results
• Diversity: accounting for variation by field, and using a variety of indicators to support diversity across the research system
• Reflexivity: recognizing systemic and potential effects of indicators and updating them in response
http://www.hefce.ac.uk/pubs/rereports/year/2015/metrictide/
21. Gathering Data for Evidence
Gather data as you go along rather than retrospectively.
Think about what success would look like for each question or impact activity and how to evidence it.
Use all the data available, be clear and specific, and build a coherent narrative to provide context.
22. Data in Institutional Systems
• Persons - researchers, postgraduate students, external persons
• Organizations - faculties, departments, research groups, external units
• Publications - peer-reviewed journal articles, books, chapters, theses, non-textual, etc.
• Publishers and journals - names, IDs, ratings
• Bibliometrics - citations, impact factors, altmetrics
• Activities - conferences, boards, learned societies, peer reviewing, prizes
• Narratives - narrative recordings of the impact of research
• Datasets - stored locally or in a separate data repository
• Equipment - type, placement, ownership details
• Funding opportunities - funder, program, eligibility, etc.
• Grant applications - stage, funder, program, amount applied, documents attached
• Grant awards - funder, program, amount, dates, contract docs, applicants, budget
• Projects - budget, expenditure, participants, collaborators, students, outputs
• Press clippings - national and international papers, electronic media
Joachim Schöpfel et al. / Procedia Computer Science 106 (2017) 305-320
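To make the shape of such a data model concrete, here is a minimal, purely illustrative sketch of a few of these entity types and how they might reference one another. The class and field names are invented for illustration and are not Pure's or any other CRIS's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Person:                 # researcher, postgraduate student, external person
    name: str
    orcid: str = ""

@dataclass
class Organization:           # faculty, department, research group, external unit
    name: str
    members: List[Person] = field(default_factory=list)

@dataclass
class Publication:            # journal article, book, chapter, thesis, ...
    title: str
    authors: List[Person]
    citations: int = 0        # bibliometrics attached to the output

@dataclass
class GrantAward:             # funder, amount, applicants, ...
    funder: str
    amount: float
    applicants: List[Person] = field(default_factory=list)
```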
23. Sources of Data and Ingestion Options
Type of Data | Source(s) of Data | Ingestion into Pure, a CRIS
Persons | Internal HR system | Pure XML format (automatic recurring sync job)
Organizations | Internal HR system | Pure XML format (automatic recurring sync job)
Publications | Manual entry; online sources, e.g. Scopus; legacy systems | User-friendly templates; single-record import; automated import by person; automated import by department; Pure XML format (single or repeated legacy import); Elsevier PRS service
Publishers and journals | Manual entry; online sources, e.g. Scopus; legacy systems | Automatically together with import; Elsevier PRS service; Pure XML format (single or repeated legacy import)
Bibliometrics | Scopus; Web of Science | Automatically together with import; automatic sync job for citations (Scopus and WoS); Pure XML format (single or repeated legacy import); Elsevier PRS service (Scopus bibliometrics only)
Activities | Manual entry; legacy systems | User-friendly templates; XML format for legacy import
Narratives | Manual entry | User-friendly templates
Datasets | Manual entry; legacy systems | User-friendly templates; XML format for legacy import
Elsevier's Pure data model
https://www.niso.org/events/2019/02/still-working-incentives-publish-metrics-and-new-reward-systems
Research Metrics: Data-informed Strategic Planning for the Research Enterprise
Administrator (Institutional Use of the Data) perspective
As competition for extramural research funding continues to increase and resources become more difficult to acquire and even maintain, universities and other research institutions are relying more heavily on data to help inform their decision-making. In this session, I will address considerations for research leaders and institutional administrators using research information systems, data, metrics and analytics to support the strategic planning for their institutions' research enterprise.
Holly J. Falk-Krzesinski, PhD is the Vice President, Research Intelligence on the Global Strategic Networks team at Elsevier, an information analytics company. Her key role is building and maintaining long-term relationships with research institutions and funding bodies, giving voice to research leaders at those organizations within Elsevier to help the business deliver the most impactful solutions to support research globally. Dr. Falk-Krzesinski also focuses on how open science is advancing, in particular, how institutions are addressing issues of recognition and reward for research data sharing throughout the research life cycle. Prior to joining Elsevier, Dr. Falk-Krzesinski was a faculty member and administrator at Northwestern University. Notably, she launched the central Office of Research Development and examined the use of various tools to support intra- and inter-institutional scientific collaboration and demonstrate the impact of the university's research programs. She also investigated how universities are changing structures to reward engagement in interdisciplinary research and team science.
NORDP Workshop
A Basket of Metrics for Research Evaluation
Increasingly, institutions are interested in tracking and reporting on research outputs to understand their strengths, set goals, chart progress, and make budgetary decisions. Universities globally are, more and more, investing in an evidence-based approach to develop a clear understanding of their position and progress. When used correctly, research metrics, together with qualitative input, offer a balanced, multi-dimensional view for decision-making. While metrics help illuminate the impact of research outputs, they can be a challenge for researchers, research development professionals, and other research leaders unfamiliar with when best to use which metrics. And importantly, when research metrics are misunderstood they have the potential to be misused, becoming a serious point of contention. This session will provide an overview of four levels of research metrics (institutional, e.g., rankings; journal, e.g., CiteScore; article, e.g., citations; and author, e.g., h-index) to guide university decision makers in assembling the most appropriate "basket of metrics" for their institutions' research evaluation needs.
The need for research metrics is pervasive, supporting research institutions and researchers alike
There are different types of metrics
Elsevier is evolving our research metrics strategy to empower scholars/researchers to claim the narrative of what they do and why it matters
Interpretation of Data/Metrics at Administrative Level (Provost, Univ Library, etc.)
Research Information Systems (value-add, cautionary notes, etc.)
Traditional article-related metrics
Institutional aggregate
Metrics can be, and are, calculated across all areas of the research workflow: from input metrics such as grant award volume, through process metrics such as income volume or amount spent, to metrics around outputs, outcomes, and impacts. Many metrics exist in the areas shown in orange on the slide; these are associated with traditional bibliometric measures such as usage and citations, and we provide many of them in tools such as Scopus and SciVal. Looking further to the right, around areas such as engagement and impact, there is opportunity for innovation, and this is where we are currently pushing so we can start to uncover more of the stories around research impact.
To enable us to achieve this, we acquired Plum Analytics early in 2017, which allows us to markedly extend the basket of metrics we can offer and the research outputs we can track and analyse. Plum currently tracks over 100 million research artifacts and has captured billions of interactions with these artifacts from over 40 different sources of metrics. By combining Elsevier's rich data with Plum Analytics' capabilities, we can now monitor research across all disciplines much more effectively and extensively, helping the research community get closer to the whole story of how research is being engaged with and the impact it is having.
In addition to expanding our technical capabilities in this area, we are a partner in an initiative called Snowball Metrics. The initiative has successfully developed robust and clearly defined metric methodologies across the whole research workflow to enable confident comparison in an "apples to apples" way. Importantly, the development of the metric methodologies, or "recipes", was sector led and is owned by research-intensive universities around the world. The recipes are available free of charge for anyone to use and implement and are system and supplier agnostic. The initiative has now defined 32 metric methodologies, which are available to download in the 3rd edition of the recipe book.
Let's talk about the second golden rule: "Always use more than one research metric as the quantitative input." Scopus aims to provide a basket of research metrics to measure research performance.
This slide shows a basket of metrics for measuring research excellence, where each theme has a different set of metrics associated with it.
These research metrics are supported by qualitative input (remember golden rule #1), as represented by the grey bar on the left side.
It's important to make metrics available for all the different entities that you would want to measure: authors, institutions, journals, subject fields, etc.
Research metrics play a significant role in key decisions of institutions and funders
Research metrics can be used for a variety of purposes: to help analyze research strengths, identify hot topics or areas for investment, demonstrate the return on an investment or program, showcase researcher performance, identify rising stars, or help tell a narrative around research and demonstrate impact. In all cases, however, we recommend using more than one metric and complementing the quantitative information the metrics provide with qualitative judgment or expert opinion.
Recognition and reward of individual researchers
International rankings
Institutional benchmarking
Portfolio analysis
Research evaluation
Measuring collaboration
Demonstrating impact
Ever-Increasing Competition for Research Funding
NIH Data Book
Competition for research dollars is fiercer than ever:
Number of applicants is *way* up
Award ($) funding is similar to 2003 levels
Percentage of submissions accepted is down year over year
Metrics to measure a return on investment
Becker Medical Library Model for Assessment of Research Impact
The Model for Assessment of Research Impact is a framework for tracking diffusion of research outputs and activities to locate indicators that demonstrate evidence of biomedical research impact.
This slide shows a basket of article-level metrics for measuring research excellence; importantly, these research metrics are supported by qualitative input, as represented by the grey bar on the right side of the figure.
In this slide, I have tried to demonstrate example metrics and the different things they can indicate for a piece of research. I will not go through all of the examples, but research metrics could be used to demonstrate, for instance: educational impact, by looking at the number of libraries holding your book; societal impact, by looking at the number of clinical citations in health policy documents or guidelines to show an effect on clinical practice; or academic impact, through citation metrics and usage metrics such as downloads on sites such as SSRN or, for computer science, downloads or code forks on GitHub.
One area we are currently focused on, though, is exploring how we can use our capabilities and technology to help demonstrate the impact research has in the policy space and to further investigate the relationship between research and policy.
We would like to explore and understand this relationship, and so help actors in both the research and policy spaces understand it more fully, by leveraging our capabilities and technology more effectively.
Being able to extract the references to research in policy documents is one way to help, but this requires us to have access to policy documents so we can extract and analyze them.
We could then use our technology to identify research, researchers, or institutions being cited or referenced in policy documents, as well as potentially the context around the citation or reference. For example, was the research used instrumentally, or as an idea to influence the policy climate? Or was a researcher influencing policy by participating in a consultation?
Unlike Scopus, which is an objective, curated set of content, PlumX considers a very broad range of research outputs valued by the research community, all of which can be collected within Pure.
Historically the focus has been on publications as the major research output, but research data are growing in importance as a research output, one that also has an associated set of metrics, albeit different from those for publications.
Credit for sharing and reuse of research data: a framework that affords credit throughout the data lifecycle
Demonstrate the value of data sharing and reward data discovery behavior
Work closely with NISO and BD2K working group
Two forms of research data
Research Datasets - experimental datasets, as stored in data.mendeley.com and in other repositories
Research Data Entities - data objects defined by experimental research, for example, genes, proteins, astronomical bodies; these are often identified by type-specific IDs, e.g., accession numbers, which usually take the form of URLs to particular domains
Types of metrics
Usage data
Citation data
Linking and sharing data (aka altmetrics)
Development of a new Data Citation Index
Working closely with NISO on their research data recommendations
To provide a balanced, multi-dimensional view
See more at: http://plumanalytics.com/niso
http://www.niso.org/news/pr/view?item_key=72abb8f785b18bbe2cdfdb8b6a237c21f75e6a2f
https://plumanalytics.com/wp-content/uploads/2018/10/NISO-Self-Reporting-Table-Plum-Analytics-October-15-2018.pdf
CiteScore is a simple and transparent metric for all journals indexed in Scopus.
CiteScore is essentially the average citations per document that a title receives over a three-year period.
This is the calculation for the CiteScore 2015 value of a particular journal:
A = sum of citations received in 2015 to documents published in the journal during 2012, 2013, and 2014
B = number of documents indexed in Scopus that the journal published during 2012, 2013, and 2014
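Expressed as a formula (the numeric example is invented, purely to illustrate the arithmetic):

\[
\mathrm{CiteScore}_{2015} = \frac{A}{B}
\]

For instance, a journal that published 400 documents in 2012-2014 and received 1,200 citations to those documents in 2015 would have a 2015 CiteScore of 1,200 / 400 = 3.0.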
CiteScore metrics are available for all Scopus serial types: peer-reviewed journals, including supplements and special issues; book series; conference proceedings; and trade journals.
Rather than using a two- or five-year citation window, CiteScore uses three. Research over the years has found that in slower-moving fields, two years' worth of data is too short, yet five years is too long to consider in faster-moving fields. The peer-reviewed bibliometric literature shows that three years is the best compromise for a broad-scope database such as Scopus: it incorporates a representative proportion of citations in all disciplines while also reflecting relatively recent data.
Snowball Metrics
Defined and agreed by research-intensive universities
Commonly understood metrics to uncover research strengths by benchmarking, provide valuable input into strategic decision making
Tested methodologies that are data and tool provider agnostic
Recipes that are owned by universities, and are open for the community to use without cost or restriction
A set of global standards that covers the entire spectrum of research activities
Interpretation of Data/Metrics at Administrative Level (Provost, Univ Library, etc.)
Research Information Systems (value-add, cautionary notes, etc.)
Underlying all metrics are data; this data-informed approach offers trustworthy evidence.
Harkening back to the Range of Research Output Types from the previous section
Pure aggregates an organization's data from numerous internal and external sources, and ensures the data that drives strategic decisions is trusted, comprehensive and accessible in real time.
Interrelated actors that are all connected
A highly versatile centralized system, Pure enables your organization to build reports, carry out performance assessments, manage researcher profiles, enable research networking and expertise discovery and more, all while reducing administrative burden for researchers, faculty and staff.
On this slide, simply walk your audience through the text on the slide: first the type of content, then the source where it comes from, and then the method of getting it into Pure.
Availability of APIs enables sharing of data and metrics with an institution's own systems (e.g., dashboards, web sites, platforms, repositories)
Enhanced transparency
Ability to combine data and metrics from multiple sources
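As a rough sketch of how such an API might be consumed from an institutional dashboard or website: the base URL, endpoint path, parameter names, and api-key header below are placeholders, not the documented interface of Pure or any specific CRIS; consult the vendor's API documentation for the real details.

```python
import requests

# Hypothetical CRIS/RIS REST endpoint and credentials -- placeholders only
BASE_URL = "https://research.example.edu/ws/api/research-outputs"
API_KEY = "<your-api-key>"

def fetch_outputs(page_size=25, offset=0):
    """Pull one page of research-output records for reuse in an institutional dashboard."""
    resp = requests.get(
        BASE_URL,
        headers={"api-key": API_KEY, "Accept": "application/json"},
        params={"size": page_size, "offset": offset},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_outputs()
    for item in data.get("items", []):
        print(item.get("title"), item.get("publicationYear"))
```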
Battelle Snowball Metrics working group: https://www.osti.gov/servlets/purl/1462196
"The purpose of the Battelle Snowball Metric Working Group is to use the Snowball Metrics framework to build and employ a method to calculate metrics that will enable Battelle-affiliated National Laboratories to better understand their strengths and weaknesses in a few representative areas."
"There is an advantageous alignment of the recommended Snowball Metrics with the U.S. Department of Energy (DOE) Performance Evaluation and Measurement Plan (PEMP) framework. Using the PEMP as a foundation, an integrated set of metrics, which include Snowball Metrics, can help inform Battelle on the progress of delivering S&T results that contribute to and enhance DOE's mission by providing world-class scientific research capacity and advancing scientific knowledge through peer-reviewed scientific results."
The working group, as a consensus, recommends the following subset of Snowball Metrics:
Scholarly Output
Collaboration
Intellectual Property Volume
Citations per Output
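As a trivial illustration of the kind of calculation these recipes standardize (the records and field names below are invented; the published Snowball Metrics recipes define the exact counting rules and data sources):

```python
# Invented example records: one dict per indexed output for a unit in a given year
outputs = [
    {"citations": 12, "collaborating_institutions": 3},
    {"citations": 0,  "collaborating_institutions": 1},
    {"citations": 7,  "collaborating_institutions": 2},
]

scholarly_output = len(outputs)                                   # "Scholarly Output"
citations_per_output = sum(o["citations"] for o in outputs) / scholarly_output
collaborative = sum(1 for o in outputs if o["collaborating_institutions"] > 1)

print(scholarly_output, round(citations_per_output, 2), collaborative)
# 3 outputs, 6.33 citations per output, 2 outputs with external collaboration
```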
Example dashboard: http://maryland.sla1.org/wp-content/uploads/2018/09/PNNL-Day_Justin.pdf
Customizable dashboards provide administrators with an overview of strategically important metrics
Dashboards can be personalized, shared and used for monitoring and reporting
User controls ensure that only data relevant to the user are visible
Metrics integrated within the RIS allow viewing at different levels of aggregation, e.g., individual researchers, departmental groups of faculty, etc.