Communicating About Standards: Creating a Technical Infrastructure That Everyone Can Use
NETTIE LAGACE, NISO
CEAL ERMB WORKSHOP, WASHINGTON DC
MARCH 20, 2018
About NISO
Non-profit industry trade association accredited by ANSI with 150+ members
Mission of developing and maintaining standards related to information,
documentation, discovery and distribution of published materials and media
Represent U.S. interests to ISO TC 46 (Information and Documentation) and also serve
as Secretariat for ISO TC46/SC9 (Identification and Description). New in 2017: TAG for
ISO/IEC JTC 1/SC 34 - Document description and processing languages
Responsible for standards such as ISSN, DOI, Dublin Core metadata, DAISY digital
talking books, OpenURL, SIP, NCIP, MARC records and ISBN (indirectly)
Volunteer-driven organization: 400+ across the world
2
ALL NISO PUBLICATIONS ARE FREELY AVAILABLE!
New Structure
[Organization chart: Board of Directors; three Topic Committees; an Architecture Committee; multiple Working Groups under the committees.]
Who manages ideas?
NISO Areas of Work & Example Projects
Information Creation & Curation Topic Committee
◦ PIE-J
◦ E-Book Metadata Working Group
◦ Z39.2
Information Policy & Assessment Topic Committee
◦ DDA (Demand-Driven Acquisition)
◦ Transfer
◦ Altmetrics Initiative
Information Discovery & Interchange Topic Committee
◦ Open Discovery Initiative
◦ KBART
◦ Access & License Indicators (ALI)
◦ Z39.50
New NISO work items
What is not working and how does that affect stakeholders?
Does this relate to other existing efforts? Is a shift in one/more of
them required or is a new effort needed?
Who will benefit from the deliverables?
Who needs to participate?
What does the initiative need to do; what does it NOT need to do?
How can uptake of the deliverable be encouraged?
6
NISO Working Group Process
NISO Approval
Working Group formed
Scope & determine deliverables
Gather input
Produce draft output
Public review
Respond & Modify
Publish final version
Market/Educate
Maintain & Update
Challenges to Standards Zen
Participation: from the heart or a competitive interest?
Scope: What has to be left out?
Practices: Ideal state or Lowest Common Denominator?
Getting through the Slog
◦ Draft
◦ Responses to Public Comments
Costs for Publishers/Vendors
NISO publications are free – but other barriers exist
Time to understand
Time to talk to others in the company
Time to code – or find/evaluate a vendor
Time to update documentation
VERSUS…..
Lower costs of exchange with other vendors/libraries
Prevent customers from being confused
More easily/quickly allow end users to access data/information [increasing usage]
Minimize support/questions/issues – more self-sufficiency
Information Industry and Competition
What About NISO Compliance?
• NISO does not formally measure compliance or create scores – all standards and
recommended practices are voluntary [not regulatory]
• Some NISO Standing Committees maintain Registries
• ODI
• KBART
• SUSHI
• Other compliance-related activities
• Circulate surveys
• Education sessions
• Include adherence to standards in RFPs
• Good Citizen
The Context for ODI
Based on a meeting at ALA Annual Conference in New
Orleans on Sunday, June 26, 2011. Recognition of the
following trends and issues:
◦ Emergence of Library Discovery Services solutions
◦ Based on index of a wide range of content
◦ Commercial and open access
◦ Primary journal literature, ebooks, and more
◦ Adopted by thousands of libraries around the world, impacting millions of users
◦ Agreements between content providers and discovery providers are ad hoc, not
representative of all content, and opaque to customers.
14
General Goals of Working Group
Define ways for libraries to assess the level of content
providers’ participation in discovery services
Help streamline the process by which content providers work
with discovery service vendors
Define models for “fair” linking from discovery services to
publishers’ content
Determine what usage statistics should be collected for
libraries and for content providers
15
Areas of Focus
Education
Technology
Conformance
Conformance Disclosure
Content Provider Conformance Checklist – Appendix B
17
Conformance Disclosure
Content Provider Conformance Checklist – Appendix C
18
Conformance Statements
Published content provider statements:
◦ Credo Reference
◦ EBSCO
◦ Gale
◦ IEEE
◦ SAGE
Published discovery provider statements
◦ EBSCO
◦ Ex Libris
◦ ProQuest
ODI Conformance http://www.niso.org/workrooms/odi/conformance/
PIE-J survey
Other standards developers
Standards and Recommended Practices to Support Adoption of Altmetrics
Why are standards
important when we are
measuring things?
Are we measuring scholarship
using “English” or “Metrics”?
Image: Flickr user karindalziel
NISO Altmetrics Initiative:
Why worth funding?
Scholarly assessment is critical to the overall process
◦ Which projects get funded
◦ Who gets promoted and tenure
◦ Which publications are prominent
Assessment has been based on citations since the 1960s
Today’s scholars’ multiple types of interactions with scholarly content are not reflected
◦ Is “non-traditional” scholarly output important too?
Steering Committee
Calendar
April 2015 – Group(s) start working
October 2015 – Draft document(s)
Fall 2015 – Comment period(s)
November 2015 – NISO Report to Sloan Foundation
Spring 2016 – Completion of final draft(s)
34
3 meetings:
PLOS, San Francisco
CNI, Washington, DC
ALA MW, Philadelphia
White Paper Released, Summer 2014
35
37
[Bar chart: Community Feedback on Project Idea Themes – 25 candidate project ideas, each rated on a five-point scale from Unimportant to Very important.]
Working Groups
A: development of definitions and descriptions of use cases
B: definitions for appropriate metrics and calculation methodologies for specific output
types, and promotion and facilitation of use of persistent identifiers
C: development of strategies to improve data quality through source data providers
Definitions and Use Cases
Code of Conduct
Output Types for Assessment
Data Metrics
Persistent Identifiers and Assessment
39
40
Definitions and Use Cases
41
Caveats - important
• Citations, usage, and altmetrics are ALL potentially important and
potentially imperfect
• Please don’t use altmetrics as an uncritical proxy for scholarly
impact – quantitative and qualitative information must be considered too
• Data quality and indicator construction are key factors in the
evaluation of specific altmetrics (read as: this is important –
garbage in, garbage out!)
42
NISO Altmetrics
Working Group A
Charge:
Development of specific definitions for
alternative assessment metrics – This
working group will come up with specific
definitions for the terms commonly used in
alternative assessment metrics, enabling
different stakeholders to talk about the same
thing. This work will also lay the groundwork
for the other working groups.
43
NISO Altmetrics
Working Group A
Charge:
Descriptions of how the main use cases apply to
and are valuable to the different stakeholder
groups – Alternative assessment metrics can be
used for a variety of use cases from research
evaluation to discovery. This working group will
try to identify the main use cases, the
stakeholder groups to which they are most
relevant, and will also develop a statement about
the role of alternative assessment metrics in
research evaluation.
44
Process
• Discussion, Research, Discussion, Research!
• WG A extensively studied the altmetrics literature and other
communications
• Discussed in depth various stakeholders' perspectives
and requirements for these new evaluation measures
• Iterations!
• Need to write an agreed-upon definition to use consistently
across all parties – it can’t be narrow
45
What is Altmetrics? Definition
Altmetrics is a broad term that encapsulates the digital collection,
creation, and use of multiple forms of assessment that are derived from
activity and engagement among diverse stakeholders and scholarly
outputs in the research ecosystem.
The inclusion in the definition of altmetrics of many different outputs and
forms of engagement helps distinguish it from traditional citation-based
metrics, while at the same time, leaving open the possibility of their
complementary use, including for purposes of measuring scholarly
impact.
However, the development of altmetrics in the context of alternative
assessment sets its measurements apart from traditional citation-based
scholarly metrics.
46
Use Cases
Very important! Developed eight personas, three themes:
Showcase achievement: Indicates stakeholder interest in highlighting
the positive achievements garnered by one or more scholarly outputs.
Research evaluation: Indicates stakeholder interest in assessing the
impact or reach of research.
Discovery: Indicates stakeholder interest in discovering or increasing
the discoverability of scholarly outputs and/or researchers.
47
Persona: librarian
48
Persona: member of hiring committee
49
Personas: academic/researcher
50
Personas: academic/researcher (cont’d)
51
Personas: publishing editor
52
Glossary (of course!)
Activity. Viewing, reading, saving, diffusing, mentioning, citing, reusing, modifying, or
otherwise interacting with scholarly outputs.
Altmetric data aggregator. Tools and platforms that aggregate and offer online events
as well as derived metrics from altmetric data providers, for example, Altmetric.com,
Plum Analytics, PLOS ALM, ImpactStory, and Crossref.
Altmetric data provider. Platforms that function as sources of online events used as
altmetrics, for example, Twitter, Mendeley, Facebook, F1000Prime, Github, SlideShare,
and Figshare.
Attention. Notice, interest, or awareness. In altmetrics, this term is frequently used to
describe what is captured by the set of activities and engagements generated around a
scholarly output.
53
Glossary (much more...)
Engagement. The level or depth of interaction between users and scholarly outputs,
typically based upon the activities that can be tracked within an online environment. See
also Activity.
Impact. The subjective range, depth, and degree of influence generated by or around a
person, output, or set of outputs. Interpretations of impact vary depending on its
placement in the research ecosystem.
Metrics. A method or set of methods for purposes of measurement.
Online event. A recorded entity of online activities related to scholarly output, used to
calculate metrics.
54
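To make the Online event definition above concrete, here is a minimal sketch of what one such record might look like, expressed as a Python dict. The field names are purely illustrative assumptions, not a NISO-specified schema.

online_event = {
    "object_id": "https://doi.org/10.1234/example-article",  # the scholarly output (hypothetical DOI)
    "source": "twitter",                                     # altmetric data provider
    "activity": "mentioning",                                # see the Activity definition above
    "actor": "https://twitter.com/example_user",             # hypothetical account
    "occurred_at": "2016-03-01T14:22:05Z",                   # when the activity happened
    "collected_at": "2016-03-01T15:00:00Z",                  # when it was recorded
}

# Metrics are then aggregations over such events, e.g. mentions per output:
events = [online_event]
mention_count = sum(1 for e in events if e["activity"] == "mentioning")
print(mention_count)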
Glossary (much more...)
Scholarly output. A product created or executed by scholars and investigators in the course of
their academic and/or research efforts. Scholarly output may include but is not limited to journal
articles, conference proceedings, books and book chapters, reports, theses and dissertations,
edited volumes, working papers, scholarly editions, oral presentations, performances, artifacts,
exhibitions, online events, software and multimedia, composition, designs, online publications, and
other forms of intellectual property. The term scholarly output is sometimes used synonymously
with research outputs.
Traditional metrics. The set of metrics based upon the collection, calculation, and manipulation of
scholarly citations, often at the journal level. Specific examples include raw and relative (field-
normalized) citation counts and the Journal Impact Factor.
Usage. A specific subset of activity based upon user access to one or more scholarly outputs, often
in an online environment. Common examples include HTML accesses and PDF downloads.
55
Code of Conduct
56
Code of Conduct
Why a Code of Conduct?
Scope
Altmetric Data Providers vs.
Aggregators
57
Working Group C
Scope: The Code of Conduct aims to improve the quality
of altmetric data by increasing the transparency of data
provision and aggregation as well as ensuring
replicability and accuracy of online events used to
generate altmetrics. It is not concerned with the
meaning, validity or interpretation of indicators derived
from that data. Altmetric online events include online
activities “derived from engagement between diverse
stakeholders in the research ecosystem and various
scholarly outputs”, as defined in the NISO WG A Definition of
Altmetrics.
58
Code of Conduct Key Elements
Transparency
Replicability
Accuracy
59
Code of Conduct: Transparency
How data are generated, collected, and curated
How data are aggregated, and derived data generated
When and how often data are updated
How data can be accessed
How data quality is monitored
60
Code of Conduct: Replicability
Provided data is generated using the same methods over time
Changes in methods and their effects are documented
Changes in the data following corrections of errors are
documented
Data provided to different users at the same time is identical or,
if not, differences in access provided to different user groups
are documented
Information is provided on whether and how data can be
independently verified
61
Code of Conduct: Accuracy
The data represents what it purports to reflect
Known errors are identified and corrected
Any limitations of the provided data are communicated
62
Code of Conduct: Reporting
List all available data and metrics (providers & aggregators) and altmetrics data providers from which data are collected (aggregators).
Provide a clear definition of each metric provided.
Describe the method(s) by which data is generated or collected and how this is maintained over time.
Describe any and all known limitations of the data provided.
Provide a documented audit trail of how and when data generation and collection methods change over time with any and all known effects of these
changes, including whether changes were applied historically or only from change date forward.
Describe how data is aggregated.
Detail how often data is updated.
Provide the process of how data can be accessed.
Confirm that data provided to different data aggregators and users at the same time is identical and, if not, how and why they differ.
Confirm that all retrieval methods lead to the same data and, if not, how and why they differ.
Describe the data quality monitoring process.
Provide process by which data can be independently verified (aggregators only).
Provide a process for reporting and correcting suspected inaccurate data or metrics.
63
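As a rough illustration of how a provider or aggregator might publish the reporting items above in machine-readable form, here is a minimal sketch using a Python dict plus a completeness check. The field names are hypothetical, chosen to mirror the list above; the Code of Conduct text itself does not define this structure.

reporting_statement = {
    "provider": "ExampleMetrics (hypothetical)",
    "role": "data provider",  # or "data aggregator"
    "metrics": [
        {
            "name": "tweet_count",
            "definition": "Public tweets linking to the output's DOI or landing page",
            "collection_method": "Streamed mentions matched against known identifiers",
            "known_limitations": "Deleted or protected tweets are not removed retroactively",
            "update_frequency": "daily",
        }
    ],
    "audit_trail": [
        {"date": "2016-01-15",
         "change": "Added DOI resolution for shortened URLs",
         "applied_historically": False}
    ],
    "access": "REST API and CSV export",
    "quality_monitoring": "Weekly sampling against source platform counts",
    "error_reporting": "Contact address for reporting suspected inaccurate data",
}

# Minimal completeness check against the reporting items listed above.
required = {"provider", "role", "metrics", "audit_trail", "access",
            "quality_monitoring", "error_reporting"}
missing = required - reporting_statement.keys()
if missing:
    print("Missing fields:", missing)
else:
    print("All reporting fields present.")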
Non-traditional Outputs
64
Charge
Definitions for appropriate metrics and calculation methodologies for
specific output types. Research outputs that are currently
underrepresented in research evaluation will be the focus of this working
group. This includes research data, software, and performances, but also
research outputs commonly found in the social sciences.
Promotion and facilitation of use of persistent identifiers in scholarly
communications. Persistent identifiers are needed to clearly identify
research outputs for which collection of metrics is desired, but also
to describe their relationships to other research outputs, to contributors,
institutions and funders.
65
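One way to picture the persistent-identifier part of this charge: a simplified, DataCite-style metadata sketch (as a Python dict) in which identifiers link a dataset to a related article, a contributor's ORCID, and a funder. Field names loosely follow the DataCite schema, and all identifier values are hypothetical examples.

dataset_record = {
    "doi": "10.1234/example-dataset",  # hypothetical dataset DOI
    "creators": [{
        "name": "Researcher, Example",
        "nameIdentifiers": [{"nameIdentifier": "https://orcid.org/0000-0002-1825-0097",
                             "nameIdentifierScheme": "ORCID"}],
        "affiliation": ["Example University"],
    }],
    "relatedIdentifiers": [{
        "relatedIdentifier": "10.5678/example-article",  # the article this data supports (hypothetical)
        "relatedIdentifierType": "DOI",
        "relationType": "IsSupplementTo",
    }],
    "fundingReferences": [{
        "funderName": "Example Foundation",
        "funderIdentifier": "https://doi.org/10.13039/100000000",  # hypothetical funder ID
        "awardNumber": "ABC-123",
    }],
}
print(dataset_record["relatedIdentifiers"][0]["relationType"])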
Alternative outputs
66
Recommendations re Data Metrics
Metrics on research data should be made available as widely
as possible
Data citations should be implemented following the Force11
Joint Declaration of Data Citation Principles, in particular:
◦ Use machine-actionable persistent identifiers
◦ Provide metadata required for a citation
◦ Provide a landing page
◦ Data citations should go into the reference list or similar
metadata.
67
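A minimal sketch of what "machine-actionable" can mean in practice: resolving a dataset DOI through content negotiation to retrieve citation metadata instead of the landing page. This assumes the DOI's registration agency supports content negotiation at doi.org; the DOI shown is hypothetical, so the call only succeeds for a real registered DOI.

import requests

doi = "10.1234/example-dataset"  # hypothetical DOI

# Ask the DOI resolver for CSL JSON citation metadata rather than the landing page.
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
resp.raise_for_status()
meta = resp.json()

# Assemble a reference-list entry from the returned citation metadata.
authors = ", ".join(a.get("family", "") for a in meta.get("author", []))
year = meta.get("issued", {}).get("date-parts", [[None]])[0][0]
print(f"{authors} ({year}). {meta.get('title')}. {meta.get('publisher')}. https://doi.org/{doi}")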
Recommendations re Data Metrics
Standards for research-data-use statistics need to be developed.
◦ Based on COUNTER; consider special aspects of research data
◦ Two formulations for data download metrics: examine human and non-human downloads
Research funders should provide mechanisms to support data
repositories in implementing standards for interoperability and obtaining
metrics.
Data discovery and sharing platforms should support and monitor
“streaming” access to data via API queries.
68
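To illustrate the human vs. non-human download distinction noted above, here is a simplified sketch that classifies hypothetical access-log entries by user agent and tallies each bucket per dataset. It is an illustration only, not the COUNTER Code of Practice processing rules.

from collections import Counter

# Substrings commonly found in crawler/script user agents (illustrative, not exhaustive).
MACHINE_HINTS = ("bot", "crawler", "spider", "curl", "wget", "python-requests")

def is_machine(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in MACHINE_HINTS)

downloads = [  # hypothetical access-log entries
    {"dataset": "10.1234/ds-1", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"dataset": "10.1234/ds-1", "user_agent": "python-requests/2.28"},
    {"dataset": "10.1234/ds-2", "user_agent": "Googlebot/2.1"},
]

counts = Counter(
    (d["dataset"], "machine" if is_machine(d["user_agent"]) else "human")
    for d in downloads
)
for (dataset, kind), n in sorted(counts.items()):
    print(dataset, kind, n)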
Persistent Identifiers
69
Altmetrics for #NISOALMI
70
◦ 39 presentation slides have been downloaded 32,740 times (as of July 26, 2016)
◦ The Phase 1 report published in 2014 has been downloaded 9,636 times
◦ Pages hosting content related to this project were accessed 60,548 times
◦ >2,000 people attended the 22 in-person presentations about the project
◦ The Final Report has been downloaded 15,432 times
◦ More than 50 articles/blogs/papers about the initiative
Where to next?
71
Maturity Model for Standards Adoption: increasing trust and confidence in altmetrics
Initial
◦ Metrics from provider
◦ Ad-hoc
Repeatable
◦ Common measurement criteria from provider
◦ Documented measurements and processes
◦ Comparable and consistent
Defined
◦ Measurements defined/confirmed as a standard for provider
◦ Made public
◦ Business processes followed consistently
◦ Transparent
Managed
◦ Standards applied
◦ Controls in place
◦ Checks and balances repeated over time
◦ Open for comment and feedback
◦ Accountable
Governed
◦ Independent verification or third-party audit
◦ Evolving common industry-defined standards
◦ Trust and confidence
72
Key Original Ideas Not Yet Done
Define the role of alternative assessment metrics in research evaluation
and identify what gaps exist in data collection around evaluation
scenarios.
Identify best practices for grouping and aggregating multiple data sources
Identify best practices for grouping and aggregating by journal, author,
institution and funder.
73
Questions?
Nettie Lagace
Associate Director, Programs
nlagace@niso.org
@abugseye
National Information Standards Organization (NISO)
3600 Clipper Mill Road, Suite 302
Baltimore, MD 21211 USA
+1 (301) 654-2512
www.niso.org
74
