
Getting (and giving) credit for all that we do


Presented at the NISO webinar for Research Data Metrics from the Altmetrics working group on output types and identifiers


  1. Title: Getting (and giving) credit for all that we do. Melissa Haendel. NISO Research Data Metrics Landscape: An update from the NISO Altmetrics Working Group B: Output Types & Identifiers. 11.16.2015. @ontowonka
  2. What *IS* “success”?
  3. It’s not always what you see. https://goo.gl/b60moX
  4. What is attribution???
  5. Over 1000 authors
  6. Project CRediT: http://projectcredit.net
  7. Many contributions don’t lead to authorship. BD2K co-authorship (D. Eichmann, N. Vasilevsky): 20% of key personnel are not adequately profiled using publications.
  8. Some contributions are anonymous: data deposition, anonymous review. Image credit: http://disruptiveviews.com/is-your-data-anonymous-or-just-encrypted/
  9. The Research Life Cycle: EXPERIMENT, CONSULT, PUBLISH, DATA, FUND
  10. The Research Life Cycle: EXPERIMENT, CONSULT, PUBLISH, DATA, FUND
  11. Diverse outputs, diverse impacts, diverse roles, each a critical component of the research process. Evidence of meaningful impact: new experimental methods, data models, databases, software tools; new diagnostic criteria; new standards of care; biological materials, animal models; consent documents; clinical/practice guidelines; measurement instruments; continuing education materials; cost-effective interventions; changes in delivery of healthcare services; quality measure guidelines; gray literature. https://becker.wustl.edu/impact-assessment http://nucats.northwestern.edu/
  12. Attribution workshop results: >500 scholarly products. EXAMPLE OUTPUTS related to software. Outputs: binary redistribution package (installer), algorithm, data analytic software tool, analysis scripts, data cleaning, APIs, codebook (for content analysis), source code, software to make metadata for libraries, archives, and museums, program code (for modeling), commentary in code (for open source, there is a need to attribute code authors and commentators/enhancers/hackers, who can document what they did and why), computer language (a syntax to describe a set of operations or activities), software patch (a set of changes to code to fix bugs, add features, etc.), digital workflow (an automated sequence of programs, steps to an outcome), software library (non-standalone code that can be incorporated into something larger), software application (computer code that accomplishes something). Roles: catalog, design, develop, test, hacker, bug finder, software developer, software engineer, developer, programmer, system administrator, execute, document, software package maintainer, project manager, database administrator.
  13. Connecting people to their “stuff”
  14. Modeling & implementation. VIVO-ISF: a suite of ontologies that integrates and extends community standards. (A minimal linked-data sketch appears after this list.)
  15. Credit extends beyond the original contribution: Stacy creates mouse1; Kristi creates mouse2; Karen performs RNA-seq analysis on mouse1 and mouse2 to generate dataset3, which she subsequently curates and analyzes; Karen writes publication pmid:12345 about the results of her analysis; Karen explicitly credits Stacy as an author, but not Kristi.
  16. Credit is connected: credit to Stacy is asserted, but credit to Kristi can be inferred (see the provenance sketch after this list).
  17. Introducing openRIF, the Open Research Information Framework (openRIF, SciENcv, eagle-i, VIVO-ISF).
  18. Ensuring an openRIF that meets community needs. Interoperability: a domain-configurable suite of ontologies to enable interoperability across systems; a community of developers, tools, data providers, and end users.
  19. Developing a computable research ecosystem. Research information is scattered among research networking tools, citation databases (e.g., PubMed), award databases (e.g., NIH RePORTER), and curated archives (e.g., GenBank), or locked up in text (the research literature). Map the SciENcv data model to VIVO-ISF/openRIF; enable bi-directional data exchange; integrate SciENcv and ORCID data into CTSAsearch (David Eichmann): http://research.icts.uiowa.edu/polyglot/
  20. Thank you! Join the FORCE11 Attribution Working Group at https://www.force11.org/group/attributionwg. Join the openRIF listserv at http://group.openrif.org
  21. Identifying those scholarly outputs: things that are not publications or documents need identifiers too, and we need to get beyond thinking only about DOIs. (A short identifier-classification sketch appears below.)
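
To make the modeling idea on slide 14 concrete, here is a minimal sketch of how a non-authorship contribution (Karen's data curation from slide 15) might be expressed as linked data, assuming Python with rdflib. The `ex:` namespace and the property names are illustrative placeholders, not actual VIVO-ISF/openRIF terms.

```python
# Minimal sketch: a non-authorship contribution as RDF triples.
# Requires: pip install rdflib. The ex: namespace and property names
# are illustrative placeholders, not real VIVO-ISF/openRIF terms.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/attribution/")

g = Graph()
g.bind("ex", EX)

dataset = EX["dataset3"]    # the output
curator = EX["karen"]       # the contributor
role = EX["curationRole1"]  # the role connecting them

g.add((dataset, RDF.type, EX.Dataset))
g.add((curator, RDF.type, EX.Person))
g.add((role, RDF.type, EX.ContributorRole))
g.add((role, EX.roleLabel, Literal("data curation")))
g.add((role, EX.contributor, curator))    # who performed the role
g.add((role, EX.contributesTo, dataset))  # what the role produced

print(g.serialize(format="turtle"))
```

Reifying the role as its own node, rather than linking the person to the dataset directly, is what lets a system attach qualifiers such as role labels or dates to the contribution itself.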
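Slides 15 and 16 argue that credit should propagate along the provenance chain: Kristi is never listed as an author, yet mouse2 sits upstream of the publication. Below is a small sketch of that inference, using plain dictionaries in place of a real provenance store; the names follow the slide example.

```python
# Transitive credit inference over a toy provenance graph.
# derived_from maps an output to the outputs it was built on;
# created_by maps an output to its direct contributor(s).
derived_from = {
    "pmid:12345": ["dataset3"],
    "dataset3": ["mouse1", "mouse2"],
}
created_by = {
    "mouse1": ["Stacy"],
    "mouse2": ["Kristi"],
    "dataset3": ["Karen"],
    "pmid:12345": ["Karen"],
}

def inferred_credit(output):
    """Collect everyone upstream of `output` in the provenance graph."""
    people = set(created_by.get(output, []))
    for source in derived_from.get(output, []):
        people |= inferred_credit(source)  # walk the chain recursively
    return people

# Karen credited Stacy explicitly, but Kristi's contribution is inferable:
print(inferred_credit("pmid:12345"))  # {'Karen', 'Stacy', 'Kristi'}
```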
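Slide 21's point, that non-publication outputs need identifiers beyond the DOI, implies tooling that can recognize several schemes side by side. A rough illustration follows; the regular expressions are simplified approximations, not the official grammars of each scheme.

```python
# Rough classifier for a few scholarly identifier schemes.
# Patterns are simplified approximations, not official grammars.
import re

PATTERNS = {
    "DOI":   re.compile(r"^10\.\d{4,9}/\S+$"),               # e.g. 10.1000/182
    "ORCID": re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$"),  # e.g. 0000-0002-1825-0097
    "RRID":  re.compile(r"^RRID:[A-Z]+_\w+$"),               # e.g. RRID:SCR_003070
}

def classify(identifier: str) -> str:
    """Return the scheme name for an identifier string, or 'unknown'."""
    for scheme, pattern in PATTERNS.items():
        if pattern.match(identifier):
            return scheme
    return "unknown"

for ident in ["10.1000/182", "0000-0002-1825-0097", "RRID:SCR_003070"]:
    print(ident, "->", classify(ident))
```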

Editor's Notes

  • Current metrics for scholarly output rely heavily on those things which are easy to count, such as papers, grant dollars, and patents. With altmetrics taking hold, new opportunities are emerging to understand broader societal impact of scholarship. However, many of these measures don’t track the specific products that someone creates, nor their specific participation in various scholarly activities. This limits an understanding of personal impact and value in the scholarly ecosystem and has negative consequences for funding, career progression, and program planning. Further, there is a need to understand downstream outcomes that leverage prior contributions. This presentation will discuss approaches for attribution of non-traditional scholarly products and their relationship to people, organizations, and more traditional scholarly works.
  • Even more critical in science today: more interdisciplinary, more moving pieces, more team-based.
    (The translational workforce is a good example of this: e.g., a clinical trial may have a PI, study coordinators, an ethicist, lab techs, a biobanking facility, analysts, etc.)
    (Another good example is an open source software project like VIVO, where contributors have different roles, produce software and data models, and follow different workflow and dissemination patterns.)
    The contributions of all of these people are required for science to move forward, but there are no mechanisms in place to properly recognize these contributions and represent them in a meaningful manner.
  • http://g3journal.org/content/5/5/719
  • Co-authorship is cross-award, but expertise is within award.
    There are key persons who connect communities.
    20% of awardees are not adequately profiled using publications.

    Social network visualization of 282 BD2K awardees (key personnel) on 38 grants of 6 award types, with co-authorship edges between personnel drawn from those same publications. Edge length is inversely proportional to publication count (200 edges). Note that K01 key personnel include senior mentors, and a few non-responders are still missing. (A minimal layout sketch follows these notes.)

    Connectivity and cluster composition change when comparing domain expertise to co-authorship: for example, there is substantive co-authorship across awards, but expertise tends to stay within the same award. A final key finding from this SNA was that approximately 20% of the awardees did not have publications as a primary outcome from these and prior efforts (often those in roles such as software engineer, developer, programmer, or analyst), implying that traditional means of profiling learners, experts, and collaborations do not provide a complete picture of the data science landscape.
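
The note above specifies edge lengths inversely proportional to co-publication count. Here is a minimal sketch of that layout rule with networkx and invented toy data (not the real BD2K network): in `spring_layout`, a larger edge weight means a stronger attractive force, so weighting edges by publication count makes heavily co-publishing pairs sit closer together.

```python
# Toy sketch of the co-authorship layout described above (invented data).
# In networkx's spring_layout, a larger edge weight means stronger
# attraction, so weighting by co-publication count yields edge lengths
# inversely proportional to that count.
import networkx as nx

G = nx.Graph()
# (awardee_a, awardee_b, number of co-authored publications)
for a, b, n_pubs in [("A", "B", 8), ("B", "C", 1), ("A", "C", 3), ("C", "D", 5)]:
    G.add_edge(a, b, weight=n_pubs)

pos = nx.spring_layout(G, weight="weight", seed=42)
for node, (x, y) in pos.items():
    print(f"{node}: ({x:.2f}, {y:.2f})")
```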