
Introducing the FAIR Evaluator

An overview of the current functionality of the FAIR Evaluator - a framework for automating the evaluation of FAIRness of digital resources. The screenshots here are of the early strawman prototype, which is only available for use by the FAIR Metrics Authoring group at this time. Nevertheless, feedback on the functionality of the Evaluator would be welcome! We anticipate having a fully public version before August 2018.

This work is supported, in part, by the Ministerio de Economía y Competitividad grant number TIN2014-55993-RM


  1. The FAIR Evaluator: Automated, Objective, Aspirational! "Strawman" code by Mark Wilkinson. Sneak preview for Bio IT World 2018.
  2. Overview
     - Every Metric is associated with a Web-based interface that can evaluate compliance with that Metric.
     - New Metrics can be registered simply by pointing to the URL of their evaluation software's interface.
     - Collections of Metrics can be assembled by anyone to represent the aspects of FAIRness they care about (e.g. a journal vs. a funding agency vs. a researcher).
     - You execute an evaluation by providing an IRI to be tested and a collection of Metrics to be applied.
     - Evaluations can be recovered (to see previous input data) and/or re-executed, either through the Web interface or by connecting to the Evaluator directly from software (i.e. the Web page is only for people).
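The last point above, that software can talk to the Evaluator directly, can be sketched as a plain HTTP POST. Everything below (the endpoint URL, the field names `resource`, `title`, and `executor`, and the example DOI) is illustrative only; the real request shape is defined by the Evaluator's own Swagger document.

```python
import json

# Hypothetical endpoint; the real Evaluator publishes its actual URL and
# request schema in its Swagger definition.
EVALUATOR_URL = "https://example.org/FAIR_Evaluator/collections/1/evaluate"

def build_evaluation_request(subject_iri, title, executor):
    """Assemble the JSON body for one evaluation run: the IRI to test,
    a human-readable label, and an identifier for whoever ran it."""
    return json.dumps({
        "resource": subject_iri,  # illustrative field name, not the real schema
        "title": title,
        "executor": executor,
    })

payload = build_evaluation_request(
    "https://doi.org/10.1234/example",          # made-up DOI for illustration
    "Demo evaluation",
    "https://orcid.org/0000-0000-0000-0000",    # placeholder ORCID
)
# To run it, POST `payload` to EVALUATOR_URL with
# Content-Type: application/json (e.g. via urllib.request.Request).
```

Because the request is just JSON over HTTP, any script or pipeline can trigger and re-run evaluations without touching the Web page.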
  3. The Evaluator will soon be peer-reviewed, so we are NOT inviting you to try it yourself... yet! We need it to be stable and consistent :-)
  4. Homepage
  5. Homepage: Browse Metrics
  6. Browse Metrics
  7. Browse Metrics: What principle does this Metric test?
  8. Browse Metrics: What is the address of the testing interface?
  9. Browse Metrics: Browse to one of them…
  10. Interface Definition (Swagger 2.0, a globally popular standard)
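A Metric's testing interface is described by a Swagger 2.0 document like the fragment below. This is a minimal illustrative sketch, not the definition of any published Metric: the path, title, and property names are assumptions.

```json
{
  "swagger": "2.0",
  "info": {
    "title": "FAIR Metric Tester (illustrative)",
    "version": "1.0"
  },
  "basePath": "/tests",
  "paths": {
    "/unique_identifier": {
      "post": {
        "consumes": ["application/json"],
        "produces": ["application/json"],
        "parameters": [
          {
            "name": "content",
            "in": "body",
            "required": true,
            "schema": {
              "properties": {
                "subject": {
                  "type": "string",
                  "description": "the GUID being tested"
                }
              }
            }
          }
        ],
        "responses": {
          "200": { "description": "the result of the Metric test" }
        }
      }
    }
  }
}
```

Because the interface is machine-readable, the Evaluator can discover what input each Metric Tester needs without any Metric-specific code.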
  11. Homepage: Browse Collections of Metrics
  12. Metrics Collections (at the moment, I have only collected the individual FAIR facets together; anyone can create a collection)
  13. Homepage: Browse Evaluations
  14. Browse Evaluations
  15. Explore one evaluation
  16. Explore one evaluation: See the result!
  17. The Result Page
  18. Want to re-execute? See the raw input data.
  19. Raw Input Data (copy, edit, and HTTP POST it back to the Evaluator)
  20. Web-based Execution of an Evaluation
  21. Web-based Execution of an Evaluation: What is being tested?
  22. Web-based Execution of an Evaluation: This questionnaire field is automatically created by examining the Swagger definition for the Metric Tester.
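The auto-generated questionnaire can be imitated in a few lines: walk the Swagger 2.0 document and turn every body-schema property into a form field. This is a sketch of the idea only; the function name and the demo document are assumptions, not the Evaluator's actual code.

```python
def questionnaire_fields(swagger_doc):
    """Collect (name, description) pairs for every body-schema property
    in a Swagger 2.0 document; each pair becomes one questionnaire input."""
    fields = []
    for ops in swagger_doc.get("paths", {}).values():
        for op in ops.values():
            for param in op.get("parameters", []):
                if param.get("in") != "body":
                    continue
                props = param.get("schema", {}).get("properties", {})
                for name, spec in props.items():
                    fields.append((name, spec.get("description", "")))
    return fields

# Tiny demo document (illustrative, not a real Metric definition):
demo = {
    "paths": {
        "/test": {
            "post": {
                "parameters": [{
                    "in": "body",
                    "schema": {"properties": {
                        "subject": {"description": "the GUID being tested"}
                    }},
                }]
            }
        }
    }
}
print(questionnaire_fields(demo))  # [('subject', 'the GUID being tested')]
```

The same walk works for any registered Metric, which is why new Metrics need no changes to the Evaluator's user interface.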
  23. What happens next? (the user enters http://fairsharing.org/standards/identifiers/doi and clicks SUBMIT)
  24. Verification of Answer(s) to Questionnaire: the Evaluator sends the claimed identifier to the Metric Tester ("F1: DOI - OK?")
  25. Verification of Answer(s) to Questionnaire: the Metric Tester asks FAIRSharing.org, "Is this really a DOI?" (i.e. a registered identifier standard)
  26. Verification of Answer(s) to Questionnaire: FAIRSharing.org answers, "Yes, this is a DOI"
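The three slides above show the verification chain: the Evaluator hands the user's answer to the Metric Tester, which checks with FAIRSharing.org that the claimed identifier scheme (here, DOI) is a registered standard. A toy stand-in for the first step, classifying the supplied GUID, might look like this; the regex and function are illustrative assumptions, and the real tester defers to FAIRSharing rather than a hard-coded pattern.

```python
import re

# Illustrative DOI pattern: "10.<registrant>/<suffix>", optionally behind
# a doi.org resolver URL. FAIRSharing.org, not this regex, is the
# authoritative source in the actual system.
DOI_PATTERN = re.compile(r"^(?:https?://(?:dx\.)?doi\.org/)?10\.\d{4,9}/\S+$")

def identifier_scheme(guid):
    """Guess the identifier scheme of a GUID (toy version)."""
    if DOI_PATTERN.match(guid):
        return "doi"
    if guid.startswith(("http://", "https://")):
        return "uri"
    return "unknown"

print(identifier_scheme("https://doi.org/10.1234/example"))  # doi
print(identifier_scheme("not-an-identifier"))                # unknown
```

Splitting the check between the Tester and a registry means the Metric stays objective: the answer comes from a community-maintained standard, not from the person being evaluated.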
  27. Automated, Objective, Aspirational! It's HARD to pass the automated tests (much harder than you would think!). They are unforgiving, because they are automated. They force you to think about your interface from the perspective of a machine visitor.
  28. Automated, Objective, Aspirational! If you ever start to score 100% on the Metrics… ...we will just create harder Metrics!! FAIR is ASPIRATIONAL! You can always do better!
  29. Contact: Mark Wilkinson, mark.wilkinson@upm.es. Supported by Ministerio de Economía y Competitividad grant number TIN2014-55993-RM.
