About

FAIR Enough is a web service that evaluates how well online resources follow the FAIR principles ♻️ (Findable, Accessible, Interoperable, Reusable).

This FAIR evaluation service complies with the specifications defined by the FAIRMetrics working group, so it can register and run the same FAIR metrics test APIs as the FAIR Evaluator.

You can easily define and publish new FAIR metrics tests for your community with the fair-test Python library. Published metric test APIs can then be registered in FAIR Enough using their public URLs.

Credits

FAIR Enough is developed and hosted by the Institute of Data Science at Maastricht University.

This platform takes inspiration from existing FAIR evaluation service implementations: the FAIR Evaluator, FAIRsFAIR's F-UJI, and the FOOPS! ontology validator.

How it works

An evaluation runs a collection of metrics tests against the resource to evaluate.

  • Evaluations can be created by anyone without authentication. An evaluation takes the URI of the resource to evaluate, and a collection of Metrics Tests to run against this resource.
  • Collections can be created through the API after authenticating with ORCID. A collection is an unordered list of Metrics Tests.
  • Metrics Tests are tests, exposed as APIs, that take a subject URL and return a score between 0 and 1.
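The flow above can be sketched in Python. This is purely illustrative and not FAIR Enough's actual implementation or API: it models a metric test as a local callable that takes a subject URI and returns a score between 0 and 1, where in reality each test is a separate HTTP API; the two toy tests and all names here are assumptions for the sake of the example.

```python
from typing import Callable, Dict

# Hypothetical local model of a metric test: in FAIR Enough these are
# HTTP APIs, but the contract is the same — subject URI in, score 0..1 out.
MetricTest = Callable[[str], float]

def run_evaluation(subject_uri: str, collection: Dict[str, MetricTest]) -> dict:
    """Run every metric test in a collection against the subject
    and aggregate the scores, mimicking how an evaluation works."""
    results = {}
    for test_id, test in collection.items():
        score = test(subject_uri)
        # Enforce the documented contract: each test scores between 0 and 1.
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{test_id} returned an out-of-range score: {score}")
        results[test_id] = score
    total = sum(results.values())
    return {
        "subject": subject_uri,
        "results": results,
        "score": total,
        "score_percent": round(100 * total / len(results)) if results else 0,
    }

# Two toy metric tests (assumptions, not real FAIRMetrics tests):
collection = {
    "uses-https": lambda uri: 1.0 if uri.startswith("https://") else 0.0,
    "has-path": lambda uri: 1.0 if "/" in uri.split("://", 1)[-1] else 0.0,
}

report = run_evaluation("https://doi.org/10.1000/example", collection)
```

The unordered-collection property shows up naturally here: each test is scored independently, so the aggregate does not depend on the order in which tests run.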
Test results