
First set of indicators #8


Open
vuillaut opened this issue Mar 27, 2025 · 6 comments

@vuillaut
Contributor

vuillaut commented Mar 27, 2025

We need to publish a first set of indicators

The tools and indicators that have been identified:

Howfairis

  • Description: Checks compliance against the five recommendations from https://fair-software.eu/
  • Dimension: FAIRness
  • Indicator: hasLicense
  • GitHub action: yes

Gitleaks

  • Description: Detects leaks in repo
  • Dimension: Security
  • Indicator: hasSecurityLeak
  • GitHub action: yes

CFFconvert

  • Description: Converts the CITATION.cff file to different formats and validates it
  • Dimension: FAIRness
  • Indicator: hasCffFile
  • GitHub action: yes

Super-linter

  • Description: Programming language agnostic linter
  • Dimension: Quality
  • Indicator: hasLintingIssues
  • GitHub action: yes
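The set above could be captured in a small machine-readable structure. A sketch only: the field names and the `Indicator` class are assumptions for illustration, not a settled schema.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One quality indicator produced by an external tool (hypothetical schema)."""
    tool: str
    description: str
    dimension: str   # e.g. "FAIRness", "Security", "Quality"
    name: str        # machine-readable indicator id
    github_action: bool

INDICATORS = [
    Indicator("Howfairis", "Checks compliance against the five fair-software.eu recommendations",
              "FAIRness", "hasLicense", True),
    Indicator("Gitleaks", "Detects leaks in repo", "Security", "hasSecurityLeak", True),
    Indicator("CFFconvert", "Converts and validates CITATION.cff files",
              "FAIRness", "hasCffFile", True),
    Indicator("Super-linter", "Programming language agnostic linter",
              "Quality", "hasLintingIssues", True),
]

# Group indicator names by dimension for a quick overview
by_dimension = {}
for ind in INDICATORS:
    by_dimension.setdefault(ind.dimension, []).append(ind.name)
print(by_dimension)
# {'FAIRness': ['hasLicense', 'hasCffFile'], 'Security': ['hasSecurityLeak'], 'Quality': ['hasLintingIssues']}
```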
@dgarijo
Contributor

dgarijo commented Mar 27, 2025

Maybe we can do also https://scorecard.dev/

@vuillaut
Contributor Author

> Maybe we can do also https://scorecard.dev/

It's an interesting one, but let's focus on a few to test schemas and build something tangible first.

@kaygraf

kaygraf commented Apr 8, 2025

While implementing the pipeline, we came across the general issue that these indicators' outcomes need interpretation.
E.g.
hasLicense = True is OK
hasLintingIssues = True is not OK
Maybe we should define that a binary outcome of True is OK, and that for scoring outcomes 100 is good and 0 is bad (or similar)?
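One way to make such a convention explicit is a per-indicator polarity flag. A minimal sketch, assuming the True-is-OK / 100-is-good convention proposed above; the function name and the 50-point threshold are made-up illustrations, not an agreed design:

```python
def normalize(value, higher_is_better=True):
    """Map a raw indicator outcome to a pass/fail verdict.

    value: a bool, or a 0-100 score (assumed convention: 100 good, 0 bad).
    higher_is_better: set False for indicators like hasLintingIssues,
    where True (or a high value) signals a problem.
    """
    if isinstance(value, bool):
        ok = value
    else:
        ok = value >= 50  # assumed threshold; communities may choose their own
    return ok if higher_is_better else not ok

print(normalize(True))                          # hasLicense = True -> True (OK)
print(normalize(True, higher_is_better=False))  # hasLintingIssues = True -> False (not OK)
print(normalize(80))                            # score 80 -> True
```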

@dgarijo
Contributor

dgarijo commented Apr 8, 2025

@kaygraf good point, but I think that is not interpretation, that is an explanation of the indicator. In the description instructions we set for indicators we said the description should contain:

### What is being measured?
    Explain what you are measuring
### Why should we measure it?
    Explain why
### What must be provided for the measurement? 
    For example, a zenodo record, or a GitHub id.
### How is the measurement executed?
    Explain the exact process for assessing the indicator
### What is/are considered valid result(s)?
    What outcomes are pass/fail

I believe the "what is/are considered valid results?" goes in the direction of your observation. I would leave it as part of the description, but we may attempt some representation.

IMO, the interpretation of the indicator depends on the community, or the use case. For example, some may consider that failing to have an open license is bad, while other communities may not be bothered.
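That community dependence could be expressed as per-community profiles layered over a shared default. Purely illustrative: the profile names, the expected values, and the `None`-means-ignore convention are all assumptions, not part of the indicator descriptions.

```python
# Expected value for a "pass", per indicator (assumed defaults).
DEFAULT_EXPECTATIONS = {
    "hasLicense": True,
    "hasSecurityLeak": False,
    "hasCffFile": True,
    "hasLintingIssues": False,
}

# A hypothetical community that is not bothered about licenses
# overrides a single entry; None means "don't assess".
RELAXED_PROFILE = {**DEFAULT_EXPECTATIONS, "hasLicense": None}

def assess(results, profile=DEFAULT_EXPECTATIONS):
    """Return pass/fail per indicator; None where the community ignores it."""
    return {name: (None if expected is None else results.get(name) == expected)
            for name, expected in profile.items()}

results = {"hasLicense": False, "hasSecurityLeak": False,
           "hasCffFile": True, "hasLintingIssues": True}
print(assess(results))                   # hasLicense fails under the default profile
print(assess(results, RELAXED_PROFILE))  # hasLicense is ignored under the relaxed one
```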

@kaygraf

kaygraf commented Apr 8, 2025

Agreed that this is necessarily interpretation based on community standards and use cases - but of course for the implementation, the outcome somehow has to be machine-actionable in the end, so we can provide a "pass/fail" at least for a standard set of indicators and tools.

@fdiblen
Contributor

fdiblen commented Apr 30, 2025

We could also consider just renaming the indicator. For example, instead of hasLintingIssues we could call it:

  • hasNoLintingIssues
  • noLintingIssues
  • isLintingValid
  • lintingPasses

I also agree with @kaygraf: hasLintingIssues = True is not ok can be confusing.
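Whichever name wins, a small rename table could keep tool-native output compatible while presenting the positively phrased indicator. A sketch only, using `lintingPasses` (one of the candidates above) and hypothetical helper names:

```python
# Map a current, negatively phrased name to a positively phrased one;
# the flag says whether the boolean value flips polarity with the rename.
RENAMES = {
    "hasLintingIssues": ("lintingPasses", True),  # issues present -> linting does NOT pass
}

def rename(results):
    """Rewrite indicator names (and flip values where needed) per RENAMES."""
    out = {}
    for old, value in results.items():
        new, invert = RENAMES.get(old, (old, False))
        out[new] = (not value) if invert else value
    return out

print(rename({"hasLintingIssues": True, "hasLicense": True}))
# {'lintingPasses': False, 'hasLicense': True}
```

With this shape, True consistently means "pass" after renaming, which matches the convention discussed above.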
