
feat(RHOAIENG-21045): Add fairness metrics and tests #5


Open · christinaexyou wants to merge 2 commits into main from add-fairness-metrics

Conversation

christinaexyou

No description provided.

import numpy as np

def filter_rows_by_inputs(data, filter_func):
    # Keep only the rows of `data` for which `filter_func(row)` is truthy.
    return data[np.apply_along_axis(filter_func, 1, data)]

def calculate_confusion_matrix(test: np.ndarray, truth: np.ndarray, positive_class: int) -> dict:
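
The body of calculate_confusion_matrix is truncated in this view; below is a minimal sketch of what a helper with this signature might return, assuming binary classification against positive_class (the key names tp/fp/fn/tn are illustrative, not taken from the PR):

import numpy as np

def calculate_confusion_matrix(test: np.ndarray, truth: np.ndarray, positive_class: int) -> dict:
    # Sketch only: tally predictions (`test`) against labels (`truth`)
    # relative to `positive_class`; the key names are assumptions.
    pred_pos = test == positive_class
    true_pos = truth == positive_class
    return {
        "tp": int(np.sum(pred_pos & true_pos)),
        "fp": int(np.sum(pred_pos & ~true_pos)),
        "fn": int(np.sum(~pred_pos & true_pos)),
        "tn": int(np.sum(~pred_pos & ~true_pos)),
    }

For example, filter_rows_by_inputs(data, lambda row: row[0] == 1) would select the rows whose first column equals 1 before computing such counts for a single group.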

@staticmethod
def calculate_model(
    samples: np.ndarray,
    model: Any,
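
The rest of calculate_model is truncated in this view; a sketch of what a helper of this shape might do, assuming model exposes a predict method and that the enclosing class name is hypothetical (neither is confirmed by the PR):

from typing import Any
import numpy as np

class FairnessMetrics:  # hypothetical enclosing class, name assumed
    @staticmethod
    def calculate_model(samples: np.ndarray, model: Any) -> np.ndarray:
        # Sketch: delegate to the supplied model to get an output per sample.
        return np.asarray(model.predict(samples))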
Contributor:

Do we have a structure/planned datatype for this yet? While I like this idea, I fear it could run into the same security issues we've been seeing with the explainers in the Java service.
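
One way the planned datatype could be tightened, as a sketch under the assumption that model only needs a predict method (the PredictionModel name is hypothetical, not from the PR):

from typing import Protocol
import numpy as np

class PredictionModel(Protocol):
    # Hypothetical structural type: anything exposing `predict` over a
    # 2-D sample array satisfies it, narrowing the fully open `Any`.
    def predict(self, samples: np.ndarray) -> np.ndarray: ...

This keeps the call surface to a single known method rather than accepting arbitrary deserialized objects.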

Contributor:

Are these to test parity against the existing Java service? I wonder if we might want to use synthetically generated data to avoid storing a lot of raw datasets in the repo.
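
A sketch of the synthetic-fixture idea, assuming the tests only need a protected attribute, predictions, and ground-truth labels (the names and generating process here are illustrative, not from the PR):

import numpy as np

rng = np.random.default_rng(42)
n = 1_000
# Hypothetical fixture: one binary protected attribute and a latent score
# that drives correlated predictions and labels, generated on the fly.
protected = rng.integers(0, 2, size=n)
score = rng.normal(size=n) + 0.3 * protected
truth = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)
test = (score > 0).astype(int)
data = np.column_stack([protected, test, truth])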

christinaexyou force-pushed the add-fairness-metrics branch from b83a31e to 1e28682 on May 8, 2025 at 15:03.
christinaexyou requested a review from RobGeada on May 8, 2025 at 15:03.
3 participants