Evaluate possibility of implementation of an AI-based Automated Report Evaluation and Classification #4443

Open
evilaliv3 opened this issue Mar 24, 2025 · 0 comments

Proposal

This ticket is to evaluate the possibility of implementing an AI-based classification system that analyzes whistleblower reports and assigns each a priority score based on predefined criteria. The system should run entirely on local infrastructure, ensuring compliance with privacy regulations.

It would in fact be interesting to explore implementing the following automated evaluation criteria:

  • Spam Detection: Flagging irrelevant or malicious submissions.
  • Credibility Assessment: Evaluating textual coherence, factual consistency, and linguistic patterns.
  • Relevance Filtering: Identifying whether the report pertains to the organization’s investigative scope.
  • Priority Assessment: Assigning priority based on the severity and urgency of the issue.
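
As a rough illustration of how the four criteria above could feed a single priority score, here is a minimal Python sketch. The criterion names, weights, and score ranges are illustrative assumptions only; the actual per-criterion scores would come from a locally hosted model, which is out of scope for this sketch.

```python
# Hypothetical sketch: combining per-criterion scores (produced by a
# local AI model, not shown here) into one priority score per report.
# Weights and field names are illustrative assumptions, not part of
# any existing codebase.
from dataclasses import dataclass


@dataclass
class Evaluation:
    spam: float         # 0.0 = clean, 1.0 = certain spam
    credibility: float  # 0.0 = incoherent, 1.0 = highly credible
    relevance: float    # 0.0 = out of scope, 1.0 = clearly in scope
    severity: float     # 0.0 = minor, 1.0 = urgent/severe


def priority_score(e: Evaluation) -> float:
    """Weighted combination of the criteria; a report flagged as
    likely spam is proportionally down-ranked."""
    base = 0.4 * e.severity + 0.35 * e.relevance + 0.25 * e.credibility
    return round(base * (1.0 - e.spam), 3)


example = Evaluation(spam=0.1, credibility=0.8, relevance=0.9, severity=0.7)
print(priority_score(example))
```

A design choice worth noting in the sketch: spam likelihood acts as a multiplicative discount rather than a weighted term, so a report judged to be certain spam scores zero regardless of its other criteria.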

This ticket is subject to prior evaluation of the privacy concerns expressed in ticket #4441 and of the feasibility of running a local AI model.
