The integration of Artificial Intelligence (AI) into GlobaLeaks presents an opportunity to enhance both the usability for whistleblowers and the efficiency of the evaluation process for reviewers. AI models can assist users in submitting well-structured and complete reports while also helping administrators prioritize and filter submissions more effectively. However, any implementation must ensure strict adherence to privacy, security, and open-source principles, avoiding reliance on external proprietary AI services.
While AI models, such as Large Language Models (LLMs) for text interaction and classification models for automated analysis, offer significant advantages, they also introduce well-recognized privacy risks that cannot be overlooked. Before any adoption, we must rigorously assess whether these risks can be sufficiently mitigated through local AI deployments and clearly define the constraints of their application. Some privacy concerns may not have a viable solution, and in such cases, AI integration should be reconsidered. Ensuring that all computations occur entirely on-premises and using only open-source AI frameworks is critical to maintaining whistleblower confidentiality and compliance with data protection regulations.
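As a hedged illustration of the fully on-premises processing constraint described above, the sketch below implements a trivial keyword-based priority scorer for incoming submissions. It is a deliberately simple stand-in for a real locally hosted open-source classification model; the keyword set, the `priority_score` and `prioritize` helpers, and the scoring scheme are illustrative assumptions, not part of GlobaLeaks. The point is that everything runs in-process with no network calls, so submission content never leaves the server.

```python
# Illustrative sketch only: a keyword heuristic standing in for a locally
# hosted open-source classification model. All processing happens
# in-process, with no network access, mirroring the on-premises constraint.

URGENT_KEYWORDS = {"fraud", "safety", "retaliation", "bribery"}  # assumed labels


def priority_score(report_text: str) -> int:
    """Score a submission by counting distinct urgent keywords (0 = lowest)."""
    words = {w.strip(".,;:").lower() for w in report_text.split()}
    return len(words & URGENT_KEYWORDS)


def prioritize(reports: list[str]) -> list[str]:
    """Return reports ordered from highest to lowest priority score."""
    return sorted(reports, key=priority_score, reverse=True)
```

A real deployment would replace the keyword heuristic with a local inference engine loaded from disk, but the interface (text in, score out, no external calls) would stay the same.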
This ticket serves to document the analysis of privacy concerns and to assess the feasibility of implementing a local AI model within whistleblowing software in general, while identifying viable solutions and limitations.
evilaliv3 changed the title to "Evaluate possibilities of integration of a local AI engine within a whistleblowing software" on Mar 24, 2025.