
LeakPro: Privacy Risk Assessment for AI Systems

An open-source tool from RISE stress-tests AI models with realistic privacy attacks, turning data protection risk into something organizations can measure and report.

January 1, 2025 | State of AI 2025 Report | Page 20–21
Image: Secure server room with blue LED lights (Photograph: GPT-IMAGE-1)

Regulatory compliance under the AI Act and GDPR isn’t about good intentions; organizations must demonstrate that their AI systems do not expose sensitive data. Without a clear way to manage data disclosure risks, many AI projects stall before they ever reach the investment stage.

What is LeakPro?

LeakPro is an open-source tool developed by RISE and collaborators for data protection risk assessment. It subjects models to realistic privacy attacks to measure how easily training data can be traced back to a model and whether source data can be reconstructed from it.
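
One attack family such assessments commonly rely on is membership inference: testing whether a model betrays which examples were in its training set. The sketch below is not LeakPro's actual interface; it is a minimal, self-contained illustration of a loss-threshold membership inference check, with toy data, an illustrative model, and assumed names throughout.

```python
# Illustrative loss-threshold membership inference check.
# NOT LeakPro's API; it only sketches the signal such attacks exploit:
# overfit models assign systematically lower loss to their own training
# examples than to unseen data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy setup: random labels force the model to memorize rather than generalize.
X_members, X_nonmembers = rng.normal(size=(200, 20)), rng.normal(size=(200, 20))
y_members, y_nonmembers = rng.integers(0, 2, 200), rng.integers(0, 2, 200)

model = LogisticRegression(C=1e4, max_iter=5000).fit(X_members, y_members)

def per_example_loss(clf, X, y):
    """Cross-entropy loss of each example under the fitted classifier."""
    p = clf.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# Attack score: lower loss -> more likely to have been a training member.
scores = -np.concatenate([
    per_example_loss(model, X_members, y_members),
    per_example_loss(model, X_nonmembers, y_nonmembers),
])
membership = np.concatenate([np.ones(200), np.zeros(200)])

# AUC of 0.5 means the attacker learns nothing; values clearly above 0.5
# indicate that training-set membership is leaking through the model.
auc = roc_auc_score(membership, scores)
print(f"membership inference AUC: {auc:.3f}")
```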

Capabilities

LeakPro's evaluations extend beyond trained models to synthetic data and federated learning, and the results are converted into standardized risk scores that support both compliance reporting and technical mitigation.
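
The article does not spell out how that conversion works. Purely as an assumption-labeled illustration, attack metrics such as a membership-inference AUC could be mapped onto coarse, reportable risk levels along these lines; the thresholds and labels below are hypothetical, not LeakPro's scoring scheme.

```python
# Hypothetical mapping from attack results to a reportable risk level.
# Metric choice, thresholds, and labels are illustrative assumptions,
# not LeakPro's actual standardized risk scores.
def risk_level(attack_auc: float) -> str:
    """Translate a membership-inference AUC into a coarse risk bucket."""
    advantage = max(0.0, 2 * attack_auc - 1)  # 0 = no leakage, 1 = full leakage
    if advantage < 0.1:
        return "low"
    if advantage < 0.3:
        return "medium"
    return "high"

print(risk_level(0.52))  # low    (advantage = 0.04)
print(risk_level(0.58))  # medium (advantage = 0.16)
print(risk_level(0.68))  # high   (advantage = 0.36)
```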

From Promise to Evidence

By aligning privacy engineering with regulation, LeakPro transforms privacy from a vague promise into actionable evidence. It also fosters transparent collaboration and shared evaluation frameworks, both of which are essential for building trust in AI.

Impact

As privacy-enhancing technologies become legal requirements, LeakPro offers the foundation for responsible, compliant, and investment-ready AI.
