
From Regulation to Reality: The EU AI Act in Practice at RISE

Although mostly not in force yet, the EU AI Act is already shaping how AI projects are designed and deployed at RISE, translating into practical decisions around risk, responsibility, and compliance.

January 1, 2025 | State of AI 2025 Report | Page 27
EU certification documents with CE marking
Photograph: GPT-IMAGE-1

The Act is not merely a policy framework: at RISE, it already translates into practical decisions around risk, responsibility, and compliance in the design and deployment of AI projects.

What the Act Means in Practice

Earlier this year, a legal counsel at RISE outlined what the Act means in practice: a risk-based approach that requires organisations to:

  • Build AI literacy
  • Map and classify AI systems
  • Determine their role as provider or deployer
  • Update internal policies and documentation accordingly

Research Exemptions

While research and testing activities benefit from certain exemptions, these no longer apply once AI systems are tested in real-world conditions or prepared for market use.

Product Safety Legislation

The AI Act is first and foremost product safety legislation: it aims to protect EU citizens from unsafe AI systems and to ensure that providers can be held accountable for the AI they place on the market.

RISE’s Role

As a leading developer of AI, RISE must not only remain at the forefront of technological innovation but also take a leading role in advancing safe, trustworthy, and responsible AI.
