
Running AI Locally: Privacy, Control, and Flexibility

RISE researchers are exploring what it means to run advanced AI models securely on-premise, offering advantages for privacy, regulatory compliance, and experimentation while maintaining data sovereignty.

January 1, 2025 | State of AI 2025 Report | Page 22

As organizations weigh the trade-offs between cloud and local AI deployment, RISE researchers are investigating what it takes to run advanced models securely on-premise, where local AI offers clear advantages for privacy, regulatory compliance, and experimentation.

Capable Local Models

With capable local models such as GPT-oss, Qwen2.5, and Gemma 3, teams can deploy AI on their own servers or even on laptops. This approach ensures:

  • Full control over data
  • Predictable costs and energy footprint
  • Resilience: models keep running even during connectivity failures
  • Ability to be fine-tuned and evaluated for specific purposes and use cases
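As an illustration of what such a deployment can look like in practice, here is a minimal sketch that queries a model served on the same machine. It assumes an Ollama-style local HTTP endpoint at `localhost:11434` serving a Qwen2.5 model; the URL, model name, and prompt are placeholders, not details from the story:

```python
import json
import urllib.request

# Assumed local endpoint (Ollama's default); adjust for your own server.
LOCAL_URL = "http://localhost:11434/api/chat"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat request payload for a locally hosted model.

    Because the model runs on-premise, this payload never leaves
    the machine -- the data-control property described above.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask_local_model(model: str, prompt: str, url: str = LOCAL_URL) -> str:
    """POST the request to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


if __name__ == "__main__":
    # Requires a running local server with the model pulled,
    # e.g. `ollama pull qwen2.5` beforehand.
    print(ask_local_model("qwen2.5", "Summarise the benefits of local AI."))
```

Since no external API key or cloud account is involved, the same script keeps working during connectivity outages, which is the resilience point in the list above.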

Challenges

However, local AI also introduces challenges in capability, hardware capacity, updates, and multimodal integration. RISE's understanding of promising use cases, together with its testing and evaluation experience, helps organizations navigate these trade-offs while ensuring data sovereignty and information security.

Impact

In a world increasingly dependent on AI, running locally makes sense for many organizations seeking to maintain control over their data and operations.
