Sorena AI says proof may matter more than speed as AI enters compliance work
STOCKHOLM, Sweden, March 15, 2026 – As artificial intelligence moves deeper into governance and compliance work, Sorena AI says the central policy question is shifting. The company argues that organizations should be less focused on how fast AI can respond and more focused on whether those responses can be traced, verified, and safely governed.
That argument sits at the center of two concerns Sorena has been raising publicly. One is the risk of false confidence in compliance work, where fluent AI systems produce answers that appear complete while failing to cover all relevant obligations. The other is the risk of agentic systems being influenced by untrusted inputs, especially when AI tools are allowed to read external websites, documents, messages, and uploaded files before taking action.
Why is proof becoming the central issue?
Sorena says both issues matter in domains where institutions are expected to demonstrate accountability across regulations and standards such as the EU AI Act, GDPR, NIS2, DORA, CSRD, ISO/IEC 42001, and ISO 27001. In those settings, the company says, AI is not just supporting writing. It is increasingly being asked to support decisions, evidence gathering, risk analysis, and workflow execution.
What the benchmark indicates
The company says its January 2026 benchmark illustrates the danger of relying on fluency alone. In an internal two-auditor evaluation covering 43 compliance and regulatory research sessions and 4,332 requirements, Sorena says its Research Copilot reached 100% requirement coverage with 0 factual errors. A baseline general-purpose AI assistant averaged 25% coverage and 183 factual errors, according to the company. Sorena says the sessions included privacy, AI governance, sustainability, and technical review work, while also noting that the benchmark was internal and that results may vary by use case.
For Sorena, the larger implication is that AI in compliance should be judged less like a chatbot and more like an operational system. The company says teams need source-linked answers, controlled trust boundaries, permissioned access, and outputs that can survive external scrutiny. In other words, it is not enough for AI to sound reasonable. It must be able to show its work.
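As an illustration only, and not a description of Sorena's actual product, the idea of an answer that "shows its work" can be sketched as a data structure in which every claim carries pointers back to its sources. The names SourcedAnswer and Citation below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """A pointer back to the authoritative text an answer relies on."""
    source_id: str      # e.g. "GDPR Art. 30" or an internal policy document ID
    excerpt: str        # the passage the claim is grounded in
    retrieved_at: str   # when the source was captured, for audit purposes

@dataclass
class SourcedAnswer:
    """An answer that can show its work: every claim carries its sources."""
    question: str
    answer: str
    citations: list[Citation] = field(default_factory=list)

    def is_reviewable(self) -> bool:
        # A reviewer can only verify an answer that points at real sources.
        return len(self.citations) > 0
```

In this sketch, an answer with no citations is simply not reviewable, which mirrors the company's argument that fluency alone cannot survive external scrutiny.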
Why prompt injection changes the design problem
Trust starts with what the system can read and use
Sorena applies the same logic to prompt injection and other agentic AI risks. The company says the more freedom an agent has to read mixed-trust content and take meaningful action, the more expensive a bad assumption becomes. That is why Sorena says the design problem is not merely safety tuning at the output layer. It is deciding what the system is allowed to read, trust, and use in the first place.
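One way to picture that design decision, again as a rough sketch rather than Sorena's implementation, is to label every input with a trust level and gate high-impact agent actions on that label. The trust levels, action names, and the allow_action function below are all hypothetical.

```python
from enum import Enum

class TrustLevel(Enum):
    VERIFIED = "verified"      # curated regulatory texts, approved internal policies
    INTERNAL = "internal"      # company documents with known owners
    UNTRUSTED = "untrusted"    # external websites, inbound messages, uploaded files

# Actions an agent might take, ranked by how costly a bad assumption would be.
HIGH_IMPACT_ACTIONS = {"update_register", "send_report", "close_finding"}

def allow_action(action: str, input_trust: TrustLevel) -> bool:
    """Gate agent actions on the trust level of the content that triggered them.

    Untrusted content may inform a draft, but it should not be able to
    trigger high-impact actions without human review.
    """
    if action in HIGH_IMPACT_ACTIONS and input_trust is TrustLevel.UNTRUSTED:
        return False  # escalate to a human reviewer instead
    return True
```

The point of the sketch is that the gate sits at the input boundary, before the agent acts, rather than relying only on filtering at the output layer.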
How Sorena says it operationalizes that model
A governed content layer for compliance work
Sorena describes its AI-powered compliance platform as an effort to operationalize that principle through source-linked research, structured assessments, and a governed content layer that acts as a single source of truth. The company says the goal is to make provenance, reviewability, and evidence reuse part of the workflow rather than an afterthought.
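What a "governed content layer" could look like in practice is, at its simplest, a record type that keeps provenance and review status attached to every piece of evidence. The following is a minimal, hypothetical sketch and does not reflect Sorena's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class GovernedRecord:
    """One entry in a governed content layer: evidence with provenance attached."""
    record_id: str
    content: str
    source: str                  # where the evidence came from
    added_by: str                # who introduced it into the single source of truth
    reviewed_by: Optional[str]   # reviewer sign-off, if any
    added_at: datetime

    @property
    def review_complete(self) -> bool:
        # Evidence is only reusable downstream once a named reviewer has signed off.
        return self.reviewed_by is not None
```

Keeping those fields on the record itself, rather than in a separate log, is one way to make provenance and reviewability part of the workflow rather than an afterthought.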
The broader policy question
Whether that model becomes standard practice remains to be seen. But Sorena’s position reflects a broader question that is likely to shape how AI is governed in regulated environments: when AI enters compliance work, what matters more – speed of response or proof of reliability?
