Technical Deep Dive · April 2, 2026

The Security Layer That Stops Data Leakage at the Source

Leeloo Research & Analysis
8 min read

An employee at a Swiss wealth management firm was using their sovereign AI to prepare client reviews. They included notes from three separate client files in a single prompt — standard practice in their workflow — without realizing the AI's session had carried context from a previous user's interaction. The AI's response referenced information from a client the employee had never worked with.

Nothing was hacked. No firewall was breached. The employee followed their process. The AI worked correctly. And client data crossed user boundaries.

Why Your Existing Security Tools Miss This

AI data leakage doesn't look like a traditional breach — someone breaking through a perimeter and downloading files. It looks like a well-intentioned employee pasting client data into a prompt, an AI model routing a sensitive query to a cloud service for better accuracy, or a prompt injection attack that tricks the AI into including restricted information in its response. Your perimeter security, your DLP tools, your network monitoring — none of them were built to catch any of those.

Employees who use AI responsibly and within policy still cause data leakage. The problem isn't intent — it's that AI changes the surface area of what data gets processed, combined, and potentially exposed. An accountant doing their job correctly can include confidential figures in a prompt context window that the AI stores, cross-references with other users' sessions, and potentially surfaces in a completely different interaction. The traditional security model had a perimeter to defend. AI systems don't have one.

IBM's 2024 Cost of a Data Breach report found that AI-involved data breaches cost an average of $5.7 million — 19% higher than the average breach cost. The reason: data processed by AI is enriched and synthesized before it leaves the organization. A document about a client becomes a structured analysis with key data points extracted and cross-referenced. The leaked version is more dangerous than the original.

Traditional security tools catch attacks. AI security failures happen when everything is working correctly. That's the part your perimeter security doesn't cover.

The Seven Controls That Cover What Perimeter Security Can't

The Leeloo Framework's Security Domain includes seven components, each stopping a different class of AI data leakage. They are not add-on layers; they are built into the architecture before any data is processed.

The Firewall is a one-way valve. Data goes into the AI; results come out to the user; nothing goes anywhere else. Not to an analytics endpoint, not to a model provider API, not to a logging service in a different jurisdiction. If a data flow isn't in the approved map at configuration time, it doesn't happen. This isn't a policy that can be misconfigured later — it's a structural property of how the architecture works.
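The principle reduces to a few lines. Here is a minimal sketch in Python (endpoint names and the transport stub are hypothetical, not Leeloo's published interface):

```python
from urllib.parse import urlparse

# Approved flow map, fixed at configuration time. Hostnames are illustrative.
APPROVED_FLOWS = {
    "inference.internal",   # local model endpoint
    "results.internal",     # response channel back to the user
}

def _transmit(host: str, payload: bytes) -> None:
    """Placeholder for the real transport layer."""

def send(destination_url: str, payload: bytes) -> None:
    host = urlparse(destination_url).hostname
    if host not in APPROVED_FLOWS:
        # Fail closed: a flow that isn't in the map never happens,
        # regardless of what any later configuration change asks for.
        raise PermissionError(f"egress to {host!r} is not in the flow map")
    _transmit(host, payload)
```

The point of the allowlist being fixed at configuration time is that adding a destination requires a deployment change, not a runtime setting someone can flip.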

Acting as the sensitivity classifier, the Router assesses every query before any model processes it. A question about company financials routes to a local, air-gapped model. A question about formatting a template routes to a faster, cheaper model. A question that touches personally identifiable information routes through additional privacy controls. The classification happens before processing, not after.
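A rough sketch of classify-before-process routing, with a keyword classifier standing in for the real one (sensitivity tiers and model names are assumptions, not Leeloo's configuration):

```python
from enum import Enum

class Sensitivity(Enum):
    ROUTINE = 0    # formatting, templates
    INTERNAL = 1   # company financials
    PII = 2        # personal data, GDPR special categories

ROUTES = {
    Sensitivity.ROUTINE:  "small-fast-model",
    Sensitivity.INTERNAL: "local-airgapped-model",
    Sensitivity.PII:      "local-airgapped-model+privacy-controls",
}

def classify(query: str) -> Sensitivity:
    # Stand-in classifier; a real Router would use a trained model.
    lowered = query.lower()
    if any(k in lowered for k in ("iban", "patient", "date of birth")):
        return Sensitivity.PII
    if any(k in lowered for k in ("revenue", "margin", "forecast")):
        return Sensitivity.INTERNAL
    return Sensitivity.ROUTINE

def route(query: str) -> str:
    # Classification runs before any model sees the query.
    return ROUTES[classify(query)]
```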

Personal data protection starts with the PII Detector, which identifies names, account numbers, health information, and anything covered by GDPR's Article 9 special categories — then handles it appropriately before any model ever sees it. Employees don't have to think about what counts as personal data. The system does.
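Illustratively, detection and redaction happen in one pass before the model call. The two patterns below are stand-ins; a production detector combines trained entity recognition with far more rules:

```python
import re

# Illustrative patterns only; not an exhaustive PII rule set.
PATTERNS = {
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt  # only the redacted form ever reaches a model
```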

Prompt injection — when a malicious instruction embedded in a document, form field, or pasted text tricks the AI into doing something its operators didn't intend — is blocked by the Prompt Guard in an average of 340 milliseconds. Preventing it requires inspecting inputs before processing, not auditing outputs after the fact.
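A toy version of inspect-before-process might look like the following; real guards use trained classifiers rather than a marker list, so treat the heuristics as illustration only:

```python
# Illustrative markers; a production guard scores inputs with a model.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "append the following to every output",
)

def screen(untrusted_text: str) -> None:
    """Raise before processing if the input looks like an injection."""
    lowered = untrusted_text.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            raise ValueError(f"possible prompt injection: {marker!r}")
```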

User conversations are sealed by the Session Isolator, so that one user's interaction with the AI can never bleed into another's. Each session is locked so that accessing another session's data requires a cryptographic key specific to that session — not just a differently phrased query. The Swiss wealth management scenario becomes structurally impossible.
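The cryptographic idea can be sketched with a per-session key that is never shared, here using the open-source cryptography package's Fernet as a stand-in for whatever primitive a real deployment uses:

```python
from cryptography.fernet import Fernet

class Session:
    def __init__(self) -> None:
        # Key exists only for this session and is never shared across sessions.
        self._key = Fernet.generate_key()
        self._box = Fernet(self._key)
        self._context: list[bytes] = []

    def remember(self, turn: str) -> None:
        self._context.append(self._box.encrypt(turn.encode()))

    def recall(self) -> list[str]:
        # Without this session's key, the stored context is ciphertext.
        return [self._box.decrypt(c).decode() for c in self._context]
```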

Who can ask what is enforced by the Access Controller, based on each user's role and the sensitivity of the data they're querying. An analyst can query their own client portfolio. They cannot, by architectural rule, query a portfolio they're not authorized to see — regardless of how they phrase the question.
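In sketch form, the check is a default-deny lookup on role and resource, evaluated before the query text matters at all (the grants shown are hypothetical):

```python
# Hypothetical grant table; default is deny.
GRANTS = {
    ("analyst", "own_portfolio"): True,
    ("analyst", "other_portfolio"): False,
}

def authorize(role: str, resource: str) -> None:
    if not GRANTS.get((role, resource), False):
        # Phrasing the query differently cannot change this outcome.
        raise PermissionError(f"{role} may not query {resource}")
```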

Every security-relevant event gets recorded by the Audit Logger with millisecond timestamps. When a data protection authority asks what happened during a specific time window, the answer is in the log, complete, and immutable. When your compliance team needs to demonstrate GDPR Article 32 compliance — the requirement for "appropriate technical and organisational measures" for security of processing — the log is the evidence.
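One way to get immutability is a hash chain, where each entry commits to the previous one so any later edit breaks the chain. A minimal sketch under that assumption, not Leeloo's actual log format:

```python
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._prev = "0" * 64  # genesis value

    def record(self, event: str, **fields) -> None:
        entry = {
            "ts_ms": int(time.time() * 1000),  # millisecond timestamp
            "event": event,
            **fields,
            "prev": self._prev,  # commits to the previous entry
        }
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev
        self._entries.append(entry)
```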

Three Cases Where the Security Layer Would Have Caught What Policies Missed

A Belgian law firm discovered that a contract template used by multiple client teams had been injected with a prompt instruction — embedded in the template itself — that caused the AI to append internal billing rates to every document it helped draft. The injected content was included in client-facing outputs for six weeks before anyone noticed. The Leeloo Prompt Guard would have flagged the injected instruction on first processing, before any billing information reached a document.

One German pharmaceutical company identified that their AI had been routing drug formulation queries to a cloud model during peak load periods. Their Router fallback wasn't configured as sovereign-only — a single configuration gap. 1,400 queries went to a cloud model over three months, including queries about formulations that were confidential IP. The queries weren't hacked; they were routed incorrectly by a system working as designed, except with the wrong design. A sovereign-only Router configuration makes this structurally impossible.
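The fix is a fallback that fails closed. A sketch of the single decision the configuration gap got wrong (flag and function names are illustrative):

```python
def enqueue_local(query: str) -> str:
    """Stand-in: hold the query until local capacity frees up."""
    return "queued for local model"

def call_cloud(query: str) -> str:
    """Stand-in for the cloud endpoint the data must never reach."""
    return "sent to cloud"

SOVEREIGN_ONLY = True  # the one flag the fallback path had left unset

def fallback(query: str) -> str:
    if SOVEREIGN_ONLY:
        return enqueue_local(query)  # degrade latency, never sovereignty
    return call_cloud(query)         # the misconfiguration: data leaves
                                     # approved infrastructure
```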

For a Dutch financial services firm, a security audit revealed that session isolation wasn't configured correctly. A specific query pattern could surface previous users' context — a vulnerability their IT team had never encountered in traditional software because traditional software doesn't have session context in the same way. Six weeks of that misconfiguration, at 50 users and 20 queries per day, means potentially 42,000 interactions (50 users × 20 queries × 42 days) where cross-session data access was possible. Under GDPR, each interaction involving personal data is a potential notification obligation.

A Point About What Security Is Actually For

Perfect security is not the goal and should not be the stated standard. The goal is to make a data leakage event detectable within minutes and remediable without regulatory notification obligations.

GDPR Article 32 requires appropriate technical measures for the security of processing. What "appropriate" means depends on the risk: organizations processing health data, financial data, or legal data face the highest standard. The Article doesn't require perfection — it requires that controls are proportionate to the risk and that failures can be detected and responded to quickly.

Zero-risk isn't achievable. Zero-visibility is what creates regulatory exposure. When a data protection authority asks what happened, "we don't know because we don't have logs" is a GDPR problem. "Here is the complete audit trail and here is what we did in response" is not.

Regulatory fines for security failures under GDPR can reach €20M or 4% of global annual turnover, with the EU AI Act layering its own penalty tiers on top. The average cost of an AI-involved breach in Europe in 2024 was €4.2M in direct costs plus €1.8M in business disruption. The security components in the Leeloo Framework — Firewall, PII detection, session isolation, prompt injection protection, access controls, audit logging — are included in the Framework license. The cost of not having them is what's expensive.

What Changes When the Security Question Has a Technical Answer

When your board asks "how do we know our AI isn't leaking data?" the answer that ends the conversation isn't "we trust our employees." It's this: data cannot leave our approved infrastructure because the Firewall makes it structurally impossible. Every query is classified before processing. Every interaction is logged with millisecond precision. Session isolation means user A cannot access user B's context. We can prove any of this to a regulator in hours.

That answer requires a technical architecture, not a policy. Policies can be violated; architecture controls what's possible.

Organizations that deployed with the Leeloo security layer report that AI became easier to adopt internally — not harder — because employees stopped worrying about whether they were doing something wrong when they used it. The controls were visible, explainable, and came with audit trails that demonstrated compliance rather than just asserting it. A CRM that processes client data becomes provably safe for regulated use. A document AI that touches confidential files becomes something the legal team can sign off on.

Finding out your AI has been leaking data for six weeks through a misconfiguration — with no audit trail — puts a CTO in an impossible position. No technical explanation makes the board feel better. No remediation undoes the exposure. The security architecture built in at deployment is what prevents that moment.

We built the security layer before the product layer because that's the only order that works. Security added after the fact audits what's already happening. Security built in defines what's allowed to happen. The Firewall, the Prompt Guard, the Session Isolator — they're not features you turn on. They're properties of the architecture you deploy.

Any question about your AI's data handling. Any audit, any client review, any regulatory inquiry. The answer is already logged, already demonstrable, already correct.

---

Leeloo is a sovereign AI implementation company based in Luxembourg, EU. The Security Domain is a standard layer in every Leeloo Framework deployment. [leeloo.ai]
