Complira gives regulated teams a tamper-evident audit trail of every AI interaction — built for the EU AI Act's evidence obligations. Five-line SDK integration. Frankfurt-hosted. Ready when the regulator asks.
Compliance isn't a feature. It's evidence — produced at runtime, signed at the source, ready when the regulator asks.
Complira sits between your AI client and the model, capturing every prompt, every response, every reviewer decision — into an append-only ledger that even your team cannot edit after the fact.
Plenty of tools will help you write an AI policy. Complira is for what happens after — when those policies are in production and someone has to prove they were followed.
The EU AI Act's record-keeping and oversight obligations, mapped to what Complira ships. Honest about what's live, honest about what's roadmap.
No new infrastructure. No model retraining. No rebuild of your AI workflow. Drop the SDK in, and evidence starts flowing.
One npm install. The SDK is lightweight — no dependencies on heavy AI libraries, no telemetry beyond what you log.
One function call wraps your existing OpenAI, Anthropic, or Azure client. Every prompt, every response, every reviewer event is captured.
From the next AI call onward, every interaction is captured into the audit trail with timestamp, hash, model, deployer, and Annex III classification.
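Complira's actual ledger format isn't public, so as an illustration only, here is the standard hash-chain technique behind "append-only and tamper-evident": each entry commits to the hash of the entry before it, so editing any past record breaks every hash after it. All field names and the entry shape below are assumptions for the sketch, not the real schema.

```typescript
import { createHash } from "node:crypto";

// Illustrative entry shape — mirrors the fields named above
// (timestamp, hash, model, deployer, Annex III classification).
interface LedgerEntry {
  timestamp: string;   // when the AI call happened
  model: string;       // e.g. 'gpt-4o'
  deployer: string;    // the regulated entity making the call
  annexClass: string;  // e.g. 'annex-iii-5b' (creditworthiness)
  promptHash: string;  // sha256 of the prompt, so the ledger carries no raw text
  prevHash: string;    // hash of the previous entry — this is the chain
  hash: string;        // sha256 over prevHash + this entry's contents
}

const GENESIS = "0".repeat(64);

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Append-only: a new entry can only be added on top, never inserted or edited.
function appendEntry(
  chain: LedgerEntry[],
  contents: Omit<LedgerEntry, "prevHash" | "hash">,
): LedgerEntry {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  const entry = {
    ...contents,
    prevHash,
    hash: sha256(prevHash + JSON.stringify(contents)),
  };
  chain.push(entry);
  return entry;
}

// Recompute every link; any after-the-fact edit fails verification.
function verifyChain(chain: LedgerEntry[]): boolean {
  return chain.every((entry, i) => {
    const { hash, prevHash, ...contents } = entry;
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    return (
      prevHash === expectedPrev &&
      hash === sha256(prevHash + JSON.stringify(contents))
    );
  });
}
```

This is why "even your team cannot edit after the fact" holds: changing one stored field changes that entry's hash, which no longer matches the `prevHash` recorded by its successor.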
When the deterministic policy engine flags an interaction, the assigned compliance reviewer sees it, decides, and the decision joins the chain.
When Finanstilsynet asks, you issue a time-limited read-only token. Their queries are themselves logged. Article 74(12), satisfied.
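To make "time-limited read-only token" concrete, here is a minimal sketch of one common way to build such a token: an HMAC-signed payload carrying a `read-only` scope and an expiry. Every name here (`issueAuditToken`, `verifyAuditToken`, the claims shape) is hypothetical — Complira's real token mechanism may differ.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Server-side secret — illustrative placeholder only.
const SIGNING_KEY = "replace-with-server-side-secret";

interface AuditTokenClaims {
  scope: "read-only"; // the regulator can query, never write
  app: string;        // e.g. 'credit-scoring'
  expiresAt: number;  // unix ms — the token is rejected after this
}

const sign = (payload: string): string =>
  createHmac("sha256", SIGNING_KEY).update(payload).digest("hex");

// Issue a scoped token valid for ttlMs milliseconds.
function issueAuditToken(app: string, ttlMs: number): string {
  const claims: AuditTokenClaims = {
    scope: "read-only",
    app,
    expiresAt: Date.now() + ttlMs,
  };
  const payload = Buffer.from(JSON.stringify(claims)).toString("base64url");
  return `${payload}.${sign(payload)}`;
}

// Returns the claims if the token is authentic and unexpired, else null.
function verifyAuditToken(token: string): AuditTokenClaims | null {
  const [payload, mac] = token.split(".");
  if (!payload || !mac) return null;
  const expected = sign(payload);
  // Constant-time comparison to avoid timing side channels.
  if (
    mac.length !== expected.length ||
    !timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
  ) {
    return null;
  }
  const claims: AuditTokenClaims = JSON.parse(
    Buffer.from(payload, "base64url").toString(),
  );
  return claims.expiresAt > Date.now() ? claims : null;
}
```

The same verification path is where the regulator's queries would themselves be logged: every successful `verifyAuditToken` call is an auditable access event.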
Add Complira to your existing AI workflow with a few lines of code — no new infrastructure, no rebuild.
The SDK wraps your existing AI client. No code rewrite, no model retraining, no telemetry pipeline to set up.
```typescript
import OpenAI from 'openai'
import { wrapOpenAI } from '@complira/sdk'

const openai = wrapOpenAI(new OpenAI(), {
  apiKey: process.env.COMPLIRA_API_KEY,
  appName: 'credit-scoring',
})

// Every call from here is logged automatically.
// No further changes required.
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
})
```
Compliance evidence sounds abstract — until it's the morning of the audit. Here's when Complira earns its keep.
Finanstilsynet emails Tuesday morning: “Send us the audit log for credit decisions, last 90 days, by Friday.” Without Complira, that's a 6-week scramble. With it: issue a scoped token, share the dashboard URL, done by lunch.
A customer claims your AI denied their loan unfairly. With Complira, you don't dig through Slack and CloudWatch — you pull the exact prompt, response, model version, and reviewer notes for that decision in seconds.
Your team ships a new fraud-detection feature. Without Complira, you spend two weeks building audit infrastructure first. With it: classify under Annex III, assign a reviewer, ship — evidence flows from call one.
EU AI Act enforcement isn't a future date — parts of it are already in force. The window for “we'll figure it out later” closed in 2025.
Complira is shaped by what we believe regulated AI infrastructure should look like — and what it shouldn't.
A 60-page AI policy nobody reads doesn't satisfy a regulator. An immutable log of what your AI actually did, when, and why — that's what holds up under scrutiny. We optimise for evidence.
Complira's risk scoring is 100% deterministic — regex, keyword matching, additive scoring. Zero ML inference. If we used AI to judge AI, we'd be regulated under our own product. We're not. Read our self-assessment.
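To show what "regex, keyword matching, additive scoring" means in practice, here is a simplified sketch of a deterministic scorer. The rules, weights, and labels are invented for illustration — they are not Complira's actual rule set — but the mechanism is the point: same input, same score, every time, with no model call anywhere.

```typescript
// A rule is just a pattern and a fixed weight — no ML inference.
interface Rule {
  pattern: RegExp;
  weight: number;
  label: string;
}

// Illustrative rules only; real rule sets would be far larger.
const RULES: Rule[] = [
  { pattern: /credit|loan|scoring/i, weight: 40, label: "annex-iii:creditworthiness" },
  { pattern: /\b\d{6}-\d{4}\b/, weight: 30, label: "pii:national-id" }, // e.g. Danish CPR format
  { pattern: /override|bypass/i, weight: 20, label: "control:override-request" },
];

// Additive scoring: every matching rule contributes its weight.
// Pure function of the input — fully reproducible in an audit.
function scorePrompt(prompt: string): { score: number; labels: string[] } {
  return RULES.reduce(
    (acc, rule) =>
      rule.pattern.test(prompt)
        ? { score: acc.score + rule.weight, labels: [...acc.labels, rule.label] }
        : acc,
    { score: 0, labels: [] as string[] },
  );
}
```

Because scoring is a pure function, a flagged interaction can be re-scored years later and produce byte-identical results — which is exactly what an auditor reconstructing a decision needs.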
All data lives in Frankfurt. All sub-processors are EU-hosted. No US data transfers, no Schrems III risk, no “we'll figure out the SCCs later”. Built for Nordic banks. Designed for Finanstilsynet.
We don't claim Article 14 live oversight today. We claim it for October 2027 — aligned to enforcement. The compendium above shows exactly what's live, what's partial, and what ships when. No fairy-tale capability claims.
Book a walkthrough. We'll show you how Complira maps to your EU AI Act obligations — and exactly what your team needs to do to be ready by 2 December 2027.