EU AI Act high-risk obligations apply 2 Dec 2027 · Are you using the time? · Check readiness →
§ 01 · EU AI Act compliance infrastructure
LIVE · Frankfurt-1 · v1.4.2

Prove what your AI did, when, and why.

Tamper-evident chain · 4,821 entries

Complira gives regulated teams a tamper-evident audit trail of every AI interaction — built for the EU AI Act's evidence obligations. Five-line SDK integration. Frankfurt-hosted. Ready when the regulator asks.

See how it works
/01 · Data residency
Frankfurt
EEA · DE-FRA
/02 · Tamper evidence
SHA-256
Chain-anchored
/03 · Storage
Append-only
Read-only forward
/04 · Time to integrate
< 1 hour
Five-line SDK
"

Compliance isn't a feature. It's evidence — produced at runtime, signed at the source, ready when the regulator asks.

The Complira thesis · est. 2026
The runtime audit dashboard

Every AI call. Logged, hashed, signed.

Complira sits between your AI client and the model, capturing every prompt, every response, every reviewer decision — into an append-only ledger that even your team cannot edit after the fact.
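The chain mechanic is simple enough to sketch. The TypeScript below is illustrative only, not the SDK's actual internals: each entry's SHA-256 hash covers the previous entry's hash as well as its own payload, so editing any past entry invalidates every hash from that point forward.

```typescript
import { createHash } from 'node:crypto'

interface LogEntry {
  payload: string   // serialised prompt/response pair
  prevHash: string  // hash of the previous entry (64 zeros for the first)
  hash: string      // SHA-256 over prevHash + payload
}

// Append an entry, chaining it to the hash of the one before it.
function append(chain: LogEntry[], payload: string): void {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : '0'.repeat(64)
  const hash = createHash('sha256').update(prevHash + payload).digest('hex')
  chain.push({ payload, prevHash, hash })
}

// Recompute every hash from the start; any edited entry breaks
// the chain from that point on.
function verify(chain: LogEntry[]): boolean {
  let prevHash = '0'.repeat(64)
  for (const entry of chain) {
    const expected = createHash('sha256').update(prevHash + entry.payload).digest('hex')
    if (entry.prevHash !== prevHash || entry.hash !== expected) return false
    prevHash = entry.hash
  }
  return true
}
```

Periodically anchoring the latest hash outside the database is what prevents a silent rewrite of the whole chain at once.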

§ 02 · REPLICA OF
/DASHBOARD
AT v1.4.2
Compliance posture · EU AI Act, Article 19
Operational
Verified · 1m ago
Tamper-evident log chain intact across 4,821 entries · SHA-256 · 5-year retention
Coverage
100%
4 of 4 AI systems
Hash chain · last 12 anchors · next anchor in 28m 12s
root · 7c3a91f8 e4d2 b6c1 …a90f · anchored 23 Apr 2026, 14:31 UTC
✓ Article 12 · ✓ Article 19 · ✓ Article 26(6) · ⏱ Article 73
AI systems monitored
4 / 4
all classified
Calls logged · 30d
4,821
+412 vs prev period
Flagged for review
12
3 awaiting decision
Avg risk score · 30d
32 / 100
Time · System · Article · Model · Hash · Status
14:23:01.412 · CreditScoring · § 12·19 · gpt-4o · 0x3a7c91e8 e4d2… · ✓ Logged
14:22:47.103 · RiskAdvisor · § 14 · claude-opus-4 · 0x9f12bb34 7c8a… · ⚑ Flagged
14:22:31.847 · CompliBot · § 50 · gpt-4o · 0x612d8a01 2f5e… · ✓ Logged
14:22:18.231 · CreditScoring · § 12·19 · gpt-4o · 0x84a1f5cc e9b0… · ✓ Logged
Positioning

Not an AI governance platform.
A runtime audit layer.

Plenty of tools will help you write an AI policy. Complira is for what happens after — when those policies are in production and someone has to prove they were followed.

§ 03 · DEFINING THE
CATEGORY
/01 · What Complira is

Runtime evidence for every AI interaction.

  • Tamper-evident audit logs. SHA-256 chained, append-only, EU-hosted.
  • Human oversight workflows. Reviewer assignments, decisions, override reasons — timestamped.
  • Regulator API access. Time-limited read-only tokens. Article 74(12)-ready.
  • Five-line SDK. Wraps your existing OpenAI, Anthropic, or Azure client.
→ Built like infrastructure. Deployed like a product.
/02 · What Complira isn't

Not a policy or governance platform.

  • No statistical risk modelling. Our risk engine is 100% rule-based — we register what's high-risk under Annex III and flag what matches your policy rules. No predictions, no ML scoring.
  • No governance framework authoring. Our policy engine flags calls deterministically — it's not for drafting AI governance documents.
  • No statistical bias auditing. We don't run fairness metrics on your training data — we capture the evidence so your reviewers can spot patterns themselves.
  • No live blocking, today. Article 14 live oversight ships 2027 — until then, evidence-only.
→ A focused tool. Not the whole stack.
Article coverage

Every obligation Complira covers today.

The EU AI Act's record-keeping and oversight obligations, mapped to what Complira ships. Honest about what's live, honest about what's roadmap.

§ 04 · 7 ARTICLES · 6 LIVE
1 PARTIAL
Article
Obligation
What Complira does
Status
12 · 19
Record-keeping & audit trail
SDK auto-captures every AI call. Append-only PostgreSQL with SHA-256 hashing and chain anchoring. Up to 5-year retention.
Live
13
Information to deployers
Documented SDK behaviour, data captured, and how data flows. Embedded in onboarding and contract.
Live
14
Human oversight
Reviewer assignment, post-hoc decisions, override reasons — all logged. Live intervention ships October 2027.
Partial
26 · 27
AI system registry & FRIA
Per-system Annex III classification, owner, FRIA tracking, oversight assignment, deployer log retention.
Live
50 · 86
Transparency & explanation
Deterministic risk scoring. Every flag explains exactly which rule matched. No ML inference in scoring.
Live
72 · 73
Post-market monitoring & incidents
Trend analytics across AI systems. Incident registry with escalation flow and audit trail integration.
Live
74(12)
Regulator access
Scoped, time-limited, read-only access tokens. Issued in seconds. Every regulator query is itself logged.
Live
02·12·27
Annex III high-risk obligations apply from 2 December 2027 following the 7 May 2026 Digital Omnibus on AI political agreement. If your AI is in production, the clock is running.
How it works

From SDK install to regulator-ready in five steps.

No new infrastructure. No model retraining. No rebuild of your AI workflow. Drop the SDK in, and evidence starts flowing.

§ 05 · AVERAGE DEPLOY
UNDER 1 HOUR
STEP 01

Install the SDK

One npm install. The SDK is lightweight — no dependencies on heavy AI libraries, no telemetry beyond what you log.

Terminal
$ npm install @complira/sdk
added 1 package · 0 vulnerabilities
STEP 02

Wrap your AI client

One function call wraps your existing OpenAI, Anthropic, or Azure client. Every prompt, every response, every reviewer event is captured.

credit-scoring.ts
import { wrapOpenAI } from '@complira/sdk'
const openai = wrapOpenAI(new OpenAI())
STEP 03

Logs flow in automatically

From the next AI call onward, every interaction is captured into the audit trail with timestamp, hash, model, deployer, and Annex III classification.

/dashboard/logs
14:23:01 · CreditScoring · gpt-4o · Logged
14:22:47 · CreditScoring · gpt-4o · Logged
STEP 04

Reviewer takes action on flagged calls

When the deterministic policy engine flags an interaction, the assigned compliance reviewer sees it, decides, and the decision joins the chain.

Review queue
Anna L. · Compliance reviewer · 3 items in queue · last decision 4m ago
STEP 05

Regulator gets scoped access

When Finanstilsynet asks, you issue a time-limited read-only token. Their queries are themselves logged. Article 74(12), satisfied.

Regulator access token
cmpl_rgr_8f2a4c91e8d7…
scope: read-only · expires: 72h
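The token mechanics are simple to sketch. The helpers below are invented for illustration (not Complira's actual API): a token carries a fixed read-only scope and an absolute expiry, and validity is a pure comparison against the clock.

```typescript
import { randomBytes } from 'node:crypto'

interface RegulatorToken {
  token: string
  scope: 'read-only'
  expiresAt: number // epoch milliseconds
}

// Issue a scoped token valid for ttlHours from `now`.
function issueToken(ttlHours: number, now: number = Date.now()): RegulatorToken {
  return {
    token: 'cmpl_rgr_' + randomBytes(16).toString('hex'),
    scope: 'read-only',
    expiresAt: now + ttlHours * 3_600_000,
  }
}

// A token is usable only while unexpired; the scope never widens.
function isValid(t: RegulatorToken, now: number = Date.now()): boolean {
  return t.scope === 'read-only' && now < t.expiresAt
}
```

Because expiry is an absolute timestamp rather than a counter, revocation on schedule needs no background job: the check fails by itself once the window closes.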
Integration

Five lines of code. That's the integration.

Add Complira to your existing AI workflow with a few lines of code — no new infrastructure, no rebuild.

§ 06 · NODE.JS · PYTHON
SDK v0.1.1

Drop-in compatibility

The SDK wraps your existing AI client. No code rewrite, no model retraining, no telemetry pipeline to set up.

  • Zero performance overhead — async logging, fire-and-forget
  • Open SDK — inspect what gets captured, modify what doesn't
  • Frankfurt-only routing — no US data transfers, ever
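Fire-and-forget means the hot path never awaits the audit write. A minimal sketch of the pattern (illustrative, not the SDK's real implementation): the caller enqueues synchronously and returns, while batches drain in the background.

```typescript
type AuditEvent = { ts: number; app: string; payload: string }

class AsyncAuditLogger {
  private queue: AuditEvent[] = []
  private flushing = false

  // `send` stands in for the network write to the audit backend.
  constructor(private send: (batch: AuditEvent[]) => Promise<void>) {}

  // Synchronous from the caller's point of view: enqueue and return.
  // The AI call is never blocked on the audit write.
  log(app: string, payload: string): void {
    this.queue.push({ ts: Date.now(), app, payload })
    void this.flush()
  }

  private async flush(): Promise<void> {
    if (this.flushing || this.queue.length === 0) return
    this.flushing = true
    const batch = this.queue.splice(0) // drain everything queued so far
    try {
      await this.send(batch)
    } finally {
      this.flushing = false
    }
  }
}
```

A production version would also retry failed batches and buffer to disk, but the latency story is the same: logging costs the caller one array push.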
Compatible with
+ custom endpoints
@complira/sdk · v0.1.1
import OpenAI from 'openai'
import { wrapOpenAI } from '@complira/sdk'

const openai = wrapOpenAI(new OpenAI(), {
  apiKey: process.env.COMPLIRA_API_KEY,
  appName: 'credit-scoring',
})

// Every call from here is logged automatically.
// No further changes required.
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
})
When it matters

Three moments. One platform.

Compliance evidence sounds abstract — until it's the morning of the audit. Here's when Complira earns its keep.

§ 07 · THE THREE
PRESSURE EVENTS
/01 · Scenario

When the regulator asks.

Finanstilsynet emails Tuesday morning: “Send us the audit log for credit decisions, last 90 days, by Friday.” Without Complira, that's a 6-week scramble. With it: issue a scoped token, share the dashboard URL, done by lunch.

Article 74(12) · Regulator access
/02 · Scenario

When something goes wrong.

A customer claims your AI denied their loan unfairly. With Complira, you don't dig through Slack and CloudWatch — you pull the exact prompt, response, model version, and reviewer notes for that decision in seconds.

Article 86 · Right to explanation
/03 · Scenario

When you onboard a new AI system.

Your team ships a new fraud-detection feature. Without Complira, you spend two weeks building audit infrastructure first. With it: classify under Annex III, assign a reviewer, ship — evidence flows from call one.

Article 26 · 27 · System registry & FRIA
Why now

The clock is already running.

EU AI Act enforcement isn't a future date — parts of it are already in force. The window for “we'll figure it out later” closed in 2025.

§ 08 · ENFORCEMENT
TIMELINE
02 Feb 2025
Banned uses in force
Social scoring, biometric categorisation, manipulative AI prohibited.
02 Aug 2025
GPAI obligations
General-purpose AI models must publish summaries, comply with copyright.
May 2026
You are here
19 months until high-risk obligations apply. Most teams have not started.
02 Dec 2027
High-risk in force
Annex III systems must be registered, monitored, audit-trailed.
02 Aug 2028
Full enforcement
All remaining provisions including Article 14 live oversight.
Why we built it this way

Four opinions we hold strongly.

Complira is shaped by what we believe regulated AI infrastructure should look like — and what it shouldn't.

§ 09 · PRINCIPLES
NOT FEATURES
i.

Evidence beats policy.

A 60-page AI policy nobody reads doesn't satisfy a regulator. An immutable log of what your AI actually did, when, and why — that's what holds up under scrutiny. We optimise for evidence.

ii.

No AI in the compliance layer.

Complira's risk scoring is 100% deterministic — regex, keyword matching, additive scoring. Zero ML inference. If we used AI to judge AI, we'd be regulated under our own product. We're not. Read our self-assessment.
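Deterministic scoring of this kind is easy to illustrate. The rules and weights below are invented for the sketch (not production policy rules); the point is that identical input always yields an identical score and an explainable list of matched rules.

```typescript
interface Rule { name: string; pattern: RegExp; weight: number }

// Illustrative rule set: plain regex matches with additive weights.
const rules: Rule[] = [
  { name: 'mentions-credit-decision', pattern: /\b(credit|loan|score)\b/i, weight: 20 },
  { name: 'mentions-personal-data', pattern: /\b(name|address|birth)\b/i, weight: 15 },
]

// Additive, deterministic scoring: no inference, no randomness.
// Every flag can cite exactly which rules matched.
function scoreCall(prompt: string): { score: number; matched: string[] } {
  let score = 0
  const matched: string[] = []
  for (const rule of rules) {
    if (rule.pattern.test(prompt)) {
      score += rule.weight
      matched.push(rule.name)
    }
  }
  return { score: Math.min(score, 100), matched }
}
```

Because the function is pure, the same prompt replayed a year later produces the same score, which is what makes the flag defensible in front of a regulator.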

iii.

EU-only, by construction.

All data lives in Frankfurt. All sub-processors are EU-hosted. No US data transfers, no Schrems III risk, no “we'll figure out the SCCs later”. Built for Nordic banks. Designed for Finanstilsynet.

iv.

Honest about the roadmap.

We don't claim Article 14 live oversight today. We claim it for October 2027 — aligned to enforcement. The compendium above shows exactly what's live, what's partial, and what ships when. No fairy-tale capability claims.

Common questions

What teams actually ask.

Do we actually have high-risk AI systems?
Most regulated organisations end up with at least one Annex III system — credit scoring, fraud detection, KYC review, customer eligibility. Even if you think you're safe, your DPA, your auditor, or your board will likely ask for the same evidence Complira produces. Better to have it before someone asks. Article 12 record-keeping applies broadly to high-risk providers.

How is this different from SOC 2 or our SIEM?
Different layer. SOC 2 and ISO 27001 prove your security posture. Your SIEM logs infrastructure events. Complira logs AI-specific evidence — every prompt, every response, every reviewer override, hashed and chained. The EU AI Act doesn't care that your SIEM is well-configured. It cares whether you can show what your AI did on day 47 of last quarter.

Can Complira process personal data under GDPR?
Yes — under GDPR Article 28, we're your processor. Frankfurt-hosted, EU-only sub-processors, signed DPA, configurable retention (12 / 24 / 60 months). You can redact PII before it reaches us, or have us redact it on capture. No US data transfers, ever. Full DPA available on request.

What happens if we want to leave?
We give you the data on the way in and the way out. Bulk export to S3-compatible buckets, Postgres dumps, signed Merkle proofs of the audit chain. The export format is documented and stable. Our DORA exit-strategy document spells out the 90-day transition window. Your evidence is yours.

Does it integrate with the tools our reviewers already use?
Yes. We have native integrations for Slack, Jira, and PagerDuty — the tools your reviewers actually use. Reviewer decisions sync back to the audit chain via the integration's native ID, so the regulator can trace the human decision back to its origin. No paid add-ons.
/ § 10 · Begin

Make your AI usage audit-ready.

Book a walkthrough. We'll show you how Complira maps to your EU AI Act obligations — and exactly what your team needs to do to be ready by 2 December 2027.

Get in touch