EU AI Act high-risk obligations apply 2 Dec 2027 · Are you using the time? Check readiness →
EU AI ACT INFORMATION HUB

The EU AI Act,
made operational.

What it requires. When it bites. How regulated financial services teams can be ready by 2 December 2027 — without rebuilding their AI stack.

High-risk deadline
2 Dec 2027
Articles covered
7 · live
Log retention
5 years
Hosting
Frankfurt · EU only
UPDATED 7 May 2026

Digital Omnibus on AI deal: high-risk obligations now apply 2 December 2027.

On 7 May 2026, the Council of the EU and European Parliament reached provisional political agreement on the Digital Omnibus on AI. High-risk obligations under Annex III now apply from 2 December 2027 (previously 2 August 2026). AI embedded in regulated products under Annex I applies from 2 August 2028. Article 50(2) watermarking obligations apply from 2 December 2026 after a three-month grace period.

Status: provisional political agreement, pending formal Council and Parliament endorsement, legal-linguistic revision, and publication in the Official Journal. The dates above are the operational planning baseline.

§ 02 WHAT IT IS

Europe's first horizontal AI regulation.

Regulation (EU) 2024/1689 — the “EU AI Act” — is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies in stages through to 2 August 2028.

The Act sets obligations based on the risk an AI system poses. Prohibited practices (such as social scoring by public authorities and untargeted facial image scraping) are banned outright. High-risk systems — including most AI in financial services — are subject to substantive obligations covering risk management, data governance, transparency, human oversight, accuracy, and record-keeping. General-purpose AI models have their own obligation tier.

The regulation is enforceable. Penalties range up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited-AI breaches. Non-compliance with high-risk obligations carries up to €15 million or 3% of global turnover. National market surveillance authorities — supervised through the European AI Office — are responsible for enforcement.
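
As a worked illustration of the "whichever is higher" mechanic, here is a minimal sketch in Python. The figures come from the paragraph above; the firm and its turnover are hypothetical:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # The Act caps fines at the HIGHER of a fixed amount or a percentage
    # of global annual turnover.
    return max(fixed_cap_eur, pct * turnover_eur)

TURNOVER = 2_000_000_000  # hypothetical firm: €2bn global annual turnover

# Prohibited-AI breach: max(€35m, 7% of €2bn) = €140m
prohibited = fine_cap(TURNOVER, 35_000_000, 0.07)

# High-risk non-compliance: max(€15m, 3% of €2bn) = €60m
high_risk = fine_cap(TURNOVER, 15_000_000, 0.03)
```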

For financial services in particular, supervision is typically routed through existing financial regulators acting as designated AI Act surveillance authorities, in coordination with national data protection authorities.

Source: Regulation (EU) 2024/1689 on EUR-Lex →

§ 03 THE TIMELINE

Five phases. One law.

The Act applies in stages. Each phase activates a different set of obligations. Most financial services teams should plan against the 2 December 2027 milestone for high-risk system obligations.

IN FORCE · 1 August 2024

Regulation enters into force

Text becomes law. Implementation period begins.

APPLIED · 2 February 2025

Prohibited AI practices banned

Social scoring, untargeted facial image scraping, manipulative techniques, and emotion recognition in the workplace and education are prohibited.

APPLIED · 2 August 2025

General-Purpose AI obligations apply

Foundation model providers (OpenAI, Anthropic, Mistral, Google) take on transparency, copyright, and systemic-risk obligations.

UPCOMING · 2 December 2026

Article 50 transparency obligations apply

AI-system disclosure to natural persons interacting with AI and the marking of AI-generated content (watermarking) take effect after a three-month grace period. Applies to providers and deployers of AI systems, including general-purpose AI systems, regardless of high-risk classification.

CURRENT FOCUS · 2 December 2027

Annex III high-risk obligations apply

AI in credit scoring, insurance pricing, KYC, fraud detection, and worker monitoring becomes subject to substantive obligations: risk management, data governance, technical documentation, record-keeping, human oversight, accuracy, and post-market monitoring. This is the deadline most regulated finance teams are planning against.

FUTURE · 2 August 2028

Full applicability

Remaining provisions apply, including Article 6 high-risk obligations for AI systems embedded as safety components in regulated products.

Digital Omnibus on AI: The dates above reflect the provisional political agreement reached between Council and Parliament on 7 May 2026. Pending formal endorsement, legal-linguistic revision, and Official Journal publication, these are the operational planning baselines.
§ 04 WHO IT APPLIES TO

In finance, almost everyone.

Annex III lists the AI use cases the Act treats as high-risk. The six use cases below map directly onto financial services. Most regulated organisations have at least one Annex III system in production already, even if compliance teams haven't classified it yet.

5(b)

Credit scoring & creditworthiness

AI used to evaluate loan applications, set credit limits, or determine lending decisions for natural persons. Annex III, point 5(b).

Banks · Lenders · Buy-now-pay-later · Mortgage providers
5(c)

Insurance risk & pricing

AI for risk assessment and pricing in life and health insurance. Annex III, point 5(c).

Life insurers · Health insurers · Insurtech
FS

KYC & customer onboarding

AI used in identity verification, sanctions screening, and PEP detection. High-risk where decisions materially affect customer access to financial services.

All regulated financial entities
FS

Fraud detection

AI flagging suspicious transactions, AML alerts, and behavioural anomaly detection. Risk classification depends on degree of automated decisioning.

Payment processors · Banks · Card networks · Fintech
4

Worker monitoring & HR

AI used in recruitment, performance management, or task allocation for employees. Annex III, point 4.

All organisations with EU-based workers
FS

AML transaction monitoring

AI used in anti-money-laundering surveillance, sanctions screening at scale, and suspicious activity reporting. High-risk where automated decisions trigger account restrictions or regulatory filings.

Banks · Payment processors · Crypto exchanges · Money remitters

Provider vs Deployer — the distinction that matters

A Provider develops or has an AI system developed and places it on the EU market. A Deployer uses an AI system under their own authority. Most regulated organisations are Deployers — they buy or license AI from vendors and use it operationally. Article 26 obligations (oversight, monitoring, log retention) apply to Deployers. Article 16 obligations (technical documentation, conformity assessment) apply to Providers.

If your AI is built by OpenAI, Anthropic, Mistral, or any third-party vendor — you are the Deployer, they are the Provider. Plan against Article 26.

§ 05 THE OBLIGATIONS

Article by article, what it requires.

The obligations relevant to compliance teams using third-party AI in regulated finance. Complira coverage indicates which obligations our audit and evidence layer addresses today.

Article
Obligation
What it requires
Complira
Art. 12 · 19 · 26(6)
Record-keeping & audit trail
Automatic generation of logs sufficient to enable post-market monitoring and supervisory authority access. Logs must be preserved for at least six months (Art. 19); five years for high-risk Deployers (Art. 26(6)).
● Live
Art. 13
Information to deployers
Providers must supply Deployers with documentation explaining the system's capabilities, limitations, and appropriate use. Deployers must read and apply this documentation.
● Live
Art. 14
Human oversight
Design measures enabling natural persons to oversee operation, intervene, and override outputs of high-risk AI. The design obligation sits with Providers (Art. 14); ensuring oversight is actually applied in use sits with Deployers (Art. 26).
◐ Partial
Art. 26 · 27
Deployer obligations & FRIA
Use AI in line with Provider instructions. Maintain logs. Ensure human oversight resources. For specified Deployers (banks, insurers, public bodies), conduct a Fundamental Rights Impact Assessment per Article 27.
● Live
Art. 50 · 86
Transparency & explanation
Inform natural persons that they are interacting with an AI system. Provide meaningful explanations of decisions made by high-risk AI when the affected person requests one (Art. 86).
● Live
Art. 72 · 73
Post-market monitoring & incidents
Providers must collect and analyse data on system performance after deployment. Serious incidents must be reported to market surveillance authorities within 15 days, with severe cases requiring 2-day reporting.
● Live
Art. 74(12)
Regulator access
On reasoned request, market surveillance authorities can require access to data, documentation, and source code. Deployers must facilitate this access under appropriate confidentiality.
● Live

“Live” means evidence-and-audit capability is shipping today. “Partial” for Article 14 means we capture post-hoc oversight evidence; live synchronous intervention is on the roadmap — see § 07 below.

§ 06 OUR OWN POSITION
DELIBERATE DESIGN

Complira is not a Provider of an AI system.

Article 3(3) defines a Provider as a natural or legal person (or public body) that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark. Article 3(1) defines an AI system as a machine-based system that operates with varying levels of autonomy, may exhibit adaptiveness after deployment, and infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions.

The Complira product consists of regex-driven content analysis, keyword-based classification, user-configured policy rules, and additive scoring algorithms. Every flag, every score, every alert is the output of deterministic code that a regulator can read line-by-line. There is no machine learning model generating outputs presented to customers.

This places Complira outside Article 16 Provider obligations entirely — it is not a regulatory dependency layered on top of yours. The audit trail you generate using Complira is your evidence, supervised under your authority.

Why we built it this way

Compliance infrastructure for AI shouldn't itself be regulated as an AI system. Determinism keeps the supervision chain clean: your Provider, your AI, your deployment — Complira sits next to it as an evidence layer.

What it means in procurement

Your DPO and audit committee don't need to assess Complira against Provider obligations. The technical verification — no AI SDK imports, no outbound AI API calls in production — is part of our DORA Vendor Readiness Statement.
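
To show the shape of such a check, here is a minimal sketch of a CI gate that parses production sources and fails on known AI SDK imports. The package list and the src/ path are illustrative assumptions, not Complira's actual verification procedure:

```python
import ast
import pathlib
import sys

# Illustrative list of AI SDK package roots to reject in production code.
AI_SDKS = {"openai", "anthropic", "mistralai", "google.generativeai", "transformers"}

def imported_modules(path: pathlib.Path) -> set[str]:
    """Collect every module name imported by a Python source file."""
    tree = ast.parse(path.read_text(), filename=str(path))
    mods: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module)
    return mods

violations = [
    (str(p), m)
    for p in pathlib.Path("src").rglob("*.py")
    for m in imported_modules(p)
    if any(m == sdk or m.startswith(sdk + ".") for sdk in AI_SDKS)
]
if violations:
    print("AI SDK imports found:", violations)
    sys.exit(1)  # fail the build
```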

Full analysis available in our EU AI Act Self-Compliance Analysis (v1.1, May 2026). See § 10 below to request it under NDA.

§ 07 ARTICLE 14 ROADMAP

Honest about what ships when.

Article 14 requires design-level human oversight capability — the ability for a natural person to intervene before or during AI system operation. Most compliance tooling addresses Article 19 record-keeping. Live Article 14 capability is rarer and harder to ship correctly.

TODAY

Post-hoc oversight evidence

Complira tracks reviewer assignment, post-hoc review decisions, override reasons, and ties them back to the underlying AI interaction. This satisfies the evidence-and-audit dimension of Article 14: regulators can verify that oversight occurred and what decisions were made.

  • Reviewer assignment per AI system
  • Decision logging (approve / block / modify)
  • Audit trail integration with original AI call
  • Override reasoning capture
OCTOBER 2027

Live human oversight (Article 14 v1)

Synchronous intervention capability: pausing AI calls pending review, with a real-time queue for designated reviewers. Aligned to ship ahead of the 2 December 2027 Annex III enforcement deadline.

  • Synchronous SDK mode with optional AI call pausing
  • Real-time review queue with low-latency notifications
  • Approve / block / modify with full context
  • Configurable fail-open vs fail-closed for review timeouts (see the sketch below)

This is a roadmap commitment, not a contractual guarantee. Customers with immediate Article 14 requirements should implement oversight through internal processes or third-party guardrails platforms today, with Complira providing the audit and evidence layer until the v1 capability ships.
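
For orientation, here is a minimal sketch of the fail-open vs fail-closed timeout choice from the list above. The queue, decision strings, and function name are illustrative, not the shipped SDK surface:

```python
import queue

def await_review(decisions: "queue.Queue[str]", timeout_s: float,
                 fail_open: bool) -> bool:
    """Pause an AI call until a reviewer decides, or the timeout elapses.

    fail_open=True  -> proceed on timeout (availability over control)
    fail_open=False -> block on timeout   (control over availability)
    """
    try:
        # A designated reviewer pushes "approve" | "block" | "modify"
        # from the real-time review queue.
        return decisions.get(timeout=timeout_s) == "approve"
    except queue.Empty:
        return fail_open
```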

§ 08 HOW WE HELP YOU COMPLY

Four concrete capabilities.

Complira sits as an audit and evidence layer next to your AI infrastructure. No rebuild. Non-blocking by design. Five-minute SDK integration.
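
To make "non-blocking by design" concrete, here is a minimal sketch of capture around a vendor AI call. The endpoint, field names, and capture() helper are illustrative assumptions, not the published SDK:

```python
import hashlib
import json
import threading
import urllib.request

def capture(record: dict, endpoint: str = "https://ingest.example.eu/v1/events") -> None:
    """Fire-and-forget: hash the content locally, ship it off the hot path."""
    record["content_sha256"] = hashlib.sha256(
        json.dumps(record["content"], sort_keys=True).encode()
    ).hexdigest()
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    threading.Thread(target=urllib.request.urlopen, args=(req,), daemon=True).start()

# Wrap any vendor AI call; the capture never blocks the response path:
# response = vendor_client.complete(prompt)
# capture({"system": "credit-scoring-v2",
#          "content": {"prompt": prompt, "response": response}})
```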

ART. 12 · 19 · 26(6)

Tamper-evident audit trail

Every AI interaction is captured by SDK and written to append-only PostgreSQL with SHA-256 content hashing. Logs are chain-anchored. Retention is configurable up to five years. Database privileges enforce write-once semantics — even Complira engineers cannot modify or delete records.
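
Chain anchoring follows a standard pattern: each entry's hash covers its content plus the previous entry's hash, so altering any stored record invalidates every later link. A minimal sketch, with an illustrative schema:

```python
import hashlib
import json

def chain_hash(prev_hash: str, body: dict) -> str:
    """Hash this entry's body together with the previous entry's hash."""
    material = prev_hash + json.dumps(body, sort_keys=True)
    return hashlib.sha256(material.encode()).hexdigest()

def verify(entries: list[dict], genesis: str = "0" * 64) -> bool:
    """Recompute the chain; one altered record breaks every subsequent link."""
    prev = genesis
    for entry in entries:
        if entry["hash"] != chain_hash(prev, entry["body"]):
            return False
        prev = entry["hash"]
    return True
```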

ART. 26 · 27

AI system registry with FRIA tracking

Per-system Annex III classification, owner assignment, oversight assignment, deployer log retention configuration, and Fundamental Rights Impact Assessment status tracking. Single source of truth for your AI portfolio.
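
Illustratively, a registry record of this kind reduces to a small, auditable data structure. The field names and enum below are assumptions, not Complira's schema:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FriaStatus(Enum):
    NOT_REQUIRED = "not_required"
    PENDING = "pending"
    COMPLETED = "completed"

@dataclass
class AISystemRecord:
    system_id: str
    annex_iii_point: Optional[str]  # e.g. "5(b)" for credit scoring
    owner: str                      # accountable business owner
    oversight_assignee: str         # human oversight resource (Art. 26(2))
    log_retention_years: int        # deployer log retention (Art. 26(6))
    fria_status: FriaStatus         # Art. 27 tracking

record = AISystemRecord("credit-scoring-v2", "5(b)", "head.of.credit",
                        "model-risk-team", 5, FriaStatus.PENDING)
```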

ART. 50 · 86

Deterministic risk scoring

Configurable rules (regex, keyword, threshold) flag interactions for review. Every flag explains exactly which rule matched. No ML inference, no probabilistic decisioning. A regulator can read the scoring code line-by-line.
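
A minimal sketch of additive, rule-based scoring with matched-rule explanations; the rules and weights are illustrative examples, not shipped defaults:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    pattern: str   # regex applied to the interaction text
    weight: int

RULES = [
    Rule("R-IBAN", r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b", 40),  # possible IBAN
    Rule("R-DENY", r"(?i)\bdecline\b.*\bapplicant\b", 25),    # adverse-decision language
]

def score(text: str) -> tuple[int, list[str]]:
    """Additive score plus the exact rule IDs that matched. No inference."""
    matched = [r for r in RULES if re.search(r.pattern, text)]
    return sum(r.weight for r in matched), [r.rule_id for r in matched]

total, why = score("Decline the applicant; account DE44500105175407324931.")
flag_for_review = total >= 50   # threshold rule; every flag cites `why`
```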

ART. 74(12)

Regulator access tokens

Scoped, time-limited, read-only access tokens issued in seconds. Every regulator query is itself logged. Supervisor access is orderly, auditable, and revocable — without granting standing dashboard credentials.
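
One standard way to implement scoped, self-expiring, read-only grants is an HMAC-signed token. A minimal sketch; the claims and scope format are illustrative, not Complira's token format:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"rotate-me"  # server-side secret (illustrative)

def issue(scope: str, ttl_s: int) -> str:
    claims = json.dumps({"scope": scope, "exp": int(time.time()) + ttl_s, "ro": True})
    sig = hmac.new(SIGNING_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + sig

def check(token: str, required_scope: str) -> bool:
    # In a real deployment this check would also append an access-log entry,
    # so every regulator query is itself auditable.
    claims_raw, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, claims_raw.encode(), hashlib.sha256).hexdigest()
    claims = json.loads(claims_raw)
    return (hmac.compare_digest(sig, expected)
            and claims["exp"] > time.time()
            and claims["scope"] == required_scope)

token = issue("logs:read:credit-scoring-v2", ttl_s=86_400)  # 24-hour grant
assert check(token, "logs:read:credit-scoring-v2")
```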

§ 09 FREQUENTLY ASKED

What teams actually ask.

Eight questions we hear most often from compliance officers, CTOs, and procurement teams in Nordic financial services.

Do I need EU AI Act compliance if my AI is not classified high-risk?

Most regulated organisations end up with at least one Annex III high-risk system — credit scoring, fraud detection, KYC review, customer eligibility. Even outside Annex III, your DPO, your auditor, or your board will likely ask for the same evidence the EU AI Act requires for high-risk systems. Article 12 record-keeping applies broadly. Better to have the evidence before someone asks for it.

What is the difference between a Provider and a Deployer?

A Provider develops or has an AI system developed and places it on the EU market. A Deployer uses an AI system under their own authority, typically to support business operations. Most regulated organisations are Deployers using vendor AI systems, with obligations under Article 26 covering oversight, monitoring, and log retention. Provider obligations under Article 16 are more onerous and apply to vendors like OpenAI or Mistral.

How does the EU AI Act differ from GDPR?

GDPR governs personal data processing. The EU AI Act governs AI system development and deployment. They overlap when an AI system processes personal data — typical for credit scoring, KYC, and HR systems. Compliance teams need both frameworks running in parallel: GDPR supervised by national data protection authorities, AI Act supervised by designated market surveillance authorities (often the same financial regulator for finance use cases).

When does the EU AI Act apply to financial services?

High-risk obligations under Annex III apply from 2 December 2027 following the Digital Omnibus on AI political agreement of 7 May 2026. This covers AI in credit scoring, insurance pricing, KYC and customer onboarding, fraud detection, and worker monitoring systems. Prohibited AI practices have applied since 2 February 2025, and General-Purpose AI rules since 2 August 2025. Article 50(2) watermarking obligations apply from 2 December 2026 after a three-month grace period. Full applicability for Annex I AI in regulated products arrives 2 August 2028.

What did the 7 May 2026 Digital Omnibus deal change?

On 7 May 2026, the Council of the EU and European Parliament reached provisional political agreement on the Digital Omnibus on AI. Annex III high-risk obligations now apply from 2 December 2027 (previously 2 August 2026). AI embedded in regulated products under Annex I applies from 2 August 2028 (previously 2 August 2027). Article 50(2) watermarking obligations apply from 2 December 2026 after a three-month grace period. The agreement is provisional, pending formal Council and Parliament endorsement, legal-linguistic revision, and publication in the Official Journal.

Is SOC 2 or ISO 27001 enough for EU AI Act compliance?

No. SOC 2 and ISO 27001 prove security and control posture. The EU AI Act requires AI-specific controls: tamper-evident logs of every AI interaction, AI system registry with Annex III classification, FRIA documentation for qualifying Deployers, regulator access, and human oversight evidence. These are different layers — security audits do not satisfy AI Act evidence requirements. Compliance teams need both.

What is an FRIA and who needs to do one?

A Fundamental Rights Impact Assessment (FRIA) is required under Article 27 for Deployers of high-risk AI systems used by public bodies, banking institutions for creditworthiness assessment, and insurers for risk and pricing in life and health insurance. It documents how a high-risk AI system affects fundamental rights, identifies affected groups, and sets out mitigations. Complira's AI system registry tracks FRIA status per registered system.

How does Article 14 differ from Article 19?

Article 14 requires high-risk AI systems to be designed so that natural persons can oversee their operation, intervene, and override outputs, with Deployers obliged under Article 26 to assign and resource that oversight in use. Article 19 requires Providers to preserve audit logs for at least six months, and Deployers carry a parallel retention duty under Article 26(6). Article 14 is operational and synchronous; Article 19 is record-keeping and historical. Most compliance tooling addresses Article 19 today; live Article 14 capability is rarer. See § 07 above for Complira's Article 14 roadmap.

Ready to be ready?

Two ways to go deeper. A walkthrough showing how Complira maps to your specific obligations — or our full EU AI Act Self-Compliance Analysis (v1.1, 35 pages, available under NDA).

Email privacy@complira.io

Self-Compliance Analysis available under NDA. Response commitment: 5 working days.