Provider vs. Deployer Under the EU AI Act: Which One Are You?
A practical guide for Nordic compliance and risk teams using third-party AI in regulated workflows.
A scenario most Nordic compliance officers will recognise.
A bank wants to speed up loan onboarding. The team builds an internal tool that wraps GPT-4. Customer-facing analysts paste in passport scans, payslips, and account statements. The model performs document checks, extracts financial data, and surfaces flags into the bank's existing creditworthiness workflow — where a human credit officer makes the final decision.
A simple question lands in the compliance team's inbox:
"Are we a provider or a deployer of an AI system under the EU AI Act?"
Most teams answer "provider", because they "built it". A few answer "neither", because they're "just using OpenAI". Both answers are usually wrong, and the misunderstanding is expensive. Provider obligations and deployer obligations are not different sizes of the same workload — they are different workloads. Pointing your compliance budget at the wrong articles is one of the most costly mistakes a Nordic financial institution can make in the run-up to the Annex III deadline.
This article walks through the distinction the way I'd talk it through with a Head of Risk who has the AI Act on the desk and a deadline in the calendar.
What the AI Act actually says
Two definitions in Article 3 do most of the work.
Article 3(3) — Provider:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge."
Article 3(4) — Deployer:
Any organisation using an AI system under its own authority — with the narrow exception of purely personal, non-professional use.
The first thing to notice: provider status hinges on placing on the market or putting into service under your own name or trademark. Deployer status hinges on using a system under your authority.
Now apply that to the scenario. The bank did not develop a new AI system from scratch. It wrote a wrapper around GPT-4. GPT-4 was developed by OpenAI and placed on the EU market by OpenAI. The bank is using it inside the institution.
In the default case — and this is where the regulatory weight actually sits for most financial institutions — the bank is a deployer. OpenAI is the provider of GPT-4. The bank has its own deployer obligations under Article 26 and, because the system informs a creditworthiness decision, under Article 27 as well. Those obligations are real and substantive. They are not provider obligations.
The vast majority of Nordic banks, fintechs, and insurers using third-party LLMs are in this position. They are deployers of someone else's model.
Two notes on classification before we move on.
First, KYC document checking on its own does not appear in Annex III. The high-risk hook for the scenario above is the connection to creditworthiness — Annex III, point 5(b). If the tool only verified identity for AML purposes and never touched a lending decision, the Annex III analysis would be different. In Nordic onboarding, KYC and creditworthiness are usually entangled in the same workflow, which is what pulls the system into scope.
Second, fraud detection is explicitly excluded from Annex III 5(b). A standalone fraud-screening system is not, by virtue of that use, a high-risk AI system under the Act. Other obligations may still apply, but the Annex III burden does not.
The grey zone: when a deployer becomes a provider
Article 25 is where the cleanly drawn line gets harder to read. It identifies three situations in which an organisation that started as a deployer (or distributor or importer) becomes a provider — with the full set of provider obligations — for the modified system:
- Rebranding. Putting your own name or trademark on a high-risk AI system that has already been placed on the market or put into service.
- Substantial modification. Making a substantial modification to a high-risk AI system, where it remains high-risk after the change.
- Repurposing. Modifying the intended purpose of an AI system that was not originally classified as high-risk, in a way that makes it high-risk.
The pivotal concept is "substantial modification". Article 3(23) defines it as a change "after its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in this Regulation or results in a modification to the intended purpose for which the AI system has been assessed."
This is where teams overestimate their own activity. In practice, the line between configuration and substantial modification looks like this:
| Configuration (you remain a deployer) | Substantial modification (you become a provider) |
|---|---|
| Writing a system prompt that scopes the model to a specific task | Fine-tuning a foundation model on the bank's proprietary data |
| Building a retrieval (RAG) layer over the bank's own documents | Removing or disabling safety controls the provider built in |
| Setting temperature, max-tokens, or other inference parameters within ranges the provider documents | Stitching multiple models into a new pipeline whose intended purpose differs from any single component |
| Wrapping the API in a thin internal interface | Using a general-purpose model for a high-risk purpose the original provider did not assess for |
| Bug fixes and UI work | |
The boundary between configuration and substantial modification is functional, not technical. Until supervisory practice and harmonised standards emerge, the practical test is: does the change affect the system's compliance posture or its intended purpose? If yes, you are at risk of becoming a provider for the modified system.
For the bank in our scenario: writing a system prompt and adding a RAG layer over its own KYC documents is configuration. Fine-tuning GPT-4 on the bank's own labelled credit decisions to learn its lending policy is the kind of modification that crosses the line.
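To make the left-hand column of that table concrete, here is a minimal sketch of configuration-level use written against the OpenAI Python SDK. The model name, prompt wording, parameter values, and function names are illustrative assumptions, not a compliance determination; the point is only to show where the line sits in code.

```python
# Illustrative only: configuration-level use of a third-party model
# (system prompt + RAG context + documented inference parameters).
# Written against the OpenAI Python SDK (v1.x); model name, prompt text,
# and parameter values are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a KYC document assistant. Extract the requested fields from the "
    "supplied document text. Do not make or suggest lending decisions."
)

def extract_kyc_fields(document_text: str, retrieved_policy_excerpts: str) -> str:
    """Scope the model with a prompt and pass the bank's own documents as
    retrieval context -- the weights of the underlying model are untouched."""
    response = client.chat.completions.create(
        model="gpt-4o",          # placeholder model identifier
        temperature=0.0,         # within the provider-documented parameter range
        max_tokens=800,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"Policy context:\n{retrieved_policy_excerpts}\n\n"
                    f"Document:\n{document_text}"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

By contrast, a fine-tuning job trained on the bank's labelled credit decisions changes the model itself, which puts you in the right-hand column of the table.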
What's actually at stake
Provider obligations (Article 16 and the supporting Articles 9–15, 17, 43, 47–49, 72–73) include: a risk management system, data governance, technical documentation per Annex IV, automatic logging in the system itself, transparency information for deployers, human oversight by design, accuracy/robustness/cybersecurity requirements, a quality management system, conformity assessment, an EU declaration of conformity, CE marking, registration in the EU database, a post-market monitoring plan, and serious-incident reporting.
This is the workload that lands on OpenAI, Anthropic, Mistral, and any institution that builds and markets a high-risk AI system under its own name. It is structural, deep, and ongoing.
Deployer obligations (Article 26, with Article 27 FRIA on top) are different work, not less work:
- Use the system in accordance with the provider's instructions (Art. 26(1))
- Assign human oversight to a named person with the necessary competence, training, authority, and support (Art. 26(2))
- Where the deployer controls input data, ensure that data is relevant and sufficiently representative (Art. 26(4))
- Monitor operation and inform the provider or authorities of risks or incidents (Art. 26(5))
- Retain automatically generated logs for at least six months, where those logs are under the deployer's control (Art. 26(6))
- Inform workers and their representatives before deploying the system in the workplace (Art. 26(7))
- Inform natural persons subject to AI-assisted decisions (Art. 26(11))
- Cooperate with competent authorities (Art. 26(12))
- Conduct a Fundamental Rights Impact Assessment under Article 27 — which applies to every deployer using AI for creditworthiness or credit scoring (Annex III 5(b)), and to every deployer using AI for risk assessment and pricing in life and health insurance (Annex III 5(c)), regardless of whether the deployer is public or private
Lighter than provider — but not light. Article 26 and Article 27 are where most Nordic financial institutions will spend their compliance effort, and they are widely underestimated.
Five questions to determine your role
A short checklist a Head of Risk can run on a Monday morning:
1. Did your organisation develop the underlying AI model, or are you using one built by someone else? Using OpenAI, Anthropic, Mistral, or any other third-party model places you on the deployer side by default.
2. Are you placing the AI system on the EU market — selling it, licensing it, or making it available to other organisations under your name or trademark? If yes, you are a provider. If the system is only used inside your own organisation, you are not.
3. Have you fine-tuned the base model on your own data, or stripped out provider-built safety controls? If yes, Article 25 may convert you into the provider for the modified system.
4. Are you using the model for a use case the original provider documented, or for one outside its intended purpose? Repurposing a general-purpose model into a high-risk use case is one of the Article 25 triggers.
5. Does your high-risk use case fall under Annex III? For Nordic financial services this almost always means creditworthiness or credit scoring under point 5(b) — fraud detection excluded — or risk assessment and pricing in life and health insurance under point 5(c).
If your answers are: third-party model, internal use, no fine-tuning, intended-purpose use, Annex III applies — you are a deployer of a high-risk AI system. Article 26 and Article 27 are your home address.
The same logic, as a decision tree:
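The sketch below is a minimal Python illustration of that tree; the class, field, and function names are hypothetical shorthand for the five questions above, not a legal test.

```python
# Illustrative only: the five-question checklist encoded as a decision tree.
# Field and function names are hypothetical; this mirrors the article's
# reasoning, not legal advice.
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    developed_model_in_house: bool      # Q1: did you build the underlying model?
    placed_on_eu_market: bool           # Q2: sold/licensed under your own name or trademark?
    fine_tuned_or_removed_safety: bool  # Q3: Article 25 substantial-modification triggers
    outside_documented_purpose: bool    # Q4: repurposed beyond the provider's intended purpose?
    annex_iii_use_case: bool            # Q5: creditworthiness 5(b) or life/health pricing 5(c)?

def classify_role(facts: AISystemFacts) -> str:
    # Q1/Q2: building the model, or marketing it under your own name, makes you a provider.
    if facts.developed_model_in_house or facts.placed_on_eu_market:
        return "provider"
    # Q3/Q4: Article 25 can convert a deployer into a provider for the modified system.
    if facts.fine_tuned_or_removed_safety or facts.outside_documented_purpose:
        return "provider (via Article 25, for the modified system)"
    # Q5: otherwise you are a deployer; Annex III decides how heavy Articles 26/27 are.
    if facts.annex_iii_use_case:
        return "deployer of a high-risk AI system (Articles 26 and 27)"
    return "deployer (outside Annex III; lighter obligations)"

# The bank in the scenario: third-party model, internal use, no fine-tuning,
# intended-purpose use, Annex III 5(b) applies.
print(classify_role(AISystemFacts(False, False, False, False, True)))
# -> deployer of a high-risk AI system (Articles 26 and 27)
```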
Why most financial deployers underestimate Article 26
The conversation in many Nordic compliance teams stops as soon as someone confirms "we're not a provider". That's where it should start. Three patterns recur:
The vendor's certifications are not your evidence. OpenAI's policies, Anthropic's audits, and any provider's technical documentation tell a regulator how the model was built. They do not tell the regulator what happened inside your bank when an analyst used the tool to inform a credit decision. The deployer needs its own evidence: who reviewed which output, when, and on what basis.
Human oversight is a control, not an org chart. Article 26(2) requires named persons with competence, training, authority, and support. An organisational chart and a job description are not enough. The evidence is the named reviewer, the intervention trigger, the escalation route, and a record of what happened the last time the system behaved abnormally.
Logs you don't control are logs you can't produce. Article 26(6) requires deployers to retain automatically generated logs for at least six months, where those logs are under their control. If your only logs are OpenAI's server logs — which you cannot independently access, query, or export — you will struggle to produce the evidence on a regulator's timeline. The deployer needs an independent audit trail of every high-risk AI interaction, retained under deployer control.
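One way to make that concrete, purely as an illustration with hypothetical field names, storage location, and retention choices, is to capture an independent record of every high-risk interaction at the point where your own code calls the model and write it to storage the bank controls:

```python
# Illustrative sketch of a deployer-controlled audit record for each
# high-risk AI interaction (Article 26(6) evidence). Field names, storage
# location, and format are assumptions, not a prescribed standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("/var/log/ai-audit/interactions.jsonl")  # deployer-controlled storage

@dataclass
class AIInteractionRecord:
    timestamp_utc: str
    system_name: str          # the internal tool, not the vendor's product
    model_identifier: str     # which third-party model/version was called
    input_sha256: str         # hash of the input, so the record holds no raw PII
    output_sha256: str
    reviewer: str             # the named person exercising Article 26(2) oversight
    review_outcome: str       # accepted / overridden / escalated
    basis_for_decision: str   # rationale recorded at review time

def record_interaction(model_id: str, prompt: str, output: str,
                       reviewer: str, outcome: str, basis: str) -> None:
    record = AIInteractionRecord(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        system_name="loan-onboarding-assistant",
        model_identifier=model_id,
        input_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        reviewer=reviewer,
        review_outcome=outcome,
        basis_for_decision=basis,
    )
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
```

Whatever the exact shape, the design choice that matters is that the record exists outside the vendor's infrastructure, is retained for at least six months, and can be queried and exported on the bank's own timeline.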
The FRIA isn't optional for credit and insurance. Article 27 catches every deployer of an Annex III 5(b) creditworthiness system or 5(c) life/health insurance pricing system. Public or private, large or small. A FRIA is not a re-labelled DPIA: Article 27(4) lets a DPIA cover overlapping ground, but the rights scope is broader. Plan for both, conducted before the system is put into use, and update when material facts change.
The deadline matters. The original AI Act timeline puts Annex III high-risk obligations in force from 2 August 2026. The proposed Digital Omnibus on AI would shift this to 2 December 2027 (with the Council and Parliament aligned around fixed dates as of March 2026), but that text has not been formally adopted. Until it is, August 2026 is the operative date a Nordic regulator will hold you to.
The practical takeaway
Most Nordic financial institutions using AI in customer-facing or decision-making workflows are deployers, not providers. Resources spent building provider-style technical files for systems you don't actually provide are resources not spent on the Article 26 and Article 27 evidence a Finanstilsynet examiner will actually ask for.
Get the role right first. Then build the evidence the role requires.
If you're a deployer, Article 26 is the article to understand. We've put together a free Readiness Assessment that maps your obligations against your current state in five minutes — start the assessment →.
For a deeper reference on how the EU AI Act applies to Nordic financial services, see our EU AI Act overview. For Complira's own approach to data residency, sub-processors, and audit-chain integrity, see the Trust Centre.
The full text of the AI Act (Regulation (EU) 2024/1689) is available on EUR-Lex: eur-lex.europa.eu/eli/reg/2024/1689/oj.
This article is informational and is not legal advice. The application of the EU AI Act to a specific organisation depends on facts that should be reviewed with qualified counsel.