The EU AI Act is not primarily a technology law. It is a risk management law.
The EU AI Act, Regulation (EU) 2024/1689, entered into force in August 2024 and is the world's first comprehensive legal framework for artificial intelligence. Its primary architecture is a risk-based classification system: AI systems are categorised by the risk they pose, and the obligations attached to each category scale accordingly. Prohibited practices are banned outright. High-risk AI systems face the most substantial compliance obligations. Limited-risk systems carry lighter, largely transparency-focused, obligations, and minimal-risk systems carry few or none.
For the compliance function, the AI Act is significant for two reasons that are sometimes conflated but are genuinely distinct. The first is the organisation's obligations as a deployer of AI systems — the compliance burden of using AI in ways that the Act regulates. The second is the AI Act as a compliance risk in itself — the institutional, legal, and reputational exposure that attaches to non-compliance with a major new EU regulation.
The law's territorial scope is broad. It applies not only to providers of AI systems established in the EU, but to providers established outside the EU whose systems are placed on the market or put into service within the EU, and to deployers of AI systems located in the EU. A non-EU company that deploys an AI system to manage a process affecting EU employees or customers is, in many cases, within scope.
The obligations on deployers are more substantial than most organisations have recognised.
The AI Act's most demanding obligations apply to high-risk AI systems — defined by their application area rather than their technical characteristics. High-risk categories include AI used in employment and workers management (including recruitment, performance evaluation, and task allocation), access to essential services, law enforcement, administration of justice, and critical infrastructure. Many AI tools currently in deployment across large organisations fall into these categories.
Deployers of high-risk AI systems (the organisations that put systems developed by others into use within their own operations) carry obligations that go well beyond simply using the system as intended. They must conduct a fundamental rights impact assessment before deploying systems that carry particular risks to individuals. They must implement human oversight measures appropriate to the risks the system presents. They must monitor the system's operation for risks not foreseen at deployment. They must keep the logs the system generates, to the extent those logs are under their control. And, where the system is used in the workplace, they must inform affected workers and their representatives, before putting it into use, that they will be subject to a high-risk AI system.
For compliance functions, the employment and workers management category is particularly salient. AI systems used in performance management, task assignment, or behavioural monitoring of employees are high-risk under the Act. Organisations that have deployed such systems — including tools that use AI to score productivity, assess engagement, or generate recommendations about personnel decisions — should treat this as an immediate compliance review item.
The practical starting point for any compliance function approaching the AI Act is an inventory: which AI systems does the organisation use, in which processes, and what do those systems do to or about people? That inventory, mapped against the Act's risk categories, will identify where the compliance obligations are most substantial and where the review needs to begin. An organisation that does not know what AI it is deploying cannot assess its AI Act exposure.
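A minimal sketch of what one such inventory record might look like is below, written in Python purely for illustration. The field names, the category labels, and the triage rule are assumptions made for this sketch; they are not terminology or thresholds taken from the Act itself, and a real inventory would reflect the organisation's own processes and legal analysis.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Illustrative labels loosely mirroring the Act's risk tiers (assumed wording)."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk (e.g. employment, essential services)"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk"
    UNCLASSIFIED = "not yet assessed"


@dataclass
class AISystemRecord:
    """One inventory entry: what the system is, where it is used, and whom it affects."""
    name: str
    business_process: str          # e.g. "recruitment screening"
    affected_persons: list[str]    # e.g. ["job applicants", "employees"]
    decisions_or_outputs: str      # what the system does to or about people
    provider: str                  # who built or supplies the system
    risk_category: RiskCategory = RiskCategory.UNCLASSIFIED


def review_queue(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the entries to review first: anything not yet assessed,
    plus anything already flagged as high-risk or prohibited."""
    priority = {RiskCategory.PROHIBITED, RiskCategory.HIGH, RiskCategory.UNCLASSIFIED}
    return [record for record in inventory if record.risk_category in priority]


if __name__ == "__main__":
    inventory = [
        AISystemRecord(
            name="CV screening tool",
            business_process="recruitment",
            affected_persons=["job applicants"],
            decisions_or_outputs="ranks and filters applications",
            provider="external vendor",
            risk_category=RiskCategory.HIGH,
        ),
        AISystemRecord(
            name="Meeting transcription assistant",
            business_process="internal note-taking",
            affected_persons=["employees"],
            decisions_or_outputs="produces transcripts; no decisions about people",
            provider="external vendor",
        ),
    ]
    for record in review_queue(inventory):
        print(f"{record.name}: {record.risk_category.value}")
```

Even a structure this simple forces the questions that matter for classification: which process the system sits in, whom it affects, and what it decides or recommends about them.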
AI in compliance processes creates its own category of obligation.
Compliance functions are not only subject to the AI Act through the organisation's use of AI in its operations. They are increasingly deployers of AI within their own processes — transaction monitoring, third-party screening, speak-up channel triage, risk scoring, document review. Where these systems affect individuals — flagging employees, scoring suppliers, triaging reports — they carry obligations under the Act that the compliance function itself must manage.
The governance question this creates is novel: who in the organisation is responsible for AI compliance? The answer is not obvious. Technology teams understand the systems but may not understand the regulatory framework. Legal teams understand the framework but may not understand the systems. Compliance functions understand risk management but are simultaneously among the deployers most exposed to the Act's requirements. Building a cross-functional AI governance structure, before enforcement makes the absence of one visible, is a programme that most organisations urgently need and that few have meaningfully begun.
The AI Act also creates a category of integrity risk that compliance functions are not accustomed to managing: the risk of automated decision-making that is opaque, inconsistent, or discriminatory in ways that no individual in the organisation intended or is aware of. Managing this risk requires the compliance function to develop — or to access — a form of technical literacy that has not previously been part of its core competence. That development is not optional. It is a compliance obligation.
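What that literacy can look like in practice is often mundane. As a deliberately simplified sketch, the Python below compares favourable-outcome rates across groups in a system's logged decisions and flags large gaps. The grouping, the 0.8 threshold, and the function names are illustrative assumptions for this sketch, not measures the Act prescribes; the point is only that this kind of check is well within reach of a compliance function with modest technical support.

```python
from collections import defaultdict


def outcome_rates_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Given (group, favourable_outcome) pairs from a system's logged decisions,
    return the favourable-outcome rate per group."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            favourable[group] += 1
    return {group: favourable[group] / totals[group] for group in totals}


def flag_disparities(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose favourable-outcome rate falls below `threshold` times
    the best-performing group's rate. The 0.8 figure is an illustrative
    convention for this sketch, not a threshold taken from the AI Act."""
    if not rates:
        return []
    best = max(rates.values())
    return [group for group, rate in rates.items() if best > 0 and rate < threshold * best]


if __name__ == "__main__":
    logged = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = outcome_rates_by_group(logged)
    print(rates)                    # group A ≈ 0.67, group B ≈ 0.33
    print(flag_disparities(rates))  # ['B']
```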
This article reflects the compliance advisory perspective of Compliance House and is intended for informational purposes. It does not constitute legal advice. The regulatory landscape described is subject to ongoing development. Organisations seeking guidance on specific obligations should consult qualified legal counsel in the relevant jurisdiction.