
The EU AI Act Compliance Guide: What Legal Professionals Need to Know in 2024


Effective Date: January 31, 2026
Last Updated: February 19, 2026
Reading Time: 22 minutes
Document ID: EU-AI-ACT-COMPLIANCE-GUIDE-LEGAL-PROFESSIONALS-2024

Executive Summary

The EU AI Act represents the world's first comprehensive legal framework for artificial intelligence. With requirements phased in through 2027, organisations deploying AI systems face significant compliance obligations—and legal professionals must understand how to advise clients navigating this new regulatory landscape.

On a Monday morning in early 2024, the general counsel of a European financial services company received an urgent message from the CEO: "Our vendor just told us our credit scoring AI is 'high-risk' under the new EU AI Act. What does that mean? Are we compliant? What do we need to do?" The questions were straightforward. The answers—as the general counsel quickly discovered—required understanding the most complex piece of technology regulation the EU has ever enacted.

The EU AI Act, formally adopted in March 2024, establishes a comprehensive legal framework for artificial intelligence that will reshape how organisations develop, deploy, and govern AI systems. For legal professionals, it represents both a significant new compliance domain and a fundamental shift in how technology regulation operates.

This guide provides the detailed understanding necessary to advise clients effectively on EU AI Act compliance.


1. Understanding the EU AI Act Framework

1.1 The Risk-Based Approach

The EU AI Act structures its requirements around a risk-based classification system. Rather than regulating all AI identically, the Act imposes requirements proportionate to the potential harm different AI applications may cause.

The framework establishes four risk categories:

| Risk Level | Regulatory Approach | Examples |
|---|---|---|
| Unacceptable Risk | Prohibited | Social scoring, manipulative AI, real-time biometric identification (with exceptions) |
| High Risk | Strict requirements before market placement | Credit scoring, recruitment AI, medical devices, law enforcement |
| Limited Risk | Transparency obligations | Chatbots, emotion recognition, deepfake generators |
| Minimal Risk | Voluntary codes of practice | AI-enabled video games, spam filters, inventory management |
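As a first-pass illustration only, the four tiers can be modelled as a simple triage function. The keyword lists below are hypothetical placeholders for a compliance tool's internal taxonomy; real classification requires legal analysis of Articles 5 and 6 and Annex III, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict pre-market requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of practice"

# Hypothetical keyword taxonomy for a first-pass inventory sort;
# a lawyer must confirm the classification against the Act itself.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"credit scoring", "recruitment", "medical device"}
LIMITED_RISK_USES = {"chatbot", "deepfake generation"}

def triage(intended_use: str) -> RiskTier:
    """First-pass sorting of an AI system by its declared intended use."""
    use = intended_use.lower()
    if use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("credit scoring").name)  # prints "HIGH"
```

The point of the sketch is the shape of the analysis, not the keywords: classification is a property of the system's intended purpose, and the obligations attach to the tier.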

1.2 Key Definitions: What Counts as an "AI System"?

The Act defines "AI system" as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition is deliberately broad, encompassing:

  • Machine learning systems (supervised, unsupervised, reinforcement learning)
  • Deep learning and neural networks
  • Knowledge-based and expert systems
  • Statistical approaches and Bayesian estimation
  • Search and optimisation methods

Importantly, the definition turns on the capacity to "infer" how to generate outputs: purely deterministic systems that only execute fixed, human-defined rules, without learning or adaptation, fall outside it.

1.3 Territorial Scope: Who Must Comply?

The EU AI Act applies to:

Providers: Organisations that develop AI systems or have AI systems developed for them, and place them on the market or put them into service under their own name or trademark, regardless of where they're established.

Deployers: Organisations using AI systems under their authority, except for personal non-professional activity.

Importers and Distributors: Organisations bringing AI systems into the EU market or making them available.

Like GDPR, the AI Act has extraterritorial reach. Non-EU providers whose AI systems are used within the EU must comply, and must appoint an EU authorised representative.


2. Prohibited AI Practices

Article 5 prohibits certain AI practices outright as presenting unacceptable risk:

2.1 Social Scoring Systems

AI systems used by public authorities (or on their behalf) for evaluating or classifying individuals based on their social behaviour or personal characteristics, where the resulting social score leads to detrimental treatment in unrelated contexts or disproportionate treatment.

Example prohibited: A government system that tracks citizens' public behaviour and restricts access to services based on behavioural scores.

Example permitted: Fraud scoring in financial services (regulated as high-risk, not prohibited).

2.2 Manipulation and Exploitation

AI systems that deploy subliminal techniques or purposefully manipulative/deceptive techniques to materially distort behaviour, causing or likely to cause significant harm.

AI systems that exploit vulnerabilities of individuals due to age, disability, or specific social/economic situation to materially distort behaviour causing significant harm.

Example prohibited: AI targeting gambling content at users showing addiction vulnerabilities.

2.3 Real-Time Remote Biometric Identification

Using real-time remote biometric identification (like facial recognition) in publicly accessible spaces for law enforcement purposes is generally prohibited, with narrow exceptions for serious crimes, missing persons, and terrorism prevention.

Exceptions require prior judicial or independent administrative authorisation except in emergencies.

2.4 Biometric Categorisation

AI systems that categorise individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are prohibited (with exceptions for labelling of lawfully acquired biometric data and law enforcement filtering).

2.5 Emotion Recognition in Certain Contexts

Emotion recognition AI is prohibited in workplaces and educational institutions, except for medical or safety purposes.

2.6 Untargeted Facial Image Scraping

The Act prohibits AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.


3. High-Risk AI Systems: The Core Compliance Challenge

3.1 What Makes an AI System "High-Risk"?

The Act designates two categories of high-risk AI:

Category 1: AI as Safety Component or Product

AI systems that are safety components of products already covered by EU product safety legislation (medical devices, machinery, toys, lifts, vehicles, aircraft, etc.) and require third-party conformity assessment under that legislation.

Category 2: Annex III Listed Applications

AI systems in specified high-risk areas, including:

| Domain | Covered AI Applications |
|---|---|
| Biometrics | Remote biometric identification (not real-time), emotion recognition, biometric categorisation |
| Critical Infrastructure | AI for management/operation of roads, water, gas, electricity, heating |
| Education | AI determining access to education, assessing students, detecting prohibited behaviour |
| Employment | AI for recruitment, promotion decisions, contract termination, task allocation, performance monitoring |
| Essential Services | AI for creditworthiness assessment, insurance pricing, emergency services dispatch |
| Law Enforcement | AI for risk assessment, polygraphs, evidence evaluation, profiling, crime analytics |
| Migration/Border | AI for visa applications, asylum claims, security risk assessment |
| Justice/Democracy | AI influencing judicial decisions, electoral processes |

3.2 High-Risk Requirements: What Providers Must Do

Providers of high-risk AI systems must satisfy extensive requirements before placing systems on the market:

Risk Management System (Article 9)

Establish and maintain a continuous risk management process throughout the AI system lifecycle:

  • Identify and analyse known and reasonably foreseeable risks
  • Estimate and evaluate risks arising from intended use and reasonably foreseeable misuse
  • Evaluate risks based on post-market monitoring data
  • Adopt appropriate risk mitigation measures

Data Governance (Article 10)

Training, validation, and testing datasets must be subject to appropriate data governance practices:

  • Relevant design choices documented
  • Data collection processes transparent
  • Examination for biases that may lead to discrimination
  • Identification of data gaps and shortcomings
  • Appropriate measures for bias mitigation

Technical Documentation (Article 11)

Comprehensive technical documentation demonstrating compliance, including:

  • General system description
  • Detailed elements and development process
  • Monitoring, functioning, and control information
  • Risk management information
  • Applicable standards applied
  • Description of conformity assessment performed

Record-Keeping (Article 12)

High-risk AI systems must be designed to automatically record events (logs) enabling tracing of system functioning:

  • Log retention for a period appropriate to the system's intended purpose
  • Traceability of system functioning through the logs
  • Automatic recording of relevant events throughout the system lifecycle
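For engineering teams, the record-keeping requirement amounts in practice to append-only, timestamped event records. The Act does not prescribe a schema, so the field names below are hypothetical; the sketch shows one minimal approach using JSON Lines.

```python
import json
from datetime import datetime, timezone

def log_event(stream, system_id: str, event_type: str, details: dict) -> dict:
    """Append one timestamped, structured event record to an append-only log.

    A sketch of the kind of automatic event logging Article 12 envisages;
    the schema is illustrative, not mandated by the Act.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "anomaly", "override"
        "details": details,
    }
    stream.write(json.dumps(record) + "\n")  # one JSON object per line
    return record

# Usage: record a single inference event for a hypothetical system ID.
with open("ai_events.jsonl", "a") as f:
    log_event(f, "credit-model-v3", "inference",
              {"input_ref": "app-1029", "decision": "refer_to_human"})
```

An append-only format of this kind supports the traceability objective: each decision can later be matched to the inputs, outputs, and human interventions recorded at the time.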

Transparency (Article 13)

Design and develop to ensure operation is sufficiently transparent for deployers:

  • Instructions for use enabling understanding of system functioning
  • Clear information on capabilities and limitations
  • Performance levels and contexts
  • Accuracy, robustness, and cybersecurity specifications

Human Oversight (Article 14)

Design enabling effective oversight by natural persons during use:

  • Enabling full understanding of system capabilities and limitations
  • Enabling monitoring of operation and detection of anomalies
  • Enabling interpretation of outputs
  • Enabling intervention in operation or system halt

Accuracy, Robustness, and Cybersecurity (Article 15)

Achieve appropriate levels of accuracy, robustness, and cybersecurity:

  • Accuracy documented and communicated to deployers
  • Resilience to errors and inconsistencies
  • Technical redundancy and fail-safe mechanisms
  • Protection against attacks and vulnerabilities

3.3 Conformity Assessment

Before market placement, high-risk AI systems must undergo conformity assessment demonstrating compliance with Act requirements:

Self-Assessment: For most Annex III systems, providers may self-assess compliance using internal control procedures and affix the CE marking once the requirements are satisfied.

Third-Party Assessment: Required for biometric systems and for AI as safety components of products requiring third-party assessment under product legislation.

3.4 Deployer Obligations

Organisations using high-risk AI systems (deployers) have distinct obligations:

  • Use systems in accordance with instructions
  • Ensure input data is relevant and representative for intended purpose
  • Monitor operation based on instructions for use
  • Inform provider of serious incidents or malfunctions
  • Maintain logs generated by the system
  • Conduct fundamental rights impact assessments (for certain deployers)
  • Inform affected individuals they're subject to high-risk AI use

4. Limited Risk Systems: Transparency Requirements

AI systems presenting limited risk face transparency obligations rather than full high-risk compliance:

Chatbots and Conversational AI: Inform users they're interacting with an AI system (unless obvious from circumstances).

Emotion Recognition Systems: Inform individuals that they're being exposed to such a system (where permitted at all).

Biometric Categorisation: Inform individuals exposed to the system (where permitted).

Deepfake Content: Clearly label content as artificially generated or manipulated.

5. General Purpose AI and Foundation Models

The Act includes specific provisions for general-purpose AI models (including large language models):

5.1 All General-Purpose AI Providers Must:

  • Maintain technical documentation
  • Provide information and documentation to downstream providers integrating the model
  • Establish policies to comply with EU copyright law
  • Publish a sufficiently detailed summary of the content used for training

5.2 Systemic Risk Models (Additional Requirements)

Models with high-impact capabilities (defined by compute thresholds or Commission designation) face additional obligations:

  • Model evaluation and adversarial testing
  • Risk assessment and mitigation
  • Incident reporting
  • Cybersecurity protection
  • Energy consumption reporting

6. Implementation Timeline

The EU AI Act phases requirements over several years:

| Date | Requirements Taking Effect |
|---|---|
| 6 months after entry into force (early 2025) | Prohibited practices bans |
| 12 months (mid-2025) | General-purpose AI requirements; governance structure operational |
| 24 months (mid-2026) | Most high-risk requirements; deployer obligations; penalties |
| 36 months (mid-2027) | Annex I high-risk system requirements (product legislation) |
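The schedule can be approximated programmatically from the entry-into-force date (1 August 2024). The sketch below computes rough milestone dates by month arithmetic; the Act's own provisions fix the exact application dates, so treat this as illustrative planning arithmetic only.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to stay valid)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Months-after-entry-into-force offsets from the table above.
PHASES = {
    "Prohibited practices bans": 6,
    "General-purpose AI requirements": 12,
    "Most high-risk requirements; penalties": 24,
    "Annex I high-risk system requirements": 36,
}

for milestone, months in PHASES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {milestone}")
```

Running this reproduces the early-2025 through mid-2027 cadence in the table; for client advice, always cite the precise dates set out in the Act rather than computed approximations.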

7. Enforcement and Penalties

The Act establishes significant penalties for non-compliance:

| Violation | Maximum Penalty (whichever is higher) |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| High-risk system violations | €15 million or 3% of global annual turnover |
| Incorrect information to authorities | €7.5 million or 1.5% of global annual turnover |

SMEs and start-ups face proportionately lower caps.
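Because each cap is the higher of a fixed amount and a turnover percentage, maximum exposure is straightforward to compute. A worked example with a hypothetical turnover figure:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Maximum penalty: the HIGHER of the fixed cap and the turnover
    percentage (for SMEs and start-ups, lower caps apply instead)."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical company with €2 billion global annual turnover committing a
# prohibited-practice violation: 7% of turnover (€140m) exceeds the €35m cap.
fine = max_fine(35_000_000, 0.07, 2_000_000_000)
print(f"€{fine:,.0f}")  # prints "€140,000,000"
```

For large undertakings, the percentage cap will almost always govern, which is why the Act's exposure scales with group-level turnover rather than with the size of the offending business unit.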

8. RUNO's AI Governance Tools

RUNO's AI Governance module helps organisations navigate EU AI Act compliance:

AI System Registry: Centralised inventory of AI systems across the organisation, with risk classification, documentation status, and compliance tracking.

Risk Assessment Framework: Guided assessment tools help determine whether AI systems are prohibited, high-risk, limited risk, or minimal risk under the Act's framework.

Compliance Documentation: Templates and workflows for generating required technical documentation, instructions for use, and conformity assessment records.

Ongoing Monitoring: Tools for monitoring AI system performance, logging events, and identifying issues requiring attention—supporting the continuous risk management the Act requires.

9. Conclusion: Preparing for AI Regulation

The EU AI Act represents a fundamental shift in technology governance. Like GDPR before it, organisations worldwide will need to comply when their AI systems affect EU individuals—regardless of where those organisations are based.

For legal professionals, the Act demands new expertise: understanding AI technology sufficiently to classify systems, advising on compliance requirements, negotiating liability allocation in AI supply chains, and managing regulatory risk in a domain that will only grow more important.

The financial services general counsel who received that Monday morning message now has a path forward. The credit scoring AI is indeed high-risk. Compliance requires documented risk management, data governance, technical documentation, logging capabilities, transparency measures, human oversight mechanisms, and conformity assessment before the Act's requirements take full effect.

The preparation required is substantial. But with appropriate planning, organisations can achieve compliance—and those who treat AI governance as a strategic priority rather than a compliance burden may find competitive advantage in the trust and transparency that rigorous governance enables.

Explore RUNO's AI Governance Suite or request a demonstration to begin your EU AI Act compliance journey.


Disclaimer: This document is provided for informational purposes only and does not constitute legal advice. For specific legal guidance, please consult with a qualified legal professional.

Copyright Notice: © 2026 All rights reserved. Unauthorized reproduction or distribution of this document, or any portion of it, may result in legal action.

