
EU AI Act 2026: What Developers Need to Know

By Learnia Team


This article is written in English. Our training modules are available in French.

The European Union's AI Act is now in force, representing the world's first comprehensive legal framework for artificial intelligence. With major compliance deadlines arriving in phases throughout 2025-2027, every organization developing or deploying AI in Europe must understand and implement the requirements.

This comprehensive guide breaks down the EU AI Act for developers and technical teams, covering risk classifications, specific obligations, and practical compliance steps.


Timeline of Obligations

The AI Act's obligations apply in phases:

Date         | What Takes Effect
-------------|-------------------------------------------
Feb 2, 2025  | Prohibited AI practices banned
Aug 2, 2025  | GPAI model obligations begin
Aug 2, 2026  | High-risk AI system rules apply
Aug 2, 2027  | Embedded AI systems in regulated products

Current Status (January 2026): Prohibited practices are already illegal. GPAI obligations are in effect. High-risk requirements take effect in 7 months.
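
If you track these dates in your own planning scripts, a trivial countdown sketch (milestone dates taken from the table above):

# Days remaining until each AI Act milestone (dates from the table above).
from datetime import date

MILESTONES = {
    "High-risk AI system rules apply": date(2026, 8, 2),
    "Embedded AI systems in regulated products": date(2027, 8, 2),
}

today = date.today()
for label, deadline in MILESTONES.items():
    remaining = (deadline - today).days
    status = f"{remaining} days remaining" if remaining > 0 else "in effect"
    print(f"{label}: {status}")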


Risk Classification System

The AI Act creates a risk-based framework with four tiers:

Prohibited AI (Unacceptable Risk)

Banned since February 2, 2025:

  1. Subliminal manipulation — AI using subliminal techniques beyond a person's consciousness that cause or risk significant harm
  2. Exploitation of vulnerabilities — Targeting age, disability, or social or economic situation to materially distort behavior
  3. Social scoring — Evaluating or classifying people based on social behavior or personal characteristics, leading to detrimental or unjustified treatment
  4. Real-time remote biometric identification — In publicly accessible spaces for law enforcement (limited exceptions for serious crimes)
  5. Emotion inference in workplace/education — Except for medical or safety purposes
  6. Untargeted facial image scraping — Building facial recognition databases from the internet or CCTV footage
  7. Biometric categorization — Inferring sensitive attributes (race, political opinions, etc.)

High-Risk AI

Fully regulated from August 2, 2026:

Systems considered high-risk fall into two categories:

Category 1: Safety Components

  • AI in medical devices
  • AI in vehicles
  • AI in machinery
  • AI in toys
  • AI in aviation
  • AI in marine equipment

Category 2: Specific Use Cases

  • Biometric identification/categorization
  • Critical infrastructure management
  • Education access and assessment
  • Employment and HR decisions
  • Essential services access (credit, benefits)
  • Law enforcement applications
  • Migration and border control
  • Justice administration

Limited Risk

Systems requiring transparency:

  • Chatbots (must disclose AI nature)
  • Emotion recognition systems
  • Biometric categorization
  • Deepfakes and AI-generated content

Minimal Risk

All other AI systems—no specific obligations beyond existing law.


General Purpose AI (GPAI) Requirements

Since August 2025, providers of general-purpose AI models face specific obligations:

All GPAI Models

Requirements for ALL foundation models:

1️⃣ Technical Documentation

  • Training and testing processes
  • Evaluation results with methodology
  • Known limitations

2️⃣ Information for Downstream Providers

  • Capabilities and limitations
  • Intended and unintended uses
  • Integration guidance

3️⃣ Copyright Compliance

  • Policy for respecting copyright
  • Compliance with text-and-data-mining (TDM) opt-outs
  • Summary of training data

4️⃣ Transparency

  • Publish sufficiently detailed summary
  • EU AI Office template available
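
One lightweight way to track these four areas is a structured record per model. A minimal sketch, with field names of our own invention (the AI Office template remains the authoritative format for the published summary):

# Illustrative checklist for GPAI provider documentation.
# Field names are ours, not an official template.
from dataclasses import dataclass, field

@dataclass
class GPAIDocumentation:
    model_name: str
    training_process: str = ""         # data sources, methods, compute
    evaluation_results: dict = field(default_factory=dict)  # benchmark -> result
    known_limitations: list = field(default_factory=list)
    downstream_info: str = ""          # capabilities, uses, integration guidance
    copyright_policy: str = ""         # incl. TDM opt-out handling
    training_data_summary: str = ""    # published summary

    def missing_items(self) -> list:
        """List fields still empty, as a simple completeness check."""
        return [name for name, value in vars(self).items()
                if name != "model_name" and not value]

print(GPAIDocumentation("example-model").missing_items())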

Systemic Risk GPAI

Additional requirements for models with systemic risk (presumed above a 10^25 FLOP training-compute threshold, or by Commission designation):

1️⃣ Model Evaluation

  • Adversarial testing
  • Red teaming for vulnerabilities
  • Document and mitigate risks

2️⃣ Incident Tracking

  • Monitor, document, and report serious incidents
  • Notify the AI Office (and relevant national authorities) without undue delay

3️⃣ Cybersecurity

  • Adequate protection measures
  • Model weight security

4️⃣ Energy Reporting

  • Training compute consumption
  • Energy usage data

Who qualifies? GPT-5, Claude 4, Gemini 3 and similar frontier models.
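
Whether a model is likely to cross that 10^25 FLOP threshold can be estimated before training with the widely used compute ≈ 6 × parameters × training tokens heuristic. A back-of-the-envelope sketch, assuming that approximation (it is not part of the Act, and official compute accounting may differ):

# Rough training-compute estimate via the common 6 * N * D approximation.
def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; the Act's presumption threshold

# Hypothetical model: 1T parameters trained on 15T tokens
flops = estimated_training_flops(params=1e12, tokens=15e12)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {flops >= SYSTEMIC_RISK_THRESHOLD}")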


High-Risk AI Obligations

Starting August 2, 2026, high-risk AI systems must comply with comprehensive requirements:

Risk Management System

# Conceptual sketch of a risk management system (Article 9). The helper
# logic below is illustrative, not a real compliance API.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: str          # e.g. "low", "medium", "high"
    mitigated: bool = False

class AIRiskManagementSystem:
    def __init__(self, ai_system):
        self.system = ai_system
        self.risks = []
        self.mitigations = []

    def identify_risks(self):
        """
        Continuous process to identify:
        - Known and foreseeable risks
        - Risks from intended use
        - Risks from reasonably foreseeable misuse
        """
        # Placeholder analysis: in practice, combine hazard analysis,
        # test results, and post-market incident data.
        self.risks = [Risk("discriminatory output for a subgroup", "high")]
        return self.risks

    def implement_mitigations(self, risks):
        """
        For each identified risk:
        - Design mitigation measures
        - Test effectiveness
        - Document decisions
        """
        for risk in risks:
            self.mitigations.append(f"mitigation for: {risk.description}")
            risk.mitigated = self.test_mitigation(risk)
            if not risk.mitigated:
                self.escalate(risk)

    def test_mitigation(self, risk):
        # Stub: replace with real effectiveness testing.
        return risk.severity != "high"

    def escalate(self, risk):
        # Stub: route unmitigated risks to human decision-makers.
        print(f"ESCALATE: unmitigated {risk.severity} risk: {risk.description}")

    def monitor_continuously(self):
        """
        Post-deployment monitoring for:
        - New emerging risks
        - Mitigation effectiveness
        - Incident patterns
        """
        self.implement_mitigations(self.identify_risks())

Data Governance

Requirements for training, validation, and testing data:

Requirement        | What It Means
-------------------|----------------------------------------------------
Relevance          | Data appropriate for the intended purpose
Representativeness | Reflects the deployment population
Completeness       | Sufficient for the use case
Bias examination   | Actively look for and address biases (sketched below)
Gap identification | Document data limitations
Error-free         | Reasonable measures to ensure quality
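
Bias examination in practice depends on the system, but a common starting point is comparing outcome rates across subgroups. A sketch on made-up data; the disparity-ratio screen is illustrative, not an AI Act requirement:

# Minimal bias examination: compare positive-outcome rates across subgroups.
from collections import defaultdict

def positive_rates(records):
    """records: iterable of (group, outcome) pairs with outcome 0 or 1."""
    counts = defaultdict(lambda: [0, 0])   # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(data)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio: {ratio:.2f}")   # flag for review if low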

Technical Documentation

Prepare and maintain documentation covering:

Technical Documentation Requirements:

1️⃣ General Description

  • Intended purpose
  • System architecture
  • Interfaces with other systems
  • Software versions

2️⃣ Development Process

  • Design specifications
  • Decisions made and rationale
  • Training methodologies
  • Testing procedures

3️⃣ Performance Metrics

  • Accuracy measures
  • Robustness testing
  • Bias evaluation
  • Cybersecurity measures

4️⃣ Monitoring System

  • How performance is tracked
  • Logging capabilities
  • Update procedures
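
For the "logging capabilities" item, a minimal sketch of append-only, machine-readable event logging; the schema and field names are our own, not a prescribed format:

# Sketch of structured event logging for traceability.
import json, time

def log_event(system_id: str, event: str, details: dict,
              path: str = "ai_events.log") -> None:
    """Append a timestamped, machine-readable record of a system event."""
    record = {"timestamp": time.time(), "system_id": system_id,
              "event": event, "details": details}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_event("credit-scorer-v2", "prediction",
          {"input_hash": "abc123", "score": 0.72, "model_version": "2.4.1"})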

Transparency for Users

Deployers (those who use high-risk AI) must:

User Transparency Requirements:

1️⃣ Inform Affected Persons

  • That they are subject to high-risk AI
  • Purpose of the system
  • How decisions are made (to extent possible)

2️⃣ Meaningful Explanation

  • When requested by affected person
  • Explain the decision's main reasoning
  • Within 30 days of request (a sample record is sketched below)

3️⃣ Human Oversight

  • Persons assigned to oversee
  • Competent to perform oversight
  • Authority to override or reject
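
A sketch of the kind of record a deployer might assemble to answer an explanation request; the fields and the loan scenario are hypothetical, not a prescribed format:

# Illustrative structure for an on-request explanation.
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    decision: str           # what was decided
    main_factors: list      # plain-language reasons, most important first
    system_purpose: str
    contest_contact: str    # who to contact to contest the decision

explanation = DecisionExplanation(
    decision="loan application declined",
    main_factors=["debt-to-income ratio above policy limit",
                  "credit history shorter than 12 months"],
    system_purpose="creditworthiness assessment",
    contest_contact="appeals@example.com",
)
print(explanation.main_factors[0])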

Human Oversight

High-risk AI must be designed for effective human oversight:

Human Oversight Design Requirements:

1️⃣ Interface Design

  • Enable operator to understand capabilities/limitations
  • Correctly interpret system output
  • Override or interrupt operation

2️⃣ Operator Capabilities

  • Decide not to use in particular situation
  • Disregard or reverse system output
  • Intervene or stop operation

3️⃣ Documentation

  • Clear instructions for operators
  • Training requirements specified
  • Escalation procedures defined
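
A toy illustration of these operator capabilities: every output passes a decision point where a human can accept the output, override it, or halt the operation. The structure is ours, not mandated by the Act:

# Sketch of an oversight gate around a model output.
def oversight_gate(prediction: dict, operator_decision: str) -> dict:
    """operator_decision is one of: 'accept', 'override', 'stop'."""
    if operator_decision == "accept":
        return prediction
    if operator_decision == "override":
        return {"label": "manual_review", "overridden": True}
    if operator_decision == "stop":
        raise RuntimeError("Operation halted by human overseer")
    raise ValueError(f"Unknown operator decision: {operator_decision}")

print(oversight_gate({"label": "reject", "score": 0.91}, "override"))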

Conformity Assessment

Before deployment, high-risk systems must undergo:

  • Self-assessment for most high-risk categories
  • Third-party assessment for biometric identification and critical infrastructure

AI Literacy Obligation

New requirement effective February 2025:

Article 4 - AI Literacy:

Organizations providing or deploying AI must ensure their staff have sufficient AI literacy to:

  • Understand AI capabilities and limitations
  • Use AI systems appropriately
  • Make informed decisions about AI outputs
  • Recognize potential risks and biases

This applies to all organizations providing or deploying AI, not only those operating high-risk systems.

Implementation:

  • Training programs for AI users
  • Documentation and guidelines
  • Competency assessments
  • Regular updates as technology evolves

Practical Compliance Steps

Step 1: Classify Your AI Systems

For each AI system, determine (see the triage sketch after these lists):

  1. Is it prohibited? (Stop immediately if yes)
  2. Is it high-risk? (Full compliance by Aug 2026)
  3. Is it limited risk? (Transparency requirements)
  4. Is it minimal risk? (Voluntary codes of practice)

For GPAI models:

  1. Are you a provider? (Documentation + info requirements)
  2. Does it have systemic risk? (Additional requirements)
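
These questions can be encoded as a first-pass triage. A deliberately naive sketch; the flags are our own shorthand and no substitute for legal analysis against Annexes I and III:

# Naive classification triage mirroring the questions above.
def classify(system: dict) -> str:
    if system.get("prohibited_practice"):
        return "prohibited: stop immediately"
    if system.get("safety_component") or system.get("annex_iii_use_case"):
        return "high-risk: full compliance by Aug 2026"
    if system.get("interacts_with_humans") or system.get("generates_content"):
        return "limited risk: transparency requirements"
    return "minimal risk: voluntary codes of practice"

print(classify({"annex_iii_use_case": True}))   # -> high-risk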

Step 2: Gap Analysis

Compare current practices to requirements. An illustrative gap analysis:

Requirement     | Current State | Gap      | Priority
----------------|---------------|----------|---------
Risk management | Informal      | Major    | High
Technical docs  | Partial       | Moderate | High
Data governance | Basic         | Major    | High
Human oversight | Exists        | Minor    | Medium
Transparency    | Ad hoc        | Major    | High
AI literacy     | None          | Major    | Medium

Step 3: Prioritized Implementation

Priority 1 (Immediate):

  • Ensure no prohibited practices
  • Begin risk management system design
  • Start documentation

Priority 2 (Q1-Q2 2026):

  • Complete technical documentation
  • Implement data governance
  • Establish human oversight

Priority 3 (By August 2026):

  • Conformity assessment
  • Registration in EU database
  • Full compliance verification

Penalties

Non-compliance carries significant fines:

Violation             | Maximum Fine
----------------------|----------------------------------
Prohibited AI         | €35M or 7% of global turnover
High-risk obligations | €15M or 3% of global turnover
Other provisions      | €7.5M or 1.5% of global turnover
Incorrect information | €7.5M or 1% of global turnover

For the tiers above, the applicable cap is the higher of the fixed amount and the turnover percentage. For SMEs and startups, the lower of the two applies.
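
A quick sketch of that arithmetic, with a hypothetical €2B turnover:

# Applicable cap: higher of the two amounts; lower for SMEs/startups.
def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             sme: bool = False) -> float:
    amounts = (fixed_eur, pct * turnover_eur)
    return min(amounts) if sme else max(amounts)

print(max_fine(35e6, 0.07, 2e9))             # 140000000.0
print(max_fine(35e6, 0.07, 2e9, sme=True))   # 35000000.0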


Resources

Official Sources

  • EU AI Office: Central coordinating authority
  • AI Act text: Official Journal of the European Union
  • Guidance documents: AI Office templates and guides

Implementation Support

  • Regulatory sandboxes in member states
  • AI Pact for voluntary early compliance
  • Standards development (ISO, CEN/CENELEC)

Key Takeaways

  1. The EU AI Act is active now—prohibited practices banned, GPAI rules in effect

  2. Risk classification determines obligations—know which category your AI falls into

  3. High-risk requirements begin August 2026—seven months to achieve full compliance

  4. GPAI providers face specific obligations—documentation, transparency, copyright compliance

  5. AI literacy is now mandatory—staff must be trained to use AI appropriately

  6. Penalties are significant—up to €35M or 7% of global turnover

  7. Start compliance now—the timeline is tight for comprehensive requirements


Navigate AI Regulation and Ethics

The EU AI Act represents a new era of AI governance. Understanding the regulatory landscape—and the ethical principles behind it—is essential for anyone building or deploying AI systems.

In our Module 8 — AI Ethics & Safety, you'll learn:

  • The global AI regulatory landscape
  • Ethical frameworks for AI development
  • Bias detection and mitigation
  • Transparency and explainability principles
  • Human oversight design patterns
  • Risk assessment methodologies

These skills prepare you for responsible AI development in a regulated world.

Explore Module 8: AI Ethics & Safety

GO DEEPER

Module 8 — Ethics, Security & Compliance

Navigate AI risks, prompt injection, and responsible usage.