
EU AI Act 2026: What Developers Need to Know

By Dorian Laurenceau

📅 Last reviewed: April 24, 2026. Updated with April 2026 findings and community feedback.

The European Union's AI Act is now in force, representing the world's first comprehensive legal framework for artificial intelligence. With major compliance deadlines arriving throughout 2025-2027, every organization developing or deploying AI in Europe must understand and implement its requirements.

This comprehensive guide breaks down the EU AI Act for developers and technical teams, covering risk classifications, specific obligations, and practical compliance steps.



EU AI Act in year two: what compliance teams have learned vs what the guidance documents said

The AI Act entered into force in August 2024; the first real obligations (prohibitions, literacy) took effect February 2025; GPAI obligations August 2025; high-risk rules staged through 2026-2027. We're now deep enough into implementation that the gap between written guidance and operational reality is visible. Threads on r/GDPR, r/compliance, and the EU-facing business communities show what's actually happening.

Where the Act is working as intended:

  • The prohibited-practices list is straightforward. Social scoring by public authorities, emotion recognition at work and school, biometric categorisation by sensitive attributes: the prohibitions are clear and organisations are complying. See Article 5 of the AI Act.
  • The GPAI model obligations have teeth. The frontier labs (OpenAI, Google, Anthropic, Mistral, Meta for EU-facing releases) are producing the technical documentation, training data summaries, and systemic-risk assessments required. The Code of Practice negotiation produced something workable.
  • Article 4 literacy requirements are reshaping training programmes. See our detailed analysis in AI literacy requirements.

Where implementation is messier than the guidance suggests:

  • High-risk system classification is operationally hard. The list in Annex III is clear in principle; in practice, determining whether a specific system is high-risk requires legal interpretation that most organisations can't do in-house. Expect more enforcement actions as national authorities take positions.
  • The interaction with GDPR is underdocumented. Both regulations apply to AI systems processing personal data. When they conflict (e.g., transparency requirements vs trade secrets), no clear precedence exists yet. Compliance teams are making judgment calls that may be second-guessed.
  • SME exemptions and proportionality are generous on paper and stingy in practice. The text says requirements scale with organisation size and risk; in practice, SMEs are often held to enterprise standards by vendors who won't customise.

What's changed for non-EU companies:

  • Extraterritorial reach is real. If your AI output affects EU residents, you're in scope. This has driven concrete product changes at major US vendors.
  • The Commission's enforcement posture is cooperative but tightening. Year one was guidance-heavy; year two is beginning to see enforcement actions. Waiting for enforcement before acting is a losing strategy.

Resources that matter for operational compliance:

  • The official AI Act text and explainer — the best single reference.
  • CNIL, ICO, and DPA guidance — national regulators are producing sector-specific guidance that often clarifies what the Act left abstract.
  • Industry association templates — for SMEs without dedicated legal teams, trade-association templates are a practical starting point.

The honest framing: the AI Act is a serious regulatory regime that's being enforced with increasing rigour. It's not a checkbox exercise, and "we'll deal with it when enforcement actions start" is a strategy that has already cost some organisations. Getting ahead of it with documented, risk-based compliance programmes is cheaper than reacting.



Timeline of Obligations

The AI Act entered into force in August 2024; its obligations apply in phases:

| Date | What Takes Effect |
|------|-------------------|
| Feb 2, 2025 | Prohibited AI practices banned |
| Aug 2, 2025 | GPAI model obligations begin |
| Aug 2, 2026 | High-risk AI system rules apply |
| Aug 2, 2027 | Embedded AI systems in regulated products |

Current Status (January 2026): Prohibited practices are already illegal. GPAI obligations are in effect. High-risk requirements take effect in 7 months.
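Given those dates, a compliance tracker can compute how many days remain before each obligation bites. A minimal sketch: the dictionary below just restates the timeline table, and the reference date is illustrative.

```python
from datetime import date

# Key AI Act deadlines, taken from the timeline table above.
DEADLINES = {
    "Prohibited practices ban": date(2025, 2, 2),
    "GPAI obligations": date(2025, 8, 2),
    "High-risk system rules": date(2026, 8, 2),
    "Embedded AI in regulated products": date(2027, 8, 2),
}

def days_remaining(deadline: date, today: date) -> int:
    """Days until a deadline; negative means it has already passed."""
    return (deadline - today).days

today = date(2026, 1, 1)  # illustrative reference date
for name, deadline in sorted(DEADLINES.items(), key=lambda kv: kv[1]):
    print(f"{name}: {days_remaining(deadline, today):+d} days")
```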


Risk Classification System

The AI Act creates a risk-based framework with four tiers:

Prohibited AI (Unacceptable Risk)

Banned since February 2, 2025:

  1. Subliminal manipulation: techniques operating beyond a person's consciousness that cause, or are likely to cause, significant harm
  2. Exploitation of vulnerabilities: targeting age, disability, or social situation to materially distort behavior
  3. Social scoring by public authorities: evaluating trustworthiness over time, leading to detrimental treatment
  4. Real-time remote biometric identification: in public spaces for law enforcement (limited exceptions for serious crimes)
  5. Emotion inference in the workplace or education: except for medical or safety purposes
  6. Untargeted facial image scraping: building databases from the internet or CCTV footage
  7. Biometric categorization: inferring sensitive attributes (race, political views, etc.)

High-Risk AI

Fully regulated from August 2, 2026:

Systems considered high-risk fall into two categories:

Category 1: Safety Components

  • AI in medical devices
  • AI in vehicles
  • AI in machinery
  • AI in toys
  • AI in aviation
  • AI in marine equipment

Category 2: Specific Use Cases

  • Biometric identification/categorization
  • Critical infrastructure management
  • Education access and assessment
  • Employment and HR decisions
  • Essential services access (credit, benefits)
  • Law enforcement applications
  • Migration and border control
  • Justice administration

Limited Risk

Systems requiring transparency:

  • Chatbots (must disclose AI nature)
  • Emotion recognition systems
  • Biometric categorization
  • Deepfakes and AI-generated content
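For chatbots, the transparency duty boils down to telling the user they are talking to an AI before the conversation starts. A minimal sketch; the wording and function names are illustrative, not text prescribed by the Act.

```python
# Illustrative limited-risk transparency pattern for a chatbot:
# the AI disclosure is always the first message in the session.

AI_DISCLOSURE = (
    "You are chatting with an AI system; responses are generated "
    "automatically."
)

def start_chat_session() -> list[str]:
    """Open a session whose first message is the AI disclosure."""
    return [AI_DISCLOSURE]  # disclosure precedes any model output

print(start_chat_session()[0])
```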

Minimal Risk

All other AI systems: no specific obligations beyond existing law.


General Purpose AI (GPAI) Requirements

Since August 2025, providers of general-purpose AI models face specific obligations:

All GPAI Models

Requirements for ALL foundation models:

1️⃣ Technical Documentation

  • Training and testing processes
  • Evaluation results with methodology
  • Known limitations

2️⃣ Information for Downstream Providers

  • Capabilities and limitations
  • Intended and unintended uses
  • Integration guidance

3️⃣ Copyright Compliance

  • Policy for respecting copyright
  • Opt-out mechanism compliance (for EU training)
  • Summary of training data

4️⃣ Transparency

  • Publish sufficiently detailed summary
  • EU AI Office template available

Systemic Risk GPAI

Additional requirements apply to models posing systemic risk (training compute of 10^25 FLOPs or more, or designation by the Commission):

1️⃣ Model Evaluation

  • Adversarial testing
  • Red teaming for vulnerabilities
  • Document and mitigate risks

2️⃣ Incident Tracking

  • Monitor and report serious incidents
  • Notify AI Office within 24 hours

3️⃣ Cybersecurity

  • Adequate protection measures
  • Model weight security

4️⃣ Energy Reporting

  • Training compute consumption
  • Energy usage data

Who qualifies? GPT-5, Claude 4, Gemini 3 and similar frontier models.
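The systemic-risk trigger can be expressed as a one-line check. The 10^25 FLOPs threshold comes from the Act; the function and parameter names are illustrative.

```python
# Presumption of systemic risk: training compute at or above 10**25
# FLOPs, or explicit designation by the Commission. Sketch only.

SYSTEMIC_RISK_FLOPS = 10**25

def has_systemic_risk(training_flops: float, designated: bool = False) -> bool:
    """True if the model is presumed to pose systemic risk."""
    return designated or training_flops >= SYSTEMIC_RISK_FLOPS

print(has_systemic_risk(3e25))                    # frontier-scale training run
print(has_systemic_risk(1e24))                    # below the threshold
print(has_systemic_risk(1e24, designated=True))   # designated despite size
```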


High-Risk AI Obligations

Starting August 2, 2026, high-risk AI systems must comply with comprehensive requirements:

Risk Management System

```python
# Conceptual sketch of an Article 9 risk management system. The helper
# methods are illustrative placeholders, not a prescribed API.

class AIRiskManagementSystem:
    def __init__(self, ai_system):
        self.system = ai_system
        self.risks = []
        self.mitigations = []

    def identify_risks(self):
        """Continuously identify known and foreseeable risks, including
        risks from intended use and from reasonably foreseeable misuse."""
        self.risks = self.analyze_system()
        return self.risks

    def implement_mitigations(self, risks):
        """For each identified risk: design a mitigation, test its
        effectiveness, and document the decision."""
        for risk in risks:
            mitigation = self.design_mitigation(risk)
            if self.test_mitigation(mitigation):
                self.mitigations.append(mitigation)
            else:
                self.escalate(risk)

    def monitor_continuously(self):
        """Post-deployment monitoring for new emerging risks, mitigation
        effectiveness, and incident patterns."""

    # Placeholder hooks -- replace with your own analysis tooling.
    def analyze_system(self):
        return []

    def design_mitigation(self, risk):
        return {"risk": risk, "measure": "to be designed"}

    def test_mitigation(self, mitigation):
        return True

    def escalate(self, risk):
        pass
```

Data Governance

Requirements for training, validation, and testing data:

| Requirement | What It Means |
|-------------|---------------|
| Relevance | Data appropriate for the intended purpose |
| Representativeness | Reflects the deployment population |
| Completeness | Sufficient for the use case |
| Bias examination | Actively look for and address biases |
| Gap identification | Document data limitations |
| Error-free | Reasonable measures to ensure data quality |
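The bias-examination requirement can start with something as simple as comparing positive-outcome rates across groups in a labelled dataset. A sketch; the 0.8 ratio is the US "four-fifths" heuristic, used here only as an example alert threshold — the AI Act does not mandate a specific metric.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold * the best rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Toy data: group B's selection rate is half of group A's.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(data)
print(disparity_flags(rates))
```

A real examination would of course use domain-appropriate fairness metrics and feed its findings into the documented data-governance record.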

Technical Documentation

Prepare and maintain documentation covering:

Technical Documentation Requirements:

1️⃣ General Description

  • Intended purpose
  • System architecture
  • Interfaces with other systems
  • Software versions

2️⃣ Development Process

  • Design specifications
  • Decisions made and rationale
  • Training methodologies
  • Testing procedures

3️⃣ Performance Metrics

  • Accuracy measures
  • Robustness testing
  • Bias evaluation
  • Cybersecurity measures

4️⃣ Monitoring System

  • How performance is tracked
  • Logging capabilities
  • Update procedures
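Teams often keep this documentation as a structured, machine-checkable artifact. A hypothetical skeleton mirroring the four areas above — the field names are this article's grouping, not an official schema; the AI Office publishes its own templates.

```python
import json

# Hypothetical technical-documentation skeleton (illustrative keys).
TECH_DOC_TEMPLATE = {
    "general_description": {
        "intended_purpose": "",
        "architecture": "",
        "interfaces": [],
        "software_versions": {},
    },
    "development_process": {
        "design_specifications": "",
        "decisions_and_rationale": [],
        "training_methodology": "",
        "testing_procedures": [],
    },
    "performance_metrics": {
        "accuracy": None,
        "robustness_tests": [],
        "bias_evaluation": "",
        "cybersecurity_measures": [],
    },
    "monitoring": {
        "performance_tracking": "",
        "logging": "",
        "update_procedure": "",
    },
}

print(json.dumps(TECH_DOC_TEMPLATE, indent=2)[:60])
```

Versioning this file alongside the system's code keeps the documentation in sync with what is actually deployed.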

Transparency for Users

Deployers (those who use high-risk AI) must:

User Transparency Requirements:

1️⃣ Inform Affected Persons

  • That they are subject to high-risk AI
  • Purpose of the system
  • How decisions are made (to extent possible)

2️⃣ Meaningful Explanation

  • When requested by affected person
  • Explain the decision's main reasoning
  • Within 30 days of request

3️⃣ Human Oversight

  • Persons assigned to oversee
  • Competent to perform oversight
  • Authority to override or reject

Human Oversight

High-risk AI must be designed for effective human oversight:

Human Oversight Design Requirements:

1️⃣ Interface Design

  • Enable operator to understand capabilities/limitations
  • Correctly interpret system output
  • Override or interrupt operation

2️⃣ Operator Capabilities

  • Decide not to use in particular situation
  • Disregard or reverse system output
  • Intervene or stop operation

3️⃣ Documentation

  • Clear instructions for operators
  • Training requirements specified
  • Escalation procedures defined
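One common design pattern for these requirements is a confidence gate: low-confidence outputs are routed to a reviewer who can accept, override, or halt the system. A sketch; the dataclass, threshold, and function names are design assumptions, not mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float
    reviewed_by_human: bool = False

def human_review(decision: Decision) -> Decision:
    # Placeholder: a real system would surface the case in a review UI
    # where the operator can disregard, reverse, or stop the output.
    decision.reviewed_by_human = True
    return decision

def oversight_gate(decision: Decision, threshold: float = 0.9) -> Decision:
    """Route low-confidence decisions to a human before they take effect."""
    if decision.confidence < threshold:
        decision = human_review(decision)
    return decision

d = oversight_gate(Decision(output="reject applicant", confidence=0.62))
print(d.reviewed_by_human)  # low confidence -> routed to a human
```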

Conformity Assessment

Before deployment, high-risk systems must undergo:

  • Self-assessment for most high-risk categories
  • Third-party assessment for biometric identification and critical infrastructure

AI Literacy Obligation

New requirement effective February 2025:

Article 4 - AI Literacy:

Organizations deploying AI must ensure staff have sufficient AI literacy to:

  • Understand AI capabilities and limitations
  • Use AI systems appropriately
  • Make informed decisions about AI outputs
  • Recognize potential risks and biases

This applies to all organizations deploying AI, not just high-risk systems.

Implementation:

  • Training programs for AI users
  • Documentation and guidelines
  • Competency assessments
  • Regular updates as technology evolves

Practical Compliance Steps

Step 1: Classify Your AI Systems

For each AI system, determine:

  1. Is it prohibited? (Stop immediately if yes)
  2. Is it high-risk? (Full compliance by Aug 2026)
  3. Is it limited risk? (Transparency requirements)
  4. Is it minimal risk? (Voluntary codes of practice)

For GPAI models:

  1. Are you a provider? (Documentation + info requirements)
  2. Does it have systemic risk? (Additional requirements)
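The triage above can be sketched as a decision function. The category sets here paraphrase this article's summaries and are deliberately incomplete; real classification requires legal review of Article 5 and Annex III.

```python
# Illustrative first-pass triage -- NOT a substitute for legal analysis.
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "border_control"}
LIMITED_RISK_USES = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> str:
    """Map a use case to its AI Act risk tier, checked in order of severity."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high-risk"
    if use_case in LIMITED_RISK_USES:
        return "limited-risk"
    return "minimal-risk"

print(classify("hiring"))   # falls under employment decisions
print(classify("chatbot"))  # transparency requirements only
```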

Step 2: Gap Analysis

Compare current practices to the requirements. An example gap analysis:

| Requirement | Current State | Gap | Priority |
|-------------|---------------|-----|----------|
| Risk management | Informal | Major | High |
| Technical docs | Partial | Moderate | High |
| Data governance | Basic | Major | High |
| Human oversight | Exists | Minor | Medium |
| Transparency | Ad hoc | Major | High |
| AI literacy | None | Major | Medium |

Step 3: Prioritized Implementation

Priority 1 (Immediate):

  • Ensure no prohibited practices
  • Begin risk management system design
  • Start documentation

Priority 2 (Q1-Q2 2026):

  • Complete technical documentation
  • Implement data governance
  • Establish human oversight

Priority 3 (By August 2026):

  • Conformity assessment
  • Registration in EU database
  • Full compliance verification

Penalties

Non-compliance carries significant fines:

| Violation | Maximum Fine |
|-----------|--------------|
| Prohibited AI practices | €35M or 7% of global turnover |
| Most other obligations (including high-risk) | €15M or 3% of global turnover |
| Supplying incorrect information to authorities | €7.5M or 1% of global turnover |
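For large companies the cap is whichever is higher of the fixed amount and the turnover percentage. A quick sketch for the prohibited-practices tier, using the €35M / 7% figures from the table above:

```python
def max_fine(turnover_eur: float, fixed_eur: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Cap for the prohibited-practices tier: the higher of the fixed
    amount and the percentage of worldwide annual turnover."""
    return max(fixed_eur, pct * turnover_eur)

print(f"€{max_fine(100e6):,.0f}")  # smaller firm: fixed cap dominates
print(f"€{max_fine(5e9):,.0f}")    # large firm: 7% of turnover dominates
```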

For SMEs and startups: the lower of the fixed amount and the percentage applies.


Resources

Official Sources

  • EU AI Office: Central coordinating authority
  • AI Act text: Official Journal of the European Union
  • Guidance documents: AI Office templates and guides

Implementation Support

  • Regulatory sandboxes in member states
  • AI Pact for voluntary early compliance
  • Standards development (ISO, CEN/CENELEC)

In Brief

  1. The EU AI Act is active now: prohibited practices are banned and GPAI rules are in effect

  2. Risk classification determines obligations: know which category your AI falls into

  3. High-risk requirements begin August 2026: seven months to achieve full compliance

  4. GPAI providers face specific obligations: documentation, transparency, and copyright compliance

  5. AI literacy is now mandatory: staff must be trained to use AI appropriately

  6. Penalties are significant: up to €35M or 7% of global turnover

  7. Start compliance now: the timeline is tight for comprehensive requirements


The EU AI Act represents a new era of AI governance. Understanding the regulatory landscape, and the ethical principles behind it, is essential for anyone building or deploying AI systems.

In our Module 8, AI Ethics & Safety, you'll learn:

  • The global AI regulatory landscape
  • Ethical frameworks for AI development
  • Bias detection and mitigation
  • Transparency and explainability principles
  • Human oversight design patterns
  • Risk assessment methodologies

These skills prepare you for responsible AI development in a regulated world.

Explore Module 8: AI Ethics & Safety


Dorian Laurenceau

Full-Stack Developer & Learning Designer

Full-stack web developer and learning designer. I spent 4 years as a freelance full-stack developer and 4 years teaching React, JavaScript, HTML/CSS and WordPress to adult learners. Today I design learning paths in web development and AI, grounded in learning science. I founded learn-prompting.fr to make AI practical and accessible, and built the Bluff app to gamify political transparency.

Prompt Engineering · LLMs · Full-Stack Development · Learning Design · React
Published: January 30, 2026 · Updated: April 24, 2026

FAQ

What is the EU AI Act?

The EU AI Act is the world's first comprehensive AI regulation. It categorizes AI systems by risk level (unacceptable, high, limited, minimal) and imposes corresponding requirements.

When does the EU AI Act take effect?

The Act entered into force August 2024. Key deadlines: AI literacy (Feb 2025), prohibited AI (Feb 2025), GPAI rules (Aug 2025), high-risk systems (Aug 2026-2027).

What AI systems are prohibited under the EU AI Act?

Banned systems include social scoring, predictive policing, emotion recognition in workplaces/schools, untargeted facial recognition scraping, and AI exploiting vulnerabilities.

Does the EU AI Act apply to non-EU companies?

Yes. The Act applies to anyone placing AI systems on the EU market or whose AI outputs affect EU residents, regardless of where the company is based.