The EU AI Act enters full force in August 2026. A practical guide.

The EU AI Act's biggest deadline hits August 2, 2026. If you use ChatGPT, an AI recruitment tool, or any AI in your product, you have obligations. Here is what they are, explained for CTOs, not lawyers.


TL;DR

The EU AI Act classifies AI systems into four risk levels and assigns obligations accordingly. Prohibited practices have been banned since February 2025. The major enforcement date is August 2, 2026, when high-risk system rules, transparency requirements, and fines of up to 35 million Euro or 7% of global turnover become fully applicable. Most European companies using AI tools like ChatGPT or Copilot are "deployers" with real, independent obligations they cannot outsource to their vendor. European AI tools like Mistral and Aleph Alpha hold a structural compliance advantage thanks to open-weight models, EU data hosting, and no CLOUD Act exposure.

The EU AI Act is the world's first comprehensive AI regulation. It entered into force on August 1, 2024. The first rules are already applicable. The big deadline, August 2, 2026, is four months away.

If you use AI in your company, you are probably already subject to it. If you build AI products, you certainly are. And the penalties are serious: up to 35 million Euro or 7% of global annual turnover.

This is not a legal brief. It is a practical guide for CTOs and founders who use AI tools and need to understand what is changing, what they need to do, and how much time they have left.

What is already in force

Two sets of rules have been applicable since February 2, 2025.

Eight AI practices are now banned. These are not future prohibitions. They are law today:

  • AI systems that manipulate behavior through subliminal techniques
  • Systems that exploit vulnerabilities linked to age or disability
  • Social scoring (by public or private actors)
  • Criminal risk prediction based solely on profiling
  • Untargeted scraping of facial images to build recognition databases
  • Emotion recognition in workplaces and schools
  • Biometric categorization inferring race, religion, or sexual orientation
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)

If any of that sounds abstract, here are concrete examples. An HR tool that analyzes candidates' facial expressions during video interviews to assess "cultural fit"? Banned. A SaaS product that builds facial recognition databases by scraping LinkedIn or Instagram photos? Banned. A productivity tool that monitors employees' emotional states through webcam analysis? Banned.

AI literacy is mandatory. Every company using AI, regardless of risk level, must ensure that staff working with AI systems have a sufficient level of understanding of how they work, what they can do, and what the limitations are. There is no mandated certification or curriculum. But companies should document their training efforts. A 2024 AI Office survey found fewer than 25% of EU organizations had formal AI literacy programs.

Since August 2, 2025, providers of general-purpose AI models (GPT-4, Claude, Mistral Large, Gemini) also face obligations: technical documentation, copyright compliance, and a publicly available training data summary. 26 major AI companies signed the GPAI Code of Practice, including OpenAI, Anthropic, Google, Microsoft, and Mistral AI. Meta declined to sign.

The four risk levels, explained with examples you will recognize

The AI Act sorts every AI system into one of four categories.

Unacceptable risk: banned. The eight practices above. Already in force.

High risk: heavy obligations. This is where most of the August 2026 deadline matters. An AI system is high-risk if it falls into one of eight categories in Annex III of the regulation:

  • Biometrics: facial recognition, biometric categorization
  • Critical infrastructure: AI managing electricity grids, water supply, traffic
  • Education: AI deciding admissions, grading, monitoring exams
  • Employment & HR: AI screening CVs, ranking candidates, deciding promotions, monitoring worker performance
  • Essential services: credit scoring, insurance risk assessment, public benefits eligibility
  • Law enforcement: evidence evaluation, recidivism prediction
  • Migration & border: visa/asylum application processing, border identification
  • Justice & democracy: AI assisting courts, AI influencing elections

If your company uses an AI tool for any of these purposes, you have obligations as a deployer. If you build an AI product for any of these purposes, you have much heavier obligations as a provider.
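
A lightweight way to operationalise that table is to record, for every AI use case, which Annex III categories it touches. Below is a minimal Python sketch; the enum and function names are our own illustration, not anything defined by the regulation.

```python
from enum import Enum

class AnnexIIICategory(Enum):
    # Paraphrased from the table above; Annex III is the legal source of truth.
    BIOMETRICS = "biometrics"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure"
    EDUCATION = "education"
    EMPLOYMENT_HR = "employment & HR"
    ESSENTIAL_SERVICES = "essential services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION_BORDER = "migration & border"
    JUSTICE_DEMOCRACY = "justice & democracy"

def risk_level(categories: set[AnnexIIICategory]) -> str:
    # Crude first pass: touching any Annex III category means "treat as high risk"
    # until legal review says otherwise.
    return "high" if categories else "minimal or limited"

# Example: an internal tool that ranks incoming CVs.
print(risk_level({AnnexIIICategory.EMPLOYMENT_HR}))  # -> "high"
print(risk_level(set()))                             # -> "minimal or limited"
```

This is deliberately conservative: the Act contains carve-outs (Article 6(3)) for Annex III systems that only perform narrow procedural or preparatory tasks, but assuming high risk first and arguing your way down with legal review is the safer default.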

Limited risk: transparency required. If your product includes a chatbot, users must be told they are interacting with AI. If it generates images, audio, or video, the content must carry machine-readable markers (watermarking). Deepfakes must be labeled. Emotion recognition systems must inform the person being analyzed.
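
What this looks like in practice depends on your product, but the mechanics are usually simple. Here is a minimal sketch, assuming a chat backend that returns JSON to your frontend; the field names are our own convention, not mandated by the Act, and for generated media you would normally rely on a provenance standard such as C2PA or your provider's built-in watermarking rather than a bare metadata flag.

```python
import json
from datetime import datetime, timezone

def wrap_chat_reply(reply_text: str) -> str:
    # Disclose to the user that they are talking to an AI system, and keep the
    # disclosure in the payload so the frontend cannot silently drop it.
    return json.dumps({
        "reply": reply_text,
        "ai_disclosure": "You are interacting with an AI assistant.",
        "generated_by": "ai",
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

def tag_generated_image(image_metadata: dict) -> dict:
    # Attach a machine-readable marker to generated media. In production, use a
    # provenance standard (e.g. C2PA) or provider-side watermarking instead.
    return {**image_metadata, "ai_generated": True}
```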

Minimal risk: no specific obligations. Spam filters, AI in video games, inventory management, basic recommendation engines. Most AI systems fall here. The only requirement is AI literacy (which applies to everyone).

Are you a provider or a deployer?

This is the most important distinction in the AI Act. It determines your obligations.

You are a deployer if you use an AI system under your authority in a professional context. This is the most common role for European companies. If your team uses ChatGPT for drafting, Claude for analysis, Copilot for code, or any AI-powered SaaS tool, you are a deployer.

You are a provider if you develop an AI system (or have it developed) and place it on the market under your own name or trademark. This includes building a product on top of AI APIs.

Here is where it gets tricky. Using the ChatGPT API to build a recruitment screening tool that you sell under your brand? You are not a deployer of OpenAI's system. You are the provider of a new AI system. And if that system falls into a high-risk category (recruitment does), you carry the full weight of provider obligations: risk management system, technical documentation, conformity assessment, CE marking, post-market monitoring, and incident reporting.

The "accidental provider" trap is real. White-labeling a high-risk AI system, substantially modifying one, or changing its intended purpose so it becomes high-risk, all turn a deployer into a provider.

What deployers must do by August 2, 2026

If you use high-risk AI systems as a deployer, Article 26 gives you 12 specific obligations. The most important ones:

Use the system according to the provider's instructions. Sounds obvious, but it means reading the documentation and respecting stated limitations.

Assign human oversight to competent, trained, authorized people. Someone in your organization must be able to understand, interpret, and override the AI system's outputs. This does not always mean a human reviews every single decision (human-in-the-loop). For many systems, human-on-the-loop (continuous monitoring with ability to intervene) is sufficient. But the people doing it must have the training and authority to act.

Monitor the system and retain logs for at least 6 months. If the system generates automatic logs, you must keep them (a minimal retention sketch follows at the end of this section).

Inform affected individuals. If a person is subject to a decision made or assisted by a high-risk AI system, they must know. Workers and their representatives must be informed before any workplace AI is deployed.

Conduct a Fundamental Rights Impact Assessment for certain use cases (credit scoring, insurance, public services).

Report serious incidents to the provider and authorities within 15 days.

For non-high-risk uses (marketing copy, general productivity, code assistance), your obligations are simpler: ensure AI literacy and disclose AI use when the system interacts with people.
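
For the log-retention obligation above, the bar is lower than it sounds: capture what the system received and produced, who was overseeing it, and keep it for at least six months. A minimal sketch using Python's built-in SQLite module; the table and column names are our own, and your provider's log format will differ.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months

db = sqlite3.connect("ai_decision_log.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS ai_log (
        ts TEXT NOT NULL,            -- ISO 8601 timestamp, UTC
        system TEXT NOT NULL,        -- which AI system produced the output
        input_ref TEXT NOT NULL,     -- pointer to the input (avoid raw personal data here)
        output_summary TEXT NOT NULL,
        reviewed_by TEXT,            -- the human overseer, if any
        overridden INTEGER DEFAULT 0 -- 1 if the human changed the outcome
    )
""")

def log_decision(system: str, input_ref: str, output_summary: str,
                 reviewed_by: str | None = None, overridden: bool = False) -> None:
    db.execute(
        "INSERT INTO ai_log VALUES (?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), system, input_ref,
         output_summary, reviewed_by, int(overridden)),
    )
    db.commit()

def purge_expired() -> None:
    # Delete only what is older than the retention window; never purge early.
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    db.execute("DELETE FROM ai_log WHERE ts < ?", (cutoff,))
    db.commit()
```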

The penalty structure

The fines scale with the severity of the violation:

  • Prohibited AI practices: up to 35 million Euro or 7% of global turnover
  • High-risk and GPAI obligations: up to 15 million Euro or 3% of global turnover
  • Incorrect information to authorities: up to 7.5 million Euro or 1% of global turnover

For large enterprises, the cap is whichever is higher. For SMEs and startups, it is whichever is lower. That is a significant protection.
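
The higher/lower rule is easier to see with numbers. A small illustrative calculation (the turnover figures are made up):

```python
def max_fine_for_prohibited_practice(global_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap = 35_000_000
    turnover_cap = global_turnover_eur * 7 / 100
    # Large enterprises: whichever is higher. SMEs and startups: whichever is lower.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(max_fine_for_prohibited_practice(1_000_000_000, is_sme=False))  # 70,000,000 (7% exceeds 35M)
print(max_fine_for_prohibited_practice(8_000_000, is_sme=True))       # 560,000 (7% is below 35M)
```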

No formal penalties have been imposed yet as of April 2026. But enforcement infrastructure is being built: Finland became the first EU country with fully operational AI Act enforcement in January 2026, Italy enacted a full national AI law with criminal penalties for deepfakes, and Spain's AI supervision agency has been running a regulatory sandbox since 2023.

Why European AI tools have an edge

Using a US AI tool does not make you non-compliant. OpenAI, Anthropic, Google, and Microsoft all signed the GPAI Code of Practice and are investing in compliance. They offer EU-region hosting options.

But European AI providers hold structural advantages that simplify compliance:

No CLOUD Act exposure. Mistral AI, Aleph Alpha, and DeepL are not subject to the US CLOUD Act, which lets US authorities compel data access regardless of hosting location. For regulated industries, this eliminates a fundamental legal conflict.

Open-weight models support auditability. Mistral's open-weight approach lets deployers and regulators inspect model architecture and behavior, directly supporting the AI Act's transparency requirements (Article 13). When a regulator asks how your AI system works, being able to point to inspectable model weights is a strong answer.

Self-hosting preserves sovereignty. You can deploy Mistral models on European infrastructure (OVHcloud, Scaleway) with no data leaving the EU. Aleph Alpha's Pharia models are claimed to be 100% AI Act compliant from launch.

Regulatory proximity. European AI companies are actively shaping the Code of Practice and harmonized standards. Aleph Alpha helped ensure transparency and copyright provisions in the final Code. Mistral's CEO is an active voice in EU AI policy.
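
To ground the self-hosting point: the snippet below is a minimal sketch of querying an open-weight Mistral model served on your own EU infrastructure through an OpenAI-compatible endpoint. The URL and model name are placeholders; inference servers such as vLLM expose this kind of API, but check your own stack's documentation.

```python
from openai import OpenAI

# Point the standard OpenAI client at your own EU-hosted inference server
# instead of a US-hosted API. Prompts and outputs never leave your infrastructure.
client = OpenAI(
    base_url="http://inference.internal.example:8000/v1",  # placeholder URL
    api_key="not-needed-for-a-local-server",
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # placeholder: any open-weight model you host
    messages=[{"role": "user",
               "content": "Summarise our AI Act deployer obligations in five bullets."}],
)
print(response.choices[0].message.content)
```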

For a deeper look at the European AI ecosystem, see our article on European AI tools that can actually replace your US ones. And for the sovereignty dimension, our guide on digital sovereignty explains why data residency alone is not enough.

What you should do now

August 2, 2026 is four months away. Here is a realistic timeline:

Now (April 2026): inventory every AI system your company uses or builds. Include shadow AI (personal ChatGPT accounts used for work). Classify each by risk level and determine whether you are a provider or deployer (a minimal register sketch follows the timeline).

May 2026: conduct impact assessments for high-risk systems. Update vendor contracts to require compliance documentation, incident notification, and cooperation on regulatory inquiries.

June 2026: finalize human oversight procedures. Complete AI literacy training for all staff. Build documentation trails.

July 2026: internal audit and gap remediation.

August 2, 2026: full enforcement begins.
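
To make the inventory step at the top of the timeline concrete, here is a minimal sketch of an AI system register kept as structured data, so risk levels and roles can be reviewed and exported. All field names are our own suggestion, not an official schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemEntry:
    name: str                 # e.g. "ChatGPT Team", "internal CV ranker"
    vendor: str               # who supplies the underlying model or tool
    purpose: str              # what it is actually used for
    role: str                 # "deployer" or "provider"
    risk_level: str           # "minimal", "limited", "high", "prohibited"
    shadow_ai: bool = False   # personal accounts used for work count too
    owners: list[str] = field(default_factory=list)  # who is accountable internally

inventory = [
    AISystemEntry("ChatGPT Team", "OpenAI", "drafting and brainstorming",
                  role="deployer", risk_level="minimal"),
    AISystemEntry("CV screening module", "built in-house on an LLM API",
                  "ranking job applicants", role="provider", risk_level="high",
                  owners=["head-of-engineering"]),
]

# Export so legal, HR, and engineering all review the same list.
print(json.dumps([asdict(e) for e in inventory], indent=2))
```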

One caveat: the EU proposed a Digital Omnibus package in November 2025 that could delay some high-risk obligations to December 2027. The Council and Parliament adopted positions in March 2026 and trilogue negotiations are underway. But the delay is not guaranteed. Plan for August 2 and treat any extension as a bonus.

The EU AI Act Compliance Checker, run by the Commission's AI Act Service Desk, can help you assess where your systems fall. And the European AI tools directory on From Europe, With Love lets you find EU-native alternatives for every category.

The bottom line

The AI Act is not GDPR 2.0. It is more targeted. Most AI systems are minimal risk and require nothing beyond basic literacy. But if you use AI for anything involving people's rights, jobs, finances, or safety, you need to take this seriously.

The good news: if you already care about data sovereignty and use European tools, you are ahead of most. The AI Act rewards transparency, documentation, and auditability, exactly the qualities that European AI providers have built their products around.

Start with your AI inventory. The rest follows from there.

Key Takeaways

  • 1. Eight AI practices are already banned in the EU since February 2025, including workplace emotion recognition, social scoring, and untargeted facial recognition scraping.
  • 2. If your company uses ChatGPT, Claude, or Copilot, you are a "deployer" under the AI Act with independent compliance obligations that your US vendor cannot fulfill for you.
  • 3. If your company builds a product on top of an AI API and sells it under your own name, you are a "provider" with significantly heavier obligations including conformity assessments.
  • 4. Fines reach up to 35 million Euro or 7% of global turnover for prohibited practices, but SMEs pay whichever amount is lower, not higher.
  • 5. European AI tools (Mistral, Aleph Alpha, DeepL) offer structural compliance advantages: open-weight auditability, EU-only data hosting, and no exposure to the US CLOUD Act.

Frequently Asked Questions

Does the EU AI Act apply to my company if I just use ChatGPT for internal tasks?
Yes. If you use AI systems in a professional context within the EU, you are a "deployer" under the AI Act. For internal productivity uses like drafting or brainstorming (minimal risk), your obligations are limited to ensuring AI literacy for your team and disclosing AI use when the system interacts with people. For high-risk uses like screening job candidates or scoring creditworthiness, you face additional obligations including human oversight, monitoring, and informing affected individuals.
What is the difference between a provider and a deployer under the AI Act?
A provider develops an AI system and places it on the market under their name. A deployer uses it in a professional context. Most European companies are deployers. The critical exception: if you build a product on top of an AI API (like OpenAI or Anthropic) and sell it under your brand, you become a provider with significantly heavier obligations, including conformity assessments, CE marking, and post-market monitoring.
What are the fines for non-compliance with the EU AI Act?
Fines reach up to 35 million Euro or 7% of global annual turnover for prohibited AI practices, 15 million Euro or 3% for other violations (high-risk or GPAI obligations), and 7.5 million Euro or 1% for providing incorrect information. For large enterprises, the cap is whichever is higher. For SMEs and startups, it is whichever is lower, providing significant protection for smaller companies.
Do European AI tools have a compliance advantage over US tools under the AI Act?
European AI providers hold structural advantages. Mistral's open-weight models allow inspection of architecture and behavior, directly supporting transparency requirements. EU-based providers are not subject to the US CLOUD Act, eliminating legal conflicts for regulated industries. Self-hosting on European infrastructure (OVHcloud, Scaleway) keeps data fully within EU jurisdiction. That said, US providers like OpenAI and Anthropic have also signed the GPAI Code of Practice and offer EU-region hosting.
Could the August 2026 deadline be delayed?
Possibly. The EU proposed a Digital Omnibus package in November 2025 that could push some high-risk obligations to December 2027. The Council and Parliament adopted their positions in March 2026, and trilogue negotiations are underway. But the delay is not guaranteed and would not affect all obligations. Prohibited practices and AI literacy requirements are already in force. The safe approach: plan for August 2, 2026 and treat any extension as a bonus.

