106 days. That’s how long you have until the EU AI Act’s high-risk rules take full effect on August 2, 2026. If you’re building, deploying, or selling AI systems that touch the EU market, this deadline is not optional — and the penalties for non-compliance can reach 7% of your company’s global annual revenue.
The EU AI Act is the world’s first comprehensive AI law. It doesn’t care where your company is headquartered. If your AI system reaches people in the EU, you’re in scope. Here’s exactly what’s changing, what’s already live, and what you need to do before August.
For a broader overview of the regulation, see our EU AI Act guide for developers.
What’s already in effect
The EU AI Act is being enforced in stages. Since February 2, 2025, the following AI practices are outright banned:
- Social scoring systems (by public authorities and private actors alike)
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- AI that exploits vulnerabilities of specific groups (age, disability, or social or economic situation)
- AI that uses subliminal manipulation to cause harm
- Untargeted scraping of facial images for facial recognition databases
- Emotion recognition in workplaces and educational institutions (with limited exceptions)
If your system does any of the above, it’s already illegal in the EU. No grace period.
What changes August 2, 2026
This is the big one. On August 2, the high-risk AI rules become enforceable. These are the obligations that require real engineering work — conformity assessments, technical documentation, risk management systems, and ongoing monitoring.
This affects any AI system classified as high-risk. It’s also when the Commission’s power to enforce the general-purpose AI (GPAI) model obligations kicks in (more on those below).
Risk classification explained
The EU AI Act uses a four-tier risk system. Your obligations depend entirely on where your system falls:
| Risk level | Examples | Obligations |
|---|---|---|
| Unacceptable | Social scoring, manipulative AI, real-time biometric ID in public | Banned outright (since Feb 2025) |
| High-risk | Healthcare diagnostics, employment screening, credit scoring, education assessment, law enforcement, critical infrastructure | Conformity assessments, technical documentation, risk management, data governance, human oversight, accuracy & robustness requirements |
| Limited risk | Chatbots, deepfake generators, emotion recognition (non-banned uses) | Transparency obligations — users must know they're interacting with AI |
| Minimal risk | Spam filters, AI in video games, inventory management | No specific obligations |
For most developers, the high-risk category is where the real work is. If your AI system is used in any of these domains, even indirectly, you likely need to comply.
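A useful first step is an explicit inventory that maps each of your systems to a tier. Here’s a minimal sketch in Python; the domain names and tier mapping are illustrative assumptions, not the Act’s authoritative annex lists, and defaulting unknowns to high-risk is a deliberately conservative design choice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned since Feb 2025"
    HIGH = "full conformity obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping only -- the authoritative lists live in the Act's
# annexes and should be confirmed by counsel, not by a lookup table.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "healthcare_diagnostics": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "education_assessment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(domain: str) -> RiskTier:
    """Return the assumed risk tier for a system's domain.

    Unknown domains default to HIGH so nothing slips through
    unreviewed -- err toward the stricter obligation set.
    """
    return DOMAIN_TIERS.get(domain, RiskTier.HIGH)

if __name__ == "__main__":
    for name in ("credit_scoring", "chatbot", "warehouse_robotics"):
        print(f"{name}: {classify(name).name}")
```

Defaulting unknown domains to HIGH means a new system can’t silently skip review; someone has to consciously downgrade it.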
What developers (providers) must do
Under the EU AI Act, a provider is anyone who develops an AI system or has one developed on their behalf and places it on the EU market. If that’s you, here’s your compliance checklist before August 2:
- Risk management system — Implement a continuous process to identify, analyze, and mitigate risks throughout the AI system’s lifecycle. Not a one-time assessment — this must be ongoing.
- Data governance — Ensure training, validation, and testing datasets meet quality criteria. Document data sources, collection methods, and preprocessing steps.
- Technical documentation — Maintain detailed docs covering system architecture, design choices, training procedures, performance metrics, and known limitations. This must be ready before placing the system on the market.
- Record-keeping and logging — Build automatic logging capabilities into your system. Logs must be sufficient to trace the system’s operation and identify risks post-deployment (a sketch follows this checklist).
- Transparency and user information — Provide clear instructions for deployers, including intended purpose, performance levels, known risks, and human oversight requirements.
- Human oversight — Design the system so humans can effectively oversee its operation. This means interpretable outputs, ability to override or halt the system, and clear escalation paths.
- Accuracy, robustness, and cybersecurity — Meet appropriate levels of accuracy and robustness. Implement protections against adversarial attacks, data poisoning, and other security threats.
- Conformity assessment — Complete the required conformity assessment procedure before placing your system on the market. For some high-risk categories, this requires a notified body (an accredited third-party auditor).
- EU Declaration of Conformity — Draw up a written declaration and keep it available for national authorities for 10 years.
- CE marking — Affix the CE marking to your high-risk AI system or its documentation.
This is a significant engineering and documentation effort. If you haven’t started, 106 days is tight.
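To make the record-keeping and human oversight items concrete, here’s a minimal sketch of a structured, append-only audit log for a high-risk decision system. The field names, the audit.jsonl path, and the override flag are illustrative assumptions; the Act doesn’t prescribe a log format:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "audit.jsonl"  # assumed location; one JSON record per line

def log_decision(model_version: str, input_payload: dict,
                 decision: str, confidence: float,
                 human_override: bool = False) -> None:
    """Append one traceable record per automated decision.

    Hashing the input instead of storing it raw keeps the log useful
    for tracing without duplicating personal data (GDPR still applies).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "confidence": confidence,
        "human_override": human_override,  # supports the oversight item above
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a human reviewer overrides the system's recommendation
log_decision("screening-model-2.3", {"applicant_id": "A-1042"},
             decision="rejected", confidence=0.91, human_override=True)
```

The property that matters is traceability: given an incident, you can reconstruct which model version produced which decision, and whether a human intervened.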
What deployers must do
A deployer is any organization using an AI system under its authority (except for personal, non-professional use). If you’re deploying someone else’s high-risk AI system, your obligations are lighter but still real:
- Use the system according to the provider’s instructions — Don’t repurpose a system outside its intended use case.
- Human oversight — Assign competent individuals to oversee the system’s operation.
- Monitor for risks — Watch for risks to health, safety, and fundamental rights during operation. Report serious incidents to the provider and relevant authorities.
- Keep logs — Retain logs generated by the system for the period specified by the provider (minimum six months; see the retention sketch below).
- Data protection impact assessment — Where required under GDPR, complete a DPIA before deploying the system. See our GDPR guide for AI developers for more on this overlap.
- Inform affected individuals — When a high-risk AI system is used to make or inform decisions about natural persons, those individuals must be informed.
If you’re both building and deploying AI, you carry both sets of obligations.
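The retention floor is easy to get wrong when the provider specifies a shorter period. A minimal sketch of a deployer-side check, assuming a 183-day approximation of the six-month minimum:

```python
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=183)  # approximation of the 6-month statutory floor

def earliest_deletable(created_at: datetime,
                       provider_retention: timedelta) -> datetime:
    """Logs may be purged only after BOTH clocks run out:
    the provider-specified period and the statutory minimum."""
    return created_at + max(provider_retention, SIX_MONTHS)

log_created = datetime(2026, 8, 2, tzinfo=timezone.utc)
# The provider asked for only 90 days, but the 6-month floor wins:
print(earliest_deletable(log_created, timedelta(days=90)))
# -> 2027-02-01 00:00:00+00:00
```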
GPAI model obligations
The EU AI Act introduces specific rules for general-purpose AI models: foundation models and large language models that can be adapted for many tasks. Most GPAI obligations have applied since August 2, 2025; what changes on August 2, 2026 is that the Commission gains the power to enforce them with fines.
All GPAI model providers must:
- Maintain and make available technical documentation about the model
- Provide information and documentation to downstream providers who integrate the model into their systems
- Establish a policy to comply with EU copyright law
- Publish a sufficiently detailed summary of the training data
GPAI models with systemic risk (models trained with more than 10^25 FLOPs, or designated by the European Commission) face additional requirements (a rough threshold check follows this list):
- Perform model evaluations, including adversarial testing
- Assess and mitigate systemic risks
- Track and report serious incidents
- Ensure adequate cybersecurity protections
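To get intuition for where the 10^25 FLOPs line sits, the widely used dense-transformer approximation is about 6 FLOPs per parameter per training token. This is a back-of-the-envelope estimate, not the Act’s measurement methodology, which counts cumulative training compute:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold from the Act

# e.g. a hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs -> systemic risk presumed: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD}")
# 6 * 70e9 * 15e12 = 6.3e24, just below the 1e25 line
```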
If you’re building on top of a GPAI model (e.g., using an LLM API to power a high-risk application), you’re still responsible for your system’s compliance. The GPAI provider’s obligations don’t replace yours.
Penalties
The EU AI Act has teeth. Enforcement penalties scale with the severity of the violation:
| Violation type | Maximum fine |
|---|---|
| Prohibited AI practices | Up to €35 million or 7% of global annual revenue (whichever is higher) |
| High-risk AI non-compliance | Up to €15 million or 3% of global annual revenue |
| Supplying incorrect information to authorities | Up to €7.5 million or 1% of global annual revenue |
The higher of the two amounts applies in every row, not just the first. For SMEs and startups, it flips: fines are capped at the lower of the two thresholds. But even the reduced amounts can be existential for smaller companies.
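A quick way to internalize the table (amounts from the rows above; the SME flag here is purely illustrative):

```python
def max_fine(revenue_eur: float, fixed_cap: float, pct: float,
             is_sme: bool = False) -> float:
    """Worst-case fine: the higher of the two caps, or the lower for SMEs."""
    candidates = (fixed_cap, revenue_eur * pct)
    return min(candidates) if is_sme else max(candidates)

revenue = 2_000_000_000  # EUR 2B global annual revenue

print(max_fine(revenue, 35e6, 0.07))               # prohibited practices: 140M
print(max_fine(revenue, 15e6, 0.03))               # high-risk breach: 60M
print(max_fine(revenue, 35e6, 0.07, is_sme=True))  # SME cap: 35M
```

For the first two rows the fixed cap and the percentage cross over at €500M revenue; above that, the percentage dominates.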
National authorities in each EU member state will handle enforcement, with the European AI Office coordinating at the EU level.
Does this apply to you?
Almost certainly, if any of these are true:
- You sell or distribute AI systems to users in the EU
- Your AI system’s output is used in the EU, even if you’re based elsewhere
- You’re a deployer of AI systems within the EU
- You provide a GPAI model that downstream providers use in the EU market
The EU AI Act has extraterritorial reach, similar to GDPR. A company in the US, India, or anywhere else is subject to these rules if their AI system is placed on the EU market or its output affects people in the EU.
If you’re unsure whether your system qualifies as high-risk, start with a risk classification exercise. Our AI compliance automation guide covers tooling that can help, and our AI privacy laws by region overview puts the EU AI Act in global context.
Start now, not in July
106 days sounds like a lot until you factor in conformity assessments, documentation reviews, and engineering changes to logging and oversight systems. Companies that wait until summer will be scrambling.
The minimum viable plan:
- Classify your AI systems by risk level
- Gap analysis — compare current state against the checklist above (a tracker sketch follows this plan)
- Prioritize technical documentation and risk management systems (these take the longest)
- Implement logging and human oversight mechanisms
- Engage legal counsel familiar with the EU AI Act for conformity assessment guidance
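For the gap analysis step, even a spreadsheet works, but keeping it in code lets it live next to your CI. A minimal sketch; the requirement names mirror the provider checklist above, and the statuses are placeholder assumptions:

```python
# Requirement names mirror the provider checklist above.
REQUIREMENTS = [
    "risk management system", "data governance", "technical documentation",
    "record-keeping and logging", "transparency and user information",
    "human oversight", "accuracy, robustness, and cybersecurity",
    "conformity assessment", "EU declaration of conformity", "CE marking",
]

# Hypothetical current state -- replace with your own audit results.
status = {
    "record-keeping and logging": "done",
    "human oversight": "in progress",
}

gaps = [r for r in REQUIREMENTS if status.get(r) != "done"]
print(f"{len(gaps)}/{len(REQUIREMENTS)} requirements still open:")
for r in gaps:
    print(f"  - {r} ({status.get(r, 'not started')})")
```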
For more on how the EU AI Act intersects with data protection, see our guide to GDPR-compliant AI APIs.