The EU AI Act's core obligations for high-risk systems take effect on August 2, 2026. Fines reach up to €35 million or 7% of global annual turnover for the most serious violations, nearly double GDPR's ceiling of €20 million or 4%. If you build or deploy AI systems that touch EU users, this affects you.
## The risk tiers
The AI Act doesn’t regulate “AI” broadly — it regulates specific use cases by risk level:
### Unacceptable risk (BANNED since Feb 2025)
- Social scoring systems
- Real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions)
- Manipulation of vulnerable groups
- Emotion recognition in workplaces/schools
### High risk (full requirements from Aug 2026)
- AI in hiring/recruitment
- Credit scoring
- Healthcare diagnostics
- Critical infrastructure management
- Law enforcement tools
Requirements: Risk assessment, data governance, technical documentation, human oversight, accuracy/robustness testing, logging, transparency to users.
### Limited risk (transparency requirements)
- Chatbots (must disclose they’re AI)
- Deepfakes (must be labeled)
- AI-generated content (must be marked; see the sketch after this list)
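In practice, the limited-risk obligations are mostly product plumbing: show a disclosure before the conversation starts and attach a machine-readable marker to everything the model produces. Here's a minimal sketch, assuming a hypothetical `call_model` wrapper around whatever provider you use; the field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def call_model(prompt: str) -> str:
    # Stand-in for a real provider call (OpenAI, Mistral, a local model, ...).
    return f"Echo: {prompt}"

@dataclass
class GeneratedContent:
    """AI output bundled with a machine-readable provenance marker."""
    text: str
    ai_generated: bool = True        # marked at creation, not bolted on in the UI
    model: str = "example-model-v1"  # hypothetical model identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def chatbot_reply(user_message: str) -> GeneratedContent:
    """Every reply carries its marker, so downstream rendering can't lose it."""
    return GeneratedContent(text=call_model(user_message))

print(AI_DISCLOSURE)  # disclose before the first exchange
print(chatbot_reply("When does the AI Act apply?").text)
```

Attaching the marker at generation time, rather than in the UI layer, means exports and API consumers inherit the label for free.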
### Minimal risk (no requirements)
- Spam filters
- AI coding tools
- Recommendation systems
- Game AI
## Does this affect AI coding tools?
Mostly no. AI coding tools like Claude Code, Cursor, Aider, and Copilot fall under “minimal risk” — no specific requirements.
But: If you’re BUILDING AI products (not just using AI tools), your product might fall into a higher risk tier. An AI hiring tool, medical diagnosis system, or credit scoring model built with Claude Code IS regulated — even though Claude Code itself isn’t.
## What developers building AI products need to do
If your AI product is high-risk:
- Risk assessment — Document what can go wrong and how you mitigate it
- Data governance — Document your training data, check for bias
- Technical documentation — Architecture, model cards, evaluation results
- Human oversight — Users must be able to override AI decisions
- Logging — Keep audit trails of AI decisions for regulatory review (see the sketch after this list)
- Accuracy testing — Benchmark your system, document failure modes
- Transparency — Tell users they’re interacting with AI
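None of this requires exotic tooling. Below is a minimal sketch of the logging and human-oversight pieces for a hypothetical hiring tool; `score_applicant`, the record schema, and the confidence threshold are all illustrative assumptions, and a real system would write to durable, tamper-evident storage rather than a local JSONL file:

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("decisions.jsonl")  # illustrative; use durable storage in production

def score_applicant(features: dict) -> tuple[str, float]:
    # Stand-in for the real model: returns (recommendation, confidence).
    return "reject", 0.62

def human_review(recommendation: str, features: dict) -> str:
    # Stand-in for a review UI; the reviewer may confirm or override the model.
    return "accept"

def decide(applicant_id: str, features: dict, reviewer: str) -> dict:
    recommendation, confidence = score_applicant(features)

    # Human oversight: low-confidence outputs are routed to a person,
    # and the reviewer's decision, not the model's, is what takes effect.
    needs_review = confidence < 0.80
    final = human_review(recommendation, features) if needs_review else recommendation

    # Audit trail: capture input, model output, and who made the final call,
    # so the decision can be reconstructed for a regulator later.
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "input_features": features,
        "model_recommendation": recommendation,
        "model_confidence": confidence,
        "final_decision": final,
        "decided_by": reviewer if needs_review else "model",
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(decide("applicant-42", {"years_experience": 7}, reviewer="jane@example.com"))
```

The structural point matters more than the schema: every automated decision leaves a reconstructable record, and a human sits between the model and any consequential outcome.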
## The foundation model angle
Providers of general-purpose AI models (the Act's term covering foundation models from OpenAI, Anthropic, Mistral, and Google) have additional obligations:
- Technical documentation of training process
- Copyright compliance documentation
- Energy consumption reporting
- Safety testing results
This is why Mistral being EU-based matters — they’re building compliance into their models from the start, which makes YOUR compliance easier if you build on their models.
## Timeline
| Date | What happens |
|---|---|
| Feb 2, 2025 | Unacceptable-risk AI banned ✅ |
| Aug 2, 2025 | General-purpose AI model rules apply ✅ |
| Aug 2, 2026 | High-risk AI system requirements (UPCOMING) |
| Aug 2, 2027 | High-risk rules for AI embedded in regulated products (e.g. medical devices, machinery) |
## Practical advice
- Classify your AI product — Is it minimal, limited, or high risk?
- If minimal risk — You’re fine. Keep building.
- If high risk — Start compliance work NOW. August is 4 months away.
- Use EU-based providers when possible — Mistral simplifies compliance
- Document everything — The Act requires demonstrable compliance, not just good intentions
Related: AI and GDPR for Developers · Which AI APIs Are GDPR Compliant? · What is Mistral AI? · Self-Hosted AI for GDPR