🤖 AI Tools
· 2 min read

EU AI Act for Developers — What Changes in August 2026


The EU AI Act reaches its main enforcement milestone on August 2, 2026, when the requirements for high-risk AI systems take effect. Fines reach up to €35 million or 7% of global annual turnover, whichever is higher (nearly double GDPR's maximum). If you build or deploy AI systems that touch EU users, this affects you.

The risk tiers

The AI Act doesn’t regulate “AI” broadly — it regulates specific use cases by risk level:

Unacceptable risk (BANNED since Feb 2025)

  • Social scoring systems
  • Real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions)
  • Manipulation of vulnerable groups
  • Emotion recognition in workplaces/schools

High risk (full requirements from Aug 2026)

  • AI in hiring/recruitment
  • Credit scoring
  • Healthcare diagnostics
  • Critical infrastructure management
  • Law enforcement tools

Requirements: Risk assessment, data governance, technical documentation, human oversight, accuracy/robustness testing, logging, transparency to users.

Limited risk (transparency requirements)

  • Chatbots (must disclose they’re AI)
  • Deepfakes (must be labeled)
  • AI-generated content (must be marked)
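For chatbots, the transparency obligation boils down to telling users they are talking to an AI. The Act does not prescribe how; a minimal sketch (the function name and disclosure wording are my own, not from the regulation):

```python
# Sketch of the "limited risk" transparency rule for chatbots:
# disclose the AI nature of the system to the user. The wording and
# the first-turn-only policy here are illustrative assumptions.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_reply(reply: str, is_first_turn: bool) -> str:
    """Prefix the disclosure on the first turn of a conversation."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

Whether you disclose once per conversation or persistently in the UI is a product decision; the requirement is only that the disclosure actually reaches the user.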

Minimal risk (no requirements)

  • Spam filters
  • AI coding tools
  • Recommendation systems
  • Game AI

Does this affect AI coding tools?

Mostly no. AI coding tools like Claude Code, Cursor, Aider, and Copilot fall under “minimal risk” — no specific requirements.

But: If you’re BUILDING AI products (not just using AI tools), your product might fall into a higher risk tier. An AI hiring tool, medical diagnosis system, or credit scoring model built with Claude Code IS regulated — even though Claude Code itself isn’t.

What developers building AI products need to do

If your AI product is high-risk:

  1. Risk assessment — Document what can go wrong and how you mitigate it
  2. Data governance — Document your training data, check for bias
  3. Technical documentation — Architecture, model cards, evaluation results
  4. Human oversight — Users must be able to override AI decisions
  5. Logging — Keep audit trails of AI decisions for regulatory review
  6. Accuracy testing — Benchmark your system, document failure modes
  7. Transparency — Tell users they’re interacting with AI

The foundation model angle

Companies providing foundation models (OpenAI, Anthropic, Mistral, Google) have additional obligations:

  • Technical documentation of training process
  • Copyright compliance documentation
  • Energy consumption reporting
  • Safety testing results

This is why Mistral being EU-based matters — they’re building compliance into their models from the start, which makes YOUR compliance easier if you build on their models.

Timeline

Date          What happens
Feb 2025      Unacceptable-risk AI banned ✅
Aug 2025      General-purpose AI model rules apply ✅
Aug 2, 2026   High-risk AI system requirements (UPCOMING)
Aug 2027      Certain embedded AI systems

Practical advice

  1. Classify your AI product — Is it minimal, limited, or high risk?
  2. If minimal risk — You’re fine. Keep building.
  3. If high risk — Start compliance work NOW. August is 4 months away.
  4. Use EU-based providers when possible — Mistral simplifies compliance
  5. Document everything — The Act requires demonstrable compliance, not just good intentions
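Step 1 (classification) can be roughed out in code, if only to force the question for each product. This is a toy keyword sketch whose tier lists mirror the examples in this article; a real classification means reading Annex III of the Act, not string matching:

```python
# Toy classifier for step 1. Tier lists come from the examples above;
# this is a thinking aid, not legal advice.
HIGH_RISK = {"hiring", "recruitment", "credit scoring",
             "diagnostics", "critical infrastructure", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake", "generated content"}

def classify(use_case: str) -> str:
    """Return a rough risk tier: 'high', 'limited', or 'minimal'."""
    uc = use_case.lower()
    if any(term in uc for term in HIGH_RISK):
        return "high"
    if any(term in uc for term in LIMITED_RISK):
        return "limited"
    return "minimal"
```

Anything that lands in "high" is the signal to start the seven compliance requirements now rather than in August.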

Related: AI and GDPR for Developers · Which AI APIs Are GDPR Compliant? · What is Mistral AI? · Self-Hosted AI for GDPR