
AI Liability for Developers — Who's Responsible When AI Fails?


Your AI coding assistant wrote a function with a subtle bug. It passed code review (also AI-assisted). It deployed to production. It caused a data breach. Who’s responsible?

This question is becoming urgent as AI moves from “helpful suggestion” to “autonomous agent writing production code.”

Short answer: The company deploying the AI is responsible, not the AI provider. AI providers’ terms of service almost universally disclaim liability for output quality.

From Anthropic’s terms: Output is provided “as is” without warranties. You’re responsible for reviewing and validating AI-generated content.

From OpenAI’s terms: Similar disclaimers. You own the output but also own the liability.

What this means: If Claude Code writes code that causes a breach, Anthropic isn’t liable. Your company is.

The EU AI Act angle

The EU AI Act (in force since August 2024, with most obligations applying from August 2026) introduces:

  • Provider obligations: AI model providers must document capabilities and limitations
  • Deployer obligations: Companies using AI must ensure appropriate human oversight
  • Liability framework: The EU AI Liability Directive (proposed) would make it easier for affected parties to claim damages from AI deployers

For high-risk AI systems (e.g. hiring, credit scoring, healthcare), the deployer must demonstrate that human oversight was in place.

Practical scenarios

Scenario 1: AI-generated code has a security vulnerability

Who’s liable: Your company. You deployed the code. The AI provider’s terms say you must review output.

Mitigation: Code review (human or AI-assisted), security scanning, testing.

Scenario 2: AI agent makes an unauthorized purchase

Who’s liable: Your company. You gave the agent access to payment tools via MCP.

Mitigation: Least privilege, human approval for financial actions, spending limits.
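
A minimal sketch of what that guard could look like, assuming a hypothetical payment tool. The names (SPEND_LIMIT_USD, require_approval, guarded_purchase) and the $50 threshold are illustrative, not from any real agent SDK:

```python
# Illustrative guard around a payment tool exposed to an AI agent.
# All names and the spending threshold are hypothetical.

SPEND_LIMIT_USD = 50.00  # hard ceiling for autonomous spending

def require_approval(action: str, amount: float) -> bool:
    """Block until a human explicitly approves or rejects the action."""
    answer = input(f"Agent wants to {action} (${amount:.2f}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_purchase(item: str, amount: float) -> str:
    """Small purchases run autonomously; everything else needs a human."""
    if amount > SPEND_LIMIT_USD and not require_approval(f"buy {item!r}", amount):
        raise PermissionError("purchase denied: no human approval")
    # ... call the real payment API here ...
    return f"purchased {item} for ${amount:.2f}"
```

The key property is default-deny: the agent cannot spend above the limit unless a human explicitly says yes.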

Scenario 3: AI chatbot gives wrong medical/legal advice

Who’s liable: Your company. In regulated industries, the exposure can even be criminal.

Mitigation: Don’t use AI for regulated advice without human oversight. Add disclaimers. Log everything.

Scenario 4: AI leaks customer data via prompt injection

Who’s liable: Your company, as the data controller under GDPR. The AI provider may share liability if it failed to implement reasonable security measures.

Mitigation: Prompt injection defenses, data minimization, observability.
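
Data minimization is the cheapest of these defenses to sketch: strip obvious PII from anything the model will see, so a successful injection has less to exfiltrate. The regexes below are deliberately simple placeholders; production systems need dedicated PII tooling:

```python
import re

# Illustrative data-minimization pass. The patterns are intentionally
# crude; real deployments should use a proper PII-detection library.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Redact obvious PII before the text reaches the model's context."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

context = "Customer Jane Doe, jane@example.com, +1 555 867 5309, reports a bug."
print(minimize(context))
# Customer Jane Doe, [REDACTED EMAIL], [REDACTED PHONE], reports a bug.
```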

What developers should do

1. Never deploy AI output without review

For code: automated tests, linting, security scanning. For content: human review before publishing. For decisions: human-in-the-loop for anything consequential.
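
For the code case, the rule is easy to automate as a merge gate that AI-generated changes cannot skip. A sketch assuming pytest, ruff, and bandit are installed; swap in your own toolchain:

```python
#!/usr/bin/env python3
"""Minimal pre-merge gate: AI-generated code gets no exemption from checks."""
import subprocess
import sys

CHECKS = [
    ["pytest", "--quiet"],          # automated tests
    ["ruff", "check", "."],         # linting
    ["bandit", "-q", "-r", "src"],  # static security scanning
]

def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)} -- do not merge")
            return 1
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```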

2. Document your AI usage

Keep records of which AI tools you use, how they’re configured, and what oversight is in place. This is your defense if something goes wrong. See our governance guide.
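
What such a record might look like as an append-only JSONL trail (the field names here are an assumption for illustration, not a standard):

```python
import datetime
import json
import pathlib

# Illustrative audit record for AI-assisted work. The point is an
# append-only, timestamped trail showing tools used and who reviewed.

AUDIT_LOG = pathlib.Path("ai_audit.jsonl")

def record_ai_usage(tool: str, model: str, task: str, reviewer: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,                # e.g. "Claude Code"
        "model": model,              # model and version actually used
        "task": task,                # what the AI was asked to do
        "human_reviewer": reviewer,  # who signed off on the output
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

record_ai_usage("Claude Code", "<model-version>", "refactor billing module", "j.doe")
```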

3. Limit AI agent permissions

MCP servers should have minimal access. Read-only where possible. Human approval for destructive actions. See our security checklist.
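
In code, least privilege reduces to default-deny dispatch. A sketch with hypothetical tool names; a real MCP server would declare its tools through the MCP SDK rather than a dict like this:

```python
# Hypothetical tool gating for an agent. Anything not allowlisted is
# rejected; destructive tools never run without an explicit human flag.

READ_ONLY_TOOLS = {"search_docs", "read_file", "list_tickets"}
DESTRUCTIVE_TOOLS = {"delete_file", "send_email", "deploy"}

def dispatch(tool: str, approved_by_human: bool = False) -> str:
    if tool in READ_ONLY_TOOLS:
        return f"running {tool}"  # safe by construction
    if tool in DESTRUCTIVE_TOOLS:
        if not approved_by_human:
            raise PermissionError(f"{tool} requires explicit human approval")
        return f"running {tool} (approved)"
    raise PermissionError(f"{tool} is not on the allowlist")  # default deny
```

The approval flag should be set by an out-of-band human step, never by the agent itself.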

4. Have insurance

Cyber liability insurance increasingly covers AI-related incidents. Check whether your policy extends to AI-generated code and autonomous agent actions (more on this below).

5. Stay informed

AI liability law is evolving fast. The EU AI Liability Directive, US state-level AI laws, and industry-specific regulations are all in flux. What’s legal today might not be tomorrow.

The bottom line

AI providers have structured their terms to push liability to deployers (you). The law is catching up but currently favors this arrangement. Your best protection is:

  • Human oversight on AI output
  • Documentation of your AI governance
  • Technical controls (testing, monitoring, security)
  • Insurance

Don’t let this stop you from using AI — just use it responsibly and document that you do.

What your employment contract probably says

Most employment contracts already contain clauses covering liability for the tools you use at work. AI tools fall under these existing frameworks. But check for:

  • Acceptable use policies — does your company have an AI usage policy?
  • Code ownership — who owns AI-generated code? Usually the employer, same as human-written code.
  • Indemnification — are you personally liable for AI-generated bugs? Usually no, if you followed company procedures.

If your company doesn’t have an AI usage policy yet, propose one. It protects both you and the company. See our governance guide for a template.

Insurance considerations

Cyber liability insurance is evolving to cover AI-related incidents:

  • Errors & Omissions (E&O) — covers professional mistakes, including AI-assisted ones
  • Cyber liability — covers data breaches, including those caused by AI vulnerabilities
  • Technology E&O — specifically for tech companies, covers software defects

Ask your insurer whether AI-generated code and autonomous agent actions are covered. Many policies written before 2024 don’t explicitly address AI. Get it in writing.

The trajectory

AI liability law is moving toward:

  1. More deployer responsibility — the EU AI Act makes this explicit
  2. Mandatory documentation — you’ll need to prove you had oversight
  3. Sector-specific rules — healthcare, finance, and legal will have stricter requirements
  4. Insurance requirements — high-risk AI systems may require mandatory insurance

The companies that document their AI governance now will be ahead when regulations tighten.

Related: AI Governance for Startups · EU AI Act for Developers · AI and GDPR · AI Security Checklist · AI Risk Assessment Template