If you build AI products that touch Latin America, Brazil just became the jurisdiction you can’t ignore. PL 2338/2023 is on track to become the region’s first comprehensive AI law — and some of its provisions, especially around copyright, go further than anything in the EU, Japan, or the US.
Brazil has over 200 million people and one of the world’s largest tech markets. It already enforces the LGPD (Lei Geral de Proteção de Dados), a GDPR-equivalent data protection law that reshaped how companies handle personal data in the country. PL 2338 follows the same playbook: set clear rules early, enforce them broadly, and let the rest of the region follow.
If you’re shipping AI features to Brazilian users — or training models on Brazilian content — here’s what you need to know.
## What PL 2338 requires
The bill is built on four guiding principles:
- Centrality of the human person — AI systems must respect fundamental rights and serve human interests.
- Responsible innovation — Development should balance progress with safety.
- Market competitiveness — Rules shouldn’t crush startups or favor incumbents.
- Safe and reliable systems — AI must be robust, secure, and predictable.
In practice, PL 2338 creates three main obligations for developers and deployers:
- Transparency — Users must be told when they're interacting with an AI system. Generated content must be identifiable as AI-produced. Developers must document how systems work, what data they use, and what their limitations are.
- Accountability — There must be a clear chain of responsibility. If an AI system causes harm, someone — the developer, deployer, or operator — is legally accountable. The bill establishes governance requirements, including impact assessments for higher-risk systems.
- Risk classification — AI systems are categorized by risk level, with obligations scaling accordingly (more on this below).
These aren’t suggestions. The bill includes enforcement mechanisms and penalties, similar to the EU AI Act.
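To make the transparency obligation concrete, here is a minimal sketch of what a machine-readable disclosure record for an AI system might look like. The field names, class name, and notice text are my own assumptions for illustration — the bill does not prescribe any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Hypothetical record backing PL 2338-style transparency duties."""
    system_name: str
    is_ai_interaction: bool          # users must be told they're talking to AI
    labels_generated_content: bool   # generated output must be identifiable
    data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

    def user_notice(self) -> str:
        # Illustrative wording, not statutory language.
        return (f"You are interacting with an AI system ({self.system_name}). "
                "Content it produces is machine-generated.")

disclosure = AIDisclosure(
    system_name="support-bot",
    is_ai_interaction=True,
    labels_generated_content=True,
    data_sources=["public support tickets (licensed)"],
    known_limitations=["may produce inaccurate answers"],
)
print(disclosure.user_notice())
```

Keeping this kind of record alongside each deployed system also covers the documentation duty: what the system does, what data it uses, and where it falls short.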
## The copyright bombshell
Here’s the provision that has the AI industry paying attention: PL 2338 requires companies to compensate rights holders when copyrighted works are used to train commercial AI systems.
This is stricter than any major jurisdiction:
- The EU AI Act requires transparency about training data but doesn’t mandate payment.
- Japan has broad text-and-data-mining exceptions for AI training.
- The US is still litigating the question in the courts; there is no settled law.
Brazil’s approach is straightforward — if you scrape copyrighted content to train a model you sell, you owe the creator money. The bill doesn’t yet specify exact rates or collection mechanisms, but the principle is written into the legislation.
For developers, this means:
- Audit your training data. If your model was trained on Brazilian copyrighted content, you may have a liability.
- Track provenance. Documentation of data sources becomes a legal requirement, not just a best practice.
- Budget for licensing. If you deploy commercial AI in Brazil, training-data licensing costs could become a line item.
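The first two items above can be approached with a simple provenance log. The sketch below is one possible shape for such a log — the field names and the flagging rule are assumptions of mine, and since the bill doesn't yet specify rates or collection mechanisms, this only surfaces potential exposure rather than computing any liability.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One entry in a hypothetical training-data provenance log."""
    url: str
    jurisdiction: str    # e.g. "BR" for Brazilian-origin content
    copyrighted: bool
    license_obtained: bool

def licensing_exposure(sources: list[DataSource]) -> list[DataSource]:
    """Flag Brazilian copyrighted sources with no license in place."""
    return [s for s in sources
            if s.jurisdiction == "BR" and s.copyrighted and not s.license_obtained]

corpus = [
    DataSource("https://example.com.br/articles", "BR", True, False),
    DataSource("https://example.org/public-domain", "US", False, False),
]
flagged = licensing_exposure(corpus)
print(len(flagged))  # 1 — one unlicensed Brazilian copyrighted source
```

The point is less the code than the habit: if provenance is captured at ingestion time, answering a regulator's question later is a query, not an archaeology project.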
This also has implications for AI-generated code ownership — if the model that generated the code was trained on copyrighted material, the legal picture gets complicated fast.
## Risk classification
PL 2338 uses a tiered risk system similar to the EU AI Act:
- Unacceptable risk — Banned outright. This includes social scoring systems and AI that manipulates human behavior in ways that cause harm.
- High risk — Subject to strict requirements including impact assessments, human oversight, and detailed documentation. Covers areas like healthcare, criminal justice, employment, education, and credit scoring.
- Limited/low risk — Lighter obligations, primarily transparency requirements.
If your AI system falls into the high-risk category, expect mandatory conformity assessments before deployment in Brazil.
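As a starting point for the classification exercise, the tiers above can be sketched as a simple lookup. The domain keys and the default tier are my own illustrative choices — the final regulation will define the authoritative lists, and a real assessment is case-by-case, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "impact assessment, human oversight, detailed documentation"
    LIMITED = "transparency obligations"

# Illustrative mapping of application domains to the bill's tiers.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "behavioral_manipulation": RiskTier.UNACCEPTABLE,
    "healthcare": RiskTier.HIGH,
    "criminal_justice": RiskTier.HIGH,
    "employment": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
}

def classify(domain: str) -> RiskTier:
    # Anything not enumerated defaults to the lightest tier here;
    # that default is an assumption, not something the bill states.
    return DOMAIN_TIERS.get(domain, RiskTier.LIMITED)

print(classify("healthcare").name)  # HIGH
print(classify("chatbot").name)     # LIMITED
```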
## How it compares to the EU AI Act
| Area | Brazil PL 2338 | EU AI Act |
|---|---|---|
| Risk-based tiers | Yes — unacceptable, high, limited | Yes — unacceptable, high, limited, minimal |
| Copyright / training data | Must pay rights holders | Transparency only; no payment mandate |
| Transparency | Required for all AI systems | Required, with extra rules for GPAI |
| Accountability | Clear liability chain required | Provider/deployer responsibility split |
| Existing data law | LGPD (in force since 2020) | GDPR (in force since 2018) |
| Status (April 2026) | Passed Senate; before House | In force; phased enforcement ongoing |
| Scope | 200M+ population market | 450M+ population market |
The biggest divergence is copyright. Brazil is staking out a position that could set a precedent across Latin America — and potentially influence ongoing debates in the US and elsewhere.
## Current status and timeline
As of April 2026:
- December 10, 2024 — PL 2338/2023 approved by the Brazilian Senate.
- Current — The bill is before the Câmara dos Deputados (House of Representatives) for review and vote.
- Next — If the House amends the bill, it returns to the Senate. If passed as-is, it goes to the President for signature.
There’s no fixed deadline for the House vote, but political momentum is strong. Brazil’s government has signaled AI regulation as a priority, and the LGPD precedent shows the country can move from bill to enforcement relatively quickly.
A realistic timeline: passage in late 2026, with a compliance grace period extending into 2027 — though that depends on the House schedule, which has no fixed deadline.
## What developers should do now
You don’t need to wait for the final vote to start preparing:
- Map your Brazil exposure. Do you have Brazilian users? Is your training data sourced from Brazilian content? Either one puts you in scope.
- Classify your AI systems by risk. Use the EU AI Act tiers as a proxy — Brazil’s categories are similar enough to start the exercise now.
- Document your training data pipeline. Provenance tracking is a requirement under both PL 2338 and the EU AI Act. Build it once, comply in both jurisdictions.
- Review your copyright position. The training-data compensation requirement is the most novel part of this bill. Talk to legal counsel about your exposure.
- Build transparency into your UX. AI disclosure requirements are converging globally. If your product doesn’t already tell users when they’re interacting with AI, fix that now.
- Watch the House vote. The final text may change. Follow updates from Brazil’s National Congress and the ANPD (National Data Protection Authority).
For a broader view of how AI privacy laws vary by region, we maintain a running comparison.