US State AI Laws for Developers: Colorado, California, Texas, Illinois (2026)
2026 is the year US AI regulation stops being theoretical. Four states (Texas, Illinois, California, and Colorado) now have AI-specific laws on the books, and three of them are already in effect. If you build, deploy, or integrate AI systems that touch users in any of these states, you have compliance obligations right now.
This isn't a policy overview. It's a practical guide for developers and engineering teams who need to understand what these laws actually require, what deadlines matter, and what you need to build into your systems.
The 2026 Timeline
| Date | State | Law | What It Requires |
|---|---|---|---|
| Jan 1, 2026 (in effect) | Texas | TRAIGA | Governance docs, transparency disclosures, high-risk controls |
| Jan 1, 2026 (in effect) | Illinois | HB 3773 | AI employment-decision rules, bias audits, notice requirements |
| Jan 1, 2026 (in effect) | California | AB 2013 | Training-data transparency for generative AI |
| Jan 1, 2026 (in effect) | California | SB 53 | Frontier model safety reports, catastrophic-risk plans, whistleblower protections |
| Jun 30, 2026 | Colorado | SB 24-205 | EU-style AI governance: impact assessments, risk management, disclosure obligations |
| Aug 2, 2026 | California | SB 942 + AB 853 | AI-content watermarking, visible disclosure on generated content |
Three laws are already enforceable. Two more deadlines are coming fast. Let's break each one down.
Texas: TRAIGA (In Effect)
The Texas Responsible AI Governance Act applies to anyone deploying AI that interacts with Texas residents, not just companies headquartered in Texas. If your SaaS product serves Texas users and makes consequential decisions, you're in scope.
What it requires:
- Governance documentation: maintain written records describing how your AI systems work, what data they use, and how decisions are made.
- Consumer disclosures: when AI drives a consequential decision (credit, insurance, employment, healthcare), you must tell the affected person that AI was involved and explain the basis for the decision.
- High-risk controls: systems touching healthcare, biometrics, or discrimination-sensitive categories need additional safeguards and documentation.
- Regulatory sandbox: Texas created a statewide AI advisory council and sandbox program. If you're building experimental AI, this is worth watching.
Developer action: Audit your systems for Texas-facing consequential decisions. Add disclosure mechanisms and document your AI pipelines. If you're using third-party AI APIs for decision-making, you still own the compliance obligation.
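As a concrete starting point, a disclosure mechanism can be as simple as a function that turns a decision record into a plain-language notice. This is a minimal sketch with a hypothetical schema (the field names and wording are ours, not statutory language):

```python
from dataclasses import dataclass

@dataclass
class ConsequentialDecision:
    """One AI-assisted decision in a TRAIGA-covered category (hypothetical schema)."""
    category: str    # e.g. "credit", "insurance", "employment", "healthcare"
    outcome: str     # e.g. "application denied"
    basis: list[str] # the main factors behind the decision

def disclosure_notice(decision: ConsequentialDecision) -> str:
    """Render a plain-language notice telling the affected person that AI
    was involved and on what basis the decision was made."""
    factors = "; ".join(decision.basis)
    return (
        f"An automated (AI) system was used in this {decision.category} decision. "
        f"Outcome: {decision.outcome}. "
        f"Main factors considered: {factors}. "
        "You may request more information about how this decision was made."
    )

notice = disclosure_notice(ConsequentialDecision(
    category="credit",
    outcome="application denied",
    basis=["debt-to-income ratio", "length of credit history"],
))
print(notice)
```

The point is architectural: disclosure should be generated from the same decision record you log, so the notice and the audit trail can never drift apart.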
Illinois: HB 3773 (In Effect)
Illinois went narrow but deep: this law targets AI in employment decisions (hiring, evaluation, promotion, and termination). It applies to employers with one or more employees in Illinois, which makes the scope enormous.
What it requires:
- No disparate impact: AI tools used in employment cannot create discriminatory outcomes against protected classes. You need to prove this with data.
- Notice to applicants and employees: anyone subject to an AI-driven employment decision must be notified that AI was used, what it evaluated, and how it factored into the outcome.
- Documentation retention: keep evaluation evidence, audit results, and decision records. Expect regulators to ask for them.
Developer action: If you build HR tech, recruiting tools, or performance-evaluation systems, this is your law. Implement bias testing as part of your CI/CD pipeline. Build notification hooks into your employment-decision workflows. Log everything.
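One common screening heuristic you can wire into CI is the EEOC's "four-fifths rule": the lowest group selection rate should be at least 80% of the highest. It is not a legal test on its own, but it makes a useful automated gate. A minimal sketch:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def passes_four_fifths_rule(outcomes: dict[str, tuple[int, int]]) -> bool:
    """Four-fifths heuristic: min selection rate must be >= 80% of the max.
    A screening signal for disparate impact, not a substitute for legal review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()) >= 0.8

# Illustrative audit data: (selected, total applicants) per group
audit = {"group_a": (45, 100), "group_b": (30, 100)}
print(passes_four_fifths_rule(audit))  # 0.30 / 0.45 is about 0.67, so False
```

In a CI pipeline, a failing check like this would block the model release until the disparity is investigated and documented.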
California: AB 2013 + SB 53 (In Effect)
California passed two complementary laws that took effect January 1. Together, they cover training-data transparency and frontier-model safety.
AB 2013: Training-Data Transparency
If you develop or provide generative AI, you must publicly describe the categories and sources of your training data. This includes:
- What types of data were used (web scrapes, licensed datasets, user-generated content, synthetic data)
- Where the data came from
- Safety documentation describing how the data was filtered or curated
This is a disclosure law, not a consent law: you don't need permission for the data, but you do need to tell people what you used. If you're fine-tuning models on proprietary data, check whether your data pipeline documentation meets the standard.
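A practical way to keep this disclosure current is to maintain it as a machine-readable document versioned alongside each model release. The field names below are illustrative, not taken from the statute:

```python
import json

# Minimal machine-readable training-data disclosure (illustrative schema).
# Keeping it in the repo next to the model config makes it reviewable in PRs.
disclosure = {
    "model": "acme-chat-v2",  # hypothetical model name
    "data_categories": [
        "web scrapes",
        "licensed datasets",
        "user-generated content",
        "synthetic data",
    ],
    "sources": [
        {"category": "licensed datasets", "provider": "ExampleCorp", "license": "commercial"},
    ],
    "curation": "Deduplicated; filtered for PII; safety-filtered and toxicity-thresholded.",
}

print(json.dumps(disclosure, indent=2))
```

From a document like this, generating the public-facing disclosure page becomes a rendering step rather than a manual writing task.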
SB 53: Frontier Model Safety
SB 53 targets developers of large-scale frontier models. Requirements include:
- Safety reports: publish detailed assessments of model capabilities and risks
- Risk-mitigation frameworks: document what you're doing to prevent misuse
- Catastrophic-risk plans: yes, you need a written plan for worst-case scenarios
- Incident reporting: critical safety incidents must be reported
- Whistleblower protections: employees who flag safety concerns are legally protected
Most indie developers won't hit the frontier-model threshold, but if you're building on top of a covered model, understand that your upstream provider has obligations that may affect your access and usage terms.
For more on how data privacy intersects with AI development, see our AI and data privacy guide.
Colorado: SB 24-205 (June 30, 2026 Deadline)
This is the big one. Colorado's AI Act is the most comprehensive AI law in the United States and draws direct parallels to the EU AI Act. If you've been tracking European regulation, Colorado will feel familiar, and that's intentional.
Scope: high-risk AI systems used in:
- Hiring and employment
- Housing
- Credit and lending
- Insurance
- Education
- Healthcare
Obligations for Developers
If you build AI systems that deployers use in high-risk contexts, you must:
- Document functionality, limitations, and known risks: real technical documentation, not marketing copy
- Provide evaluation guidance: give deployers the information they need to assess whether your system is appropriate for their use case
- Disclose known harms: if you're aware of failure modes, bias patterns, or accuracy limitations, you must say so
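One way to make these developer obligations enforceable inside your own release process is a model-card completeness gate: fail the build if any required documentation field is missing or empty. A sketch, with field names of our own choosing rather than statutory language:

```python
# Required model-card fields for a Colorado-style developer disclosure.
# These names are illustrative; map them to your own documentation schema.
REQUIRED_FIELDS = (
    "functionality",
    "limitations",
    "known_risks",
    "evaluation_guidance",
    "known_harms",
)

def missing_fields(model_card: dict) -> list[str]:
    """Return the required documentation fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not model_card.get(f)]

card = {
    "functionality": "Ranks rental applicants by predicted on-time payment.",
    "limitations": "Trained on urban-market data; accuracy degrades in rural markets.",
    "known_risks": "May proxy protected attributes via zip-code features.",
    "evaluation_guidance": "Deployers should validate on their own applicant pool.",
    "known_harms": "",  # empty -> the gate flags it
}
print(missing_fields(card))
```

Treating documentation gaps as build failures is the simplest way to make model cards a first-class artifact rather than an afterthought.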
Obligations for Deployers
If you deploy AI in high-risk decisions, you must:
- Maintain a risk-management program: ongoing, not one-time
- Conduct impact assessments: before deployment and periodically after
- Disclose AI use to affected individuals: people must know AI is involved
- Provide contest opportunities: individuals must be able to challenge AI-driven decisions
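Because impact assessments are periodic rather than one-time, it helps to track reassessment due dates programmatically. A minimal sketch; the annual cadence here is an assumption for illustration, so check the final Colorado rules for the actual interval:

```python
from datetime import date, timedelta

# Assumed annual review cadence (illustrative; verify against the final rules).
REVIEW_INTERVAL = timedelta(days=365)

def next_assessment_due(last_assessed: date) -> date:
    """Date by which the next impact assessment must be completed."""
    return last_assessed + REVIEW_INTERVAL

def is_overdue(last_assessed: date, today: date) -> bool:
    """True when a deployed high-risk system has an overdue reassessment."""
    return today > next_assessment_due(last_assessed)

print(is_overdue(date(2026, 6, 1), date(2026, 7, 1)))  # False: assessed recently
print(is_overdue(date(2025, 5, 1), date(2026, 7, 1)))  # True: over a year old
```

Wiring a check like this into a scheduled job turns "ongoing risk management" from a policy statement into an alert you actually receive.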
This is the first US state law that creates EU-style governance obligations for both sides of the AI supply chain. The June 30 deadline is 10 weeks away. If you ship AI products used in any of those high-risk categories, start your documentation and impact-assessment work now.
For teams already working on EU compliance, there's significant overlap; see our EU AI Act developer guide and GDPR guide for AI.
California: SB 942 + AB 853 (August 2, 2026 Deadline)
The final 2026 deadline targets AI-generated content transparency. Starting August 2, California will require:
- Visible notices on AI-generated content: users must be able to tell that content was created by AI
- Embedded disclosure (watermarking) for AI-generated images, audio, and video
- Platform compliance: large platforms and hosting providers must support and preserve these disclosures
This applies to generative AI providers, large online platforms, and hosting providers. If you're building image generators, voice synthesis tools, video generation, or any content-creation AI, you'll need watermarking infrastructure in place by August.
Developer action: Evaluate watermarking solutions now. C2PA (Coalition for Content Provenance and Authenticity) is emerging as the leading standard. Build content-provenance metadata into your generation pipelines rather than bolting it on later.
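To illustrate where provenance generation belongs in a pipeline, here is a simplified sidecar-manifest sketch. This is not the actual C2PA manifest format (real C2PA manifests are embedded, cryptographically signed, and follow the C2PA specification); it only shows the principle of generating provenance at creation time, keyed to the content hash:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def write_provenance_sidecar(content: bytes, out_path: Path, generator: str) -> Path:
    """Write a simplified provenance manifest next to a generated asset.
    Illustrative only: production systems should use signed C2PA manifests."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # your model/tool identifier
        "ai_generated": True,
    }
    sidecar = out_path.with_suffix(out_path.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

out = Path(tempfile.mkdtemp()) / "out.png"
path = write_provenance_sidecar(b"fake-image-bytes", out, "acme-imagegen-v1")
print(path.name)  # out.png.provenance.json
```

The design point stands regardless of format: provenance metadata created at generation time survives; metadata bolted on later is unreliable and easy to strip.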
What This Means for Developers
The common thread across all six laws: documentation, transparency, and auditability. Here's what changes in practice:
- Logging is no longer optional. Every AI-driven decision in a regulated category needs an audit trail: what model was used, what inputs it received, what output it produced, and what the human (if any) did with it.
- Disclosure is a product feature. Multiple laws require telling users that AI is involved. Build notification systems into your UX, not as afterthoughts.
- Bias testing is a legal requirement. Illinois mandates it for employment. Colorado mandates it for high-risk systems. Integrate fairness metrics into your testing pipeline.
- Documentation is a deliverable. Colorado requires developers to ship technical documentation with their AI products. Treat model cards and system documentation as first-class artifacts.
- Geography doesn't protect you. Texas, Illinois, and Colorado all use residency-based jurisdiction. If your users are in those states, the laws apply to you regardless of where you're incorporated.
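The audit-trail fields named above (model, inputs, output, human action) map naturally onto an append-only JSON-lines log. A minimal sketch with an illustrative schema:

```python
import datetime
import json

def audit_record(model: str, inputs: dict, output: str, human_action: str) -> str:
    """Serialize one AI-assisted decision as a JSON-lines audit entry.
    Schema is illustrative; adapt field names to your own logging pipeline."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "inputs": inputs,            # or a hash/reference, to limit PII in logs
        "output": output,
        "human_action": human_action,  # e.g. "approved", "overrode", "none"
    })

line = audit_record(
    model="risk-scorer-v3",
    inputs={"applicant_id": "a-123", "features_hash": "9f2c"},
    output="score=0.71; recommend decline",
    human_action="overrode: approved after manual review",
)
print(line)
```

Logging a reference or hash of the inputs, rather than raw personal data, keeps the audit trail useful to regulators without turning the log itself into a privacy liability.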
Compliance Checklist for Development Teams
| Action | Relevant Laws | Priority |
|---|---|---|
| Audit AI systems for consequential/high-risk decisions | All | 🔴 Now |
| Implement user-facing AI disclosure notices | TRAIGA, HB 3773, CO SB 24-205 | 🔴 Now |
| Add decision logging and audit trails | TRAIGA, HB 3773, CO SB 24-205 | 🔴 Now |
| Run bias/disparate-impact testing on employment AI | HB 3773, CO SB 24-205 | 🔴 Now |
| Publish training-data documentation | AB 2013 | 🔴 Now |
| Write technical documentation (model cards, risk disclosures) | CO SB 24-205, SB 53 | 🟡 By June 30 |
| Build impact-assessment workflows | CO SB 24-205 | 🟡 By June 30 |
| Implement AI-content watermarking | SB 942, AB 853 | 🟡 By Aug 2 |
| Set up compliance automation tooling | All | 🟢 Ongoing |
For tooling recommendations, see our guide on AI compliance automation.
Related Reading
- EU AI Act β What Developers Need to Know
- AI and GDPR β A Developerβs Guide
- AI Compliance Automation Tools
- AI-Generated Code and Data Privacy
- Who Owns AI-Generated Code?
This guide covers laws enacted as of April 2026. State AI regulation is evolving rapidly; additional bills are moving through legislatures in New York, Connecticut, and Virginia. We'll update this guide as new laws pass.