AI law is no longer theoretical. The EU AI Act has already started to bite, with general-purpose AI duties taking effect this summer. In the U.S., federal action has accelerated as well. The Take It Down Act, signed into law in May 2025, is the first federal statute to directly regulate AI-generated content, criminalizing non-consensual intimate deepfakes and requiring platforms to remove flagged content within 48 hours. At the same time, the Trump administration unveiled **America’s AI Action Plan** — a sweeping blueprint with over 90 policy actions — and issued executive orders mandating “unbiased AI” in federal procurement, accelerating permits for data-center infrastructure, and promoting exports of the “American AI technology stack.”
But the most immediate source of binding obligations for AI systems and vendors comes not from Congress, but from the states. With no omnibus federal AI statute on the horizon, legislatures in Colorado, Texas, California, and beyond have stepped in, creating a fragmented legal landscape that companies must navigate carefully.
Below, I explore some of the ways states have approached AI legislation and regulation in recent months.
What it does: Colorado’s Law Concerning Consumer Protections in Interactions with Artificial Intelligence Systems, effective February 1, 2026, is the most comprehensive state AI law to date. It regulates “high-risk AI systems,” meaning systems that make, or are a substantial factor in making, consequential decisions in areas such as hiring, housing, health care, credit, or education.
Who it applies to: Developers and deployers of high-risk AI systems doing business in Colorado.
Why it matters: This is the first U.S. state law modeled on the EU AI Act’s risk-based approach. It sets out a structured compliance regime: risk management, documentation, impact assessments, and consumer notice.
Going forward: For AI vendors, this means shipping systems with more documentation and clearer disclosures. For businesses deploying AI, it means new compliance work: tracking risks, completing impact assessments, and giving customers clear notice.
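To make that record-keeping duty concrete, here is a minimal sketch in Python of the kind of internal record a deployer might maintain for each high-risk system. The class, field names, and reassessment rule are hypothetical illustrations of the general idea, not terms drawn from the Colorado statute.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical internal record a deployer might keep for each high-risk AI
# system; field names are illustrative, not drawn from the Colorado statute.
@dataclass
class HighRiskSystemRecord:
    system_name: str
    decision_domain: str          # e.g. "hiring", "credit", "housing"
    intended_use: str
    known_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    last_impact_assessment: date | None = None
    consumer_notice_text: str = ""

    def needs_reassessment(self, today: date, max_age_days: int = 365) -> bool:
        """Flag a system whose impact assessment is missing or stale."""
        if self.last_impact_assessment is None:
            return True
        return (today - self.last_impact_assessment).days > max_age_days


# Example: a resume-screening tool used to support hiring decisions.
screener = HighRiskSystemRecord(
    system_name="resume-screener-v2",
    decision_domain="hiring",
    intended_use="Rank applicants for interview scheduling",
    known_risks=["potential disparate impact on protected classes"],
    mitigations=["quarterly bias audit", "human review of all rejections"],
    last_impact_assessment=date(2025, 11, 1),
    consumer_notice_text="An AI system is used to help screen applications.",
)
print(screener.needs_reassessment(date.today()))
```

The specific schema matters less than the habit it represents: an inventory of high-risk systems, documented risks and mitigations, and a dated trail of assessments and notices.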
What it does: The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, requires developers and deployers to use “reasonable care” in designing and using AI systems, but creates a safe harbor for organizations that align with the National Institute of Standards and Technology (NIST) AI Risk Management Framework and its Generative AI Profile. The law also imposes direct prohibitions on harmful AI uses (such as systems that promote self-harm, enable social scoring, or create child sexual abuse material).
Who it applies to: Developers and deployers across industries that offer services in Texas.
Key feature: A safe harbor. Developers and deployers that align with the NIST AI Risk Management Framework and its Generative AI Profile are presumed to have exercised “reasonable care.”
Why it matters: Aligning with the NIST Framework gives developers and deployers a presumption that they exercised reasonable care, effectively making NIST the de facto national standard for responsible AI practice.
Going forward: Companies doing business in Texas will need to build compliance programs around both the NIST standards and the Act’s explicit prohibitions. Because the safe harbor rewards it, many will likely adopt NIST standards across the board, making the framework the practical baseline in the U.S.
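As a rough illustration of what “building around NIST” might look like in practice, here is a short Python sketch of a checklist keyed to the AI RMF’s four core functions (Govern, Map, Measure, Manage), with the Act’s prohibitions tracked separately. The individual checklist items are illustrative assumptions, not language quoted from TRAIGA or the NIST framework.

```python
# Hypothetical compliance checklist keyed to the NIST AI RMF's four core
# functions (Govern, Map, Measure, Manage); the individual items are
# illustrative assumptions, not requirements quoted from TRAIGA or NIST.
NIST_RMF_CHECKLIST = {
    "Govern": ["AI policy approved by leadership", "roles and accountability assigned"],
    "Map": ["intended use and context documented", "affected groups identified"],
    "Measure": ["performance and bias metrics defined", "test results recorded"],
    "Manage": ["risk treatment plan in place", "incident response process documented"],
}

# The Act's explicit prohibitions (as summarized above) sit outside the safe
# harbor, so they are tracked separately from the framework checklist.
PROHIBITED_USES = [
    "systems that promote self-harm",
    "social scoring",
    "creation of child sexual abuse material",
]

def open_items(completed: set[str]) -> dict[str, list[str]]:
    """Return checklist items not yet marked complete, grouped by RMF function."""
    return {
        function: [item for item in items if item not in completed]
        for function, items in NIST_RMF_CHECKLIST.items()
    }

print(open_items({"AI policy approved by leadership"}))
```

The point of separating the two lists is that the safe harbor rewards framework alignment, but no amount of NIST alignment excuses a prohibited use.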