Texas Moves to Regulate AI

Texas introduces landmark bill to regulate AI development and use

A bull in a grassy field with a neural network diagram integrated into its horns.

Lawmakers in the U.S. state of Texas are considering stringent AI regulation.

What’s new: The Texas legislature is considering the proposed Texas Responsible AI Governance Act (TRAIGA). The bill would prohibit a short list of harmful or invasive uses of AI, such as output intended to manipulate users. It would impose strict oversight on AI systems that contribute to decisions in key areas like health care.

How it works: Republican House Representative Giovanni Capriglione introduced TRAIGA, also known as HB 1709, to the state legislature at the end of 2024. If it’s passed and signed, the law would go into effect in September 2025.

  • The proposed law would apply to any company that develops, distributes, or deploys an AI system while doing business in Texas, regardless of where the company is headquartered. It makes no distinction between large and small models or research and commercial uses. However, it includes a modest carve-out for independent small businesses that are based in the state.
  • The law controls “high-risk” AI systems that bear on consequential decisions in areas that include education, employment, financial services, transportation, housing, health care, and voting. The following uses of AI would be banned: manipulating, deceiving, or coercing users; inferring race or gender from biometric data; computing a “social score or similar categorical estimation or valuation of a person or group;” and generating sexually explicit deepfakes. The law is especially broad with respect to deepfakes: It outlaws any system that is “capable of producing unlawful visual material.”
  • Companies would have to notify users whenever AI is used. They would also have to safeguard against algorithmic discrimination, maintain and share detailed records of training data and accuracy metrics, assess impacts, and withdraw any system that violates the law until it can achieve compliance.
  • The Texas attorney general would investigate companies that build or use AI, file civil lawsuits, and impose penalties of up to $200,000 per violation, plus additional fines of up to $40,000 per day for ongoing noncompliance.
  • The bill would establish a Texas AI Council that reports to the governor. Its members would be appointed by the governor, lieutenant governor, and state legislative leaders. The council would monitor AI companies, develop non-binding ethical guidelines for them, and recommend new laws and regulations.

Sandbox: A “sandbox” provision would allow registered AI developers to test and refine AI systems temporarily with fewer restrictions. Developers who registered AI projects with the Texas AI Council would gain temporary immunity, even if their systems did not fully comply with the law. However, this exemption would come with conditions: Developers must submit detailed reports on their projects’ purposes, risks, and mitigation plans. The sandbox status would be in effect for 36 months (with possible extensions), and organizations would have to bring their systems into compliance or decommission them once the period ends. The Texas AI Council could revoke sandbox protections if it determined that a project posed a risk of public harm or failed to meet reporting obligations.

Behind the news: Other U.S. states, too, are considering or have already passed laws that regulate AI:

  • California’s SB 1047 aimed to regulate both open and closed models above a specific size. The state’s governor vetoed the bill, citing concerns about regulatory gaps and overreach.
  • Colorado enacted its AI Act in 2024. Like the Texas proposal, it mandates civil penalties for algorithmic discrimination in “consequential use of AI.” However, it doesn’t create a government body to regulate AI or outlaw specific uses.
  • New York state is considering a bill similar to California SB 1047 but narrower in scope. New York’s proposed bill would focus on catastrophic harms potentially caused by AI models that cost more than 10^26 FLOPs or $100 million or more to train. It would mandate third-party audits and protections for whistleblowers.

Why it matters: AI is not specifically regulated at the national level in the United States. This leaves individual states free to formulate their own laws. However, state-by-state regulation risks a patchwork of laws in which a system — or a particular feature — may be legal in some states but not others. Moreover, given the distributed nature of AI development and deployment, a law that governs AI in an individual state could affect developers and users worldwide.

We’re thinking: The proposed bill has its positive aspects, particularly insofar as it seeks to restrict harmful applications rather than the underlying technology. However, it imposes burdensome requirements for compliance, suffers from overly broad language, fails to adequately protect open source, and doesn’t distinguish between research and commercial use. Beyond that, state-by-state regulation of AI is not workable. Instead, AI demands international conventions and standards.
