Texas AI Law TRAIGA Brings on Compliance Challenges: Analysis of the Texas Responsible Artificial Intelligence Governance Act
July 16, 2025
Privacy Plus+
Privacy, Technology and Perspective
The Texas Responsible Artificial Intelligence Governance Act—aptly acronymized as "TRAIGA," which means "bring" in Spanish—certainly brings significant compliance challenges for businesses operating in Texas. The Act, effective January 1, 2026, establishes AI regulation in Texas but contains structural disconnects that may complicate compliance efforts. We highlight these key issues in this post, but also encourage you to read our previous post summarizing the text of the Act by clicking on the following link:
https://www.hoschmorris.com/privacy-plus-news/traiga
The Scope-Obligations Mismatch
TRAIGA's fundamental challenge lies in its misaligned application and compliance obligations. Section 551.002 applies broadly to any person who "(1) promotes, advertises, or conducts business in this state; (2) produces a product or service used by residents of this state; or (3) develops or deploys an artificial intelligence system in this state."
However, Chapter 552's substantive obligations are structured around only two roles defined in Section 552.001: "(1) 'Deployer' means a person who deploys an artificial intelligence system for use in this state" and "(2) 'Developer' means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state."
Other entities covered under Section 551.002(1)-(2) (that is, entities who promote, advertise, or conduct business in Texas, or produce a product or service used by Texans) may not meet the definition of either "deployer" or "developer" in Section 552.001, yet Chapter 552's prohibitions still appear to apply to them as "person[s]" generally (Sections 552.052, 552.055, 552.056, 552.057) or specifically to "deployers" and "developers" (Section 552.051).
This creates uncertainty about which entities are covered and what obligations they face.
Definitional Inconsistencies
TRAIGA also uses key terms inconsistently, and invokes the terms "use," "deploy," "develop," and "distribute" without defining their relationships or boundaries:
- Section 551.003 references "development and use of artificial intelligence systems"
- Section 552.051(f) applies when "an artificial intelligence system is used in relation to health care service"
- Section 552.053 prohibits governmental entities from "use or deploy[ment]" of AI systems
- Section 552.057 prohibits persons from "develop[ing] or distribute[ing]" AI systems
These undefined terms create compliance uncertainty for entities that fall squarely within TRAIGA's scope under Section 551.002 (because they promote, advertise, or conduct business in Texas, or produce a product or service used by Texas residents) but remain unclear about their roles as "deployers" or "developers" under Section 552.001.
Limited Consumer Protection Scope
Section 551.001(2) defines "consumer" as "an individual who is a resident of this state acting only in an individual or household context" and explicitly excludes "an individual acting in a commercial or employment context."
This “commercial or employment” exclusion appears to undermine the stated purpose in TRAIGA's Section 551.003(2), which is to "protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems." Employment and business-to-business AI applications represent significant areas of documented AI bias that fall outside TRAIGA's consumer protection framework.
In the employment context, particularly, employers increasingly rely on AI for resume screening, video interview analysis, performance evaluation, scheduling, workplace monitoring, hiring, and even termination decisions. In fact, AI systems make thousands of employment decisions daily, often with little transparency or oversight. The risks of AI use are well-known, extensively documented, and highly foreseeable, especially given the inherent power imbalances between employers and employees, which make worker protection particularly crucial. We would like to see Texas lead rather than lag in this area by extending TRAIGA's anti-discrimination and constitutional protection provisions to employment contexts, requiring transparency, creating accountability, and ensuring human oversight of AI systems. The exclusion of these applications from protection is a policy inconsistency that weakens the entire regulatory framework.
Weak Consent and Disclosure Standards
Section 552.054(b) prohibits government use of AI for biometric identification "without the individual's consent," but TRAIGA does not define valid consent, set standards for obtaining it, or provide withdrawal mechanisms, leaving it ambiguous how consent should be obtained.
Moreover, TRAIGA's disclosure requirements are limited to government agencies (Section 552.051(b)) and healthcare contexts (Section 552.051(f)), leaving most private-sector AI interactions without transparency requirements; in effect, no notice or consent is required at all.
Enforcement Framework
Section 552.101(a) grants exclusive enforcement authority to the attorney general, with civil penalties under Section 552.105(a) ranging from $10,000 to $200,000 per violation. The Act includes a 60-day cure period (Section 552.104) and a rebuttable presumption of reasonable care (Section 552.105(c)).
However, enforcement faces the same definitional challenges: Section 552.103's detailed investigative demands assume clarity about which entities have specific obligations—clarity that TRAIGA's structure doesn't consistently provide.
Positive Elements
Chapter 553's regulatory "sandbox" program allows testing of AI systems for up to 36 months with certain requirements waived (Section 553.053(a)). The Texas Artificial Intelligence Council established in Chapter 554 provides ongoing governance and policy development functions, which may aid in clarifying the issues identified here.
Our Thoughts
To summarize, TRAIGA's structural issues create several compliance challenges:
Coverage uncertainty: Section 551.002(1)-(2) creates sweeping applicability, but Chapter 552 appears to impose specific obligations on only a limited subset of defined entities.
Definitional gaps: Inconsistent use of "use," "deploy," "develop," and "distribute" complicates role identification.
Limited scope: Employment and B2B exclusions leave significant AI applications unaddressed.
Enforcement complexity: Investigative and penalty frameworks assume clearer obligations than TRAIGA provides.
While TRAIGA represents important progress in AI governance, these structural disconnects between scope and obligations, combined with definitional ambiguities and significant gaps in protective coverage, may limit its effectiveness and create compliance uncertainty for businesses operating in Texas.
---
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.