Texas Responsible Artificial Intelligence Governance Act (TRAIGA): A Summary
July 3, 2025
Privacy Plus+
Privacy, Technology and Perspective
On June 22, 2025, the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA” or the “Act”) was signed into law. TRAIGA represents Texas's entry into the growing landscape of state-level AI regulation, joining California, Colorado, and Utah in establishing comprehensive AI governance frameworks, and will take effect on January 1, 2026.
With the January 2026 effective date rapidly approaching, government agencies and private companies operating AI systems in Texas need to understand their distinct compliance obligations under this new law. This post provides an analysis of TRAIGA's requirements, organized by entity type to help readers quickly identify their specific responsibilities.
A link to TRAIGA follows, along with our thoughts and suggestions for compliance:
https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149F.pdf
Texas Responsible AI Governance Act (TRAIGA)
H.B. 149, 89th Legislature, Regular Session (2025)
EFFECTIVE DATE: January 1, 2026
I. Scope and Key Definitions
A. Legislative Purpose and Construction (Section 551.003):
TRAIGA "shall be broadly construed and applied to promote its underlying purposes," which are to:
1. Facilitate and advance the responsible development and use of artificial intelligence systems;
2. Protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;
3. Provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and
4. Provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.
B. Applicability and Covered Parties:
1. Non-Governmental Coverage (Section 551.002): TRAIGA applies to any "person" who:
· Promotes, advertises, or conducts business in this state; OR
· Produces a product or service used by residents of this state; OR
· Develops or deploys an artificial intelligence system in this state.
2. Governmental Coverage (Section 552.001):
a. “Governmental entity” means "any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state."
b. Specifically Excludes (Section 552.001(3)): Hospital districts and institutions of higher education.
C. Key Definitions
1. Core AI Terms
· Artificial Intelligence System (Section 551.001(1)): "Any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments."
· Developer (Section 552.001(2)): "A person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state."
· Deployer (Section 552.001(1)): "A person who deploys an artificial intelligence system for use in this state."
2. Protected Persons and Classes
· Consumer (Section 551.001(2)): "An individual who is a resident of this state acting only in an individual or household context." Importantly, this excludes individuals acting in commercial or employment contexts.
· Protected Class (Section 552.056(a)(3)): “A group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability."
3. Specialized Terms
· Biometric data (Section 552.054(a)): "data generated by automatic measurements of an individual's biological characteristics," including fingerprints, voiceprints, eye retina or iris, or other unique biological patterns used to identify specific individuals.
· Excludes:
1. Physical or digital photographs and data generated from them;
2. Video or audio recordings and data generated from them;
3. Information collected for healthcare under HIPAA.
· Health Care Services (Section 552.051(a)): Services related to human health or diagnosis, prevention, or treatment of disease/impairment provided by licensed, registered, or certified individuals.
II. Governmental Entity Obligations
A. Consumer Disclosure Requirements (Section 552.051)
1. Mandatory Disclosure: Governmental entities must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. This requirement applies regardless of whether it would be obvious to a reasonable consumer that they are interacting with AI.
2. Disclosure Standards: All disclosures must meet three specific requirements:
1. Be clear and conspicuous
2. Be written in plain language
3. Not use a "dark pattern" – meaning "a user interface designed or manipulated with the effect of substantially subverting or impairing user autonomy, decision-making, or choice, and includes any practice the Federal Trade Commission refers to as a dark pattern."
3. Hyperlinks: Disclosures may be provided through hyperlinks to a separate webpage.
B. Government Prohibitions
1. Social Scoring (Section 552.053): Governmental entities may not use AI systems that evaluate or classify persons based on social behavior or personal characteristics to calculate or assign a social score that results in:
· Detrimental treatment in unrelated social contexts
· Unjustified or disproportionate treatment
· Infringement of constitutional or legal rights
2. Biometric Data Capture (Section 552.054): Governmental entities cannot use AI systems to uniquely identify specific individuals using biometric data or gather images from the Internet without consent if such gathering would infringe on constitutional or legal rights.
III. Other Obligations for All “Persons”
A. Health Care Service Disclosures (Section 552.051(f)):
For AI systems used in health care services, providers must disclose the use of AI to service recipients or their personal representatives no later than when the service is first provided, except in emergencies, in which case disclosure must occur as soon as reasonably possible.
B. Manipulation of Human Behavior (Section 552.052):
No person may develop or deploy an AI system that intentionally aims to incite or encourage a person to: (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.
C. Constitutional Protection (Section 552.055):
No person may develop or deploy an AI system "with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual's rights guaranteed under the United States Constitution."
D. Unlawful Discrimination (Section 552.056)
1. General Prohibition: No person may develop or deploy an AI system "with the intent to unlawfully discriminate against a protected class in violation of state or federal law."
2. Disparate Impact Standard: Critically, TRAIGA specifies that "a disparate impact is not sufficient by itself to demonstrate an intent to discriminate."
3. Industry Exemptions. The prohibition does not apply to:
· Insurance entities subject to applicable insurance discrimination statutes. (Id. § 552.056(d)).
· Federally insured financial institutions complying with federal and state banking laws. (Id. § 552.056(e)).
E. Sexually Explicit Content and Child Pornography (Section 552.057):
TRAIGA prohibits persons from: (1) Developing or distributing AI systems with the sole intent of producing visual material violating Section 43.26 of the Penal Code or deep fake videos/images violating Section 21.165; and (2) Intentionally developing AI systems for text-based conversations simulating sexual conduct while impersonating a child under 18.
IV. Enforcement Framework
A. Enforcement Authority:
The Texas Attorney General has exclusive authority to enforce TRAIGA, with limited exceptions for state licensing agencies. (Id. § 552.101(a)). The Act explicitly provides no private right of action. (Id. § 552.101(b)).
B. Complaint Mechanism:
The Attorney General must create and maintain an online complaint mechanism for consumers. (Id. § 552.102).
C. Notice and Cure Provisions
1. Written Notice Requirement: Before bringing an enforcement action, the Attorney General must provide written notice identifying specific alleged violations. (Id. § 552.104(a)).
2. 60-Day Cure Period: Alleged violators have 60 days to:
· Cure the violation (Id. § 552.104(b)(2)(A)).
· Provide supporting documentation showing the cure (Id. § 552.104(b)(2)(B)(ii)).
· Update internal policies to prevent further violations. (Id. § 552.104(b)(2)(B)(iii)).
D. Civil Penalties (Section 552.105):
1. Monetary penalties: TRAIGA establishes a tiered penalty structure:
· Curable violations: $10,000 - $12,000 per violation
· Uncurable violations: $80,000 - $200,000 per violation
· Ongoing violations: $2,000 - $40,000 per day
2. Injunctive Relief: The Attorney General may also seek injunctive relief, attorney's fees, and investigative costs. The Act contains a rebuttable presumption that a person used reasonable care as required under the law. (Id. § 552.105(c)).
E. Safe Harbor Provisions (Section 552.105(e)):
Defendants cannot be found liable if:
· Third-party misuse: Another person uses the AI system in a prohibited manner.
· Good faith discovery: The violation is discovered through feedback, testing, state agency guidelines, or substantial compliance with recognized AI risk management frameworks like NIST's AI Risk Management Framework.
F. State Agency Enforcement (Section 552.106(b)):
If the Attorney General finds violations by licensed professionals and recommends additional enforcement, state agencies may impose sanctions including: (1) license suspension, probation, or revocation; and (2) monetary penalties up to $100,000.
V. Additional TRAIGA Provisions
A. Regulatory Sandbox Program (Section 553):
TRAIGA establishes a regulatory sandbox program administered by the Texas Department of Information Resources that allows companies to test innovative AI systems for up to 36 months with certain regulatory requirements waived. Participants receive legal protection from enforcement actions for waived laws during the testing period, though TRAIGA's core prohibitions remain in effect. The program requires detailed applications including system descriptions, benefit assessments, and mitigation plans.
B. Texas Artificial Intelligence Council (Section 554):
The Act creates a seven-member Texas Artificial Intelligence Council with appointments split between the Governor, Lieutenant Governor, and House Speaker. The Council's eleven functions include ensuring ethical AI development, protecting public safety, identifying regulatory barriers to innovation, and evaluating competitive concerns. Council members must have expertise in areas such as AI systems, data privacy, ethics, public policy, or risk management. The Council may issue reports and conduct training but cannot adopt binding rules or interfere with state agencies.
Our Thoughts
We will save our comments (and criticism) for a future post. Suffice it to say that TRAIGA represents a significant development in state-level AI governance, establishing disclosure requirements, behavioral prohibitions, and enforcement mechanisms. With the January 1, 2026 effective date approaching, organizations operating AI systems in Texas should begin compliance preparations. The Act's broad definitional scope and substantial penalties make early assessment and compliance planning prudent.
Considerations should include:
Actions for All Organizations
1. AI System Inventory: Catalog all AI systems developed or deployed in Texas, including third-party tools and embedded AI capabilities.
2. Risk Analysis: Evaluate whether systems interact with consumers, affect protected classes, potentially infringe constitutional rights, or could encourage harmful behavior.
3. Safe Harbor Alignment: To qualify for liability protections, consider aligning with NIST's AI Risk Management Framework or other nationally recognized standards.
Government-Specific Requirements
4. Disclosure Implementation: Develop clear, conspicuous, plain-language disclosures for all consumer-facing AI interactions (including healthcare contexts).
5. System Elimination: Remove or restrict any AI systems that could constitute social scoring or unauthorized biometric identification.
6. Policy Updates: Revise internal procedures to ensure ongoing compliance with disclosure and operational restrictions.
Other Considerations
7. Healthcare Compliance: If providing AI-enabled healthcare services, ensure disclosure requirements are met for recipients or their personal representatives, with special procedures for emergency situations.
8. Sandbox Evaluation: Assess whether innovative systems would benefit from the 36-month testing program with legal protections.
9. Compliance Program: Implement robust testing, monitoring, and documentation protocols to qualify for safe harbor defenses.
10. Legal Review: Conduct a comprehensive review of AI development, deployment, and use practices with counsel familiar with TRAIGA requirements. Note that the Act contains peculiar ambiguities between its broad applicability criteria and its substantive obligations, which may require considerable analysis to answer even the most basic questions: "Is my business covered by this law, and if so, what are my obligations?" (We plan to address this in a later post.)
Important Note: Despite potential federal moratorium discussions, organizations should begin TRAIGA compliance preparations well in advance of the January 1, 2026 effective date due to regulatory uncertainty and the comprehensive nature of the requirements.
---
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.