Groundbreaking Texas AI Law Also Brings Needed Clarity on Use of Biometric Technologies for Security


On June 22, 2025, Texas Gov. Greg Abbott signed into law H.B. 149, the Texas Responsible Artificial Intelligence Governance Act (TRIAGA).

Amid a hodgepodge of state measures aimed at regulating AI introduced across dozens of states in 2024 and 2025, Texas is only the second state after Colorado to enact an AI measure with broad requirements applicable to both private and public sector entities, as opposed to laws more narrowly tailored to specific applications. Utah’s 2024 law, for example, requires disclosure that a consumer is interacting with generative AI in certain circumstances.

The move by Texas lawmakers came as a surprise to many, given concerns from the Trump administration and Congress about the potential for a patchwork of haphazard state rules. Those concerns were recently expressed by White House Office of Science and Technology Policy Director Michael Kratsios and demonstrated by the 10-year moratorium on enforcement of certain AI-specific state laws that was recently approved for inclusion in congressional “reconciliation” legislation under consideration this summer.

TRIAGA is a so-called “prohibited use” model focused on restricting use of AI for harmful purposes, which is distinct from other broad regulatory approaches focused on regulating use of AI for “automated decision-making” or imposing a requirement for premarket licenses to provide or use certain AI technologies.

Yet the Texas measure is targeted in its approach, addressing use cases that are clearly harmful and conditioning certain government uses of AI, while aiming to avoid burdens on broader applications that could limit AI’s societal benefits. Both public- and private-sector entities are prohibited from using AI to induce harm or self-harm or to produce exploitative materials, for example. State and local government is additionally prohibited from using AI for “social scoring” and must provide notice when consumers are interacting with AI technology it provides. And government is prohibited from collecting biometric data without consent when doing so infringes on rights or violates other state laws – but the measure does not otherwise restrict use of biometrics. This particular section within TRIAGA may seem complex for those not following its evolution through the legislative process. Unified Law Group in Dallas, Texas, has produced an excellent analysis of the measure and its specific implications for use of biometric technologies.

What is most significant for the security industry is the clarity H.B. 149 additionally brings to other Texas laws on biometrics. In 2009, when biometric technology was not widely understood, the Texas Capture or Use of Biometric Identifiers Act (CUBI) was enacted to regulate the collection of biometric identifiers for a “commercial purpose.” Notably, this key term, and others, are left undefined in the measure. This ambiguity has resulted in differing understandings among users and providers regarding what may or may not be subject to CUBI, especially regarding safety, security and antifraud applications of biometrics that would be unworkable under CUBI’s consent requirement.

The uncertainty has led some providers of security products utilizing biometric technologies to refrain from offering their products or certain features to Texas customers altogether out of an abundance of caution, leaving those customers without better capabilities to prevent unauthorized facility access or quickly respond to emergencies, for example.

It has also resulted in inquiries from customers seeking additional clarity, in light of significant CUBI enforcement actions recently brought against Google and Meta Platforms, including a massive settlement against the latter in 2024. One Texas county even asked for clarification from the attorney general regarding whether specific security system features could be provided by its school district vendor under CUBI’s requirements.

With amendments via H.B. 149, CUBI now clearly distinguishes between use of biometric information for commercial purposes and for security purposes, by exempting safety, security and law enforcement applications (provided the technology qualifies as an “artificial intelligence system” as defined in the measure). This clarifying exemption is described as follows:

(3) the development or deployment of an artificial intelligence model or system for the purposes of:

(A) preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B) preserving the integrity or security of a system; or

(C) investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

Prior to this, an exception was provided only for “voiceprint data retained by a financial institution or an affiliate of a financial institution.”

Not only should this additional exemption clear up any confusion regarding the allowable use of biometric-enabled technologies for public safety and security, it also brings the law into alignment with Texas’ comprehensive data privacy law passed in 2023, the Texas Data Privacy and Security Act (TDPSA), which contains a similar security exemption. Both laws cover biometric data, but there were no provisions within TDPSA indicating how it relates to CUBI.

Though each of these three acts (CUBI, TDPSA and TRIAGA) is a distinct law with its own scope, and they will continue to coexist and overlap, the changes brought by H.B. 149 provide much greater clarity and alignment with respect to biometric technologies in security and public safety. And importantly, the clarifying exemption is unaffected by whether or not Congress ultimately adopts the AI moratorium in the U.S. Senate’s reconciliation measure.

Developers of biometric technologies should additionally take note of a further clarification added by the measure applicable to algorithm training, which is that CUBI does not apply to:

(2) the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual;

It further specifies, regarding training, that CUBI’s notice, consent and data destruction requirements apply to technology that is subsequently deployed for biometric identification – outside of the exempted purposes:

(f) If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to: (1) this section’s provisions for the possession and destruction of a biometric identifier; and (2) the penalties associated with a violation of this section.

Developers should also be aware of additional stipulations and caveats added in the measure regarding when CUBI’s requirements apply to processing images:

(b-1) For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

Again, this provision, along with other CUBI requirements, would not apply to systems developed or deployed for the safety and security purposes described in the exemption.

Ultimately, the clarifications regarding security will help make our communities safer by ensuring that biometric technology can be responsibly leveraged for physical access control and security systems, as well as successful law enforcement investigative applications that help crack cold cases, fight human trafficking and prosecute sex crimes against children. These technologies have also become critical tools in school safety applications, which is increasingly acknowledged by lawmakers. For example, in April, Colorado Gov. Jared Polis signed into law a measure approving use of facial recognition technology for school safety purposes, such as quickly locating missing children and alerting staff when individuals prohibited from school property attempt to enter.

The Security Industry Association (SIA) and our members believe biometric technologies must be used only for beneficial purposes that are lawful, ethical and nondiscriminatory, and we encourage developers, end users and policymakers to take a look at resources SIA has provided on these issues, including its Principles for the Responsible and Effective Use of Facial Recognition Technology.