Misgivings Cloud First-in-Nation Colorado AI Law: Implications and Considerations for the Security Industry

Governor Asks for Further Refinement and Federal Preemption

On May 17, 2024, Colorado Gov. Jared Polis signed into law Senate Bill 205 (SB-205), making Colorado the first state in the nation to enact a broadly applicable measure regulating artificial intelligence (AI). The bill followed a winding path: it was imported from a draft developed in Connecticut with just three weeks left in the legislative session for consideration, and numerous last-minute changes were made with little time for stakeholder review. Connecticut Gov. Ned Lamont issued a veto threat over that state’s similar measure, ending its consideration before the Connecticut legislature adjourned on May 8.

The rushed process in Colorado prompted opposition from dozens of Colorado businesses, technology groups and AI startups at the Senate committee hearing on the measure, as well as from national organizations like the Consumer Technology Association and the U.S. Chamber of Commerce. Opponents cited key areas of ambiguity that could make the measure difficult to interpret and comply with, creating compliance risks across multiple business sectors.

In an unusual move, Gov. Polis signed the Colorado measure into law despite significant misgivings outlined in his signing statement, which expressed concern about the impact on “an industry that is fueling critical technological advancements across our state.” Polis further noted: “Laws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct. Notably, this bill deviates from that practice by regulating the results of AI system use, regardless of intent, and I encourage the legislature to reexamine this concept … of creating a separate anti-discrimination framework for AI systems only…”

Polis urged that issues be fixed during next year’s legislative session, stating that “Stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state.”

He also asked for preemptive federal legislation, stating that approaches to addressing AI-driven discrimination are “better considered and applied by the federal government to limit and preempt varied compliance burdens…Should the federal government not preempt this with a needed cohesive federal approach, I encourage the General Assembly to work closely with stakeholders to craft future legislation for my signature that will amend this bill to conform with evidence based findings.”

The enactment of a measure considered unfinished, on the assumption it will be fixed later, should give pause to stakeholders seeking harmonization of similar requirements nationwide, as it increases the possibility of a patchwork of vastly different requirements across states.

But future implications and potential for changes aside, here are the key provisions of the measure as currently enacted.

Scope and Key Requirements

Like several other similar state proposals this year, SB-205 focuses most of its requirements on the use of “high-risk” AI systems in forms of automated decision-making, primarily to limit “algorithmic discrimination.” Despite broad applicability across sectors and types of applications, the measure is notably far narrower in scope than the recently finalized EU AI Act and does not include prohibitions on specific technology applications as that measure does.

The new law places a long list of reporting, disclosure and testing obligations on both “developers” and “deployers” of “high-risk artificial intelligence systems.” Developers and deployers face risk of enforcement actions by the Colorado attorney general for violating any of these requirements. This includes, for example, a requirement that developers disclose to deployers “reasonably foreseeable risks” of algorithmic discrimination, and a requirement for deployers to notify the attorney general when a high-risk system has caused algorithmic discrimination.

The definitional aspects of SB-205 and several similar proposals are critical, as the requirements associated with “high-risk AI systems” are likely to effectively discourage the fielding of technology applications meeting this description. This is why much of the discussion around the bill leading up to its passage surrounded what would or would not be classified as a high-risk AI system, which ended up considerably narrower than where it started.

Key Terms and Implications

High-Risk Artificial Intelligence System

This key term triggers most of the bill’s requirements when a given technology falls into the category, and its scope hinges directly on the definition of the term “consequential decision” (below). A high-risk artificial intelligence system is defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, consequential decisions.” Note that an “artificial intelligence system” is defined broadly as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions or recommendations, that can influence physical or virtual environments.”

Consequential Decisions

The scope of this term, the most important in the bill, is the key element in determining whether an AI system is considered “high-risk” and thus how the law affects a business or its customers. While broader as first formulated, in the final measure consequential decisions are limited to “the provision or denial to any consumer of, or the cost or terms of” the following:

  1. Education enrollment or an education opportunity
  2. Employment or an employment opportunity
  3. A financial or lending service
  4. An essential government service
  5. Health care services
  6. Housing
  7. Insurance
  8. A legal service

Algorithmic Discrimination

Under the measure, a new duty is imposed on both developers and deployers to avoid and protect consumers from any algorithmic discrimination resulting from use of high-risk artificial intelligence systems.

Algorithmic discrimination is defined as “Any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

Substantial Factor

Incorporated into the definition of “high-risk artificial intelligence system” (i.e., whether the AI system is a “substantial factor” in making a “consequential decision”), this term also appears in several related state AI proposals, but it has not been consistently defined. In SB-205 it is defined as “a factor that: 1) assists in making a consequential decision; 2) is capable of altering the outcome of a consequential decision; and 3) is generated by an AI system,” and it explicitly includes use of an AI system to generate any decision or recommendation that is used as a basis to make a consequential decision about the consumer. Keep in mind, this term is applicable only in the context of a “consequential decision,” under the definition of high-risk artificial intelligence system.

Broad Disclosure Requirement

In a departure from other requirements in the measure applicable only to use of high-risk systems, the bill imposes disclosure requirements applicable to all deployers or developers that provide or use an AI system “intended to interact with” consumers (meaning individuals in Colorado). Because there is no definition of the term “intended to interact with” and it applies to all AI systems as broadly defined, this requirement has the potential to apply to a wide range of products and services. 

Exclusions

The measure includes lists of exclusions, both from the definition of high-risk AI systems and from the requirements altogether. Excluded from the high-risk definition are antifraud and cybersecurity technologies, among others. Several general exclusions would also apply to use for security purposes, highlighted below. Among others, these exclusions apply to the ability of deployers, developers and other persons to:

  • “Cooperate with a law enforcement agency concerning conduct or activity that the developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal laws, ordinances, or regulation”
  • “Take immediate steps to protect an interest that is essential for the life or physical safety of a consumer or another individual”
  • “By any means other than the use of facial recognition technology, prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or illegal activity; investigate, report, or prosecute the persons responsible for any such action; or preserve the integrity or security of systems”

Additionally, excluded from the definition of high-risk AI system are systems intended to “Detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review.”

Enforcement

The measure does not provide a private right of action, tasking the Colorado attorney general with enforcement of the requirements. However, it also provides rulemaking authority to the attorney general “for the purpose of implementing and enforcing” the bill.

Implications and Key Considerations for the Security Industry

  • SB-205 does not become effective for nearly two years, in February 2026.
  • At least some details could change through 2025 legislation.
  • Future rulemaking by the attorney general could alter or further define the scope of applicability. In any case, significant ambiguity as to how the different sections and requirements work together prevents a fully certain initial analysis.
  • While there are exclusions for safety and security uses of technology, there is no direct indication that such uses would otherwise be covered. The underlying requirements apply only to high-risk AI systems (aside from the disclosure requirements), so whether a technology application would meet the definition of a high-risk AI system in the first place is the most important question in evaluating the potential applicability of the new law. This primary consideration also applies to review of other similar proposals at the state and federal level. Some proposals have included additional broader categories under the definition of consequential decisions, potentially encompassing a broader range of AI applications.
  • Another key concern for companies doing business in Colorado will be making a determination whether their AI-enabled products “interact with consumers,” triggering notification requirements.
  • There is a small business exemption from high-risk AI system requirements for firms with 50 or fewer employees; however, it is only applicable if such businesses are not using their “own data” to train the system.

The Security Industry Association (SIA) will continue to track developments and provide analysis related to Colorado’s new AI law, and we continue to welcome any member feedback. For further information, contact SIA Senior Director of Government Relations Jake Parker at jparker@securityindustry.org.