Five months after Gov. Jared Polis summoned business, education and consumer-rights advocates to break a two-year stalemate over artificial-intelligence regulation, the group has agreed on a plan for how the state should prevent discrimination by AI systems.
The agreement, announced Tuesday by the Democratic governor, should pave the way for legislators to propose and pass a consensus AI regulatory framework to replace widely derided rules that Polis reluctantly signed into law in May 2024. Those rules are scheduled to take effect at the end of June but would be supplanted by a new framework that instead would require the Colorado Attorney General’s Office to finish rulemaking by the end of 2026, just days before current AG Phil Weiser is termed out of office.
Colorado’s 2024 law, originally scheduled to take effect this February, represented the most comprehensive AI regulation in the country but drew immediate criticism for provisions that tech companies said would force AI developers to leave the state. Among those were extremely detailed disclosures that deployers and developers must make about the risk of discrimination by AI and a review process that could have allowed consumers upset by AI decisions on everything from job applications to insurance approvals to petition developers and deployers for reconsideration.
Senate Majority Leader Robert Rodriguez, the Denver Democrat who sponsored the 2024 law, oversaw a task force that tried to iron out problems in the law in time for the 2025 session, but he ended up killing his attempted fix when few participants backed it. Two competing bills in the 2025 special session attempted again to rewrite the regulations, but disputes over liability issues grounded both and led legislators simply to postpone the law’s effective date until June of this year.
Governor turns to hand-picked group of AI experts
So, Polis called together a working group in October that met behind closed doors and hammered away at areas of disagreement, often with business and technology groups on one side of the table and civil-rights and labor groups on the other. As the pace of meetings picked up with the start of the 2026 session, participants bore down on issues of liability, appeals and several key definitions and produced what Polis on Tuesday called unanimous support for a new AI regulatory framework.
“I am very grateful to the hardworking members of the AI Policy Working Group that have reached a unanimous agreement on AI policy to protect consumers and support innovation in our state,” the governor said in a news release Tuesday. “I look forward to supporting the recommended framework as legislation moves through the process and commend the AI Policy Work Group for their efforts to get us here.”
The framework requires AI developers to notify deployers of the technology — from insurers to educational institutions to employers plowing through resumes — of how the system works and of any known risks and circumstances in which the system should not be used. These requirements would apply when a developer creates a system with the reasonable expectation that it may be used to make consequential decisions.
Rules would oversee consequential AI decisions
A deployer then must provide a “clear and conspicuous notice” to individuals when an automated decision-making technology is being used for a consequential decision about them. Such decisions involve:
- Educational enrollment or an educational opportunity;
- Employment or an employment opportunity;
- The lease or purchase of residential real estate;
- A financial or lending decision;
- Insurance decisions involving underwriting, pricing, coverage or claims adjudication;
- Provision of healthcare services; or,
- Eligibility and renewal determinations involving essential government services and public benefits.
If an AI system makes an adverse decision against an individual — say, it denies a healthcare service or turns someone down for a loan — the deployer must provide within 30 days a description of the consequential decision and the role the AI system played in it. The deployer also must offer instructions and a “simple-to-follow process” by which individuals can learn about the types of personal data that influenced the decision, as well as information on how someone can request a human-led review or reconsideration.
Appeals to be allowed when “commercially reasonable”
The wording in the proposed framework is narrower than in the 2024 law, which seemed to allow unlimited direct appeals to AI deployers — a possibility that some companies and educational institutions warned would have diverted most personnel to appeal cases. The proposed new framework directs the AG’s office to adopt rules by Dec. 31 on post-adverse disclosures and to allow human review and reconsideration through them “to the extent commercially reasonable” while also letting consumers correct materially inaccurate personal data.
As to enforcement of the law, the proposal assigns exclusive authority to the AG’s office, allowing it to seek civil penalties for violations, and it specifies that the bill would not create a new private right of action allowing the filing of civil lawsuits. It also gives deployers or developers 90 days after receiving notice to cure an alleged violation without incurring civil penalties, though it permits the AG to seek injunctive relief to prevent future violations.
Where the special-session bill ran aground over its proposed creation of joint and several liability for developers and deployers, the new framework would allocate liability among them based on each party’s relative fault for the violation of the law. And it proposes to absolve developers of fault if deployers use the AI system in ways for which it wasn’t intended, marketed or contracted by the developer. The framework also doesn’t limit the enforceability of contract terms agreed to between the deployer and developer.
Bill could reverse widespread criticism
The 2024 law has drawn criticism from President Donald Trump’s administration and has been blamed for chilling growth in the AI sector, including tech giant Palantir’s decision last month to relocate its headquarters from Denver to Miami. Sen. Mark Baisley, R-Woodland Park, quipped in the well of the Senate on Feb. 18, when a since-killed bill to decriminalize sex work was still under consideration: “Not to worry, because we’re replacing that industry with the prostitution industry.”
But through it all, the AI Policy Working Group continued to meet and to study national policy trends while pinpointing both its goals and the risks of potential wording of a new law, and the hard discussions led to the unanimous backing announced Tuesday.
Colorado Chamber of Commerce President/CEO Loren Furman, a member of that policy working group, said leaders managed to talk through initial policy differences and identify pathways of consensus on each of the identified areas. She was particularly happy about several of the solutions, including the assignment of liability and the barring of any new private right of action, and said she looks forward to the framework being introduced as a bill and moving forward in the Legislature.
“We found alignment with divergent groups in a lot of areas,” Furman said Tuesday. “That doesn’t mean there won’t be additional changes in the legislative process. But it’s our hope that the majority or the bulk of the fundamentals stay in there, because it’s taken a lot to get to this point.”
