In the wee hours of Tuesday morning, legislators sent to an eagerly waiting Gov. Jared Polis an artificial-intelligence regulatory bill that was years in the making and that passed the House and Senate by an overwhelming combined vote of 91-7.
For all the laudations that Senate Bill 189 received as it traversed the legislative process in a mere eight days this year, however, it became clear that this measure, which replaces a heavily criticized 2024 law, is the beginning of the debate over AI rules rather than the end. Several technology and business groups said they’ll take their concerns to the rulemaking process that is set to begin soon, and one of the House’s leading progressives said he will be back with a bill next year to put stronger consumer protections into law.
But the policy advanced in SB 189, which Polis has signaled he will sign, is one that groups from business-advocacy organizations to civil-rights defenders to technology associations rallied behind after six months of behind-closed-doors negotiations. It took a signed but yet-to-be-enacted law that required extensive analyses and disclosures from both AI developers and the everyday deployers of the technology and turned it into a bill that requires consumer disclosure when AI is helping to make consequential decisions — and allows negatively affected consumers to seek human review.
Sponsors had to rethink what safeguards are needed on AI

Colorado Senate Majority Leader Robert Rodriguez reads a statement on the Senate floor Thursday about his AI regulatory bill.
Senate Majority Leader Robert Rodriguez, the Denver Democrat who has spearheaded AI regulatory efforts for the past three years, admitted the bill is “not everything I would have wished for” — nor everything that business or consumer groups wanted. But he sees that as the sign of a good compromise policy and reminded senators that SB 189 still is poised to go further than any law in the country in alerting consumers to the presence of AI decision-making without prescribing overly burdensome rules on its use.
“We are staring at the edge of one of the most consequential technical revolutions in history … We need to regulate AI and not have AI regulate us,” Rodriguez told the Senate on Thursday. “This bill is not about stopping innovation. It is about ensuring that innovation serves the people, not the other way around.”
After legislators failed to advance three different bills in 2025 to rein in what everyone from tech leaders to Attorney General Phil Weiser called unworkable regulations in the 2024 law — and delayed implementation from February 1 to June 30 of this year — legislative leaders asked what concerns and worries about AI they needed to prioritize.
First up, explained Democratic Rep. Jennifer Bacon of Denver, an SB 189 cosponsor in her chamber: If AI is used to make consequential decisions, who should be responsible if it gets something wrong or harms someone? What decisions are big enough for this kind of regulation? And how can state law support consumers when AI is being used in consequential decisions that affect them?
What the AI bill does

Colorado House Majority Leader Monica Duran and Rep. Jennifer Bacon present SB 189 to the House Judiciary Committee on Friday.
Recognizing that these topics mattered most helped to move the conversation away from some of the weighty requirements outlined in SB 24-205, such as having developers explain in depth to the AG’s office how their systems were developed and what discrimination risks existed. It led to a decision that developers shouldn’t have to explain to the state’s chief law enforcement officer how their systems work so much as they should have to explain this to the deployers responsible for using them correctly, Bacon told the House Judiciary Committee on Friday.
As such, SB 189 defines automated decision-making technology and defines its consequential decisions as those involving education, employment, housing, financial or lending services, insurance, health-care services or essential government services. It requires that consumers receiving adverse decisions be told that AI was involved, and it allows them to request human review and correction of any false data that contributed to the action. It gives developers and deployers 60 days to cure problems. And, beginning Jan. 1, it permits the AG’s office to enforce violations of the law, allowing it to divide responsibility between developers and deployers based on their actions.
Supporters of the bill, ranging from the ACLU of Colorado to the Colorado Chamber of Commerce, said the bill hits the sweet spot of holding companies responsible for discriminatory actions without overregulating them so much that the growing AI industry will move to other states with fewer rules. Even Sen. Mark Baisley, a Woodland Park Republican who is one of the most anti-regulation activists in the Legislature, applauded that industry leaders “can live with this and can still work in Colorado.”
Tech, labor groups differ on impact of AI bill
“It’s more workable than 205. It better reflects the difference between developers and deployers. And it focuses on consequential decisions,” said Kim Brown Wilmsen, chairwoman of the Colorado Technology Association board of directors.
But while many backers felt the bill struck the right balance between regulation and workability, several other groups, while stopping short of opposing SB 189, said they felt it doesn’t do enough.

Colorado Chamber of Commerce President/CEO Loren Furman listens while Colorado AFL-CIO Executive Director Dennis Dougherty airs his concerns about Senate Bill 189 in a committee hearing on Friday.
Chief among them was the Colorado AFL-CIO, which believes that the lack of discriminatory analyses and comprehensive program explanations in the new bill rolls back needed protections for consumers and workers. It was one of several organizations that asked sponsors, without success, to remove the right to cure that lasts until 2030, saying that negates the state’s ability to hold bad actors accountable.
“This may be the best we can do right now,” executive director Dennis Dougherty told the House Judiciary Committee. “But make no mistake that this is not something that protects Colorado workers properly from harms or should be a model for other states.”
A “hop” forward
And before the final vote on the House floor, Rep. Javier Mabrey, D-Denver, expressed similar sentiments, saying that he is disappointed that consumers and workers do not have the ability through the bill to sue deployers or developers when they are wronged. He lamented the focus of bill writers on ensuring that Colorado maintains a positive atmosphere for AI developers, and he declared the technology is “dangerous” and needs more guardrails.

Colorado state Rep. Javier Mabrey speaks about the AI regulation bill on the House floor Saturday.
“So much of this conversation on this bill and the policy have been about making sure that Colorado is competitive. But this industry openly seems to be more about providing employers opportunities to cut jobs than to create jobs,” Mabrey said. “Make no mistake: We will be back next year, and we will do more to meet the moment.”
Bacon acknowledged there are some more discussions that can be had, particularly around privacy for the data that feeds the AI programs. But for now, she said, she and other sponsors and the wide range of organizations that participated in the working group believe the bill addresses the biggest consumer concerns and the biggest tech-industry concerns simultaneously — a consensus that took years to achieve.
“That (further conversation) has to come. But today we are ensuring that users of this technology are transparent and that people have the opportunity to correct their information,” she said. “This is not a leap. But this is a hop. And it’s in the right direction.”
