Developers and deployers of artificial intelligence systems are begging a legislative task force to amend definitions and an “untenable” appeals process in Colorado’s AI law — and getting pushback from groups that feel the law doesn’t go far enough.
The push-and-pull has played out for two months before the Artificial Intelligence Impact Task Force, a 26-person group of elected officials and citizens put together by Gov. Jared Polis after he signed “with reservations” the most comprehensive AI regulatory law in America. With the regulations not taking effect until February 2026, the task force is hearing from the myriad groups affected by them and must submit a report to the Joint Technology Committee by February 2025 recommending any changes to the law.
So far, companies large and small and technology associations have teed up requests centering on definitions in the bill they consider too vague, fearing the language could sweep in everyone from AI developers whose clients alter their systems to companies offering digital coupons for consumer goods. And while the companies making these requests range in size from Amazon down to startups, smaller firms in particular argue that the current law is not just a hindrance but a threat to their existence.
“This is essentially an innovation tax for companies such as us and any others trying to recruit into Colorado,” said Luke Swanson, chief technology officer for Denver-based Ibotta, a digital-promotions company with 850 employees. “If this bill stands as written, it will absolutely cause companies to think twice before expanding into Colorado. We need to balance this regulation with pragmatism.”
Too much regulation of AI or too little?
Without changes to the law, Swanson added, Ibotta likely would focus future hiring outside of Colorado. And Adam Burrows — a co-founder of Range Ventures, a venture-capital fund focused on Colorado tech companies — said he would have to advise portfolio companies developing and using AI to move their headquarters out of the state.
But even as these companies ask for a pullback in regulations, civil- and consumer-rights groups have told the AI Impact Task Force that the current law is strewn with so many loopholes that it could offer only minimal protection against algorithmic discrimination. It does little to require testing of AI programs before rollout, carves out exceptions to what companies must report and still lets AI systems make consequential decisions without the people affected knowing a computer made them, witnesses have said.
And Senate Majority Leader Robert Rodriguez, the Denver Democrat who sponsored the AI bill and who chairs the task force, expressed frustration on Oct. 21 about companies calling for so many regulatory rollbacks.
“It’s always ‘The world is falling,’” Rodriguez said after listening to a litany of concerns at the most recent meeting. “I’m concerned about businesses coming here or not … But this is basically a get-out-of-jail-free card with a rebuttable presumption of defense. And yet it’s still too burdensome?”
“Leading the way” in AI regulation
The law, passed on the final day of the 2024 legislative session, sets guardrails to prevent AI programs that make substantial decisions in areas like health care or employee hiring and evaluation from using discriminatory algorithms. Among other things, it requires developers to make available to deployers an explanation of system inputs, mandates that developers report credible incidents of algorithmic discrimination to the Colorado Attorney General and gives consumers the right to appeal AI-made decisions.
While other states have passed AI regulations addressing deepfake images or copyright concerns, none has enacted protections as extensive as Colorado’s, said Meghan Pensyl, policy director for BSA | The Software Alliance.
“Colorado and the European Union are really leading the way in passing significant pieces of AI regulation,” Pensyl told the AI Impact Task Force on Sept. 16.
And it’s that extensive level of regulation that is rattling companies in the space and leading business executives to call for a rollback in rules before Colorado begins to scare off employers and lose its place at the head of the innovation economy.
Some businesses already pulling back AI usage
According to a September survey released by law firm Littler, about two-thirds of C-suite executives said their organizations now use generative or predictive AI in human-resources functions, including to create HR materials, recruit workers and source candidates. But 84% said they are concerned about litigation over the use of AI in such functions, and 73% said they are already scaling back use of these tools because of regulatory uncertainty.
Zoe Argento, co-chair of the firm’s privacy and data-security practice group, said she is already seeing companies pull back on using AI in states that have passed tough regulations, and she called Colorado’s law “uniquely burdensome.” She cited provisions such as the law’s four separate deployer-notice requirements, each containing ambiguous terms, and said it will “greatly discourage” employers from using AI tools.
Business leaders who have appeared before the AI Impact Task Force say this is problematic not only because well-developed AI can make companies run more efficiently but because discouraging AI use will hurt the many Colorado companies working in the space. To thread the needle between regulating potentially discriminatory practices and quashing innovation, they say, changes are needed before the law takes effect.
Because Colorado’s law regulates only high-risk systems tasked with decision-making, rather than generative systems like ChatGPT, legislators need to clean up the law’s definitions and narrow them to the actors and actions they want to keep in check, several industry officials said.
Requested changes to AI law
Right now, for example, the law’s definition of “developer” includes all entities that develop AI systems, not just those developing high-risk systems, Pensyl said. And the way it defines potentially discriminatory AI systems creates a risk that developers could be held liable even if the deployers who purchase their systems make consequential changes to them after being advised not to, she said.
Because high-risk systems are also defined as those in which AI plays a “substantial factor” in making decisions, that definition too must be adjusted, said Simon Morrison, who leads Amazon’s AI and privacy-policy efforts. The law currently defines “substantial factor” broadly enough to include decisions in which AI played some role but not a major one, and that needs modification, he told the task force.
The provision drawing the most criticism concerns consumers’ right to appeal decisions made in substantial part by AI programs and to demand from companies the reasons for unsatisfactory decisions and the inputs that produced them. Burrows and Ruthie Barko, regional director for industry group TechNet, called this “untenable” because of the labor it would take to handle every appeal, and they asked legislators to clarify that such appeals must be directed to the attorney general’s office instead.
Is law seeking to create broader rights?
Seth Sternberg, CEO of Boulder-based Honor Technology, noted that companies that reject job applicants without using AI don’t face appeals in which they must detail the inputs used to train their human managers. He asked at one point whether the new law is creating rights that don’t now exist and that could affect companies well beyond the AI space.
“To drive AI growth and to foster economic security, the U.S. must remain a place of tech innovation,” said John Nordmark, co-founder of enterprise platform Iterate.AI, adding that the law’s regulations would stifle innovation, particularly for small firms. He advocated raising the threshold at which firms are regulated by the law, now set at any company with at least 50 employees.
To ACLU Colorado senior policy strategist Anaya Robinson, however, the law as enacted already has too many loopholes. Robinson protested that Senate Bill 205 includes both an affirmative defense for companies trying to follow the law and a rebuttable presumption that, he said, prevents consumers from seeking effective recourse when they believe they have been victims of discrimination.
Because many consumers don’t know when an AI system denied them a medical procedure or booted them from a pool of job applicants, Colorado must require more up-front transparency about who is employing AI, said Matt Scherer, senior policy counsel for the Center for Democracy & Technology. Yet companies are seeking exemptions, via changes to the “substantial factor” definition and carve-outs for trade secrets, that would let them wriggle out of transparency requirements, he asserted.
“We on the outside would have no idea what is going on unless we force the companies to tell us what is going on,” Scherer said.
A battle over vague definitions
To that end, Robinson asked for the elimination of exemptions in the definition of “high-risk AI systems,” essentially pushing legislators to broaden the language to cover more systems. And he asked that AI deployers be required to explain to appealing consumers the reasoning behind not only adverse decisions but also favorable ones, so consumers can understand algorithms that Brown University professor Suresh Venkatasubramanian said are too often mathematically indefensible.
During the Oct. 21 task force meeting, an exchange between Scherer and Ibotta’s Swanson showed how widely two people’s interpretations of the law’s definitions can differ. Scherer said the “high-risk” decisions covered by the law clearly do not include Ibotta’s digital discount offers; Swanson countered that the language is so ambiguous that people who don’t receive an offer could accuse the company’s AI system of discriminating against them in financial matters.
Such debates have made it clear that the task force has a lot of work left to do to find consensus methods for implementing the law without undercutting Polis’ efforts to attract AI firms to the state as part of his push to grow the innovation economy. It will meet next on Nov. 13.
“I think a lot of responsibility rests on what we’re going to accomplish here,” said Rep. Brianna Titone, the Arvada Democrat who serves as committee vice chair. “We don’t want to stifle innovation … Can we make sure we can eliminate those bad actors and things that can go awry and make sure we have the best technologies going forward?”