Nine months: that's all the time left before companies have to start complying with Colorado's first-in-the-nation anti-discrimination law for AI systems, unless policymakers act.
Business and industry groups have been begging for a delay. They say the law as it stands is unworkable, and they're urging Colorado's lawmakers to give all sides more time to find a compromise.
But consumer rights advocates say AI's rapid spread into more and more areas of life makes it critical to put guardrails on how the technology works. Many advocates for the law also feel some in the tech industry won't be satisfied with anything short of a full capitulation on the policy's most meaningful consumer protections.
And after a dramatic end to the legislative session, when the Senate Majority Leader introduced and then pulled what was intended to be a compromise bill, there's an impasse on how to move forward. Business groups and some universities and schools are aligned right now: they want Colorado lawmakers to push back the implementation date first.
The process to revise the law began the moment it was signed, but so far hasn't panned out
Over the past year, a task force made up of business, tech, labor, consumer and privacy experts tried to develop compromise policies around the various elements of the AI law. The goal was to introduce revisions in the 2025 legislative session. But the session came and went without lawmakers approving anything.
A business-led effort to delay the law鈥檚 implementation by roughly a year also failed. Now Gov. Jared Polis says he backs a federal moratorium on all state AI laws, which would effectively make Colorado鈥檚 law moot.
Colorado's law applies to education, banking, hiring and healthcare; companies and other entities must notify people when an AI system is being used in decisions about their lives. It also lets people correct data and appeal a potentially adverse AI decision to a real person. Entities using AI would have to publish the types of AI systems they use and how they manage any known risks. The Colorado Attorney General's office would enforce any violations, but the law doesn't give people a new private right of action to sue.
Margot Kaminski is a University of Colorado law professor who focuses on AI law. She served on the state's AI impact task force, and said while there's been a lot of bipartisan energy nationally around data privacy and AI, Colorado's approach is different because it's an anti-discrimination law.
"It's about using technologies in ways that discriminate against people on the basis of their membership in a historically protected class and legally protected class," she said.
"It obliges companies to do some risk mitigation around having systems that are potentially discriminatory… And the right to appeal an AI decision, to my knowledge, is the first time that this has been enacted in the country," added Kaminski.
When he signed the original bill, Polis asked lawmakers to focus their regulations on the software developers who create AI systems, rather than small companies that use them. Polis also warned that, without changes, the law could be used to target those employing AI even when its application is not intentionally discriminatory.
Task force member Vivek Krishnamurthy, a CU law professor who focuses on technology law and AI system regulations, said that throughout the months of meetings, there was always a fundamental disagreement among members about whether or not a law like Colorado's was needed.
He characterized the industry's opposition as "'we have anti-discrimination laws already. Why do we need something here?'"
Krishnamurthy, though, is on the side that believes specific policies are necessary. He doesn't think current anti-discrimination laws, which are based on the prejudices a human might hold against someone else, apply to automated systems that humans build.
"They are built in particular ways and given historic data to train upon. And unless we have some transparency into how the systems are built, some transparency into when they're used, and some rules of the road for how you build these things, how you test them, and how you monitor them, it seems very unlikely to me that the existing laws that we have that prohibit discrimination are going to be effective."
Krishnamurthy said by the time the task force wrapped up months of discussions, it hadn't even really addressed the thorniest issues on compliance and disclosure about AI system use.
"There was a lot of fear about just the scope of the application of the law that derailed some of the conversations that we could have had about how to tailor different parts of it in ways that might be less burdensome, to sort of increase the runway for companies to comply over time."
Business and other government entities push back on scope of law
The Colorado Technology Association, a key negotiator on the task force, says it supports the intent of the law but objects to its vague wording and broad scope.
"We have been pushing for changes to our serious concerns with this law for the past year, and it's very unfortunate we're here at this stage and having to talk about, how do we get back together this summer to extend this date," said CTA President Brittany Morris Saunders.
She also emphasized that companies using AI are already trying to comply with current anti-discrimination laws. "No employer is going to deploy a system unless it is not resulting in bias or discrimination."
Opponents have gained some potentially influential allies in the form of Colorado鈥檚 institutions of higher education as well as K-12 schools, who say they weren鈥檛 involved in discussions about the policy and have asked for a delay.
In two different letters to lawmakers, the education community warned the law could be costly to comply with and might affect their students in unintended ways.
"If we had been consulted as stakeholders, we would have noted that the law could limit the ability of our students to embrace new technology in the classroom and then launch their careers in Colorado," states a letter signed by CU, the Colorado Community College System, Colorado State University, University of Northern Colorado, Colorado Mesa University, Colorado School of Mines, MSU-Denver and Western Colorado University. The letter said the law would stifle research and innovation and put faculty, students and graduates at a disadvantage compared to their peers in other states.
Separately, some K-12 school groups said compliance would put pressure on already strained school budgets and create costly and unexpected problems for teachers and students.
For supporters of the law, the question of whether historic biases are creeping into AI systems grows more urgent as more entities use the technology to help them sift through data and resumes. Many opponents counter that AI isn't new and has been used for years.
Chris Erickson, a co-founder of Range Ventures, a Colorado-based early-stage venture capital fund that focuses on local startups, uses a restaurant as an example of one problem he sees with the new law: it empowers people to question any decision in which AI was used, even if the technology worked properly.
"You've put out a job for a hostess or something, you get a hundred applicants. You only hire one. Now 99 people get a right of explanation as to why they didn't get the job. Ninety-nine people could appeal the decision for a job that's already been filled. And so that's not really at all about AI, that's really about disclosures and hiring and a bunch of other things that actually have nothing to do with underlying technology."
The right to question decisions involving AI also troubles Bryan Leach, the founder and CEO of Denver-based Ibotta, a mobile tech company. He says that, as the law is written, many companies would be on the hook for things far outside their control or knowledge.
"Even if they didn't develop the software, even if they're just a restaurant or a plumbing company, they have to get an individualized explanation of what data was considered regarding [the person], all the different steps that were taken in mitigation to make sure that this disparate impact didn't happen."
Some in the tech industry, like Erickson, also think the law's definition of AI is too broad, because it includes data crunching and sorting programs. He'd prefer a narrower definition that's limited to generative AI, the kinds of programs that can create content and chat with people.
"I'm not saying that we are against being thoughtful about putting in some guardrails right now as this technology is being developed. A notification to people that they are interacting with an AI system where they expected to interact with a human, I actually think that that's a thoughtful place to start with this," he said.
However, Colorado's law is much broader, and narrowing it to something like what Erickson is suggesting is a non-starter for consumer and labor advocates.
"We're not opposed to any changes, we just want to make sure that the core protections for consumers aren't weakened, or loopholes or exemptions aren't added that would make the bill toothless," said Kara Williams, a law fellow with the Electronic Privacy Information Center. Her organization has been involved in negotiations.
'There needs to be greater transparency'
Matt Scherer served on the state AI task force and leads the workers' rights project at the Center for Democracy and Technology. He feels labor and consumer groups have been negotiating in good faith and have already made a lot of concessions to industry. He said efforts to delay Colorado's law and push for a federal moratorium on state AI laws make him doubt the industry's willingness to accept any regulations.
"That kind of just shows that there's not any desire for there to be meaningful guardrails or accountability on AI."
CPR spoke with representatives from two large global corporations several months ago who said they believed their companies could comply with Colorado's law as written. But business opponents have generally had the loudest voices in this debate, leaving the defense of the law mostly in the hands of consumer advocates and progressive policymakers.
Scherer said that, as is often the case in policy, the loudest voices in the room drive the discussion.
"We are deeply concerned about the widespread use of AI in decisions that are affecting untold numbers of workers and consumers, and that there needs to be greater transparency around how companies are using these tools in those types of decisions."
Democratic Senate Majority Leader Robert Rodriguez has been leading the negotiations on the AI law. After the session ended without any progress, he said he was committed to further discussions.
He's faced criticism from the business community for dropping his bill so late in the session, and for his unwillingness to extend the law's implementation deadline or fully address their concerns. But at the end of the session, Rodriguez pushed back on how some opponents have framed the law, saying it wouldn't punish entities that are making an effort to comply.
"At the core of the bill, it was just care, and try" to prevent harm from AI, he said. "And that's the frustrating part that gets lost."
Rodriguez told CPR News that while he did feel he made compromises, his priority is to the public, not companies or the tech industry.
"I have a line of being the only person in the country with this policy, and I don't want to be the person that sets up a model for the country that does not protect consumers," he said.