While setting guardrails around artificial intelligence is something Congress continues to grapple with, states are crafting standards and rules for businesses using the rapidly evolving technology.
Multiple states, including California, Illinois, Texas and Colorado, have introduced or passed laws focused on protecting consumers from harms caused by AI. Though businesses, insurance companies and government agencies have used AI for years in hiring, lending and housing decisions, concerns about the technology making biased and discriminatory decisions are rampant.
Indeed, research has shown that algorithms and automated decision-making tools can result in discriminatory decisions. A 2020 study published in the Journal of General Internal Medicine showed that an AI algorithm decreased transplant referral opportunities for Black patients. Another report from ProPublica in 2016 revealed that AI software used to conduct risk assessments on criminal offenders in Broward County, Fla., was more likely to falsely identify Black defendants as likely to commit future crimes.
Congress has yet to advance any legislation to regulate AI and is now grappling with new challenges posed by the rise in popularity of OpenAI's ChatGPT. Policymakers have held several hearings this year to better understand the risks of such technology, which range from copyright infringement to the spread of AI-generated misinformation.
In a similar pattern to how states are acting on data privacy without the overarching guidance of a federal data privacy law, states are addressing concerns around AI with their own rules as Congress stalls.
States take action
Colorado is honing rules for predictive algorithms used by insurance companies that could pave the way for future regulations of AI use in other industries.
The Colorado General Assembly passed a bill in 2021 restricting insurers' use of external consumer data such as educational background, credit scores, and social media and online purchasing habits, as well as algorithms and predictive models that use such data and unfairly discriminate based on race, religion, sex and other factors. The Colorado Division of Insurance is tasked with adopting rules for specific types of insurance and in February released the first installment of draft regulations for life insurance companies. The draft rules specify governance and risk management framework requirements for predictive models used by insurers.
The state allowed public comment on the draft rules, and now companies are waiting on the final proposal, said Mary Jane Wilson-Bilik, a partner at law firm Eversheds Sutherland. The state also plans to release a separate draft of rules later this year targeting bias testing in AI systems, which Wilson-Bilik said will be significant.
How to test AI systems for bias and discrimination is something regulators beyond Colorado are struggling to figure out -- even regulators in the European Union are wrestling with how to set such standards in the AI Act.
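One reason the question is hard is that even the simplest bias metrics require choices about groups, thresholds and outcomes. A minimal sketch of one widely cited benchmark -- the "four-fifths rule" (adverse-impact ratio) long used in U.S. employment contexts -- illustrates the basic mechanics; the group labels and decision data below are hypothetical, and real bias audits involve many more metrics than this one ratio:

```python
# Sketch of an adverse-impact ("four-fifths rule") check.
# All data here is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of a group that received a favorable decision (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference
    group's. By convention, values below 0.8 are a red flag."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical model decisions for two applicant groups
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # protected group: 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # reference group: 70% approved

ratio = adverse_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.43 -- well below the 0.8 threshold
```

Even this toy check leaves open the questions regulators are debating: which groups to compare, what counts as a favorable outcome, and what threshold should trigger scrutiny.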
"They are really looked to as a bellwether for regulation on this issue nationwide," she said of Colorado. "Not just in insurance, but with other sectors as well."
Meanwhile, California State Assemblymember Rebecca Bauer-Kahan introduced a bill earlier this year to regulate automated decision-making tools. The bill would require anyone using automated decision-making tools to notify individuals about their use and would prohibit tools causing algorithmic discrimination. Similarly, councilmembers in Washington, D.C., introduced a bill requiring companies to assess any AI algorithms in use for signs of bias.
When Congress will act
Wilson-Bilik said she believes federal policymakers recognize the potential harms, as evidenced by NIST's AI Risk Management Framework and the White House's Blueprint for an AI Bill of Rights. However, she said regulators are trying to be thoughtful about how to regulate without crushing innovation.
"You have a lot of ways in which the same principles are being articulated," she said. "It's now a question of putting all that into action."
Along with releasing guidance on implementing AI, several enforcement agencies, including the Federal Trade Commission, have indicated that existing consumer protection laws will be applied to businesses using AI systems until new AI regulations are proposed -- something several U.S. senators have called for. The U.S. Department of Commerce also recently released a request for information on AI accountability measures.
Indeed, AI regulation has become a hot topic, said Cameron Kerry, a global thought leader on AI and a visiting fellow at the Brookings Institution.
"There's broadly a lot of interest in ways of putting some guardrails around it in terms of accountability, transparency and doing things like risk assessments, measuring performance and outcomes," he said.
However, Kerry said that while federal AI rules are a possibility, he's not surprised to see states acting on the issue in the meantime, and he expects both the federal and state governments to keep creating AI rules.
"I think we'll see both," he said. "I believe in strong and consistent national regulation, but there are aspects of this that have historically been state rules. Employment, insurance -- these have mostly been regulated at the state and local levels."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.