Documenting, testing will help businesses navigate AI rules

With no comprehensive AI regulations on the books in the U.S., businesses will likely mitigate AI deployment risks through contractual language.

As AI rules are developed and evolve in the U.S. and abroad, businesses will need to be prepared to document and test their AI systems, as well as send clear messages about those systems' purpose.

The European Union advanced its landmark artificial intelligence regulation, the EU AI Act, earlier this month. The law will require businesses to assess and classify AI systems into risk tiers ranging from minimal risk to unacceptable risk, with obligations that grow stricter as the risk level rises. The U.S. has yet to pass a similar measure, leaving the establishment of AI rules and standards to states, localities, standards bodies and other countries.

A diverse regulatory climate also means businesses will need to cover as many bases as possible in contractual agreements when using AI systems, Gartner analyst Whit Andrews said. Andrews spoke at the Gartner Tech Growth & Innovation Conference in Grapevine, Texas, on Wednesday.

While there isn't a clear set of AI rules for businesses to follow in the U.S., Andrews said CIOs and chief AI officers can implement strict documentation and testing processes to help navigate whatever AI rules a business might encounter.

When laws, precedents and traditions don't cover something, he said, "then they must be established in documentation that does the best that it can."
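What that documentation contains will vary by organization, but as a minimal sketch, it might capture a system's stated purpose, risk classification and test evidence in a structured record. The field names and example values below are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical documentation record for an internally deployed AI system."""
    name: str
    purpose: str                # plain-language statement of intended use
    risk_tier: str              # a tier drawn from the EU AI Act or internal policy
    vendor: str | None = None   # third-party provider, if any
    last_tested: date | None = None
    test_evidence: list[str] = field(default_factory=list)  # summaries or links to test results

# Illustrative example: documenting a customer-facing chatbot before deployment
record = AISystemRecord(
    name="support-chatbot",
    purpose="Answer routine billing questions; escalate everything else to a human agent.",
    risk_tier="limited",
    vendor="ExampleAI Inc.",  # hypothetical vendor
    last_tested=date(2024, 3, 1),
    test_evidence=["Spot-check of 500 sampled answers, reviewed by support leads"],
)
```

A structured record like this is easier to hand to auditors, regulators or contract counterparties than ad hoc notes, which is the point of Andrews' advice to establish things in documentation when laws and precedents don't cover them.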

Lack of AI rules in U.S. will drive contract changes

The EU AI Act gives Congress a temporary reprieve from advancing AI legislation, Andrews said, because many businesses operating in the U.S. also do business in the EU, and compliance work done for the EU market could carry over to their U.S. operations to some degree.

He said it's unlikely that Congress will advance a measure similar to the EU AI Act, meaning that the development of AI rules and standards will fall to industry associations, states and localities, and even contracts negotiated between businesses and AI vendors.

Andrews said he expects to see "that kind of fragmentation and fractalization" of AI rules continue in the U.S. and that "new generations of contractual agreements" will arise to address the use of AI, including generative AI, as well as intellectual property rights.

"It is clear that at a federal level, there is no appetite to establish standards," Andrews said. "That leaves a lot of open space."

Businesses working with government should prioritize transparency

Businesses providing AI services to the federal government will need to focus strongly on AI system documentation and testing, Andrews said. Businesses must also clearly communicate what AI they are using and how they are using it.

Indeed, President Joe Biden's executive order on AI highlighted the necessity of impact assessments for AI systems used by the federal government. Andrews recommended that business leaders follow the NIST AI Risk Management Framework to prepare AI products and services.

"The most important things you can do in preparing to work with the federal government is save your work, document what you're doing, choose a stringency standard or what level of documentation you're establishing, and how you're approaching things from a legal perspective," he said.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
