Federal agencies that want to procure AI tools need help understanding what they can safely buy. Yet the Office of Management and Budget's draft policy for advancing federal AI use includes vague terms and definitions that might make federal AI procurement more difficult.
That's according to a panel of experts testifying Wednesday before the House Subcommittee on Cybersecurity, Information Technology and Government Innovation. President Joe Biden signed an executive order on AI in October, and the Office of Management and Budget (OMB) soon followed with draft implementation guidance for federal agencies. The draft OMB guidance, which builds on the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, mandates evaluation, monitoring and risk mitigation practices for federal agencies wanting to adopt AI, including generative AI tools.
Among the challenges federal agencies would face under the draft OMB guidance are a lack of clarity around determining whether an AI system affects human rights or safety, as well as continued inaction from Congress on establishing U.S. AI policy. Biden's order and the OMB guidance for federal agencies "[do] not replace the need for Congressional action on AI," said Kate Goodloe, managing director of The Software Alliance and a witness during the hearing. Without Congressional action, the draft OMB guidance simply builds on the AI Risk Management Framework rather than mandating that federal agencies follow its recommendations.
"First, Congress should pass legislation that ensures the NIST framework guides the government's use and procurement of AI systems," she said. "Second, Congress should enact legislation that establishes new safeguards for private sector companies that develop and deploy high risk AI."
Goodloe said that while Congressional action on AI is imperative, adding clarity to the draft OMB guidance to assist federal agencies in procuring and implementing commercial AI products is also critical.
"The government should be encouraging agencies to buy commercially available products, which are historically subject to lower failure rates. They are not likely to go obsolete. They're easier to update and therefore less vulnerable to threats, and often less expensive than products that are made specifically for the government," she said.
OMB guidance needs clarity for AI procurement
The OMB guidance could inadvertently keep AI companies from working with the public sector without further clarity, said Ross Nodurft, executive director of the Alliance for Digital Innovation and a witness during the hearing.
"The OMB memo creates a series of fractured and unevenly administered new processes across departments and agencies that will deter many companies, including small and midsize technology companies, from working with the federal government," he said.
One of the mandates outlined in the guidance is "the implementation of specific safeguards for uses of AI that impact the rights and safety of the public." Nodurft said instead of broadly recommending safeguards to address both rights and safety issues, OMB needs to outline specifications for determining which technologies affect rights and which affect safety.
Nodurft argued that agencies need to use current government procurement processes for AI before producing plans to comply with the mandates set out in the executive order and draft OMB guidance.
"We cannot have agencies trying to implement the new processes without fully considering how they fit into current technology and security governance regimes [as well as] how they're optimized for AI adoption," he said.
The order tasks different agencies with defining rules for AI, while the OMB memo requires agencies to develop individual AI strategies. The guidance needs to apply consistently across agencies, Goodloe said.
"The need to coordinate those to ensure we have a harmonized policy across agencies is imperative," she said.
Planning for vendors
For AI vendors seeking federal government contracts that meet the executive order and draft OMB guidance requirements for AI procurement, it's critical to outline who will be providing human oversight of the technology, said Morgan Reed, president of The App Association, during the GovAI Summit this week.
"The most important mandate you are going to see on every policy statement -- whether it's domestic, in the European Union or at the state level -- is going to be about what is the human role in AI you are deploying," he said. "That is the No. 1 issue."
Along with outlining human oversight, Reed said vendors must provide ways to override AI decisions.

Reed also said AI system developers must demonstrate quality management and show how their AI models will protect data privacy and avoid bias and discrimination.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.