White House receives feedback on national AI priorities

The White House plans to use public input on national AI priorities to guide the development of a comprehensive national AI strategy addressing the technology's risks and benefits.

Stakeholders are weighing in on the White House Office of Science and Technology Policy's request for information on establishing national priorities for artificial intelligence.

Responses run the gamut from adopting AI regulation, creating a centralized oversight agency and coordinating government enforcement action to a more hands-off reliance on businesses to build and deploy trustworthy AI systems.

The White House has already taken steps such as releasing a set of guidelines, the Blueprint for an AI Bill of Rights, for building trustworthy AI, and securing commitments from a handful of companies to test their AI models before deployment and to watermark AI-generated content. But the RFI aims to help the Biden administration build on those steps and develop a more comprehensive national AI strategy addressing AI's risks and benefits.

The Center for AI and Digital Policy (CAIDP), a nonprofit research organization, in its comments recommends collaborating with international partners on AI standards; prohibiting the release of certain AI systems, such as those used for mass facial surveillance; and adopting legal standards to ensure AI systems are developed with safety in mind.

"We need regulation, and we need regulation in a way that implements the U.S.'s existing commitments and existing policy work," said Merve Hickok, CAIDP president and research director. "The Blueprint for an AI Bill of Rights is great policy work, but we need evidence of a legal framework."

Others advocate government intervention only for specific use cases or known risks, rather than establishing broad, overarching rules for AI systems.

Federation of American Scientists: Nonprofit global policy think tank

The Federation of American Scientists recommends that federal agencies work with the White House to develop a pre-deployment risk assessment protocol for "powerful and unprecedented frontier AI models."

The think tank noted the limits of voluntary risk management frameworks, such as the National Institute of Standards and Technology's AI Risk Management Framework, and argued that such risk assessments should instead be mandatory for AI models that receive federal funding.

"The goal of this protocol is to rigorously analyze frontier AI models for potential risks, vulnerabilities, and misuse scenarios before deployment," according to the organization's comments. "Implementation of such a system would serve as a critical safety practice within our national AI strategy."

MITRE: Nonprofit

The Biden administration needs an overarching vision for AI in the U.S., along with a series of goals to support that vision, according to MITRE. It should also focus on voluntary collaboration from a range of entities across the federal government and the private and public sectors, with a single entity facilitating coordination between the government and those sectors.

If AI regulation does advance, MITRE argues, it should provide a clear and consistent definition of AI, be scalable and combine voluntary self-regulation with government-mandated policies.

"Any attempt to secure or regulate a new technology should be informed by its vulnerabilities, threats that exploit those vulnerabilities either intentionally or unintentionally, and the ultimate risk of damage, harm, or loss of human life, health, property, or the environment," according to MITRE's comments.

IBM: Technology company

IBM's view is that companies developing or deploying AI systems should implement strong internal governance, such as an overarching AI ethics board, to ensure their systems are safe.

If the federal government implements AI regulations, IBM said, they should target specific use cases rather than the technology itself. That could include setting different rules for different levels of risk posed by AI systems.

"A chatbot that can share restaurant recommendations or draft an email has different impacts on society than a system that supports decisions on credit, housing or employment," according to IBM's comments.

Business Roundtable: Nonprofit lobbying association

Business Roundtable, an association of CEOs from across various industries, puts the onus on companies to develop responsible AI systems. That would include implementing safeguards against unfair bias, disclosing when users are interacting with AI systems and explaining the relationship between an AI system's inputs and outputs, particularly for systems with high-consequence outcomes.

The organization also advocates for AI regulation that is use case specific rather than broadly targeting AI systems.

"Requirements should be outcome-focused and take a risk-based approach to avoid over-regulating uses of AI which have no significant impact on individuals or do not pose potential for societal harm," according to the organization's comments.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
