The explosion of generative artificial intelligence tools in 2023 captured Congress' and the Biden administration's attention and could eventually lead to legislation targeting AI-generated content and misinformation in 2024.
Though AI has been around for years, popular tools such as ChatGPT, introduced at the end of 2022, made generative AI capabilities such as creating text, video and images from prompts more accessible to the public. The technology also raised concerns that generative AI could propel misinformation, an issue that has plagued social media platforms for years. Indeed, Congress' failure to pass legislation targeting social media platforms has allowed misinformation problems to proliferate with the rise of AI-generated content, Sen. Richard Blumenthal (D-Conn.) said during a hearing this year.
President Joe Biden issued an executive order on AI in October that could lay the groundwork for future AI legislation. In the order, Biden tasked entities including NIST with creating standards for extensive AI safety testing, and the Department of Commerce with developing standards for content authentication and watermarking to label or identify content as AI-generated.
Gartner analyst Avivah Litan said there is potential for Congress to act on misinformation if the White House develops standards for AI content authentication and watermarking.
Misinformation a bipartisan concern
If the U.S. government successfully develops content authentication and watermarking standards, social media platforms and other tech giants could start voluntarily following the standards for content authentication, Litan said.
However, the federal government lacks enforcement mechanisms for such standards unless Congress takes action -- something Litan thinks could happen regarding AI-generated content and misinformation.
"It's a bipartisan issue," she said. "If you do have a standard that they can say, 'All the social media platforms have to adopt this standard,' I think that there would be wide consensus to do that."
AI will "challenge us as a society to respond," said Tom Wheeler, visiting fellow at the Brookings Institution Center for Technology Innovation. He said he hopes generative AI will serve as the impetus for Congress to act, since the technology is only exacerbating misinformation, data privacy and competition issues in digital markets.
Biden order starts move from recommendations to enforcement
While Biden's executive order mostly applies to federal agencies or vendors that work with federal agencies, it takes a step toward enforceable action by requiring businesses that develop powerful AI models to submit AI safety test results to the government under the Defense Production Act (DPA). The DPA is a federal law that gives the president emergency authority to exert control over industries for national security purposes.
Biden's order marked the most significant effort on enforceable actions around AI in 2023 in the U.S., Litan said.
Congress has held a number of hearings on AI risks, but "the only thing that I've seen that has any meat in it from a U.S. point of view at the federal level is the Biden executive order," she said.
Indeed, Wheeler said one of the most interesting facets of the Biden administration's response to AI has been its evolution from recommendations such as the Blueprint for an AI Bill of Rights to the executive order, which implements the limited use of presidential authority for enforcement. Wheeler said the same evolution could happen within Congress.
"We've moved across a spectrum of unenforceable activities to land on the Defense Production Act and at least some enforceable activities there," he said. "I think we're going through a similar process in Congress."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.