Lawmakers concerned about deepfake AI's election impact

Lawmakers want Congress to intervene and tackle AI manipulations that could affect U.S. elections. However, legislation has yet to advance to the House or Senate floor.

Deepfake AI and the ability to easily replicate political candidates' voices and faces have U.S. lawmakers on edge as the 2024 presidential election approaches.

Deepfake AI is a type of technology used to create realistic but fraudulent voices, videos and photos. While policymakers have introduced bills targeting AI and deepfakes -- including the bipartisan No Fakes Act, which seeks to set federal rules on how a person's voice, name or face can be used -- none has advanced to the House or Senate floor.

At a Senate subcommittee hearing on April 16 weighing the risks of deepfake AI and its impact on elections, Sen. Richard Blumenthal (D-Conn.) said the threat of political deepfakes is real and Congress needs to take action and "stop this AI nightmare." Bad actors are already using AI to spread misinformation about candidates, notably about President Joe Biden. In January, thousands of New Hampshire voters received robocalls impersonating Biden and telling them not to vote in the state's primary election.

Beyond voice cloning, Blumenthal, chair of the Subcommittee on Privacy, Technology and the Law, said deepfake images and videos are "disturbingly easy" for anyone to create.

"A deluge of deception, disinformation and deepfakes are about to descend on the American public," he said. "The form of their arrival will be political ads and other forms of disinformation that are made possible by artificial intelligence. There is a clear and present danger to our democracy."

Lawmakers worry about foreign and local bad actors

National security leaders have long shared their fears about deepfake AI and foreign disinformation's impact on elections, something Microsoft verified earlier this month, Blumenthal said.

Microsoft released a report showing that members of the Chinese Communist Party are using fake social media accounts to generate divisive content and possibly influence the U.S. presidential election. The report also found increased use of Chinese AI-generated content encouraging disruptive discussion on numerous topics, including the Maui wildfires in August 2023.

"When the American people can no longer recognize fact from fiction, it will be impossible to have a democracy," Blumenthal said.

The spread of misinformation and the use of deepfake AI are also concerning on a smaller scale, not just when created by foreign bad actors or targeting global figures, he added.

While deepfakes targeting widely recognized figures, like the Biden impersonation, gain enough attention to be caught and called out, Blumenthal said, deepfakes in local and state elections are harder to catch. As local news outlets decline, he said, there will likely be fewer reputable sources fact-checking statements, photos or videos about candidates.

David Scanlan, the New Hampshire secretary of state who helped stem the effects of the AI-generated Biden robocalls in January, said during the hearing that what concerned him most about the incident was how easily a random member of the public created the call. A New Orleans-based magician named Paul Carpenter made the call after Democratic political consultant Steve Kramer paid him to do so.

"If you add what happened with video to go along with that, you could show candidates in compromising situations that never existed, it could be a state election official giving misinformation about elections and worse," Scanlan said. "To me, that is incredibly problematic."

The deepfake AI issue is not confined to one political party or one primary election, but has affected multiple candidates in multiple elections across the country. That points to the need for Congress to intervene, said Sen. Josh Hawley (R-Mo.), ranking member of the Senate subcommittee.

"The dangers of this technology without guardrails and without safety features are becoming painfully apparent," he said.

Tackling deepfakes involves watermarking, platform responsibility

Zohaib Ahmed, CEO and co-founder of Resemble AI, testified at the hearing that clearly labeling AI-generated content will be necessary to prevent future harm from deepfake AI technology. Resemble AI builds AI voice generators for businesses as well as deepfake audio detection products.

Ahmed said Congress should adopt rules requiring platforms to use AI watermarking or deepfake detection technology that lets users determine whether content is real or fake. He also recommended establishing a certification program to vet such technologies and ensure their accuracy.

"AI watermarking technology is a readily available solution that can already check the integrity of audio content," he said.

Congress needs to pass legislation making content platforms responsible for detecting and removing deepfakes and AI-generated media, said Ben Colman, CEO and co-founder of deepfake detection company Reality Defender, during the hearing. Colman commended the bipartisan Protect Elections From Deceptive AI Act, which would prohibit using AI to generate deceptive content about candidates.

"AI developments move fast -- legislation must move faster," he said.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
