
AI, the 2024 U.S. election and the spread of disinformation

Deepfakes fueled by generative AI could interfere with the November election, given the technology's power and ease of use. The outlook for regulation seems dim.

With the advent of generative AI, the 2024 elections have taken on a new dimension.

Generative AI systems and other advanced AI technology have made it easier than ever to create deepfakes and spread misinformation.

Most recently, the events before the New Hampshire primaries in January offered a preview of how much generative AI technology could affect the run-up to the U.S. general election in November and other elections around the world this year.

Voters in New Hampshire received calls from a voice that sounded like President Joe Biden telling them to stay home and not vote in the primary.

The AI-generated robocalls might have been created using voice cloning technology from AI voice vendor ElevenLabs, according to recent reports.

Previous elections and AI

Election interference using technology is not new.

For example, the 2016 general election between Hillary Clinton and Donald Trump was filled with fake digital media campaigns centering on each candidate. Among the more notable of these was the "Pizzagate" conspiracy that went viral, claiming that a pedophile ring linked to the Democratic Party had been found by police.

While Pizzagate did not involve the use of AI, it showed the ease with which disinformation and misinformation can spread on social media.

Since Pizzagate, deepfake videos -- fake, AI-generated video footage of an individual or group -- have been widely used to impersonate politicians, including former U.S. President Barack Obama, President Joe Biden and Trump.

"AI is the invisible hand of political campaigns today," RPA2AI analyst Kashyap Kompella said. "When generative AI meets the wide reach of social media networks, misinformation spreads like wildfire."

The age of generative AI is marked by just how easy it is for users to create deepfakes or other artificially generated images or audio tracks of candidates.

"I don't have to have a team of developers or data scientists or people who know these technologies," said Rebecca Wettemann, an analyst at Valoir. "It's much more sort of a point-and-click to be able to take text, take a voice and translate that."

With the profusion of generative models, AI technology has also improved dramatically over the past 12 to 18 months, she noted.

"It's very difficult to distinguish real from fake," she said.

Large language models and elections

While the use of AI tools to generate false information will be a threat leading up to the election, the bigger threat is the large language models (LLMs) fueling those tools. LLMs often spew false information and hallucinations.

In a recent research paper, the Berryville Institute of Machine Learning (BIML) identified 23 risks in the architecture of closed LLMs and foundation models that threaten companies, election integrity and democracy.

A key risk in black box LLMs such as ChatGPT or Google Bard is recursive pollution, which occurs when bad output from an LLM is pumped back into the training data pool, producing future LLMs trained on that same polluted data, according to BIML.

"It creates more wrong and puts it back into the internet," BIML founder Gary McGraw said. "Later, it gets eaten again, either by itself or by another LLM. Our concern is that's going to happen around the worldwide elections in 2024."

Image caption: U.S. President Joe Biden and former U.S. President Donald Trump, generated by the AI image system Stable Diffusion.

Since LLMs are built from trillions of data points scraped from around the web, they contain accurate data as well as misinformation. In the case of the election, the models can be filled with misinformation about candidates and current events.

"If they've got these weird associations that they scraped off the internet somewhere with regard to misinformation or elections or conspiracy theories or all that stuff, it's in there somewhere," McGraw said. "Tickling that content that's built into the model gives you information or conspiracy theories or alternative facts, just wrong stuff. It's going to be easy for somebody who wants to abuse the generative model for propaganda."

While generative AI creators tend to use data labeling companies like Sama to try to annotate or remove toxic datasets, the models are too large -- 14 trillion parameters or more -- to cleanse completely, McGraw added.

"The models are so incredibly large that nobody can check the datasets," he said.

Bad actors and moving targets

The other big problem is that AI technology gives bad actors the ability to spread misinformation and disinformation at a scale that has not been seen before, said Neil Johnson, professor of physics at George Washington University.

In newly published research on bad actor AI activity online in 2024, Johnson and his team predicted that such activity will escalate by midyear and exacerbate the online threat during the election.

Bad actors only need basic GPT-2 systems to accomplish this, not even advanced ones like GPT-3 or GPT-4.
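
As an illustration of how low that barrier is, the snippet below runs an off-the-shelf GPT-2 model through the Hugging Face transformers pipeline in a few lines. The prompt is a neutral placeholder chosen for illustration; the point is only that generating fluent text at volume requires no specialized team or infrastructure.

```python
# A minimal sketch: generating text with a basic, off-the-shelf GPT-2 model
# using the Hugging Face transformers library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

results = generator(
    "The town council meeting this week focused on",  # neutral placeholder prompt
    max_new_tokens=60,       # keep the generated continuations short
    do_sample=True,          # sample so the variants differ from one another
    num_return_sequences=3,  # produce several variants in one call
)

for i, item in enumerate(results, start=1):
    print(f"--- variant {i} ---")
    print(item["generated_text"])
```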

GPT-2 systems can easily replicate the content seen in online communities like Facebook. The tactic of social media executives -- trying to remove every piece of disinformation on their platforms -- is the wrong approach, Johnson argued.

"Everybody's kind of missing the point because you have robocalls, [then] next week or the day after, it's Taylor Swift," he said, referring to the AI deepfake nude images of the pop superstar that appeared last month.

While X, formerly known as Twitter, and Facebook can try to eliminate misinformation on their platforms, the content will keep appearing. It is often created not by people on those mass platforms, but by those on smaller platforms such as Telegram, Johnson added.

"There are 25 million people that are part of these harmful communities, just feeding on other types of platforms," Johnson said. "When they get pushed off of Facebook or X, they go there, they regroup and they come back."

These hackers and digital provocateurs also have connections to various communities on Facebook and X through which they spread their misinformation. The leaders of X and Facebook are left constantly chasing content that keeps popping up somewhere else because they don't know where it's coming from.

"If you just wait for it to appear, it's going to look like it is in random places," Johnson said.

Moreover, with AI technology, online troublemakers can operate at a different level. Previously, they were limited by the fact that they had to sleep and take breaks between the chaos they caused. Now AI technology puts their actions on autopilot.

"This has been going on for a while, but never at this scale," Johnson said. "It's 24/7, and that's a scale that we've never seen before."

He added that one approach to the problem is to look at it scientifically and map the connections from the various bad-actor sites to the communities within X and Facebook.
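
One way to picture that kind of mapping is as a directed graph linking communities across platforms. The sketch below uses the networkx library with entirely hypothetical platform names, community names and links; it illustrates the general idea of tracing which mainstream communities sit downstream of known bad-actor communities, not Johnson's actual data or methodology.

```python
# Model communities as nodes and observed content flows (shared URLs, reposts,
# invites) as directed edges, then ask which mainstream communities are
# reachable from known bad-actor communities. All names and links here are
# hypothetical placeholders.
import networkx as nx

G = nx.DiGraph()

# Hypothetical bad-actor communities on smaller platforms.
bad_actor_nodes = ["telegram/chan_A", "gab/group_B"]
# Hypothetical mainstream communities on large platforms.
mainstream_nodes = ["facebook/local_politics", "x/election_news", "facebook/parents_group"]

G.add_nodes_from(bad_actor_nodes, bad_actor=True)
G.add_nodes_from(mainstream_nodes, bad_actor=False)

# Hypothetical observed flows, e.g. a link first posted in one community
# later appearing in another.
G.add_edges_from([
    ("telegram/chan_A", "facebook/local_politics"),
    ("telegram/chan_A", "x/election_news"),
    ("gab/group_B", "x/election_news"),
    ("x/election_news", "facebook/parents_group"),
])

# Which mainstream communities are downstream of any known bad-actor community?
exposed = set()
for source in bad_actor_nodes:
    exposed |= nx.descendants(G, source)

print("Communities exposed to bad-actor content:", sorted(exposed & set(mainstream_nodes)))
```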

"There has to be a shift," Johnson said. Lawmakers and the leaders of popular social media sites must take a different approach in how they try to get rid of the content produced by the bad actors.

However, it's unlikely that social media leaders or lawmakers can do anything effective to combat election AI meddling in the next six months, he added.

"Voice AI is a very easy thing to do," he said, referring to the technology behind the New Hampshire robocalls. "Unfortunately, this is an arms race, and we're a bit behind."

Attempts at regulation

Lawmakers have made some attempts to take action.

For example, on Jan. 10, lawmakers in Congress proposed a bill called the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act, or the No AI Fraud Act.

The act would ban the creation of AI-generated replicas of people's likeness, voice or other personal characteristics without consent.

After the wide release of AI-generated explicit photos of Taylor Swift, lawmakers also proposed a bill named the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which would let people sue over fake pornographic images of themselves.

Moreover, the Federal Communications Commission revealed in January that it is seeking to make AI-generated robocalls illegal.

However, these bills are unlikely to be approved anytime soon.

"Between this moment ... and the election, it's hard to imagine a big piece of legislation making its way through Congress," said Michael Bennett, responsible AI lead at the Institute for Experiential AI at Northeastern University.

Instead, those concerned about digital election interference will more likely have to rely on watchdogs within government at the state and federal levels, such as regulatory agencies, as well as on private organizations. For example, the California Institute for Technology and Democracy, created by California Common Cause, has been lobbying for AI regulation ahead of the 2024 election cycle.

"We will wind up seeing all of them collaborating to the best of their abilities to one; monitor for these types of malicious applications; and then, when they find them, acting to just knock them out as quickly as possible," Bennett said.

However, the work won't be simple, Kompella said.

"There are no easy solutions to curb the misuse of AI in politics," Kompella said. "Mass-produced misinformation can swamp fact checkers."


Social media platforms should boost the resources allocated to content moderation, build tools to flag misinformation and be fully transparent about moderation policies, he added.

It would also be helpful for the big tech companies that produce generative AI technologies to remain sensitive to signals that indicate misinformation and report them to official authorities, Bennett said. The vendors should also keep voters informed about what they're doing, he said.

However, while social media platform providers and creators of generative AI tools will try to deter the misuse of their tools, it's unlikely they will limit access to many of these freely accessible tools, said David Glancy, professor of strategy and statecraft at the Institute of World Politics.

"They have an interest and a business interest in getting their AI to be adopted, to be used by most people," Glancy said. Because of this, it's unlikely that they will impose significant security controls to limit abusers, he added.

Recently, Microsoft CEO Satya Nadella said his company and others are working to identify misinformation and interference by creating tools such as watermarking to make voters aware of what is real and fake.

The impact

On the other hand, despite the AI-fueled misinformation likely to proliferate this year, it's unclear how much of an effect it could have on the actual elections, Glancy said.

"What we've seen with misinformation, disinformation [and] online propaganda is [that] it really helps kind of drive people to extremes," Glancy said. "It doesn't necessarily change their views, but it gets them to be more extreme in their positions."

Misinformation or disinformation often must also be repeated and include some action to have an impact, he added.

"It takes a lot to change people's core beliefs," he said. "I just don't know that there's that many people who are undecided right now here in America with the likely two major candidates."

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
