The deepfake 2020 election threat is real, but containable

Disinformation could harm the 2020 presidential election, and technology simply isn't advanced enough to detect manipulated content, especially deepfakes.

On the eve of the 2020 presidential election, deepfakes are easier to create than ever before and looming as a threat over what already is a controversial election.

These digitally manipulated images -- shrewdly designed with advanced technology to distort and deceive -- are still as hard to detect as manipulated media was when it gained notoriety as a tool to disrupt the 2016 election.

"Deepfakes could impact the 2020 election, but not in the way we think," Forrester Research analyst Brandon Purcell said.

The odds of a believable deepfaked video of a candidate not being immediately and widely discredited are small, he said. The greater danger involves deepfakes of local leaders conveying disinformation about this year's unique electoral process, which is expected to include an unprecedented number of mail-in ballots due to the COVID-19 pandemic, he continued.

"Deepfakes spreading lies and incorrect information about how, where and when to vote could disenfranchise large swaths of the population," Purcell added.

Deepfakes are only one facet of the disinformation that malicious actors -- notably the Russian government -- have used to threaten elections. Other disinformation techniques such as fake news articles and maliciously edited media have also advanced.

[Image: President Donald Trump shared a heavily edited video of Nancy Pelosi, taken from Fox, on Twitter last year.]

Manipulated media

Deepfake refers to images, videos or audio that have been manipulated using sophisticated machine learning and AI tools. Making deepfakes generally involves training a neural network architecture such as a generative adversarial network (GAN).

GANs consist of two neural networks that work against each other. Both networks are trained on the same set of images, videos or audio, but one, the generator, tries to create fake content realistic enough to fool the second, the discriminator, which tries to determine whether the content it sees is real or fake. Repeated over many rounds, this contest steadily improves the quality of the fake content.
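For readers who want to see the mechanics, here is a minimal, illustrative sketch of that adversarial loop in PyTorch. It uses random tensors as stand-in "real" images and toy network sizes; an actual deepfake pipeline uses convolutional networks and far more data.

```python
# Minimal GAN training loop in PyTorch (illustrative sketch only).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: scores how likely its input is to be real.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, img_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator: label real images 1, generated images 0.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```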

This technology can create more convincing video, image and audio hoaxes than conventional editing methods.

Foreign and domestic political agents create deepfakes or other manipulated images and videos to try to influence voters.

As the presidential campaigns heated up in late August, for example, White House Deputy Chief of Staff Dan Scavino tweeted a video that appeared to show Democratic presidential nominee Joe Biden sleeping through an interview with CBS.

The video, however, was fake. Someone had digitally placed Biden's face over Harry Belafonte's face. Belafonte was seemingly sleeping during the actual interview, which happened in 2011, although the singer later claimed his earpiece wasn't working.

CBS anchor John Dabkovich confirmed the video was manipulated, and Twitter flagged the post.

Ahead of the election, social media platforms have ramped up efforts to flag manipulated content and disinformation. In May, Twitter added cautionary labels to two tweets posted by President Trump with unsubstantiated claims about absentee ballots.

States are also taking stronger stances on deepfake content.

Last year, Texas passed a bill criminalizing the creation and distribution of deceptive videos intended to influence the outcome of an election.

California followed with a measure, signed into law in October 2019, that prohibits the distribution of manipulated audio or video meant to harm a political candidate's reputation or sway voters unless the manipulated content is clearly marked as false.

Deepfakes spread

Even as policymakers and organizations work to ban deepfakes, the technology to create them is constantly growing more powerful, making deepfakes harder to detect than ever before.

"Are we at a point where one can create deepfakes that could fool the average person? Yes, but only as long as those fakes escape serious scrutiny," said Aditya Jain, a technologist with a background in data visualization and elections. Jain has worked on election coverage in newsrooms in India and the U.S.

Social media platforms and their fact-checkers are equipped to flag deepfakes, he said. But, deepfakes sent over person-to-person communication platforms, such as WhatsApp, are harder to detect.

Deepfakes spreading lies and incorrect information about how, where and when to vote could disenfranchise large swaths of the population.
Brandon Purcell, analyst, Forrester Research

"So, if someone is looking to influence an election, they would be walking a tightrope between trying to influence an election at scale and not becoming too big to attract the attention of watchdogs and platform owners," Jain said.

But even when posted to a public forum, deepfakes or manipulated videos are hard to catch before they do damage. Also, even if a social media platform catches one, it doesn't always remove it.

An example is the manipulated video of House Speaker Nancy Pelosi that spread across Facebook last year. A simple edit slowed and slurred her speech, making it appear that she was drunk in the three-minute viral video.

Millions of people watched the video, and Facebook, despite knowing it was fake, decided to leave it up. The video seemingly fell outside Facebook's manipulated media policy, although experts were not clear on why. The platform did, however, add a "partially false" label to the video.

Trump and his supporters subsequently used the video to call Pelosi's mental competence into question.

These types of videos, with simple edits, could affect the election more than deepfakes, said Claire Leibowicz, a program lead who directs strategy and execution for the Partnership on AI's (PAI) AI and Media Integrity portfolio.

Based in San Francisco, PAI is a nonprofit coalition of more than 100 partners from academia, civil society, industry and nonprofits dedicated to the responsible use of AI. Its founding members include Google, IBM, Microsoft, Amazon and Facebook.

Videos like the Pelosi one are easier to make than deepfakes because they usually require only simple, non-AI-powered editing.
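For a sense of how little effort such a "cheapfake" takes, here is an illustrative sketch using the open source moviepy library: slowing a clip to 75% speed also slows and lowers the audio. The file names are placeholders, not real footage.

```python
# A one-line "cheapfake" edit: slow a clip down, which also slurs the audio.
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx

clip = VideoFileClip("speech.mp4")          # placeholder input file
slowed = clip.fx(vfx.speedx, 0.75)          # play at 75% speed
slowed.write_videofile("speech_slowed.mp4") # placeholder output file
```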

Experts PAI works with said deepfakes could affect the election, but other manipulated videos could likely do more damage, Leibowicz said.

Still, a damaging deepfake video before the election is "a low-likelihood, very high-risk event," she said.

Failure of technology

One of the chief problems deepfakes pose is that, from a technology perspective, few foolproof tools are available to flag manipulated content.

"On the types of data they are trained on, they do really well," Leibowicz said. Not so on other types of data.

Big-name technology vendors such as Microsoft and Intel have created AI-powered tools to detect manipulated content. But it's unclear how well these deepfake detection tools actually work.

The Deepfake Detection Challenge, a recent Kaggle competition sponsored by AWS, Facebook, Microsoft, PAI and others, asked participants to create AI models to detect deepfake content.

More than 2,000 teams competed for the $500,000 first-place prize. The winning algorithm of the challenge, which ended in June, achieved an accuracy of only about 65%.

According to Leibowicz, AI-powered tools are not only imperfect at detecting false artifacts, but they also can't understand most of the context around an image or video, meaning they can't tell whether content is satirical or malicious. In addition, these tools, when they work, can only spot inauthenticity, not prove authenticity.
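As a rough illustration of how many of these detectors operate, the sketch below scores a video frame by frame with a binary classifier and averages the result. The checkpoint file and video path are hypothetical; real detectors vary in architecture and, as the Kaggle results show, in reliability.

```python
# Frame-level deepfake scoring sketch (hypothetical trained weights).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 1)     # fake-vs-real head
model.load_state_dict(torch.load("detector.pt"))  # assumed checkpoint
model.eval()

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

cap = cv2.VideoCapture("suspect_video.mp4")  # placeholder video path
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        # Sigmoid turns the raw score into a per-frame fake probability.
        scores.append(torch.sigmoid(model(prep(rgb).unsqueeze(0))).item())
cap.release()

print(f"mean fake probability: {sum(scores) / len(scores):.2f}")
```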

"The technical solution is half the battle," Leibowicz said.

Even with a perfect detection model, that model won't be able to assess context. And even if it could, the public could still choose to reject its label of inauthentic simply because they don't want to believe the content is fake, she said.

Given wide distrust of science and the media, it's not unlikely the public would reject such a label.

"We don't even all agree what misinformation is," Leibowicz said.

Technological solutions to fight deepfakes aren't "exactly a silver bullet," said David Cohn, senior director of Alpha Group, the in-house tech and media incubator for media company Advance Local.

It's a "constant arms race," he said. As AI-powered deepfake detectors get better, so do the AI-powered tools to create deepfakes.

Content provenance

According to Ben Goodman, senior vice president of global business and corporate development at identity and access management vendor ForgeRock, the best way to combat manipulated content isn't with AI-powered software, but by establishing provenance, or record of ownership.

"Fundamentally, fighting deepfakes is about being able to establish content authenticity, so you can decide what's real from what's fake," he said. "The problem we have today is we have a bunch of content where we don't know where it came from."

Disinformation, he added, moves extremely quickly, while corrections spread neither as quickly nor as widely. That makes it important to flag or remove manipulated content fast.

Content creators have various ways to establish provenance, including putting digital signatures in the content itself.
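As a simplified illustration of the signature idea, the sketch below uses the Python cryptography package to sign a file's bytes so anyone with the publisher's public key can verify the file is untampered. It shows the general mechanism only, not the CAI's actual specification; the file name is a placeholder.

```python
# Signature-based provenance sketch: sign content bytes, verify later.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()

with open("photo.jpg", "rb") as f:  # placeholder file name
    content = f.read()

# The publisher distributes this signature alongside the content.
signature = publisher_key.sign(content)

# Verification by a platform or consumer holding the public key:
try:
    publisher_key.public_key().verify(signature, content)
    print("content matches the publisher's signature")
except InvalidSignature:
    print("content was altered after signing")
```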

The Content Authenticity Initiative (CAI), founded by Adobe, The New York Times Company and Twitter, is an effort to develop industry standards for content attribution.

The system, which is expected to be integrated as a feature in Adobe products, aims to enable content creators to securely attach attribution data to content they choose to share, said Will Allen, vice president of community products at Adobe and one of the CAI leads for the vendor.

The CAI platform, set to roll out for some users of Adobe's Photoshop software and Behance networking platform this year, can record who has handled or edited content. The platform can also keep track of where and when content was published.

"We hope to significantly increase trust and transparency online by providing consumers with more information about the origins and level of modification of the content they consume, while also protecting the work of creators,"Allen said.

But it's up to content providers and consumers to use standards for content attribution.

There's a larger question, too, of who should have access to detection tools. If the wrong people get them, they can use the technology to make their manipulated media less likely to be caught. But if only a few organizations and journalists have access, it's easier for manipulated content to circulate unnoticed.

Easy to make

It's a fine line to walk, and realistic-looking deepfake content is already easy to make.

Earlier this year, Jain, the technologist, made and posted a deepfake video featuring Supreme Court Justice Brett Kavanaugh to the SFWdeepfakes subreddit on Reddit. The subreddit, filled with mostly satirical deepfake images and videos, highlights the entertaining side of some deepfakes.

In Jain's video, Justice Kavanaugh enters his 2018 Supreme Court confirmation hearing angry and flustered. He sits down, still angry, and begins talking about his friend Donkey Doug and his love of beer.

Of course, it's not actually Justice Kavanaugh, although someone might not know it by only looking at the video. It's a deepfake, a manipulated version of a late-2018 skit from the TV show Saturday Night Live in which actor Matt Damon played Kavanaugh. Damon's voice survives in the manipulated video, but his face is replaced with Kavanaugh's, creating a realistic-looking moving image.

The video is satirical -- while it features a political figure, Jain's intention isn't to trick anyone into believing that's the actual Kavanaugh saying those things. Yet, if posted to a different forum, or posted with different intent, it's possible the video could, indeed, deceive.

"I thought this was real," one user commented on it. "This is just a clip of the real hearing," commented another.

Jain used DeepFaceLab, one of the most popular open source deepfake creation tools, to make his video. Easily accessible on GitHub, DeepFaceLab requires some basic coding to use, although enough tutorials are online that even someone with virtually no coding experience could quickly make a deepfake with the program.

Jain has extensive coding experience, and the video, not counting the time it took to train the model overnight, took him about an hour to make. It was the second deepfake video he had made.

"The time it takes to create a deepfake mostly depends on the quality of the output you're looking for," he explained.

Gathering images and footage for model training -- Jain used C-SPAN footage for his Kavanaugh video -- is cumbersome, he continued. Still, he spent most of his time away from the computer, waiting for the model to train.
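The most labor-intensive part of that workflow, collecting face crops from footage, is usually automated by the tools themselves. The sketch below illustrates the general idea with OpenCV's bundled face detector; the video path and output folder are placeholders, and tools like DeepFaceLab use more accurate detectors than this.

```python
# Illustrative "extraction" step: pull face crops out of source footage
# to build a training set for a face-swap model.
import os
import cv2

os.makedirs("faces", exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("cspan_clip.mp4")  # placeholder source footage
saved, frame_idx = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 10:          # sample every 10th frame
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.imwrite(f"faces/face_{saved:05d}.jpg", frame[y:y+h, x:x+w])
        saved += 1
cap.release()
```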

Still easier tools

There are still more accessible tools to use. Reface, for example, a mobile app with more than 40 million installs, uses deepfake technology to enable users to convincingly put their face into popular videos and images. Unlike other deepfake approaches, which require separately trained networks for each switched face, Reface provides a universal neural network to swap all possible human faces, said Oles Petriv, CTO of RefaceAI.

Using machine learning methods such as GANs, Reface's cloud-based technology can swap facial features from a single photo. The photo is converted into "face embeddings," an anonymized set of numbers that describes a person's facial features and distinguishes them from those of other people.
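To make the embedding idea concrete, the sketch below uses the open source face_recognition library, which reduces each face to a 128-number vector; comparing vector distances tells one person apart from another. This illustrates the general technique, not Reface's proprietary system, and the image paths are placeholders.

```python
# Face embeddings sketch: each face becomes a 128-dim vector, and
# vector distance distinguishes people.
import face_recognition

img_a = face_recognition.load_image_file("person_a.jpg")
img_b = face_recognition.load_image_file("person_b.jpg")

# Assumes each image contains at least one detectable face.
emb_a = face_recognition.face_encodings(img_a)[0]  # 128-number vector
emb_b = face_recognition.face_encodings(img_b)[0]

# Small distance -> likely the same person; large -> different people.
dist = face_recognition.face_distance([emb_a], emb_b)[0]
print(f"embedding distance: {dist:.3f}")
```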

High-quality content on the app is prepared by Reface only and is thus premoderated.

"We strictly regulate every piece of content we add to the app and we don't support any kind of usage for negative purposes," Petriv said.

The vendor plans to launch Reface Check, a tool to detect any content made with RefaceAI technology.

"We want to show the example that you can give access to synthetic media with control and minimize the potentially negative use cases," Petriv noted.

Meanwhile, open source tools such as DeepFaceLab and Faceswap don't have built-in content moderation.

But, even with moderated content, deepfakes pose a threat. The ease of making them, and their pervasiveness, have created another problem -- fake deepfakes.

Earlier this year, Winnie Heartstrong, a Republican candidate who unsuccessfully ran for Congress in Missouri, published a lengthy report claiming that George Floyd, the Black man allegedly killed by police officers in Minneapolis earlier this year, isn't real.

Heartstrong claimed the video of Floyd's arrest, which triggered protests against police brutality and institutional racism around the world, was staged. The people in the video, Heartstrong claimed in her report, aren't real; instead, they are digital composites of multiple people made with deepfake technology.

The baseless claim highlights the other problem deepfakes pose -- that real photos and videos can be dismissed as fake.

Unfortunately, technology in its current form simply can't reliably distinguish real from fake.
