
AI and disinformation in the Russia-Ukraine war

From false videos circulating on TikTok to AI-generated humans and deepfakes, the Russia-Ukraine war is playing out both in the physical world and virtually.

Opening her Facebook account on March 10, one of the first things Aleksandra Przegalinska saw on her newsfeed was a post from a Russian troll spreading disinformation and praising Russian President Vladimir Putin.

The post claimed Putin was doing a good job in the Russia-Ukraine war.

As someone following the conflict between Russia and Ukraine, the AI expert and Polish university administrator was taken aback by what she believed to be an inaccurate post.

Although she realized that the post came from a friend of one of her Facebook friends, not anyone she knew directly, Przegalinska said it shows that recommendation systems prioritize content that is controversial and likely to generate conflict.

"Recommendation systems are still very crude," said Przegalinska, who is also a research fellow at Harvard University and a visiting fellow at the American Institute for Economic Research in Great Barrington, Mass. "If they see I'm interested in a conflict and Ukraine -- which is clear when you analyze the content on my social media -- they can just try to analyze and promote content that's related to that."

Recommendation algorithms, disinformation and TikTok

Recently, recommendation algorithms have helped spread disinformation about the Russia-Ukraine war on social media.

Disinformation and misinformation are especially rife on TikTok. Some users -- wanting to go viral, make money or spread Putin's agenda -- are combining war videos with old audio clips to create false narratives about what's going on.

While some posts about the war offer real accounts of what's going on, many others appear to be unverifiable.

For example, many TikTok videos during the war have included an audio clip of a Russian military unit telling 13 Ukrainian soldiers on Snake Island, a small island off the coast of Ukraine, to surrender. Some of those videos stated the men were killed.

A TikTok video falsely claiming the Snake Island soldiers were killed, an example of the misinformation that recommendation algorithms help spread on the platform.

The soldiers' deaths were initially confirmed by Ukrainian President Volodymyr Zelenskyy, but Russian state media then showed the soldiers arriving in Crimea as prisoners of war. Ukrainian officials later confirmed that the soldiers were alive but being held captive.

TikTok has also become a platform for Russia to promote Putin's agenda for invading Ukraine. Although the platform recently suspended all livestreaming and new content from Russia, it did so days after videos of influencers supporting the war were already circulating.

Using precisely the same words, Russian TikTok users repeated false Russian claims about a "genocide" committed by Ukrainians against other Ukrainians in the Russian-speaking separatist Donetsk and Luhansk regions. The posts condemned Ukraine for killing innocent children, but there is no evidence to support the claim.

On March 6, TikTok suspended new videos from Russia after Putin signed a law introducing jail terms of up to 15 years for anyone who publishes what the state considers "fake news" about the Russian army.

Disinformation, AI and war

The spread of disinformation by both sides in wartime is not new, said Forrester analyst Mike Gualtieri.

However, using AI and training machine learning models to be sources of disinformation is new, he said.


"Machine learning is exceptionally good at learning how to exploit human psychology because the internet provides a vast and fast feedback loop to learn what will reinforce and or break beliefs by demographic cohorts," Gualtieri continued.

Because these machine learning capabilities are at the foundation of social media, government entities and private citizens alike can use the platforms to try to sway the opinions of masses of people.

Transformer networks such as GPT-3 are also new, Gualtieri said. They can be used to generate messages, taking the human out of the process altogether.

"Now you have an AI engine that can generate messages and immediately test if the message is effective," he continued. "Rapid-fire this 1,000 times per day, and you have an AI that quickly learns how to sway targeted demographic cohorts. It's scary."

What seems even scarier is how easy it is for social media users to build these kinds of AI engines and machine learning models.


Deepfakes and the spread of disinformation

One application of machine learning that has circulated during the war is AI-generated humans, or deepfakes.

Twitter and Facebook took down two fake profiles of AI-generated humans claiming to be from Ukraine. One was a blogger ostensibly named Vladimir Bondarenko, from Kyiv, who spread anti-Ukrainian discourse. The other was Irina Kerimova, based in Kharkiv, supposedly a teacher who became the editor in chief of "Ukraine Today."

Unless one examines both extremely closely, it's nearly impossible to tell that they're not real. This supports findings from a recent report in the Proceedings of the National Academy of Sciences that AI-synthesized faces are hard to distinguish from real faces and even look more trustworthy.

Generative adversarial networks, or GANs, help create such AI-generated images and deepfakes. Two neural networks work against each other to produce the fictional image: a generator fabricates images, while a discriminator tries to tell them apart from real ones, pushing the generator toward ever more convincing fakes.
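A minimal PyTorch sketch of that adversarial pairing appears below. The layer sizes are toy values chosen for illustration; real face generators such as StyleGAN are vastly larger and structured very differently.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(        # noise vector -> flattened 28x28 image
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(    # image -> probability it is real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(16, 64)            # batch of 16 random noise vectors
fake_images = generator(noise)         # the generator's forgeries
realness = discriminator(fake_images)  # the discriminator's judgment

# Training alternates: the discriminator learns to push these scores
# toward 0 (fake), while the generator learns to push them toward 1,
# i.e., to fool the discriminator. This is the generator's objective:
gen_loss = nn.BCELoss()(realness, torch.ones(16, 1))
```

Once trained, only the generator is needed to mass-produce synthetic faces, which is part of why fake personas are so cheap to create.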

Creating deepfakes used to be complicated and required a complex skill set, Przegalinska said.

"Currently, the worrying part is that many of the deepfakes can be created without coding knowledge," she said, adding that tools that can be used to create deepfakes are now easy to find online.

Also worrying is that there are few limits on how neural networks can be used to create deepfakes, such as a video portraying Zelenskyy surrendering, Przegalinska said.

"We don't really know what the full scale of using deepfakes in this particular conflict or war will be, but what we do know is that already we have a few documented cases of synthetic characters," she said.

And since Russia has banned many social media platforms, including Facebook and Twitter, many citizens in the country know only what Russian state TV shows them. It would be easy to use deepfake technology on Russian TV to push Putin's narrative that Russia is Ukraine's savior, Przegalinska said.

It's important for consumers of social media to pay close attention to the news because it's hard to know what is real and what's fake.

"There's this alarmist aspect to it that says, 'Listen, you have to pay attention to what you're seeing because it can be a defect,'" she continued.

"Russia is very good at the misinformation game," Przegalinska said. "Even though the tools that they're using are maybe very refined … they're an obvious weapon."

Meanwhile, the West is not as well prepared for the disinformation game, and at this moment two wars are going on, she said.

"This is a parallel war happening to the physical world and obviously, the physical war is the most important one because there are people dying in that world, including little children," Przegalinska said. "However, the information war is just as important in terms of impact that it has."

