
How AI can counteract disinformation in Russia-Ukraine war

Irina Rish, an AI expert at the University of Montreal, discusses the use of bots to combat disinformation and other ways AI technology can be used non-violently in warfare.

As the Russia-Ukraine war rages on, technology researchers and practitioners are looking at ways technology can play a positive role instead of serving as a weapon to spread disinformation.

Experts agree that a cyberwar using AI technology is going on across social media. The disinformation/misinformation battle comes as Russia has banned platforms including Facebook, Instagram and Twitter, and as TikTok decided to stop operating in the country.

Despite the misuse of AI algorithms, some people are using AI models for defensive purposes in the war.

For example, Clearview AI -- a tech vendor embroiled in controversy in recent years over selling facial recognition services to law enforcement -- said it has started using its technology to identify the dead and vet people of interest at checkpoints in Ukraine.

While this is one application of AI technology in warfare, experts say others are yet to be explored.


In this Q&A, Irina Rish, an associate professor and AI expert in the computer science and operations research department at the University of Montreal and a member of the Mila-Quebec AI Institute, explores alternatives for using AI in war.

Her ideas go beyond training AI algorithms to fight disinformation in the Russia-Ukraine war, and even include using AI to disable physical weapons.

How can AI technology play a role in the Russia-Ukraine war?

Irina Rish: In principle, there are various things you could do [with AI technology]. You could train AI models to better classify disinformation and help people analyze whether an image is fake, whether the image was recent, or if it was taken some time ago.
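To make the image check concrete, here is a minimal sketch, assuming a hypothetical local archive of previously seen images, of how perceptual hashing can flag a "new" photo that is actually older material being recirculated. The `imagehash` library, the archive paths and the distance threshold are illustrative choices, not tools Rish names.

```python
# Sketch: detect whether an incoming war photo matches older archived images.
# Perceptual hashes survive resizing and recompression, so a small Hamming
# distance between hashes suggests a re-post of old material.
from pathlib import Path

import imagehash
from PIL import Image

# Hypothetical archive of previously seen images.
ARCHIVE_DIR = Path("archive/known_images")

def build_archive_index() -> dict:
    """Hash every archived image once so lookups don't re-read files."""
    return {p.name: imagehash.phash(Image.open(p))
            for p in ARCHIVE_DIR.glob("*.jpg")}

def find_prior_matches(candidate_path: str, index: dict, max_distance: int = 8):
    """Return archived images perceptually close to the candidate.

    The threshold is a tunable assumption, not a standard value.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return [(name, candidate_hash - archived_hash)  # `-` gives Hamming distance
            for name, archived_hash in index.items()
            if candidate_hash - archived_hash <= max_distance]

if __name__ == "__main__":
    index = build_archive_index()
    for name, dist in find_prior_matches("incoming/shared_photo.jpg", index):
        print(f"Possible re-post of {name} (hash distance {dist})")
```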

One could build filters that shield you from potential disinformation. You can install them, and they can classify whatever is incoming. How likely is it to be accurate information? Does it look like propaganda? Does it look like hate speech?
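As one illustration of such a filter, the sketch below scores incoming text with an off-the-shelf zero-shot classifier from Hugging Face Transformers. The labels and flagging threshold are assumptions for demonstration; a deployed filter would use models trained on labeled propaganda and hate-speech data.

```python
# Sketch: screen incoming messages for propaganda or hate speech using a
# pretrained zero-shot classifier, so no task-specific training data is
# needed to prototype the filter.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["factual reporting", "propaganda", "hate speech"]

def screen(text: str, flag_threshold: float = 0.6) -> dict:
    """Score one incoming message against the filter's labels."""
    result = classifier(text, candidate_labels=LABELS)
    scores = dict(zip(result["labels"], result["scores"]))
    flagged = [label for label in ("propaganda", "hate speech")
               if scores[label] >= flag_threshold]
    return {"text": text, "scores": scores, "flagged": flagged}

print(screen("Their soldiers are subhuman and deserve no mercy."))
```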


Moreover, instead of bots that spread disinformation, you might think about deploying an 'army of bots' that would try to detect whether certain discussions on social media, for example, are heading in a completely misinformed and potentially heated direction.

You cannot deploy an army of people constantly seeking out [disinformation and misinformation] on all social media, constantly entering such conversations and trying to explain reality versus misinformation. But a mass deployment of truthful bots could potentially be useful.

If you have an army of misinformation bots that try to sway people's opinion in one direction, you need to produce some countermeasures for that.
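Such a countermeasure bot might look something like the sketch below. The misinformation scorer here is a toy keyword heuristic standing in for a trained classifier, and the thread structure abstracts away any specific platform API; neither reflects a system Rish describes.

```python
# Sketch: a "truthful bot" that watches a discussion thread, scores recent
# posts for misinformation, and queues a corrective reply once the thread
# trends misinformed.
import statistics
from dataclasses import dataclass, field

# Toy stand-ins for claims a fact-checking team has already debunked.
DEBUNKED_PHRASES = {"staged", "crisis actors", "fake bodies"}

@dataclass
class Thread:
    thread_id: str
    posts: list = field(default_factory=list)

def score_misinformation(post: str) -> float:
    """Toy stand-in for a trained classifier returning P(misinformation)."""
    hits = sum(phrase in post.lower() for phrase in DEBUNKED_PHRASES)
    return min(1.0, hits / 2)

def monitor(thread: Thread, threshold: float = 0.5, window: int = 5):
    """Flag a thread when its recent posts trend toward misinformation."""
    recent = thread.posts[-window:]
    if len(recent) < window:
        return None
    avg = statistics.mean(score_misinformation(p) for p in recent)
    if avg >= threshold:
        # A real bot would post sourced corrections, rate-limited and
        # clearly labeled as automated, rather than arguing point by point.
        return f"[bot] Thread {thread.thread_id}: posting fact-check links"
    return None

thread = Thread("t1", ["It was staged.", "Crisis actors again.",
                       "All fake bodies.", "Totally staged.", "Staged event."])
print(monitor(thread))
```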

What kind of AI skills or tools are needed to create these 'war-correcting bots'?

Rish: There are some promising developments in recent AI that might be very helpful. You've heard about GPT-3.

It is not just GPT-3. There are other kinds of systems, both for language and for images and video. There are nonprofit organizations like EleutherAI trying to train very large-scale language models, and other multimodal models, on large amounts of diverse data.

There is a lot of research in this area. They are called foundation models -- foundation because you do not use them directly in your applications, but you can build on top of them. But they are also foundations because they incorporate so much information and are so capable of adapting to your needs and different tasks.
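As a concrete example of building on top of a foundation model, the sketch below loads a small pretrained transformer and fine-tunes only a new classification head for the disinformation task, reusing everything the base model already learned. The model name, labels and two-example batch are illustrative assumptions; the models Rish mentions (GPT-3, EleutherAI's) are far larger.

```python
# Sketch: adapt a pretrained "foundation" model to disinformation
# classification by freezing its body and training only a new head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "distilbert-base-uncased"  # small stand-in for a foundation model
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)

# Freeze the pretrained body; only the classifier head stays trainable.
for param in model.distilbert.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Tiny illustrative batch; real adaptation needs a labeled corpus.
texts = ["Independent outlets confirmed the strike.",
         "The footage was staged with crisis actors."]
labels = torch.tensor([0, 1])  # 0 = credible, 1 = disinformation

batch = tokenizer(texts, padding=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print(f"One adaptation step done, loss={outputs.loss.item():.3f}")
```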

In addition to counteracting disinformation, in what other ways could AI models help in the Russia-Ukraine war?

Rish: As to actual war, it is an emotional topic for me because I do have friends in Ukraine and Russia and friends who have relatives there. Watching videos of toddlers dying because of bombings is very difficult.

It would be good if AI researchers tried to make sure that AI models are not used for automated weapons, although it is hard to control new inventions -- just like nuclear energy.

What I really would like is for someone to build automated weapon destroyers -- some kind of technology involving AI models that could disable weapons without hurting anyone, not even the aggressor. You do not want to respond by hurting the aggressor's army or anyone on their side. You just want to make sure they are unable to use their weapons. I think that would be very useful.

Editor's note: This interview has been edited for clarity and conciseness.
