
Ethical concerns of AI call growing adoption into question

AI tools are getting easier to use every day, putting powerful capabilities into the hands of potentially malicious users. The time to think about the ethics of AI advances is now.

When Hao Li first started working on AI tools to enhance visual effects in videos, he thought the applications would be limited to the entertainment industry. But it didn't take long for him to see the kinds of applications he built being put to use in very different settings.

Today, some of the same AI techniques are used to create deepfakes: videos or photos in which deep learning models convincingly swap one person's face for another's. They have been used to create fake videos of prominent people, including Presidents Barack Obama and Donald Trump, as well as Facebook CEO Mark Zuckerberg.

The emergence of deepfake machine learning tools has led Li and others in the field to question how the applications they develop are shared with the wider world and to think about the ethical concerns of AI adoption. Leaders in the field are now grappling with what it means to work on technology that could potentially be put to harmful use.

"Suddenly, you can create realistic images in real time," Li said at the EmTech conference organized by the MIT Technology Review. "It changes things. Now, we have to worry about what if it gets in the wrong hands. We have to rethink what privacy and security mean."

The potential risks of deepfake machine learning tools don't mean technologies like face swapping and artificially generated avatars should be abandoned, Li said. In 2015, he worked on the film Furious 7, which involved combining live action footage with artificial renderings of actor Paul Walker, a star of the franchise who died during production.

Now, Li works at a company called Pinscreen, which lets users automatically create digital avatars of themselves. These are the kinds of uses for which AI facial recognition and face-swapping technology was intended, he said.

"I don't think we should stop what we're doing, because, clearly, it's not the technology that's causing the harm," he said. "There are other factors. The technologies we create have benefits. They were not designed to create deepfakes. But we have to be careful."

Yoshua Bengio, the University of Montreal professor who is considered one of the pioneers of deep learning and neural networks, said at the conference that researchers have a responsibility to think about the ethical concerns of AI and how the techniques they develop could be put to harmful uses.

He said the rapid growth in the intelligence of artificial systems needs to be paired with increases in the wisdom of researchers. Just because something is technically possible doesn't mean it should be done or shared broadly, particularly if it could be used to cause harm.

"We can't just do our research with blinders on," Bengio said. "We need to balance that increase in power that we're delivering to the world with an increase in wisdom. Otherwise, it would be like delivering nuclear bombs to children. We can see how that could end badly."

There are potential technical solutions to some emerging ethical concerns of AI. For example, Li talked about a project he's working on with Hany Farid, a professor at the University of California, Berkeley, to analyze motion patterns and other biometrics to detect deepfake videos.
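To make the idea concrete, here is a minimal sketch, in Python, of a biometric detection approach of the kind described: summarize how a subject's facial landmarks move over a clip, then train a classifier to separate real footage from fakes. This is an illustration under stated assumptions, not Li and Farid's actual method. It assumes landmark extraction (e.g., with a face-tracking library) has already produced a (frames, points, 2) array per clip, and it trains on synthetic stand-in data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def motion_features(landmarks: np.ndarray) -> np.ndarray:
    """Summarize a clip's landmark trajectories as a fixed-length vector.

    landmarks: array of shape (frames, points, 2) holding x/y positions
    of tracked facial landmarks, one row per video frame.
    """
    velocity = np.diff(landmarks, axis=0)      # frame-to-frame displacement
    speed = np.linalg.norm(velocity, axis=-1)  # (frames - 1, points)
    # Simple soft-biometric statistics: how much, and how smoothly,
    # each landmark moves over the course of the clip.
    return np.concatenate([
        speed.mean(axis=0),  # average speed per landmark
        speed.std(axis=0),   # motion jitter per landmark
    ])

# Synthetic stand-in data: 200 clips, 90 frames, 68 landmarks each.
# Real clips drift smoothly; fakes are given noisier motion, standing in
# for the temporal artifacts a generator might leave behind.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (100, 90, 68, 2)).cumsum(axis=1)
fake = rng.normal(0.0, 1.4, (100, 90, 68, 2)).cumsum(axis=1)

X = np.array([motion_features(c) for c in np.concatenate([real, fake])])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In a real system, the feature set would be far richer (head pose, expression dynamics, correlations between landmarks), but the structure is the same: a model of how a person genuinely moves becomes a yardstick against which suspect footage is measured.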

But, ultimately, he said educating the public about the potential presence of deepfakes in their social media feeds could be the best way to combat them. The technology is advancing so rapidly that automatic detection tools are likely to lag behind.

"One of the best things you can do is let people know that deepfakes are possible," Li said. "A lot of people and groups are likely to want to spread sensational news even if they know it's not true."
