The rise of the term deepfake has brought a negative connotation to the underlying technology, generative adversarial networks. But GANs have legitimate data use cases in the enterprise.
These generative models have significant power, but the proliferation of fake clips of politicians and adult content has initiated controversy. While companies like Google and Facebook are working to curb the popularity and spread of these videos, deepfake AI technology remains full of enterprise potential.
Behind the technology
A generative adversarial network (GAN) consists of two competing neural networks. One network, the generator, produces data meant to resemble a training data set, while the other -- the discriminator network -- judges whether each sample is real or generated. For example, GANs used in image processing are trained on legitimate images and then create their own.
This process repeats until the images produced by the generator meet enough criteria to pass as realistic. The generator is constantly trying to "fool" the discriminator into classifying its output as real.
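The adversarial loop described above can be sketched in a few lines of NumPy. This toy pairing -- a one-parameter linear generator imitating a 1-D Gaussian, a logistic discriminator and hand-derived gradients -- is an illustrative assumption for clarity, not a production GAN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the generator must imitate: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator g(z) = wg*z + bg and discriminator d(x) = sigmoid(wd*x + bd):
# each is a single linear unit, the smallest possible adversarial pair.
wg, bg = 1.0, 0.0
wd, bd = 0.0, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    xr = real_batch(batch)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(wd * xr + bd), sigmoid(wd * fake + bd)
    wd -= lr * np.mean(-(1 - dr) * xr + df * fake)
    bd -= lr * np.mean(-(1 - dr) + df)

    # Generator step: push d(fake) toward 1 (the "fooling" objective).
    df = sigmoid(wd * fake + bd)
    wg -= lr * np.mean(-(1 - df) * wd * z)
    bg -= lr * np.mean(-(1 - df) * wd)

# After training, generated samples cluster near the real mean of 4.0.
samples = wg * rng.normal(0.0, 1.0, 1000) + bg
```

The same dynamic -- generator updates driven by the discriminator's verdicts -- scales up to the deep convolutional networks behind deepfake images, just with millions of parameters instead of four.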
"The purpose behind inventing [generative adversarial networks] was to create the ability to augment data sets if you didn't have enough data, or if you have incomplete data," said Michael Clauser, head of data and trust at Access Partnership, a global tech policy consultancy. "This is a really powerful artificial intelligence that can create near data and similar data."
The power of generative adversarial networks is exactly what worries citizens and experts alike. Trouble can easily arise from a proliferation of realistic but false and incriminating photographs: when people accept a fabricated image as real, they draw incorrect conclusions, and real harm can follow.
But deepfake technology, and GANs in general, are not inherently malicious or misleading. Their ability to produce something close to a reality can assist in image processing, as well as image analysis and information processing.
GANs are a machine learning framework, and in their more benevolent use cases the technology is generally referred to by that name rather than by the term deepfake.
GANs find a healthy home in organizations seeking to simulate data or supplement limited data sets. Their ability to work from a training set to create realistic, reliable synthetic data makes them useful in the enterprise: synthetic data from GANs can feed analytics applications and trend analysis, and it is also useful in the medical industry.
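The augmentation workflow can be illustrated with a much simpler generative stand-in: fit a distribution to the real rows, then sample synthetic rows from it, just as one would sample from a trained GAN generator. The column names and numbers below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" table: 200 rows of (age, income) -- invented data
# standing in for a limited enterprise data set.
real = rng.multivariate_normal(
    mean=[40.0, 55000.0],
    cov=[[64.0, 12000.0], [12000.0, 9.0e7]],
    size=200)

# Fit a simple generative model (here a Gaussian) to the real rows.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Sample as many synthetic rows as the downstream analysis needs,
# the role a trained GAN generator plays at much higher fidelity.
synthetic = rng.multivariate_normal(mu, sigma, size=500)
```

A real GAN earns its keep when the data has structure a Gaussian cannot capture -- images, correlated categorical fields, time series -- but the fit-then-sample workflow is the same.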
When it comes to image processing in healthcare, the neural networks in a GAN can flag anomalies in patient scans by comparing them against the images in the training data set. This is especially applicable to detecting tumors in X-rays.
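GAN-based approaches such as AnoGAN score a scan by how poorly the trained model can reproduce it. The underlying intuition -- measure how far a new sample sits from what the model learned to consider normal -- can be sketched with a much simpler stand-in; all the numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical summary statistic (e.g., mean pixel intensity) for 1,000
# scans the model has learned to consider "normal" -- invented values.
normal_scans = rng.normal(loc=0.45, scale=0.03, size=1000)
mu, sd = normal_scans.mean(), normal_scans.std()

def anomaly_score(scan_stat):
    """How many standard deviations a scan sits from the learned norm."""
    return abs(scan_stat - mu) / sd

typical = anomaly_score(0.46)   # close to the learned distribution
outlier = anomaly_score(0.80)   # far outside it, flagged for review
```

A GAN replaces the single hand-picked statistic with a learned model of whole images, so deviations show up even when no one knew in advance which feature would betray the anomaly.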
Medical training has also seen a proliferation of GANs, because real patient data and medical images can be scarce. Generative adversarial networks can supplement healthcare organizations struggling to find enough training material for their staff.
The technology has the potential to be an asset in content production, particularly when it comes to personalized content. Businesses that use mass personalization, or need to up their game on the volume and variety of content they produce, can use GANs' simulated data to help, said Andrew Frank, research vice president and analyst at Gartner.
"Content production is still rather expensive. I think there is a transformation that uses more computer techniques to generate a lot more video communications than was previously feasible," Frank said.
GANs are important for U.S. technological leadership, particularly when it comes to the U.S.-China AI race. Due to the nature of its surveillance laws, policies and history, China has access to a backlog of citizen data that the U.S. doesn't. While the U.S. has increased its surveillance activities, it has also enacted many restrictions on surveillance, which limits the data pool it uses for AI research.
"AI supremacy is determined by access to training data in order to get better trained, more intelligent algorithms and AI systems," Clauser said. "For the U.S. to compete, it needs to create data it doesn't have and China does, especially when it comes to facial recognition, motion picture video, surveillance and audio logs."
How to mitigate risks of deepfakes
While there are positive uses of this technology, the risks associated with generative models are real. In 2019, thieves used an audio deepfake to steal $243,000 from a company. By calling the office after business hours and using a generated audio file with the voice of the company's CEO, the thieves were able to convince the managing director to transfer money to avoid late payment fines.
Social media companies are grappling with how to handle this increasingly prevalent form of disinformation. Google, Twitter, Facebook and Reddit have all made policy changes in recent months to balance the risks posed by deepfake videos with freedom of speech on their platforms. However, Frank said, there aren't any technical solutions that specifically protect against deepfakes yet, so he recommends focusing on process-oriented measures.
"This is just an extension of any public relations response mechanism that deals with escalating situations," Frank said. "If one is keeping track of all of the kind of things that would require some kind of crisis management, this goes on the list."
As technological solutions develop in the coming years, Clauser believes that businesses should take a risk-based approach because security and prevention are expensive.
"If you're a bulge bracket bank or a nuclear power company, your posture toward a deepfake threat factor should be quite different than if you're a candy company or a small business," Clauser said.
The emergence of deepfakes and GANs has created a demand for video authentication tools to help viewers and publishing platforms distinguish between real videos and synthetic or deepfake videos.
"There has been talk about using blockchain technology to authenticate the provenance of video by capturing something in the camera when a video is recorded that would authenticate its origin," Frank said. "These are similar to some of the techniques that are being used to authenticate the origin of physical products to fight counterfeiting and make sure that there are no leaks in the supply chain."
The way social media has evolved has led to a huge loss of trust in digital content. According to a 2017 Pew Research Center study, only 5% of web-using adults trust the information they get from social media. Deepfakes thrive in -- and are a product of -- this atmosphere of distrust. The current concern about deepfakes is potential political fallout, but this technology has ramifications for any person or business that operates in the digital realm.
The existential question for brands is how they regain customers' trust and establish a dependable reputation in a world where people no longer inherently believe what they see on their screens. The answer, according to Frank, is authenticity.
"Brands really need to think about how they can establish more direct relationships," Frank said. "Brands are becoming too dependent on artifice when they do things like deploy synthetic customer service representatives, which initially appear to be real people and then you later discover that they are not. All of that contributes to a general loss of trust, so maybe rediscovering the human element is the key to fighting all of this."