How deepfakes threaten biometric security controls

Biometric security controls are under attack by deepfakes -- convincing images, videos and audio created by generative AI. But all is not lost. Learn how to mitigate the risk.

Technology is a truly remarkable thing, but too much of a good thing can cause big problems.

Deepfake AI, for example, has advanced to the point where many people -- and even automated systems -- cannot tell the difference between fantasy and reality.

Increasingly convincing deepfakes have huge implications for biometric security: the very technology that authenticates users now informs imitations that undermine those controls. The situation is concerning, but the industry can take steps to deal with this latest assault on security.

What are deepfakes?

Deepfakes are manipulated video, image or audio files created with generative AI (GenAI) algorithms. The intent is to create an artificial representation that is indistinguishable from the likeness of a real person, object or scene.

Some deepfakes are relatively harmless. Many people, for example, might enjoy seeing entertaining videos of themselves dancing with, say, Fred Astaire or Ginger Rogers.

The same technology, however, can be a means to any number of malicious ends, such as presenting misleading or patently false information for political reasons or simply to commit fraud. Deepfakes, for example, could enable threat actors to impersonate authorized users and access their financial accounts or secure work facilities.

How deepfakes threaten biometric security controls

Biometric security controls use fingerprints, voice and facial patterns to make sure individuals are who they say they are.

Where passwords are easy to forget, easy to share and relatively easy to steal, biometrics were supposed to be easy to use and manage -- since we carry them with us -- and difficult to replicate.

GenAI, however, is eliminating the "difficult to replicate" part of that equation, shaking the very foundation of biometric authentication.
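
To see what is at stake, consider the decision at the heart of most biometric systems: comparing a live capture against an enrolled template and accepting anything above a similarity threshold. The Python sketch below is illustrative only; the feature vectors and the 0.85 threshold are hypothetical, not any vendor's actual parameters.

```python
import numpy as np

# Minimal sketch of a biometric match decision (illustrative only).
# Real systems use vendor-specific feature extractors; the embeddings
# and the 0.85 threshold here are hypothetical placeholders.

MATCH_THRESHOLD = 0.85  # tuned per system to balance false accepts/rejects

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(enrolled_template: np.ndarray, live_capture: np.ndarray) -> bool:
    # A convincing deepfake attacks exactly this step: if a synthetic
    # face or voice yields an embedding close enough to the enrolled
    # template, the score clears the threshold and the imposter is in.
    return cosine_similarity(enrolled_template, live_capture) >= MATCH_THRESHOLD
```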

Before GenAI became widely accessible, many businesses -- often those for which data security is most important -- had already fully embraced biometric authentication. The rise of convincing deepfakes brings such organizations under immediate threat. Those in finance, healthcare and government, for example, possess sensitive information that, if compromised, would mean huge financial and reputational consequences.

As bad as this is, an evaporation of trust in biometric authentication systems could compound the problem by fostering massive societal distrust. If people lose confidence in these systems, they might fall back on older authentication technology or even stop using digital services entirely.

This Luddite-esque approach to protection would, at a minimum, slow technological innovation and could cause serious economic harm.

Mitigating the risks

Despite the serious threats that deepfakes pose, companies and individuals can take the following steps to reduce their impact on biometric security controls.

1. Double down on technology

Reject any Luddite impulses to eliminate biometric authentication.

Instead, developers of these systems should fight fire with fire, continuing to improve their algorithms to detect and prevent deepfakes. AI and machine learning are well suited to spotting anomalies in AI-generated images because they can recognize the statistical patterns the generators leave behind.
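
As one hedged illustration of what such detection can look for, generated images often carry anomalous energy in the high-frequency band of their Fourier spectrum. The NumPy sketch below computes that ratio as a toy signal; the band split and cutoff value are illustrative assumptions, and a production detector would train a classifier over many such features rather than threshold a single number.

```python
import numpy as np

# Hedged sketch of one detection idea: synthetic images can show
# unusual energy in the high-frequency band of the Fourier spectrum.
# The quarter-size low-frequency window and 0.75 cutoff are illustrative.

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central (low-freq) region."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h//2 - ch : h//2 + ch, w//2 - cw : w//2 + cw].sum()
    total = spectrum.sum()
    return float((total - low) / total)

def looks_synthetic(gray_image: np.ndarray, cutoff: float = 0.75) -> bool:
    # A single thresholded ratio is only a toy; real detectors combine
    # many features and learn the decision boundary from labeled data.
    return high_freq_energy_ratio(gray_image) > cutoff
```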

2. Multifactor authentication

MFA can improve biometric security by adding nonbiometric verification -- for example, a one-time passcode plus device- and context-specific signals, such as MAC addresses, geolocation and approved time-of-use windows.

By combining biometrics with other types of authentication mechanisms, companies can dramatically reduce their susceptibility to deepfake attacks. This should also be a natural outgrowth of zero-trust architectures, which are highly advisable in these environments.
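
A minimal sketch of that layering follows, assuming the open source pyotp library for the one-time passcode; the biometric score, device allowlist and business-hours window are hypothetical inputs chosen for illustration.

```python
import pyotp  # assumed installed: pip install pyotp

# Sketch of layering factors, as described above. Only pyotp.TOTP is a
# real library call; the thresholds and allowlists are placeholders.

TRUSTED_MACS = {"aa:bb:cc:dd:ee:ff"}   # example device allowlist
BUSINESS_HOURS = range(8, 19)          # 08:00-18:59 local time

def authenticate(biometric_score: float, totp_code: str, totp_secret: str,
                 mac_address: str, hour_of_day: int) -> bool:
    """All factors must pass; a deepfake alone defeats none of the others."""
    biometric_ok = biometric_score >= 0.85               # illustrative threshold
    otp_ok = pyotp.TOTP(totp_secret).verify(totp_code)   # time-based passcode
    device_ok = mac_address in TRUSTED_MACS              # device-specific signal
    time_ok = hour_of_day in BUSINESS_HOURS              # context-specific signal
    return biometric_ok and otp_ok and device_ok and time_ok
```

The design point is that the factors fail independently: an attacker who can synthesize a face still needs the victim's enrolled device, passcode seed and usual context.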

3. Liveness detection

Liveness detection helps ensure a biometric capture is happening in real time on the user's end; static or repeating images indicate possible deepfake activity.

For example, biometric authentication systems can ask users to perform random movements, such as lip pursing or blinking, to help reveal prerecorded and static images.
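
A bare-bones version of that challenge-response flow might look like the sketch below. The challenge names and the five-second response window are assumptions, and a real system would pair this with computer vision checks that the requested action actually occurred.

```python
import secrets
import time

# Minimal challenge-response liveness flow (illustrative assumptions
# throughout). The point is unpredictability plus a short deadline.

CHALLENGES = ("blink twice", "purse lips", "turn head left")
RESPONSE_WINDOW_SECONDS = 5.0

def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable action so replayed footage cannot comply."""
    return secrets.choice(CHALLENGES), time.monotonic()

def challenge_satisfied(issued_at: float, action_detected: bool) -> bool:
    # A prerecorded clip or static image cannot perform a randomly
    # chosen action on demand within the window.
    within_window = (time.monotonic() - issued_at) <= RESPONSE_WINDOW_SECONDS
    return action_detected and within_window
```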

[Figure: How to spot a deepfake -- unnatural pupil dilation, inconsistent blinking, blurred or irregular shadowing, poor lip syncing. As deepfakes improve, they will be increasingly difficult for humans to spot.]

4. User education and awareness

The more people are aware of how prevalent and realistic deepfakes are, the more vigilant they will be in guarding against potential threats and suspicious requests. Organizations should take steps to educate their user bases accordingly.

The public, in general, also needs to understand the limitations of existing biometric security controls. This will give them a reason to push vendors to strengthen their biometric systems.

5. Compliance

Governments and regulatory bodies could play an important role in attacking the deepfake threat. For example, new regulations and standards for the collection, storage and use of biometric data could drive organizations to better document and enforce security and privacy in their authentication practices.

Additionally, requiring companies to embed digital watermarks in their AI-generated content, indicating its artificiality, would increase transparency and make it harder to generate undetectable fakes.
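
As a toy illustration of the watermarking idea, the sketch below hides a fixed bit pattern in the least significant bits of an image's pixels. Real provenance schemes are far more robust and tamper-resistant; this is only meant to show the concept.

```python
import numpy as np

# Toy least-significant-bit watermark. Not tamper-proof: cropping or
# re-encoding destroys it. Requires a uint8 image with >= 96 pixels.

MARK = np.unpackbits(np.frombuffer(b"AI-GENERATED", dtype=np.uint8))  # 96 bits

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Write the mark into the low bits of the first len(MARK) pixels."""
    out = pixels.copy().ravel()
    out[: MARK.size] = (out[: MARK.size] & 0xFE) | MARK
    return out.reshape(pixels.shape)

def has_watermark(pixels: np.ndarray) -> bool:
    """Check whether the low bits of the leading pixels spell the mark."""
    return bool(np.array_equal(pixels.ravel()[: MARK.size] & 1, MARK))
```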

The deepfake problem is serious now, and it is only going to get worse. GenAI, in the role of the criminal's accomplice, has the potential to unleash a tsunami of economic, political and societal destruction. As deepfake technology improves, the risks of unauthorized access, fraud and identity theft will only grow.

That said, by embracing innovation and multilayered security, augmented by continuous education and improved regulations and compliance, organizations can reduce risks and improve the effectiveness of biometric authentication systems in an era of pervasive digital deception.

Jerald Murphy is senior vice president of research and consulting with Nemertes Research. With more than three decades of technology experience, Murphy has worked on a range of technology topics, including neural networking research, integrated circuit design, computer programming and global data center design. He was also the CEO of a managed services company.
