How liveness detection catches deepfakes and spoofing attacks
Biometric liveness detection can stop fake users in their tracks. Learn how the technology works to distinguish real humans from deepfakes and other spoofing attacks.
Many security experts believe biometrics-based verification -- for example, capturing users' faces through their device cameras to confirm their identities -- is critical for achieving strong cybersecurity in a user-friendly way.
However, fraudsters can now use generative AI technology to impersonate users and access their private accounts, threatening the viability of biometric systems. Defenders need tools and techniques to differentiate real humans from deepfake doppelgangers and other spoofing attempts.
One of the key techniques for spotting deepfakes is known as liveness detection: the use of an algorithm to verify that a live person is generating biometric data in real time. In addition to thwarting the use of AI-generated deepfakes for biometric authentication, liveness verification technology can also identify if an attacker is using prerecorded biometric data. Liveness detection complements authentication mechanisms, which are still responsible for determining whether the biometric data corresponds to a particular person, by making sure the identified person is authenticating now.
In this article, we look at how liveness detection -- also known as liveness tests and liveness checks -- can help cybersecurity practitioners to protect against fraud.
Types of liveness detection
There are two basic approaches to liveness checks: active and passive.
Active liveness detection. Involves challenging the user to perform one or more unexpected actions -- such as making certain facial expressions, performing gestures or saying particular words -- and then capturing and analyzing that activity for signs of AI generation.
Active liveness detection techniques are most effective at thwarting replays of biometric data. They can also be quite effective at detecting the use of AI through careful digital analysis.
Active liveness detection can be prone to false positives, however -- flagging legitimate users as spoofers and effectively denying them service. Active checks also usually add time and effort to the verification and authentication process for users.
Passive liveness detection. Involves analyzing biometric data -- such as from a fingerprint or an image from a facial-recognition selfie -- for signs of AI generation, without requiring any additional action from the user.
Because passive checks require no extra action from the user, verification is faster and easier. But passive liveness detection also tends to be less accurate than active liveness detection and easier to fool with replays.
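The core of the active approach is an unpredictable challenge: because the attacker cannot know in advance which action will be requested, a prerecorded clip is unlikely to match. The sketch below illustrates that challenge-response flow in minimal form; the challenge list, the 10-second response window and the function names are illustrative assumptions, not any particular vendor's API.

```python
import hmac
import secrets
import time

# Hypothetical pool of user challenges an active liveness check might draw from.
CHALLENGES = [
    "blink twice",
    "turn your head to the left",
    "smile",
    "read the digits on screen aloud",
]

def issue_challenge():
    """Pick an unpredictable challenge and record when it was issued.

    Unpredictability is what defeats replays: an attacker cannot know
    which action to prerecord before the challenge is revealed.
    """
    return {"action": secrets.choice(CHALLENGES), "issued_at": time.monotonic()}

def verify_response(challenge, observed_action, timeout_s=10.0):
    """Accept only if the observed action matches the issued challenge
    and arrives within the response window; a stale or mismatched
    response suggests a replay or spoofing attempt.
    """
    fresh = (time.monotonic() - challenge["issued_at"]) <= timeout_s
    return fresh and hmac.compare_digest(challenge["action"], observed_action)
```

In a real deployment, `observed_action` would come from a video-analysis model classifying what the user actually did; the short timeout narrows the window in which a generated or replayed response could be assembled.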
How liveness detection works to catch deepfakes
Liveness detection technologies use a combination of techniques to look for deepfakes, prerecorded data and other suspicious activity. These commonly include the following:
Sensing depth. This involves analyzing depth in photo or video data to confirm three-dimensionality and look for any inconsistencies that indicate a spoofing attack. An authentication attempt that lacks depth suggests the use of a deepfake displayed on a screen or a flat printed image.
Analyzing human motion. This usually focuses on monitoring a person during a video selfie session to check for natural movements. A face-liveness detection tool, for example, might look for typical blinking patterns. Human motion monitoring might also include hand and arm gestures.
Inspecting skin texture. Deepfakes tend to have skin texture with unnatural patterns or flatness that liveness detection technology can recognize as suspicious. This technique might also flag the use of 3D masks.
Future of liveness detection technology
Today, liveness detection gets the most attention for its use in Know Your Customer (KYC) efforts to reduce financial account fraud. It's possible that, in the future, it will also enjoy wider adoption across enterprise apps -- for example, to combat deepfake-based insider threats and phishing campaigns.
The increasing sophistication of AI technologies means it keeps getting more difficult to identify deepfakes. At the same time, the liveness detection technologies themselves use AI to strengthen their capabilities. With both sides taking advantage of AI, it remains to be seen whether liveness detection or deepfake generation will come out on top.
Karen Scarfone is a general cybersecurity expert who helps organizations communicate their technical information through written content. She co-authored the Cybersecurity Framework (CSF) 2.0 and was formerly a senior computer scientist for NIST.