
Prepare for deepfake phishing attacks in the enterprise

Deepfake phishing has already cost at least one company $243,000. Learn how cybersecurity leaders can train users to recognize this emerging attack vector.

Increasingly sophisticated AI-generated audio and video, combined with the wealth of personal data users share on social media, has made deepfake phishing an emerging attack vector that should concern CISOs.

Deepfake technology uses AI to fabricate misleading audio, video and images. To date, deepfakes have mostly served entertainment and political purposes, both innocuous and malicious. Experts warn, however, that deepfake technology also poses a variety of enterprise IT risks. Deepfake phishing, for example, involves using deepfake content to trick users into making unauthorized payments or volunteering sensitive information that cybercriminals can use to their advantage.

In one high-profile example from 2019, cybercriminals used deepfake phishing to trick the CEO of a U.K.-based energy firm into wiring them $243,000, according to The Wall Street Journal. Using AI-based voice spoofing software, the criminals successfully impersonated the head of the firm's parent company, making the CEO believe he was speaking with his boss.

As technology continues to evolve, such deepfake phishing campaigns will almost certainly become more common and more effective. CISOs can prepare enterprise users to fend off these attacks by teaching them what deepfake phishing is and how it works.

[Chart: how generative and discriminative algorithms create deepfakes]
Two kinds of AI algorithms work together to create deepfake images. First, a generative algorithm studies legitimate images and produces artificial ones. A discriminative algorithm then vets those images, rejecting any it recognizes as fake; the generator uses that feedback to improve until its output passes as genuine.
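To make that interplay concrete, here is a minimal sketch of an adversarial (GAN-style) training loop. It assumes PyTorch is available and uses tiny fully connected networks and random stand-in data purely for illustration; real deepfake generators are far larger and operate on images or audio, but the generator-versus-discriminator loop follows the same pattern.

# Illustrative GAN sketch: a generator learns to produce samples that a
# discriminator can no longer tell apart from real ones. Sizes, data and
# step count are made-up values for demonstration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed, illustrative dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, data_dim)  # stand-in for genuine samples
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

for step in range(200):
    # 1. Discriminator: learn to accept real samples and reject fakes.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Generator: learn to produce fakes the discriminator accepts as real.
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()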

Types of deepfake phishing attacks

Deepfake phishing attacks fall into the following categories:

  1. Real-time attacks. In a successful real-time attack, deepfake audio or video is sophisticated enough to convince the victim that the person on the other end of a call is who they claim to be -- a colleague or a client, for example. In these interactions, attackers typically manufacture a strong sense of urgency, invoking fabricated deadlines, penalties and other consequences of delay to push victims into panicking and reacting without thinking.
  2. Nonreal-time attacks. In nonreal-time attacks, a cybercriminal impersonates someone via deepfake audio or video messages that they then distribute through asynchronous communication channels, such as chat, email, voicemail or social media. This type of communication reduces the pressure on criminals to respond believably in real time, letting them perfect a deepfake clip before distributing it. As a result, a nonreal-time attack may be quite polished and less likely to raise user suspicions. When distributed via email, a deepfake video or audio clip may also be more likely to slip past security filters than traditional, text-based phishing campaigns.

    Nonreal-time attacks also let attackers cast a wide net. Someone impersonating a CFO, for example, could send the same audio or video memo to every member of the finance organization, with the goal of soliciting sensitive information from as many people as possible.

In both kinds of attacks, social media footprints usually provide enough information for attackers to strategically strike when targets are most likely to be distracted or overwhelmed.

How to fight deepfake phishing

Train, train, train

Security leaders need to make end users aware of deepfake phishing and other emerging attack vectors through ongoing training. Security awareness training fatigue is real, but making lessons fun, competitive and rewarding can help keep them fresh and top of mind.

Fortunately, employees will likely find deepfake phishing awareness training to be uniquely interesting, engaging and educational. Try sharing convincing deepfake videos, for example, and challenging users to spot suspicious visual cues, such as unblinking eyes, inconsistent lighting and unnatural facial movements. Such an exercise in how to detect deepfake attacks is bound to make an impression.
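One of those cues, unblinking eyes, can even be checked programmatically, which makes for a memorable demo during training. The sketch below computes the widely used eye aspect ratio (EAR) for one eye and flags a clip in which the ratio never dips, meaning the eyes never appear to close. The landmark coordinates, the 0.21 blink threshold and the per-frame values are illustrative assumptions; in practice the landmarks would come from a face-landmark detector such as dlib or MediaPipe run on each video frame.

# Illustrative sketch: flag "unblinking eyes" with the eye aspect ratio (EAR).
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six landmarks around one eye;
# it drops sharply when the eye closes.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, ordered p1..p6."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def looks_unblinking(ear_per_frame, blink_threshold=0.21):
    """True if the EAR never drops below the blink threshold across the clip."""
    return all(ear > blink_threshold for ear in ear_per_frame)

# Example with hypothetical per-frame EAR values that never dip -- suspicious.
if __name__ == "__main__":
    ears = [0.31, 0.30, 0.32, 0.29, 0.31, 0.30]
    print("No blink detected:", looks_unblinking(ears))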

Hit pause

Pausing before acting must be a cornerstone of ongoing security awareness training, and every manager and leader should continually remind employees of its importance. Cybercriminals try to rush victims into making ill-advised decisions, so a sense of urgency in any interaction should immediately set off alarm bells. If anyone -- even the CEO or a top client -- demands an immediate wire transfer or product shipment, for example, users should stop and verify the authenticity of the request before taking any further steps.

Train employees to respond to urgent real-time requests by politely explaining that, due to an increase in phishing attacks, they must confirm the requester's identity through a separate channel. The same principle applies to nonreal-time requests: verify the sender's identity via a known, trusted channel before acting.

Challenge the other party

This is not a mitigation technique that employees often learn in security awareness training, but it is highly effective. If an interaction seems suspicious, a user can challenge the person on the other end of a call, email or message to provide information both parties should know, such as when they started working together. A close associate can even ask more personal questions, such as how many pets the other person has or when they last shared a meal.

This is uncomfortable and takes practice, but it is a powerful, efficient mechanism for identifying imposters before they can do damage.
