Deepfake phishing is here, but many enterprises are unprepared

Deepfake phishing attacks are on the rise, as attackers use AI to deceive and defraud end users and their employers. Learn what CISOs can do to protect their organizations.

Deepfake-related cybercrime is on the rise as threat actors exploit AI to deceive and defraud unsuspecting targets, including enterprise users. Deepfakes use deep learning, a category of AI that relies on neural networks, to generate synthetic image, video and audio content.

While deepfakes can be used for benign purposes, threat actors create them primarily to dupe targets into granting access to digital and financial assets. In 2025, 41% of security professionals reported that deepfake campaigns had recently targeted executives at their organizations, according to a Ponemon Institute survey. Deloitte's Center for Financial Services has also warned that financial losses resulting from generative AI could reach $40 billion by 2027, up from $12.3 billion in 2023.

As deepfake technology becomes both more convincing and widely accessible, CISOs must take proactive steps to protect their organizations and end users from fraud.

3 ways CISOs can defend against deepfake phishing attacks

Even as attackers race to capitalize on deepfake technology, research suggests that enterprises' defensive capabilities are lagging. Just 12% have safeguards in place to detect and deflect deepfake voice phishing, for example, and only 17% have deployed protections against AI-driven attacks, according to a 2025 Verizon survey.

It's crucial that CISOs take the following steps to identify and repel deepfake-enabled attacks.

1. Practice good organizational cyber hygiene

As is so often the case, cyber hygiene fundamentals go a long way toward protecting against emerging and evolving threats, including deepfake phishing attacks.

  • Authentication. Assess the effectiveness of existing authentication systems and the risk that synthetic AI poses to biometric security controls.
  • Identity and access management. Carefully manage end users' identities. Promptly decommission former employees' accounts, for example -- and limit each user's access privileges to just the resources they need to do their jobs.
  • Data loss prevention and encryption. Ensure the appropriate policies, procedures and controls are in place to protect sensitive and high-value data.

2. Consider defensive AI tools

While defensive AI technology is still in its early stages, some providers are already integrating machine learning-driven deepfake detection capabilities into their tools and services. CISOs should keep an eye on available offerings, as they are likely to grow and improve quickly in the coming months and years.

Alternatively, enterprises with sufficient resources can build and train in-house AI models to assess and detect synthetic content, based on technical and behavioral baselines, patterns and anomalies.
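The baseline-and-anomaly approach described above can be sketched in a few lines. This is not a production deepfake detector -- the feature names (e.g., pitch variance) and the z-score threshold are illustrative assumptions; a real system would extract features from actual media and tune thresholds empirically:

```python
from statistics import mean, stdev

def baseline_profile(samples: list[dict]) -> dict:
    """Compute per-feature (mean, stdev) from known-genuine media samples.

    Each sample is a dict of extracted features, e.g. {"pitch_var": 10.0}.
    """
    keys = samples[0].keys()
    return {k: (mean(s[k] for s in samples), stdev(s[k] for s in samples))
            for k in keys}

def anomaly_score(sample: dict, profile: dict) -> float:
    """Largest absolute z-score across features; high values are suspicious."""
    return max(abs(sample[k] - mu) / sigma
               for k, (mu, sigma) in profile.items() if sigma > 0)

def looks_synthetic(sample: dict, profile: dict, threshold: float = 3.0) -> bool:
    """Flag a sample whose features deviate sharply from the genuine baseline."""
    return anomaly_score(sample, profile) > threshold
```

The design choice here mirrors the text: rather than training on examples of fakes (which evolve quickly), the model learns what genuine content from known users looks like and flags deviations from that baseline.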

3. Step up security awareness training

Even as technology evolves, the first and most important step in phishing prevention remains the same: awareness. But deepfake technology has improved at such a rapid rate that many end users remain unaware of the following:

  • How convincing deepfake content has become. In one high-profile deepfake phishing case, a staff member joined a video call with what appeared to be the company's CFO, plus several other employees. All were deepfake impersonations, and the scammers successfully tricked the employee into transferring $25 million to their accounts.
  • How threat actors use deepfakes to threaten individuals and organizations and compromise their reputations. Malicious hackers can create damaging deepfake content that appears to show corporate staff involved in incriminating activities. They could then try to blackmail employees into giving them access to corporate resources, blackmail the organization into paying a ransom or broadcast the fake content to undermine the company's reputation and stock value.
  • How criminals combine stolen data and deepfakes. Bad actors often blend a mix of stolen identity data, such as usernames and passwords, with AI-generated images and voice cloning to try to impersonate real users and circumvent MFA. They might then apply for credit, access existing business and personal accounts, open new accounts and more.

With social engineering and phishing threats evolving at the speed of AI, annual cybersecurity awareness training alone is no longer enough. CISOs should regularly disseminate information about the new tactics bad actors use to manipulate unsuspecting targets, along with guidance on what employees should do if they encounter such attacks.

[Figure: Signs of deepfake content, including unnatural pupil dilation, shadows and lip syncing]
CISOs should educate end users on the tell-tale signs of synthetic media, while also emphasizing that the most sophisticated deepfakes are often undetectable to humans.

Amy Larsen DeCarlo has covered the IT industry for more than 30 years, as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.
