How executives can counter AI impersonation

AI deepfakes are driving financial fraud and targeting executives, forcing organizations to rethink verification, training and policies to protect trust and security.

Executive summary

  • AI-driven deepfake attacks are rapidly increasing, targeting executives and evading traditional cybersecurity controls, which can lead to significant financial and reputational risks.
  • Effective mitigation requires a combination of verification protocols, employee training, cross-functional coordination and layered security measures.
  • IT leaders must shift from technical defenses to building a "trust architecture" that validates authenticity across communications and transactions.

In 2024, the Hong Kong office of a British engineering firm lost millions of dollars after employees joined what appeared to be a routine Zoom call with the company's U.K.-based CFO and several Hong Kong-based colleagues. The executive on screen looked authentic, sounded natural and issued urgent instructions for a financial transfer. There was just one problem — it was a deepfake.

That one incident cost British engineering firm Arup $25.6 million. There have been numerous other such episodes since. In February 2025, Italian scammers used AI voice cloning to impersonate Defense Minister Guido Crosetto, successfully defrauding businessman Massimo Moratti before authorities recovered the funds. CrowdStrike's 2025 Global Threat Report documented a 442% increase in voice phishing attacks between the first and second half of 2024. DeepStrike reported that the average cost per deepfake incident for businesses reached approximately $500,000 in 2024.

As AI video and audio capabilities continue to improve, deepfakes are becoming more sophisticated, and the risks for enterprises are growing with them. Mitigating the risk isn't about AI governance, but rather about understanding the attack vector.

"These are attacks, not components of an AI Governance Framework, which addresses how an organization designs and implements its own AI systems," Ira Winkler, chief information security officer and vice president at CYE and former chief security architect for Walmart, said. "Deepfake impersonation is an attack vector and mitigating it requires a multi-tiered approach."

Why this threat hits the C-suite first

C-suite and executive leadership have long been primary targets for cyberattacks. In the pre-AI era, business email compromise (BEC) was a common attack against executives, and it remains a popular tactic today because it exploits a lucrative opportunity. In the AI era, the C-suite has become an even bigger target for a variety of reasons, including the following:

  • Public data exposure. Leadership voices and faces are featured in earnings calls, conference presentations, media interviews and social media. Generative AI models trained on this publicly available data can produce convincing clones capable of deceiving direct reports and colleagues.
  • Authority exploitation. Deepfakes weaponize organizational hierarchies and trust networks. As Keith Wojcieszek, managing director at FTI Consulting, explained, "The most sophisticated attacks today succeed because the victims believe and trust their senses." Finance, HR and IT functions typically grant executives elevated authority, enabling urgent directives to override standard controls.
  • Psychological barriers. Employees hesitate to question executive requests during apparent crises. This psychological dimension often proves more effective than technical sophistication because it exploits organizational behavior rather than technical infrastructure.

The trust crisis in corporate communications

In the modern era of generative AI (GenAI), a trust crisis has emerged in corporate communications, as traditional verification mechanisms are becoming increasingly ineffective. Voice recognition, video confirmation and professional demeanor can all be synthesized. The tools required to create convincing deepfakes are accessible to any cybercriminal with moderate technical capability.

"Even text-based phishing is now more effective, as old methods of looking for poor grammar and spelling are not going to cut it anymore with ChatGPT-style bots writing the scripts," Brian Jackson, principal research director at Info-Tech Research Group, said.

Operational impacts extend beyond immediate financial losses to include data breaches resulting from information sharing with impersonators, reputational damage from compromised communications and erosion of internal trust that impairs decision-making speed.

There is also a cultural dimension that affects operations: senior leaders shouldn't always be trusted by default.

"Leadership should clearly state that even senior personnel should not be trusted for out-of-band requests, and that employees will not be penalized for slowing down operations when validating unusual or policy-violating requests," Winkler said.

Why traditional cybersecurity falls short

Traditional cybersecurity controls are not particularly effective at reducing AI impersonation risks. Firewalls, phishing filters and multifactor authentication were designed to stop malicious code and verify user credentials, not AI impersonators.

The detection arms race

Jackson noted that over the past couple of years, Intel and other vendors have developed solutions to identify video-based deepfakes. However, cyberattackers have become increasingly sophisticated.

"Earlier this year, researchers proved that video deepfakes can now imitate a pulse and fool these existing systems," Jackson said. "While it's not seen out in the wild, this demonstration introduces enough doubt about using technology to detect AI and implies that we'll constantly need to improve and update tactics in an endless game of cat and mouse with adversaries."

Cross-channel gaps

Detection methods effective for video provide no protection against audio-only voice phishing. "The difficulty is finding a solution that integrates with enterprise communication tools across all channels," Jackson noted, adding that "almost no one has deployed these tools in my experience."

Fundamental risk shift

The core vulnerability has shifted from technical domains, where security tools function effectively, to behavioral and governance domains, where human decision-making under perceived authority becomes the primary attack surface.

Fundamentally, it's the same challenge of evolving threat sophistication that is familiar across the cybersecurity landscape. "Like all cyberthreats, as models evolve, so does the quality of deception and the creativity of threat actors," Wojcieszek said.

What IT leaders must do now

CIOs and IT leaders might not be able to eliminate deepfake risk entirely, but they can take numerous steps to reduce organizational vulnerability:

Out-of-band verification protocols

Wojcieszek recommends mandatory secondary verification for sensitive communications. He suggests that organizations always confirm through a second trusted channel, such as a phone call or text, before acting on instructions involving money, data or access. Winkler specifies that "if a verbal request is made to transfer funds, it should still require authorization through traditional systems."

For genuine emergencies requiring expedited processing, organizations should establish verification mechanisms using pre-established trusted contact information. Jackson suggests multi-person authorization requirements for financial transfers exceeding defined thresholds.
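The logic of these controls is simple enough to express in code. The following Python sketch, with entirely hypothetical names and thresholds, shows a transfer gate that refuses to execute until an out-of-band confirmation is recorded and, above a defined threshold, until multiple approvers independent of the requester have signed off:

```python
from dataclasses import dataclass, field

# Hypothetical values: the experts recommend thresholds and multi-person
# authorization but do not prescribe specific numbers.
MULTI_APPROVER_THRESHOLD = 50_000
REQUIRED_APPROVERS = 2

@dataclass
class TransferRequest:
    requester: str
    amount: float
    # Set only after the requester is reconfirmed on a second,
    # pre-established channel (e.g., a callback to a known number).
    out_of_band_confirmed: bool = False
    approvers: set = field(default_factory=set)

def can_execute(req: TransferRequest) -> bool:
    """Gate a transfer on the controls described above."""
    # Every transfer requires out-of-band confirmation, no matter how
    # convincing the original voice, video or email request seemed.
    if not req.out_of_band_confirmed:
        return False
    # Above the threshold, require approvers independent of the requester.
    if req.amount >= MULTI_APPROVER_THRESHOLD:
        independent = req.approvers - {req.requester}
        return len(independent) >= REQUIRED_APPROVERS
    return True

# An "urgent" CFO request stays blocked until every control is satisfied.
req = TransferRequest(requester="cfo", amount=250_000)
assert not can_execute(req)                        # no callback yet
req.out_of_band_confirmed = True
assert not can_execute(req)                        # still needs co-approval
req.approvers.update({"controller", "treasurer"})
assert can_execute(req)
```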

Detection tool deployment with realistic expectations

Integrate real-time deepfake detection into meeting and communication platforms where technically feasible. Wojcieszek cautions that CIOs should view detection tools as a layer of defense, not a standalone resolution. These systems demonstrate limited reliability in real-time, enterprise-scale deployments, so they require pairing with human review processes rather than being trusted to make automated decisions.
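One way to implement that pairing is to treat the detector's score as a routing signal rather than a verdict. A minimal sketch, assuming a hypothetical detector that emits a probability that media is synthetic (the thresholds below are illustrative, not vendor guidance):

```python
from enum import Enum

class Disposition(Enum):
    BLOCK_AND_ESCALATE = "block_and_escalate"
    HUMAN_REVIEW = "human_review"
    STANDARD_VERIFICATION = "standard_verification"

# Illustrative thresholds; real detectors vary in live, enterprise-scale use.
HIGH_RISK = 0.8
LOW_RISK = 0.2

def route(deepfake_score: float, request_is_sensitive: bool) -> Disposition:
    """Use the detector as one layer, never as an automated trust decision."""
    if deepfake_score >= HIGH_RISK:
        return Disposition.BLOCK_AND_ESCALATE
    if deepfake_score > LOW_RISK or request_is_sensitive:
        return Disposition.HUMAN_REVIEW
    # Even low-risk communications still go through normal verification
    # rather than being marked as trusted.
    return Disposition.STANDARD_VERIFICATION

print(route(0.9, False))  # Disposition.BLOCK_AND_ESCALATE
print(route(0.1, True))   # Disposition.HUMAN_REVIEW
```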

Behavioral training programs

Having some form of executive awareness training is critical. Wojcieszek noted that simulating an AI-driven phishing or video impersonation exercise demonstrates the risk more effectively than theoretical instruction. Jackson added that educational materials are widely available for immediate deployment without significant budget requirements. Training objectives should focus on behavioral modification rather than awareness metrics, ensuring employees understand that verification protocols take precedence over response speed for unusual requests.

Risk governance integration

Wojcieszek recommends comprehensive policy audits that cover authorization for transfers, contract approvals and internal asset sharing. "Many organizations discover they lack clear escalation paths or multifactor safeguards for high-risk approvals," he said. Deepfake risk necessitates explicit integration into enterprise risk management frameworks rather than treatment as an isolated IT concern.

Cross-functional coordination

Winkler emphasizes that while cybersecurity teams maintain operational roles, this issue also touches on fraud prevention and operational procedures. Attack vectors may involve finance, HR, intellectual property or system access, depending on threat actor objectives. Effective defense requires coordination protocols that span IT, HR, communications, legal and compliance functions.

Looking ahead: From security to trust architecture

The rise of synthetic media forces a shift from information security to trust architecture: an architecture that integrates systems, processes and culture to validate authenticity at every interaction. For IT leaders and CIOs, there are a few critical steps on the journey toward trust architecture:

Building a verify-first culture

Wojcieszek predicts verification will become as routine as multifactor authentication (MFA). Organizations will need to validate not just who sent a message but also the voice, video or document itself. This represents a fundamental change: treating all communications as unverified until proven authentic, rather than assuming legitimacy by default.

Implementing layered defenses

Jackson describes the future of deepfake fraud prevention as "a multi-layered patchwork of solutions" with three components:

  • Training and education for employees to identify deepfakes and implement multiple authorization steps before acting on phone or video conference requests.
  • Trusted enterprise communications channels with persistent identity, authorized access and embedded deepfake detection technologies, where users are "clearly alerted when an inbound communication is coming from outside of their trusted network" (a check sketched in code after this list).
  • New organizational policies requiring multifactor verification for critical transactions.
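The alerting behavior in the second component resembles the external-sender banners many email platforms already apply. A minimal sketch of that trusted-network check, with hypothetical domain names:

```python
# Hypothetical allowlist of domains considered inside the trusted network.
TRUSTED_DOMAINS = {"example-corp.com", "example-corp.co.uk"}

def label_inbound(sender_address: str) -> str:
    """Return a warning banner for senders outside the trusted network."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        return "[EXTERNAL SENDER - verify before acting on requests]"
    return ""

print(label_inbound("cfo@example-corp.com"))   # trusted, no banner
print(label_inbound("cfo@exarnple-corp.com"))  # lookalike domain is flagged
```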

Developing technical standards

Future platforms will embed verification directly into communication systems. Long-term approaches include digital watermarking for executive communications, cryptographic identity tokens and real-time probability scores indicating deepfake likelihood.

Wojcieszek expects future verification to follow a path similar to what has already happened with web security.

"Similar to how we use security certificates to prove a website is real, future messaging and video platforms will include digital proofs showing who created or sent something, operating as a zero-trust network," he said.

The human factor remains critical

Technology provides necessary but insufficient protection. CIOs must protect both data and credibility, a task that requires a combination of technology and human expertise.

"Companies will need to focus on combining secure technology with positive communication habits," Wojcieszek said. "Tools can't replace human judgment, so employees will need to know when and how to verify what they see and hear."

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.
