
Rethinking UC security amid the rise of deepfake technology

IT leaders at Enterprise Connect discussed why organizations must adopt new security measures to identify and stop deepfakes in voice and video calls.

The growth of AI capabilities for collaboration workflows offers organizations productivity benefits, but it also creates new security threats that require new security strategies.

In a time when AI avatars can attend video meetings and AI assistants can make calls, how can you trust that the person you're talking to is real?

Deepfakes disrupting collaboration workflows isn't theoretical -- it's happening.

The most prominent example of a deepfake scam occurred in 2024, when a finance employee of engineering firm Arup attended a video conference with senior management. The company's CFO asked the employee to transfer millions of dollars to multiple external accounts.

What that employee didn't realize was that the CFO and everyone else on the video conference were AI avatars, and the employee ended up transferring $25 million to cybercriminals.

"A whole conference call fooling somebody who had that level of authority -- that's a big deal," said Robert Lee Harris, president of consulting firm Communications Advantage, Inc.

Deepfake scams are most effective when they fit into existing workflows, Harris said. The realism of the deepfake matters less than the context of the interaction: users are more likely to fall for a scam that matches their expectations of how collaboration normally happens.

Harris and other IT leaders spoke at Enterprise Connect on how deepfakes are an emerging security challenge affecting both communications and contact center platforms.

Redefining communication security strategies

As AI-powered deepfake technology becomes more sophisticated, it will be harder to spot subtle cues that a person on a video call isn't real, such as an avatar blinking, said Allen Ohanian, information security officer for the Los Angeles County Department of Children and Family Services.  

Other methods of verifying users, such as voice and video biometrics, are becoming obsolete with the rise of deepfakes.

"It's evolving so quickly with deepfakes," said Rodney Hassard, head of product and apps at Vonage. "Things like facial recognition and voice biometrics you can't really do because the impersonation of people is so advanced."

Organizations need to rethink their security strategies to spot and mitigate deepfake scams and consider the following practices: 

  • Conditional access policies that provide ongoing enforcement of security controls based on the user, location or device.
  • Behavioral baselines to establish normal patterns of interaction and flag abnormal patterns.
  • Multi-channel authentication to verify a user's identity using independent channels, such as text messages, email and authenticator apps.
  • AI-enhanced security that can analyze access requests in real time.
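To illustrate the first practice above, a conditional access evaluation can be sketched in a few lines. This is a minimal, hypothetical example — the policy values, field names and decision labels are illustrative, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool      # is this a managed, known device?
    location: str             # coarse location of the request
    channel_verified: bool    # confirmed over an independent channel, e.g. authenticator app

# Hypothetical policy: locations the organization considers routine
ALLOWED_LOCATIONS = {"office", "home-vpn"}

def evaluate(req: AccessRequest) -> str:
    """Return 'allow', 'step-up' or 'deny' based on simple conditional rules."""
    if not req.device_trusted:
        return "deny"
    if req.location not in ALLOWED_LOCATIONS:
        # Unfamiliar location: require verification over an independent channel
        return "allow" if req.channel_verified else "step-up"
    return "allow"
```

In practice, systems like this enforce the policy continuously — on each request, not just at login — which is what distinguishes conditional access from one-time authentication.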

Ongoing monitoring is key: after the initial authentication, systems should continue to watch users for any conditions that change. If a user authenticates from Las Vegas and then again from New York an hour later, for example, that user's credentials are likely compromised. Systems must be in place to detect and block that access, said Jean Chavez, director of IAM at Mastercard.
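The Las Vegas-to-New York scenario is commonly called impossible-travel detection: flag two logins whose implied travel speed exceeds any plausible maximum. A minimal sketch, assuming a simple speed threshold (the coordinates and 900 km/h cutoff are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly airliner cruise speed; implied travel faster than this is impossible
MAX_SPEED_KMH = 900.0

def impossible_travel(loc_a, loc_b, hours_between):
    """True if two logins imply travel faster than MAX_SPEED_KMH."""
    dist = haversine_km(*loc_a, *loc_b)
    return dist / max(hours_between, 1e-9) > MAX_SPEED_KMH

# Las Vegas (36.17, -115.14) to New York (40.71, -74.01) one hour apart
# implies roughly 3,600 km/h -- flagged as impossible:
# impossible_travel((36.17, -115.14), (40.71, -74.01), 1.0) -> True
```

Real deployments account for VPN egress points and geolocation error before blocking, but the underlying distance-over-time check is this simple.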

"That's why you have the basics, you have controls, you have checks and balances," said Ariel Golubitsky, senior manager of IT security at Amicus Therapeutics.

Unified communications and contact center vendors are also responding to deepfake threats with their own security measures. Zoom, for example, recently announced a deepfake detection feature that provides real-time alerts when it detects synthetic voice or video in a meeting.

Vonage layers multiple verification factors to stop fraud before it reaches the application layer, where most fraud occurs, Hassard said.

"It's not going to be a single solution," he said. "It's going to require multiple factors of authentication, looking at different data sources, devices, networks or other areas to pull signals."

Addressing the human equation

But organizations shouldn't forget the human equation in defending against deepfakes. Harris said he believes changing company culture is 80% of the solution to securing against deepfakes.

A fragile company culture is vulnerable because it prioritizes speed over verification, authority isn't questioned, and doubt feels unprofessional. But a resilient company culture is better protected because verification is in place at every level, and leadership is not exempt from security protocols, Harris said.

"When a synthetic voice or video arrives, you're just looking at pixels and soundwaves. If it's a real person, you're still looking at pixels," he said. "But people are evaluating 'Does it fit what I'm expecting to see, and am I about to question if it doesn't?'"

Katherine Finnell is senior site editor for Informa TechTarget's unified communications site. She writes and edits articles on a variety of business communications technology topics, including unified communications as a service, video conferencing and collaboration.
