Why IT leaders need to be aware of deepfake security risks

While IT security leaders are not yet the primary targets of deepfake attacks, the growing use of AI means they must consider how the technology could harm the enterprise.

Deepfake technology is aimed at producing audio and video material that is fake but does not appear to be. The usual context of this is social or political transgression -- faking a picture, a film or audio clip of someone saying or doing something they didn't actually say or do, for purposes as benign as amusement and as malign as embarrassment, social disruption, extortion or concealment of a crime or treaty violation.

It's obvious why this is a major concern to law enforcement, political establishments, the military and the national security apparatus, as well as media and PR companies.

So, why should IT leaders and IT security professionals be concerned about deepfake security risks in the enterprise? Many will assume their organizations don't act on audio or video. But don't they?

IT security is often responsible for managing data streams from security cameras, whether they are watching the dealers on a casino floor, a vault door or an entrance. While the problems with creating a fake video stream in real time are still considerable, what about doctoring of footage already captured? Is all the relevant data stored only on write-once media? If not, how can you tell whether footage has been altered, say to show a person entering a vault who did not, or to hide a person who did?
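
One practical control is to fingerprint footage at the moment of capture and re-verify it before treating it as evidence. The following Python sketch assumes segments are archived as files alongside a SHA-256 hash manifest written to trusted -- ideally write-once or off-site -- storage; the file layout, names and manifest format are illustrative, not taken from any particular camera system.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch, assuming footage is archived as fixed segment files
# (e.g., cam01_0001.mp4) alongside a manifest written at capture time.
# The paths and manifest format are illustrative placeholders.

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large footage fits in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(segment_dir: Path, manifest: Path) -> None:
    """Record a hash per segment at capture time. The manifest itself
    should go to write-once or off-site storage, so it can't be
    rewritten along with the footage it protects."""
    hashes = {p.name: sha256_of(p) for p in sorted(segment_dir.glob("*.mp4"))}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_footage(segment_dir: Path, manifest: Path) -> list[str]:
    """Return names of segments whose current hash no longer matches
    the manifest -- candidates for doctored or deleted footage."""
    recorded = json.loads(manifest.read_text())
    altered = []
    for name, expected in recorded.items():
        current = segment_dir / name
        if not current.exists() or sha256_of(current) != expected:
            altered.append(name)
    return altered

if __name__ == "__main__":
    tampered = verify_footage(Path("footage/cam01"), Path("manifests/cam01.json"))
    print("Altered or missing segments:", tampered or "none")
```

The key design point is that the integrity anchor -- the manifest -- lives somewhere an attacker who can rewrite footage cannot also rewrite; otherwise, doctored video and doctored hashes travel together.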

Even if the goal is not a systems compromise, IT security is squarely in the crosshairs of deepfake security risks. And if audio or video evidence can be undetectably fabricated, why not other forms of forensic evidence, such as system and network logs?
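
Tamper-evident logging is one established countermeasure to fabricated or doctored logs. Below is a minimal Python sketch of an HMAC hash chain over log entries, where altering or deleting any earlier entry invalidates every tag that follows. The key handling is deliberately simplified for illustration; in practice the key would live in an HSM or a separate signing service.

```python
import hashlib
import hmac

# Minimal sketch of a tamper-evident log: each entry's tag chains the
# previous tag with the current line under a secret key, so rewriting
# or removing any earlier line breaks every tag that follows it.

def chain_logs(key: bytes, lines: list[str]) -> list[tuple[str, str]]:
    """Return (line, tag) pairs forming an HMAC chain over the log."""
    tag = b"\x00" * 32  # fixed genesis value for the first entry
    chained = []
    for line in lines:
        tag = hmac.new(key, tag + line.encode(), hashlib.sha256).digest()
        chained.append((line, tag.hex()))
    return chained

def verify_chain(key: bytes, chained: list[tuple[str, str]]) -> bool:
    """Recompute the chain and flag any mismatch as tampering."""
    tag = b"\x00" * 32
    for line, recorded in chained:
        tag = hmac.new(key, tag + line.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag.hex(), recorded):
            return False
    return True

if __name__ == "__main__":
    key = b"demo-key-kept-in-an-hsm-in-practice"  # placeholder, not real key management
    log = chain_logs(key, ["user alice login", "vault door opened", "user alice logout"])
    print(verify_chain(key, log))                    # True
    log[1] = ("vault door stayed shut", log[1][1])   # doctor one entry
    print(verify_chain(key, log))                    # False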

[Image: A successful deepfake]

Also consider that the "insider threat" remains the most frequent source of serious system compromises. One of your employees, contractors or partners could be blackmailed into committing or facilitating a breach by a convincing and compelling -- but completely fabricated -- video of them committing an unrelated crime. In a world of nation-state cybercrime, this can no longer be waved away as beyond the reach of criminals. The more sensitive the job and the company, the more likely such an attack becomes.

In this context, AI is a double-edged sword. On the one hand, it is the driving force behind deepfake risks, as AI techniques like machine learning are applied to the problem and iteratively improve on the outcomes. On the other hand, AI tools can be the best weapon for detecting fakes, as they are able to analyze questionable material with considerably greater depth and subtlety than the unaided -- even if trained -- observer.
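
For a sense of what the detection side looks like, here is a deliberately toy Python/PyTorch sketch of a frame-level binary classifier of the kind used in deepfake detection research. The architecture is a placeholder and the weights are untrained, so it will not actually detect anything; real detectors are trained on large labeled corpora of genuine and manipulated footage.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny frame-level classifier shaped like
# the models used in deepfake detection research. The architecture and
# the random initial weights are placeholders, not a working detector.

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: higher means more likely fake

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)

def score_clip(model: nn.Module, frames: torch.Tensor) -> float:
    """Average the per-frame fake probability over a clip (N, 3, H, W)."""
    with torch.no_grad():
        return torch.sigmoid(model(frames)).mean().item()

if __name__ == "__main__":
    model = FrameClassifier().eval()
    clip = torch.rand(8, 3, 224, 224)  # stand-in for decoded video frames
    print(f"Mean fake score: {score_clip(model, clip):.2f}")
```

Averaging per-frame scores over a clip, as above, is one common way to turn frame-level judgments into a verdict on a whole video; it is exactly this kind of statistical aggregation that gives machine analysis more depth than an unaided human viewing.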

We are in the early phases of a deepfake arms race, in which detection algorithms and systems try to thwart evolving fakery algorithms that, in turn, continually adapt to fool both detectors and humans anew.

In most industries, IT organizations are still on the sidelines of this race and have not yet been targeted or forced to grapple with deepfake security risks. But this can't last forever. IT security teams and policymakers need to take deepfakes into consideration as they plan for the next five to 10 years.

