Alex - stock.adobe.com

News brief: Rise of AI exploits and the cost of shadow AI

Check out the latest security news from the Informa TechTarget team.

Organizations and employees everywhere continue to rush to use AI to boost productivity and tackle rote job functions, but new research shows this might prove disastrous. Malicious actors could use AI exploits to access sensitive data, experts say, especially if targets don't have proper AI governance and security controls in place.

IBM's 2025 "Cost of a Data Breach Report" found that 13% of organizations have experienced recent breaches involving their AI models or applications. More than half of these -- 60% -- said the incidents led to broad data compromise, while one in three reported operational disruption. Attackers increasingly view AI as a high-value target, researchers concluded, even as AI security and governance measures lag behind adoption rates. Meanwhile, one in six data breaches involved AI-based attacks.

This week's featured articles highlight the potential for AI exploits and the importance of taking steps to protect AI, such as creating AI security policies and implementing AI governance. Read more from IBM's research and learn how AI exploits could hurt your company.

'Man in the prompt' attack could target ChatGPT and GenAI tools

LayerX researchers demonstrated the possibility of using a "man in the prompt" attack, which they say can affect major AI tools including ChatGPT, Gemini and Copilot. This exploit uses browser extensions' ability to access the Document Object Model (DOM), allowing them to read from or inject prompts into AI tools without special permissions.

Attackers can deploy malicious extensions through various traditional methods -- such as social engineering or purchasing access to legitimate extensions -- potentially stealing sensitive data from both commercial and internal LLMs. Internal company LLMs are particularly vulnerable, as they often contain proprietary data and have fewer security guardrails.

LayerX CEO and co-founder Or Eshed called this attack vector "very low-hanging fruit," as traditional security tools often lack visibility into DOM-level interactions.

Read the full story by Alexander Culafi on Dark Reading.

Shadow AI increases cost of data breaches

IBM's annual data breach research suggested that unmonitored shadow AI could increase costs by an average of $670,000 per breach. One in five organizations reported experiencing cyberattacks at least partially related to shadow AI, with 97% of AI-related breaches occurring due to a lack of proper access controls.

Supply-chain intrusions through compromised apps, APIs or plug-ins were the most common method for accessing the shadow AI tools.

Despite the increasing risk of shadow AI, 63% of breached companies lacked an AI governance policy. Even those with policies often failed to implement approval processes or strong access controls, and just 34% of them regularly checked for unsanctioned tool use.

At the same time, hackers increasingly used GenAI for phishing and deepfake impersonation attacks.

Read the full story by Eric Geller on Cybersecurity Dive.

LLMs capable of emulating sophisticated attacks

Carnegie Mellon University researchers, partnering with Anthropic, demonstrated that LLMs can autonomously execute sophisticated cyberattacks without human intervention.

Researchers created an attack toolkit called Incalmo, which used the same cyberattack strategy from the 2017 Equifax cyberattack. The LLM provided high-level strategic guidance, while LLM and non-LLM agents performed lower-level tasks, such as deploying exploits. In nine of 10 tests across small enterprise environments, Incalmo succeeded at exfiltrating some sensitive data, lead researcher Brian Singer said.

The researcher explained that it isn't clear how well Incalmo would work in other networks and how effective it would be against modern security controls. Still, Singer expressed concern about the speed and low cost of such attacks, noting that human-operated defenses might struggle against machine-timescale threats.

Read the full story by David Jones on Cybersecurity Dive.

Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.

Kyle Johnson is technology editor for Informa TechTarget's SearchSecurity site.

Dig Deeper on Threats and vulnerabilities