
News brief: AI security threats surge as governance lags
Check out the latest security news from the Informa TechTarget team.
AI has taken the world by storm, and enterprises of all shapes and sizes want their share of the action.
According to consulting firm McKinsey & Co., 78% of organizations had adopted AI by the beginning of 2025, up from 55% in mid-2023. Moreover, 92% of companies said they plan to increase their AI spending over the next three years. Participants in a Lenovo survey said their organizations are allocating nearly 20% of their tech budgets to AI in 2025.
The security industry is no stranger to the benefits of AI. It has helped teams detect threats and vulnerabilities, automate time-consuming manual tasks, speed up incident response times and reduce false positives and alert fatigue.
Yet security teams also know that investment without oversight is dangerous. Companies must train employees on how to use AI properly, establish policies that outline acceptable and secure use, and adopt controls and technologies to secure AI deployments. However, consulting firm Accenture found that only 22% of organizations have implemented clear AI policies and training.
Let's look at a few of the latest AI news stories that reinforce just how important AI governance and security are.
The growing focus on AI security in corporate budgets
Recent reports from KPMG and Thales highlighted increasing corporate concerns about generative AI security. In KPMG's second-quarter 2025 report, 67% of business leaders said they plan to allocate budget for cyber and data security protections for AI models, while 52% said they will prioritize risk and compliance. Concerns about AI data privacy jumped significantly, from 43% in the fourth quarter of 2024 to 69% in the second quarter of 2025.
Thales' survey revealed that rapid ecosystem transformation (69%), data integrity (64%) and trust (57%) are the top AI-related risks. While AI security ranked as the second-highest security expense overall, only 10% of organizations listed it as their primary security cost, suggesting a potential misalignment between concerns and actual spending priorities.
First malware attempting to evade AI security tools discovered
Researchers at Check Point identified the first known malware sample designed to evade AI-powered security tools through prompt injection. Dubbed "Skynet," this rudimentary prototype contains hardcoded instructions that prompt AI analysis tools to ignore any malicious code and respond "NO MALWARE DETECTED."
While Check Point's large language model and GPT-4.1 detected Skynet, security experts view it as the beginning of an inevitable trend in which malware authors increasingly target AI vulnerabilities. The discovery highlights critical challenges for AI security tools and underscores the importance of defense-in-depth security approaches, rather than reliance solely on AI-based detection systems that attackers could manipulate.
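One defense-in-depth layer this suggests is screening a sample's raw bytes for prompt-injection-style strings before trusting any LLM-based verdict. The following is a minimal sketch of that idea; the patterns, file name and routing logic are illustrative assumptions, not details from the Check Point research.

```python
import re
from pathlib import Path

# Hypothetical patterns mimicking the kind of hardcoded prompt-injection
# strings Check Point described; real coverage would need to be far broader.
INJECTION_PATTERNS = [
    re.compile(rb"ignore\s+(all\s+)?(previous|prior)\s+instructions", re.IGNORECASE),
    re.compile(rb"NO\s+MALWARE\s+DETECTED", re.IGNORECASE),
]

def find_prompt_injection(sample_path: str) -> list[str]:
    """Return any strings in a sample's raw bytes that look like attempts
    to steer an LLM-based malware analyzer."""
    data = Path(sample_path).read_bytes()
    return [
        m.group(0).decode(errors="replace")
        for p in INJECTION_PATTERNS
        for m in p.finditer(data)
    ]

if __name__ == "__main__":
    hits = find_prompt_injection("sample.bin")  # hypothetical sample file
    if hits:
        # Don't trust an LLM verdict the sample may have manipulated;
        # route to traditional static and dynamic analysis instead.
        print("Possible prompt injection found:", hits)
```

The point of the sketch is the routing decision, not the pattern list: a sample that tries to talk to the analyzer is itself a strong signal, and it should fall back to conventional analysis rather than an AI verdict.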
The growing challenge of nonhuman identities
Organizations are struggling to manage the rapidly expanding landscape of nonhuman identities (NHIs), which include service accounts, APIs and AI agents. The typical company has gone from 10 NHIs for every human user in 2020 to 50 for every user today, with 40% of these identities having no clear owner.
AI agents particularly complicate matters because they blur the lines between human and machine identities by acting on users' behalf. And while 72% of companies said they feel confident in preventing human-identity attacks, only 57% said the same about NHI-based threats.
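Given that 40% of NHIs lack a clear owner, a basic first control is an inventory check that flags identities with no accountable owner. Below is a minimal sketch assuming a simple in-memory inventory; the data structure, names and fields are hypothetical, since a real inventory would be pulled from an IAM platform, cloud provider or secrets manager.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NonHumanIdentity:
    name: str
    kind: str             # e.g., "service_account", "api_key", "ai_agent"
    owner: Optional[str]  # accountable team or person; None if unassigned

# Illustrative inventory; in practice this comes from IAM or secrets tooling.
inventory = [
    NonHumanIdentity("ci-deploy", "service_account", "platform-team"),
    NonHumanIdentity("billing-api-key", "api_key", None),
    NonHumanIdentity("support-agent", "ai_agent", None),
]

# Flag every NHI with no clear owner so it can be assigned or retired.
for nhi in (n for n in inventory if n.owner is None):
    print(f"Unowned NHI: {nhi.name} ({nhi.kind}) -- assign an owner or revoke")
```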
AI-generated misinformation in the Israel-Iran-U.S. conflict
Recent conflicts between Israel, Iran and the U.S. have been accompanied by a surge in AI-generated misinformation. Following U.S. strikes on Iranian nuclear facilities on June 22, for example, fake AI-generated images circulated on social media, including one purportedly showing a downed U.S. B-2 bomber in Iran.
Similarly, after Iran's missile attacks on Israeli cities, AI-generated videos falsely depicted destruction in Tel Aviv. Chirag Shah, professor of information and computer science at the University of Washington, warned that detecting deepfakes is becoming increasingly difficult as AI technology advances.
More on managing AI security
- How to craft an effective AI security policy for enterprises
- How to secure AI infrastructure: Best practices
- How to create an AI acceptable use policy, plus template
- Security risks of AI-generated code and how to manage them
Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.