News brief: AI threats to shape 2026 cybersecurity
Check out the latest security news from the Informa TechTarget team.
2023 was the year of AI hype. 2024 was the year of AI experimentation. 2025 was the year of AI hype correction. So, what will 2026 bring? Will the bubble burst -- or maybe deflate a little? Will AI ROI be realized?
In the cybersecurity realm, one of the big questions is how adversaries will use AI in their attacks. It's well known that AI enables threat actors to craft more realistic phishing lures at greater scale than ever, create deepfakes that impersonate legitimate employees and generate polymorphic malware that evades detection. AI systems themselves also have vulnerabilities that bad actors can exploit -- through prompt injection attacks, for example.
Here's what some experts predict for offensive AI in 2026:
- "An agentic AI deployment will cause a public breach and lead to employee dismissals." -- Paddy Harrington, analyst at Forrester
- "Offensive autonomous and agentic AI will emerge as a mainstream threat, with attackers unleashing fully automated phishing, lateral movement and exploit-chain engines that require little or no human operator engagement." -- Marcus Sachs, senior vice president and chief engineer at the Center for Internet Security (CIS)
- "As attackers continue to use AI and shift to agent-based attacks, the prevalence of living-off-the-land attacks will only grow." -- John Grady, analyst at Omdia, a division of Informa TechTarget
- "AI continues to dominate the headlines and security landscape." -- Sean Atkinson, CISO at CIS
Atkinson's prediction is already proving true just nine days into the year, as evidenced in this week's featured news.
Moody's 2026 outlook: AI threats and regulatory challenges
Moody's 2026 cyber outlook report warned of escalating AI-driven cyberattacks, including adaptive malware and autonomous threats, as companies increasingly adopt AI without adequate safeguards.
AI has already enabled more personalized phishing and deepfake attacks, and future risks include model poisoning and faster, AI-assisted hacking. While AI-powered defenses are essential, Moody's cautioned that they introduce new risks, such as unpredictable behavior, requiring strong governance.
The report also highlighted the contrasting regulatory approaches of the EU, the U.S. and Asia-Pacific countries. While the EU pursues coordinated frameworks, such as the Network and Information Security Directive, the Trump administration has scaled back or delayed regulatory efforts. Regional harmonization might progress in 2026; however, Moody's predicted global alignment will remain challenging due to conflicting domestic priorities.
AI-driven cyberattacks push CIOs to strengthen security measures
As AI accelerates innovation, it also introduces significant cyber-risks. Nearly 90% of CISOs identified AI-driven attacks as a major threat, according to a study from cybersecurity vendor Trellix.
Healthcare systems are particularly vulnerable, with 275 million patient records exposed in 2024 alone. CIOs, like those at UC San Diego Health, are increasing investments in AI-powered cybersecurity tools while balancing budgets for innovation.
AI is also fueling sophisticated phishing attacks, with 40% of business email compromise emails now AI-generated. Experts emphasized the importance of basic security practices, such as zero trust, security awareness training and MFA, as critical defenses against evolving AI threats.
NIST seeks public input on managing AI security risks
NIST is inviting public feedback on approaches to managing security risks associated with AI agents. Through its Center for AI Standards and Innovation (CAISI), NIST aims to gather insights on best practices, methodologies and case studies to improve the secure development and deployment of AI systems.
The agency highlighted growing concerns over poorly secured AI agents, which could expose critical infrastructure to cyberattacks and jeopardize public safety. Public input will help CAISI develop technical guidelines and voluntary security standards to address vulnerabilities, assess risks and enhance AI security measures. Submissions are open for 60 days.
AI-powered impersonation scams to surge in 2026
A report from identity vendor Nametag predicted a sharp rise in AI-driven impersonation scams targeting enterprises, fueled by the growing accessibility of deepfake technology. Fraudsters are increasingly using AI to mimic voices, images and videos, enabling attacks such as hiring fraud and social engineering schemes.
High-profile cases, such as a $25 million scam involving British firm Arup, highlight the risks. IT, HR and finance departments are prime targets, with deepfake impersonation becoming a standard tactic. Nametag warned that agentic AI could amplify these threats and urged organizations to rethink workforce identity verification to ensure the right human is behind every action.
Read the full story by Alexei Alexis on Cybersecurity Dive.
Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.