2023 was the year of AI hype. 2024 was the year of AI experimentation. 2025 was the year of AI hype correction. So, what will 2026 bring? Will the bubble burst -- or maybe deflate a little? Will AI ROI be realized?
In the cybersecurity realm, one of the big questions is how adversaries will use AI in their attacks. It's well known that AI enables threat actors to craft more realistic phishing attacks at a greater scale than ever, create deepfakes that impersonate legitimate employees and generate polymorphic malware that evades detection. Additionally, AI systems themselves have vulnerabilities that bad actors exploit, through prompt injection attacks, for example.
Here's what some experts predict for offensive AI in 2026:
Atkinson's prediction is already proving true just nine days into the year, as evidenced in this week's featured news.
Moody's 2026 cyber outlook report warned of escalating AI-driven cyberattacks, including adaptive malware and autonomous threats, as companies increasingly adopt AI without adequate safeguards.
AI has already enabled more personalized phishing and deepfake attacks, and future risks include model poisoning and faster, AI-assisted hacking. While AI-powered defenses are essential, Moody's cautioned that they introduce new risks, such as unpredictable behavior, requiring strong governance.
The report also highlighted the contrasting regulatory approaches of the EU, the U.S. and Asia-Pacific countries. As the EU pursues coordinated frameworks, such as the Network and Information Security Directive, the Trump administration has scaled back or delayed regulatory efforts. Regional harmonization might progress in 2026; however, Moody's predicted global alignment will remain challenging due to conflicting domestic priorities.
Read the full story by Eric Geller on Cybersecurity Dive.
As AI accelerates innovation, it also introduces significant cyber-risks. Nearly 90% of CISOs identified AI-driven attacks as a major threat, according to a study from cybersecurity vendor Trellix.
Healthcare systems are particularly vulnerable, with 275 million patient records exposed in 2024 alone. CIOs, like those at UC San Diego Health, are increasing investments in AI-powered cybersecurity tools while balancing budgets for innovation.
AI is also fueling sophisticated phishing attacks, with 40% of business email compromise emails now AI-generated. Experts emphasized the importance of basic security practices, such as zero trust, security awareness training and MFA, as critical defenses against evolving AI threats.
Read the full story by Jen A. Miller on Cybersecurity Dive.
NIST is inviting public feedback on approaches to managing security risks associated with AI agents. Through its Center for AI Standards and Innovation (CAISI), NIST aims to gather insights on best practices, methodologies and case studies to improve the secure development and deployment of AI systems.
The agency highlighted growing concerns over poorly secured AI agents, which could expose critical infrastructure to cyberattacks and jeopardize public safety. Public input will help CAISI develop technical guidelines and voluntary security standards to address vulnerabilities, assess risks and enhance AI security measures. Submissions are open for 60 days.
Read the full story by Eric Geller on Cybersecurity Dive.
A report from identity vendor Nametag predicted a sharp rise in AI-driven impersonation scams targeting enterprises, fueled by the growing accessibility of deepfake technology. Fraudsters are increasingly using AI to mimic voices, images and videos, enabling attacks such as hiring fraud and social engineering schemes.
High-profile cases, such as a $25 million scam involving British firm Arup, highlight the risks. IT, HR and finance departments are prime targets, with deepfake impersonation becoming a standard tactic. Nametag warned that agentic AI could amplify these threats and urged organizations to rethink workforce identity verification to ensure the right human is behind every action.
Read the full story by Alexei Alexis on Cybersecurity Dive.
Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.
09 Jan 2026