
Alex - stock.adobe.com
AI-powered attacks: What CISOs need to know now
AI-powered attacks are transforming cybersecurity, using AI to automate and personalize threats at an unprecedented scale since 2022.
Artificial intelligence (AI), particularly generative AI (GenAI), has deeply impacted IT, enabling both easy content creation and complex data analysis.
As with any tool, GenAI can help or harm, and today's chief information security officers (CISOs) must recognize and embrace that duality. For example, GenAI aids a CISO in writing a report on operations, but attackers use GenAI to craft sophisticated business email compromise scams or phishing attacks.
AI-powered attacks are cyber attacks that use AI technologies to automate, enhance and personalize malicious activities at scale, making them far more dangerous than traditional attacks.
Since ChatGPT's debut in 2022, the volume and sophistication of AI-powered attacks have increased. AI makes phishing attacks more potent and is impacting ransomware. According to research released by Google in January 2025, state-sponsored threat actors now actively use AI. The FBI, too, warned of increased AI use by cyber criminals.
AI-powered attacks vs. traditional cybersecurity threats
CISOs face several traditional cybersecurity threats each day. Common cyber attacks include:
- Malware.
- Ransomware.
- Phishing.
- Social engineering.
- Distributed denial of service, or DDoS.
- SQL injection.
- Cross-site scripting, or XSS.
- Vulnerability exploitation.
In the past, cyber attacks relied on manual effort from humans. While automation has been part of the traditional cybersecurity landscape for decades, AI changes that scene: With automation, batch files and rule-based decisions repeat and scale a process, but AI brings greater sophistication, both in automation and threat development.
AI-powered attacks differ from traditional cybersecurity threats in the following aspects:
- Intelligence. Unlike traditional threats that used only rule-based automation, AI-powered attacks learn from failed attempts.
- Automation at scale. While traditional attacks employed lists or simple scans to identify targets, AI now fully automates the process of identifying potential victims and their vulnerabilities.
- Customized attacks. Spear phishing is a traditional attack targeting a specific individual or organization. AI, though, analyzes vast volumes of data about an individual or organization, creating customized, precisely targeted attacks used for phishing, account takeover or exploitation.
- Impersonation. Traditional cyber attacks had difficulty impersonating humans. That’s not the case with AI and its capacity to create deepfakes. This deepfake technology, when used for voice or video, underpins vishing and other forms of exploitation.
Types of AI-powered attacks
AI-powered attacks include both enhanced versions of traditional cybersecurity risks and a few new attack vectors unique to AI.
Among the most reported AI-powered attacks are the following:
- AI-enhanced phishing. Well-written and convincing phishing emails are literally a click away for an attacker. The debut of GenAI preceded a rise in phishing attempts, as attackers benefit from the ease of use and quality content AI generates.
- Deepfake social engineering. AI creates realistic audio and video of individuals. Attackers use it to defraud using social engineering attacks.
- AI-optimized ransomware. Ransomware gets a boost from AI that identifies specific targets, as well as exploitable vulnerabilities, enabling data encryption and exfiltration. Groups such as FunkSec have ramped up their use of AI in developing and deploying highly effective ransomware campaigns that evade common detection platforms.
- Automated vulnerability discovery. AI accelerates the vulnerability discovery process in software with its ability to analyze large volumes of code quickly to identify any exploitable risk. (This news story examines unpatched.ai, an AI-powered vulnerability discovery platform that reported Microsoft's late-2024 issues.)
Attacks against AI
While AI use improves and expands the capabilities of attackers, AI is also under attack itself in these different ways, including AI-on-AI attacks:
- AI model poisoning. Also known as data poisoning, attackers deliberately manipulate training data on an AI model to affect its output.
- Prompt injection attack. In this type of attack, hackers manipulate the prompt given to the model to produce malicious and otherwise restricted output.
Learn more about the four kinds of prompt injection attacks.
Methods to detect and prevent AI-powered attacks
Just as AI-powered attacks have become more sophisticated, enterprises and CISOs must respond in kind, defending against those attacks. Many techniques already used to thwart non-AI attacks remain, some with specific optimizations for AI.
The methods below have proven to detect and prevent attacks:
- Deploy AI-optimized threat detection systems. Businesses regularly update and optimize their threat detection and response (TDR) platforms to counter AI-powered attacks. (Read about AI's effects on TDR and its future here.)
- Install UEBA. User and entity behavior analytics technologies detect anomalies indicating potential AI-driven malicious behavior.
- Use strong authentication and access controls. Weak passwords are common attack vectors, and AI makes targeting them easier. Enforcing strict access controls and multifactor authentication reduces this risk.
- Keep systems patched. AI is also particularly adept at taking advantage of an unpatched system's known vulnerabilities. Keeping systems and applications updated, part of patch management, limits that attack vector.
Best practices for CISOs to protect their organization from AI-powered attacks
To reduce the risk of AI-powered attacks, CISOs must find and fuse the best tech tools with the best practices, garnering board-level support for these money-saving tasks.
The following actions better prepare CISOs – and their organizations – against AI-powered threats:
- Conduct AI risk assessments. Understand the true risks, both from AI-based attacks and attacks on the organization's own AI systems. The Factor Analysis of Information Risk methodology, or FAIR model, is a good starting point for CISOs to quantify AI risk.
- Create AI-powered threat hunting teams. Threat hunting is a foundational aspect of modern security teams. With the rise of AI-powered threats, CISOs must aggressively rebuild and train teams with the expertise to stop AI-powered cyber incidents.
- Update incident response capabilities. AI-powered threats inevitably beat defenses, so it's critical to develop and test incident response plans that specifically address AI-powered attacks.
- Invest in counter-AI technologies. An emerging category of IT security tools employs adversarial AI models to confuse and limit AI-powered malware, a response to threat actors' use of it. Specific tools are also available to detect deepfakes.
- Conduct AI security audits and penetration testing. AI-focused security audits are another best practice that identifies at-risk areas. Penetration testing, often part of that audit, deploys AI-powered resources to reveal specific areas of weakness.
- Boost threat intelligence and collaboration. AI threats are relatively new and evolving quickly. Subscribe to threat intelligence feeds that offer research on the latest AI-powered risks, including actionable intelligence. Participate in sector-specific industry sharing groups such as an information sharing and analysis center (ISAC). Some ISACs currently face budgetary and staff cuts.
- Train employees on AI risks. CISOs must recognize that their fellow employees are always an important line of defense. Train employees to recognize AI-powered risks, particularly from advanced phishing and deepfakes.
Future of AI-powered attacks
As AI adoption and experience with it grows, the instances and sophistication of AI-powered attacks increase as well.
Expect AI-powered attacks to become the norm as their intelligence, customization, automation and scalability ease the process for attackers. Along with an increased number of AI-powered attacks, more attack vectors are likely to emerge. AI-generated supply chain attacks, still in their infancy, are another probable issue in the years ahead. Advanced, AI-powered and fully autonomous botnets, which far outpace their forerunners, are another growing threat to CISOs and their organizations.
Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.