How AI malware works and how to defend against it

AI malware is evolving faster than traditional defenses. Learn how attackers weaponize AI and how organizations can implement effective countermeasures.

Malicious actors continuously tweak their tools, techniques and tactics to bypass cyberdefenses and perform successful cyberattacks. Today, the focus is on AI, with threat actors finding ways to integrate this powerful technology into their toolkits.

AI malware is quickly changing the game for attackers. Let's examine the current state of AI malware, some real-world examples and how organizations can defend against it.

What is AI malware?

AI malware is malicious software that has been enhanced with AI and machine learning capabilities to improve its effectiveness and evasiveness.

Unlike traditional malware, AI malware can autonomously adapt, learn and modify its techniques. Specifically, AI enables malware to do the following:

  • Adapt to avoid detection by security tools.
  • Automate operations, speeding the process for attackers.
  • Personalize attacks against target victims, as in phishing attacks.
  • Identify vulnerabilities to exploit.
  • Mimic real people or legitimate software, as in deepfake attacks.

Using AI malware against a victim is a type of AI-powered attack, also known as an AI-enabled attack.

Types and examples of AI malware

The main types of AI malware include polymorphic malware, AI-generated malware, AI worms, AI-enabled social engineering and deepfakes.

Polymorphic malware

Polymorphic malware is software that continuously alters its structure to avoid signature-based detection systems. Polymorphic AI malware uses generative AI to create, modify and obfuscate its code and, thus, evade detection.

BlackMamba, for example, is a proof-of-concept malware that changes its code to bypass detection technology, such as endpoint detection and response. Researchers at HYAS Labs demonstrated how BlackMamba connected to OpenAI's API to create a polymorphic keylogger that collects usernames, passwords and other sensitive information.

AI-generated malware

Many malicious actors use AI components in their attacks. In September 2024, HP identified an email campaign in which a standard malware payload was delivered using an AI-generated dropper. This marked a significant step toward the deployment of AI-generated malware in real-world attacks and reflects how evasive and innovative AI-generated attacks have become.

In another example, researchers at security vendor Tenable demonstrated how the open source AI model DeepSeek R1 could generate rudimentary malware, such as keyloggers and ransomware. Although the AI-generated code required manual debugging, it underscores how bad actors can use AI to fuel malware development.

Similarly, a researcher from Cato Networks bypassed ChatGPT's security measures by engaging it in a role-playing scenario and leading it to generate malware capable of breaching Google Chrome's Password Manager. This prompt engineering attack shows how attackers can coax generative AI tools into writing malware despite built-in guardrails.

AI worms

AI worms are computer worms that exploit generative AI systems, such as large language models (LLMs), to propagate themselves to other systems.

Researchers demonstrated a proof-of-concept AI worm dubbed Morris II, a nod to the 1988 Morris worm, which infected an estimated 10% of internet-connected computers at the time. Morris II exploits retrieval-augmented generation (RAG) -- a technique that enhances LLM outputs by retrieving external data -- to propagate autonomously to other systems.

AI-enabled social engineering

Attackers are using AI to improve the effectiveness and success of their social engineering and phishing campaigns. For example, AI can help attackers do the following:

  • Create more effective and professional email phishing scams with fewer grammatical errors.
  • Gather information from public websites to make campaigns more timely and relevant.
  • Conduct spear phishing, whaling and business email compromise attacks more quickly than human operators.
  • Clone voices to conduct vishing (voice phishing) scams.

Deepfakes

Attackers use deepfake technology -- AI-generated videos, photos and audio recordings -- for fraud, misinformation, and social engineering and phishing attacks.

In a high-profile example, the British engineering group Arup was scammed out of $25 million in early 2024 after attackers used deepfake voices and images to impersonate the company's CFO and dupe an employee into transferring money to the attackers' bank accounts.

How to defend against AI malware

Given the ease with which AI malware adapts to evade defenses, signature-based detection methods are less effective against it. Consider the following defenses:

  • Behavioral analytics. Deploy behavioral analytics software that monitors and flags unusual activity and patterns in code execution and network traffic, and integrate more in-depth analysis techniques as AI malware evolves (see the baseline sketch after this list).
  • Use AI against AI. Adopt AI-enhanced cybersecurity tools capable of real-time threat detection and response. These systems adapt to shifting attack vectors more efficiently than traditional methods, effectively fighting fire with fire.
  • Learn how to spot a deepfake. Know the common tells of deepfakes, such as unnatural facial and body movement, lips out of sync with speech, inconsistent eye blinking, irregular reflections or shadows, unnatural pupil dilation and artificial-sounding audio.
  • Use deepfake detection technology. The following technologies can help detect deepfakes:
    • Spectral artifact analysis detects suspicious artifacts and patterns, such as unnatural gestures and sounds.
    • Liveness detection algorithms base authenticity on a subject's movements and background.
    • Behavioral analysis detects inconsistencies in user behavior, such as how a subject moves a mouse, types or navigates applications, and confirms that the video or audio reflects normal user behavior.
    • Path protection detects when camera or microphone device drivers change, potentially indicating deepfake injection.
  • Adhere to cybersecurity hygiene best practices. For example, require multifactor authentication (MFA), use the zero-trust security model and conduct regular security awareness training.
  • Follow phishing prevention best practices. Get back to basics and teach employees how to spot and respond to phishing scams, AI-enabled or otherwise.
  • Use the NIST CSF and AI RMF. Combining recommendations in the NIST Cybersecurity Framework and NIST AI Risk Management Framework can help organizations identify, assess and manage AI-related risks.
  • Stay informed. Keep up to date with how attackers use AI in malware and how to defend against the newest AI-enabled attacks.

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI.

Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.
