kras99 - stock.adobe.com
Combating the new wave of AI crimes and threats
Attackers of any skill level can now use open source tools to target businesses for AI crimes. Here's how to cover organizations' expanded attack surface and prevent disaster.
The emergence of AI has provided a significant boost to productivity. But it has also given threat actors an open door to exploit the new technology for malicious purposes.
As organizations adopt new AI technologies, their attack surfaces widen exponentially. The cybercrime arena that once required skilled operators with sophisticated hacking skills to navigate is now accessible to anyone with a laptop who can use AI prompts effectively. The spectrum of AI crimes is vast, and they present several dangers to the enterprise, including financial fraud, data poisoning and malware.
But enterprises can create an environment where AI serves as an asset rather than a security risk by investing in AI security tools, fostering a culture of verification, enforcing strict governance and joining the global security community. These steps can help businesses be resilient to AI attacks and prevent and manage their effects.
AI crimes and their implications for businesses
AI crimes are cyberattacks that use AI technology, either as a weapon or a facilitator, to attack organizations and individuals. AI crimes are more dangerous than traditional cybercrimes. For example, AI-generated malware can change its code signature multiple times, unlike ordinary, static malware. This makes it very challenging for detection systems to discover.
AI in cybercrime has evolved from simple spam email automation to attackers now executing fully autonomous attacks, from reconnaissance to exploit. Before 2023, attackers focused on using machine learning to bypass automated spam filters and to automate vulnerability scanning against IT infrastructure. Then, the widespread availability of GenAI enabled cybercriminals to use LLMs to generate customized email messages in multiple languages and generate basic malware code, boosting their social engineering attacks.
Now, agentic AI dominates the landscape. AI agents can execute an entire series of steps without human intervention. Instead of a hacker executing individual tools, one by one, to achieve a single function, an AI agent can plan and execute steps across an entire attack lifecycle with minimal human oversight.
We can't consider AI-driven crimes as an extension of regular cybercrimes, because they exceed them in scale, speed and believability. The consequences of AI crimes for businesses can be severe, ranging from operational to financial to strategic. Consider the following business implications of AI crimes:
- Financial fraud. Threat actors can execute sophisticated attacks using GenAI to create convincing voice clones and deepfake videos. These can enable attackers to bypass traditional verification controls and execute malicious actions, such as authorizing an urgent wire transfer using a CFO's voice note.
- Erode digital trust. When customers cannot distinguish between real and fake communications, such as emails, support chats and executive messages, their trust in the brand erodes. This can negatively affect a business, leading to lost customers, market share and partner relationships.
- Data breaches. Cybercriminals can automate phishing emails, malware generation, credential stuffing and vulnerability discovery to extract sensitive business data. These attacks have high success rates because bad actors can customize their strategies according to each target.
- Reputational damage. Threat actors can generate a large volume of content, such as fake news, reviews or executive statements, and spread it across social media platforms to damage the target enterprise's public image within hours.
Types of AI crimes
The rapid advancement of AI has introduced new and sophisticated threats in the cybersecurity landscape. Understanding these threats is essential in order to propose mitigation strategies to counter them.
AI-powered phishing and social engineering
This is the most immediate and financially devastating form of AI crime. GenAI lets attackers generate clean, grammatically correct messages in multiple languages at scale. They can generate custom emails in minutes according to specific target profiles. These messages can trick victims into providing credentials, authorizing payments or uncovering sensitive data.
According to research by security firm Barracuda, in collaboration with researchers from Columbia University and the University of Chicago, AI now generates 51% of all spam. This percentage was almost zero before the advent of ChatGPT in late 2022. Attackers can often use spam emails to send malware to unsuspecting victims.
Targeted email attacks, such as spear phishing and business email compromise, are becoming more widespread and effective with AI. AegisAI's report "State of the AI threat in Email: 2025" found that AI-generated spear phishing emails can bypass traditional spam filters more than 50% of the time.
Deepfake scams
Deepfake scams are increasingly becoming the most threatening content generated by hackers using AI. Deepfakes use AI-generated synthetic media such as audio, video or images to create realistic impersonations of individuals. The use of deepfake technology is increasing while its cost continues to drop, making it accessible to criminals with less technical skills.
The cost of losses from deepfake attacks in 2025 was estimated to have tripled to $1.1 billion from $360 million in 2024, a ninefold rise from $128 million for 2020 to 2023, according to research from Surfshark, a VPN provider.
In one high-profile case, attackers targeted a British multinational engineering company and succeeded in convincing a finance employee to give them $25 million. The employee had attended a video call meeting with the supposed CFO and other members of staff; however, attackers used deepfake technology to generate the other members of the call.
AI-driven malware and ransomware
Threat actors are also using AI to create new malware strains. Using AI in malware code and distribution mechanisms could improve attackers' precision and ability to evade detection. According to SQ Magazine, 41% of ransomware families included AI components for adaptive payload delivery as of 2025.
Autonomous malware, such as PromptLock, uses GenAI to execute attacks. Such malware runs a locally accessible AI language model to generate harmful Lua scripts in real time that work on Windows, Linux and macOS. Based on set text prompts, these tools independently decide whether to steal or encrypt the data they find.
Data poisoning and model manipulation
Data poisoning and model manipulation threaten the accuracy and integrity of AI systems. Data poisoning involves inserting malicious data into a model's training data set -- or its supply chain data sets -- to affect its output. Poisoning data can severely affect AI model decisions. IACIS's "Data poisoning 2018–2025: A systematic review of risks, impacts and mitigation challenges" research said that a disturbance of 0.001% of training data could reduce model accuracy by up to 30%. Inserting malicious data into training data sets can also create backdoors for threat actors to exploit.
This issue can be problematic for enterprises that are building their own internal AI models. Suppose a company trains its customer service chatbot on malicious data. This bot could provide inappropriate results to customers, and the business could continue operating without discovering this for a long time.
Criminal LLMs
Criminal LLMs represent the latest addition to the AI crime ecosystem. Threat actors exploit LLMs for malicious purposes, sometimes using legitimate LLMs, such as ChatGPT or Gemini, and remove the safeguards set by their developers (i.e., jailbreak). This provides attackers with malicious outputs or develops a criminal LLM specifically for malicious purposes.
Different criminal LLM tools exist, such as WormGPT, GhostGPT and KawaiiGPT, the last being a free, open source and malicious LLM that can bypass the safety restrictions of standard AI models. It lets its users create unrestricted, malicious output for cyberattacks.
How to build resilience against AI crimes
As AI crimes become more sophisticated, organizations must think beyond traditional defenses to build a comprehensive security model that resists AI-powered cyberattacks. Building such a system requires a multilayer approach that combines technical defenses and controls, informed personnel, strong governance and collaborative networks.
Invest in AI security tools
Technical controls remain the frontline defense against AI threats. Organizations should invest in deploying a new generation of security tools that can understand and intercept AI-specific threats. For example, traditional firewalls and antivirus software might not be able to detect prompt injection or deepfake manipulation attacks. The table below lists some tools that can help organizations detect the most common types of AI threats.
Employee training and awareness
Humans remain the weakest element in any cyber defense strategy, and AI amplifies this risk by making deception highly convincing. Businesses must upgrade their traditional security awareness programs beyond detecting basic phishing emails to include lessons on detecting AI threats. Employees should also understand how to detect deepfakes and highly customized spear-phishing emails.
For high-risk work operations, such as executing wire transfers, resetting credentials or sharing sensitive data, enterprises should set a multi-channel verification mechanism. Suppose an employee receives a payment request via email. They should confirm it through a known phone number or a secure internal channel, not the one provided in the message. This validation prevents threat actors from controlling the entire communication chain.
Strengthen governance policies
AI governance provides the necessary guardrails for safe AI adoption, ensuring it does not come at the cost of security. Strong governance includes several areas, such as creating clear rules for AI use within the work environment, ensuring full visibility over AI systems and ensuring humans remain in the loop when AI makes critical decisions.
A major element in governance policy is having an acceptable use policy (AUP) for AI. This is a formal document that establishes rules for employees using AI in workspaces. The AI AUP must address the risks of using AI inappropriately, including data leaks, copyright infringement and bias in decision-making.
Enterprises should also maintain an AI bill of materials, a document listing all the components they use to build an AI model. This should include AI-specific components such as models, data sets and prompts. As enterprises depend on AI to run their workloads, it is critical to understand the components that make up their system, so they can answer important questions, such as: "Where did this model come from?" or "What data influenced its behavior to give a specific response or decision?"
Collaborate with industry experts
The sophistication of AI crime makes defending against it in isolation very difficult. Collaborating with other enterprises and government bodies is critical to stay ahead of emerging AI threats. Threat intelligence feeds provide real-time information on emerging attacks and indicators of compromise related to AI, such as new prompt injection payloads or deepfake signatures. Adopting standards, such as OWASP Top 10 for LLM Applications or the NIST AI Risk Management Framework, helps enterprises establish a language for exchanging information about AI attacks and best countermeasures.
Nihad A. Hassan is an independent cybersecurity consultant, digital forensics and cyber OSINT expert, online blogger and author with more than 15 years of experience in information security research. He has authored six books and numerous articles on information security. Nihad is highly involved in security training, education and motivation.