4 lessons in the new era of AI-enabled cybercrime
Cyberattacks have evolved rapidly as GenAI use has become more widespread. Panelists at RSAC Conference 2025 shared what they've learned over the past two years.
Generative AI has fundamentally -- and rapidly -- changed how malicious actors plan and execute their attacks. Threat actors can now do more with fewer resources and less time.
To keep their organizations safe, security teams must remain abreast of how attackers use GenAI and how to mitigate such threats. One way to do this is to learn from the past.
A panel at RSAC Conference 2025 shared four key lessons learned since the explosion of GenAI use, following the release of ChatGPT in late 2022:
- GenAI enhances attackers' capabilities.
- Current laws can apply to AI-enabled attackers.
- We still have a lot to learn.
- AI-based attack mitigation best practices have emerged.
1. GenAI enhances attackers' capabilities
While GenAI hasn't changed attackers' tactics quite yet, it is making them more efficient.
"We don't see threat actors at this time using AI for something they couldn't do a little slower on their own," said Sherrod DeGrippo, director of threat intelligence strategy at Microsoft. In most cases, she said, attackers use GenAI for the same reasons most professionals do -- for example, to conduct research, improve communications and translate content -- just with malicious intent.
For example, Adam Maruyama, field CTO for digital transformation and AI at cybersecurity vendor Everfox, said AI helps improve the credibility of scams. "It's no longer your long-lost great uncle suddenly needing your bank account information. It's 'Hi, this is your child's preschool' -- and that preschool is right," he said. "Or 'We had a water main break. To read more about that incident, please click this link.' And it sends you to a page with malware."
Beyond making scams more believable, GenAI has also helped increase attack volume. Maruyama said that since the introduction of ChatGPT in late 2022, the volume of phishing emails has increased 1,000% and the number of phishing-related domains has risen 120% -- probably not a coincidence.
2. Using current laws and regulations against AI attacks
"The use of AI, of course, in and of itself, is not a crime. The use to facilitate crime is still part of the underlying criminal conduct that can be prosecuted," said Jacqueline Brown, partner at law firm Wiley Rein LLP.
This means existing laws, such as civil provisions, the Computer Fraud and Abuse Act, and copyright and trademark laws, can be used to prosecute attackers using AI for crimes, including identity theft, wire fraud and sanctions violations.
For example, Brown said the government has seen an increase in the number of Democratic People's Republic of Korea (DPRK) remote worker IT fraud cases. These are scams in which attackers use AI to enhance identity documents and LinkedIn profiles to trick U.S. organizations into hiring them as remote workers. The workers' earnings can then help fund DPRK nuclear programs or otherwise help the regime evade sanctions. In December 2024, a federal court in St. Louis indicted 14 DPRK nationals on counts of wire fraud, money laundering and identity theft.
In another example of recent litigation, Microsoft's Digital Crimes Unit took legal action in February 2025 against four threat actors in Iran, the U.K., China and Vietnam. The company alleged the attackers were members of the global cybercrime network Storm-2139 and their use of Microsoft's GenAI services violated the company's acceptable use policy and code of conduct.
3. We've still got a long way to go
Cynthia Kaiser, deputy assistant director at the FBI's Cyber Division, said the government's current efforts to counter adversary campaigns are primarily driven by criticality or scope of target, not novel attack methods, such as AI. Whether that will change in time -- meaning malicious AI use, in and of itself, would trigger an investigation -- remains to be seen.
Maruyama also noted that data leakage has been a worry since GenAI's inception; companies don't want to share their proprietary information with public large language models (LLMs). The immediate solution, he said, is for organizations to create internal private models so they know exactly what data they feed them. "That's all great," he said, "except you have created a crown jewel for your adversary."
For example, an attacker could ask the LLM for the company's payroll API or use it to exfiltrate intellectual property. "Unless you have the right guardrails on that AI, that information will come right out," he added.
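The panel didn't prescribe an implementation, but a minimal sketch of the kind of guardrail Maruyama describes is an output filter that scans an internal model's responses for secret-like strings before they reach the user. The patterns, function name and messages below are illustrative assumptions, not any vendor's product or API:

```python
import re

# Illustrative patterns for secret-like strings an internal LLM should never return.
# Real deployments would add broader detectors (entropy checks, DLP classifiers, etc.).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS-style access key IDs
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)\b(api[_-]?key|password|secret)\b\s*[:=]\s*\S+"),  # key=value leaks
]

def guard_llm_response(response_text: str) -> str:
    """Return the model's answer only if it contains no secret-like content."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(response_text):
            # Block the answer; a real system would also alert the security team.
            return "Blocked: the response appeared to contain sensitive data."
    return response_text

# A leaked key is caught before it reaches the user; ordinary answers pass through.
print(guard_llm_response("The payroll service key is api_key = sk-live-12345"))
print(guard_llm_response("Payroll runs on the 15th and the last business day of the month."))
```

A filter like this complements, rather than replaces, retrieval-level access controls that keep sensitive data away from the model in the first place.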
Another central point was the need for AI-specific laws. No comprehensive federal AI governance law exists. That doesn't mean AI is unregulated, Brown said, but the gap has resulted in fragmented, overlapping rules at the federal, state and sector-specific levels.
For example, Brown noted that more than 700 AI-specific state bills were proposed last year, and 40 states currently have legislation pending, with California and Colorado at the forefront. Plus, 34 states have laws criminalizing deepfakes, with four states adding them in the past month. The Take It Down Act, which criminalizes the nonconsensual publication of sexually explicit deepfakes, passed the U.S. Senate in February and the House just the day before this RSAC panel (April 28). Brown said it is considered the first major law to tackle the harms of AI.
4. Best practices to mitigate AI security challenges
The panel concluded by sharing the following best practices, which have emerged over the past two years to both mitigate AI-based attacks and help ensure secure enterprise AI use:
- Use AI to defend against AI. DeGrippo noted AI's ability to improve the speed of anomaly detection and its importance in code review -- for example, to find hardcoded credentials in code. Maruyama suggested using AI to detect malicious users and shadow AI on enterprise networks.
- Create an AI bill of materials. To build an AIBOM, "you're going to need a list of all of your AI vendors, where that AI is spread like peanut butter across your organization and how you can extract it if something happens and it needs to get out of your environment," DeGrippo said. Like a software BOM, AIBOMs include information about all the proprietary and open source AI components used in the development, training and deployment of an AI system. A minimal sketch of such an inventory appears after this list.
- Follow security hygiene best practices. AI-enabled attacks have highlighted the importance of strengthening security basics, namely the following:
  - Requiring multifactor authentication (MFA).
  - Using the zero-trust security model.
  - Conducting regular security awareness training to educate users on secure AI use and how to detect AI-based attacks.
- Keep up to date with laws and regulations. Brown noted that organizations should monitor the AI legal and regulatory landscape because it is evolving quickly. Organizations must navigate changing and emerging AI regulations, understand how AI laws link with privacy laws and develop an AI governance framework.
- Follow responsible AI development and deployment practices. Secure development and testing are crucial. Microsoft, for example, has an AI red team that tests its AI models for malicious behaviors. It also uses a bug bounty program to find vulnerabilities in its AI products. Maruyama also noted it's essential to be selective about the data organizations feed their LLMs and to test those LLMs to ensure they don't inadvertently give out too much information.
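There is no single standard AIBOM format yet, so as a rough sketch of the inventory DeGrippo describes, the record below tracks each AI component, who supplies it, where it's used and who can remove it. The field names and example entries are hypothetical, chosen for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """One record in a hypothetical AI bill of materials (all field names are illustrative)."""
    component: str                # model, library or AI service name
    vendor: str                   # supplier, or "internal" for homegrown models
    version: str
    license: str                  # proprietary, Apache-2.0, etc.
    used_in: list = field(default_factory=list)       # systems or teams that depend on it
    data_sources: list = field(default_factory=list)  # training or fine-tuning data it touched
    removal_owner: str = ""       # who can pull it out of the environment if needed

# Hypothetical entries: one third-party service, one internally fine-tuned model.
aibom = [
    AIBOMEntry("hosted-chat-llm", "ExampleAI Inc.", "2024.10", "proprietary",
               used_in=["customer support bot"], removal_owner="platform team"),
    AIBOMEntry("internal-hr-assistant", "internal", "1.3.0", "internal",
               used_in=["HR portal"], data_sources=["payroll database (PII)"],
               removal_owner="ML operations"),
]

for entry in aibom:
    print(f"{entry.component} ({entry.vendor} {entry.version}) -> used in: {', '.join(entry.used_in)}")
```

Keeping the inventory machine-readable makes it easier to answer the "how do we get it out of our environment" question quickly when an incident or vendor issue arises.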
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.