News brief: AI security risks highlighted at RSAC 2025
Check out the latest security news from the Informa TechTarget team.
Was AI on your RSAC Conference 2025 bingo card? To no one's surprise, it was the topic of the year at the cybersecurity industry's big show, which drew to a close last week.
Following its emergence as the breakout star of RSAC 2024, AI -- and its 2025 buzzphrase companion, agentic AI -- couldn't be avoided in keynotes, sessions and social media posts.
It's not shocking. AI adoption is booming. According to the latest research from McKinsey & Co., 78% of organizations use AI in at least one business function. The most cited AI use cases are in IT, marketing, sales and service operations.
Yet with the adoption boom have come some dire warnings about AI security. The following roundup highlights Informa TechTarget's RSAC 2025 AI coverage:
Most cyber-resilient organizations aren't necessarily ready for AI risks
A report from managed security service provider LevelBlue released at RSAC found that while cyber-resilient organizations are well-equipped to handle current threats, many underestimate AI-related risks.
The report noted that AI adoption is happening too fast for regulations, governance and mature cybersecurity controls to keep pace, yet only 30% of survey respondents said they recognize AI adoption as a supply chain risk. This represents a major disconnect -- and a concern for future AI-enabled attacks.
Fraudulent North Korean IT workers more prevalent than thought
A panel at RSAC outlined how North Korean IT workers are infiltrating Western companies by posing as remote American employees, generating millions for North Korea's weapons program. A single vendor, CrowdStrike, found malicious activity in more than 150 organizations in 2024 alone, with half experiencing data theft.
These operatives use stolen identities to secure positions at organizations of all sizes, from Fortune 500 companies to small businesses. The panel discussed red flags to look for -- such as requests for alternate equipment delivery addresses and suspicious technical behaviors -- as well as how organizations can protect themselves through careful hiring practices and enhanced monitoring.
Data and privacy regulations hamper AI-enabled threat sharing
During a SANS Institute panel about the most dangerous new attack techniques, Rob T. Lee, the institute's chief of research and head of faculty, highlighted the significant challenges the cybersecurity industry faces around AI regulation. In particular, privacy laws such as GDPR restrict defenders' ability to fully use AI for threat detection, while attackers operate without such constraints.
Lee said these regulations prevent organizations from comprehensively analyzing their environments and sharing crucial threat intelligence.
GenAI lessons learned emerge after two years with ChatGPT
An RSAC panel explained how, since the release of ChatGPT in late 2022, generative AI has dramatically transformed how cybercriminals operate. The panel highlighted four key lessons:
- GenAI hasn't introduced new tactics, but it has enhanced attackers' capabilities, leading to a 1,000% increase in phishing emails and more convincing scams.
- Existing laws can be used to prosecute AI-enabled crimes, as demonstrated by recent cases against North Korean IT workers and the Storm-2139 network.
- Significant challenges remain, including data leakage risks and the need for comprehensive AI legislation.
- AI security best practices are emerging.
Read the full story by Sharon Shea on SearchSecurity.
Editor's note: Our staff used AI tools to assist in the creation of this news brief.
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.