How AI caught a malicious North Korean insider at Exabeam

A North Korean operative posing as an American tech worker used GenAI to infiltrate Exabeam's network. But agentic AI found the signals among the UEBA noise and exposed him in a matter of seconds.

In the summer of 2025, a young tech professional named Trevor Roth* landed a remote job at cybersecurity vendor Exabeam.

Roth passed his technical interview and skills test with flying colors. He also passed his video interview -- although the hiring team suspected he might have leaned on generative AI tools for real-time assistance -- and Exabeam extended an offer. After the standard pre-employment clearance process, including a background check and I-9 validation, he received his laptop from IT and immediately got to work.

There was just one problem. "Trevor Roth" was actually a malicious foreign actor from North Korea, using a stolen identity and forged documents. And he was now inside Exabeam's private network.

Malicious foreign actors from the Democratic People's Republic of Korea, or DPRK, represent a pervasive and escalating threat to Fortune 500 companies. The U.S. Department of the Treasury estimates thousands are on American companies' payrolls and have access to their corporate systems. North Korean operatives' goals are twofold: first, to earn money for their nation's authoritarian regime, and second, to enable malicious intrusions. In recent cases, American employers have been victims of cryptocurrency theft, sensitive data theft and data extortion at the hands of malicious insiders from the DPRK.

Complicating detection efforts is the fact that such foreign threat actors often aim to keep their jobs for months, if not years, motivating them to keep their heads down. "Typically, you're going to see these low-and-slow types of attacks, living off the land, stuff that is not super obvious," said Exabeam Vice President of AI and Security Research Steve Povolny, during a presentation at RSAC 2026. "You'll see behaviors that fly under the radar, until they don't."

Unfortunately for Exabeam's new hire, his first day of employment was also his last -- thanks in part to agentic AI.

To catch a malicious foreign threat actor

The first time "Trevor Roth" signed into his Exabeam corporate account, the SOC's threat intelligence feed flagged his username as high risk, noting that it had been associated with North Korean threat actor activity. Based on that information, incident responders quietly accessed Roth's laptop and isolated it from the rest of the network.

Initially, the incident response team was open to the possibility that the threat intelligence was wrong, said CISO Kevin Kirkwood, who presented alongside Povolny at RSAC. "At first, we ascribed positive intent. This is a brand-new user, and maybe we just got the wrong guy," he added.

At the same time, the SIEM started generating scattered alerts on Roth's activity, which included the following:

  • Downloaded files from a malicious Zoom site.
  • Attempted to connect to a third-party VPN.
  • Installed Jump Desktop software.
  • Loaded a streaming service.

Taken individually and out of context -- and without the heads-up from the threat intelligence feed -- each alert could have amounted to little more than noise, according to Kirkwood. That's when AI entered the chat.
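To illustrate the idea -- this is a minimal sketch, not Exabeam's actual detection logic, and all weights and field names are invented -- here is how individually low-risk alerts can compound with user context, such as new-hire status and a threat intelligence match, into a signal worth escalating:

```python
# Illustrative sketch only: NOT Exabeam's implementation.
# Shows how alerts that look like noise in isolation can add up
# once weighted by user context. All names and weights are invented.

from dataclasses import dataclass

# Rough per-alert weights; each alone sits below an escalation threshold.
ALERT_WEIGHTS = {
    "malicious_download": 30,     # files from a malicious Zoom site
    "third_party_vpn": 15,        # attempted unauthorized VPN connection
    "remote_desktop_install": 20, # Jump Desktop installation
    "streaming_service": 5,       # near-pure noise on its own
}

@dataclass
class UserContext:
    is_new_hire: bool
    threat_intel_match: bool

def risk_score(alerts: list[str], ctx: UserContext) -> float:
    """Sum alert weights, then amplify by user context."""
    base = sum(ALERT_WEIGHTS.get(a, 0) for a in alerts)
    # The same alerts matter far more for a brand-new account
    # already flagged by a threat intelligence feed.
    if ctx.is_new_hire:
        base *= 1.5
    if ctx.threat_intel_match:
        base *= 2.0
    return base

alerts = ["malicious_download", "third_party_vpn",
          "remote_desktop_install", "streaming_service"]
ctx = UserContext(is_new_hire=True, threat_intel_match=True)
print(risk_score(alerts, ctx))  # 70 * 1.5 * 2.0 = 210.0
```

The same four alerts score just 70 for an established, unflagged user -- the gap between noise and signal lives entirely in the context multipliers.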

Exabeam Nova, the organization's investigative AI agent in the SOC, autonomously collected Roth's scattered user and entity behavior analytics (UEBA) data and evaluated it in the context of his role and new-hire status. Deciding a full investigation was warranted, Nova then analyzed the user's behavior and likely intent and presented human operators with its conclusion:

"The pattern of activities aligns with the 'Malicious Software' threat vector, which is a precursor to a compromised insider scenario."

Finally, the AI assistant suggested SOC analysts take the following next steps:

  1. Isolate the affected host to prevent further compromise or lateral movement.
  2. Initiate a full forensic analysis of the affected host to identify the initial infection vector and full scope of compromise.
  3. Review the user's activity, including recent emails and browser history, for potential phishing attempts or unauthorized software downloads that could have led to the malware execution.
  4. Check for persistence mechanisms, including scheduled tasks and modified registry keys.
  5. Analyze network traffic for connections made by the affected host to suspicious external IPs or domains.
  6. Update endpoint protection, ensuring endpoint detection and response and antivirus software are up to date, and perform a full scan on the affected machine and other potentially vulnerable systems.

An investigation that Kirkwood said would have taken SOC analysts three to four hours took the AI agent seconds.

"This is really where the combination of traditional UEBA and modern AI capabilities becomes really, really powerful -- being able to take all that scattered, [seemingly] unrelated, nonsuspicious noise and turn it into signals," Povolny added. "The AI that we had deployed internally caught this very, very quickly."

After quietly isolating the DPRK threat actor's device, Kirkwood and his incident response team spent the next five hours observing his behavior, which included installing command-and-control software and trying to exfiltrate company data.

"It was a fun five hours," Kirkwood said. "It was kind of like sitting back and watching the prize fights. You're drinking beer and eating peanuts and watching the blows land."


When the malicious foreign actor finally realized he was being watched, he started trying to delete his temporary files. That's when Kirkwood called time, and the incident response team bricked the machine. "It was a massive piece of metal at that point -- nothing more," he said.

Next, the Exabeam team sent the indicators of compromise they had collected to the FBI, along with the address in Austin where the threat actor had asked the company to send his laptop.

"About a week after that, we saw that the FBI had shut down a laptop farm in the Austin area," Kirkwood said.

How to mitigate the AI-enabled malicious foreign actor threat

North Korean IT workers began infiltrating American companies in large numbers in 2020, during the remote work boom. Now, AI is making an already bad problem worse. According to researchers at CrowdStrike, DPRK-affiliated adversary group Famous Chollima infiltrated more than 320 companies in 2025 -- a 220% year-over-year increase. Researchers attributed the group's recent success to its use of GenAI throughout the hiring and employment processes.

With AI, malicious actors can easily forge official documents and cheat on technical exams. Deepfake and voice cloning technology lets them impersonate others in real time. And according to Kirkwood and Povolny, many job candidates -- North Korean and otherwise -- now use AI-powered interview copilots to optimize their answers during remote job interviews. Many such tools are designed to be invisible to third parties when users share their screens, making detection difficult.

To vet for unsanctioned AI use and possible malicious foreign actor activity during video interviews, the Exabeam executives suggested the following tactics:

  • Intentionally under-specify problems to observe candidates' clarification skills.
  • Ask candidates to share personal experiences that illustrate how they make decisions.
  • Change technical problems mid-answer to test candidates' adaptability.
  • Introduce off-topic or unexpected prompts -- e.g., how would you build a bridge? -- to see if the candidate responds with human confusion or AI confidence.
  • Ask job candidates to use external webcams that show their workspaces and monitors, rather than share their screens.

Kirkwood and Povolny also urged CISOs to put all new hires on a SOC watchlist for enhanced monitoring, ideally with support from agentic AI.

"When you have 500 or 1,000 new employees, you should have agents that are capable of understanding and prioritizing their behaviors, driving a cherry-picked handful to your human analysts, who remain in the loop," Povolny said. "Those human analysts can then double-click on that employee and dig deeper to see if it's a threat."
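The triage Povolny describes -- agents scoring every new hire and surfacing only a cherry-picked handful to human analysts -- can be sketched as follows. This is a hypothetical example; the usernames, scores, and function names are invented, and the scores stand in for whatever a behavioral-analytics agent would produce:

```python
# Hypothetical sketch of new-hire watchlist triage: surface only the
# top-k riskiest accounts to human analysts. All data is invented.

import heapq

def top_risky_hires(hires: list[tuple[str, int]], k: int = 5) -> list[tuple[str, int]]:
    """Return the k highest-scoring new hires for analyst review.

    `hires` is a list of (username, risk_score) pairs produced by
    whatever behavioral-analytics agent is monitoring the watchlist.
    """
    return heapq.nlargest(k, hires, key=lambda h: h[1])

# A watchlist of recent hires with agent-assigned risk scores.
watchlist = [("alice", 12), ("bob", 87), ("carol", 3),
             ("dave", 55), ("erin", 91), ("frank", 8)]

for user, score in top_risky_hires(watchlist, k=3):
    print(f"escalate {user} (score {score})")
```

Keeping `k` small is the point of the pattern: the agent absorbs the volume of 500 or 1,000 monitored accounts, while human analysts stay in the loop on only the few that warrant a deeper look.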

*Editor's note: SearchSecurity has changed the name that the threat actor fraudulently used to protect a potential victim of identity theft.

Alissa Irei is senior site editor of Informa TechTarget Security.
