
Microsoft, OpenAI warn nation-state hackers are abusing LLMs

Microsoft and OpenAI observed five nation-state threat groups leveraging generative AI and large language models for social engineering, vulnerability research and other tasks.

Microsoft published new research Wednesday that detailed how various nation-state threat actors are using generative AI in their operations.

While Microsoft said attackers' increased use of GenAI does not pose an imminent threat to enterprises, the tech giant emphasized the importance of preparing additional security protections in light of the recent nation-state activity.

In a blog post Wednesday, Microsoft Threat Intelligence and its collaborative partner OpenAI highlighted five nation-state threat actors that were observed using large language models (LLMs) such as ChatGPT to bolster attacks.

According to the research, nation-state actors located around the world used LLMs to research specific technologies and vulnerabilities, as well as to gain information on regional geopolitics and high-profile individuals. So far, AI tools have not made attacks more dangerous, but Microsoft anticipates that will change.

"Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community," Microsoft Threat Intelligence wrote in the blog post.

The looming risks were noted earlier this year by the U.K.'s National Cyber Security Centre, which said AI will increase cyberthreats over the next two years.

In addition to the blog post, Microsoft Threat Intelligence published the quarterly "Cyber Signals" report with an introduction by Bret Arsenault, chief cybersecurity adviser at Microsoft. Arsenault emphasized that AI tools are beneficial for both defenders and adversaries, which complicates the threat.

"While AI has the potential to empower organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response, adversaries can use AI as part of their exploits," Arsenault wrote. "It's never been more critical for us to design, deploy, and use AI securely."

The report warned that traditional security tools cannot keep pace with threats across the landscape, and that recent attacks show cybercriminals have increased "speed, scale, and sophistication." Attacks have also increased in "frequency and severity" amid a cybersecurity workforce shortage.

Now, Microsoft believes generative AI will only add to the challenges. The tech giant observed commonalities in how cybercrime groups use the technology, such as conducting reconnaissance, assisting with coding and malware development, and working in both human and machine languages.

To illustrate how nation-state adversaries are currently using LLMs, Microsoft Threat Intelligence detailed five threat groups it tracks as Forest Blizzard, Emerald Sleet, Charcoal Typhoon, Crimson Sandstorm and Salmon Typhoon. Forest Blizzard is a Russian advanced persistent threat (APT) actor, more commonly referred to as Fancy Bear or APT28, that is associated with the Russian government's military intelligence service.

In December, Microsoft revealed that Forest Blizzard, which is known to target the defense, government and energy sectors, continued to exploit an Exchange vulnerability against unpatched instances. Patches were initially released in March 2023.

Nation-state groups embrace GenAI

Microsoft Threat Intelligence expanded on Forest Blizzard's LLM activity in the blog post. The threat actor was observed leveraging LLMs mainly to research various satellite and radar technologies that could be relevant to Ukrainian military operations.

In addition, Microsoft told TechTarget Editorial that Forest Blizzard's LLM use indicated that the threat actor is exploring use cases of a new technology.

"Forest Blizzard used LLM technology to understand satellite communications protocols, radar technology and other specific technical parameters. The queries suggest an attempt to acquire in-depth knowledge of satellite capabilities," Microsoft said in an email.

The report emphasized that nation-state adversaries commonly use LLMs during the intelligence-gathering stage of an attack.

North Korean nation-state threat actor Emerald Sleet was observed using LLMs to research think tanks and experts on North Korea. The group also used the technology for "basic scripting tasks" as well as for generating spear phishing campaigns. Previous research into GenAI and phishing content showed mixed results, as some vendors found that LLMs did not make the emails more effective.

"Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies," Microsoft wrote in the report, adding that the group specifically researched a Microsoft Support Diagnostic Tool vulnerability tracked as CVE-2022-30190 and known as "Follina."

China-affiliated threat actor Charcoal Typhoon also used LLMs for technical research and to understand vulnerabilities. Microsoft noted how the group used GenAI tools to enhance scripting techniques "potentially to streamline and automate complex cyber tasks and operations," as well as for advanced operational commands.

Another China-backed threat actor known as Salmon Typhoon tested the effectiveness of LLMs for research purposes. "Notably, Salmon Typhoon's interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs," Microsoft wrote in the blog post. "This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies."

Microsoft and OpenAI observed Crimson Sandstorm, an Iranian threat group associated with the country's Islamic Revolutionary Guard Corps, using LLMs for social engineering support and troubleshooting errors, as well as for information on .NET development and evasion techniques on compromised systems.

Microsoft said "all accounts and assets" associated with the five nation-state groups have been disabled.

Will GenAI improve social engineering?

Microsoft's "Cyber Signals" report highlighted AI's effect on social engineering. The company expressed concern over how AI could be used to undermine identity proofing and impersonate a targeted victim's voice, face, email address or writing style. Improved accuracy in those areas could lead to more successful social engineering campaigns.

An attack against developer platform Retool last year highlighted the dangers of successful social engineering campaigns. After gathering key details about the victim organization, the attacker lured an employee to a spoofed MFA form and then impersonated a member of Retool's IT team in a vishing call to gain highly privileged internal access.

"Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning, particularly if attackers find AI technologies operating without responsible practices and built-in security controls," the report said.

For example, Microsoft found that email threats have already become more dangerous due to AI. The report noted that "there has been an influx of perfectly written emails" that contain fewer grammatical and language errors. To address the threat, Microsoft said it's working on capabilities to help identify a malicious email based on signals beyond its composition.
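Looking beyond composition could mean weighing signals such as sender authentication results. The minimal Python sketch below is a hypothetical illustration, not Microsoft's detection method: it checks an email's Authentication-Results header for SPF, DKIM and DMARC verdicts, signals that hold regardless of how polished the prose is, and the looks_suspicious policy is an assumption made for the example.

    from email import message_from_string

    def auth_signals(raw_email: str) -> dict:
        """Parse SPF/DKIM/DMARC verdicts from the Authentication-Results
        header, signals independent of how well the message is written."""
        msg = message_from_string(raw_email)
        header = (msg.get("Authentication-Results") or "").lower()
        return {
            "spf_pass": "spf=pass" in header,
            "dkim_pass": "dkim=pass" in header,
            "dmarc_pass": "dmarc=pass" in header,
        }

    def looks_suspicious(raw_email: str) -> bool:
        # Illustrative policy: flag any message whose sender fails SPF or
        # DMARC authentication, no matter how flawless its wording is.
        signals = auth_signals(raw_email)
        return not (signals["spf_pass"] and signals["dmarc_pass"])

In practice such header checks would be one signal among many, but they show how a well-written phishing email can still be caught on attributes the attacker's language model cannot control.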

Microsoft said it believes that understanding how AI could advance identity proofing is essential to combating fraud and social engineering attacks. The report also warned enterprises to be on alert for free trials or promotional pricing of services or products, which attackers use as social engineering lures for enterprise users.

"Because threat actors understand that Microsoft uses multifactor authentication (MFA) rigorously to protect itself -- all our employees are set up for MFA or passwordless protection -- we've seen attackers lean into social engineering in an attempt to compromise our employees," the report said.

One of Microsoft's recommendations to quell social engineering was continued employee education, because social engineering "relies 100 percent on human error." The report said education should focus on recognizing phishing emails, vishing and SMS-based phishing attacks. Microsoft also urged enterprises to apply security best practices for Microsoft Teams. Regarding defensive uses of generative AI, Microsoft recommended tools such as Microsoft Security Copilot, which launched last year and became generally available in November.

Microsoft published a list of actions, or "principles," to help mitigate the risks of nation-state threat actors and APTs using AI platforms. The principles include mandating transparency across the AI supply chain; continually assessing AI vendors and applying access controls; implementing "strict input validation and sanitization for user-provided prompts" in AI tools and services; and proactively communicating policies and potential risks around AI to employees.
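As a concrete illustration of the prompt-validation principle, the minimal Python sketch below screens user-supplied prompts before they reach a model. The length cap, the blocklist patterns and the reject-on-match policy are assumptions made for this example, not Microsoft's published implementation.

    import re

    # Hypothetical limits and patterns, for illustration only.
    MAX_PROMPT_LEN = 4000
    BLOCKED_PATTERNS = [
        re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
        re.compile(r"BEGIN (RSA|OPENSSH) PRIVATE KEY"),
    ]

    def sanitize_prompt(raw: str) -> str:
        """Validate and sanitize a user-provided prompt before it reaches
        an LLM; raises ValueError when the prompt should be rejected."""
        # Drop non-printable control characters that can smuggle hidden
        # instructions past human reviewers, keeping newlines and tabs.
        cleaned = "".join(ch for ch in raw if ch.isprintable() or ch in "\n\t")
        if not cleaned.strip():
            raise ValueError("empty prompt")
        if len(cleaned) > MAX_PROMPT_LEN:
            raise ValueError("prompt exceeds maximum length")
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(cleaned):
                raise ValueError("prompt matched a blocked pattern")
        return cleaned

A gateway in front of an AI service could call sanitize_prompt on every request and log rejections, which supports the report's related principles of access control and vendor assessment.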

While GenAI might lead to an increase in attack volume, Microsoft told TechTarget Editorial that the technology is simply a tool being used by threat actors, like many tools before it. However, it is likely to make them more effective in the future.

"Attackers' ability to use AI for accelerating and scaling threats is something we see on the horizon," Microsoft said.

Arielle Waldman is a Boston-based reporter covering enterprise security news.

Next Steps

Microsoft: Nation state activity blurring with cybercrime

FBI: Criminals using AI to commit fraud 'on a larger scale'
