New research from Group-IB found that threat actors are increasingly compromising ChatGPT accounts and could be using the access to collect sensitive information and stage additional targeted attacks.
Group-IB's report showed ChatGPT credentials have become a major target for nefarious activities over the last year. Because OpenAI's chatbot stores past user queries and AI responses by default, each account serves as an entry point for threat actors to access users' information, researchers warned.
"Their exposed information, be it personal or professional, may be at risk of being used for malicious purposes, such as identity theft, financial fraud, targeted scams, etc.," Dmitry Shestakov, head of threat intelligence at Group-IB, told TechTarget Editorial.
Over the past year, Group-IB researchers identified 101,134 devices infected with information stealers that held saved ChatGPT credentials. Group-IB's Threat Intelligence platform provided visibility into dark web communities, allowing researchers to find compromised ChatGPT credentials within the stealer logs that threat actors sell on illicit marketplaces. Most victims were located in the Asia-Pacific region.
"The number of stealer logs containing ChatGPT credentials rose consistently from June 2022 through March 2023, and the monthly figure for May 2023 was the highest on record," Shestakov said, with 26,802 compromised accounts discovered last month.
Most of the ChatGPT credentials were compromised by Raccoon, a notorious information stealer. In March 2022, Ukrainian national Mark Sokolovsky was arrested in the Netherlands and charged with operating Raccoon as a malware-as-a-service offering; he was later indicted by the U.S. Department of Justice.
Threat actors use information stealer malware to harvest credentials saved in infected browsers, along with data such as bank card details and cryptocurrency wallet information. The malware packages everything it extracts into a log file.
UPDATE 6/28: An OpenAI spokesperson sent the following statement to TechTarget Editorial: "The findings from Group-IB’s Threat Intelligence report is the result of commodity malware on people's devices and not an OpenAI breach. We are currently investigating the accounts that have been exposed. OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers."
Present and future risks
With the broad availability of ChatGPT, employees have increasingly used the chatbot to optimize their operational procedures. However, these new workplace integrations also open vectors through which confidential data can be exposed.
Management and sales agents might use ChatGPT to augment outgoing emails, for example. According to Shestakov, outbound emails containing confidential information such as proprietary product details and internal pricing structures can serve as a goldmine for cybercriminals if accessed.
Enterprises have also found ChatGPT valuable for cybersecurity purposes as the AI can aid developers in a variety of ways, including code refinement. Shestakov said those uses could create a risk for proprietary product code to be intercepted, leading to future security breaches.
"A more alarming prospect is that developers might unintentionally transmit the complete code, including all service credentials, which could compromise an entire infrastructure if fallen into the wrong hands," Shestakov said.
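One common mitigation for this risk is to scrub obvious credentials from source code before it is pasted into any external service. The sketch below is a minimal, hypothetical illustration of that idea (the `scrub_secrets` helper and its two patterns are examples for this article, not a Group-IB or OpenAI tool; production secret scanners use far larger rule sets):

```python
import re

# Hypothetical patterns for two common credential formats; real secret
# scanners ship hundreds of rules covering many providers.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def scrub_secrets(source: str) -> str:
    """Replace credential-looking strings with a placeholder before the
    code is shared outside the organization."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("[REDACTED]", source)
    return source
```

For example, `scrub_secrets('password = "hunter2"')` would return `[REDACTED]` in place of the assignment, so the literal value never reaches the chatbot.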
With ChatGPT account credentials compromised worldwide, Group-IB researchers recommend layered defenses to mitigate the risk of unauthorized access: strong passwords, multifactor authentication and timely software updates.
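The strong-password advice can be checked programmatically. Services such as Have I Been Pwned expose a k-anonymity "range" API: the client sends only the first five hex characters of a password's SHA-1 hash and matches the returned suffixes locally, so the password itself never leaves the machine. A minimal sketch of the client-side steps (the network call is omitted; the parsing assumes HIBP's documented `SUFFIX:COUNT` response format):

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent to
    the Pwned Passwords range API and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(body: str, suffix: str) -> int:
    """Parse a 'SUFFIX:COUNT' range response and return the breach count
    for our suffix (0 if absent)."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A nonzero count means the password has appeared in known breach corpora and should not be reused for a ChatGPT account.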
Despite the arrest of Sokolovsky, Raccoon and other information stealers remain a prevalent threat. Group-IB detected more than 96 million logs actively sold on underground markets between July 2021 and June 2022. While Shestakov expects ChatGPT users to remain targets, he said the broader tactic of harvesting personal data through stealers will persist as well.
"More and more cybercriminals are making use of stealer-obtained credentials that are offered for sale on the initial access broker market to then launch sophisticated cyber attacks, such as ransomware attacks," Shestakov said. "As ChatGPT continues to grow in popularity, we expect more accounts to appear in stealer logs."
Alexis Zacharakos is a student studying journalism and criminal justice at Northeastern University in Boston.