
ChatGPT plugin flaws introduce enterprise security risks

Insecure plugin design -- one of the top 10 LLM vulnerabilities, according to OWASP -- opens enterprises to attacks. Explore ChatGPT plugin security risks and how to mitigate them.

ChatGPT has established itself as the current standard for generative AI technology, encouraging a multitude of businesses to open their APIs for direct integration with the large language model.

More than a thousand third-party plugins -- which give the LLM access to third-party applications on users' behalf -- are available through ChatGPT's subscription-based plugin store. But while third-party ChatGPT plugins can significantly enhance productivity and efficiency, they also present unique security challenges for enterprises.

For example, researchers at API security vendor Salt Security recently discovered multiple critical vulnerabilities in ChatGPT plugins, which have since been remediated. While the researchers said they saw no evidence of exploitation, the flaws could have enabled threat actors to do the following:

  • Install malicious plugins.
  • Steal user credentials and take over user accounts on connected third-party apps, such as GitHub -- potentially giving attackers access to proprietary code repositories.
  • Access personally identifiable information and other sensitive data.

4 ChatGPT plugin security risks

Enterprises that enable ChatGPT plugin use should consider the following security and privacy issues.

1. Data privacy and confidentiality

As the integration of ChatGPT in the workplace increases and employees begin to incorporate it into their daily tasks, the primary risk is the potential exposure of confidential or proprietary enterprise information.

When employees use ChatGPT plugins to process or analyze internal company or customer data, there is always a risk that unauthorized third parties -- such as plugin developers, application providers or cloud infrastructure providers -- could access and use it, maliciously or otherwise.

2. Compliance risks

Enterprises often operate under strict regulatory frameworks that dictate how they should handle and protect sensitive data. The use of ChatGPT plugins -- particularly those that transmit data to third parties -- could violate regulations such as GDPR and HIPAA, which in turn could carry significant legal and financial consequences for the enterprise.

3. Dependency and reliability

Using external plugins for critical business operations introduces risks related to third-party vendor dependency. Unlike internally developed tools, which typically undergo thorough vetting, third-party ChatGPT plugins might receive far less scrutiny.

The growing marketplace for these plugins also encourages constant experimentation on the developer side. As such, users might be more likely to experience disruptions in service due to outages or changes in service terms. The long-term viability of the plugin could also be questionable, depending on the developer's commitment to maintaining it.

4. Introduction of new security vulnerabilities

ChatGPT plugins could potentially create new vulnerabilities within an enterprise's IT ecosystem and increase susceptibility to cyberattacks, either via bugs in the plugin itself or through flawed integrations with existing systems.

As mentioned above, Salt Security researchers found multiple security flaws related to ChatGPT plugins, including one vulnerability that they discovered during the plugin installation process.

During installation of a new plugin, ChatGPT redirects the user to the plugin's website to obtain an approval code. The user then submits that code back to ChatGPT, which automatically installs the plugin on the user's account.

The security flaw in question, however, enabled attackers to intercept this step and substitute their own approval code, tricking ChatGPT into installing a plugin tied to the attacker's credentials on the unsuspecting victim's account -- giving the attacker access to the victim's private information.

While this vulnerability has since been remediated, it illustrates how adopting a new ChatGPT plugin can introduce new security vulnerabilities into the environment.
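The approval-code flaw described above is a classic case of an authorization flow that doesn't bind the returned code to the session that initiated it. As a hedged illustration -- not ChatGPT's actual implementation -- the sketch below shows the standard countermeasure: issuing an HMAC-signed `state` token when the flow starts and rejecting any approval code that arrives without a matching token. The function names and token format are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side signing key; in practice this would be a
# managed, persistent secret, not generated at import time.
SERVER_SECRET = secrets.token_bytes(32)

def issue_state(session_id: str) -> str:
    """Create a tamper-evident state token bound to the initiating session."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SERVER_SECRET, f"{session_id}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def verify_state(session_id: str, state: str) -> bool:
    """Accept an approval code only if its state matches this session."""
    try:
        nonce, sig = state.split(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SERVER_SECRET, f"{session_id}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)
```

Because the token is signed over the session ID, an approval code captured from one user's flow cannot be replayed into another user's session -- the substitution attack the researchers described would fail the `verify_state` check.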

How to mitigate ChatGPT plugin security risks

To mitigate these plugin risks, enterprises should consider the following strategies and tactics.

Risk assessments

Conduct thorough risk assessments before adopting any ChatGPT plugins. This could mean monitoring independent third-party assessments and internally blocklisting risky plugins.

Additionally, periodically inventory and assess all plugins in use internally and check against known vulnerabilities and any updates issued. Alert employees accordingly.
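The periodic inventory check above can be automated. The following is a minimal sketch under assumed inputs -- the plugin names, versions, blocklist and vulnerability data are all invented for illustration; real data would come from your plugin registry and threat-intelligence feeds.

```python
# Hypothetical inventory of plugins in use internally, keyed by version.
INSTALLED_PLUGINS = {
    "web-browser": "1.2.0",
    "code-helper": "0.9.1",
    "pdf-reader": "2.0.3",
}

# Plugins flagged as risky by internal review or third-party assessments.
BLOCKLIST = {"code-helper"}

# (name, version) pairs with publicly disclosed vulnerabilities.
KNOWN_VULNERABLE = {("pdf-reader", "2.0.3")}

def audit_plugins(installed, blocklist, vulnerable):
    """Flag installed plugins that are blocklisted or known-vulnerable."""
    findings = []
    for name, version in installed.items():
        if name in blocklist:
            findings.append((name, "blocklisted"))
        if (name, version) in vulnerable:
            findings.append((name, f"known vulnerability in {version}"))
    return findings
```

Running such a check on a schedule, and alerting the affected employees on each finding, turns the one-time risk assessment into an ongoing control.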

Data privacy and security policies

Ensure any ChatGPT plugin in use internally complies with the company's data privacy and security policies. This could involve reaching out to the developer or provider of the plugin if the relevant information is not readily available. Exercise data deletion and retraction rights -- as provided under GDPR, for instance -- for any noncompliance.

User training and awareness

Because this is a new and rapidly evolving space, the rate of adoption of plugins could be quite fast. As such, security leaders should consider adding ChatGPT plugin security content to ongoing security awareness training curricula, even if employees haven't yet demonstrated interest in using such plugins.

Keep training memorable, impactful and relatively brief, so it stays top of mind among users.

Behavioral monitoring

Implement behavioral monitoring to track how data is being used and accessed through these plugins. While completely banning the use of ChatGPT inside the enterprise might be challenging, security leaders still need to constantly alert users to the dangers of sharing sensitive enterprise and customer data with LLMs and plugins.

In addition, organizations should consider taking the following steps:

  • Implement policies on their secure web gateways or security service edge platforms to identify the use of tools like ChatGPT.
  • Apply data loss prevention policies to identify what data is being submitted to these tools and their extended plugins.
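To make the DLP step above concrete, the sketch below shows the core idea of pattern-based outbound scanning. The patterns are deliberately simplistic and invented for illustration; production DLP engines use much richer detection (validated checksums, proximity rules, machine learning classifiers) and would sit inline on the secure web gateway rather than in application code.

```python
import re

# Illustrative detection patterns only -- not production-grade.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number shape
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),  # common secret-key prefix shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound prompt."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]
```

A gateway policy could block or flag any prompt for which `scan_outbound` returns a non-empty list before it ever reaches ChatGPT or a plugin.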

In summary, while ChatGPT plugins can offer powerful enhancements to enterprise operations, they come with a set of security challenges that need careful management. Enterprises must adopt a cautious and strategic approach to integrate these tools safely into their workflows.

Ashwin Krishnan is a technical writer based in California. He hosts StandOutin90Sec, where he interviews cybersecurity newcomers, employees and executives in short, high-impact conversations.
