arthead - stock.adobe.com

Tip

How to manage generative AI security risks in the enterprise

Despite its benefits, generative AI poses numerous -- and potentially costly -- security challenges for companies. Review possible threats and best practices to mitigate risks.

The rapid adoption of generative AI models after the launch of ChatGPT promises to radically change how enterprises do business and interact with customers and suppliers.

Generative AI can support a wide range of business needs, such as writing marketing content, improving customer service, generating source code for software applications and producing business reporting. The numerous benefits of generative AI tools -- especially reduced costs and enhanced work speed and quality -- have encouraged enterprises and individuals alike to test such tools' capabilities in their work.

However, as with any emerging technology, rapid implementation could be risky, opening the door to threat actors to exploit organizations' vulnerabilities. In today's complex IT threat landscape, using generative AI tools without careful consideration could result in catastrophic consequences for enterprises.

Security risks associated with using generative AI in enterprise environments

Understanding the potential risks of using generative AI in an enterprise context is crucial to benefiting from this technology while maintaining regulatory compliance and avoiding security breaches. Keep the following risks in mind when planning a generative AI deployment.

1. Employees exposing sensitive work information

In enterprise environments, users should be cautious about any piece of data they share with others -- including ChatGPT and other AI-powered chatbots.

A noticeable recent incident is the data leakage caused by Samsung employees who shared sensitive data with ChatGPT. Engineers at Samsung uploaded confidential source code to the ChatGPT model, in addition to using the service to create meeting notes and summarize business reports containing sensitive work-related information.

The Samsung case is just one highly publicized example of leaking sensitive information to AI-powered chatbots. Many other companies and employees using generative AI tools could make similar mistakes by revealing sensitive work information, such as internal code, copyrighted materials, trade secrets, personally identifiable information (PII) and confidential business information.

OpenAI's standard policy for ChatGPT is to keep users' records for 30 days to monitor for possible abuse, even if a user chooses to turn off chat history. For companies that integrate ChatGPT into their business processes, this means employees' ChatGPT accounts might contain sensitive information. Thus, a threat actor who successfully compromises employees' ChatGPT accounts could potentially access any sensitive data included in those users' queries and the AI's responses.

2. Security vulnerabilities in AI tools

Like any other software, generative AI tools themselves can contain vulnerabilities that expose companies to cyberthreats.

In March, for example, OpenAI took ChatGPT offline to fix a bug in the chatbot's open source library that had enabled some users to see chat titles from another active user's chat history. It was also possible to see the first message of a newly created conversation in someone else's chat history if both users were active around the same time.

In addition, the same bug revealed the payment-related information of 1.2% of ChatGPT Plus subscribers who were active during a specific time period, including the customers' first and last name, email address, and last four credit card number digits. In another recent incident, the cyberintelligence firm Group-IB found more than 100,000 compromised ChatGPT accounts advertised for sale on dark net marketplaces.

3. Data poisoning and theft

Generative AI tools must be fed with massive amounts of data to work properly. This training data comes from various sources, many of which are publicly available on the internet -- and, in some cases, could include an enterprise's previous interactions with clients.

In a data poisoning attack, threat actors could manipulate the pre-training phase of the AI model's development. By injecting malicious information into the training data set, adversaries could influence the model's prediction behavior down the line, leading to false or otherwise harmful responses.

Another data-related risk involves threat actors stealing the data set used to train a generative AI model. Without sufficient encryption and controls around data access, any sensitive information contained in a model's training data could become visible to attackers who obtain the data set.

4. Breaching compliance obligations

When using AI-powered chatbots in enterprise environments, IT leaders should evaluate the following risks related to violating relevant regulations:

  • Incorrect responses. AI-powered tools sometimes give false or superficial answers. Exposing customers to misleading information could give rise to legal liability in addition to negatively affecting the enterprise's reputation.
  • Data leakage. Employees could share sensitive work information, including customers' PII or protected health information (PHI), during conversations with an AI chatbot. This, in turn, could violate regulatory standards such as GDPR, PCI DSS and HIPAA, risking fines and legal action.
  • Bias. AI models' responses sometimes demonstrate bias on the basis of race, gender or other protected characteristics, which could violate anti-discrimination laws.
  • Breaching intellectual property and copyright laws. AI-powered tools are trained on massive amounts of data and are typically unable to accurately provide specific sources for their responses. Some of that training data might include copyrighted materials, such as books, magazines and academic journals. Using AI output based on copyrighted works without citation could subject enterprises to legal fines.
  • Laws concerning chatbot use. Many enterprises have begun integrating ChatGPT and other generative AI tools into their applications, with some using AI-powered chatbots to answer their customers' inquiries immediately. But doing so without informing customers in advance risks penalties under statutes such as California's bot disclosure law.
  • Data privacy. Some enterprises might want to develop their own generative AI models, a process likely to involve collecting large amounts of training data. If threat actors successfully breach enterprise IT infrastructure and gain unauthorized access to training data, the resulting exposure of sensitive information contained in compromised data sets could violate data privacy laws.
Generative AI's key business challenges affect people, processes and technology.
Besides security, generative AI also poses challenges such as algorithmic bias, hallucinations and technical complexity.

Best practices for security when using generative AI tools in the enterprise

To address the numerous security risks associated with generative AI, enterprises should keep the following strategies in mind when implementing generative AI tools.

1. Classify, anonymize and encrypt data before building or integrating generative AI

Enterprises should classify their data before feeding it to chatbots or using it to train generative AI models. Determine which data is acceptable for those use cases, and do not share any other information with AI systems.

Likewise, anonymize sensitive data in training data sets to avoid revealing sensitive information. Encrypt data sets for AI models and all connections to them, and protect the organization's most sensitive data with robust security policies and controls.

2. Train employees on generative AI security risks and create internal usage policies

Employee training is the most critical protective measure to mitigate the risk of generative AI-related cyber attacks. To implement generative AI responsibly, organizations must educate employees about the risks associated with using this technology.

Organizations can set guidelines for generative AI use at work by developing a security and acceptable use policy. Although specifics will vary from organization to organization, a general best practice is to require human oversight. Don't automatically trust content generated by AI; humans should review and edit everything AI tools create.

AI use and security policies should also specifically mention what data can be included in queries to chatbots and what is not permitted. For example, developers should never feed intellectual property, copyrighted materials, PII or PHI into AI tools.

3. Vet generative AI tools for security

Conduct security audits and regular penetration testing exercises against generative AI tools to identify security vulnerabilities before deploying them into production.

Security teams can also train AI tools to recognize and withstand attack attempts by feeding them with examples of cyber attacks. This reduces the likelihood that a hacker will successfully exploit the organization's AI systems.

4. Govern employees' access to sensitive work data

Apply the principle of least privilege within enterprise environments, enabling only authorized personnel to access AI training data sets and the underlying IT infrastructure.

Using an identity and access management tool can help centralize and control employees' access credentials and rights. Likewise, implementing multifactor authentication can help safeguard AI systems and data access.

5. Ensure underlying networks and infrastructure are secure

Deploy AI systems on a dedicated network segment. Using a separate network segment with restricted access to host AI tools enhances both security and availability.

For organizations hosting AI tools in the cloud, select a reputable cloud provider that implements strict security controls and has valid compliance certifications. Ensure all connections to and from cloud infrastructure are encrypted.

6. Keep an eye on compliance requirements, including regularly auditing vendors

Compliance regulations are constantly evolving, and with the uptick in enterprise AI adoption, organizations will likely see more compliance requirements related to generative AI technology.

Enterprises should closely monitor compliance regulations affecting their industry for any changes related to the use of AI systems. As part of this process, when using AI tools from a third-party vendor, regularly review the vendor's security controls and vulnerability assessments. This helps ensure any security weaknesses in the vendor's systems do not traverse into the enterprise's IT environment.

Next Steps

The implications of generative AI for trust and safety

The data privacy risks of third-party enterprise AI services 

Adversarial machine learning: Threats and countermeasures

GenAI development should follow secure-by-design principles

Dig Deeper on Enterprise applications of AI

Business Analytics
CIO
Data Management
ERP
Close