
What CISOs should know about DeepSeek cybersecurity risks

DeepSeek poses significant risks to U.S. enterprises -- even those that don't greenlight it for internal use. CISOs should take steps to reduce the threat.

As generative AI platforms like ChatGPT and Claude become embedded in enterprise workflows, a new class of large language models from China is also gaining traction globally. Among them, DeepSeek -- an open-source, bilingual Chinese-English LLM developed by DeepSeek AI -- is drawing attention for its advanced technical capabilities and its developers' claims that it works more cheaply and efficiently than U.S.-based rivals.

Yet, for cybersecurity leaders and IT risk managers, DeepSeek introduces a new spectrum of cybersecurity, privacy and compliance risks that demand immediate attention.

DeepSeek security risks

DeepSeek is a family of LLMs trained on trillions of tokens, with performance its developers claim is comparable to that of GPT-3.5 and GPT-4. Unlike many Western LLMs, DeepSeek is optimized for Chinese-English bilingual tasks and has gained popularity due to its open licensing and cost-effectiveness.

From a cybersecurity standpoint, DeepSeek stands out because it is developed and maintained in China, where data protection laws and oversight structures differ significantly from Western norms. Some versions are hosted on China-based cloud infrastructure and are therefore subject to Chinese laws -- notably the 2017 National Intelligence Law -- that require private companies to cooperate with state intelligence work.

Finally, because DeepSeek is open source and can be downloaded and run anywhere, enterprises can't easily detect its use, especially if it's integrated into internal tools or workflows. Let's delve deeper into some of the most important DeepSeek cybersecurity risks.

Cyberespionage and nation-state threats

DeepSeek's development in a jurisdiction with Chinese state-level monitoring requirements raises significant cyberespionage concerns. Any data submitted to DeepSeek APIs or hosted versions -- especially in regulated industries -- could be subject to surveillance under Chinese law.

China's Personal Information Protection Law, for example, grants the Chinese government exceptionally broad latitude to act in the name of protecting its citizens' data -- latitude that, by some readings, extends to installing Chinese monitoring software on other nations' servers.

Enterprise users unwittingly feeding DeepSeek sensitive data -- such as intellectual property, trade secrets, internal strategy documents and personally identifiable information -- could expose that information to unauthorized third-party access. It could, in turn, be used for targeted attacks or corporate intelligence gathering.

Data security and model leakage

DeepSeek, like other generative models, can retain patterns or tokens from training inputs or user interactions. This creates a risk of data leakage through model outputs, particularly when the model is used without strict safeguards. If it is fine-tuned on proprietary data or embedded in enterprise systems, memorization or prompt leakage may inadvertently expose that content.

In addition, shadow AI deployments -- say, by developers testing DeepSeek via GitHub repos or browser extensions -- could bypass traditional data loss prevention (DLP) and security information and event management (SIEM) controls.

Privacy and compliance risks

Use of DeepSeek in sectors governed by regulations such as GDPR, HIPAA and CCPA, or overseen by bodies such as FINRA, introduces various compliance liabilities, including the following:

  • Cross-border data transfer. Sending personal or health data to servers in China may violate regional data sovereignty requirements.
  • Lack of processing transparency. DeepSeek does not offer the same level of explainability, red-teaming disclosure or audit logs as Western enterprise LLMs.
  • Accountability gaps. Who is responsible if DeepSeek generates responses that are biased, incorrect or legally damaging? Most versions lack enterprise-grade indemnification.

Shadow AI and unmonitored use

Because DeepSeek is open source and freely available, developers or business users may experiment with it outside official IT channels. This creates shadow AI blind spots for CISOs and compliance teams. DeepSeek effectively broadens the attack surface, increasing the possibility of prompt injection or supply chain compromise. Finally, there is a risk that internal models could interface with untrusted external APIs.

Best practices for managing DeepSeek risks

To responsibly manage DeepSeek cybersecurity risks, organizations should adopt a multilayered strategy, including policy enforcement, proactive risk assessment, secure model hosting and the use of zero-trust principles -- all augmented by employee education and compliance governance.

  • Policy enforcement and discovery. Deploy endpoint detection tools and cloud access security brokers to identify unsanctioned DeepSeek use; the log-scanning sketch after this list illustrates one simple discovery approach. Extend AI usage policies to prohibit the unauthorized use of foreign-hosted models, naming DeepSeek explicitly.
  • Vendor and model risk assessment. If the organization plans to sanction DeepSeek use in any capacity, subject it -- before adoption -- to the same third-party risk assessments used for any external software or data processor. Consider information such as hosting location, data flow maps and the legal frameworks to which the model is subject.
  • Secure model hosting. DeepSeek, if sanctioned for internal use, should be self-hosted in an isolated, monitored environment. Implement data minimization, prompt sanitization and output monitoring to reduce leakage and bias risks; see the prompt-screening sketch after this list.
  • Zero trust, with emphasis on data segmentation. Apply zero-trust architecture principles to the integration of AI tools such as DeepSeek. Segregate access to sensitive data from AI systems unless explicitly required and approved. We strongly caution against giving DeepSeek AI models access to any sensitive data.
  • Employee education and governance. Last, but certainly not least, train staff on the risks of using unsanctioned AI tools and outline the implications for data security and compliance. Do this regardless of whether your company is considering sanctioned use of DeepSeek in its environment: it is better to assume some form of shadow DeepSeek adoption than to believe no one in your company will use it, only to discover employees did because nobody told them not to. Require formal review for any code libraries, prompts or plugins that leverage DeepSeek or other foreign-developed LLMs.
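
To make the discovery step concrete, the following minimal Python sketch flags outbound connections to DeepSeek-related hosts in a web proxy log. The domain list, log file path and CSV column names are illustrative assumptions, not a definitive blocklist; adapt them to the egress logs and threat intelligence your organization actually collects.

    # Minimal sketch: flag outbound requests to DeepSeek-related hosts in a
    # web proxy log. Domains, file path and column names are assumptions.
    import csv

    SUSPECT_DOMAINS = {"deepseek.com", "api.deepseek.com", "chat.deepseek.com"}

    def flag_shadow_ai(log_path: str) -> list[tuple[str, str, str]]:
        """Return (timestamp, user, host) rows that touched a suspect domain."""
        hits = []
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):  # assumes timestamp,user,host columns
                host = row["host"].lower()
                if any(host == d or host.endswith("." + d) for d in SUSPECT_DOMAINS):
                    hits.append((row["timestamp"], row["user"], host))
        return hits

    for ts, user, host in flag_shadow_ai("proxy_log.csv"):
        print(f"{ts}: {user} contacted {host}")

In practice, a CASB or SIEM query accomplishes the same thing at scale; the point is that DeepSeek discovery can start with egress data the organization already has.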
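Similarly, the prompt-screening sketch below shows one way a self-hosted gateway might reject obviously sensitive inputs before they reach the model. The regex patterns are illustrative assumptions only and are no substitute for dedicated DLP tooling.

    # Minimal sketch: block prompts containing obviously sensitive patterns
    # before they reach a self-hosted model. Patterns are illustrative only.
    import re

    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "internal_marker": re.compile(r"(?i)\b(confidential|trade secret)\b"),
    }

    def screen_prompt(prompt: str) -> str:
        """Raise ValueError if the prompt matches any sensitive pattern."""
        findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                    if pat.search(prompt)]
        if findings:
            raise ValueError(f"Prompt blocked; matched: {', '.join(findings)}")
        return prompt  # safe to forward to the model

    # Example: raises ValueError before the prompt ever reaches the model.
    screen_prompt("Summarize this confidential product roadmap.")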

DeepSeek represents the double-edged promise of open innovation in generative AI. While its capabilities appear impressive, its use in enterprise contexts introduces serious risks related to cyberespionage, data security and regulatory compliance, especially given its ties to Chinese infrastructure and laws.

Practitioners and business decision-makers must approach DeepSeek cybersecurity with caution and embed its evaluation into broader AI governance and risk management frameworks. As AI becomes an even bigger part of tomorrow's business processes, vigilance -- not velocity -- must guide any DeepSeek deployment.

Jerald Murphy is senior vice president of research and consulting with Nemertes Research. With more than three decades of technology experience, Murphy has worked on a range of technology topics, including neural networking research, integrated circuit design, computer programming and global data center design. He was also the CEO of a managed services company.
