
Shadow AI in healthcare: The hidden risk to data security

In addition to traditional shadow IT, shadow AI can make healthcare organizations vulnerable to patient data exposure and compliance troubles.

Shadow IT is software or hardware that a business unit adopts without the support or awareness of the organization's IT department. Left unchecked, shadow IT can leave organizations vulnerable to security risks. As AI adoption increases in healthcare and other industries, a new risk has emerged: shadow AI. Failure to notify IT teams of new AI tools can lead to governance gaps, functionality issues and legal troubles while heightening the risk of data breaches.

"What makes shadow AI particularly dangerous is its invisibility and autonomy," said Vishal Kamat, vice president of data security at IBM.

"These tools can learn, adapt and generate outputs without clear traceability or governance. For security leaders, the challenge isn't just identifying rogue tools, but understanding how they interact with sensitive workflows and data."

Understanding the risks of shadow AI can help healthcare organizations avoid common pitfalls and ensure that AI tools are implemented with the proper oversight.

Unpacking security, privacy risks of shadow AI

Shadow IT is an issue for IT teams everywhere, and healthcare is no exception. According to a 2025 survey conducted by enterprise healthcare operations software company symplr, 86% of IT executives reported instances of shadow IT in their health systems, up from 81% in 2024.

"Common instances of shadow IT often stem from employees trying to work more efficiently or collaborate faster," Kamat said. "These include the use of personal cloud storage, unauthorized messaging apps or unvetted project management platforms that fall outside the organization's approved ecosystem but still handle sensitive data or communications."

Shadow IT is driven by a desire to improve workflows, but it often creates disruptions instead. This holds true for shadow AI as well.

"Shadow AI introduces a deeper layer of risk," Kamat added. "Employees may deploy open-source LLMs within enterprise cloud environments, use AI code assistants without oversight or upload confidential patient data to public generative AI platforms. These actions not only bypass security controls but also expose organizations to data leakage, model misuse and regulatory violations."

IBM's 2025 "Cost of a Data Breach" report -- in which healthcare ranked as the costliest industry for data breaches for the 14th consecutive year -- explored the growing prevalence of shadow AI. The report found that 20% of surveyed organizations across all sectors had suffered a breach stemming from security incidents involving shadow AI, a share 7 percentage points higher than that of breaches involving sanctioned AI.

What's more, organizations with high levels of shadow AI reported higher breach costs, with shadow AI adding roughly $200,000 to the global average breach cost. Shadow AI also displaced the security skills shortage as one of the top three costliest breach factors, IBM found.

"Even well-intentioned experimentation with unsanctioned tools can unleash serious security and compliance risks, especially in healthcare, where patient data breaches, algorithmic bias in diagnostics, and violations of HIPAA regulations can have life-threatening consequences," Kamat stated.

Without the proper guardrails, employees might inadvertently expose protected health information (PHI) by using an unsanctioned tool, leading to data privacy violations. Customers' personally identifiable information was the most compromised data type in shadow AI incidents, IBM's report revealed. Additionally, intellectual property was compromised in 40% of the tracked shadow AI incidents.
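
One way such a guardrail can work in practice is a pre-submission check that scans outbound text for PHI markers before it reaches a public generative AI service. Below is a minimal Python sketch of that idea; the regex patterns and the flag_phi helper are illustrative assumptions, not a production data loss prevention system, and real PHI detection would need far broader coverage than a handful of patterns.

```python
import re

# Minimal sketch of a DLP-style pre-submission check.
# The patterns below are illustrative only; real PHI detection
# requires much broader coverage (names, dates, addresses, and
# medical record number formats that vary by institution).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def flag_phi(text: str) -> list[str]:
    """Return the names of any PHI patterns found in outbound text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    # Hypothetical prompt an employee might paste into a public AI tool.
    prompt = "Summarize this chart for patient MRN: 00482913, DOB: 04/12/1987"
    hits = flag_phi(prompt)
    if hits:
        print(f"Blocked: possible PHI detected ({', '.join(hits)})")
```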

"The core issue is visibility: when security teams lack awareness of AI tools in use, they're effectively blindfolded. They can't assess risk, enforce policy or ensure accountability," Kamat added.

"In healthcare, this invisibility is especially dangerous as it means sensitive patient data could be processed by unvetted models, clinical decisions could be influenced by unvalidated algorithms and compliance violations could go unnoticed until it's too late."

Practical strategies for preventing shadow AI

Implementing AI tools can help healthcare organizations work more efficiently, transforming provider workflows and even speeding up the revenue cycle process. However, careful implementation is the key to realizing the full benefits of AI in healthcare and avoiding instances of shadow AI.

"To manage the growing risks of shadow IT and shadow AI, healthcare security leaders must begin with visibility," Kamat emphasized. "This means deploying tools that continuously detect unauthorized applications, AI usage and data flows, especially those involving sensitive patient information."

Once unauthorized tools are identified, Kamat said, they should be assessed for risk and brought into a formal review process to ensure compliance.

"Communication is equally critical," he added. "Simply having approved AI tools or policies isn't enough. Employees must be aware of them and understand how to use them responsibly. Without consistent communication, even well-designed policies can be bypassed or ignored."

More than 60% of organizations across all sectors included in IBM's report said that they did not have governance policies in place to manage AI or detect shadow AI. Implementing proper AI access controls and performing regular audits to detect unsanctioned AI use can help organizations reduce the risk of data breaches and maintain compliance with privacy and security regulations.
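
At its core, an AI access control of this kind can be as simple as an egress allowlist that permits AI traffic only to vetted endpoints and logs everything else for audit. The short sketch below illustrates the principle; the sanctioned hostnames are hypothetical, and in practice the logic would live in a proxy or gateway policy rather than in application code.

```python
# Minimal sketch of allowlist-based AI access control. The sanctioned
# hostnames are hypothetical examples; in practice this decision would
# be enforced by an egress proxy or secure web gateway policy.
SANCTIONED_AI = {
    "azure-openai.internal.example.org",
    "approved-llm.example-health.org",
}

def is_permitted(dest_host: str) -> bool:
    """Allow outbound AI traffic only to sanctioned, vetted endpoints."""
    return dest_host.lower() in SANCTIONED_AI

for host in ["approved-llm.example-health.org", "chat.openai.com"]:
    status = "allow" if is_permitted(host) else "deny and log for audit"
    print(f"{host}: {status}")
```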

"After all, in healthcare, shadow AI isn't just a technology risk, it's a compliance and patient safety threat," Kamat said. "With strict mandates like HIPAA and growing public scrutiny, proactive governance is essential, not only to meet regulatory requirements but also to maintain patient trust."

Jill McKeon has covered healthcare cybersecurity and privacy news since 2021.
