
As genAI usage increases, health data exposure concerns rise: report

Healthcare organizations are increasingly concerned about shadow AI and data protection as generative AI becomes part of clinical and administrative workflows.

Healthcare employees are embracing generative AI tools, integrating them into clinical, operational and administrative settings. However, as genAI maturity increases, healthcare organizations have taken steps to curb the use of personal genAI applications to reduce shadow AI risks and protect sensitive data, according to Netskope Threat Labs' new threat report on the healthcare sector. 

Netskope analyzed anonymized usage data from its platform and found that over the past year, the use of personal genAI applications dropped from 82% to 32% in healthcare. 

Meanwhile, adoption of organization-managed genAI tools rose from 12% to 56%, indicating that healthcare organizations are encouraging the use of controlled, secure tools and discouraging public-facing tools that pose data security risks. 

Despite the rise in organization-managed genAI tools, the proportion of users switching back and forth between personal and enterprise accounts rose from 5% to 10% over the past year. 

"This trend suggests that organizations still have work to do to match the convenience, accessibility, and features that users expect, even as managed platforms become more widely adopted," the report stated.  

In the past year, ChatGPT has remained the frontrunner in healthcare, with 68% of organizations using it. However, Microsoft Copilot (63%) and Google Gemini (57%) are gaining momentum, and more specialized tools like Google NotebookLM saw increases as well. 

"Overall, these trends reflect a diversifying genAI ecosystem in the healthcare sector, as organizations expand beyond early leaders and adopt a broader mix of integrated and specialized solutions," the report stated. 

As genAI adoption increases, data protection is becoming an even higher priority. Netskope found that among data policy violations in healthcare, regulated data accounted for 89% of incidents. This is far higher than the global average of 31%. Source code and intellectual property accounted for 5% each, and passwords and keys accounted for 1%. 

What's more, many organizations see genAI tools as a potential risk to data security, leading them to block specific applications. ZeroGPT is the most frequently blocked genAI app in healthcare, blocked by 63% of organizations, followed by Particular Audience at 52%. 

"These patterns indicate that healthcare organizations are not only reacting to risks posed by specific applications but are also reinforcing broader governance strategies to ensure genAI usage aligns with strict privacy, security, and compliance requirements," the report noted. 

GenAI is not the only type of personal tool healthcare employees use in the workplace. Email and file-sharing platforms are also widely used across healthcare, often for legitimate purposes such as collaboration, networking and communication. However, all these tools open organizations up to potential security risks and require strong governance. 

"With the growing use of genAI tools, both managed and personal, and the misuse of personal cloud apps, it is essential to strengthen visibility, refine policies, and prioritize proactive defenses to protect your organization in this fast-changing threat landscape," the report stated. 

As such, the researchers recommended that healthcare organizations improve their security posture by inspecting all HTTP and HTTPS downloads to prevent malware, blocking access to apps that serve no legitimate business purpose, and using data loss prevention (DLP) policies to detect when sensitive information is sent to personal apps. 
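The DLP recommendation above can be illustrated with a minimal outbound-content check. This is a hypothetical sketch, not Netskope's implementation: the pattern names, regexes and function names are illustrative assumptions, and real DLP engines use far richer detectors for regulated health data.

```python
import re

# Illustrative detectors only (assumed for this sketch); production DLP
# policies cover many more categories of regulated data.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US Social Security number
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I),      # medical record number
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),           # AWS access key ID
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def allow_upload(text: str, destination_is_personal: bool) -> bool:
    """Block uploads to personal apps when sensitive patterns are detected."""
    if destination_is_personal and scan_outbound(text):
        return False
    return True
```

A check like this would sit at the egress point, flagging regulated data (the 89% category in the report) before it reaches a personal genAI or file-sharing app.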

Jill Hughes has covered health tech news since 2021.
