
Evaluating AI tools for healthcare cybersecurity in a saturated market

As AI-powered tools continue to flood the market, thorough risk assessments are crucial for ensuring that organizations choose the right products and partners.

Companies touting AI-powered cybersecurity tools have made big promises to security leaders across all industries -- from enhanced threat detection to improved vulnerability management and faster incident response times.

In healthcare, an industry that faces sophisticated cyberthreats and increasingly tight budgets, AI tools have the potential to streamline workflows and enhance efficiency. Of course, engaging with the right vendors is crucial for reducing risk rather than contributing to it.

As such, healthcare leaders are evaluating AI-driven tools as they would any other emerging technology, with careful consideration of the vendor's risk profile, experts shared at a virtual Healthcare Dive event held on Nov. 5, 2025, titled "How healthcare can prepare for cyberattacks."

Key considerations for engaging with emerging technology

Panelists discussed how they evaluate vendors in an era of AI hype, as cyberattacks against the healthcare sector continue to become increasingly sophisticated.

"I think that certainly there's been just an explosion of cyber-related risk in healthcare. I think it's happened in multiple industries, but healthcare typically has been behind in cybersecurity and cybersecurity protections," Heather Costa, director of technology resilience at Mayo Clinic, said during a panel session.

"We have not been at the forefront of those things historically. The advent of emerging technologies certainly has complicated that, both from the positive and negative sides of things."

Panelists agreed that AI technology holds promise, particularly in detecting threats. What's more, many health systems have already embraced some AI-powered tools.

"Many of the security tools today already have embedded AI capabilities," said panelist Sanjeev Sah, senior vice president of enterprise technology services and chief information security officer (CISO) at Novant Health.

"Think about the number of security events and incidents that an organization faces. They can go from thousands to millions in a short time period. How you rationalize that needs to be automated, it needs to be enabled through a rationalization process, and AI gives an advantage in being able to determine that, pinpoint it and focus on that particular activity that warrants our attention right away."

AI has its pros and cons, just like any emerging technology, Costa added, stressing the importance of evaluating new technology from both security and operational perspectives. Governance, proper controls and ethics are all critical parts of the conversation when evaluating any new vendor.

"I often remind folks that AI is an emerging technology. The assembly line, at some point in time, was an emerging technology," Costa said. "So, all of these things have a place, but we have to make sure that we're thinking about it in terms of whether we have the right leadership, people and processes in place first."

Assessing prospective vendors: Balancing value with risk

Given the sensitivity of healthcare data and a healthcare organization's responsibility to ensure patient safety, bringing new vendors into an organization's ecosystem requires careful consideration.

Panelist William Scandrett, CISO at Allina Health, noted the value of evaluating an AI company's economic viability first.

"There are a lot of startups, and there are a lot of fly-by-night AI companies and whatnot that are just created out of the blue, and everybody wants to get a piece of that. And when the industry moves very fast, we tend to let down our guard from a controls perspective," Scandrett said.

"So, one of the things that we look at outside of normal security hygiene, we look at company viability -- 10-Ks and 10-Qs. Is a company actually sustainable? Have they been in existence for more than a month? And you'll be surprised at how many of them haven't."

Branding things as AI when they are not actually using AI adds further complication to the situation, Scandrett noted.

"We have to kind of get in there and figure out, is this really AI or is this just marketing or rebranding some type of tool because it sells better if it has an AI moniker attached to it?" he said.

In addition to assessing the viability and legitimacy of any given tool, healthcare organizations must consider the operational risks associated with bringing a new partner into the ecosystem.

"It's about viability, but it's also about their operations and the type of incidents or events that have happened in the past," Sah noted. "What is their mechanism for monitoring? How do they ensure that their security practices are sound? We look at all of these elements before we engage with a partner. If they do not meet our baseline requirements, we look for other options, frankly."

The panelists also stressed the importance of working across departments to assess risk and determine what vendors and products are right for the organization's specific needs.

Interested in learning more? Sign up to watch the free event on-demand.

Disclosure: Healthcare Dive is owned by Informa TechTarget, the parent company of Xtelligent. Informa TechTarget has no influence over Xtelligent's coverage.

Jill McKeon has covered healthcare cybersecurity and privacy news since 2021.
