
HSCC previews upcoming AI cybersecurity guidance

The AI cybersecurity guidance will focus on the interdependencies between several critical AI workstreams, including governance, third-party AI risk, and education and enablement.

The Health Sector Coordinating Council (HSCC) has released a preview of its upcoming guidance on AI cybersecurity, scheduled for release in the first quarter of 2026. As AI cybersecurity risks continue to evolve in healthcare, the HSCC is calling on healthcare organizations to adopt its best practices and share this guidance with their peers.

The guidance documents will be the first product of the HSCC Cybersecurity Working Group's (CWG) AI Cybersecurity Task Group, which was formed in October 2024 and comprises 115 healthcare organizations.

The task group divided AI cybersecurity issues into five critical workstreams:

  • Education and enablement
  • Cyber operations and defense
  • Governance
  • Secure by design
  • Third-party AI risk and supply chain transparency

HSCC's newly released preview of its upcoming guidance consists of a one-page summary of each of the five workstreams, detailing each subgroup's objectives, key focus areas, deliverables and outcomes.

Under the education and enablement workstream, task group members have been working to define key AI terminology, understand the use and risks of AI in healthcare and explore AI and machine learning fundamentals.

The cyber operations and defense subgroup is focusing on creating practical playbooks to help healthcare organizations respond to AI-related cyber incidents, which are increasing in scope and scale across all industries. This group plans to integrate AI-specific risk assessments into existing cybersecurity frameworks and craft best practices for business continuity and regulatory compliance amid an AI-driven cyber incident.

The governance workstream aims to map AI governance controls to legal and regulatory requirements, helping healthcare organizations assess their capabilities and identify gaps through an AI governance maturity model.

The secure by design group is specifically focused on developing principles tailored for the use of AI-enabled medical devices.

"This includes delivering practical guidance and tools that empower medical device manufacturers and stakeholders to embed cybersecurity throughout the entire product lifecycle," the guidance preview states.

Goals of this group include enabling cross-functional collaboration, supporting the integration of an AI Bill of Materials and a Trusted AI Bill of Materials, and developing an AI security risk taxonomy.

Lastly, the third-party AI risk and supply chain transparency workstream focuses on monitoring third-party AI tools and evaluating them for security and privacy risks in accordance with established frameworks. This group aims to bridge the gap between healthcare organizations and vendors, fostering collaboration and emphasizing fairness and human oversight.

As healthcare organizations continue to prioritize AI adoption, careful consideration of security risks will prove crucial for success, the AI task group suggests.

"Together, we can ensure that innovation in healthcare is matched by a steadfast commitment to patient safety, data privacy and operational resilience," HSCC stated.

Jill McKeon has covered healthcare cybersecurity and privacy news since 2021.
