CTA targets AI in healthcare with an AI standard
A team of more than 50 companies is working together to develop AI standards for healthcare. A recent effort was accredited by the American National Standards Institute.
Working with a group of more than 50 companies, the Consumer Technology Association is developing a set of AI standards in healthcare to create a framework that would, in part, build clinician trust in AI products.
Last week, a Consumer Technology Association (CTA) working group, created last April and including heavyweights like Amazon, Google and Microsoft, announced that its first AI standard had been released and accredited by the American National Standards Institute, a nonprofit organization that supports voluntary standards and represents 270,000 companies and organizations worldwide. The AI standard defines 11 AI-related terms in healthcare.
René Quashie, CTA's vice president of policy and regulatory affairs for digital health, said the association wanted to bring different industries and leaders together so that diverse perspectives would be included in the discussion. The group's first AI standard started with the basics: agreeing on what terms such as clinical decision support system and de-identified data mean. AI-related terms like these are often used in different ways by different organizations.
"We thought creating a common language would be incredibly helpful as we think through a lot of the other challenging issues involved in AI," he said. The AI standards work is part of CTA's AI initiative, which aims to address some concerns about AI, such as bias, ethics and trustworthiness.
While other countries, associations and federal regulators have also been developing guidelines and AI standards, Shruthi Parakkal, a consultant for market research firm Frost & Sullivan's transformation health team, said the CTA standard stands out because it is industry-led.
When market leaders are involved, companies are more likely to adopt voluntary standards like CTA's, which bring together the knowledge of different stakeholders to address common challenges in the market, according to Parakkal.
"When market leaders become the front-runners to adopt, other companies and market players will follow suit," she said.
The 52-member group will continue working to create two other standards on trustworthiness and data integrity to target challenges associated with AI in healthcare.
Creating AI standards for healthcare
Parakkal said with the increasing traction of AI in healthcare, it's time to develop AI standards, particularly around data security, safety and intended use. And it's not the first time an industry-led working group has come together to establish standards for the healthcare industry.
Parakkal cited Health Level Seven International, which initiated the Argonaut Project in 2014 with industry players such as Epic, Cerner, Meditech, Mayo Clinic and Intermountain Healthcare, to speed up the creation and adoption of a standardized API for the electronic exchange of healthcare information.
Together, they created the Fast Healthcare Interoperability Resources (FHIR) standard, a data format and API standard. FHIR is now a key component in a proposed rule from the Office of the National Coordinator for Health IT, which would require healthcare organizations to use FHIR-based APIs to give patients access to their data. The proposed rule could be approved any day now.
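For readers unfamiliar with FHIR, it represents clinical data as structured resources (typically JSON) that servers expose over a REST API, for example at endpoints like GET /Patient/{id}. The sketch below is illustrative only, with made-up patient values, but shows the general shape of a FHIR Patient resource and how a client might read fields from one:

```python
import json

# A minimal, illustrative FHIR Patient resource (not real data).
# A FHIR server would return JSON like this from a REST endpoint
# such as GET https://example.org/fhir/Patient/123
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-07-01"
}
"""

patient = json.loads(patient_json)

# FHIR allows multiple names per patient, so "name" is a list.
first_name = patient["name"][0]
full_name = f'{first_name["given"][0]} {first_name["family"]}'

print(full_name)              # the patient's display name
print(patient["birthDate"])   # ISO 8601 date string
```

Standardizing this kind of structure is what lets any patient-facing app read records from any compliant system without custom integration work.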
Similarly, Parakkal said the CTA AI standard could help establish an accepted set of terms and definitions and a foundation for AI in healthcare.
"It is important to remember that this is only a first step, or one of many steps, toward addressing challenges and complexities of AI in healthcare, but will definitely contribute to the momentum," she said.
Indeed, CTA's Quashie said the healthcare working group will continue working on the project and addressing challenges of AI in healthcare, such as clinician trust. The working group wants to devise a standard to explain how AI algorithms arrive at a decision and how that decision can be reproduced. Providing that kind of transparency could help clinicians trust AI systems, many of which today operate in a so-called black box.
The definitions are an important foundational step, Quashie said, but next steps like addressing AI's trustworthiness need to move forward.
Quashie said he's hopeful that vendors and clinicians alike will begin to use the standardized definitions and that the definitions will gain support from other groups as well.
"Our hope is that, like any language, over time when you say one word, everybody understands what you mean by that word," he said.