
New AI method mitigates bias in health datasets
Mount Sinai researchers' new AI method has the potential to significantly reduce bias in the datasets used to develop health AI tools, which would, in turn, make those tools more equitable.
Mount Sinai researchers have developed a new AI method to reduce biases in the data used to train machine-learning algorithms, which could help improve diagnostic accuracy overall.
The research team detailed the development of the method in the Journal of Medical Internet Research. Called AEquity, the method leverages a learning curve approximation to identify and curb bias via guided dataset collection or relabeling. Using various machine-learning models, they tested AEquity on different types of health data, including images, patient records and the National Health and Nutrition Examination Survey.
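The paper describes AEquity only at a high level, and its code is not reproduced here. As a rough illustration of the general idea behind learning-curve-guided data collection, the sketch below fits a power-law learning curve to each subgroup's validation error and extrapolates it to a larger sample size, flagging the subgroup whose projected performance lags as the priority for further data collection. The function names, the power-law form and the sample-size targets are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; AEquity's actual implementation is not public here.
# Idea: fit a learning curve err(n) ~ a * n**(-b) + c per subgroup, extrapolate
# to a larger sample size, and direct data collection toward the subgroup whose
# projected error remains highest.

import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Classic learning-curve form: error decays as a power of sample size.
    return a * np.power(n, -b) + c

def projected_error(sample_sizes, errors, target_n):
    # Fit the curve to observed (sample size, validation error) pairs and
    # extrapolate to target_n. Bounds keep the fit physically plausible.
    params, _ = curve_fit(
        power_law, sample_sizes, errors,
        p0=(1.0, 0.5, 0.05),
        bounds=([0.0, 0.0, 0.0], [np.inf, 2.0, 1.0]),
        maxfev=10_000,
    )
    return power_law(target_n, *params)

def prioritize_collection(curves_by_group, target_n=50_000):
    # Rank subgroups by extrapolated error; collect more data for the worst.
    projections = {
        group: projected_error(np.array(ns), np.array(errs), target_n)
        for group, (ns, errs) in curves_by_group.items()
    }
    return sorted(projections.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Synthetic learning curves for two hypothetical subgroups.
    rng = np.random.default_rng(0)
    ns = np.array([500, 1_000, 2_000, 4_000, 8_000])
    group_a = 0.8 * ns**-0.4 + 0.05 + rng.normal(0, 0.003, ns.size)
    group_b = 0.8 * ns**-0.2 + 0.12 + rng.normal(0, 0.003, ns.size)  # slower learner
    ranking = prioritize_collection({"group_a": (ns, group_a), "group_b": (ns, group_b)})
    for group, err in ranking:
        print(f"{group}: projected error at 50k samples = {err:.3f}")
```

Power-law fits are one standard way to model learning curves; other curve families could be substituted without changing the overall approach.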
The researchers found that using AEquity to guide data collection for each diagnostic finding in a chest radiograph dataset decreased bias by between 29% and 96.5%. Additionally, AEquity-guided data collection reduced bias by up to 80% on mortality prediction with the National Health and Nutrition Examination Survey.
The study authors concluded that they had demonstrated AEquity "is a robust tool by applying it to different datasets, algorithms, and intersectional analyses and measuring its effectiveness with respect to a range of traditional fairness metrics."
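The article does not specify which fairness metrics the authors used. For readers unfamiliar with such metrics, the brief example below, with made-up predictions and group labels, computes one common choice: the equalized-odds gap, which is zero when a model's true- and false-positive rates match across groups.

```python
# Illustrative example of one "traditional fairness metric": the equalized-odds
# gap (largest across-group difference in true-positive or false-positive rate).
# The data and function name here are made up for demonstration.

import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    # Returns the worst across-group disparity; 0.0 means parity.
    gaps = []
    for rate_on in (1, 0):  # TPR when y_true == 1, FPR when y_true == 0
        rates = []
        for g in np.unique(group):
            mask = (group == g) & (y_true == rate_on)
            rates.append(y_pred[mask].mean())
        gaps.append(max(rates) - min(rates))
    return max(gaps)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equalized_odds_gap(y_true, y_pred, group))  # 0.5 on this toy data
```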
They also noted that AEquity could be used during algorithm development and in audits before deployment to improve fairness in health AI.
"Tools like AEquity are an important step toward building more equitable AI systems, but they’re only part of the solution,” said Girish N. Nadkarni, M.D., the study's senior corresponding author and chief AI officer of the Mount Sinai Health System, in the press release. "If we want these technologies to truly serve all patients, we need to pair technical advances with broader changes in how data is collected, interpreted, and applied in health care. The foundation matters, and it starts with the data."
Bias in health AI tools is a persistent problem. Any AI model can exhibit bias as a result of the data it is trained on. Thus, healthcare providers must understand the biases in the tools that are rapidly being integrated into care delivery.
A study published in May found that generative AI models may recommend different treatments for the same medical condition based solely on a patient's sociodemographic background, which could result in health inequities.
Though the Trump administration has taken a hands-off approach to ensuring AI trustworthiness, industry groups are working to fill in the gaps. For instance, groups like URAC, the Joint Commission and the Coalition for Health AI are creating resources, guidelines and accreditation processes to ensure safe, fair and equitable AI development.
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.