
Epic clinical decision support tools show modest performance

Epic's integrated clinical decision support tools, including its sepsis and readmission risk models, show limited performance in real-world settings, according to a meta-analysis.

A recent study shows that Epic's clinical decision support tools demonstrate modest real-world performance, which could result in false positives or missed cases in clinical applications such as sepsis detection or intensive care unit transfers.

Published in the Journal of General Internal Medicine, the study examined the clinical decision support (CDS) tools integrated with the Epic EHR. Epic, which accounted for 42% of the 2024 U.S. acute care market, trained several of its own predictive machine-learning models for its customers, including a sepsis prediction model, a deterioration index for unplanned intensive care unit (ICU) transfers or death on the hospital floor and a 30-day readmission risk score.

Though the tools are widely used, there has been no systematic evaluation of their real-world performance, especially in diverse populations and healthcare settings, the study authors wrote. Their study aims to fill that gap.

The researchers from Northwell Health conducted a systematic review, searching for external validations of Epic's CDS tools on the PubMed, Scopus and Embase databases from January 2018 to August 2025. They included 22 studies in the review, covering the use of the CDS tools across 2.3 million patients and 34 sites. 

The meta-analysis shows that none of the models achieved a mean area under the receiver operating characteristic curve (AUROC) greater than 0.79. The AUROC measures how well a classification model distinguishes between two classes, with 0.5 indicating chance performance and 1.0 indicating perfect discrimination.
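The AUROC the study reports can be understood as the probability that a model assigns a higher risk score to a randomly chosen positive case than to a randomly chosen negative one. A minimal sketch, using illustrative data rather than anything from the study:

```python
def auroc(scores, labels):
    """Compute AUROC by pairwise comparison: the fraction of
    positive/negative pairs where the positive case scores higher
    (ties count as half)."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical risk scores and outcomes (1 = event occurred).
# A perfect model would score every positive above every negative
# (AUROC 1.0); a random one would hover around 0.5.
risk_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes    = [1,   1,   0,   1,   0,   0]
print(round(auroc(risk_scores, outcomes), 2))  # → 0.89
```

On this scale, the 0.62–0.79 range found in the meta-analysis sits well below the near-perfect discrimination hospitals might assume from vendor-reported figures.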

The Epic Deterioration Index achieved an AUROC of 0.79, while the Epic Sepsis Model achieved an AUROC of 0.65 and the Epic Unplanned Readmission Model an AUROC of 0.70. The study also assessed Epic's End-of-Life Care Index, which achieved an AUROC of 0.76, and the Epic Risk of Patient No-Show model, which achieved an AUROC of 0.62.

The study also found that the mean AUROC across studies included in the analysis is consistently lower than the numbers reported by Epic. Three of the models, the Epic Sepsis Model, Epic Unplanned Readmission Model and Epic End-of-Life Care Index, underperformed Epic's reported confidence intervals, the ranges within which the true value would be expected to fall if the analysis were repeated many times. Epic's intervals were consistently higher than the study's pooled estimates.

While the researchers noted several study limitations, including significant variability in study designs, populations and methods, they concluded that the study findings highlight the need for "local validation and customization to specific populations."

"High heterogeneity among studies and the impact of threshold selection mean hospitals must carefully tailor models and operating points for optimal patient care," they wrote.  

In a statement shared with Health IT and EHR, an Epic spokesperson noted that the meta-analysis largely covers studies conducted on its first-generation models, and that some of its current models include customization capabilities.

"Epic's current sepsis model, which includes built-in localization capabilities designed to address the site-level variability described in the paper, was independently validated across four health systems with results published in JAMA Network Open in 2026," the spokesperson said. "The central finding of the paper, that CDS models benefit from local validation and customization, is consistent with guidance Epic provides customers."

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.
