Study shows AI can help identify more patients for cancer clinical trials
A new study evaluated whether AI could enhance the accuracy and efficiency of clinical trial prescreening.
Researchers have demonstrated that pairing artificial intelligence with human expertise can improve how patients with cancer are identified for clinical trials, potentially allowing more patients to be offered promising investigational treatments.
Ravi B. Parikh, MD, MPP, a medical oncologist and researcher at Winship Cancer Institute of Emory University and an associate professor at Emory University School of Medicine, served as co-lead and co-first author of the study.
The study, published in Nature Communications, evaluated whether AI could enhance the accuracy and efficiency of clinical trial prescreening, a labor-intensive process that relies on research staff reviewing complex electronic health records to determine whether patients meet eligibility criteria. Fewer than 10 percent of adult patients with cancer ultimately enroll in clinical trials, even though most report being willing to participate.
Improving accuracy without replacing people
In a randomized evaluation using hundreds of real-world patient records, researchers compared three approaches to trial prescreening: human review alone, AI alone and a human-AI collaboration. The most accurate results came from human reviewers supported by AI tools.
“AI can augment existing research staff screening processes to improve the accuracy of identifying patients for clinical trials,” Parikh says. “We estimate that at a high-volume cancer center, that improvement could translate to 10 to 20 additional patients screened each week, meaning at least one extra patient per day being offered a potentially lifesaving clinical trial they otherwise might not have been offered.”
The study found that AI support was particularly helpful in identifying complex eligibility criteria such as tumor biomarkers and cancer staging, areas where manual review is especially prone to error. Prescreening errors tend to disproportionately affect patients from underrepresented groups, whose medical records may be more fragmented, spread across multiple health systems or missing standardized documentation of key clinical details.
While AI did not significantly reduce the time required for chart review, it improved accuracy without adding to staff workload.
Why human-AI collaboration matters
Unlike many previous studies that pit humans against AI, this research focused on how the two can work together.
“Framing this as humans versus AI is often a false comparison,” Parikh says. “The real question is how humans and AI can work together to achieve better cancer care. In this use case, we show there are clear benefits to that collaboration.”
The findings also underscore the importance of human oversight. AI alone was less accurate than human reviewers, and the study identified areas where overreliance on automated outputs could introduce bias. These insights help define how AI can be responsibly integrated into clinical research workflows.
The work complements TrialTranslator, an earlier AI framework Parikh helped develop for interpreting the results of completed trials.

“TrialTranslator focused on using AI after trials are completed, helping clinicians understand how results might apply to vulnerable or underrepresented patients,” Parikh says. “This new work tackles the opposite end of the spectrum. We are applying AI upstream, before a trial even begins, to help identify patients who may be eligible to participate.”
Together, the two efforts highlight how AI is influencing multiple stages of the clinical trial process, from patient identification to interpretation of trial outcomes.
What comes next
The researchers say the next step is moving from pilot studies to broader implementation.
“At Winship, our vision is to integrate AI across the clinical trial journey, from patient education about available trials to automated prescreening that supports our research staff, and even to estimating the potential benefit of a trial before results are available,” Parikh says. “While this work was conducted at a single cancer center, the approach is designed to be scalable and could be adapted by other health systems seeking to improve clinical trial operations.”
By improving the accuracy of trial screening at scale, the approach could help expand access to clinical trials, increase enrollment and, over time, support more diverse participation by reducing structural barriers that have historically limited who is offered the opportunity to enroll.
The study was funded by Mendel.ai. The funder had no role in the study design, data collection, analysis or interpretation of the findings.