

Equitable Healthcare: Bias-Free Algorithms Reducing Costly Care Disparities
Information
This project addresses algorithmic bias in commercial EHR algorithms, which can exacerbate healthcare disparities across sociodemographic groups. Using two real-world clinical classification models—one predicting asthma-related emergency department visits, the other unplanned readmissions—our team evaluated two post-processing bias mitigation methods for reducing differences in false negative rates across race/ethnicity, sex, language, and insurance. After reviewing the literature and pilot-testing the most frequently cited methods, the team identified threshold adjustment as the most effective approach: it maintained model accuracy and alert rates while significantly reducing bias, as measured by Equal Opportunity Difference. Because documentation for existing open-source bias mitigation tools proved sparse and outdated, the team developed in-house R code that improved both transparency and bias reduction. The project culminated in an open-source playbook and code repository to guide other health systems with limited resources in applying these methods. This work demonstrates a low-resource pathway to mitigating algorithmic bias in healthcare applications, with potential for broad application in safety-net and other under-resourced systems. Attendees will gain insight into practical bias mitigation strategies, real-world implementation challenges, and how to leverage the project’s open-source playbook to promote health equity in AI-driven clinical decision support.
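To make the technique concrete, the sketch below illustrates group-specific threshold adjustment in Python (the project's actual implementation is in R and is not reproduced here). The risk scores, group labels, and the two-group setup are all hypothetical; Equal Opportunity Difference is taken as the gap in true positive rates between groups, and a per-group threshold is chosen to close that gap.

```python
# Illustrative sketch with hypothetical data -- not the project's R code.
# Threshold adjustment: pick a group-specific decision threshold so that
# the Equal Opportunity Difference (here, the gap in true positive rates
# between two sociodemographic groups) shrinks.

def true_positive_rate(scores, labels, threshold):
    """Share of true positives (label == 1) flagged at the threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

def fit_group_threshold(scores, labels, target_tpr, grid):
    """Grid-search the threshold whose TPR is closest to the target."""
    return min(grid, key=lambda t: abs(true_positive_rate(scores, labels, t) - target_tpr))

# Hypothetical risk scores and outcomes for two groups (1 = event occurred).
group_a = ([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 1, 0, 0, 0])
group_b = ([0.6, 0.5, 0.4, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])

shared = 0.5  # a single shared threshold, as a commercial model might use
eod_before = abs(true_positive_rate(*group_a, shared)
                 - true_positive_rate(*group_b, shared))

# Equalize opportunity: match group B's threshold to group A's TPR.
target = true_positive_rate(*group_a, shared)
grid = [i / 100 for i in range(101)]
t_b = fit_group_threshold(*group_b, target, grid)
eod_after = abs(target - true_positive_rate(*group_b, t_b))

print(f"EOD before: {eod_before:.2f}, after: {eod_after:.2f}")
```

In practice the target TPR and the candidate thresholds would be chosen on a validation set, and the resulting alert rates checked against operational capacity, mirroring the accuracy and alert-rate constraints described above.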


