Toward fair medical advice: Addressing and mitigating bias in large language model-based healthcare applications
2025-07-16 • Haohui Lu, Ye Lin, Zhidong Li, Man Lung Yiu, Yu Gao, Shahadat Uddin
Large Language Models (LLMs) are increasingly deployed in web-based medical advice applications, offering scalable and accessible healthcare solutions. However, their outputs often reflect demographic biases, raising concerns about fairness and equity for vulnerable populations. In this work, we propose FairMed, a framework designed to mitigate biases in LLM-generated medical advice through fine-tuning and prompt engineering strategies. We evaluate FairMed using language-based and content-level metrics across demo…
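The abstract mentions evaluating bias with content-level metrics across demographic groups but does not specify the metrics themselves. As a minimal sketch under that assumption, the snippet below computes a demographic-parity-style disparity: the largest gap in mean advice-quality scores between any two groups. The function name, the group labels, and the scores are all hypothetical, not taken from FairMed.

```python
# Illustrative sketch only: the paper does not specify its fairness metrics.
# We assume a simple content-level check -- compare mean advice-quality
# scores of LLM outputs across demographic groups and report the largest
# pairwise gap (a demographic-parity-style disparity).
from collections import defaultdict

def max_group_disparity(records):
    """records: iterable of (group, score) pairs.
    Returns (largest gap between group means, dict of per-group means)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for group, score in records:
        totals[group] += score
        counts[group] += 1
    means = {g: totals[g] / counts[g] for g in totals}
    gap = max(means.values()) - min(means.values())
    return gap, means

# Hypothetical rubric-graded helpfulness scores for two demographic groups
data = [("A", 0.9), ("A", 0.8), ("B", 0.6), ("B", 0.7)]
gap, means = max_group_disparity(data)
```

A gap near zero would suggest parity on this metric; a large gap flags a group receiving systematically lower-quality advice, which a framework like FairMed would then target via fine-tuning or prompt adjustments.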