IEEE Journal of Biomedical and Health Informatics (JBHI), 2025, Journal Article
Infusing Multi-Hop Medical Knowledge Into Smaller Language Models for Biomedical Question Answering
- Jing Chen
- Zhihua Wei
- Wen Shen
- Rui Shang
MedQA-USMLE is a challenging biomedical question answering (BQA) task, as its questions typically involve multi-hop reasoning. To solve this task, BQA systems must possess not only extensive professional medical knowledge but also strong medical reasoning capabilities. While state-of-the-art large language models, such as Med-PaLM 2, have overcome this challenge, smaller language models (SLMs) still struggle with it. To bridge this gap, we introduce a multi-hop medical knowledge infusion (MHMKI) procedure that endows SLMs with medical reasoning capabilities. Specifically, we categorize MedQA-USMLE questions into distinct reasoning types, then tailor pre-training instances for each type of question using the semi-structured information and hyperlinks of Wikipedia articles. To enable SLMs to efficiently capture the multi-hop knowledge contained in these instances, we design a reasoning chain masked language model to further pre-train BERT models. Moreover, we convert the pre-training instances into a composite question answering dataset for intermediate fine-tuning of GPT models. We evaluate MHMKI on six SLMs across five datasets spanning three BQA tasks. The results demonstrate that MHMKI consistently improves SLM performance, particularly on tasks requiring substantial medical reasoning; for instance, accuracy on MedQA-USMLE increases significantly, by 5.3% on average.
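To make the abstract's central idea concrete, below is a minimal sketch of what a reasoning-chain masking objective could look like for further pre-training a BERT model: rather than masking random tokens, the entity mentions that form a multi-hop chain (e.g., entities connected via Wikipedia hyperlinks) are masked, so the model must recover the chain from context. The helper `build_rc_mlm_example`, the `chain_entities` argument, and the toy two-hop example are hypothetical illustrations under these assumptions, not the authors' actual implementation.

```python
# Sketch of reasoning-chain masked language modeling (assumed variant of
# the paper's objective): mask the entity mentions on a multi-hop chain
# instead of random tokens, then train with the standard MLM loss.
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def build_rc_mlm_example(text: str, chain_entities: list[str]):
    """Mask the first mention of each chain entity and return
    (input_ids, labels) for one masked-language-model step."""
    enc = tokenizer(text, return_offsets_mapping=True, truncation=True,
                    return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)  # -100 is ignored by the loss

    # Locate character spans of the chain entities (first occurrence each).
    lowered = text.lower()
    spans = []
    for ent in chain_entities:
        start = lowered.find(ent.lower())
        if start != -1:
            spans.append((start, start + len(ent)))

    # Mask every token whose character span lies inside an entity span.
    for i, (tok_start, tok_end) in enumerate(enc["offset_mapping"][0].tolist()):
        if tok_start == tok_end:  # skip special tokens like [CLS]/[SEP]
            continue
        if any(s <= tok_start and tok_end <= e for s, e in spans):
            labels[0, i] = input_ids[0, i]          # predict original token
            input_ids[0, i] = tokenizer.mask_token_id

    return input_ids, labels

# Toy two-hop chain: drug -> mechanism -> downstream effect.
text = ("Metformin activates AMPK, and AMPK activation suppresses hepatic "
        "gluconeogenesis, lowering blood glucose in type 2 diabetes.")
input_ids, labels = build_rc_mlm_example(text, ["AMPK", "gluconeogenesis"])
loss = model(input_ids=input_ids, labels=labels).loss  # standard MLM loss
```

The design intuition, as the abstract frames it, is that recovering a masked intermediate entity (here, AMPK) forces the model to use both hops of the chain at once, which random-token masking does not guarantee.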