An Empirical Evaluation of Low-Rank Adapted Vision–Language Models for Radiology Image Captioning
· 2025
· Open Access
· DOI: https://doi.org/10.3390/bioengineering12121330
· OA: W4417047245
Rapidly growing medical imaging volumes have increased radiologist workloads, creating demand for automated tools that support interpretation and reduce reporting delays. Vision–language models (VLMs) can generate clinically relevant captions to accelerate report drafting, but they span a wide range of parameter scales, and their clinical utility requires systematic evaluation. This study evaluated ten multimodal models fine-tuned on the Radiology Objects in Context version 2 (ROCOv2) dataset, which contains 116,635 images across eight modalities. We compared four Large VLMs (LVLMs), including LLaVA variants and IDEFICS-9B, against four Small VLMs (SVLMs), including MoonDream2, Qwen variants, and SmolVLM, alongside two fully fine-tuned baseline architectures (VisionGPT2 and CNN-Transformer). Among the adaptation strategies tested, Low-Rank Adaptation (LoRA) applied to a targeted subset of layers, updating fewer than 1% of model parameters, performed best, outperforming broader LoRA configurations. Models were assessed on relevance (semantic similarity) and factuality (concept-level correctness) metrics. Performance stratified cleanly by model class: LVLMs scored 0.273 to 0.317 overall, SVLMs 0.188 to 0.279, and baselines 0.154 to 0.177. LLaVA-Mistral-7B achieved the highest performance, with relevance and factuality scores of 0.516 and 0.118, respectively, substantially exceeding the VisionGPT2 baseline (0.325 and 0.028). Among the SVLMs, MoonDream2 demonstrated competitive relevance (0.466), approaching some LVLMs despite its smaller size. To investigate enhancement strategies for underperforming SVLMs, we prepended predicted imaging-modality labels to the prompt at inference time, which yielded variable results. These findings provide quantitative benchmarks for VLM selection in medical imaging, demonstrating that while model scale influences performance, architectural design and targeted adaptation enable select compact models to achieve competitive results.
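For a concrete picture of the adaptation setup, the sketch below shows how a LoRA configuration touching fewer than 1% of parameters might be expressed with the Hugging Face PEFT library. The checkpoint, rank, and target modules are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face PEFT.
# The checkpoint, rank, and target modules are assumptions for illustration;
# the study's exact configuration is not reproduced here.
from transformers import AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

model = AutoModelForVision2Seq.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf"  # assumed LLaVA-Mistral-7B checkpoint
)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # LoRA scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # narrow targeting, in the spirit of
                                          # the paper's selective-LoRA finding
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable share, typically <1%
```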
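The two metric families can likewise be illustrated in miniature: relevance as cosine similarity between caption embeddings, and factuality as F1 over extracted clinical concepts. The sentence encoder and the concept-set representation below are stand-in assumptions; the paper's exact metric implementations may differ.

```python
# Hedged sketch of the two metric families: relevance as embedding cosine
# similarity, factuality as F1 over extracted clinical concepts.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def relevance(generated: str, reference: str) -> float:
    # Semantic similarity between the generated caption and the reference.
    emb = encoder.encode([generated, reference], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def factuality(gen_concepts: set[str], ref_concepts: set[str]) -> float:
    # Concept-level F1; in practice the concept sets would come from a
    # medical entity extractor (hypothetical here).
    if not gen_concepts or not ref_concepts:
        return 0.0
    tp = len(gen_concepts & ref_concepts)
    if tp == 0:
        return 0.0
    precision = tp / len(gen_concepts)
    recall = tp / len(ref_concepts)
    return 2 * precision * recall / (precision + recall)
```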
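Finally, the modality-prepending intervention for SVLMs amounts to conditioning the prompt on a predicted modality label before captioning. In the sketch below, the classifier and prompt wording are hypothetical placeholders.

```python
# Sketch of prepending a predicted imaging-modality label at inference time.
# `modality_classifier` and the prompt template are hypothetical placeholders.
def build_prompt(image, modality_classifier,
                 base_prompt: str = "Provide a caption for this radiology image."):
    modality = modality_classifier(image)  # e.g. "CT", "MRI", "X-ray"
    return f"Modality: {modality}. {base_prompt}"
```

As the abstract notes, this conditioning helped some SVLMs and not others, hence the variable results.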