Multi-Aspect Knowledge-Enhanced Medical Vision-Language Pretraining with Multi-Agent Data Generation
Vision-language pretraining (VLP) has emerged as a powerful paradigm in medical image analysis, enabling representation learning from large-scale image-text pairs without relying on expensive manual annotations. However, existing methods often struggle with the noise inherent in web-collected data and the complexity of unstructured long medical texts. To address these challenges, we propose a novel VLP framework integrating a Multi-Agent data GENeration (MAGEN) system and Ontology-based Multi-Aspect Knowledge-Enhanced (O-MAKE) pretraining. First, MAGEN enhances data quality by synthesizing knowledge-enriched descriptions via a foundation model-assisted captioning and retrieval-based verification pipeline. Second, O-MAKE addresses the difficulty of learning from long, unstructured texts by decomposing them into distinct knowledge aspects. This facilitates fine-grained alignment at both global and patch levels, while explicitly modeling medical concept relationships through ontology-guided mechanisms. We validate our framework in the field of dermatology, where comprehensive experiments demonstrate the effectiveness of each component. Our approach achieves state-of-the-art zero-shot performance on disease classification and cross-modal retrieval tasks across eight datasets. Our code and the augmented dataset Derm1M-AgentAug, comprising over 400k skin-image-text pairs, will be released at https://github.com/SiyuanYan1/Derm1M.
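In brief, O-MAKE splits each long report into aspect-specific texts and aligns every aspect with the image at both the global and the patch level. The sketch below is a minimal, illustrative rendering of that idea only: it assumes a CLIP-style dual encoder with precomputed, L2-normalized embeddings, and the function names, aspect set, loss weighting, and the omission of the ontology-guided terms are our assumptions, not the paper's implementation.

```python
# Minimal sketch of multi-aspect image-text alignment (illustrative only).
# Assumes precomputed L2-normalized embeddings; the actual O-MAKE losses,
# aspect definitions, and ontology-guided mechanisms are defined in the paper.
import torch
import torch.nn.functional as F

def info_nce(img: torch.Tensor, txt: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings of shape (B, D)."""
    logits = img @ txt.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def patch_alignment(patches: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
    """Fine-grained term: each aspect text should match at least one image patch.
    patches: (B, P, D), txt: (B, D); both assumed L2-normalized."""
    sims = torch.einsum('bpd,bd->bp', patches, txt)    # (B, P) patch-text similarities
    return (1.0 - sims.max(dim=1).values).mean()       # reward the best-matching patch

def multi_aspect_loss(img_global, img_patches, aspect_txt, patch_weight=0.5):
    """aspect_txt: dict mapping aspect name -> (B, D) text embeddings,
    one embedding per decomposed knowledge aspect of the report."""
    loss = 0.0
    for txt in aspect_txt.values():
        loss = loss + info_nce(img_global, txt) + patch_weight * patch_alignment(img_patches, txt)
    return loss / len(aspect_txt)

# Toy usage with random embeddings (B=4 pairs, P=49 patches, D=256 dims).
B, P, D = 4, 49, 256
g = F.normalize(torch.randn(B, D), dim=-1)
p = F.normalize(torch.randn(B, P, D), dim=-1)
aspects = {a: F.normalize(torch.randn(B, D), dim=-1)
           for a in ('morphology', 'anatomy', 'diagnosis')}
print(multi_aspect_loss(g, p, aspects).item())
```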
- Type: preprint
- Landing Page: http://arxiv.org/abs/2512.03445
- PDF URL: https://arxiv.org/pdf/2512.03445
- OA Status: green
- OpenAlex ID: https://openalex.org/W4417029036
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4417029036 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2512.03445 (Digital Object Identifier)
- Title: Multi-Aspect Knowledge-Enhanced Medical Vision-Language Pretraining with Multi-Agent Data Generation
- Type: preprint (OpenAlex work type)
- Publication year: 2025
- Publication date: 2025-12-03
- Authors: Zongyuan Ge (authors in order)
- Landing page: https://arxiv.org/abs/2512.03445 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2512.03445 (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2512.03445 (direct OA link)
- Cited by: 0 (total citation count in OpenAlex)
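For readers who want the live record rather than this snapshot, the metadata above can be fetched from the OpenAlex REST API. A minimal sketch follows; the endpoint form is the documented `https://api.openalex.org/works/{id}` route, and it assumes the `requests` package is installed (no authentication is required).

```python
# Minimal sketch: fetch this work's current metadata from the OpenAlex API.
import requests

resp = requests.get("https://api.openalex.org/works/W4417029036", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["title"])                       # work title
print(work["doi"])                         # DOI URL
print(work["open_access"]["oa_status"])    # expected: "green"
```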
Full payload
| field | value |
|---|---|
| id | https://openalex.org/W4417029036 |
| doi | https://doi.org/10.48550/arxiv.2512.03445 |
| ids.doi | https://doi.org/10.48550/arxiv.2512.03445 |
| ids.openalex | https://openalex.org/W4417029036 |
| fwci | |
| type | preprint |
| title | Multi-Aspect Knowledge-Enhanced Medical Vision-Language Pretraining with Multi-Agent Data Generation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | |
| locations[0].id | pmh:oai:arXiv.org:2512.03445 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | cc-by-nc-nd |
| locations[0].pdf_url | https://arxiv.org/pdf/2512.03445 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | https://openalex.org/licenses/cc-by-nc-nd |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2512.03445 |
| locations[1].id | doi:10.48550/arxiv.2512.03445 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2512.03445 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5005014252 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-5880-8673 |
| authorships[0].author.display_name | Zongyuan Ge |
| authorships[0].author_position | middle |
| authorships[0].raw_author_name | Ge, Zongyuan |
| authorships[0].is_corresponding | True |
| has_content.pdf | True |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2512.03445 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-12-05T00:00:00 |
| display_name | Multi-Aspect Knowledge-Enhanced Medical Vision-Language Pretraining with Multi-Agent Data Generation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-12-05T23:25:22.460635 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2512.03445 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | cc-by-nc-nd |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2512.03445 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by-nc-nd |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2512.03445 |
| primary_location.id | pmh:oai:arXiv.org:2512.03445 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | cc-by-nc-nd |
| primary_location.pdf_url | https://arxiv.org/pdf/2512.03445 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | https://openalex.org/licenses/cc-by-nc-nd |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2512.03445 |
| publication_date | 2025-12-03 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-positions index of the abstract; the full abstract text appears above) |
| cited_by_percentile_year | |
| corresponding_author_ids | https://openalex.org/A5005014252 |
| countries_distinct_count | 0 |
| institutions_distinct_count | 1 |
| citation_normalized_percentile |
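The `abstract_inverted_index` field collapsed in the payload above stores the abstract as a map from each word to its positions in the text. A minimal sketch of reconstructing the plain abstract from that structure, using a tiny slice of this work's own index as the example:

```python
# Minimal sketch: rebuild plain abstract text from an OpenAlex
# abstract_inverted_index ({word: [positions]}), as collapsed above.
def invert_abstract(inverted_index: dict[str, list[int]]) -> str:
    slots: dict[int, str] = {}
    for word, positions in inverted_index.items():
        for pos in positions:
            slots[pos] = word
    # Words joined in position order reproduce the original text.
    return " ".join(slots[i] for i in sorted(slots))

# Slice taken from this record's index (positions 0-2).
example = {"Vision-language": [0], "pretraining": [1], "(VLP)": [2]}
print(invert_abstract(example))  # -> "Vision-language pretraining (VLP)"
```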