Is Your Text-to-Image Model Robust to Caption Noise?
· 2024
· Open Access
· DOI: https://doi.org/10.48550/arxiv.2412.19531
In text-to-image (T2I) generation, a prevalent training technique involves using Vision Language Models (VLMs) for image re-captioning. Even though VLMs are known to hallucinate, generating descriptive content that deviates from the visual reality, the ramifications of such caption hallucinations on T2I generation performance remain under-explored. Through our empirical investigation, we first establish a comprehensive dataset of VLM-generated captions, and then systematically analyze how caption hallucination influences generation outcomes. Our findings reveal that (1) disparities in caption quality persistently impact model outputs during fine-tuning; (2) VLM confidence scores serve as reliable indicators for detecting and characterizing noise-related patterns in the data distribution; and (3) even subtle variations in caption fidelity have significant effects on the quality of learned representations. These findings collectively emphasize the profound impact of caption quality on model performance and highlight the need for more sophisticated robust training algorithms in T2I. In response to these observations, we propose an approach that leverages VLM confidence scores to mitigate caption noise, thereby enhancing the robustness of T2I models against caption hallucination.
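The abstract only sketches the proposed mitigation. As a hedged illustration (not the authors' actual method), confidence-based re-weighting of per-sample training losses might look like the following; `confidence_weighted_loss`, its softmax weighting, and the `temperature` parameter are all assumptions introduced for this sketch:

```python
import math

def confidence_weighted_loss(per_sample_loss, confidence, temperature=1.0):
    """Down-weight samples whose captions the VLM was less confident about.

    per_sample_loss: un-reduced training losses, one per sample in the batch
    confidence:      VLM confidence scores, one per sample (higher = more trusted)

    Weights are a softmax over confidence scores, rescaled so they average
    to 1; equal confidences therefore recover the plain mean loss.
    """
    exps = [math.exp(c / temperature) for c in confidence]
    z = sum(exps)
    n = len(confidence)
    weights = [n * e / z for e in exps]
    return sum(w * l for w, l in zip(weights, per_sample_loss)) / n
```

With this weighting, a high-loss sample whose caption received a low confidence score contributes less to the batch loss than it would under uniform averaging.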
Metadata
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2412.19531
- PDF: https://arxiv.org/pdf/2412.19531
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4405903305
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4405903305 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2412.19531 (Digital Object Identifier)
- Title: Is Your Text-to-Image Model Robust to Caption Noise? (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024
- Publication date: 2024-12-27 (full publication date if available)
- Authors: Weichen Yu, Zidong Yang, Shanchuan Lin, Qi Zhao, Jianyi Wang, Liangke Gui, Matt Fredrikson, Lu Jiang (list of authors in order)
- Landing page: https://arxiv.org/abs/2412.19531 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2412.19531 (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2412.19531 (direct OA link when available)
- Concepts: Image (mathematics), Noise (video), Computer science, Computer vision, Artificial intelligence (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4405903305 |
| doi | https://doi.org/10.48550/arxiv.2412.19531 |
| ids.doi | https://doi.org/10.48550/arxiv.2412.19531 |
| ids.openalex | https://openalex.org/W4405903305 |
| fwci | |
| type | preprint |
| title | Is Your Text-to-Image Model Robust to Caption Noise? |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11714 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9987999796867371 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Multimodal Machine Learning Applications |
| topics[1].id | https://openalex.org/T11439 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.989300012588501 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Video Analysis and Summarization |
| topics[2].id | https://openalex.org/T13310 |
| topics[2].field.id | https://openalex.org/fields/12 |
| topics[2].field.display_name | Arts and Humanities |
| topics[2].score | 0.9840999841690063 |
| topics[2].domain.id | https://openalex.org/domains/2 |
| topics[2].domain.display_name | Social Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1203 |
| topics[2].subfield.display_name | Language and Linguistics |
| topics[2].display_name | Subtitles and Audiovisual Media |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C115961682 |
| concepts[0].level | 2 |
| concepts[0].score | 0.6261643171310425 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[0].display_name | Image (mathematics) |
| concepts[1].id | https://openalex.org/C99498987 |
| concepts[1].level | 3 |
| concepts[1].score | 0.5506649613380432 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q2210247 |
| concepts[1].display_name | Noise (video) |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.54240882396698 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C31972630 |
| concepts[3].level | 1 |
| concepts[3].score | 0.43462345004081726 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[3].display_name | Computer vision |
| concepts[4].id | https://openalex.org/C154945302 |
| concepts[4].level | 1 |
| concepts[4].score | 0.389691025018692 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[4].display_name | Artificial intelligence |
| keywords[0].id | https://openalex.org/keywords/image |
| keywords[0].score | 0.6261643171310425 |
| keywords[0].display_name | Image (mathematics) |
| keywords[1].id | https://openalex.org/keywords/noise |
| keywords[1].score | 0.5506649613380432 |
| keywords[1].display_name | Noise (video) |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.54240882396698 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/computer-vision |
| keywords[3].score | 0.43462345004081726 |
| keywords[3].display_name | Computer vision |
| keywords[4].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[4].score | 0.389691025018692 |
| keywords[4].display_name | Artificial intelligence |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2412.19531 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2412.19531 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2412.19531 |
| locations[1].id | doi:10.48550/arxiv.2412.19531 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2412.19531 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5108259382 |
| authorships[0].author.orcid | https://orcid.org/0009-0003-7935-2358 |
| authorships[0].author.display_name | Weichen Yu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Yu, Weichen |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5101919600 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-0277-3333 |
| authorships[1].author.display_name | Zidong Yang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yang, Ziyan |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5048135392 |
| authorships[2].author.orcid | |
| authorships[2].author.display_name | Shanchuan Lin |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Lin, Shanchuan |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5047419128 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-3054-8934 |
| authorships[3].author.display_name | Qi Zhao |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Zhao, Qi |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5101615046 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-7025-3626 |
| authorships[4].author.display_name | Jianyi Wang |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Wang, Jianyi |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5037727565 |
| authorships[5].author.orcid | |
| authorships[5].author.display_name | Liangke Gui |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Gui, Liangke |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5057424614 |
| authorships[6].author.orcid | https://orcid.org/0000-0003-1820-1698 |
| authorships[6].author.display_name | Matt Fredrikson |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Fredrikson, Matt |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5090730336 |
| authorships[7].author.orcid | https://orcid.org/0000-0003-0286-8439 |
| authorships[7].author.display_name | Lu Jiang |
| authorships[7].author_position | last |
| authorships[7].raw_author_name | Jiang, Lu |
| authorships[7].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2412.19531 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Is Your Text-to-Image Model Robust to Caption Noise? |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11714 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9987999796867371 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Multimodal Machine Learning Applications |
| related_works | https://openalex.org/W2772917594, https://openalex.org/W2036807459, https://openalex.org/W2058170566, https://openalex.org/W2755342338, https://openalex.org/W2166024367, https://openalex.org/W3116076068, https://openalex.org/W2229312674, https://openalex.org/W2951359407, https://openalex.org/W2079911747, https://openalex.org/W1969923398 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2412.19531 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2412.19531 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2412.19531 |
| primary_location.id | pmh:oai:arXiv.org:2412.19531 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2412.19531 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2412.19531 |
| publication_date | 2024-12-27 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index.a | 4, 53, 151 |
| abstract_inverted_index.In | 0, 144 |
| abstract_inverted_index.as | 90 |
| abstract_inverted_index.in | 76, 99, 107, 142, 170 |
| abstract_inverted_index.of | 36, 116, 126, 165 |
| abstract_inverted_index.on | 40, 113, 129 |
| abstract_inverted_index.to | 22, 146, 157 |
| abstract_inverted_index.we | 50, 149 |
| abstract_inverted_index.(1) | 73 |
| abstract_inverted_index.(2) | 85 |
| abstract_inverted_index.(3) | 103 |
| abstract_inverted_index.Our | 69 |
| abstract_inverted_index.T2I | 41, 166 |
| abstract_inverted_index.VLM | 154 |
| abstract_inverted_index.and | 59, 95, 132 |
| abstract_inverted_index.are | 20 |
| abstract_inverted_index.for | 14, 93, 136 |
| abstract_inverted_index.how | 63 |
| abstract_inverted_index.our | 47 |
| abstract_inverted_index.the | 31, 34, 74, 100, 114, 123, 134, 163 |
| abstract_inverted_index.Even | 17 |
| abstract_inverted_index.T2I. | 143 |
| abstract_inverted_index.VLMs | 19, 86 |
| abstract_inverted_index.data | 101 |
| abstract_inverted_index.even | 104 |
| abstract_inverted_index.from | 30 |
| abstract_inverted_index.have | 110 |
| abstract_inverted_index.more | 137 |
| abstract_inverted_index.need | 135 |
| abstract_inverted_index.such | 37 |
| abstract_inverted_index.that | 28, 72 |
| abstract_inverted_index.then | 60 |
| abstract_inverted_index.(T2I) | 2 |
| abstract_inverted_index.These | 119 |
| abstract_inverted_index.first | 51 |
| abstract_inverted_index.image | 15 |
| abstract_inverted_index.known | 21 |
| abstract_inverted_index.model | 81, 130 |
| abstract_inverted_index.score | 156 |
| abstract_inverted_index.serve | 89 |
| abstract_inverted_index.these | 147 |
| abstract_inverted_index.(VLMs) | 13 |
| abstract_inverted_index.Models | 12 |
| abstract_inverted_index.Vision | 10 |
| abstract_inverted_index.during | 83 |
| abstract_inverted_index.impact | 80, 125 |
| abstract_inverted_index.models | 167 |
| abstract_inverted_index.noise, | 160 |
| abstract_inverted_index.remain | 44 |
| abstract_inverted_index.reveal | 71 |
| abstract_inverted_index.robust | 139 |
| abstract_inverted_index.scores | 88 |
| abstract_inverted_index.subtle | 105 |
| abstract_inverted_index.though | 18 |
| abstract_inverted_index.visual | 32 |
| abstract_inverted_index.Through | 46 |
| abstract_inverted_index.against | 168 |
| abstract_inverted_index.analyze | 62 |
| abstract_inverted_index.caption | 38, 64, 77, 108, 127, 159 |
| abstract_inverted_index.content | 27 |
| abstract_inverted_index.dataset | 55 |
| abstract_inverted_index.effects | 112 |
| abstract_inverted_index.exhibit | 23 |
| abstract_inverted_index.learned | 117 |
| abstract_inverted_index.outputs | 82 |
| abstract_inverted_index.propose | 150 |
| abstract_inverted_index.quality | 78, 115, 128 |
| abstract_inverted_index.thereby | 161 |
| abstract_inverted_index.Language | 11 |
| abstract_inverted_index.approach | 152 |
| abstract_inverted_index.caption. | 171 |
| abstract_inverted_index.deviates | 29 |
| abstract_inverted_index.fidelity | 109 |
| abstract_inverted_index.findings | 70, 120 |
| abstract_inverted_index.involves | 8 |
| abstract_inverted_index.mitigate | 158 |
| abstract_inverted_index.patterns | 98 |
| abstract_inverted_index.profound | 124 |
| abstract_inverted_index.reality, | 33 |
| abstract_inverted_index.reliable | 91 |
| abstract_inverted_index.response | 145 |
| abstract_inverted_index.training | 6, 140 |
| abstract_inverted_index.algorithm | 141 |
| abstract_inverted_index.captions, | 58 |
| abstract_inverted_index.detecting | 94 |
| abstract_inverted_index.emphasize | 122 |
| abstract_inverted_index.empirical | 48 |
| abstract_inverted_index.enhancing | 162 |
| abstract_inverted_index.establish | 52 |
| abstract_inverted_index.highlight | 133 |
| abstract_inverted_index.outcomes. | 68 |
| abstract_inverted_index.prevalent | 5 |
| abstract_inverted_index.technique | 7 |
| abstract_inverted_index.utilizing | 9 |
| abstract_inverted_index.comprising | 56 |
| abstract_inverted_index.confidence | 87, 155 |
| abstract_inverted_index.generating | 25 |
| abstract_inverted_index.generation | 42, 67 |
| abstract_inverted_index.indicators | 92 |
| abstract_inverted_index.influences | 66 |
| abstract_inverted_index.leveraging | 153 |
| abstract_inverted_index.robustness | 164 |
| abstract_inverted_index.variations | 106 |
| abstract_inverted_index.descriptive | 26 |
| abstract_inverted_index.disparities | 75 |
| abstract_inverted_index.generation, | 3 |
| abstract_inverted_index.performance | 43, 131 |
| abstract_inverted_index.significant | 111 |
| abstract_inverted_index.collectively | 121 |
| abstract_inverted_index.fine-tuning. | 84 |
| abstract_inverted_index.persistently | 79 |
| abstract_inverted_index.VLM-generated | 57 |
| abstract_inverted_index.comprehensive | 54 |
| abstract_inverted_index.distribution. | 102 |
| abstract_inverted_index.hallucination | 65, 169 |
| abstract_inverted_index.noise-related | 97 |
| abstract_inverted_index.observations, | 148 |
| abstract_inverted_index.ramifications | 35 |
| abstract_inverted_index.sophisticated | 138 |
| abstract_inverted_index.text-to-image | 1 |
| abstract_inverted_index.characterizing | 96 |
| abstract_inverted_index.hallucination, | 24 |
| abstract_inverted_index.hallucinations | 39 |
| abstract_inverted_index.investigation, | 49 |
| abstract_inverted_index.re-captioning. | 16 |
| abstract_inverted_index.systematically | 61 |
| abstract_inverted_index.under-explored. | 45 |
| abstract_inverted_index.representations. | 118 |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 8 |
| citation_normalized_percentile |
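The `abstract_inverted_index.*` rows in the payload store each token of the abstract keyed by the word positions where it occurs. Once those rows are parsed into a dict, the readable abstract can be reconstructed with a small helper; `reconstruct_abstract` is a name introduced here for illustration:

```python
def reconstruct_abstract(inverted_index):
    """Rebuild abstract text from an OpenAlex-style inverted index.

    inverted_index maps each token to the list of positions at which
    it appears in the original abstract.
    """
    positions = []
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions.append((i, word))
    # Sorting by position restores the original word order.
    return " ".join(word for _, word in sorted(positions))

# Tiny sample taken from the payload rows above.
sample = {"In": [0], "generation,": [3], "text-to-image": [1], "(T2I)": [2]}
print(reconstruct_abstract(sample))  # → "In text-to-image (T2I) generation,"
```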