Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2506.03195
Despite Multimodal Large Language Models (MLLMs) showing promising results on general zero-shot image classification tasks, fine-grained image classification remains challenging. It demands precise attention to subtle visual details to distinguish between visually similar subcategories -- details that MLLMs may easily overlook without explicit guidance. To address this, we introduce AutoSEP, an iterative self-supervised prompt learning framework designed to enhance MLLM fine-grained classification capabilities in a fully unsupervised manner. Our core idea is to leverage unlabeled data to learn a description prompt that guides MLLMs in identifying crucial discriminative features within an image, boosting classification accuracy. We develop an automatic self-enhancing prompt learning framework, AutoSEP, that iteratively improves the description prompt using unlabeled data, based on an instance-level classification scoring function. AutoSEP only requires black-box access to MLLMs, eliminating the need for any training or fine-tuning. We evaluate our approach on multiple fine-grained classification datasets. It consistently outperforms other unsupervised baselines, demonstrating the effectiveness of our self-supervised optimization framework. Notably, AutoSEP improves on average by 13 percent over standard zero-shot classification and by 5 percent over the best-performing baselines. Code is available at: https://github.com/yq-hong/AutoSEP
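The abstract only sketches the optimization loop at a high level. The snippet below is a minimal sketch of one plausible shape for such an iterative, black-box prompt-refinement procedure, not the paper's exact algorithm: the `mllm` placeholder, the instance-matching scoring criterion, the revision-prompt wording, and all function names and parameters are illustrative assumptions.

```python
import random

def mllm(prompt, images=None):
    """Black-box multimodal LLM call (placeholder): text prompt plus optional
    images in, text out. In practice this would wrap an API-hosted MLLM."""
    raise NotImplementedError

def describe(image, description_prompt):
    """Generate a description of one image, guided by the current prompt."""
    return mllm(description_prompt, images=[image])

def instance_score(image, description, distractors):
    """Label-free, instance-level score (illustrative criterion): can the MLLM
    match the description back to its own image among distractor images?"""
    candidates = [image] + list(distractors)
    random.shuffle(candidates)
    answer = mllm(
        "Which image (answer with its 0-based index) matches this description?\n"
        + description,
        images=candidates,
    )
    try:
        return float(int(answer.strip()) == candidates.index(image))
    except ValueError:
        return 0.0

def evaluate_prompt(prompt, images, batch=8, n_distractors=3):
    """Average instance-level score of a description prompt over unlabeled images."""
    sample = random.sample(images, k=min(batch, len(images)))
    scores = []
    for img in sample:
        desc = describe(img, prompt)
        others = random.sample([x for x in images if x is not img], k=n_distractors)
        scores.append(instance_score(img, desc, others))
    return sum(scores) / len(scores)

def self_enhancing_prompt_search(images, seed_prompt, iterations=5, pool=4):
    """Iteratively revise the description prompt using only unlabeled images and
    black-box MLLM access; keep the best-scoring candidate after each round."""
    prompt = seed_prompt
    for _ in range(iterations):
        candidates = [prompt] + [
            mllm(
                "Revise this image-description prompt so that descriptions capture "
                "more discriminative, fine-grained visual details:\n" + prompt
            )
            for _ in range(pool)
        ]
        prompt = max(candidates, key=lambda p: evaluate_prompt(p, images))
    return prompt
```

The property carried over from the abstract is that the loop never touches labels or model weights: it only scores how well generated descriptions separate individual unlabeled instances and keeps the best-scoring prompt.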
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2506.03195
- PDF: https://arxiv.org/pdf/2506.03195
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416071332
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416071332 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2506.03195 (Digital Object Identifier)
- Title: Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-06-01 (full publication date if available)
- Authors: Yunqi Hong, Sohyun An, Andrew Bai, Neil Y. C. Lin, Cho‐Jui Hsieh (list of authors in order)
- Landing page: https://arxiv.org/abs/2506.03195 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2506.03195 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2506.03195 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
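The fields above mirror what the public OpenAlex API returns for this work. A minimal sketch of retrieving the same record, assuming the standard `https://api.openalex.org/works/{id}` endpoint and the `requests` library:

```python
import requests

# Fetch the raw OpenAlex record for this work (public API, no key required).
OPENALEX_ID = "W4416071332"
resp = requests.get(f"https://api.openalex.org/works/{OPENALEX_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

# A few of the fields shown in the tables on this page.
print(work["title"])
print(work["doi"])
print(work["open_access"]["oa_status"])      # e.g. "green"
print(work["best_oa_location"]["pdf_url"])   # direct PDF link on arXiv
print(work["cited_by_count"])
```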
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416071332 |
| doi | https://doi.org/10.48550/arxiv.2506.03195 |
| ids.doi | https://doi.org/10.48550/arxiv.2506.03195 |
| ids.openalex | https://openalex.org/W4416071332 |
| fwci | |
| type | preprint |
| title | Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2506.03195 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2506.03195 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2506.03195 |
| locations[1].id | doi:10.48550/arxiv.2506.03195 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2506.03195 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5113113006 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Yunqi Hong |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Hong, Yunqi |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5101207410 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Sohyun An |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | An, Sohyun |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5064475330 |
| authorships[2].author.orcid | |
| authorships[2].author.display_name | Andrew Bai |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Bai, Andrew |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5075965877 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-8653-1894 |
| authorships[3].author.display_name | Neil Y. C. Lin |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Lin, Neil Y. C. |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5010841999 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-3520-9627 |
| authorships[4].author.display_name | Cho‐Jui Hsieh |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Hsieh, Cho-Jui |
| authorships[4].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2506.03195 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-28T10:02:32.668473 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2506.03195 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2506.03195 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2506.03195 |
| primary_location.id | pmh:oai:arXiv.org:2506.03195 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2506.03195 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2506.03195 |
| publication_date | 2025-06-01 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-position index of the abstract (full text reproduced above; see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile |
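OpenAlex stores abstracts as an inverted index (each word mapped to the positions where it occurs) rather than as plain text, which is what the `abstract_inverted_index` field in the payload contains. A minimal sketch of rebuilding the abstract text from such a mapping, reusing the `work` record fetched in the earlier snippet:

```python
def rebuild_abstract(inverted_index: dict[str, list[int]]) -> str:
    """Rebuild plain abstract text from an OpenAlex abstract_inverted_index,
    which maps each word to the list of positions where it occurs."""
    positions = [
        (pos, word)
        for word, idxs in inverted_index.items()
        for pos in idxs
    ]
    return " ".join(word for _, word in sorted(positions))

# Usage with the record fetched earlier:
# abstract = rebuild_abstract(work["abstract_inverted_index"])
```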