Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2505.24088
Vision foundation models pre-trained on massive data encode rich representations of real-world concepts, which can be adapted to downstream tasks by fine-tuning. However, fine-tuning foundation models on one task often leads to the issue of concept forgetting on other tasks. Recent methods of robust fine-tuning aim to mitigate forgetting of prior knowledge without affecting the fine-tuning performance. Knowledge is often preserved by matching the original and fine-tuned model weights or feature pairs. However, such point-wise matching can be too strong, without explicit awareness of the feature neighborhood structures that encode rich knowledge as well. We propose a novel regularization method Proxy-FDA that explicitly preserves the structural knowledge in feature space. Proxy-FDA performs Feature Distribution Alignment (using nearest neighbor graphs) between the pre-trained and fine-tuned feature spaces, and the alignment is further improved by informative proxies that are generated dynamically to increase data diversity. Experiments show that Proxy-FDA significantly reduces concept forgetting during fine-tuning, and we find a strong correlation between forgetting and a distributional distance metric (in comparison to L2 distance). We further demonstrate Proxy-FDA's benefits in various fine-tuning settings (end-to-end, few-shot and continual tuning) and across different tasks like image classification, captioning and VQA.
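The neighborhood-structure regularizer described in the abstract lends itself to a compact illustration. The PyTorch sketch below aligns per-sample soft k-nearest-neighbor distributions between the frozen pre-trained features and the fine-tuned features of the same batch; the function names and hyper-parameters are illustrative assumptions, not the authors' implementation, and the sketch omits the dynamically generated proxies that give Proxy-FDA its name.

```python
# Minimal sketch of a kNN structure-alignment regularizer in the spirit of the
# abstract: match each sample's soft neighbor distribution in the frozen
# pre-trained feature space against the one in the fine-tuned space.
# k, temperature, and lambda_fda are assumed hyper-parameters.
import torch
import torch.nn.functional as F


def soft_knn_distribution(feats: torch.Tensor, k: int, temperature: float) -> torch.Tensor:
    """Row-wise softmax over each sample's k nearest neighbors (cosine similarity)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                      # (B, B) similarities
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, float("-inf"))                  # exclude self-matches
    kth = sim.topk(k, dim=1).values[:, -1:]                    # k-th largest per row
    sim = sim.masked_fill(sim < kth, float("-inf"))            # keep only the k nearest
    return F.softmax(sim, dim=1)                               # rows sum to 1


def knn_structure_loss(pre_feats: torch.Tensor, ft_feats: torch.Tensor,
                       k: int = 5, temperature: float = 0.1) -> torch.Tensor:
    """KL(pre-trained neighbor distribution || fine-tuned neighbor distribution)."""
    with torch.no_grad():
        p = soft_knn_distribution(pre_feats, k, temperature)   # frozen target structure
    q = soft_knn_distribution(ft_feats, k, temperature)        # current structure
    kl = torch.xlogy(p, p) - p * q.clamp_min(1e-8).log()       # xlogy handles p == 0
    return kl.sum(dim=1).mean()


# Usage during fine-tuning (hypothetical training-loop fragment):
#   loss = task_loss + lambda_fda * knn_structure_loss(pretrained_encoder(x), model(x))
```

In this reading, the fine-tuned model is free to move individual features for the downstream task as long as each sample's local neighborhood in the batch keeps the same relative structure it had under the pre-trained encoder.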
Related Topics
- Medical Image Segmentation Techniques
- Advanced Neural Network Applications
- Advanced Image and Video Retrieval Techniques

Record summary
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2505.24088
- PDF: https://arxiv.org/pdf/2505.24088
- OA status: green
- OpenAlex ID: https://openalex.org/W4414856077
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4414856077 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2505.24088 (Digital Object Identifier)
- Title: Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-05-30
- Authors: Chen Huang, Skyler Seto, Hadi Pouransari, Mehrdad Farajtabar, Raviteja Vemulapalli, Fartash Faghri, Oncel Tuzel, Barry-John Theobald, Josh Susskind (in order)
- Landing page: https://arxiv.org/abs/2505.24088
- PDF URL: https://arxiv.org/pdf/2505.24088
- Open access: Yes
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2505.24088
- Cited by: 0 (citation count in OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4414856077 |
| doi | https://doi.org/10.48550/arxiv.2505.24088 |
| ids.doi | https://doi.org/10.48550/arxiv.2505.24088 |
| ids.openalex | https://openalex.org/W4414856077 |
| fwci | |
| type | preprint |
| title | Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10052 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9648000001907349 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Medical Image Segmentation Techniques |
| topics[1].id | https://openalex.org/T10036 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9646000266075134 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Neural Network Applications |
| topics[2].id | https://openalex.org/T10627 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9609000086784363 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Advanced Image and Video Retrieval Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2505.24088 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2505.24088 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2505.24088 |
| locations[1].id | doi:10.48550/arxiv.2505.24088 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2505.24088 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5100606858 |
| authorships[0].author.orcid | https://orcid.org/0009-0005-0654-5081 |
| authorships[0].author.display_name | Chen Huang |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Huang, Chen |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5059839283 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Skyler Seto |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Seto, Skyler |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5059295598 |
| authorships[2].author.orcid | |
| authorships[2].author.display_name | Hadi Pouransari |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Pouransari, Hadi |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5050499655 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-5510-518X |
| authorships[3].author.display_name | Mehrdad Farajtabar |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Farajtabar, Mehrdad |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5071825172 |
| authorships[4].author.orcid | https://orcid.org/0000-0003-0425-7797 |
| authorships[4].author.display_name | Raviteja Vemulapalli |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Vemulapalli, Raviteja |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5036601505 |
| authorships[5].author.orcid | https://orcid.org/0000-0001-5975-5158 |
| authorships[5].author.display_name | Fartash Faghri |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Faghri, Fartash |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5028613002 |
| authorships[6].author.orcid | |
| authorships[6].author.display_name | Oncel Tuzel |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Tuzel, Oncel |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5112911728 |
| authorships[7].author.orcid | |
| authorships[7].author.display_name | Barry-John Theobald |
| authorships[7].author_position | middle |
| authorships[7].raw_author_name | Theobald, Barry-John |
| authorships[7].is_corresponding | False |
| authorships[8].author.id | https://openalex.org/A5043808400 |
| authorships[8].author.orcid | |
| authorships[8].author.display_name | Josh Susskind |
| authorships[8].author_position | last |
| authorships[8].raw_author_name | Susskind, Josh |
| authorships[8].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2505.24088 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10052 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9648000001907349 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Medical Image Segmentation Techniques |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2505.24088 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2505.24088 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2505.24088 |
| primary_location.id | pmh:oai:arXiv.org:2505.24088 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2505.24088 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2505.24088 |
| publication_date | 2025-05-30 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-positions index of the abstract; full abstract shown above, reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 9 |
| citation_normalized_percentile | |
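To retrieve the same record programmatically, here is a minimal sketch, assuming the public OpenAlex REST endpoint and the `requests` package, that fetches the work shown above and rebuilds the plain-text abstract from its `abstract_inverted_index` field (the word-to-positions map summarized in the payload table).

```python
# Minimal sketch: fetch this work from the public OpenAlex API and rebuild the
# abstract from abstract_inverted_index (word -> list of token positions).
# Field names follow the payload above; error handling is kept minimal.
import requests

WORK_ID = "W4414856077"


def fetch_work(work_id: str) -> dict:
    resp = requests.get(f"https://api.openalex.org/works/{work_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()


def rebuild_abstract(inverted_index: dict) -> str:
    # Place each word at every position it occupies, then join positions in order.
    positions = {}
    for word, idxs in inverted_index.items():
        for idx in idxs:
            positions[idx] = word
    return " ".join(positions[i] for i in sorted(positions))


if __name__ == "__main__":
    work = fetch_work(WORK_ID)
    print(work["display_name"])
    print(rebuild_abstract(work["abstract_inverted_index"]))
```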