Test-Time Canonicalization by Foundation Models for Robust Perception
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2507.10375
Perception in the real world requires robustness to diverse viewing conditions. Existing approaches often rely on specialized architectures or training with predefined data augmentations, limiting adaptability. Taking inspiration from mental rotation in human vision, we propose FOCAL, a test-time robustness framework that transforms the input into the most typical view. At inference time, FOCAL explores a set of transformed images and chooses the one with the highest likelihood under foundation model priors. This test-time optimization boosts robustness while requiring no retraining or architectural changes. Applied to models like CLIP and SAM, it significantly boosts robustness across a wide range of transformations, including 2D and 3D rotations, contrast and lighting shifts, and day-night changes. We also explore potential applications in active vision. By reframing invariance as a test-time optimization problem, FOCAL offers a general and scalable approach to robustness. Our code is available at: https://github.com/sutkarsh/focal.
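The core procedure described in the abstract (explore a set of candidate transforms and keep the view that a foundation model scores as most typical) can be sketched in a few lines. The snippet below is an illustrative approximation, not the authors' implementation: it assumes the open_clip package, uses discrete 2D rotations as the candidate transforms, and stands in for the likelihood-under-prior score with the best CLIP image-text similarity over a small hypothetical prompt set; the helper names `typicality_score` and `canonicalize` are hypothetical.

```python
import torch
import open_clip
from PIL import Image

# Illustrative sketch in the spirit of FOCAL's test-time search, not the
# authors' code. Assumptions: open_clip supplies a CLIP model, candidate
# transforms are discrete 2D rotations, and "typicality" is approximated by
# the best CLIP image-text similarity over a generic (hypothetical) prompt set.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

PROMPTS = ["a photo of an object", "an upright natural image"]  # hypothetical prompt set

@torch.no_grad()
def typicality_score(img: Image.Image) -> float:
    """Stand-in for 'likelihood under a foundation-model prior'."""
    image = preprocess(img).unsqueeze(0)
    text = tokenizer(PROMPTS)
    img_feat = torch.nn.functional.normalize(model.encode_image(image), dim=-1)
    txt_feat = torch.nn.functional.normalize(model.encode_text(text), dim=-1)
    return (img_feat @ txt_feat.T).max().item()

def canonicalize(img: Image.Image, angles=range(0, 360, 30)) -> Image.Image:
    """Score each candidate rotation and return the most 'typical' view."""
    candidates = [img.rotate(a, expand=True) for a in angles]
    scores = [typicality_score(c) for c in candidates]
    return candidates[max(range(len(scores)), key=scores.__getitem__)]

# Usage: canonical = canonicalize(Image.open("query.jpg")), then feed the
# canonical view to any downstream model (e.g. a CLIP classifier or SAM).
```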
Related Topics
- Advanced Vision and Imaging
- Industrial Vision Systems and Defect Detection
- Image Processing Techniques and Applications

Record details
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2507.10375
- PDF: https://arxiv.org/pdf/2507.10375
- OA Status: green
- OpenAlex ID: https://openalex.org/W4414740197
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4414740197 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2507.10375 (Digital Object Identifier)
- Title: Test-Time Canonicalization by Foundation Models for Robust Perception (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-07-14 (full publication date if available)
- Authors: Utkarsh Singhal, Ryan Feng, Stella X. Yu, Atul Prakash (list of authors in order)
- Landing page: https://arxiv.org/abs/2507.10375 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2507.10375 (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2507.10375 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
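The fields above and the payload table below come from the OpenAlex record for this work. A minimal sketch for fetching the same JSON, assuming the public OpenAlex REST API at api.openalex.org and the third-party requests package:

```python
import requests

# Minimal sketch: retrieve the raw OpenAlex record shown in the payload below.
# Field names (title, open_access.oa_url, cited_by_count) follow that payload.
OPENALEX_WORK_ID = "W4414740197"

def fetch_openalex_work(work_id: str) -> dict:
    """Fetch the raw JSON record for an OpenAlex work."""
    resp = requests.get(f"https://api.openalex.org/works/{work_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

work = fetch_openalex_work(OPENALEX_WORK_ID)
print(work["title"])                  # Test-Time Canonicalization by Foundation Models ...
print(work["open_access"]["oa_url"])  # https://arxiv.org/pdf/2507.10375
print(work["cited_by_count"])         # 0 at the time this record was captured
```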
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4414740197 |
| doi | https://doi.org/10.48550/arxiv.2507.10375 |
| ids.doi | https://doi.org/10.48550/arxiv.2507.10375 |
| ids.openalex | https://openalex.org/W4414740197 |
| fwci | |
| type | preprint |
| title | Test-Time Canonicalization by Foundation Models for Robust Perception |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10531 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9947999715805054 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Vision and Imaging |
| topics[1].id | https://openalex.org/T12111 |
| topics[1].field.id | https://openalex.org/fields/22 |
| topics[1].field.display_name | Engineering |
| topics[1].score | 0.9886999726295471 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2209 |
| topics[1].subfield.display_name | Industrial and Manufacturing Engineering |
| topics[1].display_name | Industrial Vision Systems and Defect Detection |
| topics[2].id | https://openalex.org/T13114 |
| topics[2].field.id | https://openalex.org/fields/22 |
| topics[2].field.display_name | Engineering |
| topics[2].score | 0.9724000096321106 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2214 |
| topics[2].subfield.display_name | Media Technology |
| topics[2].display_name | Image Processing Techniques and Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2507.10375 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2507.10375 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2507.10375 |
| locations[1].id | doi:10.48550/arxiv.2507.10375 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2507.10375 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5119809720 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Utkarsh Singhal |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Singhal, Utkarsh |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5007824710 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-4767-274X |
| authorships[1].author.display_name | Ryan Feng |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Feng, Ryan |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5042014034 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-3507-5761 |
| authorships[2].author.display_name | Stella X. Yu |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Yu, Stella X. |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5101532643 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-9473-4966 |
| authorships[3].author.display_name | Atul Prakash |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Prakash, Atul |
| authorships[3].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2507.10375 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Test-Time Canonicalization by Foundation Models for Robust Perception |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10531 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9947999715805054 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Vision and Imaging |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2507.10375 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2507.10375 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2507.10375 |
| primary_location.id | pmh:oai:arXiv.org:2507.10375 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2507.10375 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2507.10375 |
| publication_date | 2025-07-14 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position index of the abstract; the full abstract text is reproduced at the top of this page) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile | |
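The abstract_inverted_index field in the payload stores the abstract as a word-to-positions map. A minimal sketch for turning such an index back into plain text; on the full index, the result should match the abstract quoted at the top of this page:

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild plain text from an OpenAlex abstract_inverted_index."""
    position_to_word = {}
    for word, positions in inverted_index.items():
        for pos in positions:
            position_to_word[pos] = word
    return " ".join(position_to_word[pos] for pos in sorted(position_to_word))

# Example with a tiny slice of the index recorded above:
print(reconstruct_abstract(
    {"Perception": [0], "in": [1], "the": [2], "real": [3], "world": [4]}
))
# -> "Perception in the real world"
```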