OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2509.01644
This paper presents a simplification of OpenVision's architecture and loss design to improve its training efficiency. Following the prior vision-language pretraining works CapPa and AIMv2, as well as modern multimodal designs like LLaVA, our changes are straightforward: we remove the text encoder (and therefore the contrastive loss), retaining only the captioning loss as a purely generative training signal. We name this new version OpenVision 2. The initial results are promising: despite this simplification, OpenVision 2 matches the original model's performance on a broad set of multimodal benchmarks while substantially cutting both training time and memory consumption. For example, with ViT-L/14, it reduces training time by about 1.5x (from 83h to 57h) and memory usage by about 1.8x (from 24.5GB to 13.8GB, equivalently allowing the maximum batch size to grow from 2k to 8k). This improved training efficiency also allows us to scale far beyond the largest vision encoder used in OpenVision, reaching more than 1 billion parameters. We believe this lightweight, generative-only paradigm is compelling for future vision encoder development in multimodal foundation models.
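To make the "captioning loss only" idea concrete, here is a minimal, hypothetical PyTorch sketch of a caption-only (generative) training objective: a vision encoder produces visual tokens, and an autoregressive text decoder is trained with next-token cross-entropy over the caption, with no text encoder and no contrastive branch. All class names, signatures, and shapes below are illustrative assumptions, not taken from the OpenVision 2 codebase.

```python
# Hypothetical sketch of a caption-only (generative) pretraining objective.
# Module names and call signatures are assumptions for illustration only.
import torch.nn as nn
import torch.nn.functional as F


class CaptionOnlyModel(nn.Module):
    def __init__(self, vision_encoder: nn.Module, text_decoder: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. a ViT backbone -> (B, N, D) visual tokens
        self.text_decoder = text_decoder      # autoregressive decoder over caption tokens

    def forward(self, images, caption_tokens):
        # Encode images into visual tokens; no text encoder, no contrastive branch.
        visual_tokens = self.vision_encoder(images)
        # Teacher forcing: predict each caption token from the visual tokens
        # and the previous caption tokens. Assumes logits of shape (B, T-1, V).
        logits = self.text_decoder(visual_tokens, caption_tokens[:, :-1])
        # Purely generative training signal: next-token cross-entropy on the caption.
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            caption_tokens[:, 1:].reshape(-1),
        )
        return loss
```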
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2509.01644 · https://arxiv.org/pdf/2509.01644
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416695365
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416695365 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2509.01644 (Digital Object Identifier)
- Title: OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-09-01
- Authors: Yanqing Liu, Xianhang Li, Letian Zhang, Zirui Wang, Zeyu Zheng (in order)
- Landing page: https://arxiv.org/abs/2509.01644 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2509.01644 (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (open-access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2509.01644 (direct OA link)
- Cited by: 0 (total citation count in OpenAlex)
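The record above mirrors what the public OpenAlex API returns for this work. Below is a minimal sketch of retrieving it with Python's requests library; the work ID is the OpenAlex ID listed above, and the printed fields (display_name, open_access.oa_url, cited_by_count) are standard OpenAlex work fields.

```python
# Minimal sketch: fetch this work's record from the public OpenAlex API.
# The W-ID is the OpenAlex ID shown in the metadata above.
import requests

WORK_ID = "W4416695365"
resp = requests.get(f"https://api.openalex.org/works/{WORK_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])            # work title
print(work["open_access"]["oa_url"])   # direct open-access link (the arXiv PDF)
print(work["cited_by_count"])          # citation count at query time
```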
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416695365 |
| doi | https://doi.org/10.48550/arxiv.2509.01644 |
| ids.doi | https://doi.org/10.48550/arxiv.2509.01644 |
| ids.openalex | https://openalex.org/W4416695365 |
| fwci | |
| type | preprint |
| title | OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2509.01644 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2509.01644 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2509.01644 |
| locations[1].id | doi:10.48550/arxiv.2509.01644 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2509.01644 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5100360935 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-0412-8805 |
| authorships[0].author.display_name | Yanqing Liu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Liu, Yanqing |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5030048445 |
| authorships[1].author.orcid | https://orcid.org/0009-0008-1970-4821 |
| authorships[1].author.display_name | Xianhang Li |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Li, Xianhang |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5090657130 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-6275-6506 |
| authorships[2].author.display_name | Letian Zhang |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zhang, Letian |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100687847 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-0626-742X |
| authorships[3].author.display_name | Zirui Wang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Wang, Zirui |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5002874750 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-5653-152X |
| authorships[4].author.display_name | Zeyu Zheng |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Zheng, Zeyu |
| authorships[4].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2509.01644 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-28T20:52:32.518997 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2509.01644 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2509.01644 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2509.01644 |
| primary_location.id | pmh:oai:arXiv.org:2509.01644 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2509.01644 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2509.01644 |
| publication_date | 2025-09-01 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position index of the abstract; the full abstract text appears above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile | |
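OpenAlex does not store abstracts as plain text; it ships the abstract_inverted_index summarized in the payload above, a mapping from each word to the list of positions where it occurs. The sketch below shows one way to rebuild a readable abstract from that structure, assuming `work` is the JSON record fetched in the earlier example.

```python
# Sketch: rebuild a plain-text abstract from OpenAlex's abstract_inverted_index,
# which maps each word to the positions where it appears in the abstract.
def reconstruct_abstract(inverted_index):
    if not inverted_index:
        return ""
    position_to_word = {}
    for word, positions in inverted_index.items():
        for pos in positions:
            position_to_word[pos] = word
    # Join words back in positional order.
    return " ".join(position_to_word[pos] for pos in sorted(position_to_word))


# Usage (assuming `work` is the record fetched in the earlier sketch):
# print(reconstruct_abstract(work.get("abstract_inverted_index")))
```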