M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2401.15896
Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, VLMs supporting multiple languages, e.g., both Chinese and English, have lagged due to the relative scarcity of large-scale pretraining datasets. Toward this end, we introduce BM-6B, a comprehensive bilingual (Chinese-English) dataset of over 6 billion image-text pairs, aimed at enabling multimodal foundation models to understand images well in both languages. To handle a dataset of this scale, we propose a novel grouped aggregation approach for computing the image-text contrastive loss, which significantly reduces communication overhead and GPU memory demands and yields a 60% increase in training speed. On BM-6B we pretrain a series of bilingual image-text foundation models with enhanced fine-grained understanding. The resulting models, dubbed $M^2$-Encoders (pronounced "M-Square"), set new benchmarks in both languages for multimodal retrieval and classification tasks. Notably, our largest $M^2$-Encoder-10B model achieves top-1 accuracies of 88.5% on ImageNet and 80.7% on ImageNet-CN under a zero-shot classification setting, surpassing previously reported SoTA methods by 2.2% and 21.1%, respectively. The $M^2$-Encoder series represents one of the most comprehensive bilingual image-text foundation models to date, and we are making it available to the research community for further exploration and development.
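The grouped aggregation idea mentioned in the abstract can be illustrated with a small sketch. The code below is a hypothetical PyTorch example, not the authors' implementation: it computes a standard CLIP-style image-to-text InfoNCE loss in row groups, so only a `group_size x N` slice of the similarity matrix is materialized at a time. This captures the memory-saving aspect of grouped computation; the paper's distributed communication scheme is not reproduced here, and the names `grouped_contrastive_loss` and `group_size` are illustrative.

```python
# Hypothetical sketch of a grouped image-text contrastive (InfoNCE) loss.
# Not the paper's implementation: it only illustrates computing the similarity
# matrix in groups of rows so the full N x N logits never exist at once.
import torch
import torch.nn.functional as F


def grouped_contrastive_loss(img_emb: torch.Tensor,
                             txt_emb: torch.Tensor,
                             temperature: float = 0.07,
                             group_size: int = 1024) -> torch.Tensor:
    """img_emb, txt_emb: [N, D] L2-normalized embeddings of N paired samples."""
    n = img_emb.size(0)
    labels = torch.arange(n, device=img_emb.device)
    total = img_emb.new_zeros(())
    for start in range(0, n, group_size):
        end = min(start + group_size, n)
        # Logits for this group of images against *all* texts: [group, N].
        logits = img_emb[start:end] @ txt_emb.t() / temperature
        total = total + F.cross_entropy(logits, labels[start:end], reduction="sum")
    return total / n


if __name__ == "__main__":
    torch.manual_seed(0)
    img = F.normalize(torch.randn(4096, 256), dim=-1)
    txt = F.normalize(torch.randn(4096, 256), dim=-1)
    print(grouped_contrastive_loss(img, txt).item())
```

In practice the symmetric text-to-image term is computed the same way, and in data-parallel training `txt_emb` would be the gathered set of text embeddings, so only pooled vectors (not full activations) cross devices.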
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2401.15896
- PDF: https://arxiv.org/pdf/2401.15896
- OA status: green
- Related works: 10
- OpenAlex ID: https://openalex.org/W4391376524
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4391376524 (canonical identifier for this work in OpenAlex; see the fetch sketch after this list)
- DOI: https://doi.org/10.48550/arxiv.2401.15896 (Digital Object Identifier)
- Title: M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-01-29 (full publication date if available)
- Authors: Qingpei Guo, F. R. Xu, Hanxiao Zhang, Ren Wang, Ziping Ma, Lin Ju, Jian Wang, Jingdong Chen, Ming Yang (authors in order)
- Landing page: https://arxiv.org/abs/2401.15896 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2401.15896 (direct link to the full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2401.15896 (direct OA link when available)
- Concepts: Encoder, Scale (ratio), Image (mathematics), Computer science, Natural language processing, Psychology, Artificial intelligence, Cognitive psychology, Computer vision, Physics, Quantum mechanics, Operating system (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
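The same record can be pulled programmatically from the public OpenAlex REST API at `https://api.openalex.org/works/W4391376524`. Below is a minimal sketch using the `requests` library; the field names match the payload shown on this page, while the optional `mailto` parameter (a courtesy identifier OpenAlex asks polite clients to send) and the error handling are illustrative choices.

```python
# Minimal sketch: fetch this work's metadata from the OpenAlex API.
import requests

WORK_ID = "W4391376524"  # OpenAlex ID for the M2-Encoder preprint

resp = requests.get(
    f"https://api.openalex.org/works/{WORK_ID}",
    params={"mailto": "you@example.com"},  # optional courtesy email
    timeout=30,
)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # title
print(work["publication_date"])       # 2024-01-29
print(work["open_access"]["oa_url"])  # direct OA link
for auth in work["authorships"]:
    print(auth["author"]["display_name"])
```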
Full payload
| id | https://openalex.org/W4391376524 |
|---|---|
| doi | https://doi.org/10.48550/arxiv.2401.15896 |
| ids.doi | https://doi.org/10.48550/arxiv.2401.15896 |
| ids.openalex | https://openalex.org/W4391376524 |
| fwci | |
| type | preprint |
| title | M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10601 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9976000189781189 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Handwritten Text Recognition Techniques |
| topics[1].id | https://openalex.org/T10627 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.983299970626831 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Image and Video Retrieval Techniques |
| topics[2].id | https://openalex.org/T10824 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9824000000953674 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Image Retrieval and Classification Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C118505674 |
| concepts[0].level | 2 |
| concepts[0].score | 0.8051598072052002 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q42586063 |
| concepts[0].display_name | Encoder |
| concepts[1].id | https://openalex.org/C2778755073 |
| concepts[1].level | 2 |
| concepts[1].score | 0.6712256073951721 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q10858537 |
| concepts[1].display_name | Scale (ratio) |
| concepts[2].id | https://openalex.org/C115961682 |
| concepts[2].level | 2 |
| concepts[2].score | 0.6382028460502625 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[2].display_name | Image (mathematics) |
| concepts[3].id | https://openalex.org/C41008148 |
| concepts[3].level | 0 |
| concepts[3].score | 0.48110431432724 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[3].display_name | Computer science |
| concepts[4].id | https://openalex.org/C204321447 |
| concepts[4].level | 1 |
| concepts[4].score | 0.42633676528930664 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[4].display_name | Natural language processing |
| concepts[5].id | https://openalex.org/C15744967 |
| concepts[5].level | 0 |
| concepts[5].score | 0.41358932852745056 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[5].display_name | Psychology |
| concepts[6].id | https://openalex.org/C154945302 |
| concepts[6].level | 1 |
| concepts[6].score | 0.39079979062080383 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[6].display_name | Artificial intelligence |
| concepts[7].id | https://openalex.org/C180747234 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3725830316543579 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q23373 |
| concepts[7].display_name | Cognitive psychology |
| concepts[8].id | https://openalex.org/C31972630 |
| concepts[8].level | 1 |
| concepts[8].score | 0.3702714443206787 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[8].display_name | Computer vision |
| concepts[9].id | https://openalex.org/C121332964 |
| concepts[9].level | 0 |
| concepts[9].score | 0.2651633322238922 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[9].display_name | Physics |
| concepts[10].id | https://openalex.org/C62520636 |
| concepts[10].level | 1 |
| concepts[10].score | 0.06793415546417236 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q944 |
| concepts[10].display_name | Quantum mechanics |
| concepts[11].id | https://openalex.org/C111919701 |
| concepts[11].level | 1 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q9135 |
| concepts[11].display_name | Operating system |
| keywords[0].id | https://openalex.org/keywords/encoder |
| keywords[0].score | 0.8051598072052002 |
| keywords[0].display_name | Encoder |
| keywords[1].id | https://openalex.org/keywords/scale |
| keywords[1].score | 0.6712256073951721 |
| keywords[1].display_name | Scale (ratio) |
| keywords[2].id | https://openalex.org/keywords/image |
| keywords[2].score | 0.6382028460502625 |
| keywords[2].display_name | Image (mathematics) |
| keywords[3].id | https://openalex.org/keywords/computer-science |
| keywords[3].score | 0.48110431432724 |
| keywords[3].display_name | Computer science |
| keywords[4].id | https://openalex.org/keywords/natural-language-processing |
| keywords[4].score | 0.42633676528930664 |
| keywords[4].display_name | Natural language processing |
| keywords[5].id | https://openalex.org/keywords/psychology |
| keywords[5].score | 0.41358932852745056 |
| keywords[5].display_name | Psychology |
| keywords[6].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[6].score | 0.39079979062080383 |
| keywords[6].display_name | Artificial intelligence |
| keywords[7].id | https://openalex.org/keywords/cognitive-psychology |
| keywords[7].score | 0.3725830316543579 |
| keywords[7].display_name | Cognitive psychology |
| keywords[8].id | https://openalex.org/keywords/computer-vision |
| keywords[8].score | 0.3702714443206787 |
| keywords[8].display_name | Computer vision |
| keywords[9].id | https://openalex.org/keywords/physics |
| keywords[9].score | 0.2651633322238922 |
| keywords[9].display_name | Physics |
| keywords[10].id | https://openalex.org/keywords/quantum-mechanics |
| keywords[10].score | 0.06793415546417236 |
| keywords[10].display_name | Quantum mechanics |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2401.15896 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2401.15896 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2401.15896 |
| locations[1].id | doi:10.48550/arxiv.2401.15896 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2401.15896 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5086923590 |
| authorships[0].author.orcid | https://orcid.org/0009-0001-0521-9664 |
| authorships[0].author.display_name | Qingpei Guo |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Guo, Qingpei |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5076909902 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-6699-0965 |
| authorships[1].author.display_name | F. R. Xu |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Xu, Furong |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5073987900 |
| authorships[2].author.orcid | https://orcid.org/0009-0002-2811-669X |
| authorships[2].author.display_name | Hanxiao Zhang |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zhang, Hanxiao |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100339141 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-0529-9913 |
| authorships[3].author.display_name | Ren Wang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Ren, Wang |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5091104090 |
| authorships[4].author.orcid | https://orcid.org/0000-0003-1932-821X |
| authorships[4].author.display_name | Ziping Ma |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Ma, Ziping |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5101282554 |
| authorships[5].author.orcid | |
| authorships[5].author.display_name | Lin Ju |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Ju, Lin |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5100696824 |
| authorships[6].author.orcid | https://orcid.org/0000-0002-4316-932X |
| authorships[6].author.display_name | Jian Wang |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Wang, Jian |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5056129529 |
| authorships[7].author.orcid | https://orcid.org/0000-0003-0083-9247 |
| authorships[7].author.display_name | Jingdong Chen |
| authorships[7].author_position | middle |
| authorships[7].raw_author_name | Chen, Jingdong |
| authorships[7].is_corresponding | False |
| authorships[8].author.id | https://openalex.org/A5009256880 |
| authorships[8].author.orcid | https://orcid.org/0000-0002-8679-9137 |
| authorships[8].author.display_name | Ming Yang |
| authorships[8].author_position | last |
| authorships[8].raw_author_name | Yang, Ming |
| authorships[8].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2401.15896 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2024-01-31T00:00:00 |
| display_name | M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10601 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9976000189781189 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Handwritten Text Recognition Techniques |
| related_works | https://openalex.org/W4390516098, https://openalex.org/W2181948922, https://openalex.org/W2384362569, https://openalex.org/W4205302943, https://openalex.org/W2119949815, https://openalex.org/W2561132942, https://openalex.org/W2142795561, https://openalex.org/W3155418658, https://openalex.org/W4243199227, https://openalex.org/W2379948177 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2401.15896 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2401.15896 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2401.15896 |
| primary_location.id | pmh:oai:arXiv.org:2401.15896 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2401.15896 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2401.15896 |
| publication_date | 2024-01-29 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-position inverted index of the abstract (the full abstract is reproduced above; a reconstruction sketch follows this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 9 |
| citation_normalized_percentile |
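The `abstract_inverted_index` field summarized in the table maps each token of the abstract to the word positions where it occurs, which is how OpenAlex stores abstracts. A minimal sketch of rebuilding the plain text from that structure, using only the standard library (names are illustrative):

```python
# Reconstruct an abstract from an OpenAlex abstract_inverted_index,
# which maps each token to the list of word positions where it appears.

def rebuild_abstract(inverted_index: dict[str, list[int]]) -> str:
    # Collect (position, token) pairs, then sort by position.
    positioned = [
        (pos, token)
        for token, positions in inverted_index.items()
        for pos in positions
    ]
    positioned.sort()
    return " ".join(token for _, token in positioned)


if __name__ == "__main__":
    # Tiny example with the same shape as the OpenAlex field.
    example = {"Vision-language": [0], "foundation": [1], "models": [2], "like": [3], "CLIP": [4]}
    print(rebuild_abstract(example))  # -> "Vision-language foundation models like CLIP"
```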