Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents
2024 · Open Access · DOI: https://doi.org/10.32388/ajmacy
Large multimodal models (LMMs) have achieved impressive progress in vision-language understanding, yet they face limitations in real-world applications that require complex reasoning over a large number of images. Existing benchmarks for multi-image question answering are limited in scope: each question is paired with at most 30 images, which does not fully capture the demands of the large-scale retrieval tasks encountered in real-world use. To bridge this gap, we introduce two document haystack benchmarks, dubbed DocHaystack and InfoHaystack, designed to evaluate LMM performance on large-scale visual document retrieval and understanding. We also propose V-RAG, a novel vision-centric retrieval-augmented generation (RAG) framework that leverages a suite of multimodal vision encoders, each optimized for specific strengths, together with a dedicated question-document relevance module. V-RAG sets a new standard, improving Recall@1 by 9% and 11% on the challenging DocHaystack-1000 and InfoHaystack-1000 benchmarks, respectively, over the previous best baseline models. Integrating V-RAG with LMMs further enables them to operate efficiently across thousands of images, yielding significant improvements on our DocHaystack and InfoHaystack benchmarks. Our code and datasets are available at https://github.com/Vision-CAIR/dochaystacks
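The abstract reports gains in Recall@1, i.e. the fraction of questions whose single ground-truth document is ranked first by the retriever. A minimal sketch of that metric (function name and data shapes are illustrative, not taken from the paper):

```python
def recall_at_k(rankings, gold, k=1):
    """Recall@k over a set of queries: the fraction whose single
    gold document appears among the top-k retrieved results."""
    hits = sum(1 for ranked, g in zip(rankings, gold) if g in ranked[:k])
    return hits / len(gold)

# Two toy queries: the first retriever run ranks the gold document
# first, the second only at rank 2.
rankings = [["doc3", "doc1"], ["doc2", "doc9"]]
gold = ["doc3", "doc9"]
print(recall_at_k(rankings, gold, k=1))  # -> 0.5
print(recall_at_k(rankings, gold, k=2))  # -> 1.0
```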
Record Summary
- Type: preprint
- Language: en
- Landing Page: https://doi.org/10.32388/ajmacy
- OA Status: gold
- Cited By: 1
- References: 39
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4405466386
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4405466386 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.32388/ajmacy (Digital Object Identifier)
- Title: Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-12-17 (full publication date if available)
- Authors: Jun Chen, Dannong Xu, Junjie Fei, Chun-Mei Feng, Mohamed Elhoseiny (list of authors in order)
- Landing page: https://doi.org/10.32388/ajmacy (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://doi.org/10.32388/ajmacy (direct OA link when available)
- Concepts: Computer science, Natural language processing, Artificial intelligence, Linguistics, Philosophy (top concepts/fields attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1 (per-year citation counts, last 5 years)
- References (count): 39 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
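The fields above are a rendering of this work's record from the public OpenAlex API. A sketch of fetching the same record directly (the `/works/{id}` endpoint is real and needs no API key; the helper names `work_url` and `fetch_work` are ours, not OpenAlex's):

```python
import json
import urllib.request

OPENALEX_ID = "W4405466386"  # short form of https://openalex.org/W4405466386

def work_url(openalex_id):
    """Build the API URL for a single work from its short OpenAlex ID."""
    return f"https://api.openalex.org/works/{openalex_id}"

def fetch_work(openalex_id):
    """Download and parse the JSON record for one work."""
    with urllib.request.urlopen(work_url(openalex_id)) as resp:
        return json.load(resp)

# Usage (network required):
#   work = fetch_work(OPENALEX_ID)
#   print(work["display_name"], work["cited_by_count"])
```

The JSON returned by this call is what the "Full payload" table below flattens into dotted keys (e.g. `open_access.oa_status`).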
Full payload
| id | https://openalex.org/W4405466386 |
|---|---|
| doi | https://doi.org/10.32388/ajmacy |
| ids.doi | https://doi.org/10.32388/ajmacy |
| ids.openalex | https://openalex.org/W4405466386 |
| fwci | 0.53015756 |
| type | preprint |
| title | Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11714 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998000264167786 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Multimodal Machine Learning Applications |
| topics[1].id | https://openalex.org/T10028 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9984999895095825 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Topic Modeling |
| topics[2].id | https://openalex.org/T10181 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9957000017166138 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Natural Language Processing Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.4879528880119324 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C204321447 |
| concepts[1].level | 1 |
| concepts[1].score | 0.46823850274086 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[1].display_name | Natural language processing |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.3937756419181824 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C41895202 |
| concepts[3].level | 1 |
| concepts[3].score | 0.33563554286956787 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[3].display_name | Linguistics |
| concepts[4].id | https://openalex.org/C138885662 |
| concepts[4].level | 0 |
| concepts[4].score | 0.14062952995300293 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[4].display_name | Philosophy |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.4879528880119324 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/natural-language-processing |
| keywords[1].score | 0.46823850274086 |
| keywords[1].display_name | Natural language processing |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.3937756419181824 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/linguistics |
| keywords[3].score | 0.33563554286956787 |
| keywords[3].display_name | Linguistics |
| keywords[4].id | https://openalex.org/keywords/philosophy |
| keywords[4].score | 0.14062952995300293 |
| keywords[4].display_name | Philosophy |
| language | en |
| locations[0].id | doi:10.32388/ajmacy |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | acceptedVersion |
| locations[0].raw_type | posted-content |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | https://doi.org/10.32388/ajmacy |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5100450148 |
| authorships[0].author.orcid | https://orcid.org/0000-0001-8883-0970 |
| authorships[0].author.display_name | Jun Chen |
| authorships[0].countries | SA |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I71920554 |
| authorships[0].affiliations[0].raw_affiliation_string | King Abdullah University of Science and Technology |
| authorships[0].institutions[0].id | https://openalex.org/I71920554 |
| authorships[0].institutions[0].ror | https://ror.org/01q3tbs38 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I71920554 |
| authorships[0].institutions[0].country_code | SA |
| authorships[0].institutions[0].display_name | King Abdullah University of Science and Technology |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Jun Chen |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | King Abdullah University of Science and Technology |
| authorships[1].author.id | https://openalex.org/A5111005645 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Dannong Xu |
| authorships[1].countries | AU |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I129604602 |
| authorships[1].affiliations[0].raw_affiliation_string | University of Sydney |
| authorships[1].institutions[0].id | https://openalex.org/I129604602 |
| authorships[1].institutions[0].ror | https://ror.org/0384j8v12 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I129604602 |
| authorships[1].institutions[0].country_code | AU |
| authorships[1].institutions[0].display_name | The University of Sydney |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Dannong Xu |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | University of Sydney |
| authorships[2].author.id | https://openalex.org/A5086595580 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-8193-3704 |
| authorships[2].author.display_name | Junjie Fei |
| authorships[2].countries | SA |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I71920554 |
| authorships[2].affiliations[0].raw_affiliation_string | King Abdullah University of Science and Technology |
| authorships[2].institutions[0].id | https://openalex.org/I71920554 |
| authorships[2].institutions[0].ror | https://ror.org/01q3tbs38 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I71920554 |
| authorships[2].institutions[0].country_code | SA |
| authorships[2].institutions[0].display_name | King Abdullah University of Science and Technology |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Junjie Fei |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | King Abdullah University of Science and Technology |
| authorships[3].author.id | https://openalex.org/A5049444898 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-3025-8964 |
| authorships[3].author.display_name | Chun-Mei Feng |
| authorships[3].countries | SG |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I115228651 |
| authorships[3].affiliations[0].raw_affiliation_string | Agency for Science, Technology and Research |
| authorships[3].institutions[0].id | https://openalex.org/I115228651 |
| authorships[3].institutions[0].ror | https://ror.org/036wvzt09 |
| authorships[3].institutions[0].type | government |
| authorships[3].institutions[0].lineage | https://openalex.org/I115228651 |
| authorships[3].institutions[0].country_code | SG |
| authorships[3].institutions[0].display_name | Agency for Science, Technology and Research |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Chun-Mei Feng |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Agency for Science, Technology and Research |
| authorships[4].author.id | https://openalex.org/A5085089542 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-9659-1551 |
| authorships[4].author.display_name | Mohamed Elhoseiny |
| authorships[4].countries | SA |
| authorships[4].affiliations[0].institution_ids | https://openalex.org/I71920554 |
| authorships[4].affiliations[0].raw_affiliation_string | King Abdullah University of Science and Technology |
| authorships[4].institutions[0].id | https://openalex.org/I71920554 |
| authorships[4].institutions[0].ror | https://ror.org/01q3tbs38 |
| authorships[4].institutions[0].type | education |
| authorships[4].institutions[0].lineage | https://openalex.org/I71920554 |
| authorships[4].institutions[0].country_code | SA |
| authorships[4].institutions[0].display_name | King Abdullah University of Science and Technology |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Mohamed Elhoseiny |
| authorships[4].is_corresponding | False |
| authorships[4].raw_affiliation_strings | King Abdullah University of Science and Technology |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.32388/ajmacy |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T11714 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998000264167786 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Multimodal Machine Learning Applications |
| related_works | https://openalex.org/W4391375266, https://openalex.org/W2899084033, https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W4391913857, https://openalex.org/W2358668433, https://openalex.org/W4396701345, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W3204019825 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 1 |
| best_oa_location.id | doi:10.32388/ajmacy |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | acceptedVersion |
| best_oa_location.raw_type | posted-content |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | https://doi.org/10.32388/ajmacy |
| primary_location.id | doi:10.32388/ajmacy |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | acceptedVersion |
| primary_location.raw_type | posted-content |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | https://doi.org/10.32388/ajmacy |
| publication_date | 2024-12-17 |
| publication_year | 2024 |
| referenced_works | https://openalex.org/W4403853618, https://openalex.org/W6873437868, https://openalex.org/W4403443917, https://openalex.org/W4385768011, https://openalex.org/W3123868215, https://openalex.org/W3120043490, https://openalex.org/W4213213306, https://openalex.org/W4312643954, https://openalex.org/W2560730294, https://openalex.org/W6868835603, https://openalex.org/W4390050357, https://openalex.org/W2963518342, https://openalex.org/W2277195237, https://openalex.org/W4285255856, https://openalex.org/W4392121811, https://openalex.org/W4403808928, https://openalex.org/W2947312908, https://openalex.org/W4312846625, https://openalex.org/W4296605665, https://openalex.org/W4402716477, https://openalex.org/W4366850747, https://openalex.org/W4387723654, https://openalex.org/W3109643012, https://openalex.org/W6869711607, https://openalex.org/W4382202558, https://openalex.org/W3135367836, https://openalex.org/W4390873312, https://openalex.org/W6912494966, https://openalex.org/W4387560058, https://openalex.org/W4387800173, https://openalex.org/W3007672467, https://openalex.org/W4389518901, https://openalex.org/W3099700870, https://openalex.org/W4385573236, https://openalex.org/W2963341956, https://openalex.org/W2613718673, https://openalex.org/W4312638656, https://openalex.org/W6869473469, https://openalex.org/W6796581206 |
| referenced_works_count | 39 |
| abstract_inverted_index | (per-token word-position lists; omitted here, as they encode the abstract shown above) |
| cited_by_percentile_year.max | 95 |
| cited_by_percentile_year.min | 91 |
| countries_distinct_count | 3 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile.value | 0.64371858 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
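OpenAlex stores abstracts not as plain text but as an `abstract_inverted_index` mapping each token to the word positions where it occurs, as in the payload above. A small sketch of inverting such an index back into readable text (the `sample` dict is a toy excerpt, not the full index):

```python
def reconstruct_abstract(inverted_index):
    """Rebuild plain text from an OpenAlex abstract_inverted_index,
    which maps each token to the list of positions where it occurs."""
    by_position = {}
    for token, positions in inverted_index.items():
        for pos in positions:
            by_position[pos] = token
    # Join tokens in position order to recover the original word sequence.
    return " ".join(by_position[pos] for pos in sorted(by_position))

# Toy excerpt covering only the abstract's first four tokens:
sample = {"Large": [0], "multimodal": [1], "models": [2], "(LMMs)": [3]}
print(reconstruct_abstract(sample))  # -> Large multimodal models (LMMs)
```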