EMMA: Efficient Visual Alignment in Multi-Modal LLMs
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2410.02080
Multi-modal Large Language Models (MLLMs) have recently exhibited impressive general-purpose capabilities by leveraging vision foundation models to encode the core concepts of images into representations. These are then combined with instructions and processed by the language model to generate high-quality responses. Despite significant progress in enhancing the language component, challenges persist in optimally fusing visual encodings within the language model for task-specific adaptability. Recent research has focused on improving this fusion through modality adaptation modules, but at the cost of significantly increased model complexity and training data needs. In this paper, we propose EMMA (Efficient Multi-Modal Adaptation), a lightweight cross-modality module designed to efficiently fuse visual and textual encodings, generating instruction-aware visual representations for the language model. Our key contributions include: (1) an efficient early fusion mechanism that integrates vision and language representations with minimal added parameters (less than a 0.2% increase in model size); (2) an in-depth interpretability analysis that sheds light on the internal mechanisms of the proposed method; and (3) comprehensive experiments that demonstrate notable improvements on both specialized and general benchmarks for MLLMs. Empirical results show that EMMA boosts performance across multiple tasks by up to 9.3% while significantly improving robustness against hallucinations. Our code is available at https://github.com/SaraGhazanfari/EMMA
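The early-fusion idea described in the abstract (conditioning visual tokens on the instruction with only a few added projection parameters) can be illustrated with a toy cross-attention in NumPy. This is a hypothetical sketch, not the authors' implementation; all names, shapes, and the residual formulation here are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def early_fusion(visual, text, Wq, Wk, Wv):
    """Condition visual tokens on instruction tokens via cross-attention.

    visual: (n_vis, d) image-patch encodings from a vision encoder
    text:   (n_txt, d) instruction-token encodings
    Wq/Wk/Wv: (d, d) projections, the only parameters this toy module adds
    """
    q = visual @ Wq                 # queries come from the visual tokens
    k = text @ Wk                   # keys come from the instruction tokens
    v = text @ Wv                   # values come from the instruction tokens
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    # residual connection: instruction-aware visual representation,
    # same shape as the input visual tokens
    return visual + attn @ v

rng = np.random.default_rng(0)
d = 16
vis = rng.normal(size=(9, d))       # e.g. a 3x3 grid of patch tokens
txt = rng.normal(size=(5, d))       # 5 instruction tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
fused = early_fusion(vis, txt, Wq, Wk, Wv)
print(fused.shape)  # (9, 16)
```

The output keeps the visual-token shape, so such a module could in principle be dropped in before the language model without changing anything downstream.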
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2410.02080
- PDF: https://arxiv.org/pdf/2410.02080
- OA status: green
- Related works: 10
- OpenAlex ID: https://openalex.org/W4403882354
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4403882354 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2410.02080 (Digital Object Identifier)
- Title: EMMA: Efficient Visual Alignment in Multi-Modal LLMs (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-10-02 (full publication date if available)
- Authors: Sara Ghazanfari, Alexandre Araujo, P. Krishnamurthy, Siddharth Garg, Farshad Khorrami (list of authors in order)
- Landing page: https://arxiv.org/abs/2410.02080 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2410.02080 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2410.02080 (direct OA link when available)
- Concepts: Modal, Computer science, Computer vision, Artificial intelligence, Materials science, Polymer chemistry (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4403882354 |
| doi | https://doi.org/10.48550/arxiv.2410.02080 |
| ids.doi | https://doi.org/10.48550/arxiv.2410.02080 |
| ids.openalex | https://openalex.org/W4403882354 |
| fwci | |
| type | preprint |
| title | EMMA: Efficient Visual Alignment in Multi-Modal LLMs |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10601 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9314000010490417 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Handwritten Text Recognition Techniques |
| topics[1].id | https://openalex.org/T10181 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9121000170707703 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Natural Language Processing Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C71139939 |
| concepts[0].level | 2 |
| concepts[0].score | 0.6934751868247986 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q910194 |
| concepts[0].display_name | Modal |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.4602159261703491 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C31972630 |
| concepts[2].level | 1 |
| concepts[2].score | 0.3477328419685364 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[2].display_name | Computer vision |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.3222001791000366 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C192562407 |
| concepts[4].level | 0 |
| concepts[4].score | 0.05659213662147522 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q228736 |
| concepts[4].display_name | Materials science |
| concepts[5].id | https://openalex.org/C188027245 |
| concepts[5].level | 1 |
| concepts[5].score | 0.0 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q750446 |
| concepts[5].display_name | Polymer chemistry |
| keywords[0].id | https://openalex.org/keywords/modal |
| keywords[0].score | 0.6934751868247986 |
| keywords[0].display_name | Modal |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.4602159261703491 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/computer-vision |
| keywords[2].score | 0.3477328419685364 |
| keywords[2].display_name | Computer vision |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.3222001791000366 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/materials-science |
| keywords[4].score | 0.05659213662147522 |
| keywords[4].display_name | Materials science |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2410.02080 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2410.02080 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2410.02080 |
| locations[1].id | doi:10.48550/arxiv.2410.02080 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2410.02080 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5017092536 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Sara Ghazanfari |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Ghazanfari, Sara |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5053106735 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2220-5739 |
| authorships[1].author.display_name | Alexandre Araujo |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Araujo, Alexandre |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5054769060 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-8264-7972 |
| authorships[2].author.display_name | P. Krishnamurthy |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Krishnamurthy, Prashanth |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5010950688 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-6158-9512 |
| authorships[3].author.display_name | Siddharth Garg |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Garg, Siddharth |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5082413942 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-8418-004X |
| authorships[4].author.display_name | Farshad Khorrami |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Khorrami, Farshad |
| authorships[4].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2410.02080 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | EMMA: Efficient Visual Alignment in Multi-Modal LLMs |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10601 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9314000010490417 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Handwritten Text Recognition Techniques |
| related_works | https://openalex.org/W2772917594, https://openalex.org/W2036807459, https://openalex.org/W2058170566, https://openalex.org/W2755342338, https://openalex.org/W2166024367, https://openalex.org/W3116076068, https://openalex.org/W2229312674, https://openalex.org/W2951359407, https://openalex.org/W2079911747, https://openalex.org/W1969923398 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2410.02080 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2410.02080 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2410.02080 |
| primary_location.id | pmh:oai:arXiv.org:2410.02080 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2410.02080 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2410.02080 |
| publication_date | 2024-10-02 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (omitted: word-to-position map that duplicates the abstract shown above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile |
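OpenAlex ships abstracts not as plain text but as an inverted index, a map from each word to the positions where it occurs (the `abstract_inverted_index` field of the payload). A small helper can invert that map back into readable text; a minimal sketch:

```python
def rebuild_abstract(inverted_index: dict) -> str:
    """Invert OpenAlex's word -> positions map back into plain text."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # positions is sparse but its keys cover 0..N; join words in order
    return " ".join(positions[i] for i in sorted(positions))

# tiny sample in the same shape as the payload field
sample = {
    "Multi-modal": [0],
    "Large": [1],
    "Language": [2],
    "Models": [3],
    "(MLLMs)": [4],
}
print(rebuild_abstract(sample))  # Multi-modal Large Language Models (MLLMs)
```

Applied to the full `abstract_inverted_index` of this record, the helper reproduces the abstract quoted at the top of the page.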