Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models
2025 · Open Access
DOI: https://doi.org/10.48550/arxiv.2505.21200
Vision-Language-Action (VLA) models have emerged as a powerful paradigm for general-purpose robot control through natural language instructions. However, their high inference cost, stemming from large-scale token computation and autoregressive decoding, poses significant challenges for real-time deployment and edge applications. While prior work has primarily focused on architectural optimization, we take a different perspective by identifying a dual form of redundancy in VLA models: (i) high similarity across consecutive action steps, and (ii) substantial redundancy in visual tokens. Motivated by these observations, we propose FlashVLA, the first training-free and plug-and-play acceleration framework that enables action reuse in VLA models. FlashVLA improves inference efficiency through a token-aware action reuse mechanism that avoids redundant decoding across stable action steps, and an information-guided visual token selection strategy that prunes low-contribution tokens. Extensive experiments on the LIBERO benchmark show that FlashVLA reduces FLOPs by 55.7% and latency by 36.0%, with only a 0.7% drop in task success rate. These results demonstrate the effectiveness of FlashVLA in enabling lightweight, low-latency VLA inference without retraining.
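The abstract describes two mechanisms: replaying the previous action when consecutive steps are highly similar, and pruning visual tokens that contribute little information. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: the cosine-similarity gate, the L2-norm token score, and all names and thresholds (`scene_is_stable`, `select_visual_tokens`, `sim_threshold`, `keep_ratio`) are assumptions made here for illustration.

```python
import numpy as np

def scene_is_stable(curr_feat, prev_feat, sim_threshold=0.98):
    """Action-reuse gate (hypothetical): treat the step as stable when
    consecutive visual feature vectors are nearly parallel."""
    cos = float(np.dot(curr_feat, prev_feat) /
                (np.linalg.norm(curr_feat) * np.linalg.norm(prev_feat) + 1e-8))
    return cos >= sim_threshold

def select_visual_tokens(tokens, keep_ratio=0.5):
    """Token selection (hypothetical): score tokens by L2 norm as a
    stand-in for an information measure and keep the top fraction,
    preserving their original order."""
    scores = np.linalg.norm(tokens, axis=-1)        # (num_tokens,)
    k = max(1, int(len(tokens) * keep_ratio))
    keep_idx = np.sort(np.argsort(scores)[-k:])     # top-k indices, sorted
    return tokens[keep_idx]

# Toy control step: replay the last action while the scene barely changes.
rng = np.random.default_rng(0)
prev_feat = rng.standard_normal(256)
prev_action = np.zeros(7)                           # e.g. a 7-DoF action vector
curr_feat = prev_feat + 0.01 * rng.standard_normal(256)
if scene_is_stable(curr_feat, prev_feat):
    action = prev_action                            # reuse: no decoding pass
else:
    pruned = select_visual_tokens(rng.standard_normal((196, 256)))
    action = None                                   # placeholder: run the decoder on pruned tokens
```

The appeal of a gate like this is that it is training-free: it only compares quantities the model already computes, which matches the paper's claim of a plug-and-play framework requiring no retraining.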
Related Topics
- Multimodal Machine Learning Applications
- Advanced Neural Network Applications
- Semantic Web and Ontologies

- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2505.21200
- PDF: https://arxiv.org/pdf/2505.21200
- OA Status: green
- OpenAlex ID: https://openalex.org/W4415037022
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4415037022 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2505.21200 (Digital Object Identifier)
- Title: Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025
- Publication date: 2025-05-27 (full publication date, if available)
- Authors: Xiaomeng Tan, Yanzhao Yang, Hancheng Ye, Jialin Zheng, Bizhe Bai, Xinyi Wang, Hao Jia, Tao Chen (in listed order)
- Landing page: https://arxiv.org/abs/2505.21200
- PDF URL: https://arxiv.org/pdf/2505.21200 (direct link to full text)
- Open access: Yes (a free full text is available)
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2505.21200
- Cited by: 0 (total citation count in OpenAlex)
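The identifiers above make this record easy to fetch programmatically. A minimal sketch against the public OpenAlex REST API (`GET https://api.openalex.org/works/{id}` is OpenAlex's documented endpoint; the `requests` dependency and the particular fields printed are choices made here):

```python
import requests

# Retrieve this work's full record from the public OpenAlex API.
work_id = "W4415037022"
resp = requests.get(f"https://api.openalex.org/works/{work_id}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # paper title
print(work["publication_date"])       # "2025-05-27"
print(work["open_access"]["oa_url"])  # https://arxiv.org/pdf/2505.21200
```

The response contains the same fields shown in the full payload below (locations, topics, authorships, and so on).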
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4415037022 |
| doi | https://doi.org/10.48550/arxiv.2505.21200 |
| ids.doi | https://doi.org/10.48550/arxiv.2505.21200 |
| ids.openalex | https://openalex.org/W4415037022 |
| fwci | |
| type | preprint |
| title | Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11714 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.974399983882904 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Multimodal Machine Learning Applications |
| topics[1].id | https://openalex.org/T10036 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9128000140190125 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Neural Network Applications |
| topics[2].id | https://openalex.org/T10215 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9054999947547913 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Semantic Web and Ontologies |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2505.21200 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://arxiv.org/pdf/2505.21200 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2505.21200 |
| locations[1].id | doi:10.48550/arxiv.2505.21200 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2505.21200 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5102292272 |
| authorships[0].author.orcid | https://orcid.org/0000-0001-6530-4804 |
| authorships[0].author.display_name | Xiaomeng Tan |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Tan, Xudong |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5101787494 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-1131-7711 |
| authorships[1].author.display_name | Yanzhao Yang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yang, Yaoxin |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5000892367 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-6272-2792 |
| authorships[2].author.display_name | Hancheng Ye |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Ye, Peng |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5085191289 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-2286-0151 |
| authorships[3].author.display_name | Jialin Zheng |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Zheng, Jialin |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5037100403 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-8783-7353 |
| authorships[4].author.display_name | Bizhe Bai |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Bai, Bizhe |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5100382959 |
| authorships[5].author.orcid | https://orcid.org/0000-0003-1585-1724 |
| authorships[5].author.display_name | Xinyi Wang |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Wang, Xinyi |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5100668366 |
| authorships[6].author.orcid | https://orcid.org/0000-0003-4275-3820 |
| authorships[6].author.display_name | Hao Jia |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Hao, Jia |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5100357699 |
| authorships[7].author.orcid | https://orcid.org/0000-0001-6325-5260 |
| authorships[7].author.display_name | Tao Chen |
| authorships[7].author_position | last |
| authorships[7].raw_author_name | Chen, Tao |
| authorships[7].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2505.21200 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11714 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.974399983882904 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Multimodal Machine Learning Applications |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2505.21200 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2505.21200 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2505.21200 |
| primary_location.id | pmh:oai:arXiv.org:2505.21200 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://arxiv.org/pdf/2505.21200 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2505.21200 |
| publication_date | 2025-05-27 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | word → position-list map of the abstract shown above |
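OpenAlex stores the abstract as an inverted index (the abstract_inverted_index field above), mapping each word to the positions where it appears rather than storing plain text. A minimal sketch for reconstructing the abstract from such an index; the sample entries mirror the start of this work's index:

```python
def rebuild_abstract(inverted_index):
    """Reconstruct plain text from an OpenAlex abstract_inverted_index:
    place each word at every position listed for it, then join in order."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

# A few entries taken from the beginning of this work's index.
sample = {
    "Vision-Language-Action": [0],
    "(VLA)": [1],
    "models": [2],
    "have": [3],
    "emerged": [4],
    "as": [5],
}
print(rebuild_abstract(sample))  # Vision-Language-Action (VLA) models have emerged as
```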
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 8 |
| citation_normalized_percentile | |