Planner-Refiner: Dynamic Space-Time Refinement for Vision-Language Alignment in Videos
2025 · Open Access
DOI: https://doi.org/10.3233/faia250846
Vision-language alignment in video must address the complexity of language, evolving interacting entities, their action chains, and semantic gaps between language and vision. This work introduces Planner-Refiner, a framework to overcome these challenges. Planner-Refiner bridges the semantic gap by iteratively refining visual elements’ space-time representation, guided by language until semantic gaps are minimal. A Planner module schedules language guidance by decomposing complex linguistic prompts into short sentence chains. The Refiner processes each short sentence—a noun-phrase and verb-phrase pair—to direct visual tokens’ self-attention across space then time, achieving efficient single-step refinement. A recurrent system chains these steps, maintaining refined visual token representations. The final representation feeds into task-specific heads for alignment generation. We demonstrate Planner-Refiner’s effectiveness on two video-language alignment tasks: Referring Video Object Segmentation and Temporal Grounding with varying language complexity. We further introduce a new MeViS-X benchmark to assess models’ capability with long queries. Superior performance versus state-of-the-art methods on these benchmarks shows the approach’s potential, especially for complex prompts.
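The refinement loop the abstract describes can be illustrated with a toy sketch. Everything below is a minimal NumPy sketch under stated assumptions, not the authors' implementation: plain vectors stand in for the Planner's parsed noun-/verb-phrase sentence embeddings, and a single dot-product attention biased by the language vector stands in for the learned space-then-time self-attention; all function names (`attend`, `refine_step`, `planner_refiner`) are hypothetical.

```python
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def attend(tokens, guide):
    """Language-guided self-attention over one axis.

    tokens: (N, D) visual tokens; guide: (D,) sentence embedding.
    Token-token affinities are biased toward tokens relevant to the guide.
    """
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])  # (N, N) affinities
    bias = tokens @ guide                                   # (N,) language relevance
    weights = softmax(scores + bias[None, :], axis=-1)
    return weights @ tokens


def refine_step(video, guide):
    """One Refiner step: attention across space, then across time."""
    T, S, D = video.shape
    # Per frame, attend over spatial locations.
    spatial = np.stack([attend(video[t], guide) for t in range(T)])
    # Per spatial location, attend over frames.
    temporal = np.stack([attend(spatial[:, s], guide) for s in range(S)], axis=1)
    return temporal


def planner_refiner(video, sentence_embeddings):
    """Recurrently chain one refinement step per short sentence."""
    state = video
    for guide in sentence_embeddings:  # the Planner's schedule of short sentences
        state = refine_step(state, guide)
    return state
```

Each pass keeps the visual token tensor's shape, so refined representations can be chained recurrently and finally handed to a task-specific head, mirroring the abstract's description.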
Record details
- Type: book-chapter
- Landing Page: https://doi.org/10.3233/faia250846
- OA Status: hybrid
- OpenAlex ID: https://openalex.org/W4415428949
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4415428949 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.3233/faia250846 (Digital Object Identifier)
- Title: Planner-Refiner: Dynamic Space-Time Refinement for Vision-Language Alignment in Videos
- Type: book-chapter (OpenAlex work type)
- Publication year: 2025
- Publication date: 2025-10-21
- Authors: Tuyen Tran, Thao Minh Le, Quang-Hung Le, Truyen Tran (in order)
- Landing page: https://doi.org/10.3233/faia250846 (publisher landing page)
- Open access: Yes (a free full text is available)
- OA status: hybrid (per OpenAlex)
- OA URL: https://doi.org/10.3233/faia250846
- Cited by: 0 (total citation count in OpenAlex)
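The fields above come from the OpenAlex Works API, which serves this record at `https://api.openalex.org/works/W4415428949`. The sketch below shows how one might pull out the summary fields from such a record; `fetch_work` and `summarize_work` are hypothetical helper names, and the key paths (`authorships`, `open_access`, `cited_by_count`) follow the payload shown on this page.

```python
import json
from urllib.request import urlopen

# Works API endpoint for this record (no authentication required).
OPENALEX_WORK = "https://api.openalex.org/works/W4415428949"


def fetch_work(url: str = OPENALEX_WORK) -> dict:
    """Download a work record as JSON (requires network access)."""
    with urlopen(url) as resp:
        return json.load(resp)


def summarize_work(work: dict) -> dict:
    """Extract the summary fields displayed on this page from a work record."""
    oa = work.get("open_access", {})
    return {
        "title": work.get("title"),
        "doi": work.get("doi"),
        "type": work.get("type"),
        "publication_date": work.get("publication_date"),
        "authors": [a["author"]["display_name"] for a in work.get("authorships", [])],
        "is_oa": oa.get("is_oa"),
        "oa_status": oa.get("oa_status"),
        "cited_by_count": work.get("cited_by_count"),
    }
```

For example, `summarize_work(fetch_work())` would return the title, DOI, author list, and open-access status shown above.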
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4415428949 |
| doi | https://doi.org/10.3233/faia250846 |
| ids.doi | https://doi.org/10.3233/faia250846 |
| ids.openalex | https://openalex.org/W4415428949 |
| fwci | 0.0 |
| type | book-chapter |
| title | Planner-Refiner: Dynamic Space-Time Refinement for Vision-Language Alignment in Videos |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11714 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9796000123023987 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Multimodal Machine Learning Applications |
| topics[1].id | https://openalex.org/T10627 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9531999826431274 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Image and Video Retrieval Techniques |
| topics[2].id | https://openalex.org/T10824 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9178000092506409 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Image Retrieval and Classification Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | |
| locations[0].id | doi:10.3233/faia250846 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4210201731 |
| locations[0].source.issn | 0922-6389, 1879-8314 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | False |
| locations[0].source.issn_l | 0922-6389 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | Frontiers in artificial intelligence and applications |
| locations[0].source.host_organization | |
| locations[0].source.host_organization_name | |
| locations[0].license | cc-by-nc |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | book-chapter |
| locations[0].license_id | https://openalex.org/licenses/cc-by-nc |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Frontiers in Artificial Intelligence and Applications |
| locations[0].landing_page_url | https://doi.org/10.3233/faia250846 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5050515475 |
| authorships[0].author.orcid | https://orcid.org/0009-0002-8161-7637 |
| authorships[0].author.display_name | Tuyen Tran |
| authorships[0].countries | AU |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I149704539 |
| authorships[0].affiliations[0].raw_affiliation_string | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[0].institutions[0].id | https://openalex.org/I149704539 |
| authorships[0].institutions[0].ror | https://ror.org/02czsnj07 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I149704539 |
| authorships[0].institutions[0].country_code | AU |
| authorships[0].institutions[0].display_name | Deakin University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Tuyen Tran |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[1].author.id | https://openalex.org/A5079045166 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-8089-9962 |
| authorships[1].author.display_name | Thao Minh Le |
| authorships[1].countries | AU |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I149704539 |
| authorships[1].affiliations[0].raw_affiliation_string | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[1].institutions[0].id | https://openalex.org/I149704539 |
| authorships[1].institutions[0].ror | https://ror.org/02czsnj07 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I149704539 |
| authorships[1].institutions[0].country_code | AU |
| authorships[1].institutions[0].display_name | Deakin University |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Thao Minh Le |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[2].author.id | https://openalex.org/A5073633711 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-4727-6859 |
| authorships[2].author.display_name | Quang-Hung Le |
| authorships[2].countries | AU |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I149704539 |
| authorships[2].affiliations[0].raw_affiliation_string | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[2].institutions[0].id | https://openalex.org/I149704539 |
| authorships[2].institutions[0].ror | https://ror.org/02czsnj07 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I149704539 |
| authorships[2].institutions[0].country_code | AU |
| authorships[2].institutions[0].display_name | Deakin University |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Quang-Hung Le |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[3].author.id | https://openalex.org/A5085471517 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-6531-8907 |
| authorships[3].author.display_name | Truyen Tran |
| authorships[3].countries | AU |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I149704539 |
| authorships[3].affiliations[0].raw_affiliation_string | Applied Artificial Intelligence Institute, Deakin University, Australia |
| authorships[3].institutions[0].id | https://openalex.org/I149704539 |
| authorships[3].institutions[0].ror | https://ror.org/02czsnj07 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I149704539 |
| authorships[3].institutions[0].country_code | AU |
| authorships[3].institutions[0].display_name | Deakin University |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Truyen Tran |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Applied Artificial Intelligence Institute, Deakin University, Australia |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.3233/faia250846 |
| open_access.oa_status | hybrid |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-24T00:00:00 |
| display_name | Planner-Refiner: Dynamic Space-Time Refinement for Vision-Language Alignment in Videos |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T11714 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9796000123023987 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Multimodal Machine Learning Applications |
| cited_by_count | 0 |
| locations_count | 1 |
| best_oa_location.id | doi:10.3233/faia250846 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4210201731 |
| best_oa_location.source.issn | 0922-6389, 1879-8314 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | 0922-6389 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | Frontiers in artificial intelligence and applications |
| best_oa_location.source.host_organization | |
| best_oa_location.source.host_organization_name | |
| best_oa_location.license | cc-by-nc |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | book-chapter |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by-nc |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Frontiers in Artificial Intelligence and Applications |
| best_oa_location.landing_page_url | https://doi.org/10.3233/faia250846 |
| primary_location.id | doi:10.3233/faia250846 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4210201731 |
| primary_location.source.issn | 0922-6389, 1879-8314 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | False |
| primary_location.source.issn_l | 0922-6389 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | Frontiers in artificial intelligence and applications |
| primary_location.source.host_organization | |
| primary_location.source.host_organization_name | |
| primary_location.license | cc-by-nc |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | book-chapter |
| primary_location.license_id | https://openalex.org/licenses/cc-by-nc |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Frontiers in Artificial Intelligence and Applications |
| primary_location.landing_page_url | https://doi.org/10.3233/faia250846 |
| publication_date | 2025-10-21 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (inverted-index encoding of the abstract shown above; omitted) |
| cited_by_percentile_year | |
| countries_distinct_count | 1 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile.value | 0.84853994 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
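OpenAlex stores abstracts as an `abstract_inverted_index`: a mapping from each word to the list of positions where it occurs in the abstract. The plain text can be rebuilt by sorting word occurrences by position; the function name below is hypothetical, but the data format is the one OpenAlex documents.

```python
def reconstruct_abstract(inverted: dict[str, list[int]]) -> str:
    """Rebuild plain abstract text from an OpenAlex abstract_inverted_index.

    Each key is a word; each value lists the 0-based positions at which
    that word appears. Sorting all (position, word) pairs restores order.
    """
    positions = [(i, word) for word, idxs in inverted.items() for i in idxs]
    return " ".join(word for _, word in sorted(positions))
```

Applied to this record's index, the function yields the abstract shown at the top of the page, starting "Vision-language alignment in video must address ...".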