OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2505.20425
Visual imitation learning enables robotic agents to acquire skills by observing expert demonstration videos. In the one-shot setting, the agent generates a policy after observing a single expert demonstration without additional fine-tuning. Existing approaches typically train and evaluate on the same set of tasks, varying only object configurations, and struggle to generalize to unseen tasks with different semantic or structural requirements. While some recent methods attempt to address this, they exhibit low success rates on hard test tasks that, despite being visually similar to some training tasks, differ in context and require distinct responses. Additionally, most existing methods lack an explicit model of environment dynamics, limiting their ability to reason about future states. To address these limitations, we propose a novel framework for one-shot visual imitation learning via world-model-guided trajectory generation. Given an expert demonstration video and the agent's initial observation, our method leverages a learned world model to predict a sequence of latent states and actions. This latent trajectory is then decoded into physical waypoints that guide the agent's execution. Our method is evaluated on two simulated benchmarks and three real-world robotic platforms, where it consistently outperforms prior approaches, with over 30% improvement in some cases.
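As a rough illustration of the pipeline described in the abstract, the sketch below rolls a learned world model forward in latent space and decodes the result into waypoints. It is a minimal sketch only: the module names (encoder, world_model, waypoint_decoder), the rollout horizon, and all shapes are assumptions made for illustration, not the authors' released implementation.

```python
# Illustrative sketch of world-model-guided trajectory generation.
# All module names, signatures, and shapes here are hypothetical.
import torch

def generate_waypoints(demo_video, initial_obs,
                       encoder, world_model, waypoint_decoder,
                       horizon: int = 16):
    """Predict a latent trajectory from one expert demo, then decode waypoints."""
    demo_context = encoder(demo_video)   # summary embedding of the expert demonstration
    z = encoder(initial_obs)             # latent state of the agent's initial observation

    latents, actions = [], []
    for _ in range(horizon):
        # The world model predicts the next latent state and a latent action,
        # conditioned on the demonstration context.
        z, a = world_model(z, demo_context)
        latents.append(z)
        actions.append(a)

    # Decode the latent trajectory into physical waypoints for the controller.
    return waypoint_decoder(torch.stack(latents), torch.stack(actions))
```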
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2505.20425
- PDF: https://arxiv.org/pdf/2505.20425
- OA Status: green
- OpenAlex ID: https://openalex.org/W4415035801
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4415035801 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2505.20425 (Digital Object Identifier)
- Title: OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-05-26 (full publication date if available)
- Authors: Raktim Gautam Goswami, P. Krishnamurthy, Yann LeCun, Farshad Khorrami (list of authors in order)
- Landing page: https://arxiv.org/abs/2505.20425 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2505.20425 (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2505.20425 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
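The record above can also be pulled programmatically from the public OpenAlex API. A minimal sketch, assuming the `requests` library and the standard works endpoint (no API key required for basic use):

```python
# Fetch this work's metadata from the OpenAlex works endpoint.
import requests

resp = requests.get("https://api.openalex.org/works/W4415035801", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])            # work title
print(work["open_access"]["oa_url"])   # direct OA link, if any
print(work["cited_by_count"])          # citation count
```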
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4415035801 |
| doi | https://doi.org/10.48550/arxiv.2505.20425 |
| ids.doi | https://doi.org/10.48550/arxiv.2505.20425 |
| ids.openalex | https://openalex.org/W4415035801 |
| fwci | |
| type | preprint |
| title | OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T12290 |
| topics[0].field.id | https://openalex.org/fields/22 |
| topics[0].field.display_name | Engineering |
| topics[0].score | 0.9993000030517578 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2207 |
| topics[0].subfield.display_name | Control and Systems Engineering |
| topics[0].display_name | Human Motion and Animation |
| topics[1].id | https://openalex.org/T10812 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9976999759674072 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Human Pose and Action Recognition |
| topics[2].id | https://openalex.org/T11714 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9959999918937683 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Multimodal Machine Learning Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2505.20425 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2505.20425 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2505.20425 |
| locations[1].id | doi:10.48550/arxiv.2505.20425 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2505.20425 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5090861470 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-3018-7545 |
| authorships[0].author.display_name | Raktim Gautam Goswami |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Goswami, Raktim Gautam |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5054769060 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-8264-7972 |
| authorships[1].author.display_name | P. Krishnamurthy |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Krishnamurthy, Prashanth |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5001226970 |
| authorships[2].author.orcid | |
| authorships[2].author.display_name | Yann LeCun |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | LeCun, Yann |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5082413942 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-8418-004X |
| authorships[3].author.display_name | Farshad Khorrami |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Khorrami, Farshad |
| authorships[3].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2505.20425 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T12290 |
| primary_topic.field.id | https://openalex.org/fields/22 |
| primary_topic.field.display_name | Engineering |
| primary_topic.score | 0.9993000030517578 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2207 |
| primary_topic.subfield.display_name | Control and Systems Engineering |
| primary_topic.display_name | Human Motion and Animation |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2505.20425 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2505.20425 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2505.20425 |
| primary_location.id | pmh:oai:arXiv.org:2505.20425 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2505.20425 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2505.20425 |
| publication_date | 2025-05-26 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | token-to-position map of the abstract (full listing omitted; it duplicates the abstract above, see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile |
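The abstract_inverted_index field elided in the table maps each token of the abstract to the zero-based word positions at which it occurs. A minimal sketch of flattening such an index back into plain text, assuming it has already been loaded as a Python dict:

```python
# Rebuild abstract text from an OpenAlex-style abstract_inverted_index,
# i.e. a mapping {token: [positions where the token appears]}.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    position_to_token = {}
    for token, positions in inverted_index.items():
        for pos in positions:
            position_to_token[pos] = token
    return " ".join(position_to_token[i] for i in sorted(position_to_token))

# Tiny truncated example using positions from the payload above:
sample = {"Visual": [0], "imitation": [1], "learning": [2]}
print(reconstruct_abstract(sample))  # -> "Visual imitation learning"
```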