ActionSync Video Transformation: Automated Object Removal and Responsive Effects in Motion Videos Using Hybrid CNN and GRU
2025 · Open Access · DOI: https://doi.org/10.1109/access.2025.3600178
Large data volumes, dynamic scenes, and intricate object motions make video analysis extremely difficult. Traditional methods depend on hand-crafted features that do not scale. This work develops an automated, controllable video transformation system powered by deep neural networks. The core of the approach is a video transformation pipeline built from systematically connected spatio-temporal neural components. A 16-layer Convolutional Neural Network (CNN) encoder first extracts hierarchical visual features from individual frames to achieve spatial understanding. The CNN encodings are fed into a 512-unit Gated Recurrent Unit (GRU) sequencer, which models long-range sequential dynamics of object motions and interactions spanning thousands of frames to gain temporal context. An attentional transformer then fuses the complementary strengths of the CNN and GRU into unified space-time representations that capture video content, object relationships, and interactions across both spatial and temporal dimensions simultaneously. These context-aware representations inform a specialized controller module, which deliberately adjusts background and foreground object layers based on their modeled connections. Finally, a flexible effects renderer composites the transformed backgrounds and adjusted objects into novel video sequences whose effects stay synchronized to the original timeline. Evaluations on a large 51-category video benchmark demonstrate responsive object removal and background substitution with strong accuracy. The framework achieves high precision and recall and performs well on error metrics. It is also computationally efficient, processing 720p video at an average of 0.48 seconds per frame on an NVIDIA RTX 2070 GPU, which highlights its potential for near real-time applications.
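As a quick sanity check on the reported throughput, the per-frame figure from the abstract translates into overall speed as follows (the 10-second, 30 fps clip is an assumed example, not a figure from the paper):

```python
seconds_per_frame = 0.48            # reported for 720p on an RTX 2070
throughput_fps = 1 / seconds_per_frame

# Assumed example: how long would a 10-second clip at 30 fps take?
clip_frames = 10 * 30
processing_seconds = clip_frames * seconds_per_frame

print(round(throughput_fps, 2), processing_seconds)  # ~2.08 fps, 144 s
```

At roughly 2 fps the system does not yet match 24-30 fps playback, which is why the abstract frames the result as potential for near real-time use rather than real-time operation.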
Ablations confirm that the fused CNN, GRU, and transformer components enable effective context modeling for deliberate video manipulations aligned to the original footage. Both quantitative and qualitative outcomes evidence the system’s capacity for automated, adaptive effects generation via joint spatio-temporal reasoning.
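The CNN-to-GRU handoff described above can be sketched with a minimal NumPy GRU cell. Everything here is illustrative: the 128-dimensional per-frame vectors stand in for the CNN encoder output, and the random weight initialization is an assumption; only the 512-unit hidden state comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def init_params(feat_dim, hidden):
    # One weight/bias triple per gate: update (z), reset (r), candidate (h).
    W = {k: rng.normal(0, 0.1, (feat_dim, hidden)) for k in "zrh"}
    U = {k: rng.normal(0, 0.1, (hidden, hidden)) for k in "zrh"}
    b = {k: np.zeros(hidden) for k in "zrh"}
    return W, U, b

def gru_step(x, h, W, U, b):
    """One GRU update: x is a frame's CNN feature vector, h the running state."""
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])           # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])           # reset gate
    h_cand = np.tanh(x @ W["h"] + (r * h) @ U["h"] + b["h"])  # candidate state
    return (1 - z) * h + z * h_cand                          # blend old and new

# Toy stand-in for the CNN encoder: one feature vector per frame.
feat_dim, hidden, n_frames = 128, 512, 16
frames = rng.normal(size=(n_frames, feat_dim))
W, U, b = init_params(feat_dim, hidden)

h = np.zeros(hidden)
states = []
for x in frames:
    h = gru_step(x, h, W, U, b)
    states.append(h)
states = np.stack(states)  # (n_frames, 512): temporal context per frame
```

In the full system, each row of `states` would feed the attentional transformer alongside the CNN encodings, so every frame's representation reflects both its appearance and the motion history leading up to it.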
Work Metadata
- Type
- article
- Language
- en
- Landing Page
- https://doi.org/10.1109/access.2025.3600178
- OA Status
- gold
- References
- 37
- Related Works
- 10
- OpenAlex ID
- https://openalex.org/W4413318913
Raw OpenAlex JSON
- OpenAlex ID
- https://openalex.org/W4413318913 (canonical identifier for this work in OpenAlex)
- DOI
- https://doi.org/10.1109/access.2025.3600178 (Digital Object Identifier)
- Title
- ActionSync Video Transformation: Automated Object Removal and Responsive Effects in Motion Videos Using Hybrid CNN and GRU (work title)
- Type
- article (OpenAlex work type)
- Language
- en (primary language)
- Publication year
- 2025 (year of publication)
- Publication date
- 2025-01-01 (full publication date if available)
- Authors
- Muhammad Asif Habib, Umar Raza, Sohail Jabbar, Muhammad Farhan, Farhan Ullah (list of authors in order)
- Landing page
- https://doi.org/10.1109/access.2025.3600178 (publisher landing page)
- Open access
- Yes (whether a free full text is available)
- OA status
- gold (open access status per OpenAlex)
- OA URL
- https://doi.org/10.1109/access.2025.3600178 (direct OA link when available)
- Concepts
- Computer vision, Computer science, Artificial intelligence, Transformation (genetics), Motion (physics), Object (grammar), Video tracking, Computer graphics (images), Gene, Chemistry, Biochemistry (top concepts attached by OpenAlex)
- Cited by
- 0 (total citation count in OpenAlex)
- References (count)
- 37 (number of works referenced by this work)
- Related works (count)
- 10 (other works algorithmically related by OpenAlex)
Full payload
| id | https://openalex.org/W4413318913 |
|---|---|
| doi | https://doi.org/10.1109/access.2025.3600178 |
| ids.doi | https://doi.org/10.1109/access.2025.3600178 |
| ids.openalex | https://openalex.org/W4413318913 |
| fwci | 0.0 |
| type | article |
| title | ActionSync Video Transformation: Automated Object Removal and Responsive Effects in Motion Videos Using Hybrid CNN and GRU |
| biblio.issue | |
| biblio.volume | 13 |
| biblio.last_page | 149852 |
| biblio.first_page | 149834 |
| topics[0].id | https://openalex.org/T10775 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9969000220298767 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Generative Adversarial Networks and Image Synthesis |
| topics[1].id | https://openalex.org/T10531 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9886000156402588 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Vision and Imaging |
| topics[2].id | https://openalex.org/T10036 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9879000186920166 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Advanced Neural Network Applications |
| is_xpac | False |
| apc_list.value | 1850 |
| apc_list.currency | USD |
| apc_list.value_usd | 1850 |
| apc_paid.value | 1850 |
| apc_paid.currency | USD |
| apc_paid.value_usd | 1850 |
| concepts[0].id | https://openalex.org/C31972630 |
| concepts[0].level | 1 |
| concepts[0].score | 0.797588586807251 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[0].display_name | Computer vision |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.7811902761459351 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.7494926452636719 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C204241405 |
| concepts[3].level | 3 |
| concepts[3].score | 0.5559273362159729 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q461499 |
| concepts[3].display_name | Transformation (genetics) |
| concepts[4].id | https://openalex.org/C104114177 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5504485368728638 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q79782 |
| concepts[4].display_name | Motion (physics) |
| concepts[5].id | https://openalex.org/C2781238097 |
| concepts[5].level | 2 |
| concepts[5].score | 0.5278672575950623 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q175026 |
| concepts[5].display_name | Object (grammar) |
| concepts[6].id | https://openalex.org/C202474056 |
| concepts[6].level | 3 |
| concepts[6].score | 0.47750920057296753 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q1931635 |
| concepts[6].display_name | Video tracking |
| concepts[7].id | https://openalex.org/C121684516 |
| concepts[7].level | 1 |
| concepts[7].score | 0.4445471167564392 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7600677 |
| concepts[7].display_name | Computer graphics (images) |
| concepts[8].id | https://openalex.org/C104317684 |
| concepts[8].level | 2 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7187 |
| concepts[8].display_name | Gene |
| concepts[9].id | https://openalex.org/C185592680 |
| concepts[9].level | 0 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[9].display_name | Chemistry |
| concepts[10].id | https://openalex.org/C55493867 |
| concepts[10].level | 1 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q7094 |
| concepts[10].display_name | Biochemistry |
| keywords[0].id | https://openalex.org/keywords/computer-vision |
| keywords[0].score | 0.797588586807251 |
| keywords[0].display_name | Computer vision |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.7811902761459351 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.7494926452636719 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/transformation |
| keywords[3].score | 0.5559273362159729 |
| keywords[3].display_name | Transformation (genetics) |
| keywords[4].id | https://openalex.org/keywords/motion |
| keywords[4].score | 0.5504485368728638 |
| keywords[4].display_name | Motion (physics) |
| keywords[5].id | https://openalex.org/keywords/object |
| keywords[5].score | 0.5278672575950623 |
| keywords[5].display_name | Object (grammar) |
| keywords[6].id | https://openalex.org/keywords/video-tracking |
| keywords[6].score | 0.47750920057296753 |
| keywords[6].display_name | Video tracking |
| keywords[7].id | https://openalex.org/keywords/computer-graphics |
| keywords[7].score | 0.4445471167564392 |
| keywords[7].display_name | Computer graphics (images) |
| language | en |
| locations[0].id | doi:10.1109/access.2025.3600178 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S2485537415 |
| locations[0].source.issn | 2169-3536 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2169-3536 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | IEEE Access |
| locations[0].source.host_organization | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_name | Institute of Electrical and Electronics Engineers |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319808 |
| locations[0].source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | IEEE Access |
| locations[0].landing_page_url | https://doi.org/10.1109/access.2025.3600178 |
| locations[1].id | pmh:oai:doaj.org/article:455a388e362e47a28374ada8b733d7f2 |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306401280 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[1].source.host_organization | |
| locations[1].source.host_organization_name | |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | IEEE Access, Vol 13, Pp 149834-149852 (2025) |
| locations[1].landing_page_url | https://doaj.org/article/455a388e362e47a28374ada8b733d7f2 |
| indexed_in | crossref, doaj |
| authorships[0].author.id | https://openalex.org/A5089348789 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-2675-1975 |
| authorships[0].author.display_name | Muhammad Asif Habib |
| authorships[0].countries | SA |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I240666556 |
| authorships[0].affiliations[0].raw_affiliation_string | College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia |
| authorships[0].institutions[0].id | https://openalex.org/I240666556 |
| authorships[0].institutions[0].ror | https://ror.org/05gxjyb39 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I240666556 |
| authorships[0].institutions[0].country_code | SA |
| authorships[0].institutions[0].display_name | Imam Mohammad ibn Saud Islamic University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Muhammad Asif Habib |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia |
| authorships[1].author.id | https://openalex.org/A5039128929 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-9810-1285 |
| authorships[1].author.display_name | Umar Raza |
| authorships[1].countries | GB |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I11983389 |
| authorships[1].affiliations[0].raw_affiliation_string | Department of Engineering, Manchester Metropolitan University, Manchester, United Kingdom |
| authorships[1].institutions[0].id | https://openalex.org/I11983389 |
| authorships[1].institutions[0].ror | https://ror.org/02hstj355 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I11983389 |
| authorships[1].institutions[0].country_code | GB |
| authorships[1].institutions[0].display_name | Manchester Metropolitan University |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Umar Raza |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Department of Engineering, Manchester Metropolitan University, Manchester, United Kingdom |
| authorships[2].author.id | https://openalex.org/A5089648638 |
| authorships[2].author.orcid | https://orcid.org/0009-0007-5805-0659 |
| authorships[2].author.display_name | Sohail Jabbar |
| authorships[2].countries | SA |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I240666556 |
| authorships[2].affiliations[0].raw_affiliation_string | College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia |
| authorships[2].institutions[0].id | https://openalex.org/I240666556 |
| authorships[2].institutions[0].ror | https://ror.org/05gxjyb39 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I240666556 |
| authorships[2].institutions[0].country_code | SA |
| authorships[2].institutions[0].display_name | Imam Mohammad ibn Saud Islamic University |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Sohail Jabbar |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia |
| authorships[3].author.id | https://openalex.org/A5075242305 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-3649-5717 |
| authorships[3].author.display_name | Muhammad Farhan |
| authorships[3].countries | PK |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I16076960 |
| authorships[3].affiliations[0].raw_affiliation_string | Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Pakistan |
| authorships[3].institutions[0].id | https://openalex.org/I16076960 |
| authorships[3].institutions[0].ror | https://ror.org/00nqqvk19 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I16076960 |
| authorships[3].institutions[0].country_code | PK |
| authorships[3].institutions[0].display_name | COMSATS University Islamabad |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Muhammad Farhan |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Pakistan |
| authorships[4].author.id | https://openalex.org/A5050849175 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-1030-1275 |
| authorships[4].author.display_name | Farhan Ullah |
| authorships[4].countries | SA |
| authorships[4].affiliations[0].institution_ids | https://openalex.org/I138564716 |
| authorships[4].affiliations[0].raw_affiliation_string | Associate Research Professor, Cybersecurity Center of Prince Mohammad Bin Fahd University, Saudi Arabia |
| authorships[4].institutions[0].id | https://openalex.org/I138564716 |
| authorships[4].institutions[0].ror | https://ror.org/03d64na34 |
| authorships[4].institutions[0].type | education |
| authorships[4].institutions[0].lineage | https://openalex.org/I138564716 |
| authorships[4].institutions[0].country_code | SA |
| authorships[4].institutions[0].display_name | Prince Mohammad bin Fahd University |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Farhan Ullah |
| authorships[4].is_corresponding | False |
| authorships[4].raw_affiliation_strings | Associate Research Professor, Cybersecurity Center of Prince Mohammad Bin Fahd University, Saudi Arabia |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.1109/access.2025.3600178 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | ActionSync Video Transformation: Automated Object Removal and Responsive Effects in Motion Videos Using Hybrid CNN and GRU |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10775 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9969000220298767 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Generative Adversarial Networks and Image Synthesis |
| related_works | https://openalex.org/W2353644209, https://openalex.org/W2737719445, https://openalex.org/W2898210368, https://openalex.org/W4239098401, https://openalex.org/W2593663830, https://openalex.org/W2348743188, https://openalex.org/W2902317490, https://openalex.org/W2463239216, https://openalex.org/W4387423451, https://openalex.org/W4205448459 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | doi:10.1109/access.2025.3600178 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S2485537415 |
| best_oa_location.source.issn | 2169-3536 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2169-3536 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | IEEE Access |
| best_oa_location.source.host_organization | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| best_oa_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | IEEE Access |
| best_oa_location.landing_page_url | https://doi.org/10.1109/access.2025.3600178 |
| primary_location.id | doi:10.1109/access.2025.3600178 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S2485537415 |
| primary_location.source.issn | 2169-3536 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2169-3536 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | IEEE Access |
| primary_location.source.host_organization | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_name | Institute of Electrical and Electronics Engineers |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319808 |
| primary_location.source.host_organization_lineage_names | Institute of Electrical and Electronics Engineers |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | IEEE Access |
| primary_location.landing_page_url | https://doi.org/10.1109/access.2025.3600178 |
| publication_date | 2025-01-01 |
| publication_year | 2025 |
| referenced_works | https://openalex.org/W4322012185, https://openalex.org/W2966421984, https://openalex.org/W3137167006, https://openalex.org/W3080642835, https://openalex.org/W2752236330, https://openalex.org/W3098774131, https://openalex.org/W3197716485, https://openalex.org/W4386498973, https://openalex.org/W3110984360, https://openalex.org/W2904835235, https://openalex.org/W4360596244, https://openalex.org/W2050579439, https://openalex.org/W3172374794, https://openalex.org/W2972189363, https://openalex.org/W3020985685, https://openalex.org/W3101308812, https://openalex.org/W2950579554, https://openalex.org/W2795087793, https://openalex.org/W2791961641, https://openalex.org/W2411063280, https://openalex.org/W2964942120, https://openalex.org/W2796830519, https://openalex.org/W3040552910, https://openalex.org/W3163658555, https://openalex.org/W4289209550, https://openalex.org/W2806175674, https://openalex.org/W2739465613, https://openalex.org/W3094270153, https://openalex.org/W3004489678, https://openalex.org/W1777628566, https://openalex.org/W3200663463, https://openalex.org/W3160886270, https://openalex.org/W3007605881, https://openalex.org/W2973673960, https://openalex.org/W3038413136, https://openalex.org/W4411869214, https://openalex.org/W4293663458 |
| referenced_works_count | 37 |
| cited_by_percentile_year | |
| countries_distinct_count | 3 |
| institutions_distinct_count | 5 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[0].score | 0.550000011920929 |
| sustainable_development_goals[0].display_name | Reduced inequalities |
| citation_normalized_percentile.value | 0.37250596 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |