A Unified Multimodal De- and Re-coupling Framework for RGB-D Motion Recognition
2022 · Open Access · DOI: https://doi.org/10.48550/arxiv.2211.09146
Motion recognition is a promising direction in computer vision, but training video classification models is much harder than training image models due to insufficient data and the large number of parameters. To get around this, some works strive to explore multimodal cues from RGB-D data. Although they improve motion recognition to some extent, these methods still face sub-optimal situations in the following aspects: (i) data augmentation, i.e., the scale of RGB-D datasets is still limited, and few efforts have been made to explore novel data augmentation strategies for videos; (ii) optimization mechanism, i.e., the tightly space-time-entangled network structure brings more challenges to spatiotemporal information modeling; and (iii) cross-modal knowledge fusion, i.e., the high similarity between multimodal representations leads to insufficient late fusion. To alleviate these drawbacks, we propose to improve RGB-D-based motion recognition from both the data and algorithm perspectives. In more detail, we first introduce a novel video data augmentation method dubbed ShuffleMix, which acts as a supplement to MixUp and provides additional temporal regularization for motion recognition. Second, a Unified Multimodal De-coupling and multi-stage Re-coupling framework, termed UMDR, is proposed for video representation learning. Finally, a novel cross-modal Complement Feature Catcher (CFCer) is explored to mine potential common features in multimodal information as an auxiliary fusion stream and improve the late fusion results. The seamless combination of these novel designs forms a robust spatiotemporal representation and achieves better performance than state-of-the-art methods on four public motion datasets. Specifically, UMDR achieves an unprecedented improvement of +4.5% on the Chalearn IsoGD dataset. Our code is available at https://github.com/zhoubenjia/MotionRGBD-PAMI.
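The abstract describes ShuffleMix only at a high level, as a supplement to MixUp that adds temporal regularization. The sketch below is a rough illustration of that idea for video clips: standard MixUp on pairs of clips, plus a variant that shuffles the frame order of the second operand before mixing. Every name, tensor shape, and detail here is a hypothetical reading of the abstract, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): MixUp on video clips,
# plus a temporal-shuffle variant loosely following the ShuffleMix idea.
import torch

def mixup_videos(clips, labels, alpha=0.5):
    """Standard MixUp on a batch of clips (B, T, C, H, W) with one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(clips.size(0))
    mixed_clips = lam * clips + (1.0 - lam) * clips[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_clips, mixed_labels

def shufflemix_videos(clips, labels, alpha=0.5):
    """Hypothetical ShuffleMix-style variant: the second operand's frames are
    shuffled in time before mixing, adding a temporal perturbation on top of MixUp."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(clips.size(0))          # which samples to mix with
    t_perm = torch.randperm(clips.size(1))        # shuffled frame order
    shuffled = clips[perm][:, t_perm]             # other samples, frames reordered
    mixed_clips = lam * clips + (1.0 - lam) * shuffled
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_clips, mixed_labels
```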
Key metadata
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2211.09146
- PDF: https://arxiv.org/pdf/2211.09146
- OA status: green
- Cited by: 3
- Related works: 10
- OpenAlex ID: https://openalex.org/W4309393541
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4309393541 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2211.09146 (Digital Object Identifier)
- Title: A Unified Multimodal De- and Re-coupling Framework for RGB-D Motion Recognition (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022
- Publication date: 2022-11-16 (full publication date if available)
- Authors: Benjia Zhou, Pichao Wang, Jun Wan, Yanyan Liang, Fan Wang (authors in order)
- Landing page: https://arxiv.org/abs/2211.09146 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2211.09146 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2211.09146 (direct OA link when available)
- Concepts: Computer science, RGB color model, Artificial intelligence, Motion (physics), Fusion mechanism, Representation (politics), Regularization (linguistics), Pattern recognition (psychology), Feature (linguistics), Machine learning, Computer vision, Fusion, Lipid bilayer fusion, Political science, Law, Politics, Linguistics, Philosophy (top concepts attached by OpenAlex)
- Cited by: 3 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2, 2024: 1 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
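The fields above come from the OpenAlex work record, which can also be retrieved directly from the public OpenAlex API. A minimal sketch, assuming the `requests` library is available; the field names match the payload table below.

```python
# Fetch this work's record from the OpenAlex API (no API key needed for light use).
import requests

OPENALEX_WORK_ID = "W4309393541"
resp = requests.get(f"https://api.openalex.org/works/{OPENALEX_WORK_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])                 # paper title
print(work["publication_date"])             # 2022-11-16
print(work["open_access"]["oa_url"])        # open-access PDF link
print([a["author"]["display_name"] for a in work["authorships"]])  # author names
```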
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4309393541 |
| doi | https://doi.org/10.48550/arxiv.2211.09146 |
| ids.doi | https://doi.org/10.48550/arxiv.2211.09146 |
| ids.openalex | https://openalex.org/W4309393541 |
| fwci | |
| type | preprint |
| title | A Unified Multimodal De- and Re-coupling Framework for RGB-D Motion Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10812 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998000264167786 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Human Pose and Action Recognition |
| topics[1].id | https://openalex.org/T10531 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.996999979019165 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Vision and Imaging |
| topics[2].id | https://openalex.org/T10331 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9894999861717224 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Video Surveillance and Tracking Methods |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7834374904632568 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C82990744 |
| concepts[1].level | 2 |
| concepts[1].score | 0.7087315917015076 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q166194 |
| concepts[1].display_name | RGB color model |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.6497848033905029 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C104114177 |
| concepts[3].level | 2 |
| concepts[3].score | 0.551367998123169 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q79782 |
| concepts[3].display_name | Motion (physics) |
| concepts[4].id | https://openalex.org/C173414695 |
| concepts[4].level | 4 |
| concepts[4].score | 0.5404068231582642 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q5510276 |
| concepts[4].display_name | Fusion mechanism |
| concepts[5].id | https://openalex.org/C2776359362 |
| concepts[5].level | 3 |
| concepts[5].score | 0.47734978795051575 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q2145286 |
| concepts[5].display_name | Representation (politics) |
| concepts[6].id | https://openalex.org/C2776135515 |
| concepts[6].level | 2 |
| concepts[6].score | 0.46930837631225586 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q17143721 |
| concepts[6].display_name | Regularization (linguistics) |
| concepts[7].id | https://openalex.org/C153180895 |
| concepts[7].level | 2 |
| concepts[7].score | 0.44954437017440796 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[7].display_name | Pattern recognition (psychology) |
| concepts[8].id | https://openalex.org/C2776401178 |
| concepts[8].level | 2 |
| concepts[8].score | 0.4456864595413208 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q12050496 |
| concepts[8].display_name | Feature (linguistics) |
| concepts[9].id | https://openalex.org/C119857082 |
| concepts[9].level | 1 |
| concepts[9].score | 0.3770293593406677 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q2539 |
| concepts[9].display_name | Machine learning |
| concepts[10].id | https://openalex.org/C31972630 |
| concepts[10].level | 1 |
| concepts[10].score | 0.34910398721694946 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[10].display_name | Computer vision |
| concepts[11].id | https://openalex.org/C158525013 |
| concepts[11].level | 2 |
| concepts[11].score | 0.26731759309768677 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q2593739 |
| concepts[11].display_name | Fusion |
| concepts[12].id | https://openalex.org/C103038307 |
| concepts[12].level | 3 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q6556360 |
| concepts[12].display_name | Lipid bilayer fusion |
| concepts[13].id | https://openalex.org/C17744445 |
| concepts[13].level | 0 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q36442 |
| concepts[13].display_name | Political science |
| concepts[14].id | https://openalex.org/C199539241 |
| concepts[14].level | 1 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q7748 |
| concepts[14].display_name | Law |
| concepts[15].id | https://openalex.org/C94625758 |
| concepts[15].level | 2 |
| concepts[15].score | 0.0 |
| concepts[15].wikidata | https://www.wikidata.org/wiki/Q7163 |
| concepts[15].display_name | Politics |
| concepts[16].id | https://openalex.org/C41895202 |
| concepts[16].level | 1 |
| concepts[16].score | 0.0 |
| concepts[16].wikidata | https://www.wikidata.org/wiki/Q8162 |
| concepts[16].display_name | Linguistics |
| concepts[17].id | https://openalex.org/C138885662 |
| concepts[17].level | 0 |
| concepts[17].score | 0.0 |
| concepts[17].wikidata | https://www.wikidata.org/wiki/Q5891 |
| concepts[17].display_name | Philosophy |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7834374904632568 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/rgb-color-model |
| keywords[1].score | 0.7087315917015076 |
| keywords[1].display_name | RGB color model |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.6497848033905029 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/motion |
| keywords[3].score | 0.551367998123169 |
| keywords[3].display_name | Motion (physics) |
| keywords[4].id | https://openalex.org/keywords/fusion-mechanism |
| keywords[4].score | 0.5404068231582642 |
| keywords[4].display_name | Fusion mechanism |
| keywords[5].id | https://openalex.org/keywords/representation |
| keywords[5].score | 0.47734978795051575 |
| keywords[5].display_name | Representation (politics) |
| keywords[6].id | https://openalex.org/keywords/regularization |
| keywords[6].score | 0.46930837631225586 |
| keywords[6].display_name | Regularization (linguistics) |
| keywords[7].id | https://openalex.org/keywords/pattern-recognition |
| keywords[7].score | 0.44954437017440796 |
| keywords[7].display_name | Pattern recognition (psychology) |
| keywords[8].id | https://openalex.org/keywords/feature |
| keywords[8].score | 0.4456864595413208 |
| keywords[8].display_name | Feature (linguistics) |
| keywords[9].id | https://openalex.org/keywords/machine-learning |
| keywords[9].score | 0.3770293593406677 |
| keywords[9].display_name | Machine learning |
| keywords[10].id | https://openalex.org/keywords/computer-vision |
| keywords[10].score | 0.34910398721694946 |
| keywords[10].display_name | Computer vision |
| keywords[11].id | https://openalex.org/keywords/fusion |
| keywords[11].score | 0.26731759309768677 |
| keywords[11].display_name | Fusion |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2211.09146 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2211.09146 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2211.09146 |
| locations[1].id | doi:10.48550/arxiv.2211.09146 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2211.09146 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5054200099 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-4883-5552 |
| authorships[0].author.display_name | Benjia Zhou |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhou, Benjia |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5042680345 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-1430-0237 |
| authorships[1].author.display_name | Pichao Wang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Wang, Pichao |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5101655825 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-9961-7902 |
| authorships[2].author.display_name | Jun Wan |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Wan, Jun |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5101292074 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-7121-2057 |
| authorships[3].author.display_name | Yanyan Liang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Liang, Yanyan |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100380567 |
| authorships[4].author.orcid | https://orcid.org/0000-0003-2988-0614 |
| authorships[4].author.display_name | Fan Wang |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Wang, Fan |
| authorships[4].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2211.09146 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2022-11-26T00:00:00 |
| display_name | A Unified Multimodal De- and Re-coupling Framework for RGB-D Motion Recognition |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10812 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998000264167786 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Human Pose and Action Recognition |
| related_works | https://openalex.org/W2486460843, https://openalex.org/W2168109476, https://openalex.org/W1968121071, https://openalex.org/W2020254986, https://openalex.org/W2686985752, https://openalex.org/W4248431608, https://openalex.org/W4313887926, https://openalex.org/W4313888283, https://openalex.org/W3187910480, https://openalex.org/W3206560021 |
| cited_by_count | 3 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2211.09146 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2211.09146 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2211.09146 |
| primary_location.id | pmh:oai:arXiv.org:2211.09146 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2211.09146 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2211.09146 |
| publication_date | 2022-11-16 |
| publication_year | 2022 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-position index of the abstract shown above (see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile |
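OpenAlex does not store the abstract as plain text; it ships an inverted index (`abstract_inverted_index`, a map from each word to the positions where it occurs). The small sketch below rebuilds a readable abstract from that field, reusing the `work` dict from the API example earlier; the helper name is illustrative.

```python
# Rebuild a plain-text abstract from OpenAlex's inverted-index representation.
def rebuild_abstract(inverted_index):
    """inverted_index: dict mapping word -> list of integer positions."""
    if not inverted_index:
        return None
    positioned = []
    for word, positions in inverted_index.items():
        for pos in positions:
            positioned.append((pos, word))
    # Sort by position and join the words back into a single string.
    return " ".join(word for _, word in sorted(positioned))

# Example usage with the record fetched earlier:
# abstract = rebuild_abstract(work.get("abstract_inverted_index"))
# print(abstract)
```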