SigFormer: Sparse Signal-guided Transformer for Multi-modal Action Segmentation
2024 · Open Access · DOI: https://doi.org/10.1145/3657296
Multi-modal human action segmentation is a critical and challenging task with a wide range of applications. Nowadays, the majority of approaches concentrate on the fusion of dense signals (i.e., RGB, optical flow, and depth maps). However, the potential contributions of sparse IoT sensor signals, which can be crucial for achieving accurate recognition, have not been fully explored. To make up for this, we introduce a Sparse signal-guided Transformer (SigFormer) to combine both dense and sparse signals. We employ mask attention to fuse localized features by constraining cross-attention within the regions where sparse signals are valid. However, since sparse signals are discrete, they lack sufficient information about the temporal action boundaries. Therefore, in SigFormer, we propose to emphasize the boundary information at two stages to alleviate this problem. In the first feature extraction stage, we introduce an intermediate bottleneck module to jointly learn both category and boundary features of each dense modality through the inner loss functions. After the fusion of dense modalities and sparse signals, we then devise a two-branch architecture that explicitly models the interrelationship between action category and temporal boundary. Experimental results demonstrate that SigFormer outperforms the state-of-the-art approaches on a multi-modal action segmentation dataset from real industrial environments, reaching an outstanding F1 score of 0.958. The codes and pre-trained models have been made available at https://github.com/LIUQI-creat/SigFormer.
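The abstract's central mechanism, mask attention, restricts cross-attention to the temporal positions where a sparse sensor signal is valid. A minimal single-head sketch in plain Python illustrates the idea; the function name `masked_cross_attention`, the unprojected query/key/value vectors, and the toy dimensions are illustrative assumptions, not the paper's implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax; exp(-inf) contributes zero weight."""
    m = max(x for x in xs if x != float("-inf"))
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def masked_cross_attention(queries, keys, values, valid):
    """For each query vector, attend only over key positions j where valid[j] is True.

    Invalid positions get a score of -inf, so softmax assigns them exactly
    zero weight: the output mixes values only from valid regions.
    """
    out = []
    d = len(queries[0])
    for q in queries:
        scores = [
            sum(a * b for a, b in zip(q, k)) / math.sqrt(d) if valid[j] else float("-inf")
            for j, k in enumerate(keys)
        ]
        w = softmax(scores)
        out.append([
            sum(w[j] * values[j][dim] for j in range(len(values)))
            for dim in range(len(values[0]))
        ])
    return out
```

With `valid = [True, False]`, the second value vector receives zero weight regardless of its key score, which is exactly the constraint the abstract describes for regions without sparse-signal coverage.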
Metadata
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1145/3657296 · https://dl.acm.org/doi/pdf/10.1145/3657296
- OA Status: bronze
- Cited By: 1
- References: 42
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4394686885
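The OpenAlex ID above can be resolved through the public OpenAlex REST API (`GET https://api.openalex.org/works/{id}`), which returns the full JSON record shown later on this page. A minimal stdlib sketch; the helper names `openalex_url` and `fetch_work` are my own, not part of any client library.

```python
import json
import urllib.request

OPENALEX_API = "https://api.openalex.org/works/"

def openalex_url(work_id: str) -> str:
    """Build the API URL for a work, accepting a bare ID ('W4394686885')
    or a full OpenAlex URL ('https://openalex.org/W4394686885')."""
    return OPENALEX_API + work_id.rsplit("/", 1)[-1]

def fetch_work(work_id: str) -> dict:
    """Fetch and decode the JSON record for one work (requires network access)."""
    with urllib.request.urlopen(openalex_url(work_id)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    work = fetch_work("https://openalex.org/W4394686885")
    print(work["display_name"])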
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4394686885 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1145/3657296 (Digital Object Identifier)
- Title: SigFormer: Sparse Signal-guided Transformer for Multi-modal Action Segmentation
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024
- Publication date: 2024-04-10
- Authors: Qi Liu, Xinchen Liu, K Liu, Xiaoyan Gu, Wu Liu (in order)
- Landing page: https://doi.org/10.1145/3657296 (publisher landing page)
- PDF URL: https://dl.acm.org/doi/pdf/10.1145/3657296 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: bronze (open access status per OpenAlex)
- OA URL: https://dl.acm.org/doi/pdf/10.1145/3657296 (direct OA link when available)
- Concepts: Transformer, Modal, Segmentation, Computer science, Pattern recognition (psychology), Acoustics, Artificial intelligence, Speech recognition, Engineering, Electrical engineering, Materials science, Voltage, Physics, Composite material (top concepts attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1 (per-year citation counts, last 5 years)
- References (count): 42 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| id | https://openalex.org/W4394686885 |
|---|---|
| doi | https://doi.org/10.1145/3657296 |
| ids.doi | https://doi.org/10.1145/3657296 |
| ids.openalex | https://openalex.org/W4394686885 |
| fwci | 0.53015756 |
| type | article |
| title | SigFormer: Sparse Signal-guided Transformer for Multi-modal Action Segmentation |
| awards[0].id | https://openalex.org/G5989818691 |
| awards[0].funder_id | https://openalex.org/F4320334978 |
| awards[0].display_name | |
| awards[0].funder_award_id | 20220484063, and XDC02050200 |
| awards[0].funder_display_name | Beijing Nova Program |
| biblio.issue | 8 |
| biblio.volume | 20 |
| biblio.last_page | 22 |
| biblio.first_page | 1 |
| topics[0].id | https://openalex.org/T10812 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 1.0 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Human Pose and Action Recognition |
| topics[1].id | https://openalex.org/T12740 |
| topics[1].field.id | https://openalex.org/fields/22 |
| topics[1].field.display_name | Engineering |
| topics[1].score | 0.9993000030517578 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2204 |
| topics[1].subfield.display_name | Biomedical Engineering |
| topics[1].display_name | Gait Recognition and Analysis |
| topics[2].id | https://openalex.org/T11512 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9993000030517578 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Anomaly Detection Techniques and Applications |
| funders[0].id | https://openalex.org/F4320334978 |
| funders[0].ror | https://ror.org/034k14f91 |
| funders[0].display_name | Beijing Nova Program |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C66322947 |
| concepts[0].level | 3 |
| concepts[0].score | 0.6378931999206543 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[0].display_name | Transformer |
| concepts[1].id | https://openalex.org/C71139939 |
| concepts[1].level | 2 |
| concepts[1].score | 0.6358171105384827 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q910194 |
| concepts[1].display_name | Modal |
| concepts[2].id | https://openalex.org/C89600930 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5936952829360962 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[2].display_name | Segmentation |
| concepts[3].id | https://openalex.org/C41008148 |
| concepts[3].level | 0 |
| concepts[3].score | 0.5212365388870239 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[3].display_name | Computer science |
| concepts[4].id | https://openalex.org/C153180895 |
| concepts[4].level | 2 |
| concepts[4].score | 0.37211498618125916 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[4].display_name | Pattern recognition (psychology) |
| concepts[5].id | https://openalex.org/C24890656 |
| concepts[5].level | 1 |
| concepts[5].score | 0.3311924338340759 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q82811 |
| concepts[5].display_name | Acoustics |
| concepts[6].id | https://openalex.org/C154945302 |
| concepts[6].level | 1 |
| concepts[6].score | 0.32713890075683594 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[6].display_name | Artificial intelligence |
| concepts[7].id | https://openalex.org/C28490314 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3214108943939209 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[7].display_name | Speech recognition |
| concepts[8].id | https://openalex.org/C127413603 |
| concepts[8].level | 0 |
| concepts[8].score | 0.21340808272361755 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[8].display_name | Engineering |
| concepts[9].id | https://openalex.org/C119599485 |
| concepts[9].level | 1 |
| concepts[9].score | 0.19065237045288086 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[9].display_name | Electrical engineering |
| concepts[10].id | https://openalex.org/C192562407 |
| concepts[10].level | 0 |
| concepts[10].score | 0.1733449399471283 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q228736 |
| concepts[10].display_name | Materials science |
| concepts[11].id | https://openalex.org/C165801399 |
| concepts[11].level | 2 |
| concepts[11].score | 0.16856715083122253 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[11].display_name | Voltage |
| concepts[12].id | https://openalex.org/C121332964 |
| concepts[12].level | 0 |
| concepts[12].score | 0.1258430778980255 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[12].display_name | Physics |
| concepts[13].id | https://openalex.org/C159985019 |
| concepts[13].level | 1 |
| concepts[13].score | 0.04992374777793884 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q181790 |
| concepts[13].display_name | Composite material |
| keywords[0].id | https://openalex.org/keywords/transformer |
| keywords[0].score | 0.6378931999206543 |
| keywords[0].display_name | Transformer |
| keywords[1].id | https://openalex.org/keywords/modal |
| keywords[1].score | 0.6358171105384827 |
| keywords[1].display_name | Modal |
| keywords[2].id | https://openalex.org/keywords/segmentation |
| keywords[2].score | 0.5936952829360962 |
| keywords[2].display_name | Segmentation |
| keywords[3].id | https://openalex.org/keywords/computer-science |
| keywords[3].score | 0.5212365388870239 |
| keywords[3].display_name | Computer science |
| keywords[4].id | https://openalex.org/keywords/pattern-recognition |
| keywords[4].score | 0.37211498618125916 |
| keywords[4].display_name | Pattern recognition (psychology) |
| keywords[5].id | https://openalex.org/keywords/acoustics |
| keywords[5].score | 0.3311924338340759 |
| keywords[5].display_name | Acoustics |
| keywords[6].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[6].score | 0.32713890075683594 |
| keywords[6].display_name | Artificial intelligence |
| keywords[7].id | https://openalex.org/keywords/speech-recognition |
| keywords[7].score | 0.3214108943939209 |
| keywords[7].display_name | Speech recognition |
| keywords[8].id | https://openalex.org/keywords/engineering |
| keywords[8].score | 0.21340808272361755 |
| keywords[8].display_name | Engineering |
| keywords[9].id | https://openalex.org/keywords/electrical-engineering |
| keywords[9].score | 0.19065237045288086 |
| keywords[9].display_name | Electrical engineering |
| keywords[10].id | https://openalex.org/keywords/materials-science |
| keywords[10].score | 0.1733449399471283 |
| keywords[10].display_name | Materials science |
| keywords[11].id | https://openalex.org/keywords/voltage |
| keywords[11].score | 0.16856715083122253 |
| keywords[11].display_name | Voltage |
| keywords[12].id | https://openalex.org/keywords/physics |
| keywords[12].score | 0.1258430778980255 |
| keywords[12].display_name | Physics |
| keywords[13].id | https://openalex.org/keywords/composite-material |
| keywords[13].score | 0.04992374777793884 |
| keywords[13].display_name | Composite material |
| language | en |
| locations[0].id | doi:10.1145/3657296 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S19610489 |
| locations[0].source.issn | 1551-6857, 1551-6865 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | False |
| locations[0].source.issn_l | 1551-6857 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | ACM Transactions on Multimedia Computing Communications and Applications |
| locations[0].source.host_organization | https://openalex.org/P4310319798 |
| locations[0].source.host_organization_name | Association for Computing Machinery |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310319798 |
| locations[0].source.host_organization_lineage_names | Association for Computing Machinery |
| locations[0].license | |
| locations[0].pdf_url | https://dl.acm.org/doi/pdf/10.1145/3657296 |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | ACM Transactions on Multimedia Computing, Communications, and Applications |
| locations[0].landing_page_url | https://doi.org/10.1145/3657296 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5100453271 |
| authorships[0].author.orcid | https://orcid.org/0009-0005-6238-6804 |
| authorships[0].author.display_name | Qi Liu |
| authorships[0].countries | CN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I19820366, https://openalex.org/I4210156404 |
| authorships[0].affiliations[0].raw_affiliation_string | Chinese Academy of Sciences Institute of Information Engineering, Beijing, China and University of the Chinese Academy of Sciences School of Cyber Security, Beijing, China and Key Laboratory of Cyberspace Security Defense, Beijing, China |
| authorships[0].institutions[0].id | https://openalex.org/I19820366 |
| authorships[0].institutions[0].ror | https://ror.org/034t30j35 |
| authorships[0].institutions[0].type | government |
| authorships[0].institutions[0].lineage | https://openalex.org/I19820366 |
| authorships[0].institutions[0].country_code | CN |
| authorships[0].institutions[0].display_name | Chinese Academy of Sciences |
| authorships[0].institutions[1].id | https://openalex.org/I4210156404 |
| authorships[0].institutions[1].ror | https://ror.org/04r53se39 |
| authorships[0].institutions[1].type | facility |
| authorships[0].institutions[1].lineage | https://openalex.org/I19820366, https://openalex.org/I4210156404 |
| authorships[0].institutions[1].country_code | CN |
| authorships[0].institutions[1].display_name | Institute of Information Engineering |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Qi Liu |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Chinese Academy of Sciences Institute of Information Engineering, Beijing, China and University of the Chinese Academy of Sciences School of Cyber Security, Beijing, China and Key Laboratory of Cyberspace Security Defense, Beijing, China |
| authorships[1].author.id | https://openalex.org/A5030704926 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-4931-8821 |
| authorships[1].author.display_name | Xinchen Liu |
| authorships[1].countries | CN |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I4210103986 |
| authorships[1].affiliations[0].raw_affiliation_string | JD Explore Academy, JD.com Inc, Beijing, China |
| authorships[1].institutions[0].id | https://openalex.org/I4210103986 |
| authorships[1].institutions[0].ror | https://ror.org/01dkjkq64 |
| authorships[1].institutions[0].type | company |
| authorships[1].institutions[0].lineage | https://openalex.org/I4210103986 |
| authorships[1].institutions[0].country_code | CN |
| authorships[1].institutions[0].display_name | Jingdong (China) |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Xinchen Liu |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | JD Explore Academy, JD.com Inc, Beijing, China |
| authorships[2].author.id | https://openalex.org/A5107772089 |
| authorships[2].author.orcid | https://orcid.org/0009-0004-8398-6369 |
| authorships[2].author.display_name | K Liu |
| authorships[2].countries | CN |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I4210103986 |
| authorships[2].affiliations[0].raw_affiliation_string | JD.com Inc, Beijing, China |
| authorships[2].institutions[0].id | https://openalex.org/I4210103986 |
| authorships[2].institutions[0].ror | https://ror.org/01dkjkq64 |
| authorships[2].institutions[0].type | company |
| authorships[2].institutions[0].lineage | https://openalex.org/I4210103986 |
| authorships[2].institutions[0].country_code | CN |
| authorships[2].institutions[0].display_name | Jingdong (China) |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Kun Liu |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | JD.com Inc, Beijing, China |
| authorships[3].author.id | https://openalex.org/A5024344221 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-0673-0058 |
| authorships[3].author.display_name | Xiaoyan Gu |
| authorships[3].countries | CN |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I19820366, https://openalex.org/I4210156404 |
| authorships[3].affiliations[0].raw_affiliation_string | Chinese Academy of Sciences Institute of Information Engineering, Beijing, China and University of the Chinese Academy of Sciences School of Cyber Security, Beijing, China |
| authorships[3].institutions[0].id | https://openalex.org/I19820366 |
| authorships[3].institutions[0].ror | https://ror.org/034t30j35 |
| authorships[3].institutions[0].type | government |
| authorships[3].institutions[0].lineage | https://openalex.org/I19820366 |
| authorships[3].institutions[0].country_code | CN |
| authorships[3].institutions[0].display_name | Chinese Academy of Sciences |
| authorships[3].institutions[1].id | https://openalex.org/I4210156404 |
| authorships[3].institutions[1].ror | https://ror.org/04r53se39 |
| authorships[3].institutions[1].type | facility |
| authorships[3].institutions[1].lineage | https://openalex.org/I19820366, https://openalex.org/I4210156404 |
| authorships[3].institutions[1].country_code | CN |
| authorships[3].institutions[1].display_name | Institute of Information Engineering |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Xiaoyan Gu |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Chinese Academy of Sciences Institute of Information Engineering, Beijing, China and University of the Chinese Academy of Sciences School of Cyber Security, Beijing, China |
| authorships[4].author.id | https://openalex.org/A5068917997 |
| authorships[4].author.orcid | https://orcid.org/0000-0003-1633-7575 |
| authorships[4].author.display_name | Wu Liu |
| authorships[4].countries | CN |
| authorships[4].affiliations[0].institution_ids | https://openalex.org/I126520041 |
| authorships[4].affiliations[0].raw_affiliation_string | School of Information Science and Technology, University of Science and Technology of China, Hefei, China |
| authorships[4].institutions[0].id | https://openalex.org/I126520041 |
| authorships[4].institutions[0].ror | https://ror.org/04c4dkn09 |
| authorships[4].institutions[0].type | education |
| authorships[4].institutions[0].lineage | https://openalex.org/I126520041, https://openalex.org/I19820366 |
| authorships[4].institutions[0].country_code | CN |
| authorships[4].institutions[0].display_name | University of Science and Technology of China |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Wu Liu |
| authorships[4].is_corresponding | False |
| authorships[4].raw_affiliation_strings | School of Information Science and Technology, University of Science and Technology of China, Hefei, China |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://dl.acm.org/doi/pdf/10.1145/3657296 |
| open_access.oa_status | bronze |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | SigFormer: Sparse Signal-guided Transformer for Multi-modal Action Segmentation |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10812 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 1.0 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Human Pose and Action Recognition |
| related_works | https://openalex.org/W2379392295, https://openalex.org/W3160965418, https://openalex.org/W4379231730, https://openalex.org/W613940353, https://openalex.org/W2320915480, https://openalex.org/W4389858081, https://openalex.org/W2362990116, https://openalex.org/W2381300099, https://openalex.org/W2501551404, https://openalex.org/W4385583601 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 1 |
| best_oa_location.id | doi:10.1145/3657296 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S19610489 |
| best_oa_location.source.issn | 1551-6857, 1551-6865 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | 1551-6857 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | ACM Transactions on Multimedia Computing Communications and Applications |
| best_oa_location.source.host_organization | https://openalex.org/P4310319798 |
| best_oa_location.source.host_organization_name | Association for Computing Machinery |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310319798 |
| best_oa_location.source.host_organization_lineage_names | Association for Computing Machinery |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://dl.acm.org/doi/pdf/10.1145/3657296 |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | ACM Transactions on Multimedia Computing, Communications, and Applications |
| best_oa_location.landing_page_url | https://doi.org/10.1145/3657296 |
| primary_location.id | doi:10.1145/3657296 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S19610489 |
| primary_location.source.issn | 1551-6857, 1551-6865 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | False |
| primary_location.source.issn_l | 1551-6857 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | ACM Transactions on Multimedia Computing Communications and Applications |
| primary_location.source.host_organization | https://openalex.org/P4310319798 |
| primary_location.source.host_organization_name | Association for Computing Machinery |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310319798 |
| primary_location.source.host_organization_lineage_names | Association for Computing Machinery |
| primary_location.license | |
| primary_location.pdf_url | https://dl.acm.org/doi/pdf/10.1145/3657296 |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | ACM Transactions on Multimedia Computing, Communications, and Applications |
| primary_location.landing_page_url | https://doi.org/10.1145/3657296 |
| publication_date | 2024-04-10 |
| publication_year | 2024 |
| referenced_works | https://openalex.org/W4300717114, https://openalex.org/W2963524571, https://openalex.org/W4225147643, https://openalex.org/W2021057537, https://openalex.org/W2508429489, https://openalex.org/W2963853051, https://openalex.org/W2194775991, https://openalex.org/W1893516992, https://openalex.org/W3034802267, https://openalex.org/W3119038403, https://openalex.org/W4311356630, https://openalex.org/W4282928124, https://openalex.org/W2099614498, https://openalex.org/W2550143307, https://openalex.org/W3016234935, https://openalex.org/W3083550439, https://openalex.org/W3157403981, https://openalex.org/W3183430956, https://openalex.org/W6851242572, https://openalex.org/W3014565582, https://openalex.org/W3138516171, https://openalex.org/W3044644239, https://openalex.org/W3030949666, https://openalex.org/W4225271941, https://openalex.org/W639708223, https://openalex.org/W2461621749, https://openalex.org/W3021673939, https://openalex.org/W2084856978, https://openalex.org/W4210453703, https://openalex.org/W3015880580, https://openalex.org/W4281255037, https://openalex.org/W4214493665, https://openalex.org/W4281749424, https://openalex.org/W4366352791, https://openalex.org/W2292288263, https://openalex.org/W4284897674, https://openalex.org/W4312108539, https://openalex.org/W3168126734, https://openalex.org/W2792345332, https://openalex.org/W4210582445, https://openalex.org/W4390872435, https://openalex.org/W4386050422 |
| referenced_works_count | 42 |
| abstract_inverted_index | (omitted: the abstract serialized as a word-to-position inverted index; the plain-text abstract is given above) |
| cited_by_percentile_year.max | 95 |
| cited_by_percentile_year.min | 91 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 5 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/9 |
| sustainable_development_goals[0].score | 0.5400000214576721 |
| sustainable_development_goals[0].display_name | Industry, innovation and infrastructure |
| citation_normalized_percentile.value | 0.53615481 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
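For copyright reasons, OpenAlex ships abstracts only as the `abstract_inverted_index` field in the payload above: a map from each word to the list of positions where it occurs. Rebuilding plain text from it is a small sort-and-join; the function name `reconstruct_abstract` and the toy dictionary below are my own, not part of the OpenAlex API.

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild plain text from an OpenAlex abstract_inverted_index
    (word -> list of integer positions)."""
    positions = [
        (i, word)
        for word, idxs in inverted_index.items()
        for i in idxs
    ]
    # Sorting by position restores the original word order.
    return " ".join(word for _, word in sorted(positions))

# Toy example (not the real payload):
idx = {"human": [1], "Multi-modal": [0], "action": [2], "segmentation": [3]}
# reconstruct_abstract(idx) -> "Multi-modal human action segmentation"
```

Applied to the real `abstract_inverted_index` of this record, the same function would reproduce the abstract printed near the top of this page.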