MUSTER: A Multi-scale Transformer-based Decoder for Semantic Segmentation
2022 · Open Access · DOI: https://doi.org/10.48550/arxiv.2211.13928
In recent works on semantic segmentation, there has been a significant focus on designing and integrating transformer-based encoders. However, less attention has been given to transformer-based decoders. We emphasize that the decoder stage is just as vital as the encoder in achieving superior segmentation performance. It disentangles and refines high-level cues, enabling precise object boundary delineation at the pixel level. In this paper, we introduce a novel transformer-based decoder called MUSTER, which seamlessly integrates with hierarchical encoders and consistently delivers high-quality segmentation results, regardless of the encoder architecture. Furthermore, we present a variant of MUSTER that reduces FLOPs while maintaining performance. MUSTER incorporates carefully designed multi-head skip attention (MSKA) units and introduces innovative upsampling operations. The MSKA units enable the fusion of multi-scale features from the encoder and decoder, facilitating comprehensive information integration. The upsampling operation leverages encoder features to enhance object localization and surpasses traditional upsampling methods, improving mIoU (mean Intersection over Union) by 0.4% to 3.2%. On the challenging ADE20K dataset, our best model achieves a single-scale mIoU of 50.23 and a multi-scale mIoU of 51.88, on par with the current state-of-the-art model. Remarkably, we achieve this while reducing the number of FLOPs by 61.3%. Our source code and models are publicly available at https://github.com/shiwt03/MUSTER.
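The abstract names the key components (MSKA units that fuse multi-scale encoder and decoder features, and an encoder-guided upsampling operation) but gives no implementation detail. Below is a minimal, hypothetical PyTorch sketch of the general idea of skip attention, where upsampled decoder features act as queries over same-resolution encoder features. It is not the authors' MSKA unit or their upsampling operator; the class name `SkipAttentionFusion`, the bilinear upsampling step, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkipAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion of decoder and encoder features.

    Not the paper's MSKA unit; it only sketches the idea of letting upsampled
    decoder features (queries) attend to encoder skip features (keys/values).
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # dec_feat: (B, C, Hd, Wd) coarse decoder features
        # enc_feat: (B, C, He, We) encoder skip features at the target resolution
        B, C, He, We = enc_feat.shape
        # Plain bilinear upsampling here; the paper replaces this with an
        # encoder-guided upsampling operation.
        dec_up = F.interpolate(dec_feat, size=(He, We), mode="bilinear", align_corners=False)
        q = dec_up.flatten(2).transpose(1, 2)     # (B, He*We, C) decoder queries
        kv = enc_feat.flatten(2).transpose(1, 2)  # (B, He*We, C) encoder keys/values
        fused, _ = self.attn(q, kv, kv)
        out = self.norm(fused + q)                # residual + layer norm
        return out.transpose(1, 2).reshape(B, C, He, We)


if __name__ == "__main__":
    # Toy usage: fuse a coarse decoder map with a higher-resolution skip feature.
    dec = torch.randn(2, 256, 16, 16)
    enc = torch.randn(2, 256, 32, 32)
    out = SkipAttentionFusion(dim=256)(dec, enc)
    print(out.shape)  # torch.Size([2, 256, 32, 32])
```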
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2211.13928
- PDF: https://arxiv.org/pdf/2211.13928
- OA Status: green
- Cited By: 2
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4310282807
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4310282807 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2211.13928 (Digital Object Identifier)
- Title: MUSTER: A Multi-scale Transformer-based Decoder for Semantic Segmentation
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022
- Publication date: 2022-11-25
- Authors: Jing Xu, Wentao Shi, Pan Gao, Zhengwei Wang, Qizhu Li (in order)
- Landing page: https://arxiv.org/abs/2211.13928 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2211.13928 (direct link to the full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2211.13928 (direct OA link)
- Concepts: Upsampling, Computer science, Segmentation, Encoder, Transformer, Artificial intelligence, Computer vision, Pixel, Pattern recognition (psychology), Image (mathematics), Engineering, Voltage, Operating system, Electrical engineering (top concepts attached by OpenAlex)
- Cited by: 2 (total citation count in OpenAlex)
- Citations by year (recent): 2024: 1, 2023: 1
- Related works (count): 10 (works algorithmically related by OpenAlex)
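The summary above and the full payload below are fields of a single OpenAlex work record. As a minimal sketch using only Python's standard library, the same record can be retrieved from the public OpenAlex REST API; the field names used here (`display_name`, `open_access.oa_status`, `cited_by_count`) match the payload table that follows, and the values shown in comments are those from this snapshot.

```python
import json
import urllib.request


def fetch_openalex_work(work_id: str = "W4310282807") -> dict:
    """Fetch a work record from the public OpenAlex REST API as a dict."""
    url = f"https://api.openalex.org/works/{work_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


if __name__ == "__main__":
    work = fetch_openalex_work()
    print(work["display_name"])              # MUSTER: A Multi-scale Transformer-based Decoder ...
    print(work["open_access"]["oa_status"])  # green
    print(work["cited_by_count"])            # 2 in the snapshot below; may have grown since
```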
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4310282807 |
| doi | https://doi.org/10.48550/arxiv.2211.13928 |
| ids.doi | https://doi.org/10.48550/arxiv.2211.13928 |
| ids.openalex | https://openalex.org/W4310282807 |
| fwci | |
| type | preprint |
| title | MUSTER: A Multi-scale Transformer-based Decoder for Semantic Segmentation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10036 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998000264167786 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Advanced Neural Network Applications |
| topics[1].id | https://openalex.org/T11307 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9965999722480774 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Domain Adaptation and Few-Shot Learning |
| topics[2].id | https://openalex.org/T11714 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.996399998664856 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Multimodal Machine Learning Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C110384440 |
| concepts[0].level | 3 |
| concepts[0].score | 0.798117995262146 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q1143270 |
| concepts[0].display_name | Upsampling |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.7816203236579895 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C89600930 |
| concepts[2].level | 2 |
| concepts[2].score | 0.7074605226516724 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[2].display_name | Segmentation |
| concepts[3].id | https://openalex.org/C118505674 |
| concepts[3].level | 2 |
| concepts[3].score | 0.7059941291809082 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q42586063 |
| concepts[3].display_name | Encoder |
| concepts[4].id | https://openalex.org/C66322947 |
| concepts[4].level | 3 |
| concepts[4].score | 0.692954957485199 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[4].display_name | Transformer |
| concepts[5].id | https://openalex.org/C154945302 |
| concepts[5].level | 1 |
| concepts[5].score | 0.6045635342597961 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[5].display_name | Artificial intelligence |
| concepts[6].id | https://openalex.org/C31972630 |
| concepts[6].level | 1 |
| concepts[6].score | 0.45222407579421997 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[6].display_name | Computer vision |
| concepts[7].id | https://openalex.org/C160633673 |
| concepts[7].level | 2 |
| concepts[7].score | 0.4155539572238922 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q355198 |
| concepts[7].display_name | Pixel |
| concepts[8].id | https://openalex.org/C153180895 |
| concepts[8].level | 2 |
| concepts[8].score | 0.37425923347473145 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[8].display_name | Pattern recognition (psychology) |
| concepts[9].id | https://openalex.org/C115961682 |
| concepts[9].level | 2 |
| concepts[9].score | 0.11507624387741089 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q860623 |
| concepts[9].display_name | Image (mathematics) |
| concepts[10].id | https://openalex.org/C127413603 |
| concepts[10].level | 0 |
| concepts[10].score | 0.0913509726524353 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[10].display_name | Engineering |
| concepts[11].id | https://openalex.org/C165801399 |
| concepts[11].level | 2 |
| concepts[11].score | 0.07983878254890442 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[11].display_name | Voltage |
| concepts[12].id | https://openalex.org/C111919701 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q9135 |
| concepts[12].display_name | Operating system |
| concepts[13].id | https://openalex.org/C119599485 |
| concepts[13].level | 1 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[13].display_name | Electrical engineering |
| keywords[0].id | https://openalex.org/keywords/upsampling |
| keywords[0].score | 0.798117995262146 |
| keywords[0].display_name | Upsampling |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.7816203236579895 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/segmentation |
| keywords[2].score | 0.7074605226516724 |
| keywords[2].display_name | Segmentation |
| keywords[3].id | https://openalex.org/keywords/encoder |
| keywords[3].score | 0.7059941291809082 |
| keywords[3].display_name | Encoder |
| keywords[4].id | https://openalex.org/keywords/transformer |
| keywords[4].score | 0.692954957485199 |
| keywords[4].display_name | Transformer |
| keywords[5].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[5].score | 0.6045635342597961 |
| keywords[5].display_name | Artificial intelligence |
| keywords[6].id | https://openalex.org/keywords/computer-vision |
| keywords[6].score | 0.45222407579421997 |
| keywords[6].display_name | Computer vision |
| keywords[7].id | https://openalex.org/keywords/pixel |
| keywords[7].score | 0.4155539572238922 |
| keywords[7].display_name | Pixel |
| keywords[8].id | https://openalex.org/keywords/pattern-recognition |
| keywords[8].score | 0.37425923347473145 |
| keywords[8].display_name | Pattern recognition (psychology) |
| keywords[9].id | https://openalex.org/keywords/image |
| keywords[9].score | 0.11507624387741089 |
| keywords[9].display_name | Image (mathematics) |
| keywords[10].id | https://openalex.org/keywords/engineering |
| keywords[10].score | 0.0913509726524353 |
| keywords[10].display_name | Engineering |
| keywords[11].id | https://openalex.org/keywords/voltage |
| keywords[11].score | 0.07983878254890442 |
| keywords[11].display_name | Voltage |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2211.13928 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2211.13928 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2211.13928 |
| locations[1].id | doi:10.48550/arxiv.2211.13928 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2211.13928 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5100380900 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-4565-7204 |
| authorships[0].author.display_name | Jing Xu |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Xu, Jing |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5029424274 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2648-1183 |
| authorships[1].author.display_name | Wentao Shi |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Shi, Wentao |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5101992015 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-5184-5674 |
| authorships[2].author.display_name | Pan Gao |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Gao, Pan |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5101514726 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-7706-553X |
| authorships[3].author.display_name | Zhengwei Wang |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Wang, Zhengwei |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5022666765 |
| authorships[4].author.orcid | |
| authorships[4].author.display_name | Qizhu Li |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Li, Qizhu |
| authorships[4].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2211.13928 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2022-11-30T00:00:00 |
| display_name | MUSTER: A Multi-scale Transformer-based Decoder for Semantic Segmentation |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10036 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998000264167786 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Advanced Neural Network Applications |
| related_works | https://openalex.org/W2062399876, https://openalex.org/W2607795551, https://openalex.org/W3155117723, https://openalex.org/W1991429770, https://openalex.org/W1983892167, https://openalex.org/W2281134365, https://openalex.org/W4310746709, https://openalex.org/W4306309518, https://openalex.org/W4385574037, https://openalex.org/W4386075645 |
| cited_by_count | 2 |
| counts_by_year[0].year | 2024 |
| counts_by_year[0].cited_by_count | 1 |
| counts_by_year[1].year | 2023 |
| counts_by_year[1].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2211.13928 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2211.13928 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2211.13928 |
| primary_location.id | pmh:oai:arXiv.org:2211.13928 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2211.13928 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2211.13928 |
| publication_date | 2022-11-25 |
| publication_year | 2022 |
| referenced_works_count | 0 |
| abstract_inverted_index | token-to-positions map of the abstract (full text given above; see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 5 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/11 |
| sustainable_development_goals[0].score | 0.8199999928474426 |
| sustainable_development_goals[0].display_name | Sustainable cities and communities |
| citation_normalized_percentile | |
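OpenAlex stores the abstract as an inverted index (token mapped to its word positions) rather than as plain text, which is what the `abstract_inverted_index` entry in the payload above condenses. The sketch below shows how the readable abstract can be rebuilt from that structure; it reuses the `fetch_openalex_work` helper assumed earlier.

```python
def abstract_from_inverted_index(inv: dict) -> str:
    """Rebuild plain text from OpenAlex's {token: [positions]} abstract encoding."""
    by_position = {pos: token for token, positions in inv.items() for pos in positions}
    return " ".join(by_position[i] for i in sorted(by_position))


# Example, using the record fetched by fetch_openalex_work():
# text = abstract_from_inverted_index(work["abstract_inverted_index"])
# text starts with "In recent works on semantic segmentation, ..."
```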