MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation
2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2305.08396
Since their emergence, Convolutional Neural Networks (CNNs) have made significant strides in medical image analysis. However, the local nature of the convolution operator may limit the ability of CNNs to capture global and long-range interactions. Recently, Transformers have gained popularity in the computer vision community, including in medical image segmentation, due to their ability to model global features effectively. However, the scalability issues of the self-attention mechanism and the lack of a CNN-like inductive bias may have limited their adoption. Hybrid vision transformers (CNN-Transformer), which exploit the advantages of both convolution and self-attention, have therefore gained importance. In this work, we present MaxViT-UNet, a new encoder-decoder based, UNet-style hybrid vision transformer (CNN-Transformer) for medical image segmentation. The proposed Hybrid Decoder is designed to harness the power of both the convolution and self-attention mechanisms at each decoding stage with a nominal memory and computational burden. The inclusion of multi-axis self-attention within each decoder stage significantly enhances the discriminating capacity between object and background regions, thereby helping to improve segmentation efficiency. A new decoder block is also proposed for the Hybrid Decoder: fusion begins by integrating the upsampled lower-level decoder features, obtained through transpose convolution, with the skip-connection features derived from the hybrid encoder; the fused features are then refined through a multi-axis attention mechanism. This decoder block is repeated multiple times to segment the nuclei regions progressively. Experimental results on the MoNuSeg18 and MoNuSAC20 datasets demonstrate the effectiveness of the proposed technique.
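The decoder described in the abstract follows a simple per-stage dataflow: transpose-convolution upsampling, fusion with the encoder skip connection, then multi-axis attention refinement. Below is a minimal PyTorch sketch of that dataflow, for illustration only; `HybridDecoderBlock`, `MultiAxisAttentionStub`, and the channel sizes are assumptions made for this example, and the stub uses plain global self-attention in place of the actual MaxViT block (MBConv plus block and grid attention).

```python
# Minimal sketch of the decoder-block dataflow described in the abstract:
# upsample lower-level decoder features with a transpose convolution, fuse them
# with the encoder skip-connection features, then refine the fused map with an
# attention stage. MultiAxisAttentionStub is a placeholder, not the authors' code.
import torch
import torch.nn as nn


class MultiAxisAttentionStub(nn.Module):
    """Placeholder for a MaxViT block (MBConv + block attention + grid attention)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        return x + attended.transpose(1, 2).reshape(b, c, h, w)


class HybridDecoderBlock(nn.Module):
    """Upsample -> fuse with skip connection -> attention-based refinement."""

    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.upsample = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(out_channels + skip_channels, out_channels, kernel_size=1)
        self.refine = MultiAxisAttentionStub(out_channels)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.upsample(x)                # 2x spatial upsampling via transpose conv
        x = torch.cat([x, skip], dim=1)     # integrate encoder skip-connection features
        x = self.fuse(x)
        return self.refine(x)               # refine the fused features with attention


if __name__ == "__main__":
    block = HybridDecoderBlock(in_channels=128, skip_channels=64, out_channels=64)
    low = torch.randn(1, 128, 16, 16)       # lower-level decoder features
    skip = torch.randn(1, 64, 32, 32)       # encoder skip connection
    print(block(low, skip).shape)           # torch.Size([1, 64, 32, 32])
```

In the paper this block is stacked once per decoding stage, so the nuclei mask is refined progressively as spatial resolution is restored.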
Key Metadata
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2305.08396
- PDF: https://arxiv.org/pdf/2305.08396
- OA Status: green
- Cited By: 8
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4376653885
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4376653885 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2305.08396 (Digital Object Identifier)
- Title: MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023
- Publication date: 2023-05-15
- Authors: Abdul Rehman, Asifullah Khan (in order)
- Landing page: https://arxiv.org/abs/2305.08396 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2305.08396 (direct link to full-text PDF)
- Open access: Yes (a free full text is available)
- OA status: green (per OpenAlex)
- OA URL: https://arxiv.org/pdf/2305.08396
- Concepts: Computer science, Artificial intelligence, Segmentation, Convolutional neural network, Encoder, Image segmentation, Computer vision, Transformer, Pattern recognition (psychology), Voltage, Quantum mechanics, Operating system, Physics (top concepts attached by OpenAlex)
- Cited by: 8 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2, 2024: 3, 2023: 3
- Related works (count): 10 (other works algorithmically related by OpenAlex)
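The record summarized above can be retrieved directly from the OpenAlex REST API. The short sketch below assumes the `requests` package is installed; the field names used (`display_name`, `doi`, `open_access`, `cited_by_count`) follow the public OpenAlex works schema, which also appears in the full payload table further down.

```python
# Fetch this work's OpenAlex record. Assumes the `requests` package is available.
import requests

OPENALEX_WORK_ID = "W4376653885"  # MaxViT-UNet preprint

resp = requests.get(f"https://api.openalex.org/works/{OPENALEX_WORK_ID}", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # title
print(work["doi"])                    # https://doi.org/10.48550/arxiv.2305.08396
print(work["open_access"]["oa_url"])  # best open-access URL
print(work["cited_by_count"])         # citation count at query time
```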
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4376653885 |
| doi | https://doi.org/10.48550/arxiv.2305.08396 |
| ids.doi | https://doi.org/10.48550/arxiv.2305.08396 |
| ids.openalex | https://openalex.org/W4376653885 |
| fwci | |
| type | preprint |
| title | MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T12702 |
| topics[0].field.id | https://openalex.org/fields/28 |
| topics[0].field.display_name | Neuroscience |
| topics[0].score | 0.9957000017166138 |
| topics[0].domain.id | https://openalex.org/domains/1 |
| topics[0].domain.display_name | Life Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2808 |
| topics[0].subfield.display_name | Neurology |
| topics[0].display_name | Brain Tumor Detection and Classification |
| topics[1].id | https://openalex.org/T10862 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9919000267982483 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | AI in cancer detection |
| topics[2].id | https://openalex.org/T10036 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9872000217437744 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Advanced Neural Network Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7720922827720642 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C154945302 |
| concepts[1].level | 1 |
| concepts[1].score | 0.5997976660728455 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[1].display_name | Artificial intelligence |
| concepts[2].id | https://openalex.org/C89600930 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5973814725875854 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[2].display_name | Segmentation |
| concepts[3].id | https://openalex.org/C81363708 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5596743226051331 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q17084460 |
| concepts[3].display_name | Convolutional neural network |
| concepts[4].id | https://openalex.org/C118505674 |
| concepts[4].level | 2 |
| concepts[4].score | 0.553077220916748 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q42586063 |
| concepts[4].display_name | Encoder |
| concepts[5].id | https://openalex.org/C124504099 |
| concepts[5].level | 3 |
| concepts[5].score | 0.485128253698349 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q56933 |
| concepts[5].display_name | Image segmentation |
| concepts[6].id | https://openalex.org/C31972630 |
| concepts[6].level | 1 |
| concepts[6].score | 0.4833928644657135 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[6].display_name | Computer vision |
| concepts[7].id | https://openalex.org/C66322947 |
| concepts[7].level | 3 |
| concepts[7].score | 0.4600844383239746 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[7].display_name | Transformer |
| concepts[8].id | https://openalex.org/C153180895 |
| concepts[8].level | 2 |
| concepts[8].score | 0.4319555163383484 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[8].display_name | Pattern recognition (psychology) |
| concepts[9].id | https://openalex.org/C165801399 |
| concepts[9].level | 2 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[9].display_name | Voltage |
| concepts[10].id | https://openalex.org/C62520636 |
| concepts[10].level | 1 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q944 |
| concepts[10].display_name | Quantum mechanics |
| concepts[11].id | https://openalex.org/C111919701 |
| concepts[11].level | 1 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q9135 |
| concepts[11].display_name | Operating system |
| concepts[12].id | https://openalex.org/C121332964 |
| concepts[12].level | 0 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[12].display_name | Physics |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7720922827720642 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[1].score | 0.5997976660728455 |
| keywords[1].display_name | Artificial intelligence |
| keywords[2].id | https://openalex.org/keywords/segmentation |
| keywords[2].score | 0.5973814725875854 |
| keywords[2].display_name | Segmentation |
| keywords[3].id | https://openalex.org/keywords/convolutional-neural-network |
| keywords[3].score | 0.5596743226051331 |
| keywords[3].display_name | Convolutional neural network |
| keywords[4].id | https://openalex.org/keywords/encoder |
| keywords[4].score | 0.553077220916748 |
| keywords[4].display_name | Encoder |
| keywords[5].id | https://openalex.org/keywords/image-segmentation |
| keywords[5].score | 0.485128253698349 |
| keywords[5].display_name | Image segmentation |
| keywords[6].id | https://openalex.org/keywords/computer-vision |
| keywords[6].score | 0.4833928644657135 |
| keywords[6].display_name | Computer vision |
| keywords[7].id | https://openalex.org/keywords/transformer |
| keywords[7].score | 0.4600844383239746 |
| keywords[7].display_name | Transformer |
| keywords[8].id | https://openalex.org/keywords/pattern-recognition |
| keywords[8].score | 0.4319555163383484 |
| keywords[8].display_name | Pattern recognition (psychology) |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2305.08396 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2305.08396 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2305.08396 |
| locations[1].id | doi:10.48550/arxiv.2305.08396 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2305.08396 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5024805800 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-9343-7652 |
| authorships[0].author.display_name | Abdul Rehman |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Rehman, Abdul |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5083112369 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2039-5305 |
| authorships[1].author.display_name | Asifullah Khan |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Khan, Asifullah |
| authorships[1].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2305.08396 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2023-05-17T00:00:00 |
| display_name | MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T12702 |
| primary_topic.field.id | https://openalex.org/fields/28 |
| primary_topic.field.display_name | Neuroscience |
| primary_topic.score | 0.9957000017166138 |
| primary_topic.domain.id | https://openalex.org/domains/1 |
| primary_topic.domain.display_name | Life Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2808 |
| primary_topic.subfield.display_name | Neurology |
| primary_topic.display_name | Brain Tumor Detection and Classification |
| related_works | https://openalex.org/W4293226380, https://openalex.org/W4390516098, https://openalex.org/W2181948922, https://openalex.org/W2384362569, https://openalex.org/W2142795561, https://openalex.org/W4205302943, https://openalex.org/W2561132942, https://openalex.org/W4321487865, https://openalex.org/W3155418658, https://openalex.org/W1522196789 |
| cited_by_count | 8 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 3 |
| counts_by_year[2].year | 2023 |
| counts_by_year[2].cited_by_count | 3 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2305.08396 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2305.08396 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2305.08396 |
| primary_location.id | pmh:oai:arXiv.org:2305.08396 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2305.08396 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2305.08396 |
| publication_date | 2023-05-15 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-position index of the abstract; omitted here since the abstract appears in full above — see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 2 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[0].score | 0.7300000190734863 |
| sustainable_development_goals[0].display_name | Reduced inequalities |
| citation_normalized_percentile |
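OpenAlex ships abstracts as an inverted index (each word mapped to the token positions where it occurs) rather than as plain text; the full index for this record is omitted from the table above. The following is a minimal sketch, assuming the `work` dict fetched earlier, of how such an index can be rebuilt into readable text.

```python
# Rebuild plain abstract text from an OpenAlex abstract_inverted_index,
# which maps each word to the list of positions where it occurs.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    positions = {}
    for word, idxs in inverted_index.items():
        for idx in idxs:
            positions[idx] = word
    return " ".join(positions[i] for i in sorted(positions))


# Toy example; the real index for this record starts with
# {"Since": [0], "their": [1, 53, 77], "emergence,": [2], ...}.
toy = {"Since": [0], "their": [1], "emergence,": [2]}
print(reconstruct_abstract(toy))  # "Since their emergence,"
```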