AISFormer: Amodal Instance Segmentation with Transformer
2022 · Open Access · DOI: https://doi.org/10.48550/arxiv.2210.06323
Abstract
Amodal Instance Segmentation (AIS) aims to segment both the visible and the possibly occluded regions of an object instance. While Mask R-CNN-based AIS approaches have shown promising results, they are unable to model high-level feature coherence due to their limited receptive field. Recent transformer-based models show impressive performance on vision tasks, even surpassing Convolutional Neural Networks (CNNs). In this work, we present AISFormer, an AIS framework with a transformer-based mask head. AISFormer explicitly models the complex coherence between occluder, visible, amodal, and invisible masks within an object's regions of interest by treating them as learnable queries. Specifically, AISFormer contains four modules: (i) feature encoding, which extracts ROI features and learns both short-range and long-range visual features; (ii) mask transformer decoding, which generates the occluder, visible, and amodal mask query embeddings with a transformer decoder; (iii) invisible mask embedding, which models the coherence between the amodal and visible masks; and (iv) mask predicting, which estimates the output occluder, visible, amodal, and invisible masks. We conduct extensive experiments and ablation studies on three challenging benchmarks, i.e., KINS, D2SA, and COCOA-cls, to evaluate the effectiveness of AISFormer. The code is available at: https://github.com/UARK-AICV/AISFormer
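The four modules described in the abstract map naturally onto a small transformer mask head over ROI features. The following is a minimal PyTorch sketch of that idea only; the module names, dimensions, and the way the invisible-mask embedding is derived from the amodal and visible queries are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
# Minimal sketch of a transformer-based amodal mask head in the spirit of the
# abstract above. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AmodalMaskHead(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # (i) feature encoding: refine ROI features with additional context
        self.pixel_encoder = nn.Sequential(
            nn.Conv2d(d_model, d_model, 3, padding=1), nn.ReLU(),
            nn.Conv2d(d_model, d_model, 3, padding=1), nn.ReLU(),
        )
        # (ii) mask transformer decoding: occluder / visible / amodal learnable queries
        self.mask_queries = nn.Embedding(3, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
        # (iii) invisible mask embedding: model coherence between amodal and visible
        self.invisible_mlp = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )

    def forward(self, roi_feats):                        # roi_feats: (B, C, H, W)
        B, C, H, W = roi_feats.shape
        pixel_feats = self.pixel_encoder(roi_feats)      # (B, C, H, W)
        memory = pixel_feats.flatten(2).transpose(1, 2)  # (B, H*W, C)
        queries = self.mask_queries.weight.unsqueeze(0).expand(B, -1, -1)
        q = self.decoder(queries, memory)                # (B, 3, C): occluder, visible, amodal
        occluder_q, visible_q, amodal_q = q.unbind(dim=1)
        invisible_q = self.invisible_mlp(torch.cat([amodal_q, visible_q], dim=-1))
        all_q = torch.stack([occluder_q, visible_q, amodal_q, invisible_q], dim=1)
        # (iv) mask predicting: dot product of query embeddings with per-pixel features
        masks = torch.einsum("bqc,bchw->bqhw", all_q, pixel_feats)
        return masks                                     # (B, 4, H, W) mask logits

# Usage: predict four masks for a batch of 14x14 ROI feature maps.
head = AmodalMaskHead()
print(head(torch.randn(2, 256, 14, 14)).shape)           # torch.Size([2, 4, 14, 14])
```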
Overview
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2210.06323
- PDF: https://arxiv.org/pdf/2210.06323
- OA Status: green
- Cited By: 23
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4306178259
Raw OpenAlex JSON
| Field | Value | Description |
|---|---|---|
| OpenAlex ID | https://openalex.org/W4306178259 | Canonical identifier for this work in OpenAlex |
| DOI | https://doi.org/10.48550/arxiv.2210.06323 | Digital Object Identifier |
| Title | AISFormer: Amodal Instance Segmentation with Transformer | Work title |
| Type | preprint | OpenAlex work type |
| Language | en | Primary language |
| Publication year | 2022 | Year of publication |
| Publication date | 2022-10-12 | Full publication date if available |
| Authors | Minh Trần, Khoa Vo, Kashu Yamazaki, Arthur Gustavo Fernandes, Michael Kidd, Ngan Le | List of authors in order |
| Landing page | https://arxiv.org/abs/2210.06323 | Publisher landing page |
| PDF URL | https://arxiv.org/pdf/2210.06323 | Direct link to full text PDF |
| Open access | Yes | Whether a free full text is available |
| OA status | green | Open access status per OpenAlex |
| OA URL | https://arxiv.org/pdf/2210.06323 | Direct OA link when available |
| Concepts | Amodal perception, Computer science, Artificial intelligence, Segmentation, Transformer, Computer vision, Decoding methods, Pattern recognition (psychology), Algorithm, Voltage, Physics, Cognition, Biology, Neuroscience, Quantum mechanics | Top concepts (fields/topics) attached by OpenAlex |
| Cited by | 23 | Total citation count in OpenAlex |
| Citations by year (recent) | 2025: 7, 2024: 13, 2023: 3 | Per-year citation counts (last 5 years) |
| Related works (count) | 10 | Other works algorithmically related by OpenAlex |
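The fields above come from the OpenAlex work record, which can be fetched directly from the public API. Below is a small sketch, assuming the `requests` package is installed; the field names match the payload keys listed in this document, though live values such as the citation count will drift over time.

```python
# Sketch: pull this record from the public OpenAlex /works endpoint and print
# the fields summarized in the table above.
import requests

work = requests.get("https://api.openalex.org/works/W4306178259", timeout=30).json()

print(work["display_name"])              # AISFormer: Amodal Instance Segmentation with Transformer
print(work["publication_date"])          # 2022-10-12
print(work["open_access"]["oa_status"])  # green
print(work["cited_by_count"])            # 23 at the time this page was generated
print([a["author"]["display_name"] for a in work["authorships"]])
print(work["best_oa_location"]["pdf_url"])  # https://arxiv.org/pdf/2210.06323
```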
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4306178259 |
| doi | https://doi.org/10.48550/arxiv.2210.06323 |
| ids.doi | https://doi.org/10.48550/arxiv.2210.06323 |
| ids.openalex | https://openalex.org/W4306178259 |
| fwci | |
| type | preprint |
| title | AISFormer: Amodal Instance Segmentation with Transformer |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T11307 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9987999796867371 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Domain Adaptation and Few-Shot Learning |
| topics[1].id | https://openalex.org/T10036 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9980000257492065 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Advanced Neural Network Applications |
| topics[2].id | https://openalex.org/T11714 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.996399998664856 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Multimodal Machine Learning Applications |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C174478892 |
| concepts[0].level | 3 |
| concepts[0].score | 0.8417536020278931 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q4747455 |
| concepts[0].display_name | Amodal perception |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.7486237287521362 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.6601272225379944 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C89600930 |
| concepts[3].level | 2 |
| concepts[3].score | 0.6085762977600098 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[3].display_name | Segmentation |
| concepts[4].id | https://openalex.org/C66322947 |
| concepts[4].level | 3 |
| concepts[4].score | 0.5963828563690186 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[4].display_name | Transformer |
| concepts[5].id | https://openalex.org/C31972630 |
| concepts[5].level | 1 |
| concepts[5].score | 0.47557225823402405 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[5].display_name | Computer vision |
| concepts[6].id | https://openalex.org/C57273362 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4199673533439636 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q576722 |
| concepts[6].display_name | Decoding methods |
| concepts[7].id | https://openalex.org/C153180895 |
| concepts[7].level | 2 |
| concepts[7].score | 0.3642317056655884 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[7].display_name | Pattern recognition (psychology) |
| concepts[8].id | https://openalex.org/C11413529 |
| concepts[8].level | 1 |
| concepts[8].score | 0.11544132232666016 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q8366 |
| concepts[8].display_name | Algorithm |
| concepts[9].id | https://openalex.org/C165801399 |
| concepts[9].level | 2 |
| concepts[9].score | 0.07261273264884949 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[9].display_name | Voltage |
| concepts[10].id | https://openalex.org/C121332964 |
| concepts[10].level | 0 |
| concepts[10].score | 0.06742960214614868 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q413 |
| concepts[10].display_name | Physics |
| concepts[11].id | https://openalex.org/C169900460 |
| concepts[11].level | 2 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q2200417 |
| concepts[11].display_name | Cognition |
| concepts[12].id | https://openalex.org/C86803240 |
| concepts[12].level | 0 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q420 |
| concepts[12].display_name | Biology |
| concepts[13].id | https://openalex.org/C169760540 |
| concepts[13].level | 1 |
| concepts[13].score | 0.0 |
| concepts[13].wikidata | https://www.wikidata.org/wiki/Q207011 |
| concepts[13].display_name | Neuroscience |
| concepts[14].id | https://openalex.org/C62520636 |
| concepts[14].level | 1 |
| concepts[14].score | 0.0 |
| concepts[14].wikidata | https://www.wikidata.org/wiki/Q944 |
| concepts[14].display_name | Quantum mechanics |
| keywords[0].id | https://openalex.org/keywords/amodal-perception |
| keywords[0].score | 0.8417536020278931 |
| keywords[0].display_name | Amodal perception |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.7486237287521362 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.6601272225379944 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/segmentation |
| keywords[3].score | 0.6085762977600098 |
| keywords[3].display_name | Segmentation |
| keywords[4].id | https://openalex.org/keywords/transformer |
| keywords[4].score | 0.5963828563690186 |
| keywords[4].display_name | Transformer |
| keywords[5].id | https://openalex.org/keywords/computer-vision |
| keywords[5].score | 0.47557225823402405 |
| keywords[5].display_name | Computer vision |
| keywords[6].id | https://openalex.org/keywords/decoding-methods |
| keywords[6].score | 0.4199673533439636 |
| keywords[6].display_name | Decoding methods |
| keywords[7].id | https://openalex.org/keywords/pattern-recognition |
| keywords[7].score | 0.3642317056655884 |
| keywords[7].display_name | Pattern recognition (psychology) |
| keywords[8].id | https://openalex.org/keywords/algorithm |
| keywords[8].score | 0.11544132232666016 |
| keywords[8].display_name | Algorithm |
| keywords[9].id | https://openalex.org/keywords/voltage |
| keywords[9].score | 0.07261273264884949 |
| keywords[9].display_name | Voltage |
| keywords[10].id | https://openalex.org/keywords/physics |
| keywords[10].score | 0.06742960214614868 |
| keywords[10].display_name | Physics |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2210.06323 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2210.06323 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2210.06323 |
| locations[1].id | doi:10.48550/arxiv.2210.06323 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2210.06323 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5101612317 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-4637-6081 |
| authorships[0].author.display_name | Minh Trần |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Tran, Minh |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5109007400 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Khoa Vo |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Vo, Khoa |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5089288624 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-6569-6860 |
| authorships[2].author.display_name | Kashu Yamazaki |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Yamazaki, Kashu |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5036460147 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-7525-1838 |
| authorships[3].author.display_name | Arthur Gustavo Fernandes |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Fernandes, Arthur |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5101889713 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-0221-1245 |
| authorships[4].author.display_name | Michael Kidd |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Kidd, Michael |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5023725893 |
| authorships[5].author.orcid | https://orcid.org/0000-0003-2571-0511 |
| authorships[5].author.display_name | Ngan Le |
| authorships[5].author_position | last |
| authorships[5].raw_author_name | Le, Ngan |
| authorships[5].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2210.06323 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | AISFormer: Amodal Instance Segmentation with Transformer |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T11307 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9987999796867371 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Domain Adaptation and Few-Shot Learning |
| related_works | https://openalex.org/W3158435931, https://openalex.org/W1589158839, https://openalex.org/W2048200892, https://openalex.org/W4284674805, https://openalex.org/W4387775854, https://openalex.org/W4321460497, https://openalex.org/W2153903859, https://openalex.org/W4294017904, https://openalex.org/W2086050082, https://openalex.org/W2951289157 |
| cited_by_count | 23 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 7 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 13 |
| counts_by_year[2].year | 2023 |
| counts_by_year[2].cited_by_count | 3 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2210.06323 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2210.06323 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2210.06323 |
| primary_location.id | pmh:oai:arXiv.org:2210.06323 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2210.06323 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2210.06323 |
| publication_date | 2022-10-12 |
| publication_year | 2022 |
| referenced_works_count | 0 |
| abstract_inverted_index | Inverted index of the abstract (word → list of positions); the full abstract text appears at the top of this page. See the reconstruction sketch after this table. |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 6 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.4099999964237213 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile | |
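OpenAlex distributes abstracts as an inverted index (a mapping from each word to the positions where it occurs) rather than as plain text, which is what the collapsed abstract_inverted_index entry in the payload above encodes. The sketch below shows how that structure can be turned back into readable text; the `inv` dictionary is a tiny illustrative excerpt, not the full index.

```python
# Sketch: rebuild a readable abstract from an OpenAlex-style inverted index.
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    # Place every word at each of its recorded positions, then read them in order.
    positions = {pos: word for word, pos_list in inverted_index.items() for pos in pos_list}
    return " ".join(positions[i] for i in sorted(positions))

# Tiny excerpt of the index for this work (first six tokens only).
inv = {"Amodal": [0], "Instance": [1], "Segmentation": [2], "(AIS)": [3], "aims": [4], "to": [5]}
print(reconstruct_abstract(inv))   # "Amodal Instance Segmentation (AIS) aims to"
```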