Learning Spatial-Frequency Transformer for Visual Object Tracking
2022 · Open Access · DOI: https://doi.org/10.48550/arxiv.2208.08829
Recent trackers adopt the Transformer to complement or replace the widely used ResNet as their backbone network. Although these trackers work well in regular scenarios, they simply flatten the 2D features into a sequence to fit the Transformer. We believe these operations ignore the spatial prior of the target object and may lead to sub-optimal results. In addition, many works demonstrate that self-attention acts as a low-pass filter, independent of the input features or keys/queries. That is to say, it may suppress the high-frequency components of the input features while preserving or even amplifying the low-frequency information. To handle these issues, in this paper we propose a unified Spatial-Frequency Transformer that models a Gaussian spatial Prior and High-frequency emphasis Attention (GPHA) simultaneously. Specifically, the Gaussian spatial prior is generated by dual Multi-Layer Perceptrons (MLPs) and injected into the similarity matrix produced by multiplying the Query and Key features in self-attention. The output is fed into a Softmax layer and then decomposed into two components, i.e., the direct (low-frequency) signal and the high-frequency signal. The low- and high-pass branches are rescaled and combined to approximate an all-pass response, so high-frequency features are well preserved through the stacked self-attention layers. We further integrate the Spatial-Frequency Transformer into the Siamese tracking framework and propose a novel tracking algorithm, termed SFTransT. A cross-scale-fusion-based SwinTransformer is adopted as the backbone, and a multi-head cross-attention module boosts the interaction between the search and template features. The output is fed into the tracking head for target localization. Extensive experiments on both short-term and long-term tracking benchmarks demonstrate the effectiveness of the proposed framework.
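The GPHA design described above is concrete enough to sketch. Below is a minimal PyTorch sketch of the idea, assuming a particular form for the dual-MLP Gaussian prior and for the frequency decomposition (a DC/residual split of the softmax output); it illustrates the mechanism rather than reproducing the authors' implementation.

```python
# Hypothetical sketch of GPHA: a Gaussian spatial prior added to the attention
# logits, then a low-pass (DC) / high-frequency split of the softmax output
# with learnable rescaling. Layer shapes and the exact decomposition are
# assumptions, not the authors' code.
import torch
import torch.nn as nn


class GPHAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Dual MLPs predicting a per-token Gaussian spatial prior (assumed form).
        self.mu_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))
        self.sigma_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        # Learnable rescaling of the low- and high-frequency branches.
        self.alpha_low = nn.Parameter(torch.ones(1))
        self.alpha_high = nn.Parameter(torch.ones(1))

    def forward(self, x, coords):
        # x: (B, N, C) flattened tokens; coords: (N, 2) 2D grid positions.
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, H, N, D)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # similarity matrix

        # Gaussian spatial prior: log-density of each key position under a
        # Gaussian whose mean/scale are predicted from the query token.
        mu = self.mu_mlp(x)                             # (B, N, 2)
        sigma = self.sigma_mlp(x).exp()                 # (B, N, 1), positive
        d2 = ((coords[None, None] - mu[:, :, None]) ** 2).sum(-1)  # (B, N, N)
        prior = -d2 / (2 * sigma ** 2)                  # log Gaussian
        attn = attn + prior[:, None]                    # inject into all heads

        attn = attn.softmax(dim=-1)
        # High-frequency emphasis: split into DC (mean over keys) + residual,
        # rescale each branch, and recombine toward an all-pass response.
        dc = attn.mean(dim=-1, keepdim=True)
        attn = self.alpha_low * dc + self.alpha_high * (attn - dc)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

With alpha_low = alpha_high = 1 this reduces to plain softmax attention; learning alpha_high larger than alpha_low amplifies the high-frequency residual, which is the behavior the abstract attributes to GPHA.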
Record details
- Type: preprint
- Language: en
- Landing page: http://arxiv.org/abs/2208.08829
- PDF: https://arxiv.org/pdf/2208.08829
- OA status: green
- Cited by: 1
- Related works: 10
- OpenAlex ID: https://openalex.org/W4292436284
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4292436284 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2208.08829 (Digital Object Identifier)
- Title: Learning Spatial-Frequency Transformer for Visual Object Tracking (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022 (year of publication)
- Publication date: 2022-08-18 (full publication date if available)
- Authors: Chuanming Tang, Xiao Wang, Yuanchao Bai, Zhe Wu, Jianlin Zhang, Yongmei Huang (authors in order)
- Landing page: https://arxiv.org/abs/2208.08829 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2208.08829 (direct link to the full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2208.08829 (direct OA link when available)
- Concepts: Computer science, Transformer, Artificial intelligence, Softmax function, BitTorrent tracker, Pattern recognition (psychology), Computer vision, Eye tracking, Deep learning, Engineering, Voltage, Electrical engineering (top concepts attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2023: 1 (per-year counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
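The same work object can be refreshed from the OpenAlex REST API. A minimal sketch using only the standard library; the /works/{id} lookup route is OpenAlex's documented API, and the printed fields appear in the payload below:

```python
import json
import urllib.request

# Fetch this work from the OpenAlex API by its OpenAlex ID.
url = "https://api.openalex.org/works/W4292436284"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

print(work["display_name"])           # Learning Spatial-Frequency Transformer ...
print(work["open_access"]["oa_url"])  # https://arxiv.org/pdf/2208.08829
```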
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4292436284 |
| doi | https://doi.org/10.48550/arxiv.2208.08829 |
| ids.doi | https://doi.org/10.48550/arxiv.2208.08829 |
| ids.openalex | https://openalex.org/W4292436284 |
| fwci | |
| type | preprint |
| title | Learning Spatial-Frequency Transformer for Visual Object Tracking |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10331 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9995999932289124 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Video Surveillance and Tracking Methods |
| topics[1].id | https://openalex.org/T11667 |
| topics[1].field.id | https://openalex.org/fields/22 |
| topics[1].field.display_name | Engineering |
| topics[1].score | 0.98580002784729 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2204 |
| topics[1].subfield.display_name | Biomedical Engineering |
| topics[1].display_name | Advanced Chemical Sensor Technologies |
| topics[2].id | https://openalex.org/T12389 |
| topics[2].field.id | https://openalex.org/fields/22 |
| topics[2].field.display_name | Engineering |
| topics[2].score | 0.9598000049591064 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2202 |
| topics[2].subfield.display_name | Aerospace Engineering |
| topics[2].display_name | Infrared Target Detection Methodologies |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.7221258878707886 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C66322947 |
| concepts[1].level | 3 |
| concepts[1].score | 0.5931119918823242 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[1].display_name | Transformer |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5816816091537476 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C188441871 |
| concepts[3].level | 3 |
| concepts[3].score | 0.5695260167121887 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q7554146 |
| concepts[3].display_name | Softmax function |
| concepts[4].id | https://openalex.org/C57501372 |
| concepts[4].level | 3 |
| concepts[4].score | 0.49252328276634216 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q2021268 |
| concepts[4].display_name | BitTorrent tracker |
| concepts[5].id | https://openalex.org/C153180895 |
| concepts[5].level | 2 |
| concepts[5].score | 0.4278145432472229 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[5].display_name | Pattern recognition (psychology) |
| concepts[6].id | https://openalex.org/C31972630 |
| concepts[6].level | 1 |
| concepts[6].score | 0.40997055172920227 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[6].display_name | Computer vision |
| concepts[7].id | https://openalex.org/C56461940 |
| concepts[7].level | 2 |
| concepts[7].score | 0.26766663789749146 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q970687 |
| concepts[7].display_name | Eye tracking |
| concepts[8].id | https://openalex.org/C108583219 |
| concepts[8].level | 2 |
| concepts[8].score | 0.18418428301811218 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[8].display_name | Deep learning |
| concepts[9].id | https://openalex.org/C127413603 |
| concepts[9].level | 0 |
| concepts[9].score | 0.11084035038948059 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[9].display_name | Engineering |
| concepts[10].id | https://openalex.org/C165801399 |
| concepts[10].level | 2 |
| concepts[10].score | 0.0 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[10].display_name | Voltage |
| concepts[11].id | https://openalex.org/C119599485 |
| concepts[11].level | 1 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[11].display_name | Electrical engineering |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.7221258878707886 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/transformer |
| keywords[1].score | 0.5931119918823242 |
| keywords[1].display_name | Transformer |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.5816816091537476 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/softmax-function |
| keywords[3].score | 0.5695260167121887 |
| keywords[3].display_name | Softmax function |
| keywords[4].id | https://openalex.org/keywords/bittorrent-tracker |
| keywords[4].score | 0.49252328276634216 |
| keywords[4].display_name | BitTorrent tracker |
| keywords[5].id | https://openalex.org/keywords/pattern-recognition |
| keywords[5].score | 0.4278145432472229 |
| keywords[5].display_name | Pattern recognition (psychology) |
| keywords[6].id | https://openalex.org/keywords/computer-vision |
| keywords[6].score | 0.40997055172920227 |
| keywords[6].display_name | Computer vision |
| keywords[7].id | https://openalex.org/keywords/eye-tracking |
| keywords[7].score | 0.26766663789749146 |
| keywords[7].display_name | Eye tracking |
| keywords[8].id | https://openalex.org/keywords/deep-learning |
| keywords[8].score | 0.18418428301811218 |
| keywords[8].display_name | Deep learning |
| keywords[9].id | https://openalex.org/keywords/engineering |
| keywords[9].score | 0.11084035038948059 |
| keywords[9].display_name | Engineering |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2208.08829 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2208.08829 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2208.08829 |
| locations[1].id | doi:10.48550/arxiv.2208.08829 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2208.08829 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5004593135 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Chuanming Tang |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Tang, Chuanming |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5100411426 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-6117-6745 |
| authorships[1].author.display_name | Xiao Wang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Wang, Xiao |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5024093994 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-3449-6537 |
| authorships[2].author.display_name | Yuanchao Bai |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Bai, Yuanchao |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5101555594 |
| authorships[3].author.orcid | https://orcid.org/0009-0003-8750-4841 |
| authorships[3].author.display_name | Zhe Wu |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Wu, Zhe |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5100747273 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-5284-2942 |
| authorships[4].author.display_name | Jianlin Zhang |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Zhang, Jianlin |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5058185848 |
| authorships[5].author.orcid | https://orcid.org/0000-0003-2926-4635 |
| authorships[5].author.display_name | Yongmei Huang |
| authorships[5].author_position | last |
| authorships[5].raw_author_name | Huang, Yongmei |
| authorships[5].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2208.08829 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Learning Spatial-Frequency Transformer for Visual Object Tracking |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10331 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9995999932289124 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Video Surveillance and Tracking Methods |
| related_works | https://openalex.org/W3107204728, https://openalex.org/W4287591324, https://openalex.org/W2980176872, https://openalex.org/W4226420367, https://openalex.org/W2962876041, https://openalex.org/W3090555870, https://openalex.org/W3108503355, https://openalex.org/W2249953602, https://openalex.org/W2912971006, https://openalex.org/W3022820045 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2023 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2208.08829 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2208.08829 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2208.08829 |
| primary_location.id | pmh:oai:arXiv.org:2208.08829 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2208.08829 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2208.08829 |
| publication_date | 2022-08-18 |
| publication_year | 2022 |
| referenced_works_count | 0 |
| abstract_inverted_index.a | 34, 69, 111, 161, 214, 232 |
| abstract_inverted_index.2D | 31 |
| abstract_inverted_index.In | 60 |
| abstract_inverted_index.To | 102, 127 |
| abstract_inverted_index.We | 41, 201 |
| abstract_inverted_index.as | 13, 227 |
| abstract_inverted_index.be | 128, 158, 194, 250 |
| abstract_inverted_index.by | 147 |
| abstract_inverted_index.in | 23, 106, 153, 197 |
| abstract_inverted_index.is | 67, 73, 81, 133, 225, 236 |
| abstract_inverted_index.it | 84 |
| abstract_inverted_index.of | 49, 75, 90, 272 |
| abstract_inverted_index.on | 261 |
| abstract_inverted_index.or | 7, 78, 96 |
| abstract_inverted_index.to | 5, 36, 56, 82, 186, 238 |
| abstract_inverted_index.we | 109 |
| abstract_inverted_index.Key | 151 |
| abstract_inverted_index.The | 155, 177, 220, 247 |
| abstract_inverted_index.all | 268 |
| abstract_inverted_index.and | 94, 121, 140, 150, 164, 174, 179, 184, 212, 230, 244, 264 |
| abstract_inverted_index.are | 182 |
| abstract_inverted_index.fed | 159, 251 |
| abstract_inverted_index.for | 256 |
| abstract_inverted_index.may | 54, 85 |
| abstract_inverted_index.new | 15 |
| abstract_inverted_index.our | 273 |
| abstract_inverted_index.the | 3, 9, 30, 39, 46, 50, 87, 91, 99, 117, 143, 171, 190, 204, 208, 228, 240, 253, 270 |
| abstract_inverted_index.two | 168 |
| abstract_inverted_index.That | 80 |
| abstract_inverted_index.also | 231 |
| abstract_inverted_index.both | 262 |
| abstract_inverted_index.dual | 136 |
| abstract_inverted_index.even | 97 |
| abstract_inverted_index.head | 255 |
| abstract_inverted_index.into | 33, 142, 160, 167, 207, 252 |
| abstract_inverted_index.lead | 55 |
| abstract_inverted_index.low- | 178 |
| abstract_inverted_index.many | 62 |
| abstract_inverted_index.say, | 83 |
| abstract_inverted_index.that | 65, 115 |
| abstract_inverted_index.then | 165 |
| abstract_inverted_index.they | 27 |
| abstract_inverted_index.this | 107 |
| abstract_inverted_index.used | 11, 237 |
| abstract_inverted_index.well | 22, 196 |
| abstract_inverted_index.will | 157, 193, 249 |
| abstract_inverted_index.work | 21 |
| abstract_inverted_index.Prior | 120 |
| abstract_inverted_index.Query | 149 |
| abstract_inverted_index.adopt | 2 |
| abstract_inverted_index.based | 223 |
| abstract_inverted_index.boost | 239 |
| abstract_inverted_index.i.e., | 170 |
| abstract_inverted_index.input | 76, 92 |
| abstract_inverted_index.layer | 163 |
| abstract_inverted_index.match | 38 |
| abstract_inverted_index.novel | 215 |
| abstract_inverted_index.only. | 59 |
| abstract_inverted_index.prior | 48, 132 |
| abstract_inverted_index.their | 14, 19 |
| abstract_inverted_index.these | 43, 104 |
| abstract_inverted_index.using | 135 |
| abstract_inverted_index.which | 53, 72 |
| abstract_inverted_index.works | 63 |
| abstract_inverted_index.(GPHA) | 125 |
| abstract_inverted_index.(MLPs) | 139 |
| abstract_inverted_index.Recent | 0 |
| abstract_inverted_index.ResNet | 12 |
| abstract_inverted_index.better | 37 |
| abstract_inverted_index.direct | 172 |
| abstract_inverted_index.fusion | 222 |
| abstract_inverted_index.handle | 103 |
| abstract_inverted_index.ignore | 45 |
| abstract_inverted_index.matrix | 145 |
| abstract_inverted_index.models | 116 |
| abstract_inverted_index.module | 235 |
| abstract_inverted_index.object | 52 |
| abstract_inverted_index.output | 156, 248 |
| abstract_inverted_index.paper, | 108 |
| abstract_inverted_index.search | 243 |
| abstract_inverted_index.signal | 173 |
| abstract_inverted_index.simply | 28 |
| abstract_inverted_index.target | 51, 257 |
| abstract_inverted_index.termed | 218 |
| abstract_inverted_index.widely | 10 |
| abstract_inverted_index.Siamese | 209 |
| abstract_inverted_index.Softmax | 162 |
| abstract_inverted_index.achieve | 187 |
| abstract_inverted_index.adopted | 226 |
| abstract_inverted_index.amplify | 98 |
| abstract_inverted_index.believe | 42 |
| abstract_inverted_index.between | 242 |
| abstract_inverted_index.combine | 6 |
| abstract_inverted_index.filter, | 71 |
| abstract_inverted_index.flatten | 29 |
| abstract_inverted_index.further | 202 |
| abstract_inverted_index.issues, | 105 |
| abstract_inverted_index.layers. | 200 |
| abstract_inverted_index.propose | 110, 213 |
| abstract_inverted_index.regular | 24 |
| abstract_inverted_index.replace | 8 |
| abstract_inverted_index.results | 58 |
| abstract_inverted_index.signal. | 176 |
| abstract_inverted_index.spatial | 47, 119, 131 |
| abstract_inverted_index.stacked | 198 |
| abstract_inverted_index.unified | 112 |
| abstract_inverted_index.Although | 18 |
| abstract_inverted_index.Gaussian | 118, 130 |
| abstract_inverted_index.actually | 68 |
| abstract_inverted_index.backbone | 16 |
| abstract_inverted_index.branches | 181 |
| abstract_inverted_index.combined | 185 |
| abstract_inverted_index.emphasis | 123 |
| abstract_inverted_index.features | 32, 77, 93, 152, 192 |
| abstract_inverted_index.however, | 26 |
| abstract_inverted_index.injected | 141 |
| abstract_inverted_index.low-pass | 70 |
| abstract_inverted_index.network. | 17 |
| abstract_inverted_index.preserve | 95 |
| abstract_inverted_index.produced | 146 |
| abstract_inverted_index.proposed | 274 |
| abstract_inverted_index.rescaled | 183 |
| abstract_inverted_index.sequence | 35 |
| abstract_inverted_index.suppress | 86 |
| abstract_inverted_index.template | 245 |
| abstract_inverted_index.trackers | 1, 20 |
| abstract_inverted_index.tracking | 210, 216, 254, 266 |
| abstract_inverted_index.Attention | 124 |
| abstract_inverted_index.Extensive | 259 |
| abstract_inverted_index.SFTransT. | 219 |
| abstract_inverted_index.addition, | 61 |
| abstract_inverted_index.all-pass, | 188 |
| abstract_inverted_index.backbone, | 229 |
| abstract_inverted_index.component | 89 |
| abstract_inverted_index.features. | 246 |
| abstract_inverted_index.framework | 211 |
| abstract_inverted_index.generated | 134 |
| abstract_inverted_index.high-pass | 180 |
| abstract_inverted_index.integrate | 203 |
| abstract_inverted_index.long-term | 265 |
| abstract_inverted_index.protected | 195 |
| abstract_inverted_index.specific, | 129 |
| abstract_inverted_index.algorithm, | 217 |
| abstract_inverted_index.benchmarks | 267 |
| abstract_inverted_index.decomposed | 166 |
| abstract_inverted_index.framework. | 275 |
| abstract_inverted_index.multi-head | 233 |
| abstract_inverted_index.operations | 44 |
| abstract_inverted_index.scenarios, | 25 |
| abstract_inverted_index.short-term | 263 |
| abstract_inverted_index.similarity | 144 |
| abstract_inverted_index.therefore, | 189 |
| abstract_inverted_index.Multi-Layer | 137 |
| abstract_inverted_index.Perceptrons | 138 |
| abstract_inverted_index.Transformer | 4, 114, 206 |
| abstract_inverted_index.components, | 169 |
| abstract_inverted_index.cross-scale | 221 |
| abstract_inverted_index.demonstrate | 64, 269 |
| abstract_inverted_index.experiments | 260 |
| abstract_inverted_index.independent | 74 |
| abstract_inverted_index.interaction | 241 |
| abstract_inverted_index.multiplying | 148 |
| abstract_inverted_index.sub-optimal | 57 |
| abstract_inverted_index.Transformer. | 40 |
| abstract_inverted_index.information. | 101 |
| abstract_inverted_index.key/queries. | 79 |
| abstract_inverted_index.effectiveness | 271 |
| abstract_inverted_index.localization. | 258 |
| abstract_inverted_index.low-frequency | 100 |
| abstract_inverted_index.High-frequency | 122 |
| abstract_inverted_index.high-frequency | 88, 175, 191 |
| abstract_inverted_index.self-attention | 66, 199 |
| abstract_inverted_index.SwinTransformer | 224 |
| abstract_inverted_index.cross-attention | 234 |
| abstract_inverted_index.self-attention. | 154 |
| abstract_inverted_index.simultaneously. | 126 |
| abstract_inverted_index.Spatial-Frequency | 113, 205 |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 6 |
| citation_normalized_percentile | |
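The abstract_inverted_index rows in the payload encode the abstract as a mapping from each word to the list of positions where it occurs. A small helper (an illustrative utility, not part of any OpenAlex tooling) can rebuild the plain text:

```python
def rebuild_abstract(inverted_index: dict) -> str:
    # Map each position back to its word, then join in positional order.
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

# Tiny slice of the index above: "Recent" at 0, "trackers" at 1 and 20, "adopt" at 2.
print(rebuild_abstract({"Recent": [0], "trackers": [1, 20], "adopt": [2]}))
# -> "Recent trackers adopt trackers"
```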