Flow-guided Spatial Attention Tracking for Egocentric Activity Recognition
2021 · Open Access · DOI: https://doi.org/10.1109/icpr48806.2021.9412512
The popularity of wearable cameras has opened up a new dimension for egocentric activity recognition. While some methods introduce attention mechanisms into deep learning networks to capture fine-grained hand-object interactions, they often neglect the spatio-temporal relationships across frames. Generating spatial attention without adequately exploiting temporal consistency can lead to sub-optimal performance in video-based tasks. In this paper, we propose a flow-guided spatial attention tracking (F-SAT) module, which enhances motion patterns and inter-frame information to highlight discriminative features from regions of interest across a video sequence. A new form of input, namely the optical-flow volume, is presented to provide informative cues from moving parts for spatial attention tracking. The proposed F-SAT module is deployed in a two-branch deep architecture, which fuses complementary information for egocentric activity recognition. Experimental results on three egocentric activity benchmarks show that the proposed method achieves state-of-the-art performance.
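The abstract outlines the core mechanism: an optical-flow volume (several stacked flow fields) drives a spatial attention map that re-weights appearance features. Below is a minimal PyTorch sketch of that idea; the class name `FlowGuidedSpatialAttention`, the layer sizes, and the residual fusion are illustrative assumptions, not the authors' published design.

```python
# Hypothetical sketch of flow-guided spatial attention; the exact layers
# and fusion used in the paper are not reproduced here.
import torch
import torch.nn as nn

class FlowGuidedSpatialAttention(nn.Module):
    """Derives a spatial attention map from a stacked optical-flow volume
    and applies it to appearance (RGB) features."""
    def __init__(self, flow_channels: int):
        super().__init__()
        # Encode the flow volume (2 channels per frame, L frames stacked)
        # into a single-channel attention logit per spatial location.
        self.flow_encoder = nn.Sequential(
            nn.Conv2d(flow_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, rgb_feat: torch.Tensor, flow_volume: torch.Tensor) -> torch.Tensor:
        # rgb_feat:    (B, C, H, W) appearance features
        # flow_volume: (B, 2*L, H, W) stacked x/y flow fields, resized to H x W
        attn = torch.sigmoid(self.flow_encoder(flow_volume))  # (B, 1, H, W)
        # Residual attention: emphasize motion regions without zeroing the rest.
        return rgb_feat * (1.0 + attn)

# Example: 5 stacked flow frames (2 channels each) guiding 256-channel features.
if __name__ == "__main__":
    block = FlowGuidedSpatialAttention(flow_channels=10)
    rgb = torch.randn(2, 256, 14, 14)
    flow = torch.randn(2, 10, 14, 14)
    print(block(rgb, flow).shape)  # torch.Size([2, 256, 14, 14])
```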
Metadata
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1109/icpr48806.2021.9412512
- OA Status: green
- Cited By: 3
- References: 33
- Related Works: 10
- OpenAlex ID: https://openalex.org/W3163102609
Raw OpenAlex JSON
| Field | Value | Description |
|---|---|---|
| OpenAlex ID | https://openalex.org/W3163102609 | Canonical identifier for this work in OpenAlex |
| DOI | https://doi.org/10.1109/icpr48806.2021.9412512 | Digital Object Identifier |
| Title | Flow-guided Spatial Attention Tracking for Egocentric Activity Recognition | Work title |
| Type | article | OpenAlex work type |
| Language | en | Primary language |
| Publication year | 2021 | Year of publication |
| Publication date | 2021-01-10 | Full publication date if available |
| Authors | Tianshan Liu, Kin‐Man Lam | List of authors in order |
| Landing page | https://doi.org/10.1109/icpr48806.2021.9412512 | Publisher landing page |
| Open access | Yes | Whether a free full text is available |
| OA status | green | Open access status per OpenAlex |
| OA URL | https://ira.lib.polyu.edu.hk/bitstream/10397/107109/1/Liu_Flow-Guided_Spatial_Attention.pdf | Direct OA link when available |
| Concepts | Computer science, Computer vision, Artificial intelligence, Flow (mathematics), Tracking (education), Pattern recognition (psychology), Psychology, Mathematics, Geometry, Pedagogy | Top concepts (fields/topics) attached by OpenAlex |
| Cited by | 3 | Total citation count in OpenAlex |
| Citations by year (recent) | 2024: 1, 2022: 1, 2021: 1 | Per-year citation counts (last 5 years) |
| References (count) | 33 | Number of works referenced by this work |
| Related works (count) | 10 | Other works algorithmically related by OpenAlex |
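Every field in this record can be pulled straight from the OpenAlex REST API. A minimal fetch of this work, assuming the `requests` library; the `mailto` value is a placeholder address used for OpenAlex's polite-pool convention.

```python
# Fetch this work's record from the OpenAlex API.
import requests

resp = requests.get(
    "https://api.openalex.org/works/W3163102609",
    params={"mailto": "you@example.org"},  # placeholder; use your own address
    timeout=30,
)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])           # paper title
print(work["publication_year"])       # 2021
print(work["open_access"]["oa_url"])  # direct OA PDF link, when available
```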
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W3163102609 |
| doi | https://doi.org/10.1109/icpr48806.2021.9412512 |
| ids.doi | https://doi.org/10.1109/icpr48806.2021.9412512 |
| ids.mag | 3163102609 |
| ids.openalex | https://openalex.org/W3163102609 |
| fwci | 0.30665844 |
| type | article |
| title | Flow-guided Spatial Attention Tracking for Egocentric Activity Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 4308 |
| biblio.first_page | 4303 |
| topics[0].id | https://openalex.org/T10812 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9998000264167786 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Human Pose and Action Recognition |
| topics[1].id | https://openalex.org/T11714 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9950000047683716 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Multimodal Machine Learning Applications |
| topics[2].id | https://openalex.org/T10444 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9922000169754028 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Context-Aware Activity Recognition Systems |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.6354058384895325 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C31972630 |
| concepts[1].level | 1 |
| concepts[1].score | 0.6280077695846558 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[1].display_name | Computer vision |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.580984354019165 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C38349280 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5399577021598816 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1434290 |
| concepts[3].display_name | Flow (mathematics) |
| concepts[4].id | https://openalex.org/C2775936607 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5323818922042847 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q466845 |
| concepts[4].display_name | Tracking (education) |
| concepts[5].id | https://openalex.org/C153180895 |
| concepts[5].level | 2 |
| concepts[5].score | 0.3546847105026245 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[5].display_name | Pattern recognition (psychology) |
| concepts[6].id | https://openalex.org/C15744967 |
| concepts[6].level | 0 |
| concepts[6].score | 0.17637217044830322 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[6].display_name | Psychology |
| concepts[7].id | https://openalex.org/C33923547 |
| concepts[7].level | 0 |
| concepts[7].score | 0.1319865882396698 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q395 |
| concepts[7].display_name | Mathematics |
| concepts[8].id | https://openalex.org/C2524010 |
| concepts[8].level | 1 |
| concepts[8].score | 0.07030528783798218 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q8087 |
| concepts[8].display_name | Geometry |
| concepts[9].id | https://openalex.org/C19417346 |
| concepts[9].level | 1 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q7922 |
| concepts[9].display_name | Pedagogy |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.6354058384895325 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/computer-vision |
| keywords[1].score | 0.6280077695846558 |
| keywords[1].display_name | Computer vision |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.580984354019165 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/flow |
| keywords[3].score | 0.5399577021598816 |
| keywords[3].display_name | Flow (mathematics) |
| keywords[4].id | https://openalex.org/keywords/tracking |
| keywords[4].score | 0.5323818922042847 |
| keywords[4].display_name | Tracking (education) |
| keywords[5].id | https://openalex.org/keywords/pattern-recognition |
| keywords[5].score | 0.3546847105026245 |
| keywords[5].display_name | Pattern recognition (psychology) |
| keywords[6].id | https://openalex.org/keywords/psychology |
| keywords[6].score | 0.17637217044830322 |
| keywords[6].display_name | Psychology |
| keywords[7].id | https://openalex.org/keywords/mathematics |
| keywords[7].score | 0.1319865882396698 |
| keywords[7].display_name | Mathematics |
| keywords[8].id | https://openalex.org/keywords/geometry |
| keywords[8].score | 0.07030528783798218 |
| keywords[8].display_name | Geometry |
| language | en |
| locations[0].id | doi:10.1109/icpr48806.2021.9412512 |
| locations[0].is_oa | False |
| locations[0].source | |
| locations[0].license | |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | 2020 25th International Conference on Pattern Recognition (ICPR) |
| locations[0].landing_page_url | https://doi.org/10.1109/icpr48806.2021.9412512 |
| locations[1].id | pmh:oai:ira.lib.polyu.edu.hk:10397/107109 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400205 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | PolyU Institutional Research Archive (Hong Kong Polytechnic University) |
| locations[1].source.host_organization | https://openalex.org/I14243506 |
| locations[1].source.host_organization_name | Hong Kong Polytechnic University |
| locations[1].source.host_organization_lineage | https://openalex.org/I14243506 |
| locations[1].license | |
| locations[1].pdf_url | http://ira.lib.polyu.edu.hk/bitstream/10397/107109/1/Liu_Flow-Guided_Spatial_Attention.pdf |
| locations[1].version | submittedVersion |
| locations[1].raw_type | Conference Paper |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | http://hdl.handle.net/10397/107109 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5011105024 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-3831-8893 |
| authorships[0].author.display_name | Tianshan Liu |
| authorships[0].countries | HK |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I14243506 |
| authorships[0].affiliations[0].raw_affiliation_string | The Hong Kong Polytechnic University, Hong Kong |
| authorships[0].institutions[0].id | https://openalex.org/I14243506 |
| authorships[0].institutions[0].ror | https://ror.org/0030zas98 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I14243506 |
| authorships[0].institutions[0].country_code | HK |
| authorships[0].institutions[0].display_name | Hong Kong Polytechnic University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Tianshan Liu |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | The Hong Kong Polytechnic University, Hong Kong |
| authorships[1].author.id | https://openalex.org/A5019678322 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-0422-8454 |
| authorships[1].author.display_name | Kin‐Man Lam |
| authorships[1].countries | HK |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I14243506 |
| authorships[1].affiliations[0].raw_affiliation_string | The Hong Kong Polytechnic University, Hong Kong |
| authorships[1].institutions[0].id | https://openalex.org/I14243506 |
| authorships[1].institutions[0].ror | https://ror.org/0030zas98 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I14243506 |
| authorships[1].institutions[0].country_code | HK |
| authorships[1].institutions[0].display_name | Hong Kong Polytechnic University |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Kin-Man Lam |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | The Hong Kong Polytechnic University, Hong Kong |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | http://ira.lib.polyu.edu.hk/bitstream/10397/107109/1/Liu_Flow-Guided_Spatial_Attention.pdf |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Flow-guided Spatial Attention Tracking for Egocentric Activity Recognition |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10812 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9998000264167786 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Human Pose and Action Recognition |
| related_works | https://openalex.org/W2755342338, https://openalex.org/W2058170566, https://openalex.org/W2036807459, https://openalex.org/W2772917594, https://openalex.org/W2775347418, https://openalex.org/W1969923398, https://openalex.org/W2166024367, https://openalex.org/W3116076068, https://openalex.org/W2229312674, https://openalex.org/W2079911747 |
| cited_by_count | 3 |
| counts_by_year[0].year | 2024 |
| counts_by_year[0].cited_by_count | 1 |
| counts_by_year[1].year | 2022 |
| counts_by_year[1].cited_by_count | 1 |
| counts_by_year[2].year | 2021 |
| counts_by_year[2].cited_by_count | 1 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:ira.lib.polyu.edu.hk:10397/107109 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400205 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | PolyU Institutional Research Archive (Hong Kong Polytechnic University) |
| best_oa_location.source.host_organization | https://openalex.org/I14243506 |
| best_oa_location.source.host_organization_name | Hong Kong Polytechnic University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I14243506 |
| best_oa_location.license | |
| best_oa_location.pdf_url | http://ira.lib.polyu.edu.hk/bitstream/10397/107109/1/Liu_Flow-Guided_Spatial_Attention.pdf |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | Conference Paper |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://hdl.handle.net/10397/107109 |
| primary_location.id | doi:10.1109/icpr48806.2021.9412512 |
| primary_location.is_oa | False |
| primary_location.source | |
| primary_location.license | |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | 2020 25th International Conference on Pattern Recognition (ICPR) |
| primary_location.landing_page_url | https://doi.org/10.1109/icpr48806.2021.9412512 |
| publication_date | 2021-01-10 |
| publication_year | 2021 |
| referenced_works | https://openalex.org/W2745461083, https://openalex.org/W6753924131, https://openalex.org/W2963082988, https://openalex.org/W2746726611, https://openalex.org/W2964198573, https://openalex.org/W2135658380, https://openalex.org/W6688426012, https://openalex.org/W2750400961, https://openalex.org/W2781400102, https://openalex.org/W2964083648, https://openalex.org/W2902905147, https://openalex.org/W2902557759, https://openalex.org/W2895299763, https://openalex.org/W967969822, https://openalex.org/W2990495699, https://openalex.org/W2964222622, https://openalex.org/W6750683594, https://openalex.org/W2963834878, https://openalex.org/W2980096197, https://openalex.org/W2897762477, https://openalex.org/W2997004687, https://openalex.org/W2971710819, https://openalex.org/W1947050545, https://openalex.org/W2295107390, https://openalex.org/W6682864246, https://openalex.org/W2963315828, https://openalex.org/W2212494831, https://openalex.org/W2799067027, https://openalex.org/W2885914489, https://openalex.org/W2156303437, https://openalex.org/W3105000568, https://openalex.org/W4289740432, https://openalex.org/W2800845854 |
| referenced_works_count | 33 |
| abstract_inverted_index | Word → position map encoding the abstract quoted above (see the reconstruction sketch after this table) |
| cited_by_percentile_year.max | 94 |
| cited_by_percentile_year.min | 89 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 2 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[0].score | 0.7300000190734863 |
| sustainable_development_goals[0].display_name | Reduced inequalities |
| citation_normalized_percentile.value | 0.55599489 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
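The `abstract_inverted_index` field summarized in the table above stores the abstract as an inverted index, mapping each word to the list of positions where it occurs. A small sketch of how such an index can be flattened back into plain text; `rebuild_abstract` is a hypothetical helper name.

```python
# Reconstruct a plain-text abstract from an OpenAlex-style inverted index.
def rebuild_abstract(inverted_index: dict[str, list[int]]) -> str:
    positions: dict[int, str] = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # Join words in position order.
    return " ".join(positions[i] for i in sorted(positions))

# Example with a tiny index:
idx = {"The": [0], "popularity": [1], "of": [2], "wearable": [3], "cameras": [4]}
print(rebuild_abstract(idx))  # "The popularity of wearable cameras"
```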