A dual-branch deep learning model based on fNIRS for assessing 3D visual fatigue
2025 · Open Access
· DOI: https://doi.org/10.3389/fnins.2025.1589152
Introduction: Extended viewing of 3D content can induce fatigue symptoms, so fatigue assessment is crucial for enhancing the user experience and optimizing the performance of stereoscopic 3D technology. Functional near-infrared spectroscopy (fNIRS) has emerged as a promising tool for evaluating 3D visual fatigue by capturing hemodynamic responses in the cerebral cortex. However, traditional fNIRS-based methods rely on manual feature extraction and analysis, limiting their effectiveness. To address these limitations, a deep learning model based on fNIRS was constructed for the first time to evaluate 3D visual fatigue, enabling end-to-end automated feature extraction and classification.

Methods: Twenty normal subjects participated in this study (mean age: 24.6 ± 0.88 years; range: 23–26 years; 13 males). This paper proposed an fNIRS-based experimental paradigm that acquired data under both comfort and fatigue conditions. Given the time-series nature of fNIRS data and the variability of fatigue responses across brain regions, a dual-branch convolutional network was constructed to extract temporal and spatial features separately. A transformer was integrated into the convolutional network to enhance long-range feature extraction. Furthermore, to adaptively select fNIRS hemodynamic features, a channel attention mechanism was integrated to provide a weighted representation of multiple features.

Results: The constructed model achieved an average accuracy of 93.12% within subjects and 84.65% across subjects, outperforming traditional machine learning models and existing deep learning models.

Discussion: This study constructed a novel deep learning framework for the automatic evaluation of 3D visual fatigue from fNIRS data. The proposed model addresses the limitations of traditional methods by enabling end-to-end automated feature extraction and classification, eliminating the need for manual intervention. The transformer module and the channel attention mechanism significantly enhanced the model's ability to capture long-range dependencies and to adaptively weight hemodynamic features, respectively. The high classification accuracy achieved both within and across subjects highlights the model's effectiveness and generalizability. This framework advances fNIRS-based fatigue assessment and provides a valuable tool for improving user experience in stereoscopic 3D applications. Future work could explore the model's applicability to other types of fatigue assessment and further optimize its performance for real-world scenarios.
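The architecture named in the abstract (a temporal and a spatial convolutional branch, a transformer for long-range dependencies, and channel attention for feature weighting) can be pictured with a minimal PyTorch sketch. All dimensions, kernel sizes, and layer counts below are illustrative assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: learns a weight per feature."""
    def __init__(self, n_features, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_features, n_features // reduction), nn.ReLU(),
            nn.Linear(n_features // reduction, n_features), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, features)
        return x * self.fc(x)

class DualBranchFatigueNet(nn.Module):
    """Hypothetical dual-branch net: one branch convolves along time,
    the other across fNIRS channels; a transformer encoder models
    long-range temporal structure before binary classification."""
    def __init__(self, n_channels=20, n_times=200, d_model=32):
        super().__init__()
        # Temporal branch: 1-D convolution over the time axis.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.BatchNorm1d(d_model), nn.ReLU(),
        )
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                       batch_first=True), num_layers=2)
        # Spatial branch: convolution across the sensor-channel axis.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_times, d_model, kernel_size=3, padding=1),
            nn.BatchNorm1d(d_model), nn.ReLU(),
        )
        self.attn = ChannelAttention(2 * d_model)
        self.head = nn.Linear(2 * d_model, 2)    # comfort vs. fatigue

    def forward(self, x):                         # x: (batch, channels, time)
        t = self.temporal(x)                       # (batch, d_model, time)
        t = self.transformer(t.transpose(1, 2)).mean(dim=1)
        s = self.spatial(x.transpose(1, 2)).mean(dim=-1)
        feats = self.attn(torch.cat([t, s], dim=1))
        return self.head(feats)

model = DualBranchFatigueNet()
logits = model(torch.randn(8, 20, 200))           # 8 trials -> (8, 2) logits
```

The two pooled branch outputs are concatenated and gated by the attention module, mirroring the abstract's "weighted representation of multiple features" before the final classifier.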
- Type: article
- Language: en
- Landing Page: https://doi.org/10.3389/fnins.2025.1589152
- PDF: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf
- OA Status: gold
- References: 43
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4411075551
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4411075551 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.3389/fnins.2025.1589152 (Digital Object Identifier)
- Title: A dual-branch deep learning model based on fNIRS for assessing 3D visual fatigue (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-06-05 (full publication date if available)
- Authors: Yan Wu, Tongzhou Mu, S. Q. Qu, Xiujun Li, Qi Li (list of authors in order)
- Landing page: https://doi.org/10.3389/fnins.2025.1589152 (publisher landing page)
- PDF URL: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf (direct OA link when available)
- Concepts: Computer science, Artificial intelligence, Deep learning, Convolutional neural network, Feature learning, Feature extraction, Functional near-infrared spectroscopy, Pattern recognition (psychology), Machine learning, Cognition, Psychology, Prefrontal cortex, Neuroscience (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- References (count): 43 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
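The record above comes straight from the OpenAlex works endpoint and can be fetched programmatically. A minimal Python sketch (the endpoint and work ID are real; the printed fields match keys shown in the payload below):

```python
import json
import urllib.request

# Fetch this work's full record from the OpenAlex API.
url = "https://api.openalex.org/works/W4411075551"
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)

print(work["display_name"])                 # article title
print(work["doi"])                          # https://doi.org/10.3389/fnins.2025.1589152
print(work["open_access"]["oa_status"])     # "gold"
```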
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4411075551 |
| doi | https://doi.org/10.3389/fnins.2025.1589152 |
| ids.doi | https://doi.org/10.3389/fnins.2025.1589152 |
| ids.pmid | https://pubmed.ncbi.nlm.nih.gov/40538859 |
| ids.openalex | https://openalex.org/W4411075551 |
| fwci | 0.0 |
| type | article |
| title | A dual-branch deep learning model based on fNIRS for assessing 3D visual fatigue |
| biblio.issue | |
| biblio.volume | 19 |
| biblio.last_page | 1589152 |
| biblio.first_page | 1589152 |
| topics[0].id | https://openalex.org/T10977 |
| topics[0].field.id | https://openalex.org/fields/27 |
| topics[0].field.display_name | Medicine |
| topics[0].score | 0.9990000128746033 |
| topics[0].domain.id | https://openalex.org/domains/4 |
| topics[0].domain.display_name | Health Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2741 |
| topics[0].subfield.display_name | Radiology, Nuclear Medicine and Imaging |
| topics[0].display_name | Optical Imaging and Spectroscopy Techniques |
| topics[1].id | https://openalex.org/T11196 |
| topics[1].field.id | https://openalex.org/fields/22 |
| topics[1].field.display_name | Engineering |
| topics[1].score | 0.9922000169754028 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/2204 |
| topics[1].subfield.display_name | Biomedical Engineering |
| topics[1].display_name | Non-Invasive Vital Sign Monitoring |
| topics[2].id | https://openalex.org/T12006 |
| topics[2].field.id | https://openalex.org/fields/32 |
| topics[2].field.display_name | Psychology |
| topics[2].score | 0.9750000238418579 |
| topics[2].domain.id | https://openalex.org/domains/2 |
| topics[2].domain.display_name | Social Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/3207 |
| topics[2].subfield.display_name | Social Psychology |
| topics[2].display_name | Ergonomics and Musculoskeletal Disorders |
| is_xpac | False |
| apc_list.value | 2950 |
| apc_list.currency | USD |
| apc_list.value_usd | 2950 |
| apc_paid.value | 2950 |
| apc_paid.currency | USD |
| apc_paid.value_usd | 2950 |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.682617723941803 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C154945302 |
| concepts[1].level | 1 |
| concepts[1].score | 0.6434025764465332 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[1].display_name | Artificial intelligence |
| concepts[2].id | https://openalex.org/C108583219 |
| concepts[2].level | 2 |
| concepts[2].score | 0.6207792162895203 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q197536 |
| concepts[2].display_name | Deep learning |
| concepts[3].id | https://openalex.org/C81363708 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5452187657356262 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q17084460 |
| concepts[3].display_name | Convolutional neural network |
| concepts[4].id | https://openalex.org/C59404180 |
| concepts[4].level | 2 |
| concepts[4].score | 0.5123130083084106 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q17013334 |
| concepts[4].display_name | Feature learning |
| concepts[5].id | https://openalex.org/C52622490 |
| concepts[5].level | 2 |
| concepts[5].score | 0.5114787817001343 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q1026626 |
| concepts[5].display_name | Feature extraction |
| concepts[6].id | https://openalex.org/C130796691 |
| concepts[6].level | 4 |
| concepts[6].score | 0.4628761112689972 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q750537 |
| concepts[6].display_name | Functional near-infrared spectroscopy |
| concepts[7].id | https://openalex.org/C153180895 |
| concepts[7].level | 2 |
| concepts[7].score | 0.399346262216568 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[7].display_name | Pattern recognition (psychology) |
| concepts[8].id | https://openalex.org/C119857082 |
| concepts[8].level | 1 |
| concepts[8].score | 0.34718430042266846 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q2539 |
| concepts[8].display_name | Machine learning |
| concepts[9].id | https://openalex.org/C169900460 |
| concepts[9].level | 2 |
| concepts[9].score | 0.23482158780097961 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q2200417 |
| concepts[9].display_name | Cognition |
| concepts[10].id | https://openalex.org/C15744967 |
| concepts[10].level | 0 |
| concepts[10].score | 0.09814783930778503 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[10].display_name | Psychology |
| concepts[11].id | https://openalex.org/C2781195155 |
| concepts[11].level | 3 |
| concepts[11].score | 0.0 |
| concepts[11].wikidata | https://www.wikidata.org/wiki/Q18680 |
| concepts[11].display_name | Prefrontal cortex |
| concepts[12].id | https://openalex.org/C169760540 |
| concepts[12].level | 1 |
| concepts[12].score | 0.0 |
| concepts[12].wikidata | https://www.wikidata.org/wiki/Q207011 |
| concepts[12].display_name | Neuroscience |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.682617723941803 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[1].score | 0.6434025764465332 |
| keywords[1].display_name | Artificial intelligence |
| keywords[2].id | https://openalex.org/keywords/deep-learning |
| keywords[2].score | 0.6207792162895203 |
| keywords[2].display_name | Deep learning |
| keywords[3].id | https://openalex.org/keywords/convolutional-neural-network |
| keywords[3].score | 0.5452187657356262 |
| keywords[3].display_name | Convolutional neural network |
| keywords[4].id | https://openalex.org/keywords/feature-learning |
| keywords[4].score | 0.5123130083084106 |
| keywords[4].display_name | Feature learning |
| keywords[5].id | https://openalex.org/keywords/feature-extraction |
| keywords[5].score | 0.5114787817001343 |
| keywords[5].display_name | Feature extraction |
| keywords[6].id | https://openalex.org/keywords/functional-near-infrared-spectroscopy |
| keywords[6].score | 0.4628761112689972 |
| keywords[6].display_name | Functional near-infrared spectroscopy |
| keywords[7].id | https://openalex.org/keywords/pattern-recognition |
| keywords[7].score | 0.399346262216568 |
| keywords[7].display_name | Pattern recognition (psychology) |
| keywords[8].id | https://openalex.org/keywords/machine-learning |
| keywords[8].score | 0.34718430042266846 |
| keywords[8].display_name | Machine learning |
| keywords[9].id | https://openalex.org/keywords/cognition |
| keywords[9].score | 0.23482158780097961 |
| keywords[9].display_name | Cognition |
| keywords[10].id | https://openalex.org/keywords/psychology |
| keywords[10].score | 0.09814783930778503 |
| keywords[10].display_name | Psychology |
| language | en |
| locations[0].id | doi:10.3389/fnins.2025.1589152 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S115201632 |
| locations[0].source.issn | 1662-453X, 1662-4548 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 1662-453X |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | Frontiers in Neuroscience |
| locations[0].source.host_organization | https://openalex.org/P4310320527 |
| locations[0].source.host_organization_name | Frontiers Media |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310320527 |
| locations[0].source.host_organization_lineage_names | Frontiers Media |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Frontiers in Neuroscience |
| locations[0].landing_page_url | https://doi.org/10.3389/fnins.2025.1589152 |
| locations[1].id | pmid:40538859 |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306525036 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | PubMed |
| locations[1].source.host_organization | https://openalex.org/I1299303238 |
| locations[1].source.host_organization_name | National Institutes of Health |
| locations[1].source.host_organization_lineage | https://openalex.org/I1299303238 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | publishedVersion |
| locations[1].raw_type | |
| locations[1].license_id | |
| locations[1].is_accepted | True |
| locations[1].is_published | True |
| locations[1].raw_source_name | Frontiers in neuroscience |
| locations[1].landing_page_url | https://pubmed.ncbi.nlm.nih.gov/40538859 |
| locations[2].id | pmh:oai:doaj.org/article:47ccf18eafa74f7c9cf7394b0dfdf4fb |
| locations[2].is_oa | False |
| locations[2].source.id | https://openalex.org/S4306401280 |
| locations[2].source.issn | |
| locations[2].source.type | repository |
| locations[2].source.is_oa | False |
| locations[2].source.issn_l | |
| locations[2].source.is_core | False |
| locations[2].source.is_in_doaj | False |
| locations[2].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[2].source.host_organization | |
| locations[2].source.host_organization_name | |
| locations[2].license | |
| locations[2].pdf_url | |
| locations[2].version | submittedVersion |
| locations[2].raw_type | article |
| locations[2].license_id | |
| locations[2].is_accepted | False |
| locations[2].is_published | False |
| locations[2].raw_source_name | Frontiers in Neuroscience, Vol 19 (2025) |
| locations[2].landing_page_url | https://doaj.org/article/47ccf18eafa74f7c9cf7394b0dfdf4fb |
| indexed_in | crossref, doaj, pubmed |
| authorships[0].author.id | https://openalex.org/A5062224634 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-3589-4011 |
| authorships[0].author.display_name | Yan Wu |
| authorships[0].countries | CN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I106645853 |
| authorships[0].affiliations[0].raw_affiliation_string | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[0].affiliations[1].institution_ids | https://openalex.org/I106645853 |
| authorships[0].affiliations[1].raw_affiliation_string | Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| authorships[0].affiliations[2].raw_affiliation_string | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China. |
| authorships[0].institutions[0].id | https://openalex.org/I106645853 |
| authorships[0].institutions[0].ror | https://ror.org/007mntk44 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I106645853 |
| authorships[0].institutions[0].country_code | CN |
| authorships[0].institutions[0].display_name | Changchun University of Science and Technology |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Yan Wu |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China., School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China., Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| authorships[1].author.id | https://openalex.org/A5091792127 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-4384-2526 |
| authorships[1].author.display_name | Tongzhou Mu |
| authorships[1].countries | CN |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I106645853 |
| authorships[1].affiliations[0].raw_affiliation_string | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[1].institutions[0].id | https://openalex.org/I106645853 |
| authorships[1].institutions[0].ror | https://ror.org/007mntk44 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I106645853 |
| authorships[1].institutions[0].country_code | CN |
| authorships[1].institutions[0].display_name | Changchun University of Science and Technology |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | TianQi Mu |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[2].author.id | https://openalex.org/A5107902043 |
| authorships[2].author.orcid | https://orcid.org/0009-0007-3606-3432 |
| authorships[2].author.display_name | S. Q. Qu |
| authorships[2].countries | CN |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I106645853 |
| authorships[2].affiliations[0].raw_affiliation_string | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[2].institutions[0].id | https://openalex.org/I106645853 |
| authorships[2].institutions[0].ror | https://ror.org/007mntk44 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I106645853 |
| authorships[2].institutions[0].country_code | CN |
| authorships[2].institutions[0].display_name | Changchun University of Science and Technology |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | SongNan Qu |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[3].author.id | https://openalex.org/A5021140826 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-7771-2725 |
| authorships[3].author.display_name | Xiujun Li |
| authorships[3].countries | CN |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I106645853 |
| authorships[3].affiliations[0].raw_affiliation_string | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[3].affiliations[1].institution_ids | https://openalex.org/I106645853 |
| authorships[3].affiliations[1].raw_affiliation_string | Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| authorships[3].affiliations[2].raw_affiliation_string | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China. |
| authorships[3].institutions[0].id | https://openalex.org/I106645853 |
| authorships[3].institutions[0].ror | https://ror.org/007mntk44 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I106645853 |
| authorships[3].institutions[0].country_code | CN |
| authorships[3].institutions[0].display_name | Changchun University of Science and Technology |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | XiuJun Li |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China., School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China., Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| authorships[4].author.id | https://openalex.org/A5100350202 |
| authorships[4].author.orcid | https://orcid.org/0000-0002-2716-449X |
| authorships[4].author.display_name | Qi Li |
| authorships[4].countries | CN |
| authorships[4].affiliations[0].institution_ids | https://openalex.org/I106645853 |
| authorships[4].affiliations[0].raw_affiliation_string | School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. |
| authorships[4].affiliations[1].raw_affiliation_string | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China. |
| authorships[4].affiliations[2].institution_ids | https://openalex.org/I106645853 |
| authorships[4].affiliations[2].raw_affiliation_string | Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| authorships[4].institutions[0].id | https://openalex.org/I106645853 |
| authorships[4].institutions[0].ror | https://ror.org/007mntk44 |
| authorships[4].institutions[0].type | education |
| authorships[4].institutions[0].lineage | https://openalex.org/I106645853 |
| authorships[4].institutions[0].country_code | CN |
| authorships[4].institutions[0].display_name | Changchun University of Science and Technology |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Qi Li |
| authorships[4].is_corresponding | False |
| authorships[4].raw_affiliation_strings | Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, China., School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China., Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China. |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | A dual-branch deep learning model based on fNIRS for assessing 3D visual fatigue |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10977 |
| primary_topic.field.id | https://openalex.org/fields/27 |
| primary_topic.field.display_name | Medicine |
| primary_topic.score | 0.9990000128746033 |
| primary_topic.domain.id | https://openalex.org/domains/4 |
| primary_topic.domain.display_name | Health Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2741 |
| primary_topic.subfield.display_name | Radiology, Nuclear Medicine and Imaging |
| primary_topic.display_name | Optical Imaging and Spectroscopy Techniques |
| related_works | https://openalex.org/W2906058118, https://openalex.org/W4376595809, https://openalex.org/W1941903492, https://openalex.org/W4226493464, https://openalex.org/W3133861977, https://openalex.org/W2951211570, https://openalex.org/W3103566983, https://openalex.org/W3048601286, https://openalex.org/W2965925734, https://openalex.org/W4309346246 |
| cited_by_count | 0 |
| locations_count | 3 |
| best_oa_location.id | doi:10.3389/fnins.2025.1589152 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S115201632 |
| best_oa_location.source.issn | 1662-453X, 1662-4548 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 1662-453X |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | Frontiers in Neuroscience |
| best_oa_location.source.host_organization | https://openalex.org/P4310320527 |
| best_oa_location.source.host_organization_name | Frontiers Media |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310320527 |
| best_oa_location.source.host_organization_lineage_names | Frontiers Media |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Frontiers in Neuroscience |
| best_oa_location.landing_page_url | https://doi.org/10.3389/fnins.2025.1589152 |
| primary_location.id | doi:10.3389/fnins.2025.1589152 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S115201632 |
| primary_location.source.issn | 1662-453X, 1662-4548 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 1662-453X |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | Frontiers in Neuroscience |
| primary_location.source.host_organization | https://openalex.org/P4310320527 |
| primary_location.source.host_organization_name | Frontiers Media |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310320527 |
| primary_location.source.host_organization_lineage_names | Frontiers Media |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1589152/pdf |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Frontiers in Neuroscience |
| primary_location.landing_page_url | https://doi.org/10.3389/fnins.2025.1589152 |
| publication_date | 2025-06-05 |
| publication_year | 2025 |
| referenced_works | https://openalex.org/W4390587513, https://openalex.org/W2588381839, https://openalex.org/W3007031861, https://openalex.org/W4368341067, https://openalex.org/W1991550546, https://openalex.org/W2167792419, https://openalex.org/W4283653444, https://openalex.org/W3210833735, https://openalex.org/W3195804298, https://openalex.org/W2053356496, https://openalex.org/W4220782972, https://openalex.org/W3171089235, https://openalex.org/W2016005856, https://openalex.org/W2559463885, https://openalex.org/W4392152156, https://openalex.org/W3133567829, https://openalex.org/W4385976110, https://openalex.org/W3137666915, https://openalex.org/W4296793708, https://openalex.org/W2883173902, https://openalex.org/W4321499692, https://openalex.org/W2147284098, https://openalex.org/W4394994581, https://openalex.org/W4308480340, https://openalex.org/W3119011051, https://openalex.org/W4311187275, https://openalex.org/W4385064726, https://openalex.org/W4380537203, https://openalex.org/W2275872316, https://openalex.org/W4391524770, https://openalex.org/W4404141584, https://openalex.org/W4386924096, https://openalex.org/W4210339999, https://openalex.org/W3206795297, https://openalex.org/W4385984038, https://openalex.org/W2940740645, https://openalex.org/W4390547474, https://openalex.org/W4401878806, https://openalex.org/W4290994856, https://openalex.org/W3102455230, https://openalex.org/W4286001525, https://openalex.org/W3196362300, https://openalex.org/W4220836861 |
| referenced_works_count | 43 |
| abstract_inverted_index | (word-to-positions index of the abstract; the readable abstract appears above) |
| cited_by_percentile_year | |
| countries_distinct_count | 1 |
| institutions_distinct_count | 5 |
| citation_normalized_percentile.value | 0.2981555 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
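OpenAlex distributes abstracts only as the inverted index summarized in the payload (a map from each word to its positions in the text). The standard reconstruction is a few lines of Python; this sketch reuses the hypothetical `work` variable from the API example above:

```python
def rebuild_abstract(inverted_index: dict) -> str:
    """Invert OpenAlex's word -> positions map back into running text."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # Sort by position and rejoin with spaces.
    return " ".join(positions[i] for i in sorted(positions))

# Usage with the record fetched earlier:
# abstract = rebuild_abstract(work["abstract_inverted_index"])
```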