Development of a Deep Learning Model for Predicting Speech Audiometry Using Pure-Tone Audiometry Data
2024 · Open Access · DOI: https://doi.org/10.3390/app14209379
Speech audiometry is a vital tool for assessing an individual’s ability to perceive and comprehend speech, but it traditionally requires specialized testing that can be time-consuming and resource-intensive. This paper presents a novel use of deep learning to predict speech audiometry outcomes from pure-tone audiometry (PTA) data. By utilizing PTA data, which measure hearing sensitivity at specific frequencies, we aim to develop a model that can bypass the need for direct speech testing. This study investigates two neural network architectures: a multi-layer perceptron (MLP) and a one-dimensional convolutional neural network (1D-CNN). These models are trained to predict key speech audiometry outcomes, including speech recognition thresholds and speech discrimination scores. To evaluate the effectiveness of these models, we employed two key performance metrics: the coefficient of determination (R2) and mean absolute error (MAE). The MLP model demonstrated solid predictive power with an R2 score of 88.79% and an average MAE of 7.26, while the 1D-CNN model achieved slightly higher accuracy with an R2 score of 88.35% and an MAE of 6.90. The lower MAE of the 1D-CNN model suggests that it captures relevant features from PTA data more effectively than the MLP. These results show that both models hold promise for predicting speech audiometry, potentially simplifying the audiological evaluation process. This approach could be applied in clinical settings for hearing loss assessment, the selection of hearing aids, and the development of personalized auditory rehabilitation programs.
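For reference, the two evaluation metrics named in the abstract are standard regression measures. A minimal sketch in Python with NumPy, using invented toy threshold values purely for illustration (the paper's actual data are not reproduced here):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction errors.
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - residual variance / total variance.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical speech recognition thresholds (dB) vs. model predictions.
y_true = np.array([25.0, 40.0, 55.0, 70.0, 30.0])
y_pred = np.array([28.0, 38.0, 60.0, 66.0, 33.0])

print(round(mae(y_true, y_pred), 2))  # → 3.4
print(round(r2(y_true, y_pred), 3))   # → 0.954
```

An R2 reported as a percentage, as in the abstract, is simply this quantity multiplied by 100.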
- Type: article
- Language: en
- Landing Page: https://doi.org/10.3390/app14209379
- OA Status: gold
- Cited By: 2
- References: 33
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4403492151
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4403492151 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.3390/app14209379 (Digital Object Identifier)
- Title: Development of a Deep Learning Model for Predicting Speech Audiometry Using Pure-Tone Audiometry Data
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024
- Publication date: 2024-10-15
- Authors: Jaeyoung Shin, Jun Ma, Seong Jun Choi, Sungyeup Kim, Min Hong (in order)
- Landing page: https://doi.org/10.3390/app14209379 (publisher landing page)
- Open access: Yes (a free full text is available)
- OA status: gold (per OpenAlex)
- OA URL: https://doi.org/10.3390/app14209379 (direct OA link)
- Concepts: Speech recognition, Computer science, Audiometry, Convolutional neural network, Artificial neural network, Pure tone audiometry, Artificial intelligence, Audiology, Hearing loss, Medicine (top concepts attached by OpenAlex)
- Cited by: 2 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 2 (per-year counts, last 5 years)
- References (count): 33 (works referenced by this work)
- Related works (count): 10 (works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4403492151 |
| doi | https://doi.org/10.3390/app14209379 |
| ids.doi | https://doi.org/10.3390/app14209379 |
| ids.openalex | https://openalex.org/W4403492151 |
| fwci | 1.40560113 |
| type | article |
| title | Development of a Deep Learning Model for Predicting Speech Audiometry Using Pure-Tone Audiometry Data |
| biblio.issue | 20 |
| biblio.volume | 14 |
| biblio.last_page | 9379 |
| biblio.first_page | 9379 |
| topics[0].id | https://openalex.org/T10283 |
| topics[0].field.id | https://openalex.org/fields/28 |
| topics[0].field.display_name | Neuroscience |
| topics[0].score | 0.9997000098228455 |
| topics[0].domain.id | https://openalex.org/domains/1 |
| topics[0].domain.display_name | Life Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/2805 |
| topics[0].subfield.display_name | Cognitive Neuroscience |
| topics[0].display_name | Hearing Loss and Rehabilitation |
| topics[1].id | https://openalex.org/T10860 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9975000023841858 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1711 |
| topics[1].subfield.display_name | Signal Processing |
| topics[1].display_name | Speech and Audio Processing |
| topics[2].id | https://openalex.org/T11692 |
| topics[2].field.id | https://openalex.org/fields/36 |
| topics[2].field.display_name | Health Professions |
| topics[2].score | 0.9904999732971191 |
| topics[2].domain.id | https://openalex.org/domains/4 |
| topics[2].domain.display_name | Health Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/3616 |
| topics[2].subfield.display_name | Speech and Hearing |
| topics[2].display_name | Noise Effects and Management |
| is_xpac | False |
| apc_list.value | 2300 |
| apc_list.currency | CHF |
| apc_list.value_usd | 2490 |
| apc_paid.value | 2300 |
| apc_paid.currency | CHF |
| apc_paid.value_usd | 2490 |
| concepts[0].id | https://openalex.org/C28490314 |
| concepts[0].level | 1 |
| concepts[0].score | 0.6151584982872009 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[0].display_name | Speech recognition |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.5821073651313782 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C2780554537 |
| concepts[2].level | 3 |
| concepts[2].score | 0.5758484601974487 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q758881 |
| concepts[2].display_name | Audiometry |
| concepts[3].id | https://openalex.org/C81363708 |
| concepts[3].level | 2 |
| concepts[3].score | 0.5723841190338135 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q17084460 |
| concepts[3].display_name | Convolutional neural network |
| concepts[4].id | https://openalex.org/C50644808 |
| concepts[4].level | 2 |
| concepts[4].score | 0.47136321663856506 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q192776 |
| concepts[4].display_name | Artificial neural network |
| concepts[5].id | https://openalex.org/C2781383708 |
| concepts[5].level | 4 |
| concepts[5].score | 0.4545149505138397 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q7261165 |
| concepts[5].display_name | Pure tone audiometry |
| concepts[6].id | https://openalex.org/C154945302 |
| concepts[6].level | 1 |
| concepts[6].score | 0.3741951286792755 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[6].display_name | Artificial intelligence |
| concepts[7].id | https://openalex.org/C548259974 |
| concepts[7].level | 1 |
| concepts[7].score | 0.3401775360107422 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q569965 |
| concepts[7].display_name | Audiology |
| concepts[8].id | https://openalex.org/C2780493683 |
| concepts[8].level | 2 |
| concepts[8].score | 0.284524142742157 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q16035842 |
| concepts[8].display_name | Hearing loss |
| concepts[9].id | https://openalex.org/C71924100 |
| concepts[9].level | 0 |
| concepts[9].score | 0.19031205773353577 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q11190 |
| concepts[9].display_name | Medicine |
| keywords[0].id | https://openalex.org/keywords/speech-recognition |
| keywords[0].score | 0.6151584982872009 |
| keywords[0].display_name | Speech recognition |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.5821073651313782 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/audiometry |
| keywords[2].score | 0.5758484601974487 |
| keywords[2].display_name | Audiometry |
| keywords[3].id | https://openalex.org/keywords/convolutional-neural-network |
| keywords[3].score | 0.5723841190338135 |
| keywords[3].display_name | Convolutional neural network |
| keywords[4].id | https://openalex.org/keywords/artificial-neural-network |
| keywords[4].score | 0.47136321663856506 |
| keywords[4].display_name | Artificial neural network |
| keywords[5].id | https://openalex.org/keywords/pure-tone-audiometry |
| keywords[5].score | 0.4545149505138397 |
| keywords[5].display_name | Pure tone audiometry |
| keywords[6].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[6].score | 0.3741951286792755 |
| keywords[6].display_name | Artificial intelligence |
| keywords[7].id | https://openalex.org/keywords/audiology |
| keywords[7].score | 0.3401775360107422 |
| keywords[7].display_name | Audiology |
| keywords[8].id | https://openalex.org/keywords/hearing-loss |
| keywords[8].score | 0.284524142742157 |
| keywords[8].display_name | Hearing loss |
| keywords[9].id | https://openalex.org/keywords/medicine |
| keywords[9].score | 0.19031205773353577 |
| keywords[9].display_name | Medicine |
| language | en |
| locations[0].id | doi:10.3390/app14209379 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4210205812 |
| locations[0].source.issn | 2076-3417 |
| locations[0].source.type | journal |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | 2076-3417 |
| locations[0].source.is_core | True |
| locations[0].source.is_in_doaj | True |
| locations[0].source.display_name | Applied Sciences |
| locations[0].source.host_organization | https://openalex.org/P4310310987 |
| locations[0].source.host_organization_name | Multidisciplinary Digital Publishing Institute |
| locations[0].source.host_organization_lineage | https://openalex.org/P4310310987 |
| locations[0].source.host_organization_lineage_names | Multidisciplinary Digital Publishing Institute |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | journal-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Applied Sciences |
| locations[0].landing_page_url | https://doi.org/10.3390/app14209379 |
| locations[1].id | pmh:oai:doaj.org/article:fb7aaa60c4ed44a292d5a8fb21638fe5 |
| locations[1].is_oa | False |
| locations[1].source.id | https://openalex.org/S4306401280 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | DOAJ (DOAJ: Directory of Open Access Journals) |
| locations[1].source.host_organization | |
| locations[1].source.host_organization_name | |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | Applied Sciences, Vol 14, Iss 20, p 9379 (2024) |
| locations[1].landing_page_url | https://doaj.org/article/fb7aaa60c4ed44a292d5a8fb21638fe5 |
| indexed_in | crossref, doaj |
| authorships[0].author.id | https://openalex.org/A5062619576 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-2899-6893 |
| authorships[0].author.display_name | Jaeyoung Shin |
| authorships[0].countries | KR |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I24541011 |
| authorships[0].affiliations[0].raw_affiliation_string | Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[0].institutions[0].id | https://openalex.org/I24541011 |
| authorships[0].institutions[0].ror | https://ror.org/03qjsrb10 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I24541011 |
| authorships[0].institutions[0].country_code | KR |
| authorships[0].institutions[0].display_name | Soonchunhyang University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Jae Sung Shin |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[1].author.id | https://openalex.org/A5034408933 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-0958-0913 |
| authorships[1].author.display_name | Jun Ma |
| authorships[1].countries | KR |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I24541011 |
| authorships[1].affiliations[0].raw_affiliation_string | Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[1].institutions[0].id | https://openalex.org/I24541011 |
| authorships[1].institutions[0].ror | https://ror.org/03qjsrb10 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I24541011 |
| authorships[1].institutions[0].country_code | KR |
| authorships[1].institutions[0].display_name | Soonchunhyang University |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Jun Ma |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[2].author.id | https://openalex.org/A5101646308 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-4478-9704 |
| authorships[2].author.display_name | Seong Jun Choi |
| authorships[2].countries | KR |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I24541011 |
| authorships[2].affiliations[0].raw_affiliation_string | Department of Otorhinolaryngology—Head and Neck Surgery, College of Medicine, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea |
| authorships[2].institutions[0].id | https://openalex.org/I24541011 |
| authorships[2].institutions[0].ror | https://ror.org/03qjsrb10 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I24541011 |
| authorships[2].institutions[0].country_code | KR |
| authorships[2].institutions[0].display_name | Soonchunhyang University |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Seong Jun Choi |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | Department of Otorhinolaryngology—Head and Neck Surgery, College of Medicine, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Republic of Korea |
| authorships[3].author.id | https://openalex.org/A5007643745 |
| authorships[3].author.orcid | |
| authorships[3].author.display_name | Sungyeup Kim |
| authorships[3].countries | KR |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I24541011 |
| authorships[3].affiliations[0].raw_affiliation_string | Insitute for Artificial Intelligence and Software, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[3].institutions[0].id | https://openalex.org/I24541011 |
| authorships[3].institutions[0].ror | https://ror.org/03qjsrb10 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I24541011 |
| authorships[3].institutions[0].country_code | KR |
| authorships[3].institutions[0].display_name | Soonchunhyang University |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Sungyeup Kim |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Insitute for Artificial Intelligence and Software, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[4].author.id | https://openalex.org/A5006130083 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-9963-5521 |
| authorships[4].author.display_name | Min Hong |
| authorships[4].countries | KR |
| authorships[4].affiliations[0].institution_ids | https://openalex.org/I24541011 |
| authorships[4].affiliations[0].raw_affiliation_string | Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea |
| authorships[4].institutions[0].id | https://openalex.org/I24541011 |
| authorships[4].institutions[0].ror | https://ror.org/03qjsrb10 |
| authorships[4].institutions[0].type | education |
| authorships[4].institutions[0].lineage | https://openalex.org/I24541011 |
| authorships[4].institutions[0].country_code | KR |
| authorships[4].institutions[0].display_name | Soonchunhyang University |
| authorships[4].author_position | last |
| authorships[4].raw_author_name | Min Hong |
| authorships[4].is_corresponding | True |
| authorships[4].raw_affiliation_strings | Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.3390/app14209379 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Development of a Deep Learning Model for Predicting Speech Audiometry Using Pure-Tone Audiometry Data |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10283 |
| primary_topic.field.id | https://openalex.org/fields/28 |
| primary_topic.field.display_name | Neuroscience |
| primary_topic.score | 0.9997000098228455 |
| primary_topic.domain.id | https://openalex.org/domains/1 |
| primary_topic.domain.display_name | Life Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/2805 |
| primary_topic.subfield.display_name | Cognitive Neuroscience |
| primary_topic.display_name | Hearing Loss and Rehabilitation |
| related_works | https://openalex.org/W2349432283, https://openalex.org/W2367864358, https://openalex.org/W4390338675, https://openalex.org/W4236162170, https://openalex.org/W2393128122, https://openalex.org/W2101423314, https://openalex.org/W2002059261, https://openalex.org/W1976832291, https://openalex.org/W2058108893, https://openalex.org/W2027914005 |
| cited_by_count | 2 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 2 |
| locations_count | 2 |
| best_oa_location.id | doi:10.3390/app14209379 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4210205812 |
| best_oa_location.source.issn | 2076-3417 |
| best_oa_location.source.type | journal |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | 2076-3417 |
| best_oa_location.source.is_core | True |
| best_oa_location.source.is_in_doaj | True |
| best_oa_location.source.display_name | Applied Sciences |
| best_oa_location.source.host_organization | https://openalex.org/P4310310987 |
| best_oa_location.source.host_organization_name | Multidisciplinary Digital Publishing Institute |
| best_oa_location.source.host_organization_lineage | https://openalex.org/P4310310987 |
| best_oa_location.source.host_organization_lineage_names | Multidisciplinary Digital Publishing Institute |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | journal-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Applied Sciences |
| best_oa_location.landing_page_url | https://doi.org/10.3390/app14209379 |
| primary_location.id | doi:10.3390/app14209379 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4210205812 |
| primary_location.source.issn | 2076-3417 |
| primary_location.source.type | journal |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | 2076-3417 |
| primary_location.source.is_core | True |
| primary_location.source.is_in_doaj | True |
| primary_location.source.display_name | Applied Sciences |
| primary_location.source.host_organization | https://openalex.org/P4310310987 |
| primary_location.source.host_organization_name | Multidisciplinary Digital Publishing Institute |
| primary_location.source.host_organization_lineage | https://openalex.org/P4310310987 |
| primary_location.source.host_organization_lineage_names | Multidisciplinary Digital Publishing Institute |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | journal-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Applied Sciences |
| primary_location.landing_page_url | https://doi.org/10.3390/app14209379 |
| publication_date | 2024-10-15 |
| publication_year | 2024 |
| referenced_works | https://openalex.org/W3131625703, https://openalex.org/W3081431504, https://openalex.org/W2589750290, https://openalex.org/W2141312170, https://openalex.org/W2597304671, https://openalex.org/W6604078451, https://openalex.org/W1969730833, https://openalex.org/W3212943818, https://openalex.org/W3181851575, https://openalex.org/W3217410715, https://openalex.org/W4311677631, https://openalex.org/W2546807877, https://openalex.org/W3030370896, https://openalex.org/W3131845588, https://openalex.org/W3022557579, https://openalex.org/W4387779817, https://openalex.org/W4400917981, https://openalex.org/W4293564052, https://openalex.org/W2101423314, https://openalex.org/W4206713028, https://openalex.org/W4376121254, https://openalex.org/W2751999764, https://openalex.org/W3040253005, https://openalex.org/W2085842814, https://openalex.org/W4210432278, https://openalex.org/W3016998144, https://openalex.org/W1974283773, https://openalex.org/W6631003389, https://openalex.org/W2009151039, https://openalex.org/W4220810453, https://openalex.org/W2770854636, https://openalex.org/W1517631025, https://openalex.org/W101547076 |
| referenced_works_count | 33 |
| cited_by_percentile_year.max | 97 |
| cited_by_percentile_year.min | 95 |
| corresponding_author_ids | https://openalex.org/A5006130083 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 5 |
| corresponding_institution_ids | https://openalex.org/I24541011 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/16 |
| sustainable_development_goals[0].score | 0.5600000023841858 |
| sustainable_development_goals[0].display_name | Peace, Justice and strong institutions |
| sustainable_development_goals[1].id | https://metadata.un.org/sdg/10 |
| sustainable_development_goals[1].score | 0.4300000071525574 |
| sustainable_development_goals[1].display_name | Reduced inequalities |
| citation_normalized_percentile.value | 0.75196521 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
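A note on the payload format: OpenAlex delivers abstracts not as plain text but as an `abstract_inverted_index`, a mapping from each word to the list of token positions where it occurs. Readable text can be rebuilt by inverting that mapping and joining words in positional order; a minimal sketch (the tiny example index below uses the opening words of this work's abstract):

```python
def reconstruct_abstract(inverted_index):
    # Map each token position back to its word, then join in positional order.
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

# Small example in the OpenAlex abstract_inverted_index format.
idx = {"Speech": [0], "audiometry": [1], "is": [2], "a": [3], "vital": [4], "tool": [5]}
print(reconstruct_abstract(idx))  # → Speech audiometry is a vital tool
```

The same function works on the full `abstract_inverted_index` object returned by the OpenAlex Works API for this record.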