Improving Child Speech Recognition and Reading Mistake Detection by Using Prompts
2025 · Open Access · DOI: https://doi.org/10.21437/interspeech.2025-658
Automatic reading aloud evaluation can provide valuable support to teachers by enabling more efficient scoring of reading exercises. However, research on reading evaluation systems and applications remains limited. We present a novel multimodal approach that leverages audio and knowledge from text resources. In particular, we explored the potential of using Whisper and instruction-tuned large language models (LLMs) with prompts to improve transcriptions for child speech recognition, as well as their effectiveness in downstream reading mistake detection. Our results demonstrate the effectiveness of prompting Whisper and prompting an LLM, compared to the baseline Whisper model without prompting. The best-performing system achieved state-of-the-art recognition performance on Dutch child read speech, with a word error rate (WER) of 5.1%, improving on the baseline WER of 9.4%. Furthermore, it significantly improved reading mistake detection, increasing the F1 score from 0.39 to 0.73.
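The paper's exact prompt construction and LLM post-processing are not reproduced on this page; as a rough illustration of the idea, the sketch below (assuming the openai-whisper package; `transcribe_with_prompt` and `detect_mistakes` are hypothetical helper names) feeds the target reading text to Whisper's `initial_prompt` to bias decoding toward the expected words, then flags reading mistakes by word-level alignment of the hypothesis against that text.

```python
# Minimal sketch, not the authors' implementation: bias Whisper decoding with
# the target reading text via `initial_prompt`, then flag reading mistakes by
# word-level alignment of the hypothesis against that text.
import difflib

import whisper


def transcribe_with_prompt(audio_path: str, target_text: str) -> str:
    model = whisper.load_model("small")  # model size chosen arbitrarily
    # `initial_prompt` conditions the decoder on prior text, nudging it toward
    # the vocabulary of the story the child is expected to read.
    result = model.transcribe(audio_path, language="nl", initial_prompt=target_text)
    return result["text"]


def detect_mistakes(target_text: str, hypothesis: str) -> list[tuple[str, str]]:
    """Return (expected, read) pairs where the transcript deviates from the text."""
    ref = target_text.lower().split()
    hyp = hypothesis.lower().split()
    matcher = difflib.SequenceMatcher(a=ref, b=hyp, autojunk=False)
    mistakes = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # substitution, deletion, or insertion
            mistakes.append((" ".join(ref[i1:i2]), " ".join(hyp[j1:j2])))
    return mistakes
```

A full system along the paper's lines would additionally normalize punctuation, score detected mistakes against human annotations (yielding the reported F1), and compute WER against a verbatim transcript, for example with the jiwer package.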
Metadata
- Type: article
- Language: en
- Landing Page: https://doi.org/10.21437/interspeech.2025-658
- OA Status: green
- OpenAlex ID: https://openalex.org/W4415433901
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4415433901 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.21437/interspeech.2025-658 (Digital Object Identifier)
- Title: Improving Child Speech Recognition and Reading Mistake Detection by Using Prompts (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-08-17 (full publication date if available)
- Authors: Lingyun Gao, Cristian Tejedor-García, Catia Cucchiarini, Helmer Strik (authors in order)
- Landing page: https://doi.org/10.21437/interspeech.2025-658 (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2506.11079 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4415433901 |
| doi | https://doi.org/10.21437/interspeech.2025-658 |
| ids.doi | https://doi.org/10.21437/interspeech.2025-658 |
| ids.openalex | https://openalex.org/W4415433901 |
| fwci | 0.0 |
| type | article |
| title | Improving Child Speech Recognition and Reading Mistake Detection by Using Prompts |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 2854 |
| biblio.first_page | 2850 |
| topics[0].id | https://openalex.org/T10201 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9021000266075134 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Speech Recognition and Synthesis |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | en |
| locations[0].id | doi:10.21437/interspeech.2025-658 |
| locations[0].is_oa | False |
| locations[0].source | |
| locations[0].license | |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Interspeech 2025 |
| locations[0].landing_page_url | https://doi.org/10.21437/interspeech.2025-658 |
| locations[1].id | pmh:oai:arXiv.org:2506.11079 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | https://arxiv.org/pdf/2506.11079 |
| locations[1].version | submittedVersion |
| locations[1].raw_type | text |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | http://arxiv.org/abs/2506.11079 |
| indexed_in | arxiv, crossref |
| authorships[0].author.id | https://openalex.org/A5079316816 |
| authorships[0].author.orcid | https://orcid.org/0000-0003-2509-9505 |
| authorships[0].author.display_name | Lingyun Gao |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Lingyun Gao |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5046936545 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-5395-0438 |
| authorships[1].author.display_name | Cristian Tejedor-García |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Cristian Tejedor-Garcia |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5006678637 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-5908-0824 |
| authorships[2].author.display_name | Catia Cucchiarini |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Catia Cucchiarini |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5019585114 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-1722-3465 |
| authorships[3].author.display_name | Helmer Strik |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Helmer Strik |
| authorships[3].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2506.11079 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-23T00:00:00 |
| display_name | Improving Child Speech Recognition and Reading Mistake Detection by Using Prompts |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10201 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9021000266075134 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Speech Recognition and Synthesis |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2506.11079 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2506.11079 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2506.11079 |
| primary_location.id | doi:10.21437/interspeech.2025-658 |
| primary_location.is_oa | False |
| primary_location.source | |
| primary_location.license | |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Interspeech 2025 |
| primary_location.landing_page_url | https://doi.org/10.21437/interspeech.2025-658 |
| publication_date | 2025-08-17 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-position map omitted; it duplicates the abstract above, see the reconstruction sketch after this table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile.value | 0.23208804 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
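The `abstract_inverted_index` field elided from the payload above stores the abstract not as plain text but as a map from each word to the 0-based positions at which it occurs. A minimal sketch of reconstructing the readable abstract from such a map (`invert_abstract` is a hypothetical helper, not part of any official OpenAlex client):

```python
# Sketch: rebuild the plain-text abstract from an OpenAlex
# `abstract_inverted_index`, which maps each word to the 0-based positions
# at which it occurs in the abstract.
def invert_abstract(inverted_index: dict[str, list[int]]) -> str:
    word_at = {}
    for word, positions in inverted_index.items():
        for pos in positions:
            word_at[pos] = word
    return " ".join(word_at[pos] for pos in sorted(word_at))


# e.g. {"Automatic": [0], "reading": [1, 16], "aloud": [2]} contributes
# "Automatic reading aloud" at positions 0-2 of the reconstructed text.
```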