SpeechIQ: Speech-Agentic Intelligence Quotient Across Cognitive Levels in Voice Understanding by Large Language Models
· 2025
· Open Access
· DOI: https://doi.org/10.18653/v1/2025.acl-long.1466
We introduce the Speech-based Intelligence Quotient (SIQ), a human cognition-inspired evaluation pipeline for voice-understanding large language models (LLM Voice), designed to assess their voice understanding ability. Moving beyond popular voice understanding metrics such as word error rate (WER), SIQ examines LLM Voice across three cognitive levels motivated by Bloom's Taxonomy: (1) Remembering (i.e., WER for verbatim accuracy); (2) Understanding (i.e., similarity of the LLM's interpretations); and (3) Application (i.e., QA accuracy for simulating downstream tasks). We demonstrate that SIQ not only quantifies voice understanding abilities but also provides unified comparisons between cascaded methods (e.g., ASR + LLM) and end-to-end models, identifies annotation errors in existing benchmarks, and detects hallucinations in LLM Voice. Our framework represents a first-of-its-kind intelligence examination that bridges cognitive principles with voice-oriented benchmarks, while exposing overlooked challenges in multi-modal training. Our code and data will be open-sourced to encourage future studies.
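The three cognitive levels map naturally onto three scores. The sketch below is a rough, hypothetical illustration of that structure, not the paper's implementation: the Remembering score is WER via word-level edit distance, the Understanding score substitutes a simple token-overlap similarity for the LLM-based interpretation similarity the abstract describes, and the Application score is exact-match QA accuracy. All function names are our own.

```python
# Hypothetical sketch of three-level scoring in the spirit of SIQ.
# The paper's actual metrics and aggregation are not reproduced here.

def wer(reference: str, hypothesis: str) -> float:
    """Level 1 (Remembering): word error rate via edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def overlap_similarity(a: str, b: str) -> float:
    """Level 2 (Understanding): Jaccard token overlap, a stand-in for
    the LLM-based interpretation similarity used in the paper."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def qa_accuracy(predictions: list[str], answers: list[str]) -> float:
    """Level 3 (Application): exact-match accuracy on downstream QA."""
    correct = sum(p.strip().lower() == a.strip().lower()
                  for p, a in zip(predictions, answers))
    return correct / max(len(answers), 1)

if __name__ == "__main__":
    print(wer("the cat sat on the mat", "the cat sat on mat"))   # 1/6 ≈ 0.167
    print(overlap_similarity("a speech quiz", "a spoken quiz"))  # 0.5
    print(qa_accuracy(["Paris"], ["paris"]))                     # 1.0
```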
Record Summary
- Type: preprint
- Language: en
- Landing Page: https://doi.org/10.18653/v1/2025.acl-long.1466
- PDF: https://aclanthology.org/2025.acl-long.1466.pdf
- OA Status: gold
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4412889710
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4412889710 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.18653/v1/2025.acl-long.1466 (Digital Object Identifier)
- Title: SpeechIQ: Speech-Agentic Intelligence Quotient Across Cognitive Levels in Voice Understanding by Large Language Models (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-01-01 (full publication date if available)
- Authors: Zhen Wan, Chao-Han Huck Yang, Yahan Yu, Jinchuan Tian, Sheng Li, Ke Hu, Zhehuai Chen, Shinji Watanabe, Fei Cheng, Chenhui Chu, Sadao Kurohashi (list of authors in order)
- Landing page: https://doi.org/10.18653/v1/2025.acl-long.1466 (publisher landing page)
- PDF URL: https://aclanthology.org/2025.acl-long.1466.pdf (direct link to full text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://aclanthology.org/2025.acl-long.1466.pdf (direct OA link when available)
- Concepts: Computer science, Cognition, Quotient, Speech recognition, Natural language processing, Psychology, Mathematics, Neuroscience, Pure mathematics (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4412889710 |
| doi | https://doi.org/10.18653/v1/2025.acl-long.1466 |
| ids.doi | https://doi.org/10.18653/v1/2025.acl-long.1466 |
| ids.openalex | https://openalex.org/W4412889710 |
| fwci | 0.0 |
| type | preprint |
| title | SpeechIQ: Speech-Agentic Intelligence Quotient Across Cognitive Levels in Voice Understanding by Large Language Models |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 30398 |
| biblio.first_page | 30381 |
| topics[0].id | https://openalex.org/T10201 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.8058000206947327 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Speech Recognition and Synthesis |
| topics[1].id | https://openalex.org/T10181 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.7964000105857849 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1702 |
| topics[1].subfield.display_name | Artificial Intelligence |
| topics[1].display_name | Natural Language Processing Techniques |
| topics[2].id | https://openalex.org/T12031 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.7804999947547913 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1702 |
| topics[2].subfield.display_name | Artificial Intelligence |
| topics[2].display_name | Speech and dialogue systems |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.6780574321746826 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C169900460 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5992533564567566 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q2200417 |
| concepts[1].display_name | Cognition |
| concepts[2].id | https://openalex.org/C199422724 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5497220754623413 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q41118 |
| concepts[2].display_name | Quotient |
| concepts[3].id | https://openalex.org/C28490314 |
| concepts[3].level | 1 |
| concepts[3].score | 0.3532087802886963 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[3].display_name | Speech recognition |
| concepts[4].id | https://openalex.org/C204321447 |
| concepts[4].level | 1 |
| concepts[4].score | 0.343586266040802 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[4].display_name | Natural language processing |
| concepts[5].id | https://openalex.org/C15744967 |
| concepts[5].level | 0 |
| concepts[5].score | 0.2751352787017822 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[5].display_name | Psychology |
| concepts[6].id | https://openalex.org/C33923547 |
| concepts[6].level | 0 |
| concepts[6].score | 0.0947621762752533 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q395 |
| concepts[6].display_name | Mathematics |
| concepts[7].id | https://openalex.org/C169760540 |
| concepts[7].level | 1 |
| concepts[7].score | 0.0 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q207011 |
| concepts[7].display_name | Neuroscience |
| concepts[8].id | https://openalex.org/C202444582 |
| concepts[8].level | 1 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q837863 |
| concepts[8].display_name | Pure mathematics |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.6780574321746826 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/cognition |
| keywords[1].score | 0.5992533564567566 |
| keywords[1].display_name | Cognition |
| keywords[2].id | https://openalex.org/keywords/quotient |
| keywords[2].score | 0.5497220754623413 |
| keywords[2].display_name | Quotient |
| keywords[3].id | https://openalex.org/keywords/speech-recognition |
| keywords[3].score | 0.3532087802886963 |
| keywords[3].display_name | Speech recognition |
| keywords[4].id | https://openalex.org/keywords/natural-language-processing |
| keywords[4].score | 0.343586266040802 |
| keywords[4].display_name | Natural language processing |
| keywords[5].id | https://openalex.org/keywords/psychology |
| keywords[5].score | 0.2751352787017822 |
| keywords[5].display_name | Psychology |
| keywords[6].id | https://openalex.org/keywords/mathematics |
| keywords[6].score | 0.0947621762752533 |
| keywords[6].display_name | Mathematics |
| language | en |
| locations[0].id | doi:10.18653/v1/2025.acl-long.1466 |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://aclanthology.org/2025.acl-long.1466.pdf |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| locations[0].landing_page_url | https://doi.org/10.18653/v1/2025.acl-long.1466 |
| locations[1].id | pmh:oai:arXiv.org:2507.19361 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | https://arxiv.org/pdf/2507.19361 |
| locations[1].version | submittedVersion |
| locations[1].raw_type | text |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | http://arxiv.org/abs/2507.19361 |
| indexed_in | arxiv, crossref |
| authorships[0].author.id | https://openalex.org/A5100959340 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Zhen Wan |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhen Wan |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5020376803 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2879-8811 |
| authorships[1].author.display_name | Chao-Han Huck Yang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Chao-Han Huck Yang |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5039143276 |
| authorships[2].author.orcid | https://orcid.org/0000-0003-1610-1167 |
| authorships[2].author.display_name | Yahan Yu |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Yahan Yu |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5068192693 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-2129-471X |
| authorships[3].author.display_name | Jinchuan Tian |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Jinchuan Tian |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5053726259 |
| authorships[4].author.orcid | https://orcid.org/0000-0001-7636-3797 |
| authorships[4].author.display_name | Sheng Li |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Sheng Li |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5029338576 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-1599-1519 |
| authorships[5].author.display_name | Ke Hu |
| authorships[5].author_position | middle |
| authorships[5].raw_author_name | Ke Hu |
| authorships[5].is_corresponding | False |
| authorships[6].author.id | https://openalex.org/A5002433660 |
| authorships[6].author.orcid | https://orcid.org/0000-0003-4400-5340 |
| authorships[6].author.display_name | Zhehuai Chen |
| authorships[6].author_position | middle |
| authorships[6].raw_author_name | Zhehuai Chen |
| authorships[6].is_corresponding | False |
| authorships[7].author.id | https://openalex.org/A5001291873 |
| authorships[7].author.orcid | https://orcid.org/0000-0002-5970-8631 |
| authorships[7].author.display_name | Shinji Watanabe |
| authorships[7].author_position | middle |
| authorships[7].raw_author_name | Shinji Watanabe |
| authorships[7].is_corresponding | False |
| authorships[8].author.id | https://openalex.org/A5101451435 |
| authorships[8].author.orcid | https://orcid.org/0000-0001-5161-0544 |
| authorships[8].author.display_name | Fei Cheng |
| authorships[8].author_position | middle |
| authorships[8].raw_author_name | Fei Cheng |
| authorships[8].is_corresponding | False |
| authorships[9].author.id | https://openalex.org/A5102757632 |
| authorships[9].author.orcid | https://orcid.org/0000-0001-9848-6384 |
| authorships[9].author.display_name | Chenhui Chu |
| authorships[9].author_position | middle |
| authorships[9].raw_author_name | Chenhui Chu |
| authorships[9].is_corresponding | False |
| authorships[10].author.id | https://openalex.org/A5028836340 |
| authorships[10].author.orcid | https://orcid.org/0000-0001-5398-8399 |
| authorships[10].author.display_name | Sadao Kurohashi |
| authorships[10].author_position | last |
| authorships[10].raw_author_name | Sadao Kurohashi |
| authorships[10].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://aclanthology.org/2025.acl-long.1466.pdf |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-08-04T00:00:00 |
| display_name | SpeechIQ: Speech-Agentic Intelligence Quotient Across Cognitive Levels in Voice Understanding by Large Language Models |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10201 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.8058000206947327 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Speech Recognition and Synthesis |
| related_works | https://openalex.org/W4391375266, https://openalex.org/W2899084033, https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W4391913857, https://openalex.org/W2358668433, https://openalex.org/W4396701345, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W2949502838 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | doi:10.18653/v1/2025.acl-long.1466 |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://aclanthology.org/2025.acl-long.1466.pdf |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | proceedings-article |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| best_oa_location.landing_page_url | https://doi.org/10.18653/v1/2025.acl-long.1466 |
| primary_location.id | doi:10.18653/v1/2025.acl-long.1466 |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://aclanthology.org/2025.acl-long.1466.pdf |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| primary_location.landing_page_url | https://doi.org/10.18653/v1/2025.acl-long.1466 |
| publication_date | 2025-01-01 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word → positions mapping encoding the abstract shown above; see the reconstruction sketch after the table) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 11 |
| sustainable_development_goals[0].id | https://metadata.un.org/sdg/4 |
| sustainable_development_goals[0].score | 0.5799999833106995 |
| sustainable_development_goals[0].display_name | Quality Education |
| citation_normalized_percentile.value | 0.14439464 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
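OpenAlex stores abstracts as an inverted index: a mapping from each word to the list of positions where it occurs. The sketch below reconstructs text from such an index, seeded with a few sample entries taken from this record's index; a full reconstruction would use the complete field.

```python
# Reconstruct an abstract from OpenAlex's abstract_inverted_index format.
# The sample entries below are the first few words of this record's index.
inverted_index = {
    "We": [0, 78],
    "introduce": [1],
    "Speech-based": [2],
    "Intelligence": [3],
    "Quotient": [4],
    "(SIQ)": [5],
}

def reconstruct(index: dict[str, list[int]]) -> str:
    # Invert the mapping to (position, word) pairs, then join in order.
    positions = sorted((pos, word)
                       for word, posns in index.items()
                       for pos in posns)
    return " ".join(word for _, word in positions)

print(reconstruct(inverted_index))
# With only these sample entries: "We introduce Speech-based Intelligence Quotient (SIQ) We"
# (the trailing "We" is the word at position 78; intermediate words are absent here)
```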