Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment
2025 · Open Access · DOI: https://doi.org/10.48550/arxiv.2510.20513
Recent speech-to-speech (S2S) models generate intelligible speech but still lack natural expressiveness, largely due to the absence of a reliable evaluation metric. Existing approaches, such as subjective MOS ratings, low-level acoustic features, and emotion recognition, are costly, limited, or incomplete. To address this, we present DeEAR (Decoding the Expressive Preference of eAR), a framework that converts human preference for speech expressiveness into an objective score. Grounded in phonetics and psychology, DeEAR evaluates speech across three dimensions: Emotion, Prosody, and Spontaneity, achieving strong alignment with human perception (Spearman's Rank Correlation Coefficient, SRCC = 0.86) using fewer than 500 annotated samples. Beyond reliable scoring, DeEAR enables fair benchmarking and targeted data curation. It not only distinguishes expressiveness gaps across S2S models but also selects 14K expressive utterances to form ExpressiveSpeech, which improves the expressive score of S2S models from 2.0 to 23.4 on a 100-point scale. Demos and code are available at https://github.com/FreedomIntelligence/ExpressiveSpeech
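The abstract describes DeEAR as fusing three dimension scores (Emotion, Prosody, Spontaneity) into a single expressiveness score on a 100-point scale and validating it against human preference via Spearman's rank correlation (SRCC). The sketch below is a hypothetical illustration of that evaluation loop, not the authors' implementation: the equal-weight fusion, the toy scores, and the function name are all assumptions.

```python
# Hypothetical sketch (not the paper's code): fuse per-dimension expressiveness
# scores into one 100-point score and check alignment with human ratings via SRCC.
import numpy as np
from scipy.stats import spearmanr

def overall_expressiveness(emotion, prosody, spontaneity, weights=(1/3, 1/3, 1/3)):
    """Combine the three DeEAR-style dimensions into a single 0-100 score.

    Equal weighting is an assumption for illustration; the paper aligns its
    fusion to human preference data rather than fixing weights by hand.
    """
    dims = np.stack([emotion, prosody, spontaneity], axis=-1)  # shape (N, 3), each in [0, 1]
    score = dims @ np.asarray(weights)                         # weighted average in [0, 1]
    return 100.0 * score                                       # map to the 100-point scale

# Toy data: per-dimension scores for five utterances plus human preference ratings.
emotion     = np.array([0.2, 0.8, 0.5, 0.9, 0.1])
prosody     = np.array([0.3, 0.7, 0.6, 0.8, 0.2])
spontaneity = np.array([0.1, 0.9, 0.4, 0.7, 0.3])
human_mos   = np.array([1.5, 4.6, 3.2, 4.4, 1.8])  # e.g. subjective expressiveness ratings

scores = overall_expressiveness(emotion, prosody, spontaneity)
srcc, _ = spearmanr(scores, human_mos)  # rank correlation with human preference
print(f"overall scores: {np.round(scores, 1)}")
print(f"SRCC vs. human ratings: {srcc:.2f}")
```

A score of this shape is what lets the framework report model-level gaps on a common 100-point scale, as in the improvement from 2.0 to 23.4 cited in the abstract.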
- Type: preprint
- Landing Page: http://arxiv.org/abs/2510.20513 (PDF: https://arxiv.org/pdf/2510.20513)
- OA Status: green
- OpenAlex ID: https://openalex.org/W4416620132
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4416620132 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2510.20513 (Digital Object Identifier)
- Title: Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment (work title)
- Type: preprint (OpenAlex work type)
- Publication year: 2025
- Publication date: 2025-10-23
- Authors: Zhiyu Lin, Jingwen Yang, Jiale Zhao, Meng Liu, Sunzhu Li, Benyou Wang (list of authors in order)
- Landing page: https://arxiv.org/abs/2510.20513 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2510.20513 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2510.20513 (direct OA link when available)
- Cited by: 0 (total citation count in OpenAlex)
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4416620132 |
| doi | https://doi.org/10.48550/arxiv.2510.20513 |
| ids.doi | https://doi.org/10.48550/arxiv.2510.20513 |
| ids.openalex | https://openalex.org/W4416620132 |
| fwci | |
| type | preprint |
| title | Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| language | |
| locations[0].id | pmh:oai:arXiv.org:2510.20513 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2510.20513 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2510.20513 |
| locations[1].id | doi:10.48550/arxiv.2510.20513 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2510.20513 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5069028297 |
| authorships[0].author.orcid | https://orcid.org/0000-0001-8045-9556 |
| authorships[0].author.display_name | Zhiyu Lin |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Lin, Zhiyu |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5102753054 |
| authorships[1].author.orcid | https://orcid.org/0000-0002-6314-3856 |
| authorships[1].author.display_name | Jingwen Yang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Yang, Jingwen |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5049807159 |
| authorships[2].author.orcid | https://orcid.org/0009-0002-7977-3083 |
| authorships[2].author.display_name | Jiale Zhao |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zhao, Jiale |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5100457527 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-9420-3874 |
| authorships[3].author.display_name | Meng Liu |
| authorships[3].author_position | middle |
| authorships[3].raw_author_name | Liu, Meng |
| authorships[3].is_corresponding | False |
| authorships[4].author.id | https://openalex.org/A5048281627 |
| authorships[4].author.orcid | |
| authorships[4].author.display_name | Sunzhu Li |
| authorships[4].author_position | middle |
| authorships[4].raw_author_name | Li, Sunzhu |
| authorships[4].is_corresponding | False |
| authorships[5].author.id | https://openalex.org/A5057282504 |
| authorships[5].author.orcid | https://orcid.org/0000-0002-1501-9914 |
| authorships[5].author.display_name | Benyou Wang |
| authorships[5].author_position | last |
| authorships[5].raw_author_name | Wang, Benyou |
| authorships[5].is_corresponding | False |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2510.20513 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-25T00:00:00 |
| display_name | Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-28T18:48:30.529841 |
| primary_topic | |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2510.20513 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2510.20513 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2510.20513 |
| primary_location.id | pmh:oai:arXiv.org:2510.20513 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2510.20513 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2510.20513 |
| publication_date | 2025-10-23 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index | word-to-position index of the abstract (full text reproduced in the abstract above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 6 |
| citation_normalized_percentile | |
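The abstract_inverted_index field summarized in the payload above is OpenAlex's standard representation of an abstract: a mapping from each word to the list of positions where it occurs. A minimal sketch of reconstructing the plain text from such an index follows; the example dictionary is a toy excerpt, not the full payload.

```python
# Minimal sketch: rebuild a plain-text abstract from an OpenAlex-style
# abstract_inverted_index (word -> list of positions where that word occurs).
def rebuild_abstract(inverted_index: dict) -> str:
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # Join words in ascending position order to recover the original text.
    return " ".join(positions[i] for i in sorted(positions))

# Toy excerpt with the same structure as the payload rows above.
example = {"Recent": [0], "speech-to-speech": [1], "(S2S)": [2], "models": [3]}
print(rebuild_abstract(example))  # -> "Recent speech-to-speech (S2S) models"
```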