Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition
2022 · Open Access · DOI: https://doi.org/10.21437/interspeech.2022-9996
Transformers have recently dominated the ASR field. Although able to yield good performance, they involve an autoregressive (AR) decoder that generates tokens one by one, which is computationally inefficient. To speed up inference, non-autoregressive (NAR) methods, e.g. single-step NAR, were designed to enable parallel generation. However, due to an independence assumption within the output tokens, the performance of single-step NAR is inferior to that of AR models, especially on a large-scale corpus. There are two challenges to improving single-step NAR: first, to accurately predict the number of output tokens and extract hidden variables; second, to enhance the modeling of interdependence between output tokens. To tackle both challenges, we propose a fast and accurate parallel transformer, termed Paraformer. It utilizes a continuous integrate-and-fire (CIF) based predictor to predict the number of tokens and generate hidden variables. A glancing language model (GLM) sampler then generates semantic embeddings to enhance the NAR decoder's ability to model context interdependence. Finally, we design a strategy to generate negative samples for minimum word error rate (MWER) training to further improve performance. Experiments on the AISHELL-1 and AISHELL-2 benchmarks and an industrial-level 20,000-hour task demonstrate that the proposed Paraformer attains performance comparable to the state-of-the-art AR transformer, with more than a 10x speedup.
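The CIF predictor at the heart of the method works by assigning each encoder frame a weight in (0, 1), integrating those weights along time, and "firing" whenever the accumulator crosses a threshold: the sum of the weights approximates the token count, and each firing emits one integrated acoustic embedding for the parallel decoder to consume in a single step. The sketch below is a minimal illustration of that mechanism following the general CIF formulation, not the authors' implementation; the function name, tensor shapes, and threshold value are assumptions.

```python
import torch

def cif_fire(hidden: torch.Tensor, alpha: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    """Continuous integrate-and-fire (illustrative sketch).

    hidden: (T, D) encoder states; alpha: (T,) per-frame weights in (0, 1),
    e.g. from a linear layer + sigmoid. Integrates alpha over time and fires
    one acoustic embedding each time the accumulator crosses `threshold`.
    """
    fired = []
    accum = 0.0                            # weight integrated since the last firing
    token = torch.zeros(hidden.size(1))    # weighted sum of frames for the current token
    for t in range(hidden.size(0)):
        a = float(alpha[t])
        if accum + a < threshold:          # not enough weight yet: keep integrating
            accum += a
            token = token + a * hidden[t]
        else:                              # crossing: split this frame's weight
            spill = accum + a - threshold
            token = token + (a - spill) * hidden[t]
            fired.append(token)            # fire: one embedding per predicted token
            accum = spill                  # leftover weight seeds the next token
            token = spill * hidden[t]
    # During training the alphas are scaled so their sum matches the target
    # length; at inference, round(alpha.sum()) approximates the token count.
    return torch.stack(fired) if fired else hidden.new_zeros(0, hidden.size(1))
```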
Related Topics
- Type: article
- Language: en
- Landing Page: https://doi.org/10.21437/interspeech.2022-9996
- OA Status: green
- Cited By: 77
- References: 29
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4283067311
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4283067311 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.21437/interspeech.2022-9996 (Digital Object Identifier)
- Title: Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022 (year of publication)
- Publication date: 2022-09-16 (full publication date if available)
- Authors: Zhifu Gao, Shiliang Zhang, Ian McLoughlin, Zhijie Yan (list of authors in order)
- Landing page: https://doi.org/10.21437/interspeech.2022-9996 (publisher landing page)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://figshare.com/articles/conference_contribution/Paraformer_Fast_and_Accurate_Parallel_Transformer_for_Non-autoregressive_End-to-End_Speech_Recognition/24217320 (direct OA link when available)
- Concepts: Autoregressive model, Transformer, Computer science, Speech recognition, End-to-end principle, Artificial intelligence, Engineering, Electrical engineering, Mathematics, Econometrics, Voltage (top concepts attached by OpenAlex)
- Cited by: 77 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 39, 2024: 31, 2023: 7 (per-year citation counts, last 5 years)
- References (count): 29 (number of works referenced by this work)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
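Everything in the payload table below can be retrieved from the public OpenAlex REST API. A minimal sketch, using the standard /works/{id} endpoint and field names from the published OpenAlex works schema:

```python
import requests

# Fetch the raw OpenAlex record for this work (same data as the payload table below).
resp = requests.get("https://api.openalex.org/works/W4283067311", timeout=30)
resp.raise_for_status()
work = resp.json()

print(work["display_name"])       # paper title
print(work["publication_date"])   # 2022-09-16
print(work["cited_by_count"])     # citation count at query time
print([a["author"]["display_name"] for a in work["authorships"]])
```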
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4283067311 |
| doi | https://doi.org/10.21437/interspeech.2022-9996 |
| ids.doi | https://doi.org/10.21437/interspeech.2022-9996 |
| ids.openalex | https://openalex.org/W4283067311 |
| fwci | 9.05173806 |
| type | article |
| title | Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 2067 |
| biblio.first_page | 2063 |
| topics[0].id | https://openalex.org/T10201 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9919000267982483 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Speech Recognition and Synthesis |
| topics[1].id | https://openalex.org/T10860 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9872999787330627 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1711 |
| topics[1].subfield.display_name | Signal Processing |
| topics[1].display_name | Speech and Audio Processing |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C159877910 |
| concepts[0].level | 2 |
| concepts[0].score | 0.7505905628204346 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q2202883 |
| concepts[0].display_name | Autoregressive model |
| concepts[1].id | https://openalex.org/C66322947 |
| concepts[1].level | 3 |
| concepts[1].score | 0.6668057441711426 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q11658 |
| concepts[1].display_name | Transformer |
| concepts[2].id | https://openalex.org/C41008148 |
| concepts[2].level | 0 |
| concepts[2].score | 0.6494194865226746 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[2].display_name | Computer science |
| concepts[3].id | https://openalex.org/C28490314 |
| concepts[3].level | 1 |
| concepts[3].score | 0.5544925928115845 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[3].display_name | Speech recognition |
| concepts[4].id | https://openalex.org/C74296488 |
| concepts[4].level | 2 |
| concepts[4].score | 0.49261099100112915 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q2527392 |
| concepts[4].display_name | End-to-end principle |
| concepts[5].id | https://openalex.org/C154945302 |
| concepts[5].level | 1 |
| concepts[5].score | 0.27468985319137573 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[5].display_name | Artificial intelligence |
| concepts[6].id | https://openalex.org/C127413603 |
| concepts[6].level | 0 |
| concepts[6].score | 0.1670781970024109 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[6].display_name | Engineering |
| concepts[7].id | https://openalex.org/C119599485 |
| concepts[7].level | 1 |
| concepts[7].score | 0.16381901502609253 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[7].display_name | Electrical engineering |
| concepts[8].id | https://openalex.org/C33923547 |
| concepts[8].level | 0 |
| concepts[8].score | 0.14798730611801147 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q395 |
| concepts[8].display_name | Mathematics |
| concepts[9].id | https://openalex.org/C149782125 |
| concepts[9].level | 1 |
| concepts[9].score | 0.10010284185409546 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q160039 |
| concepts[9].display_name | Econometrics |
| concepts[10].id | https://openalex.org/C165801399 |
| concepts[10].level | 2 |
| concepts[10].score | 0.06577110290527344 |
| concepts[10].wikidata | https://www.wikidata.org/wiki/Q25428 |
| concepts[10].display_name | Voltage |
| keywords[0].id | https://openalex.org/keywords/autoregressive-model |
| keywords[0].score | 0.7505905628204346 |
| keywords[0].display_name | Autoregressive model |
| keywords[1].id | https://openalex.org/keywords/transformer |
| keywords[1].score | 0.6668057441711426 |
| keywords[1].display_name | Transformer |
| keywords[2].id | https://openalex.org/keywords/computer-science |
| keywords[2].score | 0.6494194865226746 |
| keywords[2].display_name | Computer science |
| keywords[3].id | https://openalex.org/keywords/speech-recognition |
| keywords[3].score | 0.5544925928115845 |
| keywords[3].display_name | Speech recognition |
| keywords[4].id | https://openalex.org/keywords/end-to-end-principle |
| keywords[4].score | 0.49261099100112915 |
| keywords[4].display_name | End-to-end principle |
| keywords[5].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[5].score | 0.27468985319137573 |
| keywords[5].display_name | Artificial intelligence |
| keywords[6].id | https://openalex.org/keywords/engineering |
| keywords[6].score | 0.1670781970024109 |
| keywords[6].display_name | Engineering |
| keywords[7].id | https://openalex.org/keywords/electrical-engineering |
| keywords[7].score | 0.16381901502609253 |
| keywords[7].display_name | Electrical engineering |
| keywords[8].id | https://openalex.org/keywords/mathematics |
| keywords[8].score | 0.14798730611801147 |
| keywords[8].display_name | Mathematics |
| keywords[9].id | https://openalex.org/keywords/econometrics |
| keywords[9].score | 0.10010284185409546 |
| keywords[9].display_name | Econometrics |
| keywords[10].id | https://openalex.org/keywords/voltage |
| keywords[10].score | 0.06577110290527344 |
| keywords[10].display_name | Voltage |
| language | en |
| locations[0].id | doi:10.21437/interspeech.2022-9996 |
| locations[0].is_oa | False |
| locations[0].source.id | https://openalex.org/S4363604309 |
| locations[0].source.issn | |
| locations[0].source.type | conference |
| locations[0].source.is_oa | False |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | Interspeech 2022 |
| locations[0].source.host_organization | |
| locations[0].source.host_organization_name | |
| locations[0].license | |
| locations[0].pdf_url | |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Interspeech 2022 |
| locations[0].landing_page_url | https://doi.org/10.21437/interspeech.2022-9996 |
| locations[1].id | pmh:oai:figshare.com:article/24217320 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400572 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | False |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | OPAL (Open@LaTrobe) (La Trobe University) |
| locations[1].source.host_organization | https://openalex.org/I196829312 |
| locations[1].source.host_organization_name | La Trobe University |
| locations[1].source.host_organization_lineage | https://openalex.org/I196829312 |
| locations[1].license | cc-by |
| locations[1].pdf_url | |
| locations[1].version | submittedVersion |
| locations[1].raw_type | Text |
| locations[1].license_id | https://openalex.org/licenses/cc-by |
| locations[1].is_accepted | False |
| locations[1].is_published | False |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://figshare.com/articles/conference_contribution/Paraformer_Fast_and_Accurate_Parallel_Transformer_for_Non-autoregressive_End-to-End_Speech_Recognition/24217320 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5109593343 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Zhifu Gao |
| authorships[0].countries | CN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I45928872 |
| authorships[0].affiliations[0].raw_affiliation_string | Speech Lab, Alibaba Group, China |
| authorships[0].institutions[0].id | https://openalex.org/I45928872 |
| authorships[0].institutions[0].ror | https://ror.org/00k642b80 |
| authorships[0].institutions[0].type | company |
| authorships[0].institutions[0].lineage | https://openalex.org/I45928872 |
| authorships[0].institutions[0].country_code | CN |
| authorships[0].institutions[0].display_name | Alibaba Group (China) |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Zhifu Gao |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Speech Lab, Alibaba Group, China |
| authorships[1].author.id | https://openalex.org/A5101777589 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-7795-437X |
| authorships[1].author.display_name | Shiliang Zhang |
| authorships[1].countries | CN |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I45928872 |
| authorships[1].affiliations[0].raw_affiliation_string | Speech Lab, Alibaba Group, China |
| authorships[1].institutions[0].id | https://openalex.org/I45928872 |
| authorships[1].institutions[0].ror | https://ror.org/00k642b80 |
| authorships[1].institutions[0].type | company |
| authorships[1].institutions[0].lineage | https://openalex.org/I45928872 |
| authorships[1].institutions[0].country_code | CN |
| authorships[1].institutions[0].display_name | Alibaba Group (China) |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | ShiLiang Zhang |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Speech Lab, Alibaba Group, China |
| authorships[2].author.id | https://openalex.org/A5000620878 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-7111-2008 |
| authorships[2].author.display_name | Ian McLoughlin |
| authorships[2].countries | SG |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I168639165 |
| authorships[2].affiliations[0].raw_affiliation_string | ICT Cluster, Singapore Institute of Technology, Singapore |
| authorships[2].institutions[0].id | https://openalex.org/I168639165 |
| authorships[2].institutions[0].ror | https://ror.org/01v2c2791 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I168639165 |
| authorships[2].institutions[0].country_code | SG |
| authorships[2].institutions[0].display_name | Singapore Institute of Technology |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Ian McLoughlin |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | ICT Cluster, Singapore Institute of Technology, Singapore |
| authorships[3].author.id | https://openalex.org/A5061850214 |
| authorships[3].author.orcid | https://orcid.org/0000-0001-8065-0748 |
| authorships[3].author.display_name | Zhijie Yan |
| authorships[3].countries | CN |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I45928872 |
| authorships[3].affiliations[0].raw_affiliation_string | Speech Lab, Alibaba Group, China |
| authorships[3].institutions[0].id | https://openalex.org/I45928872 |
| authorships[3].institutions[0].ror | https://ror.org/00k642b80 |
| authorships[3].institutions[0].type | company |
| authorships[3].institutions[0].lineage | https://openalex.org/I45928872 |
| authorships[3].institutions[0].country_code | CN |
| authorships[3].institutions[0].display_name | Alibaba Group (China) |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Zhijie Yan |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Speech Lab, Alibaba Group, China |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://figshare.com/articles/conference_contribution/Paraformer_Fast_and_Accurate_Parallel_Transformer_for_Non-autoregressive_End-to-End_Speech_Recognition/24217320 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10201 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9919000267982483 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Speech Recognition and Synthesis |
| related_works | https://openalex.org/W2899084033, https://openalex.org/W2748952813, https://openalex.org/W3179968364, https://openalex.org/W2150410159, https://openalex.org/W2390279801, https://openalex.org/W1972271943, https://openalex.org/W3196421258, https://openalex.org/W2358668433, https://openalex.org/W3150905897, https://openalex.org/W4327525404 |
| cited_by_count | 77 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 39 |
| counts_by_year[1].year | 2024 |
| counts_by_year[1].cited_by_count | 31 |
| counts_by_year[2].year | 2023 |
| counts_by_year[2].cited_by_count | 7 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:figshare.com:article/24217320 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400572 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | False |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | OPAL (Open@LaTrobe) (La Trobe University) |
| best_oa_location.source.host_organization | https://openalex.org/I196829312 |
| best_oa_location.source.host_organization_name | La Trobe University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I196829312 |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | Text |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | https://figshare.com/articles/conference_contribution/Paraformer_Fast_and_Accurate_Parallel_Transformer_for_Non-autoregressive_End-to-End_Speech_Recognition/24217320 |
| primary_location.id | doi:10.21437/interspeech.2022-9996 |
| primary_location.is_oa | False |
| primary_location.source.id | https://openalex.org/S4363604309 |
| primary_location.source.issn | |
| primary_location.source.type | conference |
| primary_location.source.is_oa | False |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | Interspeech 2022 |
| primary_location.source.host_organization | |
| primary_location.source.host_organization_name | |
| primary_location.license | |
| primary_location.pdf_url | |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Interspeech 2022 |
| primary_location.landing_page_url | https://doi.org/10.21437/interspeech.2022-9996 |
| publication_date | 2022-09-16 |
| publication_year | 2022 |
| referenced_works | https://openalex.org/W4385245566, https://openalex.org/W2962780374, https://openalex.org/W3175665465, https://openalex.org/W2988975212, https://openalex.org/W3033604713, https://openalex.org/W3160622492, https://openalex.org/W3097777922, https://openalex.org/W2143612262, https://openalex.org/W2963212250, https://openalex.org/W2963434219, https://openalex.org/W3196500669, https://openalex.org/W2972686346, https://openalex.org/W2889048668, https://openalex.org/W4303633466, https://openalex.org/W2327501763, https://openalex.org/W3162431424, https://openalex.org/W3197304116, https://openalex.org/W4225299129, https://openalex.org/W3097580812, https://openalex.org/W2963242190, https://openalex.org/W3097882114, https://openalex.org/W2963747784, https://openalex.org/W3097874139, https://openalex.org/W3140684480, https://openalex.org/W3095687747, https://openalex.org/W2767206889, https://openalex.org/W4221151577, https://openalex.org/W4287117559, https://openalex.org/W3014413043 |
| referenced_works_count | 29 |
| abstract_inverted_index | (word-to-positions encoding of the abstract shown at the top of this page; individual entries omitted) |
| cited_by_percentile_year.max | 100 |
| cited_by_percentile_year.min | 98 |
| countries_distinct_count | 2 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile.value | 0.98339981 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
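The `abstract_inverted_index` field (collapsed in the table above) stores the abstract as a word-to-positions map rather than plain text. A minimal sketch of turning it back into the abstract shown at the top of this page, assuming the `work` dict from the API example earlier:

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild plain text from OpenAlex's inverted index, which maps each
    word to the list of token positions where it occurs."""
    positions = [(i, word) for word, idxs in inverted_index.items() for i in idxs]
    return " ".join(word for _, word in sorted(positions))

# reconstruct_abstract(work["abstract_inverted_index"]) yields the abstract text.
```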