DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation
2024 · Open Access · DOI: https://doi.org/10.48550/arxiv.2404.00264
Dataset distillation aims to compress a training dataset by creating a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original training dataset. Current text dataset distillation methods create each synthetic sample as a sequence of word embeddings instead of text in order to apply gradient-based optimization; however, such embedding-level distilled datasets cannot be used to train other models whose word embedding weights differ from those of the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing synthetic samples. We evaluated DiLM on various text classification datasets and showed that distilled synthetic datasets from DiLM outperform those from current coreset selection methods. DiLM achieved remarkable generalization performance in training different types of models and in in-context learning of large language models. Our code will be available at https://github.com/arumaekawa/DiLM.
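The generate-then-train pipeline the abstract describes can be illustrated with a deliberately tiny, stdlib-only sketch. This is not the DiLM algorithm (which trains an actual language model with gradient matching); here a per-class unigram distribution stands in for the trained generator, and a word-overlap classifier stands in for the downstream model. All data and function names are illustrative assumptions.

```python
import random
from collections import Counter

# Toy stand-in for the pipeline shape: fit a "generator" per class on real
# text, emit a few synthetic samples AS TEXT, then train a downstream model
# only on those synthetic samples. Not DiLM itself.

REAL_DATA = {
    "pos": ["great movie loved it", "wonderful acting great plot"],
    "neg": ["terrible movie hated it", "awful acting boring plot"],
}

def fit_unigram(texts):
    """Stand-in generator: a unigram word distribution for one class."""
    counts = Counter(w for t in texts for w in t.split())
    words, weights = zip(*counts.items())
    return words, weights

def sample_text(model, n_words, rng):
    """Draw a synthetic text sample (plain words, not embeddings)."""
    words, weights = model
    return " ".join(rng.choices(words, weights=weights, k=n_words))

rng = random.Random(0)
generators = {label: fit_unigram(texts) for label, texts in REAL_DATA.items()}

# The "distilled dataset": a handful of synthetic text samples per class.
synthetic = {label: [sample_text(g, 4, rng) for _ in range(3)]
             for label, g in generators.items()}

# Train a downstream model (here: per-class word sets) on synthetic text only.
profiles = {label: set(" ".join(texts).split())
            for label, texts in synthetic.items()}

def classify(text):
    """Predict the class whose synthetic vocabulary overlaps most."""
    return max(profiles, key=lambda lb: len(profiles[lb] & set(text.split())))

print(classify("loved the great plot"))
```

Because the synthetic samples are ordinary text, any downstream model with any tokenizer or embedding table could consume them, which is exactly the portability argument the abstract makes against embedding-level distillation.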
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2404.00264
- PDF: https://arxiv.org/pdf/2404.00264
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4393762824
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4393762824 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2404.00264 (Digital Object Identifier)
- Title: DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024 (year of publication)
- Publication date: 2024-03-30 (full publication date if available)
- Authors: Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura (list of authors in order)
- Landing page: https://arxiv.org/abs/2404.00264 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2404.00264 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2404.00264 (direct OA link when available)
- Concepts: Distillation, Computer science, Natural language processing, Language model, Artificial intelligence, Chemistry, Chromatography (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| field | value |
|---|---|
| id | https://openalex.org/W4393762824 |
| doi | https://doi.org/10.48550/arxiv.2404.00264 |
| ids.doi | https://doi.org/10.48550/arxiv.2404.00264 |
| ids.openalex | https://openalex.org/W4393762824 |
| fwci | |
| type | preprint |
| title | DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10181 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9524000287055969 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Natural Language Processing Techniques |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C204030448 |
| concepts[0].level | 2 |
| concepts[0].score | 0.6846022605895996 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q101017 |
| concepts[0].display_name | Distillation |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.6494407653808594 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C204321447 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5052507519721985 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[2].display_name | Natural language processing |
| concepts[3].id | https://openalex.org/C137293760 |
| concepts[3].level | 2 |
| concepts[3].score | 0.4136425256729126 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q3621696 |
| concepts[3].display_name | Language model |
| concepts[4].id | https://openalex.org/C154945302 |
| concepts[4].level | 1 |
| concepts[4].score | 0.3636689782142639 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[4].display_name | Artificial intelligence |
| concepts[5].id | https://openalex.org/C185592680 |
| concepts[5].level | 0 |
| concepts[5].score | 0.0942370593547821 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[5].display_name | Chemistry |
| concepts[6].id | https://openalex.org/C43617362 |
| concepts[6].level | 1 |
| concepts[6].score | 0.08383271098136902 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q170050 |
| concepts[6].display_name | Chromatography |
| keywords[0].id | https://openalex.org/keywords/distillation |
| keywords[0].score | 0.6846022605895996 |
| keywords[0].display_name | Distillation |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.6494407653808594 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/natural-language-processing |
| keywords[2].score | 0.5052507519721985 |
| keywords[2].display_name | Natural language processing |
| keywords[3].id | https://openalex.org/keywords/language-model |
| keywords[3].score | 0.4136425256729126 |
| keywords[3].display_name | Language model |
| keywords[4].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[4].score | 0.3636689782142639 |
| keywords[4].display_name | Artificial intelligence |
| keywords[5].id | https://openalex.org/keywords/chemistry |
| keywords[5].score | 0.0942370593547821 |
| keywords[5].display_name | Chemistry |
| keywords[6].id | https://openalex.org/keywords/chromatography |
| keywords[6].score | 0.08383271098136902 |
| keywords[6].display_name | Chromatography |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2404.00264 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2404.00264 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2404.00264 |
| locations[1].id | doi:10.48550/arxiv.2404.00264 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2404.00264 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5111702849 |
| authorships[0].author.orcid | |
| authorships[0].author.display_name | Aru Maekawa |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Maekawa, Aru |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5010134544 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-7556-9072 |
| authorships[1].author.display_name | Satoshi Kosugi |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Kosugi, Satoshi |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5069989297 |
| authorships[2].author.orcid | https://orcid.org/0000-0002-4529-4634 |
| authorships[2].author.display_name | Kotaro Funakoshi |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Funakoshi, Kotaro |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5035876897 |
| authorships[3].author.orcid | |
| authorships[3].author.display_name | Manabu Okumura |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Okumura, Manabu |
| authorships[3].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2404.00264 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2024-04-03T00:00:00 |
| display_name | DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10181 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9524000287055969 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Natural Language Processing Techniques |
| related_works | https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W2358668433, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W2382290278, https://openalex.org/W2478288626, https://openalex.org/W4391913857, https://openalex.org/W2350741829, https://openalex.org/W3204019825 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2404.00264 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2404.00264 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2404.00264 |
| primary_location.id | pmh:oai:arXiv.org:2404.00264 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2404.00264 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2404.00264 |
| publication_date | 2024-03-30 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (token-to-positions map of the abstract; the full abstract text appears above) |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile |
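OpenAlex ships abstracts only as an `abstract_inverted_index`: a mapping from each token to the list of word positions where it occurs. A small helper can turn that map back into plain text. The helper name and the five-token example index below are illustrative; the reconstruction logic follows the documented field format.

```python
# Reconstruct a plain-text abstract from an OpenAlex `abstract_inverted_index`,
# which maps each token to the word positions where it appears.

def uninvert(inverted_index):
    # Flatten to (position, token) pairs, sort by position, and join.
    pairs = [(pos, token)
             for token, positions in inverted_index.items()
             for pos in positions]
    return " ".join(token for _, token in sorted(pairs))

# Tiny example (first five tokens only, not the full index from this record):
idx = {"Dataset": [0], "distillation": [1], "aims": [2], "to": [3], "compress": [4]}
print(uninvert(idx))  # → Dataset distillation aims to compress
```

Sorting by position handles tokens that occur multiple times (each occurrence contributes its own pair), so the joined string recovers the original word order exactly.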