
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura


Abstract
Dataset distillation aims to compress a training dataset by creating a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original training dataset. Current text dataset distillation methods create each synthetic sample as a sequence of word embeddings rather than as text in order to apply gradient-based optimization; however, such embedding-level distilled datasets cannot be used to train other models whose word embedding weights differ from those of the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing the synthetic samples. We evaluated DiLM on various text classification datasets and showed that distilled synthetic datasets from DiLM outperform those from current coreset selection methods. DiLM achieved remarkable generalization performance both in training different types of models and for in-context learning of large language models. Our code will be available at https://github.com/arumaekawa/DiLM.
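To make the data flow concrete, below is a minimal, hypothetical sketch of text-level dataset distillation in the spirit the abstract describes: a generator language model produces labeled synthetic texts, and only those texts are handed to downstream training. This is not DiLM's actual objective (DiLM trains the generator so that models trained on its samples mimic models trained on the real data); the GPT-2 checkpoint, the sentiment labels, and the prompts are illustrative assumptions.

```python
# Hypothetical sketch only: an off-the-shelf generator LM emits labeled
# synthetic texts that stand in for the original training set. DiLM itself
# trains the generator with a distillation objective; that step is omitted here.
from transformers import pipeline

LABELS = ["negative", "positive"]  # assumed binary sentiment task

# Generator LM (DiLM would fine-tune/optimize this model; we just sample from it).
generator = pipeline("text-generation", model="gpt2")

def generate_synthetic_set(n_per_label=10):
    """Sample short synthetic texts for each label via label-conditioned prompts."""
    synthetic = []
    for label in LABELS:
        prompt = f"Write a {label} movie review:"
        outputs = generator(
            prompt,
            max_new_tokens=30,
            num_return_sequences=n_per_label,
            do_sample=True,
            pad_token_id=50256,  # GPT-2 EOS token used as pad
        )
        for out in outputs:
            text = out["generated_text"][len(prompt):].strip()
            synthetic.append({"text": text, "label": label})
    return synthetic

if __name__ == "__main__":
    distilled = generate_synthetic_set(n_per_label=5)
    for ex in distilled[:4]:
        print(ex["label"], "->", ex["text"][:60])
```

Because the distilled set is plain text, any downstream model (a BERT classifier, a different tokenizer, or an LLM prompted via in-context learning) can consume it, which is exactly the portability that embedding-level distilled datasets lack.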
Anthology ID: 2024.findings-naacl.199
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3138–3153
URL: https://aclanthology.org/2024.findings-naacl.199
DOI: 10.18653/v1/2024.findings-naacl.199
Cite (ACL): Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, and Manabu Okumura. 2024. DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3138–3153, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation (Maekawa et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.199.pdf