Abstract
Learning Using Privileged Information (LUPI) is a form of knowledge distillation in which the teacher model has access to an additional data representation during training, called privileged information, improving the student model even though the student never sees the extra representation. However, privileged information is rarely available in practice. To this end, we propose a text classification framework that harnesses text-to-image diffusion models to generate artificial privileged information. The generated images and the original text samples are used to train multimodal teacher models based on state-of-the-art transformer architectures. Finally, the knowledge from the multimodal teachers is distilled into a text-based (unimodal) student. Hence, by employing a generative model to produce synthetic data as privileged information, we guide the training of the student model. Our framework, called Learning Using Generated Privileged Information (LUGPI), yields noticeable performance gains on four text classification datasets, demonstrating its potential in text classification without any additional cost during inference.
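The distillation step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract does not specify the exact objective, so the loss form (cross-entropy on the label plus a temperature-scaled KL term toward the multimodal teacher's soft predictions, in the style of Hinton et al.), the weighting `alpha`, and the temperature `T` are all assumptions, and the function names are hypothetical.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lugpi_student_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    # Hypothetical sketch: combine supervised cross-entropy with distillation
    # from a (frozen) multimodal teacher trained on text + generated images.
    # Standard cross-entropy on the ground-truth label.
    ce = -np.log(softmax(student_logits)[label] + 1e-12)
    # KL divergence from the teacher's softened predictions to the student's.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))
    # The T**2 factor keeps the gradient magnitudes of the two terms comparable.
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

At inference time only `student_logits` are needed, which is why the framework adds no cost at test time: the image generation and the multimodal teacher exist only during training.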
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Menadil, RE., Georgescu, MI., Ionescu, R.T. (2025). Learning Using Generated Privileged Information by Text-to-Image Diffusion Models. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15307. Springer, Cham. https://doi.org/10.1007/978-3-031-78183-4_27
DOI: https://doi.org/10.1007/978-3-031-78183-4_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-78182-7
Online ISBN: 978-3-031-78183-4
eBook Packages: Computer Science; Computer Science (R0)