Conclusion
This letter investigates the sample efficiency of recommender systems enhanced by large language models. We propose a simple yet effective framework (i.e., Laser) to validate the core viewpoint that large language models make recommender systems sample-efficient, from two aspects: (1) LLMs themselves are sample-efficient recommenders; and (2) LLMs make conventional recommender systems more sample-efficient. For future work, we aim to improve the sample efficiency of LLM-based recommender systems from two directions: (1) exploring effective strategies for selecting the few-shot training samples instead of sampling uniformly, and (2) applying Laser to downstream applications such as code snippet recommendation.
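The uniform sampling mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the sample-size parameter `k`, and the placeholder training set are all hypothetical:

```python
import random

def sample_few_shot(train_set, k, seed=42):
    """Uniformly sample k few-shot training examples from the full set.

    A selection strategy (the future work noted above) would replace this
    uniform draw with, e.g., diversity- or difficulty-aware selection.
    """
    rng = random.Random(seed)  # fixed seed for reproducible subsets
    return rng.sample(train_set, k)

# Hypothetical usage: pick 64 examples from 10,000 interaction records.
few_shot = sample_few_shot(list(range(10_000)), k=64)
```

With a fixed seed the subset is reproducible, which matters when comparing few-shot fine-tuning runs across sample-selection strategies.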
References
[1] Zhang J, Bao K, Zhang Y, Wang W, Feng F, He X. Large language models for recommendation: progresses and future directions. In: Proceedings of the ACM on Web Conference 2024. 2024, 1268–1271
[2] Pan X, Wu L, Long F, Ma A. Exploiting user behavior learning for personalized trajectory recommendations. Frontiers of Computer Science, 2022, 16(3): 163610
[3] MindSpore, 2020
Acknowledgements
The Shanghai Jiao Tong University team was partially supported by the National Natural Science Foundation of China (Grant No. 62177033). Jianghao Lin is supported by the Wu Wen Jun Honorary Doctoral Scholarship. The work was sponsored by the Huawei Innovation Research Program. We thank MindSpore [3], a new deep learning computing framework, for the partial support of this work.
Ethics declarations
Competing interests The authors declare that they have no competing interests or financial conflicts to disclose.
Cite this article
Lin, J., Dai, X., Shan, R. et al. Large language models make sample-efficient recommender systems. Front. Comput. Sci. 19, 194328 (2025). https://doi.org/10.1007/s11704-024-40039-z