Abstract
Large language models (LLMs) have exhibited notable general-purpose task-solving abilities in language understanding and generation, including the handling of recommendation tasks. Most existing research relies on training-free recommendation models that treat LLMs as reasoning engines and directly generate responses to recommendation tasks. This approach depends heavily on pre-trained knowledge and may incur excessive costs. We therefore propose a two-stage fine-tuning framework that leverages LLaMA2 and GPT-4 knowledge enhancement for recommendation. In the first stage, we tune the LLM on GPT-4 instruction-following data, achieving lower training costs and better inference performance. In the second stage, using an elaborately designed prompt template, we fine-tune the first-stage LLM in a few-shot setting on interaction sequences derived from user ratings. To validate the effectiveness of our framework, we compare it against state-of-the-art baseline methods on benchmark datasets. The results demonstrate that our framework has promising recommendation capabilities. All experiments are executed on a single RTX 4090 with LLaMA2-7B.
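To make the two-stage procedure concrete, the following is a minimal sketch (not the authors' released code) of how such a setup could be implemented with a parameter-efficient LoRA adapter on LLaMA2-7B. The library choice (Hugging Face Transformers/PEFT), the prompt template, the dataset field names, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a two-stage parameter-efficient fine-tuning setup (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)  # only the LoRA adapter weights are trained


def build_stage2_prompt(history, candidates):
    """Assumed few-shot prompt template for the second (recommendation) stage."""
    history_str = "\n".join(f"- {title} (rating {rating}/5)" for title, rating in history)
    candidate_str = ", ".join(candidates)
    return (
        "Below is a user's interaction history with ratings.\n"
        f"{history_str}\n"
        f"Candidate items: {candidate_str}\n"
        "Which candidate should be recommended next? Answer with the item title.\n"
    )


def tokenize(example):
    # Stage 1 uses GPT-4 instruction-following pairs; stage 2 uses prompts built
    # from rating histories as above. Both reduce to (prompt, target) text pairs.
    text = example["prompt"] + example["target"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)


# Each stage is one standard causal-LM fine-tuning pass over its own dataset:
#   stage 1: GPT-4 instruction data  -> adapter checkpoint
#   stage 2: recommendation prompts  -> continue tuning the same adapter
# args = TrainingArguments(output_dir="stage1", per_device_train_batch_size=4,
#                          num_train_epochs=1, learning_rate=2e-4, fp16=True)
# Trainer(model=model, args=args, train_dataset=stage1_data.map(tokenize)).train()
```

Keeping both stages as ordinary causal-LM passes over a shared adapter is one plausible way to stay within a single RTX 4090's memory budget; the paper itself may use a different parameter-efficient scheme.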
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zeng, B., Shi, H., Li, Y., Li, R., Deng, H. (2025). Leveraging Large Language Models Knowledge Enhancement Dual-Stage Fine-Tuning Framework for Recommendation. In: Wong, D.F., Wei, Z., Yang, M. (eds) Natural Language Processing and Chinese Computing. NLPCC 2024. Lecture Notes in Computer Science, vol. 15360. Springer, Singapore. https://doi.org/10.1007/978-981-97-9434-8_26
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-9433-1
Online ISBN: 978-981-97-9434-8