
Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs

Published: 28 March 2024

Abstract

Learning on graphs has attracted immense attention due to its wide real-world applications. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs) and utilizes shallow text embeddings as initial node representations, which limits the pipeline's access to general knowledge and deep semantic understanding. In recent years, Large Language Models (LLMs) have been shown to possess extensive common knowledge and powerful semantic comprehension abilities, which have revolutionized existing workflows for handling text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generates predictions through GNNs. The latter attempts to employ LLMs directly as standalone predictors. We conduct comprehensive and systematic studies of these two pipelines under various settings. From the empirical results, we make original observations and find new insights that open up new possibilities and suggest promising directions for leveraging LLMs for learning on graphs. Our code and datasets are available at: https://github.com/CurryTang/Graph-LLM.
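To make the two pipelines concrete, the minimal, self-contained Python sketch below contrasts them on a toy four-node citation graph. This is our own illustration, not code from the paper's repository, and every helper is an assumption: fake_llm_enhance and fake_llm_predict stand in for real LLM calls, embed is a bag-of-characters stand-in for a text encoder, and the "GNN" is a single untrained mean-aggregation layer rather than any architecture the paper benchmarks.

import torch

texts = ["GNN survey", "BERT paper", "GCN paper", "LLM report"]
edges = [(0, 2), (2, 0), (1, 3), (3, 1)]  # directed citation edges

def fake_llm_enhance(text: str) -> str:
    # Stand-in for an LLM call that expands a node's raw text attribute.
    return text + " (expanded with background knowledge by an LLM)"

def embed(text: str, dim: int = 8) -> torch.Tensor:
    # Stand-in text encoder: hash characters into a bag-of-chars vector.
    v = torch.zeros(dim)
    for ch in text:
        v[ord(ch) % dim] += 1.0
    return v

# Pipeline 1: LLMs-as-Enhancers. The LLM only enriches the node text;
# a GNN (one mean-aggregation step + linear head) makes the prediction.
x = torch.stack([embed(fake_llm_enhance(t)) for t in texts])
agg, deg = torch.zeros_like(x), torch.zeros(len(texts))
for s, d in edges:
    agg[d] += x[s]
    deg[d] += 1
h = (x + agg / deg.clamp(min=1).unsqueeze(1)) / 2
logits = torch.nn.Linear(8, 2)(h)  # untrained head, shown for shape only
print("enhancer pipeline logits:", logits.shape)  # torch.Size([4, 2])

# Pipeline 2: LLMs-as-Predictors. Graph structure is serialized into the
# prompt and the LLM emits the class label directly, with no GNN at all.
def fake_llm_predict(prompt: str) -> str:
    return "graph ML" if "GNN" in prompt or "GCN" in prompt else "NLP"

node = 0
neighbor_texts = [texts[s] for s, d in edges if d == node]
prompt = f"Paper: {texts[node]}\nLinked papers: {neighbor_texts}\nCategory?"
print("predictor pipeline label:", fake_llm_predict(prompt))

The contrast to note is where graph structure enters: the enhancer pipeline keeps it inside the GNN's message passing, whereas the predictor pipeline must flatten neighborhood information into the prompt, a difference the paper studies under various settings.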



Published In

ACM SIGKDD Explorations Newsletter, Volume 25, Issue 2 (December 2023), 58 pages
ISSN: 1931-0145
EISSN: 1931-0153
DOI: 10.1145/3655103

Publisher

Association for Computing Machinery, New York, NY, United States

