
Different paths to the same destination: Diversifying LLMs generation for multi-hop open-domain question answering

Published: 18 February 2025

Abstract

The powerful natural language reasoning capabilities of large language models (LLMs) have led to their widespread application in knowledge-intensive multi-hop question answering. However, when answering questions that admit multiple possible solutions, Chain-of-Thought (CoT) based methods that rely on a single reasoning path perform only moderately well, because there is no opportunity to correct the reasoning process once an error occurs. To address this, we propose DP-CoT, which exploits diverse generation for multi-hop question reasoning. Concretely, we introduce two methods for generating diverse evidence at different granularities: passage-level sampling and sentence-level proposal generation. Meanwhile, we train a BERT-style evidence classifier to prune the reasoning paths. Finally, we integrate the best-performing classifier into the reasoning module to obtain an end-to-end framework. We evaluate DP-CoT on several widely used multi-hop open-domain question answering datasets and achieve highly competitive results against state-of-the-art baselines. Specifically, compared to IRCoT with GPT3 as the backbone language model, DP-CoT achieves recall improvements of 4.8% and 1.1% on the HotpotQA and 2WikiMultihopQA datasets, respectively. Extensive experimental results validate the effectiveness of our method. Code and data are available at https://github.com/XD-BDIV-NLP/DP-CoT.
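The generate-then-prune loop the abstract describes can be sketched as follows. This is only an illustrative stand-in, not the authors' implementation: `generate_candidates` and `EvidenceClassifier` are hypothetical names, the diverse generation is mocked rather than produced by an LLM, and the classifier score is a toy heuristic in place of the paper's BERT-style model.

```python
import random

def generate_candidates(question, level, n=4):
    """Stand-in for diverse LLM decoding (e.g. temperature sampling).
    'passage' level would sample whole retrieved passages; 'sentence'
    level would propose individual evidence sentences."""
    return [f"{level}-candidate-{i} for: {question}" for i in range(n)]

class EvidenceClassifier:
    """Stand-in for the BERT-style classifier that scores whether a
    candidate piece of evidence supports the reasoning path."""
    def score(self, question, candidate):
        # A real system would run an encoder here; we use a toy score.
        return random.random()

def dp_cot_step(question, classifier, keep=2):
    # 1. Diverse generation at two granularities.
    candidates = (generate_candidates(question, "passage")
                  + generate_candidates(question, "sentence"))
    # 2. Prune: rank by classifier score and keep the top paths.
    ranked = sorted(candidates,
                    key=lambda c: classifier.score(question, c),
                    reverse=True)
    return ranked[:keep]

random.seed(0)
kept = dp_cot_step("Who directed the film that won Best Picture in 1998?",
                   EvidenceClassifier())
print(kept)  # the surviving evidence candidates after pruning
```

In the actual framework the surviving candidates would be fed back into the next reasoning hop; the sketch only shows one generate-and-prune step.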


Highlights

We propose DP-CoT, which utilizes two-level diverse CoT generation.
We design a BERT-style evidence classifier for reasoning-path pruning.
DP-CoT achieves a 1.1%–4.8% recall improvement over GPT3-based baselines.


Published In

Knowledge-Based Systems, Volume 309, Issue C, Jan 2025, 1589 pages

Publisher

Elsevier Science Publishers B.V., Netherlands


Author Tags

  1. Multi-hop question answering
  2. Chain-of-thought prompting
  3. Open-domain retrieval
  4. Large language models

Qualifiers

  • Research-article
