
Deep Reinforcement Learning with Hierarchical Action Exploration for Dialogue Generation

Itsugun Cho, Ryota Takahashi, Yusaku Yanase, Hiroaki Saito


Abstract
Traditionally, approximate dynamic programming is employed in dialogue generation with greedy policy improvement through action sampling, as the natural language action space is vast. However, this practice is inefficient for reinforcement learning (RL) due to the sparsity of eligible responses with high action values, which leads to weak improvement sustained by random sampling. This paper presents theoretical analysis and experiments revealing that the performance of the dialogue policy is positively correlated with the sampling size. To overcome this limitation, we introduce a novel dual-granularity Q-function that explores the most promising response category to intervene in the sampling process. Our approach extracts actions following a coarse-to-fine granularity hierarchy, thereby achieving the optimum with fewer policy iterations. Additionally, we use offline RL and learn from multiple reward functions designed to capture emotional nuances in human interactions. Empirical studies demonstrate that our algorithm outperforms baselines across automatic metrics and human evaluations. Further testing reveals that our algorithm exhibits both explainability and controllability, and generates responses with higher expected rewards.
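The authors' implementation is not shown on this page; the snippet below is only a minimal sketch of the coarse-to-fine sampling idea summarized in the abstract: a coarse-grained Q-function first picks the most promising response category, and a fine-grained Q-function then ranks responses sampled only from that category. Every name here (coarse_q, fine_q, hierarchical_sample, the toy categories and candidate responses, and the random placeholder values) is a hypothetical illustration, not the paper's actual model or code.

```python
# Hypothetical sketch (not the authors' code): a coarse Q-function over response
# categories decides where to sample, and a fine Q-function ranks the sampled
# candidates. In the paper both would be learned networks; here they are random
# stubs so the example runs standalone.
import random

def coarse_q(state, category):
    # Placeholder for the coarse-grained value of replying with this category.
    return random.random()

def fine_q(state, response):
    # Placeholder for the fine-grained action value of a concrete response.
    return random.random()

def hierarchical_sample(state, candidates_per_category, n_samples=8):
    """Pick the most promising category first, then sample candidate responses
    only from that category instead of the whole action space."""
    best_category = max(candidates_per_category, key=lambda c: coarse_q(state, c))
    pool = candidates_per_category[best_category]
    sampled = random.sample(pool, min(n_samples, len(pool)))
    return best_category, max(sampled, key=lambda r: fine_q(state, r))

# Toy usage with two hand-written response categories.
state = "User: I failed my exam today."
candidates = {
    "empathetic": ["I'm so sorry to hear that.", "That must feel really discouraging."],
    "informative": ["Many schools allow a retake next term.", "A study plan could help next time."],
}
print(hierarchical_sample(state, candidates))
```

In the paper's setting, both Q-functions would be trained with offline RL from the reward functions described in the abstract; they are stubbed out with random values here purely to keep the sketch runnable.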
Anthology ID: 2024.lrec-main.408
Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month: May
Year: 2024
Address: Torino, Italia
Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues: LREC | COLING
Publisher: ELRA and ICCL
Pages: 4566–4579
URL: https://aclanthology.org/2024.lrec-main.408
Cite (ACL): Itsugun Cho, Ryota Takahashi, Yusaku Yanase, and Hiroaki Saito. 2024. Deep Reinforcement Learning with Hierarchical Action Exploration for Dialogue Generation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 4566–4579, Torino, Italia. ELRA and ICCL.
Cite (Informal): Deep Reinforcement Learning with Hierarchical Action Exploration for Dialogue Generation (Cho et al., LREC-COLING 2024)
PDF: https://aclanthology.org/2024.lrec-main.408.pdf