
DYME: A Dynamic Metric for Dialog Modeling Learned from Human Conversations

  • Conference paper
  • In: Neural Information Processing (ICONIP 2021)

Abstract

With increasing capabilities of dialog generation methods, modeling human conversation characteristics to steer the dialog generation towards natural, human-like interactions has garnered research interest. So far, dialogs have mostly been modeled with developer-defined, static metrics. This work shows that metrics change within individual conversations and differ between conversations, illustrating the need for flexible metrics to model human dialogs. We propose DYME, a DYnamic MEtric for dialog modeling learned from human conversational data with a neural-network-based approach. DYME outperforms a moving average baseline in predicting the metrics for the next utterance of a given conversation by about 20%, demonstrating the ability of this new approach to model dynamic human communication characteristics.
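The moving-average baseline that DYME is compared against can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the metric names, array shapes, and window choice are assumptions; the idea is simply to predict the next utterance's metric vector from the average of the metric vectors observed so far in the conversation.

```python
import numpy as np

def moving_average_baseline(metric_history: np.ndarray) -> np.ndarray:
    """Predict the next utterance's metrics as the mean of all
    previous utterances' metrics in the conversation.

    metric_history: array of shape (num_utterances, num_metrics),
    one row of metric values per utterance seen so far.
    """
    return metric_history.mean(axis=0)

# Example: three utterances, two hypothetical metrics
# (e.g. a sentiment score and an utterance length)
history = np.array([[0.2, 10.0],
                    [0.4, 12.0],
                    [0.6, 14.0]])
prediction = moving_average_baseline(history)  # → [0.4, 12.0]
```

A learned model such as DYME would replace this fixed averaging rule with a neural network conditioned on the conversation, which is what allows it to capture metric dynamics that a static average cannot.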

F. von Unold and M. Wintergerst contributed equally.



Notes

  1. Code and models on GitHub: https://github.com/florianvonunold/DYME.


Author information

Corresponding author: Monika Wintergerst.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

von Unold, F., Wintergerst, M., Belzner, L., Groh, G. (2021). DYME: A Dynamic Metric for Dialog Modeling Learned from Human Conversations. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1516. Springer, Cham. https://doi.org/10.1007/978-3-030-92307-5_30


  • DOI: https://doi.org/10.1007/978-3-030-92307-5_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92306-8

  • Online ISBN: 978-3-030-92307-5

  • eBook Packages: Computer Science (R0)
