DOI: 10.5555/1944506.1944537

Non-humanlike spoken dialogue: a design perspective

Published: 24 September 2010

Abstract

We propose a non-humanlike spoken dialogue design that consists of two elements: non-humanlike turn-taking and non-humanlike acknowledgment. Two experimental studies are reported in this paper. The first study shows that the proposed non-humanlike spoken dialogue design is effective for reducing speech collisions; it also presents evidence that quick, humanlike turn-taking is less important in spoken dialogue system design. The second study supports a hypothesis suggested by the first study: that user preferences for response timing vary depending on interaction patterns. Based on these results, the paper proposes a practical design guideline for spoken dialogue systems.
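
The abstract describes the design only at a high level, but its two elements can be pictured concretely: the system deliberately responds more slowly than humanlike timing would dictate, and it bridges that pause with an artificial acknowledgment cue so the user knows the input was heard and is less likely to start speaking again and collide with the delayed response. The Python sketch below is a hypothetical illustration of that reading of the abstract, not the authors' implementation; the class name, method name, delay value, and the textual "blip" cue are all assumptions made for illustration.

import time

# Illustrative sketch only. The paper's actual system is not described in the
# abstract; every name and value here (NonHumanlikeTurnManager,
# on_user_utterance_end, the 1.5 s delay, the "*blip*" cue) is hypothetical.

class NonHumanlikeTurnManager:
    """Toy turn manager contrasting quick 'humanlike' replies with a
    deliberately slow, acknowledgment-first strategy."""

    def __init__(self, response_delay_s: float = 1.5, use_acknowledgment: bool = True):
        # A pause well above typical human turn-transition gaps (assumed value).
        self.response_delay_s = response_delay_s
        self.use_acknowledgment = use_acknowledgment

    def on_user_utterance_end(self, user_text: str) -> str:
        if self.use_acknowledgment:
            # Non-humanlike acknowledgment: an artificial cue (stand-in: a printed
            # "blip") signals that the input was received, so the user is less
            # likely to re-speak and collide with the delayed system response.
            print("[system cue] *blip*")
        # Non-humanlike turn-taking: respond only after a fixed, noticeable pause
        # instead of aiming for quick, humanlike timing.
        time.sleep(self.response_delay_s)
        return f"System response to: {user_text!r}"

if __name__ == "__main__":
    manager = NonHumanlikeTurnManager(response_delay_s=1.5, use_acknowledgment=True)
    print(manager.on_user_utterance_end("book a table for two at seven"))

In this toy contrast, setting use_acknowledgment to False and response_delay_s close to zero would roughly correspond to the quick, humanlike behavior that the abstract argues against prioritizing.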




Published In

SIGDIAL '10: Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue
September 2010
347 pages
ISBN: 9781932432855

Publisher

Association for Computational Linguistics

United States

Qualifiers

  • Research-article

Acceptance Rates

Overall Acceptance Rate: 19 of 46 submissions (41%)

Article Metrics

  • Downloads (last 12 months): 20
  • Downloads (last 6 weeks): 5
Reflects downloads up to 12 Dec 2024


Cited By

  • (2018) Vibrational Artificial Subtle Expressions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 10.1145/3173574.3174052, pp. 1-9. Online publication date: 21-Apr-2018.
  • (2013) Expressing a robot's confidence with motion-based artificial subtle expressions. CHI '13 Extended Abstracts on Human Factors in Computing Systems, 10.1145/2468356.2468539, pp. 1023-1028. Online publication date: 27-Apr-2013.
  • (2012) Can users live with overconfident or unconfident systems? CHI '12 Extended Abstracts on Human Factors in Computing Systems, 10.1145/2212776.2223678, pp. 1595-1600. Online publication date: 5-May-2012.
  • (2011) Interpretations of artificial subtle expressions (ASEs) in terms of different types of artifact. Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction - Volume Part II, 10.5555/2062850.2062854, pp. 22-30. Online publication date: 9-Oct-2011.
  • (2011) Effects of different types of artifacts on interpretations of artificial subtle expressions (ASEs). CHI '11 Extended Abstracts on Human Factors in Computing Systems, 10.1145/1979742.1979756, pp. 1249-1254. Online publication date: 7-May-2011.
