DOI: 10.1145/3064663.3064668

Voodle: Vocal Doodling to Sketch Affective Robot Motion

Published: 10 June 2017

Abstract

Social robots must be believable to be effective; but creating believable, affectively expressive robot behaviours requires time and skill. Inspired by the directness with which performers use their voices to craft characters, we introduce Voodle (vocal doodling), which uses the form of utterances -- e.g., tone and rhythm -- to puppet and eventually control robot motion. Voodle offers an improvisational platform capable of conveying hard-to-express ideas like emotion. We created a working Voodle system by collecting a set of vocal features and associated robot motions, then incorporating them into a prototype for sketching robot behaviour. We explored and refined Voodle's expressive capacity by engaging expert performers in an iterative design process. We found that users develop a personal language with Voodle; that a vocalization's meaning changed with narrative context; and that voodling imparts a sense of life to the robot, inviting designers to suspend disbelief and engage in a playful, conversational style of design.
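
As a rough illustration of the mapping the abstract describes (turning the form of a vocalization, such as pitch and loudness, into robot motion), here is a minimal sketch. It is a toy, not the authors' implementation: the feature extraction, the linear pitch-to-position mapping, the energy scaling, and the smoothing constant are all illustrative assumptions.

    import numpy as np

    SAMPLE_RATE = 16000  # assumed microphone rate

    def pitch_and_energy(frame):
        """Estimate fundamental frequency (autocorrelation peak in 80-400 Hz)
        and RMS energy for one short audio frame. Illustrative features only."""
        frame = frame - np.mean(frame)
        energy = float(np.sqrt(np.mean(frame ** 2)))
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = SAMPLE_RATE // 400, SAMPLE_RATE // 80
        lag = lo + int(np.argmax(corr[lo:hi]))
        return SAMPLE_RATE / lag, energy

    def voodle_step(frame, prev_pos, alpha=0.3):
        """Map vocal features to a robot position in [0, 1], low-pass smoothed.
        The mapping and constants are assumptions, not published Voodle values."""
        pitch, energy = pitch_and_energy(frame)
        target = np.clip((pitch - 80.0) / (400.0 - 80.0), 0.0, 1.0)  # pitch sets posture
        target *= np.clip(energy * 20.0, 0.0, 1.0)                   # loudness scales it
        return (1 - alpha) * prev_pos + alpha * target               # smooth toward target

    # Toy loop: a rising, loudening vocalization drives the position upward.
    # In a real pipeline the frames would come from a microphone and the
    # smoothed position would drive an actuator.
    if __name__ == "__main__":
        t = np.arange(512) / SAMPLE_RATE
        pos = 0.0
        for f0, gain in [(120, 0.05), (200, 0.10), (300, 0.20)]:
            frame = gain * np.sin(2 * np.pi * f0 * t)
            pos = voodle_step(frame, pos)
            print(f"f0={f0:3d} Hz  gain={gain:.2f}  position={pos:.2f}")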

Supplementary Material

suppl.mov (disfp0120-file3.mp4)
Supplemental video

Published In

DIS '17: Proceedings of the 2017 Conference on Designing Interactive Systems
June 2017
1444 pages
ISBN:9781450349222
DOI:10.1145/3064663
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 10 June 2017

Author Tags

  1. animation
  2. haptics
  3. human-robot interaction
  4. sound symbolism
  5. vocal interfaces
  6. vocal-haptic interface
  7. voice input

Qualifiers

  • Research-article

Conference

DIS '17
DIS '17: Designing Interactive Systems Conference 2017
June 10 - 14, 2017
Edinburgh, United Kingdom

Acceptance Rates

DIS '17 paper acceptance rate: 107 of 487 submissions (22%)
Overall acceptance rate: 1,158 of 4,684 submissions (25%)

Article Metrics

  • Downloads (last 12 months): 30
  • Downloads (last 6 weeks): 6
Reflects downloads up to 25 Dec 2024

Cited By

  • (2023) Feellustrator: A Design Tool for Ultrasound Mid-Air Haptics. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3544548.3580728. Online publication date: 19-Apr-2023.
  • (2023) Affective Robots Need Therapy. ACM Transactions on Human-Robot Interaction 12, 2, 1-22. DOI: 10.1145/3543514. Online publication date: 15-Mar-2023.
  • (2023) Rhythm Research in Interactive System Design: A Literature Review. International Journal of Human–Computer Interaction, 1-20. DOI: 10.1080/10447318.2023.2294628. Online publication date: 27-Dec-2023.
  • (2022) Evaluating Singing for Computer Input Using Pitch, Interval, and Melody. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3491102.3517691. Online publication date: 29-Apr-2022.
  • (2022) Designing affective haptic experience for wellness and social communication: where designers need affective neuroscience and psychology. Current Opinion in Behavioral Sciences 45, 101113. DOI: 10.1016/j.cobeha.2022.101113. Online publication date: Jun-2022.
  • (2021) Weirding Haptics: In-Situ Prototyping of Vibrotactile Feedback in Virtual Reality through Vocalization. The 34th Annual ACM Symposium on User Interface Software and Technology, 936-953. DOI: 10.1145/3472749.3474797. Online publication date: 10-Oct-2021.
  • (2018) Is it Happy? Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-11. DOI: 10.1145/3173574.3174083. Online publication date: 21-Apr-2018.
