Authors:
Enas Altarawneh and Michael Jenkin
Affiliation:
Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
Keyword(s):
Human-robot Interaction, Cloud-based AI, Realistic Human Avatar.
Related Ontology Subjects/Areas/Topics:
Human-Robots Interfaces; Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Robot Design, Development and Control; Robotics and Automation; Software Agents for Intelligent Control Systems
Abstract:
Although there have been significant advances in human-machine interaction systems in recent years, cloud-based advances are not easily integrated into autonomous machines. Here we describe a toolkit that supports interactive avatar animation and modeling for human-computer interaction. The avatar toolkit utilizes cloud-based speech-to-text software that provides active listening by detecting sound and reducing noise, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech generation engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows for tuning the nature of the avatar animation to simulate different emotional states. An expression package controls the avatar's facial expressions. Latency is minimized and obscured through parallel processing in the cloud and an idle loop process that animates the avatar between utterances.
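The abstract mentions embedding generated text responses in an XML structure that tags the emotional state used to tune the avatar animation. A minimal sketch of that idea is shown below; the element names, attributes, and the `wrap_with_emotion` helper are illustrative assumptions, not the toolkit's actual markup or API.

```python
# Illustrative sketch only: the XML schema and function below are
# hypothetical, not the markup used by the avatar toolkit.
import xml.etree.ElementTree as ET

def wrap_with_emotion(text, emotion="neutral", intensity=0.5):
    """Embed a generated response in an XML structure that carries
    the emotional state and intensity used to tune the animation."""
    root = ET.Element("response")
    utterance = ET.SubElement(root, "utterance",
                              emotion=emotion,
                              intensity=str(intensity))
    utterance.text = text
    return ET.tostring(root, encoding="unicode")

markup = wrap_with_emotion("Hello! How can I help?",
                           emotion="happy", intensity=0.8)
```

A downstream animation component could then parse such markup and select the matching facial-expression parameters before rendering the synchronized utterance.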