DOI: 10.1145/2491599
FAA '12: Proceedings of the 3rd Symposium on Facial Analysis and Animation
ACM 2012 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
FAA '12: Facial Analysis and Animation 2012, Vienna, Austria, 21 September 2012
ISBN:
978-1-4503-1793-1
Published:
21 September 2012

Abstract

No abstract available.

Table of Contents
research-article
High-performance face tracking

Face tracking is an extensively studied field. Nevertheless, it is still a challenge to make a robust and efficient face tracker, especially on mobile devices. This extended abstract briefly describes our implementation of a high-performance multi-...

research-article
Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis

The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture ...

research-article
High-fidelity facial performance capture with non-sequential temporal alignment

The 3D capture of facial performance has become a necessary tool for delivering visual effects related to an actor's face or for creating new realistic characters. We present a novel system using a practical acquisition setup without active illumination ...

research-article
A simulation for the creation of soft-looking facial expressions

The approximation of realistic facial expressions is an essential part of many virtual simulations, such as video games or avatar applications. Unfortunately, a lot of manual work is usually needed to create realistic facial expressions. In this paper, ...

research-article
Realistic eye model for embodied conversational agents

The eyes play an essential role during face to face communication. They provide important information about visual attention and turn-taking during human-human and human-avatar interaction. In fact, the eye is a complex organ and gaze is only one of its ...

research-article
Seeing through the face: a morphological approach in physical anthropology

For decades, morphological research on human faces was performed either on living subjects or by means of 2D photographs. Within the last few years, with the development of new systems, we are now able to produce three-dimensional models of human ...

research-article
Effects of humanness of virtual agents on impression formation

In recent years, the use of virtual agents that act as an interface between human and computer has become increasingly popular. Such agents typically appear as embodied characters and display various types of life-like behaviour. To ensure the ...

research-article
Talking heads on mobile devices

The number and quality of smartphones on the market have risen dramatically in recent years. Researchers and developers are thus increasingly pushed to bring algorithms and techniques from desktop environments to mobile platforms. One of the biggest ...

research-article
Perception of animacy in Caucasian and Indian faces

Masahiro Mori, who introduced the concept of the Uncanny Valley, recommended settling for moderate levels of human likeness in robotic design in order to avoid the eeriness upon encountering entities closely resembling humans [Mori 1970]. However, the ...

research-article
ViSAC: acoustic-visual speech synthesis: the system and its evaluation

In the vast majority of recent works, data-driven audiovisual speech synthesis, i.e., the generation of face animation together with the corresponding acoustic speech, is still considered as the synchronization of two independent sources: synthesized ...

research-article
Developing design guidelines for characters from analyzing empirical studies on the uncanny valley
Article No.: 11, Pages 1–2, https://doi.org/10.1145/2491599.2491610

The original theory of the uncanny valley (see Figure 1) was proposed by the Japanese scientist Masahiro Mori in 1970 [Mori 1970], who supposed that the likeability of robots does not increase linearly with human likeness. Instead, the likeability ...

research-article
Towards interactive conversational talking heads

This work is part of a long-term project of the Department of Computer Engineering and Industrial Automation (DCA) at the University of Campinas to develop video-realistic interactive conversational agents that provide more intuitive and efficient human-...

research-article
Estimation of FAPs and intensities of AUs based on real-time face tracking

Imitating natural facial behavior in real time is still challenging, particularly for behaviors such as laughter and nonverbal expressions. This paper explains our ongoing work on methodologies and tools for estimating Facial Animation ...

research-article
Vocal and facial trustworthiness of talking heads

Trust is a key aspect of human communication due to its link to co-operation and survival. Recent research [Ballew and Todorov 2007] has shown that humans can generate an initial trustworthiness judgement based on facial features within 100 ms. ...

research-article
Real-time video-based character animation

The ability to animate a 3D virtual character in real-time has great potential in terms of connecting and interacting with the audience, for example allowing a virtual character to answer questions from audience members or make spontaneous comments ...

research-article
Interactive, musculoskeletal model for animating virtual faces

The simulation of facial movements is a difficult task because the complex and sophisticated structure of the human head involves the motion, deformation, and contact handling between bio-tissues that are viscoelastic, nonlinear, anisotropic, and ...

research-article
HapFACS 1.0: software/API for generating FACS-based facial expressions

In this article, we present HapFACS 1.0, a new software/API for generating static and dynamic three-dimensional facial expressions based on the Facial Action Coding System (FACS). HapFACS provides total control over the FACS Action Units (AUs) activated ...

Contributors
  • Austrian Academy of Sciences
  • Microsoft Corporation
  • The University of Edinburgh
  • The University of Edinburgh
  • University of York
