8th VRCAI 2009: Yokohama, Japan
- Stephen N. Spencer, Masayuki Nakajima, Enhua Wu, Kazunori Miyata, Daniel Thalmann, Zhiyong Huang:
Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry, VRCAI 2009, Yokohama, Japan, December 14-15, 2009. ACM 2009, ISBN 978-1-60558-912-1
Keynote addresses
- Nadia Magnenat-Thalmann:
Modelling socially intelligent virtual humans. 9
- Hyun Seung Yang:
Realistic e-learning system based on mixed reality. 10
CG modeling and rendering
- Patrick Salamin, Daniel Thalmann, Frédéric Vexo:
Context aware, multimodal, and semantic rendering engine. 11-16
- Ruoxi Sun, Jinyuan Jia, Hongyu Li, Marc Jaeger:
Image-based lightweight tree modeling. 17-22
- Hitomi Imaizumi, Takayuki Itoh:
IGEL: a virtual heat cutter for 3D shape modeling. 23-27
- Arthur Niswar, Ee Ping Ong, Hong Thai Nguyen, Zhiyong Huang:
Real-time 3D talking head from a synthetic viseme dataset. 29-33
Simulation and application
- Shaohui Jiao, Enhua Wu:
Simulation of weathering fur. 35-40
- Seung-Hyun Ji, Woo-Keun Chung, Hwan-Gue Cho:
Extended social network: a new generalized framework for virtual world analysis. 41-46
- Meng Yang, Bin Sheng, Enhua Wu, Hanqiu Sun:
Multi-resolution tree motion synthesis in angular shell space. 47-52
- Jing Fan, Jiaohong Yu, Tian-yang Dong, Li-Rong Xiong:
Modeling and simulation of the plant growth based on reciprocity. 53-58
- Takashi Yoneyama, Etsuo Genda, Kunio Kondo:
Painterly renderings using a synthesis of styles based on visual perception. 59-64
VR system
- Kai Ki Lee, Man Chuen Leung, Kin-hong Wong, Michael Ming Yuen Chang:
A hand-held 3D display system that facilitates direct manipulation of 3D virtual objects. 65-70
- Pawan Harish, P. J. Narayanan:
A view-dependent, polyhedral 3D display. 71-75
- Arturo S. García, José Pascual Molina, Pascual González, Diego Martínez, Jonatan Martínez:
An experimental study of collaborative interaction tasks supported by awareness and multimodal feedback. 77-82
- Naoki Hashimoto, Kenji Honda, Makoto Sato, Mie Sato:
Making an immersive projection environment with our living room. 83-87
- Jian Zhu, Youquan Liu, Kai Bao, Yuanzhang Chang, Enhua Wu:
Warping of a spherical representation of image-based models on GPU. 89-94
Facial animation and construction
- Seung-Yeob Baek, Byoung-Youn Kim, Kunwoo Lee:
3D face model reconstruction from single 2D frontal image. 95-101
- Hong Thai Nguyen, Ee Ping Ong, Arthur Niswar, Zhiyong Huang, Susanto Rahardja:
Automatic and real-time 3D face synthesis. 103-106
- Mei Hwan Loke, Ka Yin Tang, Gim Guan Chua, Odelia Yiling Tan, Farzam Farbiz:
The effect of facial animation on a dancing character. 107-112
- Yibin Ye, Mingmin Zhang, Huansen Li, Ruiying Jiang, Xing Tang, Zhigeng Pan:
EasyFace: a realistic face modeling and facial animation authoring system. 113-117
Image processing and GPU
- Weiliang Meng, Bin Sheng, Shandong Wang, Hanqiu Sun, Enhua Wu:
Interactive image deformation using cage coordinates on GPU. 119-126
- Xiaojuan Ning, Xiaopeng Zhang, Yinghui Wang, Marc Jaeger:
Segmentation of architecture shape information from 3D point cloud. 127-132
- Xin Chen, Wencheng Wang:
Texture synthesis by interspersing patches in a chessboard pattern. 133-138
- Bin Sheng, Hanqiu Sun, Baoquan Liu, Enhua Wu:
GPU-based refraction and caustics rendering on depth textures. 139-144
- Kangying Cai, Wencheng Wang, Zhibo Chen, Quqing Chen, Jun Teng:
Exploiting repeated patterns for efficient compression of massive models. 145-150
Game and contents
- Myonghee Lee, Gerard J. Kim:
Effects of heightened sensory feedback to presence and arousal in virtual driving simulators. 151-156
- Song Zou, Haiying Xiao, Huagen Wan, Xiaolong Zhou:
Vision-based hand interaction and its application in pervasive games. 157-162
- Jaeyong Chung, Henry J. Gardner:
Measuring temporal variation in presence during game playing. 163-168
- Kazuhito Shiratori, Hiroshi Mori, Junichi Hoshino:
The trampoline entertainment system for aiding exercise. 169-174
- Andrei Sherstyuk, Anton Treskunov:
Collision-free travel with terrain maps. 175-178
Augmented and virtual reality applications
- Susanna Nilsson, Björn J. E. Johansson, Arne Jönsson:
A co-located collaborative augmented reality application. 179-184
- W. T. Fong, Soh-Khim Ong, Andrew Y. C. Nee:
Computer vision centric hybrid tracking for augmented reality in outdoor urban environments. 185-190
- Felipe Gomes de Carvalho, Alberto Barbosa Raposo, Marcelo Gattass:
Designing a hybrid user interface: a case study on an oil and gas application. 191-196
- Ken Ishibashi, Toni Da Luz, Remy Eynard, Naoki Kita, Nan Jiang, Hiroshi Segi, Keisuke Terada, Kyohei Fujita, Kazunori Miyata:
Spider hero: a VR application using pulling force feedback system. 197-202
- Jinki Jung, Kyusung Cho, Hyun Seung Yang:
Real-time robust body part tracking for augmented reality interface. 203-207
- Sikun Li, Xiaoxia Lu:
A self-adaptive HVS-optimized texture compression algorithm. 209-214
3D image generation
- Hanhoon Park, Hideki Mitsumine, Masato Fujii, Jong-Il Park:
Analytic fusion of visual cues in model-based camera tracking. 215-220
- Art Subpa-Asa, Natchapon Futragoon, Pizzanu Kanongchaiyos:
Adaptive 3-D scene construction from single image using extended object placement relation. 221-226
- Kota Aoki, Yoshihiko Sakuraba, Hiroshi Nagahashi:
A multilevel surface modeling method and its application to range image registration. 227-232
- Hiroki Sato, Sho Ogura, Tadaaki Hosaka, Takayuki Hamamoto, Akira Kubota, Ryutaro Oi, Kazuya Kodama:
Arbitrary viewpoint image synthesis for real-time processing system using multiple image sensors. 233-237
- Dongdong Weng, Yetao Huang, Yue Liu, Yongtian Wang:
Study on an indoor tracking system with infrared projected markers for large-area applications. 239-245
- Andrei Sherstyuk, Anton Treskunov:
Dynamic light amplification for immersive environment rendering. 247-251
Poster paper session 1
- Tai-Wei Kan, Chin-Hung Teng, Wen-Shou Chou:
Applying QR code in augmented reality applications. 253-257
- Ryota Takeuchi, Taichi Watanabe:
Illustration based sculpture modeling system by point set surface. 259-261
- Ali Parchamy Araghy, Mie Sato, Masao Kasuga:
The proposal of the extraction algorithm for desirable skin fractal dimension calculation. 263-266
- Baofeng Sang, Kazunori Mizuno, Yukio Fukui, Seiichi Nishihara:
Introducing recognition ratios for urban traffic flow simulation in virtual cities. 267-270
- Ippeita Izawa, Takayuki Hamamoto, Kazuya Kodama:
Free viewpoint image reconstruction from 3-D multi-focus imaging sequences and its implementation by GPU-based computing. 271-272
- Yoitsu Takahashi, Noriya Hayashimoto, Yoshihiro Kanamori, Jun Mitani, Yukio Fukui, Seiichi Nishihara:
Generating a shoe last shape using Laplacian deformation. 273-274
- Junji Sone, Katsumi Yamada, Itaru Kaneko, Jun Chen, Takayuki Kurosu, Shoichi Hasegawa, Makoto Sato:
Mechanical design of multi-finger haptic display allowing changes in contact location. 275-276
- Koki Wakunami, Masahiro Yamaguchi:
Hologram calculation for deep 3D scene from multi-view images. 277-278
- Libo Sun, Yan Liu, Jizhou Sun, Lin Bian:
The hierarchical behavior model for crowd simulation. 279-284
- Jing Fan, Jian-wei Ren, Ying Tang:
Controllable texture synthesis for runtime simulation of large-scale vegetation. 285-288
- Yayoi Itoh, Yang Chen, Kazuhiko Iida, Mamoru Shiiki, Keiji Mitsubuchi:
Experiment of metaverse learning method using anatomical 3D object. 289-294
- Ding Lin, Chongcheng Chen, Liyu Tang, Qinmin Wang:
Geometrical shapes and swaying movements of realistic tree: design and implementation. 295-302
- Kenji Honda, Naoki Hashimoto, Mie Sato, Makoto Sato:
Pseudo wide-angle image reconstruction based on continuousness of optical flow. 303-304
- Shigeki Mukai, Daisuke Murayama, Keiichi Kimura, Tadaaki Hosaka, Takayuki Hamamoto, Nao Shibuhisa, Seiichi Tanaka, Shunichi Sato, Sakae Saito:
Arbitrary view generation for eye-contact communication using projective transformations. 305-306
Poster paper session 2
- Huai-Yu Wu, LingFeng Wang, Tao Luo, Hongbin Zha:
3D shape consistent correspondence by using Laplace-Beltrami spectral embeddings. 307-309
- Huai-Yu Wu, Tao Luo, LingFeng Wang, Xulei Wang, Hongbin Zha:
3D shape retrieval by using manifold harmonics analysis with an augmentedly local feature representation. 311-313
- Andrei Sherstyuk, Dale Vincent, Anton Treskunov:
Towards Virtual Reality games. 315-316
- Asako Soga, Masahito Shiba, Jonah Salz:
Choreography composition and live performance on a Noh stage. 317-318
- Junyeong Choi, Byung-Kuk Seo, Jong-Il Park:
Robust hand detection for augmented reality interface. 319-321
- Kangsoo Kim, Byung-Kuk Seo, Jae-Hyek Han, Jong-Il Park:
Augmented reality tour system for immersive experience of cultural heritage. 323-324
- Ajay Surendranath, Geetika Sharma:
Using image-based modeling for exporting 3D content from Second Life. 325-329
- Aditya Zutshi, Geetika Sharma:
A study of virtual environments for enterprise collaboration. 331-333
- Yushi Tajima, Yuta Miura, Makoto Arisawa:
"GalgeInterface": communication tool using interface of Japanese game. 335-337
- Corey Manders, Farzam Farbiz:
Virtual reality interactions using inexpensive webcams. 339-343
- Myungho Lee, Myonghee Lee, Gerard J. Kim:
Loosely-coupled vs. tightly-coupled mixed reality: using the environment metaphorically. 345-349
- Seamus Hickey, Goshiro Yamamoto, Antti Pitkänen, Jaakko Hyry, Daisuke Yoshitake:
Implementation of a picture based user interface to assist the elderly suffering from memory problems. 351-356
- Yeong Nam Chae, Young-Ho Kim, Jin Choi, Kyusung Cho, Hyun Seung Yang:
An adaptive sensor fusion based objects tracking and human action recognition for interactive virtual environments. 357-362
- Miao Song, Serguei A. Mokhov, Alison R. Loader, Maureen J. Simmonds:
A stereoscopic OpenGL-based interactive plug-in framework for Maya and beyond. 363-368